WO2018090573A1 - Cache space management method and device, electronic device and storage medium - Google Patents

Cache space management method and device, electronic device and storage medium

Info

Publication number
WO2018090573A1
Authority
WO
WIPO (PCT)
Prior art keywords
threshold
unicast
discarding
dynamic
level
Prior art date
Application number
PCT/CN2017/082635
Other languages
English (en)
French (fr)
Inventor
王莉
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2018090573A1 publication Critical patent/WO2018090573A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 - Support for services or applications
    • H04L 65/60 - Network streaming of media packets
    • H04L 65/61 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 65/611 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for multicast or broadcast
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • the present invention relates to the field of communications, and in particular to a cache management method and device in data network traffic management technology, an electronic device, and a computer storage medium.
  • Internet services not only demand ever more bandwidth; on top of the already huge demand for unicast services, the demand for multicast services such as distance education, video conferencing, and video on demand keeps growing.
  • access networks, routers, and switches therefore all need to support multicast processing.
  • multicast is a point-to-multipoint data distribution technology. Because of the complexity of multicast management and its resource requirements, existing chip designs usually handle multicast and unicast statically and manage them separately.
  • the cache space is divided into a unicast cache space and a multicast cache space.
  • when cache space is allocated, the packet type is determined first: unicast packets enter the unicast cache space and multicast packets enter the multicast cache space. Assume the total cache is S; initially a fixed multicast cache space S1 is set aside for multicast packets and a fixed unicast cache space S2 for unicast packets. With such a static allocation, the cache space required by the packets being distributed can easily exceed the allocated cache space, causing multicast packets to be discarded and the packet loss rate to be high; conversely, once cache space has been allocated, it sits idle if no packets are currently being distributed, resulting in low cache space utilization.
  • the existing cache space management approach therefore cannot make full use of the cache space, resulting in low cache space utilization.
  • the embodiments of the present invention provide a method and a device for managing a cache space, an electronic device, and a storage medium, so that the cache space can be fully utilized, thereby improving the utilization of the cache space.
  • a method for managing a cache space includes: obtaining a multicast cache space occupied by multicast packets; calculating a dynamic discard threshold of a unicast cache space according to the multicast cache space, where the unicast cache space is the cache space occupied by unicast packets; and determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for a current unicast packet.
  • a cache space management device includes:
  • an obtaining module, configured to obtain a multicast cache space occupied by multicast packets;
  • a calculation module configured to calculate a dynamic discarding threshold of the unicast buffer space according to the multicast cache space, where the unicast cache space is a cache space occupied by the unicast packet;
  • the processing module is configured to determine whether to allocate a cache space for the current unicast packet according to the dynamic discard threshold of the unicast cache space.
  • the embodiment of the invention provides a method for managing a cache space, including: caching a multicast packet when the multicast packet is received; determining a multicast packet cache capacity of the cache space occupied by multicast packets; determining a dynamic discard threshold for unicast packets according to the multicast packet cache capacity; determining an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity required to cache a current unicast packet; and determining whether to cache the current unicast packet according to a comparison of the estimated cache capacity with the dynamic discard threshold.
  • a cache space management device includes:
  • a cache unit configured to cache the multicast packet when receiving the multicast packet
  • a first determining unit configured to determine a multicast packet buffer capacity of a buffer space occupied by the multicast packet
  • a second determining unit configured to determine a dynamic discarding threshold of the unicast packet according to the multicast packet buffer capacity
  • a third determining unit configured to determine an estimated cache capacity, where the estimated cache capacity is a sum of a unicast packet buffer capacity occupied by the unicast packet and a required cache capacity of the cached current unicast packet;
  • the processing unit is configured to determine whether to cache the current unicast packet according to the comparison result of the estimated cache capacity and the dynamic discard threshold.
  • An electronic device comprising:
  • a buffer including a cache space, for caching unicast packets and/or multicast packets
  • a memory for storing a computer program
  • the processor is respectively connected to the buffer and the memory, and is configured to manage the buffer space of the buffer by executing the computer program, and can implement the method provided by any one of the foregoing.
  • a computer storage medium having stored therein computer executable instructions for performing the method of any of the preceding.
  • according to the cache space management method and device, the electronic device and the computer storage medium provided by the embodiments of the present invention, the method includes: obtaining the multicast cache space occupied by multicast packets; calculating a dynamic discard threshold of the unicast cache space according to the multicast cache space; and determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet. In this way, the discard threshold of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated dynamic discard threshold, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 1 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present invention
  • FIG. 2 is a schematic flowchart of another method for managing a cache space according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of still another method for managing a cache space according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flowchart diagram of still another method for managing a cache space according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a buffer space management apparatus according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another cache space management apparatus according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of still another buffer space management apparatus according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a method for managing a cache space according to an embodiment of the present invention. As shown in FIG. 1 , the method provided in this embodiment includes the following steps:
  • Step 101 Obtain a multicast cache space occupied by the multicast packet.
  • the step 101 acquires the multicast cache space occupied by the multicast packet, which may be implemented by the management device of the cache space. It should be noted that the multicast cache space occupied by the multicast packet refers to the size of the multicast cache space occupied by the multicast packet.
  • the priority of the current multicast packet is obtained, the multicast cache space that would be occupied by all multicast packets after the current multicast packet is enqueued is matched against the dynamic discard threshold of the cache space for each priority, the corresponding discard probability is obtained, and it is then determined whether to allocate cache space for the current multicast packet or to discard it.
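  • purely as an illustration of this per-priority matching, the following Python sketch checks the multicast occupancy after enqueueing against the discard threshold of the packet's priority and then applies the priority's discard probability; the probability step mirrors the unicast handling described later and is an assumption, as are all names and values (Mth, Mp, the random number source).

    # Hedged sketch of the per-priority multicast admission check. The exact way the
    # discard probability is applied is not spelled out above, so a comparison with a
    # random number, mirroring the unicast handling, is assumed here.
    def admit_multicast(q_m_after, priority, mth, mp, rand):
        """Return True to allocate cache space for the current multicast packet.

        q_m_after -- multicast cache space occupied after enqueueing the packet
        mth, mp   -- per-priority discard thresholds and discard probabilities
        rand      -- random number produced by the (unspecified) random number algorithm
        """
        if q_m_after <= mth[priority]:     # within this priority's discard threshold
            return True
        return mp[priority] < rand         # above it: admit only if Mp < random number

    mth = {0: 120, 1: 96, 2: 72, 3: 48}    # Mth_0 > Mth_1 > Mth_2 > Mth_3, Mth_0 <= 128
    mp = {0: 80, 1: 60, 2: 40, 3: 20}      # Mp_0 > Mp_1 > Mp_2 > Mp_3
    print(admit_multicast(q_m_after=100, priority=1, mth=mth, mp=mp, rand=57))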
  • Step 102 Calculate a dynamic discard threshold of the unicast cache space according to the multicast cache space.
  • the unicast cache space is the cache space occupied by the unicast packet.
  • the step 102 of calculating a dynamic discarding threshold of the unicast buffer space according to the multicast cache space may be implemented by a management device of the cache space.
  • there can be one dynamic discard threshold or several. If there are several, the dynamic discard thresholds can be divided into N levels, where N is an integer greater than or equal to 1.
  • the specific value of N can be set according to how fully the actual cache space needs to be utilized: if the cache space needs to be used to a large extent, N can be set larger, so that more dynamic discard thresholds are calculated and step 103 has a finer basis for deciding whether to allocate cache space for the current unicast packet or to discard it.
  • Step 103 Determine, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
  • step 103 determines whether to allocate a cache space for the current unicast packet according to a dynamic discarding threshold of the unicast buffer space, which may be implemented by a management device of the cache space.
  • the dynamic discard threshold in step 103 is the dynamic discard threshold calculated in step 102. If a single dynamic discard threshold is calculated in step 102, step 103 determines whether to allocate cache space for the current unicast packet according to that threshold; if a plurality of dynamic discard thresholds are calculated in step 102, step 103 determines whether to allocate cache space for the current unicast packet according to those thresholds.
  • the cache space management method obtains the multicast cache space occupied by multicast packets, calculates the dynamic discard threshold of the unicast cache space according to the multicast cache space, and determines, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
  • in this way, the discard threshold of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated dynamic discard threshold, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 2 is a schematic flowchart of another method for managing a cache space according to an embodiment of the present invention. As shown in FIG. 2, the method provided in this embodiment includes the following steps:
  • Step 201 The management device of the cache space acquires a multicast cache space occupied by the multicast packet.
  • Step 202 The management device of the cache space sets a maximum discarding threshold of the unicast buffer space and an N-level discarding coefficient of the unicast buffer space.
  • N is a positive integer
  • the maximum discarding threshold is a positive integer
  • the N-level discarding coefficients are all positive integers
  • the maximum discard threshold is an initial input value for calculating the N-level dynamic discard threshold of the unicast buffer space.
  • Step 203 The management device of the cache space calculates an N-level dynamic discarding threshold of the unicast buffer space according to a maximum discarding threshold of the unicast buffer space, an N-level discarding coefficient of the unicast buffer space, and a multicast cache space.
  • step 203 includes: the management device of the cache space calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space, and then calculates the N-level dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N-level discard coefficients of the unicast cache space.
  • the N-level discard coefficients include a level-1 discard coefficient, a level-2 discard coefficient, ..., and a level-N discard coefficient. Calculating the N-level dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N-level discard coefficients of the unicast cache space, and the multicast cache space proceeds level by level: the level-1 dynamic discard threshold of the unicast cache space is calculated from the maximum discard threshold of the unicast cache space, the level-1 discard coefficient of the unicast cache space, and the multicast cache space; the level-2 dynamic discard threshold is calculated from the maximum discard threshold, the level-2 discard coefficient, and the multicast cache space; and so on, until the level-N dynamic discard threshold is calculated from the maximum discard threshold, the level-N discard coefficient, and the multicast cache space.
  • the level-1 dynamic discard threshold of the unicast cache space is T_1 = (T_0 - Q_m) / α_1, and in general the level-N dynamic discard threshold of the unicast cache space is T_N = (T_0 - Q_m) / α_N;
  • where Q_m is the multicast cache space occupied by multicast packets, T_0 is the configured maximum discard threshold of the unicast cache space, and α_1, α_2, ..., α_N are the N-level discard coefficients of the unicast cache space.
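  • as an illustration of the formula above, the following minimal Python sketch computes the N-level dynamic discard thresholds from T_0, the discard coefficients and the current multicast occupancy Q_m, and shows how every threshold shrinks as Q_m grows; the function name and the example values are assumptions made for illustration, not values taken from the patent.

    def dynamic_discard_thresholds(t0, alphas, q_m):
        """Return the N-level dynamic discard thresholds of the unicast cache space.

        t0     -- configured maximum discard threshold of the unicast cache space (T_0)
        alphas -- N-level discard coefficients, alphas[0] <= alphas[1] <= ...
        q_m    -- cache space currently occupied by multicast packets (Q_m)
        """
        max_dynamic = t0 - q_m                    # maximum dynamic discard threshold
        return [max_dynamic / a for a in alphas]  # T_i = (T_0 - Q_m) / alpha_i

    print(dynamic_discard_thresholds(128, [1, 2, 4], 16))  # [112.0, 56.0, 28.0]
    print(dynamic_discard_thresholds(128, [1, 2, 4], 64))  # [64.0, 32.0, 16.0]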
  • Step 204 The management device of the cache space determines whether to allocate a cache space for the current unicast packet according to the N-level dynamic discard threshold of the unicast cache space.
  • the cache space management method obtains the multicast cache space occupied by multicast packets; sets the maximum discard threshold of the unicast cache space and the N-level discard coefficients of the unicast cache space;
  • calculates the N-level dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N-level discard coefficients of the unicast cache space, and the multicast cache space; and determines, according to the N-level dynamic discard thresholds of the unicast cache space, whether to allocate cache space for the current unicast packet.
  • in this way, the N-level discard thresholds of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated N-level dynamic discard thresholds, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 3 is a schematic flowchart of still another method for managing a cache space according to an embodiment of the present invention. As shown in FIG. 3, the method provided in this embodiment includes the following steps:
  • Step 301 The management device of the cache space acquires a multicast cache space occupied by the multicast packet.
  • Step 302 The management device of the cache space sets a maximum discarding threshold of the unicast buffer space and an N-level discarding coefficient of the unicast buffer space.
  • N is a positive integer
  • the maximum discarding threshold is a positive integer
  • the N-level discarding coefficients are all positive integers
  • Step 303 The management device of the cache space calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space.
  • Step 304 The management device of the cache space calculates an N-level dynamic discarding threshold of the unicast buffer space according to the maximum dynamic discarding threshold and the N-level discarding coefficient of the unicast buffer space.
  • Step 305 The management device of the cache space acquires the queue number carried by the current unicast packet.
  • the queue number of a unicast packet is an attribute of the unicast packet, because the cache space holds more than one unicast packet queue. Before each unicast packet is enqueued, the management device of the cache space needs to know which queue the unicast packet will join, so each unicast packet carries information about its enqueueing, namely the number of the queue it is to join.
  • Step 306 The cache space management device calculates the estimated cache space required by the corresponding queue after the current unicast packet is enqueued, where the current unicast packet is enqueued according to its queue number.
  • that is, step 306 calculates, on the assumption that the current unicast packet is enqueued, the estimated cache space the corresponding queue would require after enqueueing.
  • Step 307 The management device of the cache space sets an N-level drop probability of the unicast buffer space.
  • the discard probability of each level corresponds to the dynamic discard threshold of that level, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability.
  • the level-1 discard probability corresponds to the level-1 dynamic discard threshold;
  • the level-2 discard probability corresponds to the level-2 dynamic discard threshold;
  • the level-N discard probability corresponds to the level-N dynamic discard threshold.
  • Step 308 The management device of the cache space determines, according to the estimated cache space, the N-level dynamic discard thresholds of the unicast cache space, and the N-level discard probabilities of the unicast cache space, whether the current unicast packet is allowed to be enqueued and allocated cache space.
  • if the current unicast packet is allowed to be enqueued, cache space is allocated to hold it; if the current unicast packet is not allowed to be enqueued, it is discarded.
  • the cache space management method obtains the multicast cache space occupied by multicast packets; sets the maximum discard threshold of the unicast cache space and the N-level discard coefficients of the unicast cache space; calculates the N-level dynamic discard thresholds of the unicast cache space according to the maximum discard threshold, the N-level discard coefficients, and the multicast cache space; calculates the estimated cache space required by the corresponding queue after the current unicast packet is enqueued; and determines, according to the estimated cache space, the N-level dynamic discard thresholds, and the preset N-level discard probabilities, whether the current unicast packet is allowed to be enqueued and allocated cache space.
  • in this way, the N-level discard thresholds of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and the N-level dynamic discard thresholds are then used to determine whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 4 is a schematic flowchart of still another method for managing a cache space according to an embodiment of the present invention. As shown in FIG. 4, the method provided in this embodiment includes the following steps:
  • Step 401 The management device of the cache space acquires a multicast cache space occupied by the multicast packet.
  • Step 402 The management device of the cache space sets a maximum discarding threshold of the unicast buffer space and an N-level discarding coefficient of the unicast buffer space.
  • N is a positive integer
  • the maximum discarding threshold is a positive integer
  • the N-level discarding coefficients are all positive integers
  • Step 403 The management device of the cache space calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space.
  • Step 404 The management device of the cache space calculates an N-level dynamic discarding threshold of the unicast buffer space according to the maximum dynamic discarding threshold and the N-level discarding coefficient of the unicast buffer space.
  • Step 405 The management device of the cache space acquires the queue number carried by the current unicast packet.
  • Step 406 The management device of the cache space obtains the cache space occupied by the original queue in the queue depth real-time statistics table according to the queue number.
  • the queue depth real-time statistics table records the cache space occupied by each queue, so the cache space occupied by the queue corresponding to the queue number can be looked up in the queue depth real-time statistics table according to the queue number.
  • Step 407 The cache space management device acquires a cache space required for the current unicast packet.
  • Step 408 The cache space management device calculates, according to the cache space occupied by the original queue, the cache space required by the current unicast packet, and the number of active queues, the estimated cache space required after the current unicast packet is enqueued.
  • the number of active queues is the number of unicast packet queues that are currently valid in the cache space.
  • the estimated cache space may be obtained by adding the cache space occupied by the original queue to the cache space required by the current unicast packet, and then multiplying the sum by the number of active queues.
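  • a minimal Python sketch of this calculation is given below, under the reading just described; the queue depth table, the field names and the example values are illustrative assumptions rather than structures defined by the patent.

    def estimated_cache_space(queue_depth_table, queue_id, pkt_size, active_queues):
        """Estimate the cache space after the current unicast packet joins its queue."""
        occupied = queue_depth_table[queue_id]     # space already held by this queue
        return (occupied + pkt_size) * active_queues

    queue_depth_table = {3: 10}                    # queue 3 currently occupies 10 units
    print(estimated_cache_space(queue_depth_table, 3, 2, 4))  # (10 + 2) * 4 = 48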
  • Step 409 The management device of the cache space sets an N-level discard probability of the unicast buffer space.
  • the discard probability of each level corresponds to the dynamic discard threshold of that level, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability.
  • Step 410 The management device of the cache space determines, according to the estimated cache space, the N-level dynamic discard thresholds of the unicast cache space, and the N-level discard probabilities of the unicast cache space, whether the current unicast packet is allowed to be enqueued and allocated cache space.
  • step 410 includes: discarding the unicast packet if the estimated cache space is greater than the level-1 dynamic discard threshold; if the estimated cache space is less than the level-1 dynamic discard threshold and greater than the level-2 dynamic discard threshold, comparing the level-2 discard probability with the random number generated by the random number algorithm, discarding the unicast packet if the level-2 discard probability is greater than the random number, and allowing the current unicast packet to be enqueued if the level-2 discard probability is less than the random number;
  • and so on: if the estimated cache space is less than the level-(N-1) dynamic discard threshold and greater than the level-N dynamic discard threshold, the level-N discard probability is compared with the random number generated by the random number algorithm; if the level-N discard probability is greater than the random number, the unicast packet is discarded, and if it is less than the random number, the current unicast packet is allowed to be enqueued and cache space is allocated for it;
  • if the estimated cache space is less than the level-N dynamic discard threshold, the unicast packet is allowed to be enqueued and cache space is allocated for the current unicast packet.
  • from the second current unicast packet onwards, the random number compared with the discard probability is a new random number calculated from the previous random number by the random number algorithm; that is, only when deciding whether to allow the first unicast packet to be enqueued is the random number calculated from an initial value by the random number algorithm, while the random numbers used for all other unicast packets are calculated by the random number algorithm from the previously obtained random number.
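  • the following Python sketch, provided for illustration only, puts steps 409 and 410 together: it assumes the level-1 threshold is the largest (T_1 > T_2 > ... > T_N) and P_1 > P_2 > ... > P_N as stated above, and a simple linear-congruential step stands in for the unspecified random number algorithm; all names and constants are assumptions.

    def next_random(prev, a=7, c=13, modulus=101):
        # Each decision derives a new random number from the previous one, as
        # described above; the linear-congruential form itself is an assumption.
        return (a * prev + c) % modulus

    def admit_unicast(estimated, thresholds, probs, rand):
        """Return (admit, new_random); thresholds and probs run from level 1 to N."""
        if estimated > thresholds[0]:              # above the level-1 threshold: discard
            return False, rand
        for i in range(1, len(thresholds)):        # between level i and level i+1
            if estimated > thresholds[i]:
                rand = next_random(rand)
                return probs[i] < rand, rand       # admit only if P_(i+1) < random number
        return True, rand                          # below the level-N threshold: admit

    thresholds = [112, 56, 28]                     # T_1, T_2, T_3 (illustrative)
    probs = [90, 60, 30]                           # P_1, P_2, P_3 on the same scale
    admitted, rand = admit_unicast(estimated=40, thresholds=thresholds, probs=probs, rand=17)
    print(admitted)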
  • the cache space management method obtains the multicast cache space occupied by multicast packets; calculates the N-level dynamic discard thresholds of the unicast cache space according to the preset maximum discard threshold, the preset N-level discard coefficients, and the multicast cache space;
  • calculates the estimated cache space required by the corresponding queue after the current unicast packet is enqueued; and determines, according to the estimated cache space, the N-level dynamic discard thresholds, and the preset N-level discard probabilities, whether the current unicast packet is allowed to be enqueued and allocated cache space. In this way, the N-level discard thresholds of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated N-level dynamic discard thresholds, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • the cache space management method provided by the present invention further includes: obtaining the updated multicast cache space occupied by multicast packets; and, correspondingly, calculating the dynamic discard threshold of the unicast cache space according to the multicast cache space includes calculating the N-level dynamic discard thresholds of the unicast cache space according to the updated multicast cache space occupied by multicast packets.
  • that is, the multicast cache space occupied by multicast packets is re-acquired, and the dynamic discard thresholds of the unicast cache space are recalculated according to it: the discard thresholds of the unicast space change dynamically, following the changes in the multicast cache space.
  • the maximum discard threshold T_0 of the unicast cache space, the N-level discard coefficients α_1, α_2, ..., α_N of the unicast cache space, and the N-level discard probabilities P_1, P_2, ..., P_N of the unicast cache space are configured, where T_0 is an integer greater than 0, α_1 ≤ α_2 ≤ ... ≤ α_N, and P_1 > P_2 > ... > P_N.
  • the level-1 dynamic discard threshold T_1, the level-2 dynamic discard threshold T_2, ..., and the level-N dynamic discard threshold T_N of the unicast cache space are then calculated.
  • if the new cache space occupancy value E_t of the queue exceeds the level-1 dynamic discard threshold T_1, the current unicast packet is discarded entirely; if E_t exceeds the level-2 dynamic discard threshold T_2 but does not exceed the level-1 dynamic discard threshold T_1, the level-2 discard probability P_2 is compared with the random number generated by the random number algorithm: if P_2 is greater than the random number, the current unicast packet is discarded, and if P_2 is less than the random number, the current unicast packet is allowed to be enqueued and cache space is allocated for it; and so on, if E_t exceeds the level-N dynamic discard threshold T_N but does not exceed the level-(N-1) dynamic discard threshold T_(N-1), the level-N discard probability P_N is compared with the random number generated by the random number algorithm: if P_N is greater than the random number, the current unicast packet is discarded, and if P_N is less than the random number, the current unicast packet is allowed to be enqueued and cache space is allocated for it; if E_t does not exceed the level-N dynamic discard threshold T_N, the current unicast packet is allowed to be enqueued and cache space is allocated for it.
  • the dynamic discard thresholds of the unicast space are thus always being adjusted dynamically: as the multicast cache space Q_m occupied by multicast packets increases, the dynamic discard thresholds of the unicast cache space decrease, meaning the space available to unicast packets shrinks; as Q_m decreases, the dynamic thresholds of the unicast cache space increase, meaning the space available to unicast packets grows.
  • the following describes how to implement the management of the cache space in two specific embodiments.
  • the first embodiment is:
  • each cache area includes a predetermined amount of cache capacity; for example, the capacity of one cache area may be 1024M. It is currently necessary to allocate cache space for a multicast packet with priority 0 and for a unicast packet.
  • the number of multicast packet priorities is set to 4: priority 0, priority 1, priority 2, and priority 3.
  • the discard threshold of multicast packet priority 0 is Mth_0 and its discard probability is Mp_0, ..., and the discard threshold of priority 3 is Mth_3 and its discard probability is Mp_3; where Mth_0 > Mth_1 > Mth_2 > Mth_3, Mth_0 ≤ 128, and Mp_0 > Mp_1 > Mp_2 > Mp_3.
  • a maximum threshold T_0 of the unicast cache space is set, together with the three-level discard coefficients α_1, α_2, α_3 and the three-level discard probabilities P_1, P_2, P_3 of the unicast cache space.
  • if the cache space occupied by the queue after the unicast packet is enqueued is greater than the level-1 dynamic discard threshold, the unicast packet is discarded; if the cache space occupied by the queue after the unicast packet is enqueued is less than the level-1 dynamic discard threshold and greater than the level-2 dynamic discard threshold of 44, P_2 is compared with the random number generated by the random number algorithm to decide whether the unicast packet is allowed to be enqueued and allocated cache space; if the cache space occupied by the queue after the unicast packet is enqueued is less than the level-2 dynamic discard threshold of 44 and greater than the level-3 dynamic discard threshold of 29, P_3 is compared with the random number generated by the random number algorithm to decide whether the unicast packet is allowed to be enqueued and allocated cache space; and if the cache space occupied by the queue after the unicast packet is enqueued is less than the level-3 dynamic discard threshold of 29, the unicast packet is enqueued and allocated cache space.
  • as the multicast cache space changes, the dynamic discard thresholds of the unicast cache space are recalculated accordingly; for example, the recalculated level-1, level-2 and level-3 dynamic discard thresholds may be 104, 52, and 34, respectively.
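  • one way the figures above can be reproduced (an assumption made for illustration, not a configuration stated in the patent) is T_0 = 128 cache areas with discard coefficients (1, 2, 3): when multicast packets occupy 40 cache areas, integer division gives level-2 and level-3 thresholds of 44 and 29, and when they occupy 24 it gives 104, 52, and 34, as the following sketch shows.

    def unicast_thresholds(t0, alphas, q_m):
        """Level-1..N dynamic discard thresholds in whole cache areas."""
        return [(t0 - q_m) // a for a in alphas]

    print(unicast_thresholds(128, (1, 2, 3), 40))  # [88, 44, 29]
    print(unicast_thresholds(128, (1, 2, 3), 24))  # [104, 52, 34]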
  • the second embodiment is as follows: the cache space comprises 128 cache areas, and it is currently necessary to determine whether to allocate cache space for a multicast packet.
  • the number of multicast packet priorities is set to 2: priority 0 and priority 1.
  • the discard threshold of multicast packet priority 0 is Mth_0 and its discard probability is Mp_0; the discard threshold of multicast packet priority 1 is Mth_1 and its discard probability is Mp_1; where Mth_0 > Mth_1, Mth_0 ≤ 128, and Mp_0 > Mp_1.
  • whether to allocate cache space for the multicast packet with priority 0 and for the multicast packet with priority 1 is then determined according to the corresponding discard thresholds and discard probabilities; in this example, cache space is allocated for the multicast packet with priority 1.
  • the cache space management method obtains the multicast cache space occupied by multicast packets; calculates the N-level dynamic discard thresholds of the unicast cache space according to the preset maximum discard threshold, the preset N-level discard coefficients, and the multicast cache space;
  • calculates the estimated cache space required by the corresponding queue after the current unicast packet is enqueued; and determines, according to the estimated cache space, the N-level dynamic discard thresholds, and the preset N-level discard probabilities, whether the current unicast packet is allowed to be enqueued and allocated cache space. In this way, the N-level discard thresholds of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated N-level dynamic discard thresholds, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 5 is a schematic structural diagram of a cache space management apparatus according to an embodiment of the present invention. As shown in FIG. 5, the apparatus 5 includes:
  • the obtaining module 51 is configured to obtain a multicast cache space occupied by the multicast packet.
  • the calculation module 52 is configured to calculate a dynamic discarding threshold of the unicast buffer space according to the multicast cache space, where the unicast buffer space is a cache space occupied by the unicast packet.
  • the processing module 53 is configured to determine whether to allocate a cache space for the current unicast packet according to the dynamic discard threshold of the unicast buffer space.
  • the cache space management device obtains the multicast cache space occupied by multicast packets; calculates the dynamic discard threshold of the unicast cache space according to the multicast cache space; and determines, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
  • in this way, the discard threshold of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and it is then determined, according to the calculated dynamic discard threshold, whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • FIG. 6 is a schematic structural diagram of another cache space management apparatus according to an embodiment of the present invention.
  • the calculation module 52 includes:
  • the first setting unit 521 is configured to set a maximum discard threshold of the unicast cache space and N-level discard coefficients of the unicast cache space, where N is a positive integer, the maximum discard threshold is a positive integer, the N-level discard coefficients are all positive integers, and the level-1 discard coefficient ≤ the level-2 discard coefficient ≤ ... ≤ the level-N discard coefficient.
  • the calculating unit 522 is configured to calculate an N-level dynamic discarding threshold of the unicast buffer space according to a maximum discarding threshold of the unicast buffer space, an N-level discarding coefficient of the unicast buffer space, and a multicast buffer space.
  • the calculating unit 522 is specifically configured to calculate a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space, and to calculate the N-level dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N-level discard coefficients of the unicast cache space.
  • the obtaining module 51 is further configured to obtain a queue number carried by the current unicast packet.
  • the calculation module 52 is further configured to estimate the buffer space required for the corresponding queue after the current unicast packet is enqueued; wherein the current unicast packet is enqueued according to the queue number.
  • the processing module 53 is further configured to determine whether to allow the current unicast packet to be enqueued according to the estimated N-level dynamic discard threshold of the cache space and the unicast buffer space, and allocate a buffer space for the current unicast packet.
  • FIG. 7 is a schematic structural diagram of another apparatus for managing a cache space according to an embodiment of the present invention.
  • the processing module 53 includes:
  • the second setting unit 531 is configured to set the N-level discard probabilities of the unicast cache space, where each level of discard probability corresponds to the dynamic discard threshold of the same level, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability.
  • the processing unit 532 is configured to determine, according to the estimated cache space, the N-level dynamic discarding threshold of the unicast buffer space, and the N-level discarding probability of the unicast buffer space, whether to allow the current unicast packet to be enqueued, and Allocate cache space for the current unicast package.
  • FIG. 8 is a schematic structural diagram of another apparatus for managing a cache space according to an embodiment of the present invention.
  • the calculation module 52 further includes:
  • the obtaining unit 523 is configured to obtain, in the queue depth real-time statistics table, the buffer space occupied by the original queue according to the queue number; and obtain the buffer space required for the current unicast packet;
  • the calculating unit 522 is configured to calculate, according to the cache space occupied by the original queue, the cache space required by the current unicast packet, and the number of active queues, the estimated cache space required after the current unicast packet is enqueued;
  • the number of active queues is the number of unicast packet queues that are currently valid in the cache space.
  • the processing unit 532 is configured to discard the unicast packet if the estimated cache space is greater than the level-1 dynamic discard threshold, and, if the estimated cache space is less than the level-i dynamic discard threshold and greater than the level-(i+1) dynamic discard threshold, to compare the level-(i+1) discard probability with the random number generated by the random number algorithm.
  • the obtaining module 51 is further configured to obtain a multicast cache space occupied by the multicast packet after the update.
  • the calculating module 52 is further configured to calculate an N-level dynamic discarding threshold of the unicast buffer space according to the multicast buffer space occupied by the multicast packet update.
  • the cache space management device obtains the multicast cache space occupied by multicast packets; calculates the N-level dynamic discard thresholds of the unicast cache space according to the preset maximum discard threshold, the preset N-level discard coefficients, and the multicast cache space; calculates the estimated cache space required by the corresponding queue after the current unicast packet is enqueued; and determines, according to the estimated cache space, the N-level dynamic discard thresholds, and the preset N-level discard probabilities, whether the current unicast packet is allowed to be enqueued and allocated cache space.
  • in this way, the N-level discard thresholds of the unicast cache space can be calculated dynamically according to the cache space actually occupied by multicast packets, and the calculated N-level dynamic discard thresholds are used to determine whether to allocate cache space for the current unicast packet or to discard it, thereby making full use of the cache space and improving cache space utilization.
  • the obtaining module 51, the calculation module 52, the first setting unit 521, the calculating unit 522, the obtaining unit 523, the processing module 53, the second setting unit 531, and the processing unit 532 may all be implemented by a processor in the management device of the cache space, such as a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA).
  • FIG. 9 is a schematic structural diagram of another apparatus for managing a cache space according to an embodiment of the present invention. As shown in FIG. 9, the apparatus 6 includes:
  • the message type distinguishing module 61 is configured to distinguish whether a newly enqueued message is multicast or unicast; multicast messages are sent to the multicast processing module and unicast messages are sent to the unicast processing module.
  • the configuration module 62 is configured to configure a buffer size, a threshold of the multicast cache space, a maximum threshold of the unicast buffer space, a discarding coefficient of the unicast buffer space, and a discarding probability of the unicast buffer space.
  • the multicast processing module 63 is configured to determine whether to allocate a cache space for the current multicast packet.
  • the unicast processing module 64 is configured to determine whether to allocate a cache space for the current unicast packet.
  • the dynamic threshold calculation module 65 is configured to calculate a dynamic discard threshold of the unicast buffer space.
  • the random number calculation module 66 is configured to perform random number calculation.
  • the message type distinguishing module 61, the configuration module 62, the multicast processing module 63, the unicast processing module 64, the dynamic threshold calculation module 65, and the random number calculation module 66 may all be located in the management device of the cache space.
  • the embodiment of the invention further provides a method for managing a cache space, including: caching a multicast packet when the multicast packet is received; determining a multicast packet cache capacity of the cache space occupied by multicast packets; determining a dynamic discard threshold for unicast packets according to the multicast packet cache capacity; determining an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity required to cache a current unicast packet; and determining whether to cache the current unicast packet according to a comparison of the estimated cache capacity with the dynamic discard threshold.
  • the cache space available in the electronic device for storing unicast packets and multicast packets is limited,
  • so the cache space needs to be used as efficiently as possible.
  • in this embodiment, multicast packets are cached directly using the currently remaining cache space.
  • the capacity of the total buffer space currently occupied by the multicast packet is also determined.
  • the capacity of the buffer space occupied by the multicast packet is referred to as a multicast packet buffer capacity.
  • the dynamic discard threshold of unicast packets is determined according to the multicast packet cache capacity. Since multicast packets and unicast packets share the total cache space in the electronic device, the larger the multicast packet cache capacity, generally the smaller the remaining cache space available for caching unicast packets, and the smaller the corresponding dynamic discard threshold usually is; that is, the multicast packet cache capacity and the dynamic discard threshold are inversely related.
  • the dynamic discard threshold is determined dynamically according to the multicast packet cache space and is used, when a unicast packet is received, to decide according to the cache space already occupied by unicast packets whether to discard the currently received unicast packet or to cache it.
  • in this way, the dynamic discard threshold of unicast packets is set dynamically according to how multicast packets have already been cached, and it then determines whether the currently received unicast packet is cached; this keeps the cache space required by unicast packets small, frees up more cache space for caching multicast packets, reduces the probability that multicast packets are dropped, improves multicast playback, and uses the cache space more efficiently.
  • the determining, according to the multicast packet buffer capacity, the dynamic discarding threshold of the unicast packet includes:
  • the dynamic discarding threshold is determined not only by the capacity of the currently occupied multicast packet buffer space of the multicast packet, but also by the discarding coefficient and the maximum discarding threshold of the unicast packet.
  • the maximum discard threshold of unicast packets can be the maximum cache space that unicast packets are allowed to occupy, as configured.
  • the multi-level dynamic discard thresholds are calculated and then combined with the cache space capacity currently occupied by unicast packets (that is, the unicast packet cache capacity) to determine whether to discard the current unicast packet.
  • the determining, according to the multicast packet buffer capacity, the dynamic discarding threshold of the unicast packet includes:
  • Determining whether to cache the current unicast packet according to the discard probability corresponding to the dynamic discarding threshold including:
  • each dynamic discard threshold also corresponds to a discard probability, which is the probability of discarding the currently received unicast packet.
  • a random number generated by a random number algorithm can be compared with the discard probability to decide whether to discard the current unicast packet, and thus whether to cache it.
  • the level-n dynamic discard threshold is greater than the level-(n+1) dynamic discard threshold, where n is a positive integer smaller than N;
  • comparing the estimated cache capacity with the multi-level dynamic discard thresholds to determine the currently applicable dynamic discard threshold includes:
  • when the estimated cache capacity is greater than the level-1 dynamic discard threshold, determining that the level-1 dynamic discard threshold is the currently applicable dynamic discard threshold;
  • when the estimated cache capacity is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determining that the level-(i+1) dynamic discard threshold is the currently applicable dynamic discard threshold, where i is a positive integer smaller than N.
  • determining whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold includes:
  • if the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold, discarding the unicast packet;
  • if the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, determining whether to cache the current unicast packet according to the level-(i+1) discard probability.
  • determining whether to cache the current unicast packet according to the level-(i+1) discard probability includes: generating a random number; comparing the random number with the level-(i+1) discard probability; and caching the current unicast packet when the level-(i+1) discard probability is smaller than the random number.
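  • purely as an illustration of this second formulation, the following Python sketch selects the currently applicable threshold from the estimated cache capacity and then applies the corresponding discard probability; the descending-threshold assumption matches the text above, and all names and values are illustrative.

    import random

    def applicable_level(estimated, thresholds):
        """Return the 1-based level of the currently applicable threshold, or None."""
        if estimated > thresholds[0]:
            return 1                               # level-1 threshold applies
        for i in range(1, len(thresholds)):
            if estimated >= thresholds[i]:         # between level i and level i+1
                return i + 1
        return None                                # below every threshold

    def cache_current_unicast(estimated, thresholds, probs):
        level = applicable_level(estimated, thresholds)
        if level is None:
            return True                            # below all thresholds: always cache
        if level == 1:
            return False                           # level-1 applicable: always discard
        return probs[level - 1] < random.randint(0, 100)  # cache if P_level < random

    print(cache_current_unicast(60, [112, 56, 28], [90, 60, 30]))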
  • the maximum discard threshold is equal to the maximum unicast packet buffer capacity allowed to be occupied.
  • the embodiment further provides a cache space management apparatus, including:
  • a cache unit configured to cache the multicast packet when receiving the multicast packet
  • a first determining unit configured to determine a multicast packet buffer capacity of a buffer space occupied by the multicast packet
  • a second determining unit configured to determine a dynamic discarding threshold of the unicast packet according to the multicast packet buffer capacity
  • a third determining unit configured to determine an estimated cache capacity, where the estimated cache capacity is a sum of a unicast packet buffer capacity occupied by the unicast packet and a required cache capacity of the cached current unicast packet;
  • the processing unit is configured to determine whether to cache the current unicast packet according to the comparison result of the estimated cache capacity and the dynamic discard threshold.
  • the cache unit herein may include a cache space for buffering unicast packets and/or multicast packets.
  • the first determining unit, the second determining unit, the third determining unit, and the processing unit may each correspond to a processor or a processing circuit.
  • the processor can include a central processing unit, a microprocessor, a digital signal processor, a programmable array or an application processor, and the like.
  • the processing circuit can include an application specific integrated circuit.
  • the processor or processing circuit can implement the functions of the various units described above by executing a computer program.
  • the processing unit is configured to compare the estimated cache capacity with the multi-level dynamic discard thresholds to determine a currently applicable dynamic discard threshold, and to determine whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold.
  • the level-n dynamic discard threshold is greater than the level-(n+1) dynamic discard threshold, where n is a positive integer smaller than N;
  • the processing unit is configured to: when the estimated cache capacity is greater than the level-1 dynamic discard threshold, determine that the level-1 dynamic discard threshold is the currently applicable dynamic discard threshold; and when the estimated cache capacity is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determine that the level-(i+1) dynamic discard threshold is the currently applicable dynamic discard threshold, where i is a positive integer smaller than N.
  • the processing unit is configured to discard the unicast packet when the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold, and to determine whether to cache the current unicast packet according to the level-(i+1) discard probability when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold.
  • the processing unit is configured to generate a random number, compare the random number with the level-(i+1) discard probability, and
  • cache the current unicast packet when the level-(i+1) discard probability is smaller than the random number.
  • the maximum discard threshold is equal to the maximum unicast packet buffer capacity allowed to be occupied.
  • the embodiment provides an electronic device, including:
  • a buffer including a cache space, for caching unicast packets and/or multicast packets
  • a memory for storing a computer program
  • the processor is respectively connected to the buffer and the memory, and is configured to manage the buffer space of the buffer by executing the computer program, and can implement a method for managing a cache space provided by any one of the foregoing technical solutions.
  • the buffer in this embodiment may include: a cache space.
  • the memory may include various storage media capable of storing a computer program, such as a random access memory, a read-only memory, a flash memory, or an optical disc.
  • the processor may include various types of processors, such as a central processing unit, a microprocessor, and the like.
  • the processor is connected to the buffer and the memory through a bus, such as an Inter-Integrated Circuit (IIC) bus, so as to manage the cache space.
  • the electronic devices here can be various types of servers, desktop or notebook computers, mobile phones or wearable devices.
  • the embodiment further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute a management method capable of implementing a cache space provided by any one of the foregoing technical solutions.
  • the computer storage medium here may include various storage media such as an optical disc, a magnetic tape, a read-only storage medium, a removable hard disk, or a flash memory, optionally a non-transitory storage medium, and may be used to store a computer program executable by the processor; after the computer program is executed by the processor, any of the foregoing cache space management methods can be implemented.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • in the embodiments of the present invention, when the cache space is used, a multicast packet is stored directly after it is received, the dynamic discard threshold for unicast packets is then determined based on the caching status of the multicast packets, and this dynamic discard threshold controls how the cache space shared by unicast packets and multicast packets caches unicast packets. This ensures that multicast packets are cached preferentially while unicast packets occupy only the small amount of cache they need, leaving the rest for multicast packets, which improves the effective use of the cache space, reduces the multicast packet loss rate, and improves the multicast effect. At the same time, the solution can be implemented industrially by configuring a computer program in an electronic device, and is simple to implement and highly reproducible.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present invention provide a cache space management method, including: obtaining the multicast cache space occupied by multicast packets; calculating a dynamic discard threshold of the unicast cache space according to the multicast cache space; and determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for a current unicast packet. Embodiments of the present invention also provide a cache space management apparatus, a caching device, and a computer storage medium.

Description

Management method and apparatus for cache space, electronic device, and storage medium
This application is filed on the basis of, and claims priority to, Chinese Patent Application No. 201611018368.6 filed on November 18, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of communications, and in particular to a cache management method and apparatus, an electronic device, and a computer storage medium in data network traffic management technology.
Background
In today's Internet services, not only is the demand for bandwidth growing, but on top of the already huge demand for unicast services the demand for multicast services, such as distance education, video conferencing and video on demand, is also increasing, so access networks, routers, and switches all need to support multicast processing.
Multicast is a point-to-multipoint data distribution technology. Because of the complexity of multicast management and its resource requirements, existing chip designs usually use a static method and manage multicast and unicast separately: the cache space is divided into a unicast cache space and a multicast cache space, the packet type is checked first when cache space is allocated, unicast packets enter the unicast cache space, and multicast packets enter the multicast cache space. Assuming the total cache is S, a fixed multicast cache space S1 is initially set aside for multicast packets and a fixed unicast cache space S2 for unicast packets. With cache space allocated in this way, the cache space needed by the packets being distributed easily exceeds the allocated space, so multicast packets are discarded and the packet loss rate is high; in other cases, once the cache space has been allocated, it sits idle if no packets are currently being distributed, so cache utilization is low.
Here S = S1 + S2. When the cache space occupied by multicast packets exceeds the multicast cache space S1, the current multicast packets are all discarded, so multicast packets cannot use the unicast cache space S2 even if it is idle; when the cache space occupied by unicast packets exceeds the unicast cache space S2, the current unicast packets are all discarded, so unicast packets cannot use the multicast cache space S1 even if it is idle.
This existing cache space management method therefore cannot make full use of the cache space, and cache utilization is low.
Summary
In view of this, embodiments of the present invention provide a cache space management method and apparatus, an electronic device, and a storage medium, so that the cache space can be fully used and cache utilization is improved.
The technical solutions of the embodiments of the present invention are implemented as follows.
A cache space management method includes:
obtaining the multicast cache space occupied by multicast packets;
calculating a dynamic discard threshold of the unicast cache space according to the multicast cache space, where the unicast cache space is the cache space occupied by unicast packets; and
determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for a current unicast packet.
A cache space management apparatus includes:
an obtaining module configured to obtain the multicast cache space occupied by multicast packets;
a calculation module configured to calculate a dynamic discard threshold of the unicast cache space according to the multicast cache space, where the unicast cache space is the cache space occupied by unicast packets; and
a processing module configured to determine, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
An embodiment of the present invention provides a cache space management method, including:
caching a multicast packet when the multicast packet is received;
determining the multicast packet cache capacity of the cache space occupied by multicast packets;
determining a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
determining an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache the current unicast packet; and
determining, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
A cache space management apparatus includes:
a caching unit configured to cache a multicast packet when the multicast packet is received;
a first determining unit configured to determine the multicast packet cache capacity of the cache space occupied by multicast packets;
a second determining unit configured to determine a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
a third determining unit configured to determine an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache the current unicast packet; and
a processing unit configured to determine, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
An electronic device includes:
a buffer, including a cache space, for caching unicast packets and/or multicast packets;
a memory for storing a computer program; and
a processor, connected to the buffer and the memory respectively, and configured to manage the cache space of the buffer by executing the computer program, so as to implement the method provided by any one of the foregoing solutions.
A computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the method provided by any one of the foregoing solutions.
With the cache space management method and apparatus, electronic device, and computer storage medium provided by the embodiments of the present invention, the method includes obtaining the multicast cache space occupied by multicast packets, calculating a dynamic discard threshold of the unicast cache space according to the multicast cache space, and determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet. In this way, the discard threshold of the unicast cache space is calculated dynamically from the cache space actually occupied by multicast packets, and the calculated dynamic discard threshold then determines whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Brief Description of the Drawings
Fig. 1 is a schematic flowchart of a cache space management method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another cache space management method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of still another cache space management method according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of still another cache space management method according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a cache space management apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another cache space management apparatus according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments described below are only intended to illustrate and explain the present invention, not to limit it.
Fig. 1 is a schematic flowchart of a cache space management method according to an embodiment of the present invention. As shown in Fig. 1, the method of this embodiment includes the following steps.
Step 101: obtain the multicast cache space occupied by multicast packets.
Optionally, step 101 of obtaining the multicast cache space occupied by multicast packets may be performed by a cache space management apparatus. It should be noted that the multicast cache space occupied by multicast packets means the size of the multicast cache space occupied by multicast packets.
It should also be noted that an existing global-priority algorithm is used to decide whether to allocate cache space for the current multicast packet or to discard it; an optional processing procedure is as follows.
Set the number of multicast packet priorities, and a discard threshold and a discard probability of the multicast cache space for each priority.
Assuming the current multicast packet can be enqueued normally and obtain the corresponding cache space, calculate the multicast cache space occupied by all multicast packets after the current multicast packet is enqueued; this equals the multicast cache space occupied by all multicast packets before the current multicast packet is enqueued plus the cache space needed by the current multicast packet, i.e. Mnew = Mold + Bm, where Mold is the multicast cache space occupied by all multicast packets before the current multicast packet is enqueued, and Bm is the cache space needed by the current multicast packet.
Obtain the priority of the current multicast packet, match the multicast cache space occupied by all multicast packets after the current multicast packet is enqueued against the dynamic discard threshold of the multicast cache space for each priority to obtain the corresponding discard probability, and then decide whether to allocate cache space for the current multicast packet or to discard it.
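As an illustration only, the following minimal C sketch shows one way this priority-based admission check could be organised. The function and table names (mcast_admit, Mth) and the example threshold values are assumptions for the sketch; the per-priority discard probability mentioned above, whose exact use is left open in the text, is noted only as a comment.

```c
#include <stdbool.h>
#include <stdint.h>

#define MC_PRIO_NUM 4

/* per-priority discard thresholds of the multicast cache space (in cache blocks);
 * the text also associates a discard probability with each priority, which a
 * fuller implementation could apply on top of this threshold check */
static const uint32_t Mth[MC_PRIO_NUM] = { 120, 100, 80, 60 };

/* Mold: blocks already occupied by all multicast packets;
 * Bm:   blocks needed by the current multicast packet      */
static bool mcast_admit(uint32_t Mold, uint32_t Bm, unsigned prio)
{
    uint32_t Mnew = Mold + Bm;   /* occupancy if the packet were enqueued        */
    return Mnew <= Mth[prio];    /* admit while under this priority's threshold  */
}
```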
Step 102: calculate a dynamic discard threshold of the unicast cache space according to the multicast cache space, where the unicast cache space is the cache space occupied by unicast packets.
Optionally, step 102 may be performed by the cache space management apparatus. There may be one dynamic discard threshold or several. If there are several, the dynamic discard thresholds can be divided into N levels, where N is an integer greater than or equal to 1. The value of N can be set according to how fully the cache space needs to be used: if the cache space needs to be used to a greater extent, N can be set larger, so that more levels of dynamic discard thresholds are calculated and step 103 has more criteria on which to decide whether to allocate cache space for the current unicast packet or to discard it.
Step 103: determine, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
Optionally, step 103 may be performed by the cache space management apparatus. The dynamic discard threshold in step 103 is the dynamic discard threshold calculated in step 102: if one dynamic discard threshold is calculated in step 102, step 103 decides whether to allocate cache space for the current unicast packet according to that one threshold; if several dynamic discard thresholds are calculated in step 102, step 103 decides according to those thresholds.
With the cache space management method of this embodiment, the multicast cache space occupied by multicast packets is obtained; a dynamic discard threshold of the unicast cache space is calculated according to the multicast cache space; and whether to allocate cache space for the current unicast packet is determined according to the dynamic discard threshold of the unicast cache space. In this way, the discard threshold of the unicast cache space is calculated dynamically from the cache space actually occupied by multicast packets, and the calculated dynamic discard threshold determines whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Fig. 2 is a schematic flowchart of another cache space management method according to an embodiment of the present invention. As shown in Fig. 2, the method of this embodiment includes the following steps.
Step 201: the cache space management apparatus obtains the multicast cache space occupied by multicast packets.
Step 202: the cache space management apparatus sets a maximum discard threshold of the unicast cache space and N levels of discard coefficients of the unicast cache space, where N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient.
It should be noted that the maximum discard threshold is the initial input for calculating the N levels of dynamic discard thresholds of the unicast cache space.
Step 203: the cache space management apparatus calculates the N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space.
Optionally, step 203 includes: the cache space management apparatus calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space, and then calculates the N levels of dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N levels of discard coefficients of the unicast cache space.
It should be noted that the N levels of discard coefficients include a level-1 discard coefficient, a level-2 discard coefficient, ..., and a level-N discard coefficient. Calculating the N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space specifically includes:
calculating a level-1 dynamic discard threshold of the unicast cache space according to the maximum discard threshold of the unicast cache space, the level-1 discard coefficient of the unicast cache space, and the multicast cache space; calculating a level-2 dynamic discard threshold of the unicast cache space according to the maximum discard threshold, the level-2 discard coefficient, and the multicast cache space; and so on, calculating a level-N dynamic discard threshold of the unicast cache space according to the maximum discard threshold, the level-N discard coefficient, and the multicast cache space.
Optionally, the level-1 dynamic discard threshold of the unicast cache space is T1 = (T0 - Qm)/β1, the level-2 dynamic discard threshold is T2 = (T0 - Qm)/β2, ..., and the level-N dynamic discard threshold is TN = (T0 - Qm)/βN, where Qm is the multicast cache space occupied by multicast packets, T0 is the configured maximum discard threshold of the unicast cache space, and β1, β2, ..., βN are the N levels of discard coefficients of the unicast cache space.
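This threshold calculation can be summarised in a short C sketch; the function name unicast_thresholds and the use of integer cache-block counts are assumptions made for illustration, not part of the patent text.

```c
#include <stdint.h>

/* Compute T[i] = (T0 - Qm) / beta[i] for i = 0..N-1, where T0 is the configured
 * maximum discard threshold, Qm the multicast occupancy, and beta[] the N levels
 * of discard coefficients (beta[0] < beta[1] < ...). T[0] is the level-1 threshold. */
static void unicast_thresholds(uint32_t T0, uint32_t Qm,
                               const uint32_t *beta, uint32_t *T, unsigned N)
{
    uint32_t max_dyn = (Qm < T0) ? (T0 - Qm) : 0;   /* maximum dynamic discard threshold */
    for (unsigned i = 0; i < N; i++)
        T[i] = max_dyn / beta[i];
}
```

Because Qm is subtracted in the numerator, the thresholds shrink as multicast occupancy grows and grow again when multicast cache space is released, which is the dynamic behaviour discussed later in this description.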
Step 204: the cache space management apparatus determines, according to the N levels of dynamic discard thresholds of the unicast cache space, whether to allocate cache space for the current unicast packet.
It should be noted that, for steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, and details are not repeated here.
With the cache space management method of this embodiment, the multicast cache space occupied by multicast packets is obtained; the maximum discard threshold and the N levels of discard coefficients of the unicast cache space are set; the N levels of dynamic discard thresholds of the unicast cache space are calculated according to the maximum discard threshold, the N levels of discard coefficients, and the multicast cache space; and whether to allocate cache space for the current unicast packet is determined according to the N levels of dynamic discard thresholds. In this way, the N levels of discard thresholds of the unicast cache space are calculated dynamically from the cache space actually occupied by multicast packets, and the calculated N levels of dynamic discard thresholds determine whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Fig. 3 is a schematic flowchart of still another cache space management method according to an embodiment of the present invention. As shown in Fig. 3, the method of this embodiment includes the following steps.
Step 301: the cache space management apparatus obtains the multicast cache space occupied by multicast packets.
Step 302: the cache space management apparatus sets a maximum discard threshold of the unicast cache space and N levels of discard coefficients of the unicast cache space, where N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient.
Step 303: the cache space management apparatus calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space.
Step 304: the cache space management apparatus calculates the N levels of dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N levels of discard coefficients of the unicast cache space.
Step 305: the cache space management apparatus obtains the queue number carried by the current unicast packet.
It should be noted that the queue number of a unicast packet is an attribute of the packet. Because the cache space holds more than one unicast packet queue, before each unicast packet is enqueued the cache space management apparatus needs to know which queue the packet will join, so each unicast packet carries this enqueueing information, namely the number of the queue it is to join.
Step 306: the cache space management apparatus calculates the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued, where the current unicast packet is enqueued according to its queue number.
It should be noted that step 306 assumes the current unicast packet is enqueued and calculates the estimated cache space needed by the corresponding queue after enqueueing.
Step 307: the cache space management apparatus sets N levels of discard probabilities of the unicast cache space, where each level of discard probability corresponds to a level of dynamic discard threshold, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability.
It should be noted that the level-1 discard probability corresponds to the level-1 dynamic discard threshold, the level-2 discard probability corresponds to the level-2 dynamic discard threshold, and so on, and the level-N discard probability corresponds to the level-N dynamic discard threshold.
Step 308: the cache space management apparatus determines, according to the estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocates cache space for the current unicast packet.
It should be noted that if the current unicast packet is allowed to be enqueued, cache space is allocated to hold it; if it is not allowed to be enqueued, the current unicast packet is discarded.
It should also be noted that, for steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, and details are not repeated here.
With the cache space management method of this embodiment, the multicast cache space occupied by multicast packets is obtained; the maximum discard threshold and the N levels of discard coefficients of the unicast cache space are set; the N levels of dynamic discard thresholds of the unicast cache space are calculated according to the maximum discard threshold, the N levels of discard coefficients, and the multicast cache space; the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued is calculated; and whether to allow the current unicast packet to be enqueued, and to allocate cache space for it, is determined according to the estimated cache space, the N levels of dynamic discard thresholds, and the preset N levels of discard probabilities. In this way, the N levels of discard thresholds of the unicast cache space are calculated dynamically from the cache space actually occupied by multicast packets, and the calculated N levels of dynamic discard thresholds determine whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Fig. 4 is a schematic flowchart of still another cache space management method according to an embodiment of the present invention. As shown in Fig. 4, the method of this embodiment includes the following steps.
Step 401: the cache space management apparatus obtains the multicast cache space occupied by multicast packets.
Step 402: the cache space management apparatus sets a maximum discard threshold of the unicast cache space and N levels of discard coefficients of the unicast cache space, where N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient.
Step 403: the cache space management apparatus calculates a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space.
Step 404: the cache space management apparatus calculates the N levels of dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N levels of discard coefficients of the unicast cache space.
Step 405: the cache space management apparatus obtains the queue number carried by the current unicast packet.
Step 406: the cache space management apparatus obtains, from a real-time queue-depth statistics table and according to the queue number, the cache space occupied by the original queue.
It should be noted that the original queue is the queue before the current unicast packet is enqueued, and the real-time queue-depth statistics table records the cache space occupied by each queue, so the cache space occupied by the queue corresponding to the queue number can be read from the table using the queue number.
Step 407: the cache space management apparatus obtains the cache space needed by the current unicast packet.
Step 408: the cache space management apparatus calculates the estimated cache space needed by the corresponding queue after the current unicast packet is enqueued, according to the cache space occupied by the original queue, the cache space needed by the current unicast packet, and the number of active queues, where the number of active queues is the number of valid unicast packet queues in the cache space.
It should be noted that the estimated cache space can be obtained by adding the cache space occupied by the original queue to the cache space needed by the current unicast packet, and multiplying the sum by the number of active queues. The estimated cache space needed by the corresponding queue after the current unicast packet is enqueued can be obtained as follows: first, compute the queue's new cache occupancy if the current unicast packet is enqueued, which equals the queue's previous occupancy (Qold) plus the occupancy needed by the current unicast packet (Bu), i.e. Qnew = Qold + Bu; then obtain the number of active unicast queues An in the cache space; finally, the estimated cache space needed by the corresponding queue after the current unicast packet is enqueued is Et = Qnew * An.
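A minimal C sketch of this per-queue estimate follows; the table name queue_depth and the function name are illustrative assumptions.

```c
#include <stdint.h>

#define UNICAST_QUEUE_NUM 8

/* real-time queue-depth statistics table: blocks currently held by each unicast queue */
static uint32_t queue_depth[UNICAST_QUEUE_NUM];

/* qid: queue number carried by the packet; Bu: blocks needed by the packet;
 * An:  number of active unicast queues in the cache space                   */
static uint32_t estimate_after_enqueue(unsigned qid, uint32_t Bu, uint32_t An)
{
    uint32_t Qold = queue_depth[qid];   /* occupancy before this packet            */
    uint32_t Qnew = Qold + Bu;          /* occupancy if the packet were enqueued   */
    return Qnew * An;                   /* Et, compared against the thresholds     */
}
```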
Step 409: the cache space management apparatus sets N levels of discard probabilities of the unicast cache space, where each level of discard probability corresponds to a level of dynamic discard threshold, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability.
Step 410: the cache space management apparatus determines, according to the estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocates cache space for the current unicast packet.
Optionally, step 410 includes: if the estimated cache space is greater than the level-1 dynamic discard threshold, discard the unicast packet; if the estimated cache space is less than the level-1 dynamic discard threshold and greater than the level-2 dynamic discard threshold, compare the level-2 discard probability with a random number generated by a random-number algorithm: if the level-2 discard probability is greater than the random number, discard the unicast packet, and if it is less than the random number, allow the current unicast packet to be enqueued and allocate cache space for it; and so on, if the estimated cache space is less than the level-(N-1) dynamic discard threshold and greater than the level-N dynamic discard threshold, compare the level-N discard probability with a random number generated by the random-number algorithm: if the level-N discard probability is greater than the random number, discard the unicast packet, and if it is less than the random number, allow the current unicast packet to be enqueued and allocate cache space for it; if the estimated cache space is less than the level-N dynamic discard threshold, allow the unicast packet to be enqueued and allocate cache space for it.
It should be noted that there are many random-number algorithms; one of them is NEW[7:0] = {OLD[6:0], OLD[0]^OLD[1]^OLD[2]^OLD[3]^OLD[7]}, where the initial value used for computing the random number is OLD = 8'h56. Optionally, when deciding whether to allow the first current unicast packet to be enqueued, the random number compared with the discard probability is the one computed from the initial value by the random-number algorithm; when deciding whether to allow the second current unicast packet to be enqueued, the random number compared with the discard probability is the new random number computed from the previous random number by the algorithm. In other words, only for the first unicast packet is the random number derived from the initial value; for all other unicast packets it is derived from the previously obtained random number.
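The following C sketch combines the random-number generator and the N-level admission test described in the two paragraphs above. The handling of exact threshold equality and the scaling of the discard probabilities to the 0–255 range of the 8-bit generator are assumptions made for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

static uint8_t lfsr = 0x56;                      /* OLD = 8'h56 */

/* NEW[7:0] = {OLD[6:0], OLD[0]^OLD[1]^OLD[2]^OLD[3]^OLD[7]} */
static uint8_t next_random(void)
{
    uint8_t fb = (uint8_t)((lfsr ^ (lfsr >> 1) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 7)) & 1u);
    lfsr = (uint8_t)((lfsr << 1) | fb);
    return lfsr;
}

/* Et: estimated space if the packet were enqueued; T[0..N-1]: level-1..N dynamic
 * thresholds (T[0] is the largest); P[0..N-1]: level-1..N discard probabilities. */
static bool unicast_admit(uint32_t Et, const uint32_t *T, const uint8_t *P, unsigned N)
{
    if (Et > T[0])                               /* above level-1 threshold: drop  */
        return false;
    if (Et < T[N - 1])                           /* below level-N threshold: admit */
        return true;
    for (unsigned i = 1; i < N; i++)             /* find the band T[i] <= Et <= T[i-1] */
        if (Et >= T[i])
            return P[i] < next_random();         /* admit only when P(i+1) < random    */
    return true;                                 /* not reached                         */
}
```

The generator is shared across packets, so each admission decision after the first consumes the random number derived from the previous one, as required above.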
It should also be noted that, for steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, and details are not repeated here.
With the cache space management method of this embodiment, the multicast cache space occupied by multicast packets is obtained; the N levels of dynamic discard thresholds of the unicast cache space are calculated according to the preset maximum discard threshold, the preset N levels of discard coefficients, and the multicast cache space; the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued is calculated; and whether to allow the current unicast packet to be enqueued, and to allocate cache space for it, is determined according to the estimated cache space, the N levels of dynamic discard thresholds, and the preset N levels of discard probabilities. In this way, the N levels of discard thresholds of the unicast cache space are calculated dynamically from the cache space actually occupied by multicast packets, and the calculated thresholds determine whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Optionally, the cache space management method provided by the present invention further includes: obtaining the multicast cache space occupied by multicast packets after an update. Correspondingly, calculating the dynamic discard threshold of the unicast cache space according to the multicast cache space includes: calculating the N levels of dynamic discard thresholds of the unicast cache space according to the multicast cache space occupied by multicast packets after the update.
It should be noted that if the multicast packets are updated, that is, the multicast cache space releases part of its cache space or gains some, the multicast cache space occupied by multicast packets is obtained again and the dynamic discard thresholds of the unicast cache space are recalculated from it; in other words, the discard thresholds of the unicast space change dynamically as the multicast cache space changes.
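As an illustration of this dynamic behaviour, the short C sketch below (all names and the example configuration are assumptions) recomputes the thresholds whenever the recorded multicast occupancy changes, reusing the same formula as before:

```c
#include <stdint.h>

#define N_LEVELS 3

static uint32_t Qm_current;                        /* multicast occupancy, in cache blocks */
static uint32_t T_dyn[N_LEVELS];                   /* current N-level dynamic thresholds   */
static const uint32_t T0 = 120;                    /* configured maximum discard threshold */
static const uint32_t beta[N_LEVELS] = { 1, 2, 3 };

/* called whenever multicast packets are enqueued or released (delta may be negative) */
static void on_multicast_update(int32_t delta_blocks)
{
    Qm_current = (uint32_t)((int32_t)Qm_current + delta_blocks);
    for (unsigned i = 0; i < N_LEVELS; i++)
        T_dyn[i] = ((Qm_current < T0) ? (T0 - Qm_current) : 0) / beta[i];
}
```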
The following embodiment illustrates how cache space management can be implemented.
Calculate the estimated cache space needed by the corresponding queue after the current unicast packet is enqueued.
Configure the maximum discard threshold T0 of the unicast cache space, the N levels of discard coefficients β1, β2, ..., βN of the unicast cache space, and the N levels of discard probabilities P1, P2, ..., PN of the unicast cache space, where T0 is an integer greater than 0, β1 < β2 < ... < βN, and P1 > P2 > ... > PN.
Obtain the multicast cache space Qm occupied by multicast packets.
Calculate the level-1 dynamic discard threshold T1, the level-2 dynamic discard threshold T2, ..., and the level-N dynamic discard threshold TN of the unicast cache space.
If the queue's new cache occupancy Et exceeds the level-1 dynamic discard threshold T1, discard the current unicast packet entirely; if Et exceeds the level-2 dynamic discard threshold T2 but not the level-1 threshold T1, compare the level-2 discard probability P2 with a random number generated by the random-number algorithm: if P2 is greater than the random number, discard the current unicast packet, and if P2 is less than the random number, allow the current unicast packet to be enqueued and allocate cache space for it; and so on, if Et exceeds the level-N dynamic discard threshold TN but not the level-(N-1) threshold TN-1, compare the level-N discard probability PN with the random number: if PN is greater than the random number, discard the current unicast packet, and if PN is less than the random number, allow it to be enqueued and allocate cache space for it; if Et does not exceed the level-N dynamic discard threshold TN, allow the current unicast packet to be enqueued and allocate cache space for it.
As can be seen from this embodiment, the dynamic discard thresholds of the unicast space are continuously adjusted: as the multicast cache space Qm occupied by multicast packets grows, the dynamic discard thresholds of the unicast cache space shrink, meaning less space is available to unicast packets; as Qm decreases, the dynamic thresholds of the unicast cache space grow, meaning more space is available to unicast packets.
Two specific embodiments further illustrate how cache space management can be implemented. The first embodiment is as follows.
Assume the cache space consists of 128 cache regions, each with a predetermined capacity, for example 1024 M per cache region. Cache space currently needs to be allocated for one multicast packet with priority 0 and one unicast packet.
First, the multicast cache space is configured to have 128 cache regions in total, and the unicast cache space is configured to have 128 cache regions in total.
Next, the number of multicast packet priorities is set to 4, namely priority 0, priority 1, priority 2, and priority 3. The discard threshold of multicast priority 0 is set to Mth0 and its discard probability to Mp0; the discard threshold of priority 1 to Mth1 and its discard probability to Mp1; the discard threshold of priority 2 to Mth2 and its discard probability to Mp2; and the discard threshold of priority 3 to Mth3 and its discard probability to Mp3, where Mth0 > Mth1 > Mth2 > Mth3, Mth0 ≤ 128, and Mp0 > Mp1 > Mp2 > Mp3. The maximum threshold T0 of the unicast cache space is set, as are the three levels of discard coefficients β1, β2, β3 and the three levels of discard probabilities P1, P2, P3 of the unicast cache space.
If it is first decided whether to allocate cache space for the priority-0 multicast packet, the procedure is as follows: assume the cache space previously occupied by multicast packets is 32 and the current multicast packet needs 8, so if cache space is allocated for the current multicast packet, the multicast cache space occupied by multicast packets becomes 32 + 8 = 40. Assuming the discard threshold of multicast priority 0 is configured as Mth0 = 120, and since 120 > 40, cache space is allocated for the current priority-0 multicast packet.
If it is first decided whether to allocate cache space for the unicast packet, the procedure is as follows: assume the cache space previously occupied by multicast packets is 32, and assume T0 = 120, β1 = 1, β2 = 2, β3 = 3. With the multicast cache space at 32, the level-1 dynamic discard threshold of the unicast cache space is T1 = (120 - 32)/1 = 88, the level-2 dynamic discard threshold is T2 = (120 - 32)/2 = 44, and the level-3 dynamic discard threshold is T3 = (120 - 32)/3 ≈ 29. If the cache space occupied by the queue after the unicast packet is enqueued is greater than the level-1 dynamic discard threshold 88, the unicast packet is discarded; if it is less than the level-1 threshold 88 and greater than the level-2 threshold 44, P2 is compared with a random number generated by the random-number algorithm to decide whether to enqueue the packet and allocate cache space for it; if it is less than the level-2 threshold 44 and greater than the level-3 threshold 29, P3 is compared with the random number to decide whether to enqueue the packet and allocate cache space for it; if it is less than the level-3 threshold 29, the unicast packet is enqueued and cache space is allocated for it.
If it is assumed that, after an update, the multicast cache space occupied by multicast packets is 16, then according to the above formula for the dynamic discard thresholds of the unicast cache space, the level-1, level-2, and level-3 dynamic discard thresholds are 104, 52, and 34 respectively.
It can be seen that as multicast cache space is released, the cache space available to unicast packets increases.
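For reference, the short program below (an illustrative sketch, not part of the patent text) recomputes the thresholds of this embodiment for Qm = 32 and, after the multicast update, for Qm = 16, reproducing the values 88/44/29 and 104/52/34 with integer division:

```c
#include <stdio.h>

int main(void)
{
    const unsigned T0 = 120, beta[3] = { 1, 2, 3 };
    const unsigned Qm_values[2] = { 32, 16 };      /* before and after the multicast update */

    for (int k = 0; k < 2; k++) {
        unsigned Qm = Qm_values[k];
        printf("Qm=%2u:", Qm);
        for (int i = 0; i < 3; i++)
            printf(" T%d=%u", i + 1, (T0 - Qm) / beta[i]);
        printf("\n");                              /* Qm=32: 88 44 29   Qm=16: 104 52 34 */
    }
    return 0;
}
```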
The second embodiment is as follows.
Assume the cache space consists of 128 cache regions, and cache space currently needs to be allocated for one multicast packet with priority 0, one multicast packet with priority 1, one unicast packet carrying queue number 0, and one unicast packet carrying queue number 1.
First, the multicast cache space is configured with a maximum of 128 cache regions, and the unicast cache space is configured with a maximum of 128 cache regions.
Next, the number of multicast packet priorities is set to 2, namely priority 0 and priority 1. The discard threshold of multicast priority 0 is set to Mth0 and its discard probability to Mp0, and the discard threshold of priority 1 to Mth1 and its discard probability to Mp1, where Mth0 > Mth1, Mth0 ≤ 128, and Mp0 > Mp1.
The maximum threshold T0 of the unicast cache space is set with T0 ≤ 128, as are the three levels of discard coefficients β1, β2, β3 and the three levels of discard probabilities P1, P2, P3 of the unicast cache space.
First, decide whether to allocate cache space for the priority-1 multicast packet: assume the cache space previously occupied by multicast packets is 32 and the current multicast packet needs 8, so if cache space is allocated for the current multicast packet, the multicast cache space occupied becomes 32 + 8 = 40; assuming the discard threshold of multicast priority 1 is configured as Mth1 = 60, and since 60 > 40, cache space is allocated for the current priority-1 multicast packet.
Then decide whether to allocate cache space for the priority-0 multicast packet: the cache space previously occupied by multicast packets is 40, and assume the current multicast packet needs 16, so if cache space is allocated for it, the multicast cache space occupied becomes 40 + 16 = 56; assuming the discard threshold of multicast priority 0 is configured as Mth0 = 120, and since 120 > 56, cache space is allocated for the current priority-0 multicast packet.
Next, decide whether to allocate cache space for the unicast packet carrying queue number 0: assume T0 = 120, β1 = 1, β2 = 2, β3 = 3; the multicast cache space is now 56, so the level-1 dynamic discard threshold of the unicast cache space is T1 = (120 - 56)/1 = 64, the level-2 dynamic discard threshold is T2 = (120 - 56)/2 = 32, and the level-3 dynamic discard threshold is T3 = (120 - 56)/3 ≈ 21. Assume queue 0 previously occupied a cache space Qold = 32 and the queue-0 unicast packet needs 8; since there are currently two unicast queue numbers, if the queue-0 unicast packet joins queue 0, the estimated cache space needed by queue 0 is Et = Qnew * An = (32 + 8) × 2 = 80. Because 80 > the level-1 dynamic discard threshold 64, the unicast packet is discarded entirely.
If at this point a multicast update releases part of the multicast cache space, say 32, the multicast cache space becomes 56 - 32 = 24. Correspondingly, the level-1 dynamic discard threshold of the unicast cache space becomes T1 = (120 - 24)/1 = 96, the level-2 threshold T2 = (120 - 24)/2 = 48, and the level-3 threshold T3 = (120 - 24)/3 = 32.
Then decide whether to allocate cache space for the unicast packet carrying queue number 1: assume queue 1 previously occupied a cache space Qold = 36 and the queue-1 unicast packet needs 8; the estimated cache space needed by queue 1 if the queue-1 unicast packet joins it is Et = Qnew * An = (36 + 8) × 2 = 88. Because the level-1 dynamic discard threshold 96 > 88 > the level-2 dynamic discard threshold 48, the level-2 discard probability P2 is compared with a random number generated by the random-number algorithm to decide whether to enqueue the packet and allocate cache space for it.
With the cache space management method of this embodiment, the multicast cache space occupied by multicast packets is obtained; the N levels of dynamic discard thresholds of the unicast cache space are calculated according to the preset maximum discard threshold, the preset N levels of discard coefficients, and the multicast cache space; the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued is calculated; and whether to allow the current unicast packet to be enqueued, and to allocate cache space for it, is determined according to the estimated cache space, the N levels of dynamic discard thresholds, and the preset N levels of discard probabilities. In this way, the N levels of discard thresholds of the unicast cache space are calculated dynamically from the cache space actually occupied by multicast packets, and the calculated thresholds determine whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Fig. 5 is a schematic structural diagram of a cache space management apparatus according to an embodiment of the present invention. As shown in Fig. 5, the apparatus 5 includes:
an obtaining module 51 configured to obtain the multicast cache space occupied by multicast packets;
a calculation module 52 configured to calculate a dynamic discard threshold of the unicast cache space according to the multicast cache space, where the unicast cache space is the cache space occupied by unicast packets; and
a processing module 53 configured to determine, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet.
With the cache space management apparatus of this embodiment, the multicast cache space occupied by multicast packets is obtained; a dynamic discard threshold of the unicast cache space is calculated according to the multicast cache space; and whether to allocate cache space for the current unicast packet is determined according to the dynamic discard threshold of the unicast cache space. In this way, the discard threshold of the unicast cache space is calculated dynamically from the cache space actually occupied by multicast packets, and the calculated dynamic discard threshold determines whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
Optionally, Fig. 6 is a schematic structural diagram of another cache space management apparatus according to an embodiment of the present invention. As shown in Fig. 6, the calculation module 52 includes:
a first setting unit 521 configured to set the maximum discard threshold of the unicast cache space and the N levels of discard coefficients of the unicast cache space, where N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient; and
a calculation unit 522 configured to calculate the N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space.
Optionally, the calculation unit 522 is specifically configured to calculate a maximum dynamic discard threshold according to the maximum discard threshold and the multicast cache space, and to calculate the N levels of dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N levels of discard coefficients of the unicast cache space.
The obtaining module 51 is further configured to obtain the queue number carried by the current unicast packet.
The calculation module 52 is further configured to calculate the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued, where the current unicast packet is enqueued according to its queue number.
The processing module 53 is further configured to determine, according to the estimated cache space and the N levels of dynamic discard thresholds of the unicast cache space, whether to allow the current unicast packet to be enqueued, and to allocate cache space for the current unicast packet.
Optionally, Fig. 7 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention. As shown in Fig. 7, the processing module 53 includes:
a second setting unit 531 configured to set the N levels of discard probabilities of the unicast cache space, where each level of discard probability corresponds to a level of dynamic discard threshold, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability; and
a processing unit 532 configured to determine, according to the estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and to allocate cache space for the current unicast packet.
Optionally, Fig. 8 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention. As shown in Fig. 8, the calculation module 52 further includes:
an obtaining unit 523 configured to obtain, from the real-time queue-depth statistics table and according to the queue number, the cache space occupied by the original queue, and to obtain the cache space needed by the current unicast packet; and
the calculation unit 522 is configured to calculate the estimated cache space needed by the corresponding queue after the current unicast packet is enqueued, according to the cache space occupied by the original queue, the cache space needed by the current unicast packet, and the number of active queues, where the number of active queues is the number of valid unicast packet queues in the cache space.
Optionally, the processing unit 532 is configured to: discard the unicast packet if the estimated cache space is greater than the level-1 dynamic discard threshold; if the estimated cache space is less than the level-i dynamic discard threshold and greater than the level-(i+1) dynamic discard threshold, compare the level-(i+1) discard probability with a random number generated by the random-number algorithm, discard the current unicast packet if the level-(i+1) discard probability is greater than the random number, and allow the current unicast packet to be enqueued and allocate cache space for it if the level-(i+1) discard probability is less than the random number, where i = 1, 2, ..., N-1; and allow the unicast packet to be enqueued and allocate cache space for the current unicast packet if the estimated cache space is less than the level-N dynamic discard threshold.
Optionally, the obtaining module 51 is further configured to obtain the multicast cache space occupied by multicast packets after an update.
The calculation module 52 is further configured to calculate the N levels of dynamic discard thresholds of the unicast cache space according to the multicast cache space occupied by multicast packets after the update.
It should be noted that, for the interaction between the modules and units in this embodiment, reference may be made to the method embodiments corresponding to Figs. 1 to 5, and details are not repeated here.
With the cache space management apparatus of this embodiment, the multicast cache space occupied by multicast packets is obtained; the N levels of dynamic discard thresholds of the unicast cache space are calculated according to the preset maximum discard threshold, the preset N levels of discard coefficients, and the multicast cache space; the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued is calculated; and whether to allow the current unicast packet to be enqueued, and to allocate cache space for it, is determined according to the estimated cache space, the N levels of dynamic discard thresholds, and the preset N levels of discard probabilities. In this way, the N levels of discard thresholds of the unicast cache space are calculated dynamically from the cache space actually occupied by multicast packets, and the calculated thresholds determine whether to allocate cache space for the current unicast packet or to discard it, so the cache space is fully used and cache utilization is improved.
In practical applications, the obtaining module 51, the calculation module 52, the first setting unit 521, the calculation unit 522, the obtaining unit 523, the processing module 53, the second setting unit 531, and the processing unit 532 can all be implemented by a Central Processing Unit (CPU), a Micro Processor Unit (MPU), a Digital Signal Processor (DSP), or a Field Programmable Gate Array (FPGA) located in the cache space management apparatus.
Fig. 9 is a schematic structural diagram of still another cache space management apparatus according to an embodiment of the present invention. As shown in Fig. 9, the apparatus 6 includes:
a packet type distinguishing module 61 configured to distinguish whether a newly enqueued packet is multicast or unicast, send multicast packets to the multicast processing module, and send unicast packets to the unicast processing module;
a configuration module 62 configured to configure the cache size, the threshold of the multicast cache space, the maximum threshold of the unicast cache space, the discard coefficients of the unicast cache space, the discard probabilities of the unicast cache space, and so on;
a multicast processing module 63 configured to determine whether to allocate cache space for the current multicast packet;
a unicast processing module 64 configured to determine whether to allocate cache space for the current unicast packet;
a dynamic threshold calculation module 65 configured to calculate the dynamic discard thresholds of the unicast cache space; and
a random number calculation module 66 configured to perform random number calculation.
In practical applications, the packet type distinguishing module 61, the configuration module 62, the multicast processing module 63, the unicast processing module 64, the dynamic threshold calculation module 65, and the random number calculation module 66 can all be implemented by a CPU, MPU, DSP, or FPGA located in the cache space management apparatus.
An embodiment of the present invention further provides a cache space management method, including:
caching a multicast packet when the multicast packet is received;
determining the multicast packet cache capacity of the cache space occupied by multicast packets;
determining a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
determining an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache the current unicast packet; and
determining, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
In this embodiment, the cache space used in an electronic device to store unicast packets and multicast packets is limited. To ensure the caching of multicast packets and to make better use of the cache space, in this embodiment a multicast packet is cached directly using the currently remaining cache space as soon as it is received.
In this embodiment, the total cache space currently occupied by multicast packets is also determined; in this embodiment, the capacity of the cache space occupied by multicast packets is called the multicast packet cache capacity.
Optionally, the dynamic discard threshold for unicast packets is determined according to the multicast packet cache capacity. Because multicast packets and unicast packets share the total cache space in the electronic device, the larger the multicast cache capacity, the smaller the cache space capacity left for caching unicast packets, and thus usually the smaller the corresponding dynamic discard threshold; that is, the multicast packet cache capacity is inversely related to the dynamic discard threshold. In this embodiment the dynamic discard threshold is determined dynamically from the multicast packet cache space and is used, when a unicast packet is received, to decide from the cache capacity already occupied by unicast packets whether to discard the received unicast packet, and thus whether to cache it.
In short, in this embodiment the dynamic discard threshold for unicast packets is set dynamically according to the current caching situation of multicast packets, and it is then decided whether to cache the currently received unicast packet. This ensures that unicast packets occupy only a small amount of cache space, freeing more cache space for caching multicast packets, which reduces the probability of multicast packets being discarded, improves the multicast playback effect, and at the same time uses the cache space more effectively.
Optionally, determining the dynamic discard threshold for unicast packets according to the multicast packet cache capacity includes:
calculating the dynamic discard threshold using the formula t = (T0 - Qm)/β, where t is the dynamic discard threshold, T0 is the maximum discard threshold for unicast packets, Qm is the capacity of the multicast cache space, and β is a discard coefficient.
In this embodiment the dynamic discard threshold depends not only on the multicast packet cache space capacity currently occupied by multicast packets, but also on the discard coefficient and the maximum discard threshold for unicast packets; in this embodiment the maximum discard threshold for unicast packets may be the configured maximum cache space capacity that unicast packets may occupy.
Optionally, to avoid discarding only the unicast packets that arrive later during caching (that is, the unicast packets at the tail of the queue), in this embodiment multiple levels of dynamic discard thresholds are calculated and then combined with the cache space capacity already occupied by unicast packets (that is, the unicast packet cache capacity) to decide whether to discard the current unicast packet. Specifically, determining the dynamic discard threshold for unicast packets according to the multicast packet cache capacity includes:
calculating the multi-level dynamic discard thresholds for unicast packets using the formula TN = (T0 - Qm)/βN, where N is a positive integer, TN is the level-N dynamic discard threshold, T0 is the maximum discard threshold, and βN is the level-N discard coefficient.
Determining whether to cache the current unicast packet according to the discard probability corresponding to the dynamic discard threshold includes:
comparing the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine the currently applicable dynamic discard threshold; and
determining whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold.
Because there are multiple levels of dynamic discard thresholds, the currently applicable dynamic discard threshold must be determined from the current estimated cache capacity, and then, combined with the currently applicable dynamic discard threshold, it is decided whether to discard the current unicast packet: if it is to be discarded it is obviously not cached, and if it is not to be discarded it obviously needs to be cached. In this embodiment each dynamic discard threshold also corresponds to a discard probability, which is the probability of discarding the currently received unicast packet; in this embodiment a random number generated by a random algorithm can be compared with the discard probability to finally decide whether to discard the current unicast packet, and thus whether to cache it.
Optionally, the level-n dynamic discard threshold is smaller than the level-(n+1) discard threshold, where n is a positive integer smaller than N.
Comparing the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine the currently applicable dynamic discard threshold includes:
when the estimated cache space is greater than the level-1 dynamic discard threshold, determining the level-1 dynamic discard threshold to be the currently applicable dynamic discard threshold; and
when the estimated cache space is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determining the level-(i+1) dynamic discard threshold to be the currently applicable dynamic discard threshold, where i is a positive integer smaller than N.
Optionally, determining whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold includes:
discarding the unicast packet when the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold; and
when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, determining whether to cache the current unicast packet according to the level-(i+1) discard probability.
In some embodiments, determining whether to cache the current unicast packet according to the level-(i+1) discard probability when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold includes: generating a random number; comparing the random number with the level-(i+1) discard coefficient; and caching the current unicast packet when the level-(i+1) discard coefficient is smaller than the random number.
Optionally, the maximum discard threshold is equal to the maximum unicast packet cache capacity allowed to be occupied.
This embodiment further provides a cache space management apparatus, including:
a caching unit configured to cache a multicast packet when the multicast packet is received;
a first determining unit configured to determine the multicast packet cache capacity of the cache space occupied by multicast packets;
a second determining unit configured to determine a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
a third determining unit configured to determine an estimated cache capacity, where the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache the current unicast packet; and
a processing unit configured to determine, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
The caching unit here may include a cache space for caching unicast packets and/or multicast packets.
The first determining unit, the second determining unit, the third determining unit, and the processing unit may all correspond to a processor or a processing circuit. The processor may include a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application processor, or the like. The processing circuit may include an application-specific integrated circuit. The processor or processing circuit can implement the functions of the above units by executing a computer program.
Optionally, the second determining unit is configured to calculate the dynamic discard threshold using the formula t = (T0 - Qm)/β, where t is the dynamic discard threshold, T0 is the maximum discard threshold for unicast packets, Qm is the capacity of the multicast cache space, and β is a discard coefficient.
Optionally, the second determining unit is configured to calculate the multi-level dynamic discard thresholds for unicast packets using the formula TN = (T0 - Qm)/βN, where N is a positive integer, TN is the level-N dynamic discard threshold, T0 is the maximum discard threshold, and βN is the level-N discard coefficient.
The processing unit is configured to compare the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine the currently applicable dynamic discard threshold, and to determine whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold.
In some embodiments, the level-n dynamic discard threshold is smaller than the level-(n+1) discard threshold, where n is a positive integer smaller than N.
The processing unit is configured to: when the estimated cache space is greater than the level-1 dynamic discard threshold, determine the level-1 dynamic discard threshold to be the currently applicable dynamic discard threshold; and when the estimated cache space is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determine the level-(i+1) dynamic discard threshold to be the currently applicable dynamic discard threshold, where i is a positive integer smaller than N.
In some embodiments, the processing unit is configured to discard the unicast packet when the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold, and, when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, to determine whether to cache the current unicast packet according to the level-(i+1) discard probability.
In other embodiments, the processing unit is configured to generate a random number, compare the random number with the level-(i+1) discard coefficient, and cache the current unicast packet when the level-(i+1) discard coefficient is smaller than the random number.
In addition, the maximum discard threshold is equal to the maximum unicast packet cache capacity allowed to be occupied.
This embodiment provides an electronic device, including:
a buffer, including a cache space, for caching unicast packets and/or multicast packets;
a memory for storing a computer program; and
a processor, connected to the buffer and the memory respectively, and configured to manage the cache space of the buffer by executing the computer program, so as to implement the cache space management method provided by any one of the foregoing technical solutions.
The buffer in this embodiment may include a cache space. The memory may include various storage media capable of storing a computer program, such as a random access memory, a read-only memory, a flash memory, or an optical disc. The processor may include various types of processors, such as a central processing unit or a microprocessor.
The processor is connected to the buffer or the memory through a bus so as to manage the cache space, where the bus may be an Inter-Integrated Circuit (IIC) bus or the like.
The electronic device here may be any of various types of servers, desktop or notebook computers, mobile phones, or wearable devices.
This embodiment further provides a computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the cache space management method provided by any one of the foregoing technical solutions.
The computer storage medium here may include various storage media such as an optical disc, a magnetic tape, a read-only storage medium, a removable hard disk, or a flash memory, optionally a non-transitory storage medium, and can be used to store a computer program executable by a processor. After being executed by the processor, the computer program can be used to implement any of the foregoing cache space management methods.
Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, and the instruction apparatus implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification made in accordance with the principles of the present invention should be understood as falling within the protection scope of the present invention.
Industrial Applicability
In the embodiments of the present invention, when the cache space is used, a multicast packet is stored directly after it is received, the dynamic discard threshold for unicast packets is then determined based on the caching status of the multicast packets, and this dynamic discard threshold controls how the cache space shared by unicast packets and multicast packets caches unicast packets. This ensures that multicast packets are cached preferentially while unicast packets occupy only the small amount of cache they need, leaving the rest for caching multicast packets, which improves the effective use of the cache space, reduces the multicast packet loss rate, and improves the multicast effect. At the same time, the solution can be implemented industrially by configuring a computer program in an electronic device, and is simple to implement and highly reproducible.

Claims (28)

  1. A cache space management method, the method comprising:
    obtaining a multicast cache space occupied by multicast packets;
    calculating a dynamic discard threshold of a unicast cache space according to the multicast cache space, wherein the unicast cache space is the cache space occupied by unicast packets; and
    determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for a current unicast packet.
  2. The method according to claim 1, wherein calculating the dynamic discard threshold of the unicast cache space according to the multicast cache space comprises:
    setting a maximum discard threshold of the unicast cache space and N levels of discard coefficients of the unicast cache space, wherein N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient; and
    calculating N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space.
  3. The method according to claim 2, wherein calculating the N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space comprises:
    calculating a maximum dynamic discard threshold according to the maximum discard threshold of the unicast cache space and the multicast cache space; and
    calculating the N levels of dynamic discard thresholds of the unicast cache space according to the maximum dynamic discard threshold and the N levels of discard coefficients of the unicast cache space.
  4. The method according to claim 2, wherein before determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet, the method further comprises:
    obtaining a queue number carried by the current unicast packet; and
    calculating an estimated cache space needed by the corresponding queue if the current unicast packet were enqueued, wherein the current unicast packet is enqueued according to the queue number;
    correspondingly, determining, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for the current unicast packet comprises:
    determining, according to the estimated cache space and the N levels of dynamic discard thresholds of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocating cache space for the current unicast packet.
  5. The method according to claim 4, wherein determining, according to the estimated cache space and the N levels of dynamic discard thresholds of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocating cache space for the current unicast packet, comprises:
    setting N levels of discard probabilities of the unicast cache space, wherein each level of discard probability corresponds to a level of dynamic discard threshold, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability; and
    determining, according to the estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocating cache space for the current unicast packet.
  6. The method according to claim 4, wherein calculating the estimated cache space needed by the corresponding queue if the current unicast packet were enqueued comprises:
    obtaining, from a real-time queue-depth statistics table and according to the queue number, the cache space occupied by the original queue;
    obtaining the cache space needed by the current unicast packet; and
    calculating the estimated cache space needed by the corresponding queue after the current unicast packet is enqueued, according to the cache space occupied by the original queue, the cache space needed by the current unicast packet, and the number of active queues, wherein the number of active queues is the number of valid unicast packet queues in the cache space.
  7. The method according to claim 5, wherein determining, according to the estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and allocating cache space for the current unicast packet, comprises:
    if the estimated cache space is greater than the level-1 dynamic discard threshold, discarding the unicast packet;
    if the estimated cache space is less than the level-i dynamic discard threshold and greater than the level-(i+1) dynamic discard threshold, comparing the level-(i+1) discard probability with a random number generated by a random-number algorithm, discarding the current unicast packet if the level-(i+1) discard probability is greater than the random number, and allowing the current unicast packet to be enqueued and allocating cache space for the current unicast packet if the level-(i+1) discard probability is less than the random number, wherein i = 1, 2, ..., N-1, and N is a positive integer greater than 1; and
    if the estimated cache space is less than the level-N dynamic discard threshold, allowing the unicast packet to be enqueued and allocating cache space for the current unicast packet.
  8. The method according to claim 1, wherein the method further comprises:
    obtaining the multicast cache space occupied by the multicast packets after an update;
    correspondingly, calculating the dynamic discard threshold of the unicast cache space according to the multicast cache space comprises:
    calculating the N levels of dynamic discard thresholds of the unicast cache space according to the multicast cache space occupied by the multicast packets after the update.
  9. A cache space management apparatus, wherein the apparatus comprises:
    an obtaining module configured to obtain a multicast cache space occupied by multicast packets;
    a calculation module configured to calculate a dynamic discard threshold of a unicast cache space according to the multicast cache space, wherein the unicast cache space is the cache space occupied by unicast packets; and
    a processing module configured to determine, according to the dynamic discard threshold of the unicast cache space, whether to allocate cache space for a current unicast packet.
  10. The apparatus according to claim 9, wherein the calculation module comprises:
    a first setting unit configured to set a maximum discard threshold of the unicast cache space and N levels of discard coefficients of the unicast cache space, wherein N is a positive integer, the maximum discard threshold is a positive integer, the N levels of discard coefficients are all positive integers, and the level-1 discard coefficient < the level-2 discard coefficient < ... < the level-N discard coefficient; and
    a calculation unit configured to calculate N levels of dynamic discard thresholds of the unicast cache space according to the maximum discard threshold of the unicast cache space, the N levels of discard coefficients of the unicast cache space, and the multicast cache space.
  11. The apparatus according to claim 10, wherein the processing module comprises:
    a second setting unit configured to set N levels of discard probabilities of the unicast cache space, wherein each level of discard probability corresponds to a level of dynamic discard threshold, and the level-1 discard probability > the level-2 discard probability > ... > the level-N discard probability; and
    a processing unit configured to determine, according to an estimated cache space, the N levels of dynamic discard thresholds of the unicast cache space, and the N levels of discard probabilities of the unicast cache space, whether to allow the current unicast packet to be enqueued, and to allocate cache space for the current unicast packet.
  12. The apparatus according to claim 11, wherein
    the processing unit is configured to: discard the unicast packet if the estimated cache space is greater than the level-1 dynamic discard threshold; if the estimated cache space is less than the level-i dynamic discard threshold and greater than the level-(i+1) dynamic discard threshold, compare the level-(i+1) discard probability with a random number generated by a random-number algorithm, discard the current unicast packet if the level-(i+1) discard probability is greater than the random number, and allow the current unicast packet to be enqueued and allocate cache space for the current unicast packet if the level-(i+1) discard probability is less than the random number, wherein i = 1, 2, ..., N-1; and allow the unicast packet to be enqueued and allocate cache space for the current unicast packet if the estimated cache space is less than the level-N dynamic discard threshold.
  13. A cache space management method, comprising:
    caching a multicast packet when the multicast packet is received;
    determining a multicast packet cache capacity of the cache space occupied by multicast packets;
    determining a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
    determining an estimated cache capacity, wherein the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache a current unicast packet; and
    determining, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
  14. The method according to claim 13, wherein
    determining the dynamic discard threshold for unicast packets according to the multicast packet cache capacity comprises:
    calculating the dynamic discard threshold using the formula t = (T0 - Qm)/β, wherein t is the dynamic discard threshold, T0 is the maximum discard threshold for unicast packets, Qm is the capacity of the multicast cache space, and β is a discard coefficient.
  15. The method according to claim 13 or 14, wherein
    determining the dynamic discard threshold for unicast packets according to the multicast packet cache capacity comprises:
    calculating multi-level dynamic discard thresholds for the unicast packets using the formula TN = (T0 - Qm)/βN, wherein N is a positive integer, TN is the level-N dynamic discard threshold, T0 is the maximum discard threshold, and βN is the level-N discard coefficient; and
    determining whether to cache the current unicast packet according to the discard probability corresponding to the dynamic discard threshold comprises:
    comparing the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine a currently applicable dynamic discard threshold; and
    determining whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold.
  16. The method according to claim 15, wherein
    the level-n dynamic discard threshold is smaller than the level-(n+1) discard threshold, wherein n is a positive integer smaller than N; and
    comparing the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine the currently applicable dynamic discard threshold comprises:
    when the estimated cache space is greater than the level-1 dynamic discard threshold, determining the level-1 dynamic discard threshold to be the currently applicable dynamic discard threshold; and
    when the estimated cache space is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determining the level-(i+1) dynamic discard threshold to be the currently applicable dynamic discard threshold, wherein i is a positive integer smaller than N.
  17. The method according to claim 16, wherein
    determining whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold comprises:
    discarding the unicast packet when the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold; and
    when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, determining whether to cache the current unicast packet according to the level-(i+1) discard probability.
  18. The method according to claim 17, wherein
    when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, determining whether to cache the current unicast packet according to the level-(i+1) discard probability comprises:
    generating a random number;
    comparing the random number with the level-(i+1) discard coefficient; and
    caching the current unicast packet when the level-(i+1) discard coefficient is smaller than the random number.
  19. The method according to claim 14, wherein
    the maximum discard threshold is equal to the maximum unicast packet cache capacity allowed to be occupied.
  20. A cache space management apparatus, comprising:
    a caching unit configured to cache a multicast packet when the multicast packet is received;
    a first determining unit configured to determine a multicast packet cache capacity of the cache space occupied by multicast packets;
    a second determining unit configured to determine a dynamic discard threshold for unicast packets according to the multicast packet cache capacity;
    a third determining unit configured to determine an estimated cache capacity, wherein the estimated cache capacity is the sum of the unicast packet cache capacity already occupied by unicast packets and the cache capacity needed to cache a current unicast packet; and
    a processing unit configured to determine, according to a comparison between the estimated cache capacity and the dynamic discard threshold, whether to cache the current unicast packet.
  21. The apparatus according to claim 20, wherein
    the second determining unit is configured to calculate the dynamic discard threshold using the formula t = (T0 - Qm)/β, wherein t is the dynamic discard threshold, T0 is the maximum discard threshold for unicast packets, Qm is the capacity of the multicast cache space, and β is a discard coefficient.
  22. The apparatus according to claim 20 or 21, wherein
    the second determining unit is configured to calculate multi-level dynamic discard thresholds for the unicast packets using the formula TN = (T0 - Qm)/βN, wherein N is a positive integer, TN is the level-N dynamic discard threshold, T0 is the maximum discard threshold, and βN is the level-N discard coefficient; and
    the processing unit is configured to compare the estimated cache capacity with the multiple levels of dynamic discard thresholds to determine a currently applicable dynamic discard threshold, and to determine whether to cache the current unicast packet according to the currently applicable dynamic discard threshold and/or the discard probability corresponding to the currently applicable dynamic discard threshold.
  23. The apparatus according to claim 22, wherein
    the level-n dynamic discard threshold is smaller than the level-(n+1) discard threshold, wherein n is a positive integer smaller than N; and
    the processing unit is configured to: when the estimated cache space is greater than the level-1 dynamic discard threshold, determine the level-1 dynamic discard threshold to be the currently applicable dynamic discard threshold; and when the estimated cache space is less than the level-i dynamic discard threshold and not less than the level-(i+1) dynamic discard threshold, determine the level-(i+1) dynamic discard threshold to be the currently applicable dynamic discard threshold, wherein i is a positive integer smaller than N.
  24. The apparatus according to claim 23, wherein
    the processing unit is configured to discard the unicast packet when the currently applicable dynamic discard threshold is the level-1 dynamic discard threshold, and, when the currently applicable dynamic discard threshold is the level-(i+1) dynamic discard threshold, to determine whether to cache the current unicast packet according to the level-(i+1) discard probability.
  25. The apparatus according to claim 24, wherein
    the processing unit is configured to generate a random number;
    compare the random number with the level-(i+1) discard coefficient; and
    cache the current unicast packet when the level-(i+1) discard coefficient is smaller than the random number.
  26. The apparatus according to claim 21, wherein
    the maximum discard threshold is equal to the maximum unicast packet cache capacity allowed to be occupied.
  27. An electronic device, comprising:
    a buffer, including a cache space, for caching unicast packets and/or multicast packets;
    a memory for storing a computer program; and
    a processor, connected to the buffer and the memory respectively, and configured to manage the cache space of the buffer by executing the computer program, so as to implement the method according to any one of claims 1 to 8 and 13 to 19.
  28. A computer storage medium, wherein the computer storage medium stores computer-executable instructions, and the computer-executable instructions are used to execute the method according to any one of claims 1 to 8 and 13 to 19.
PCT/CN2017/082635 2016-11-18 2017-04-28 缓存空间的管理方法和装置、电子设备和存储介质 WO2018090573A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611018368.6 2016-11-18
CN201611018368.6A CN108076020B (zh) 2016-11-18 2016-11-18 一种缓存空间的管理方法及装置

Publications (1)

Publication Number Publication Date
WO2018090573A1 true WO2018090573A1 (zh) 2018-05-24

Family

ID=62146054

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082635 WO2018090573A1 (zh) 2016-11-18 2017-04-28 缓存空间的管理方法和装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN108076020B (zh)
WO (1) WO2018090573A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099272A (zh) * 2021-04-12 2021-07-09 上海商汤智能科技有限公司 视频处理方法及装置、电子设备和存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110557432B (zh) * 2019-07-26 2022-04-26 苏州浪潮智能科技有限公司 一种缓存池均衡优化方法、系统、终端及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102036177A (zh) * 2009-09-29 2011-04-27 华为技术有限公司 组播广播业务流量控制方法及相关设备
CN102857446A (zh) * 2011-06-30 2013-01-02 中兴通讯股份有限公司 以太网交换芯片的缓存管理方法及装置
KR101241507B1 (ko) * 2011-11-30 2013-03-11 한국과학기술원 멀티캐스트를 이용한 주문형 컨텐츠 서비스를 위한 캐시 시스템 및 캐시 할당 방법

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1184777C (zh) * 2002-04-17 2005-01-12 华为技术有限公司 以太网交换芯片传输数据过程中缓存的管理和分配方法
GB201116737D0 (en) * 2011-09-28 2011-11-09 Ericsson Telefon Ab L M Caching in mobile networks
CN104102693B (zh) * 2014-06-19 2017-10-24 广州华多网络科技有限公司 对象处理方法和装置
US9866401B2 (en) * 2015-05-13 2018-01-09 Cisco Technology, Inc. Dynamic protection of shared memory and packet descriptors used by output queues in a network device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102036177A (zh) * 2009-09-29 2011-04-27 华为技术有限公司 组播广播业务流量控制方法及相关设备
CN102857446A (zh) * 2011-06-30 2013-01-02 中兴通讯股份有限公司 以太网交换芯片的缓存管理方法及装置
KR101241507B1 (ko) * 2011-11-30 2013-03-11 한국과학기술원 멀티캐스트를 이용한 주문형 컨텐츠 서비스를 위한 캐시 시스템 및 캐시 할당 방법

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113099272A (zh) * 2021-04-12 2021-07-09 上海商汤智能科技有限公司 视频处理方法及装置、电子设备和存储介质

Also Published As

Publication number Publication date
CN108076020A (zh) 2018-05-25
CN108076020B (zh) 2020-09-08

Similar Documents

Publication Publication Date Title
US9823947B2 (en) Method and system for allocating FPGA resources
US8230110B2 (en) Work-conserving packet scheduling in network devices
US10193831B2 (en) Device and method for packet processing with memories having different latencies
US10044646B1 (en) Systems and methods for efficiently storing packet data in network switches
EP3504849B1 (en) Queue protection using a shared global memory reserve
US20130074091A1 (en) Techniques for ensuring resources achieve performance metrics in a multi-tenant storage controller
US20140052938A1 (en) Clumsy Flow Control Method and Apparatus for Improving Performance and Energy Efficiency in On-Chip Network
WO2017206587A1 (zh) 一种优先级队列调度的方法及装置
CN111245732B (zh) 一种流量控制方法、装置及设备
CN107347039B (zh) 一种共享缓存空间的管理方法及装置
WO2017000872A1 (zh) 缓存分配方法及装置
WO2020134425A1 (zh) 一种数据处理方法、装置、设备及存储介质
US9942169B1 (en) Systems and methods for efficiently searching for stored data
WO2013189364A1 (zh) 一种处理报文的方法和装置
WO2017107363A1 (zh) 缓存管理的方法和装置、计算机存储介质
WO2018090573A1 (zh) 缓存空间的管理方法和装置、电子设备和存储介质
WO2017133439A1 (zh) 一种数据管理方法及装置、计算机存储介质
US8717891B2 (en) Shaping apparatus and method
WO2020168563A1 (zh) 一种存储器的管理方法及装置
CN112148644A (zh) 处理输入/输出请求的方法、装置和计算机程序产品
US11646970B2 (en) Method and apparatus for determining packet dequeue rate
CN109905331B (zh) 队列调度方法及装置、通信设备、存储介质
EP3440547B1 (en) Qos class based servicing of requests for a shared resource
CN111756586B (zh) 一种数据中心网络中基于优先级队列的公平带宽分配方法、交换机及可读存储介质
US10205666B2 (en) End-to-end flow control in system on chip interconnects

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17871275

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17871275

Country of ref document: EP

Kind code of ref document: A1