WO2017032075A1 - Quality of service multiplexing method and apparatus, and computer storage medium (一种服务质量复用方法及装置、计算机存储介质) - Google Patents

Quality of service multiplexing method and apparatus, and computer storage medium (一种服务质量复用方法及装置、计算机存储介质)

Info

Publication number
WO2017032075A1
WO2017032075A1 (PCT/CN2016/082148, application CN2016082148W)
Authority
WO
WIPO (PCT)
Prior art keywords
queue
packet
qos
management unit
message
Prior art date
Application number
PCT/CN2016/082148
Other languages
English (en)
French (fr)
Inventor
胡学权
Original Assignee
深圳市中兴微电子技术有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司
Publication of WO2017032075A1 publication Critical patent/WO2017032075A1/zh

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/50 - Queue scheduling
    • H04L47/62 - Queue scheduling characterised by scheduling criteria
    • H04L47/6215 - Individual queue per QoS, rate or priority

Definitions

  • the present invention relates to a quality of service (QoS) scheduling technology in the field of communications, and in particular, to a QoS multiplexing method and apparatus, and a computer storage medium.
  • QoS is typically used to monitor quality of service in two respects. In the first respect, the total bandwidth of a communication network is limited, so at initialization the network can agree on a different usage bandwidth with each user; the user pays according to the agreed bandwidth, and the network assigns priorities according to the bandwidth each user uses, so that the user with the larger bandwidth has the higher priority. When the network suffers delay or congestion, packets sent by high-priority users are processed first, while packets sent by low-priority users are still processed as promptly as possible, so that the legitimate bandwidth of high-priority users is guaranteed while the QoS of low-priority users is preserved as far as possible.
  • In the second respect, different users run different services, and each service has its own delay requirement. For an ordinary web-browsing user a longer page response time is acceptable, but for users of services such as video communication or VoIP a long network delay degrades call or picture quality. Therefore, when the network suffers delay or congestion, packets sent by users of call-type services are processed first, and packets sent by ordinary web-browsing users are processed as promptly as possible, so that ordinary Internet access keeps reasonable QoS without affecting the call or picture quality of services such as video communication or VoIP.
  • To implement both kinds of QoS, an existing communication network usually has to deploy two separate sets of QoS devices, one monitoring the first aspect and one the second, which requires considerable logic resources and correspondingly increases the design complexity of the QoS chip.
  • In view of this, embodiments of the present invention are intended to provide a QoS multiplexing method and apparatus, and a computer storage medium, capable of saving the logic resources needed to implement QoS and reducing the complexity of the QoS chip.
  • In one aspect, an embodiment of the present invention provides a QoS multiplexing method, where the method includes:
  • assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet, where the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N;
  • determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number;
  • if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number;
  • obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and
  • when i is less than N, incrementing i by 1 and repeating the steps from determining whether to discard the packet to obtaining the packet, until i equals N, at which point the packet is output.
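  • The summary above can be read as a single loop over the N QoS passes. The following Python sketch is purely illustrative (the names `QosPass`, `admit`, `schedule`, and `qos_multiplex` are ours, not the patent's) and shows one way the assign, check-buffer, enqueue, dequeue, repeat flow could be expressed in software, assuming each pass supplies its own queue-number mapping, buffer thresholds, and queues.

```python
from collections import deque

class QosPass:
    """One QoS stage: its own queue-number mapping, buffer limits and queues.

    Illustrative only -- names and structure are assumptions, not the patent's API.
    """
    def __init__(self, name, map_queue_number, buffer_limits):
        self.name = name
        self.map_queue_number = map_queue_number      # packet -> queue number
        self.buffer_limits = buffer_limits            # queue number -> max buffered packets
        self.occupancy = {q: 0 for q in buffer_limits}
        self.queues = {q: deque() for q in buffer_limits}

    def admit(self, packet):
        """Compare occupied buffer space with the pre-configured space (tail drop)."""
        q = self.map_queue_number(packet)
        if self.occupancy[q] >= self.buffer_limits[q]:
            return None                               # discard the packet
        self.occupancy[q] += 1
        self.queues[q].append(packet)
        return q

    def schedule(self):
        """Preset-rule placeholder: plain round-robin over non-empty queues."""
        for q, queue in self.queues.items():
            if queue:
                self.occupancy[q] -= 1
                return queue.popleft()
        return None

def qos_multiplex(packet, passes):
    """Run one packet through N QoS passes in sequence (i = 1 .. N)."""
    for qos_pass in passes:                 # the i-th pass corresponds to the i-th QoS
        if qos_pass.admit(packet) is None:
            return None                     # packet discarded at this pass
        packet = qos_pass.schedule()        # obtain it again per the preset rule
    return packet                           # all N passes done: output the packet
```

  • With N = 2 the two passes would stand in for the e8 scheduling and T-cont scheduling described below.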
  • Optionally, N is 2 and the two QoS operations are e8 scheduling and T-cont scheduling respectively; assigning N queue numbers to the received packet, corresponding respectively to the N QoS operations performed on the packet, then includes:
  • assigning two queue numbers to the received packet, namely an e8-scheduling queue number and a T-cont-scheduling queue number.
  • Optionally, the preset rule is Weighted Fair Queuing (WFQ), Deficit Round Robin (DRR), Strict Priority (SP), or Round-Robin (RR).
  • Optionally, after the N queue numbers are assigned to the received packet, the method further includes:
  • adding the N queue numbers assigned to the packet to the descriptor of the packet.
  • Optionally, the e8 scheduling includes eight different queue numbers and the T-cont scheduling includes 256 different queue numbers; assigning two queue numbers to the received packet then includes:
  • assigning the packet an e8-scheduling queue number from the eight different queue numbers and a T-cont-scheduling queue number from the 256 different queue numbers, according to the sending port of the packet.
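  • As a concrete illustration of that allocation, the sketch below maps a packet's sending port to one of the eight e8 queue numbers and one of the 256 T-cont queue numbers. The per-port tables are invented for the example; the patent only states that the mapping is derived from the sending port.

```python
def assign_queue_numbers(sending_port, e8_table, tcont_table):
    """Return (e8 queue number, T-cont queue number) for a sending port.

    e8_table and tcont_table are hypothetical per-port tables filled in at
    initialization; the patent leaves their exact contents to the implementer.
    """
    e8_qn = e8_table[sending_port]          # 0..7, larger means higher priority
    tcont_qn = tcont_table[sending_port]    # 0..255, i.e. (T-cont index, sequence)
    return e8_qn, tcont_qn

# Example tables for four ports (values are arbitrary illustrations).
e8_table = {0: 7, 1: 5, 2: 2, 3: 0}
tcont_table = {0: 8 * 3 + 1, 1: 8 * 12 + 6, 2: 8 * 0 + 4, 3: 8 * 31 + 7}

print(assign_queue_numbers(1, e8_table, tcont_table))   # (5, 102)
```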
  • In another aspect, an embodiment of the present invention provides a QoS multiplexing apparatus capable of performing N QoS operations, N being an integer greater than 1.
  • The QoS multiplexing apparatus includes a queue mapping unit, a cache management unit, a queue management unit, and a scheduling and shaping unit.
  • The queue mapping unit is configured to assign N queue numbers to a received packet, corresponding respectively to the N QoS operations, where the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N.
  • The cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for the different queue numbers of the i-th QoS.
  • The queue management unit is configured to store the packet in the queue corresponding to the i-th queue number; the queue management unit sets a different queue for each queue number of the i-th QoS.
  • The scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the i-th queue number according to a preset rule, and is further configured to send the packet to the cache management unit, so that the cache management unit determines whether to discard the packet according to a comparison between the buffer space currently occupied by the (i+1)-th queue number of the packet and the buffer space pre-configured for the (i+1)-th queue number.
  • Optionally, N is 2 and the QoS multiplexing apparatus performs two QoS operations, namely e8 scheduling and T-cont scheduling.
  • The queue mapping unit is configured to assign two queue numbers to the received packet, namely an e8-scheduling queue number and a T-cont-scheduling queue number.
  • The cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the e8-scheduling queue number of the packet and the buffer space pre-configured for the e8-scheduling queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for the different e8-scheduling queue numbers.
  • The queue management unit is configured to store the packet in the queue corresponding to the e8-scheduling queue number; the queue management unit sets a different queue for each e8-scheduling queue number.
  • The scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the e8-scheduling queue number according to the preset rule, and to send the packet to the cache management unit.
  • The cache management unit is further configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the T-cont-scheduling queue number of the packet and the buffer space pre-configured for the T-cont-scheduling queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for the different T-cont-scheduling queue numbers.
  • The queue management unit is further configured to store the packet in the queue corresponding to the T-cont-scheduling queue number; the queue management unit sets a different queue for each T-cont-scheduling queue number.
  • The scheduling and shaping unit is further configured to obtain the packet from the queue corresponding to the T-cont-scheduling queue number according to the preset rule.
  • the preset rule is WFQ, DRR, SP, or RR.
  • Optionally, the queue mapping unit is configured to add the N queue numbers assigned to the packet to the descriptor of the packet, and to send the packet including the descriptor, or the descriptor alone, to the cache management unit.
  • Optionally, the e8 scheduling includes eight different queue numbers and the T-cont scheduling includes 256 different queue numbers; the queue mapping unit is configured to assign the packet an e8-scheduling queue number from the eight different queue numbers and a T-cont-scheduling queue number from the 256 different queue numbers, according to the sending port of the packet.
  • When performing processing, the queue mapping unit, the cache management unit, the queue management unit, and the scheduling and shaping unit may be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA).
  • Embodiments of the present invention also provide a computer storage medium in which computer executable instructions are stored, the computer executable instructions being configured to perform the QoS multiplexing method described above.
  • An embodiment of the present invention provides a QoS multiplexing method and apparatus, where the QoS multiplexing method includes: assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet; determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number; if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number; obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and, when i is less than N, incrementing i by 1 and repeating the above steps until i equals N, at which point the packet is output.
  • Compared with the prior art, when a communication network implements QoS it does not need to deploy multiple sets of QoS devices each executing one QoS in turn; instead, the received packet is assigned N queue numbers corresponding to the N QoS operations, which are then executed cyclically, one QoS per cycle, so that a single QoS multiplexing device implements multiple QoS operations, saving the logic resources needed to implement QoS and reducing the complexity of the QoS chip.
  • FIG. 1 is a flowchart of a QoS multiplexing method according to an embodiment of the present invention
  • FIG. 2 is a flowchart of another QoS multiplexing method according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a QoS multiplexing apparatus according to an embodiment of the present disclosure
  • FIG. 4 is an interaction diagram of a QoS multiplexing device according to an embodiment of the present invention.
  • the embodiment of the invention provides a QoS multiplexing method. As shown in FIG. 1 , the method includes:
  • Step 101 Assign N queue numbers to the received packets, and respectively correspond to N QoSs performed on the packets.
  • the i-th queue number corresponds to the i-th QoS, and the i is an integer greater than 0 and less than or equal to N.
  • In operation, a communication network usually needs to schedule the quality of service of a packet in several respects, that is, to perform N QoS operations on the received packet, each QoS representing one aspect of quality-of-service scheduling. Commonly used QoS includes scheduling for users of different priorities set according to their used bandwidth, or scheduling for the different services currently used by different users; other QoS operations may also be involved in practice, which is not limited in the embodiments of the present invention.
  • For example, two QoS operations, e8 scheduling and T-cont scheduling, may be performed on the packet. After the packet is received, its descriptor is first obtained, and the e8-scheduling queue number and the T-cont-scheduling queue number are assigned to the packet according to the port information in the descriptor. Since the e8 scheduling has eight queue numbers in total, the e8-scheduling queue number can be selected for the packet from those eight queue numbers according to the port information; the T-cont scheduling has 256 queue numbers in total, so the T-cont-scheduling queue number can be selected for the packet from those 256 queue numbers according to the port information.
  • Step 102 Determine whether to discard the packet according to a comparison between a cache space that has been occupied by the i-th queue number of the packet and a cache space that is configured in advance for the i-th queue number.
  • For example, different buffer spaces may be pre-configured for the different queue numbers of the i-th QoS. Then, according to the i-th queue number of the packet, the buffer space currently occupied by that queue number is compared with the buffer space pre-configured for it; if the occupied space is greater than or equal to the pre-configured space, the packet is discarded, for example by applying a random-drop or tail-drop policy to that buffer space.
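  • A minimal sketch of that comparison is shown below, assuming per-queue-number byte counters; whether the check is made in bytes or packets, and exactly how a random-drop policy ramps up once the limit is approached, are implementation choices not fixed by the patent.

```python
import random

def should_drop(occupied_bytes, configured_bytes, policy="tail"):
    """Return True if the packet must be discarded for this queue number.

    occupied_bytes  : buffer space already occupied by the i-th queue number
    configured_bytes: buffer space pre-configured for the i-th queue number
    policy          : "tail" drops once the buffer is full; "random" starts
                      dropping probabilistically as the buffer fills
                      (the 80% ramp-up threshold is an illustrative choice).
    """
    if policy == "tail":
        return occupied_bytes >= configured_bytes
    if policy == "random":
        fill = occupied_bytes / configured_bytes
        return fill >= 1.0 or random.random() < max(0.0, fill - 0.8) * 5
    raise ValueError("unknown policy")
```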
  • Step 103 If the packet is not discarded, the packet is stored in a queue corresponding to the i-th queue number.
  • different queues may be set in advance for different queue numbers of the i-th QoS, and then the packets are stored in the queue corresponding to the i-th queue number according to the i-th queue number of the packet.
  • For example, since the e8 scheduling includes eight different queue numbers, a different queue can be set for each of the eight queue numbers, i.e. one queue per queue number, and the packet is then stored in the queue corresponding to its e8-scheduling queue number.
  • Step 104 Obtain the packet from a queue corresponding to the i-th queue number according to a preset rule.
  • the preset rule may be any one of the WFQ, the DRR, the SP, or the RR.
  • WFQ, DRR, SP, and RR are existing scheduling algorithms and are not described in detail in the embodiments of the present invention.
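  • For readers unfamiliar with these algorithms, the sketch below shows one of the named options, Deficit Round Robin, in a deliberately simplified form (quantum values and queue contents are invented for the example); WFQ, SP, and RR would slot into the same place in the method.

```python
from collections import deque

def drr_pick(queues, deficits, quanta):
    """Simplified Deficit Round Robin over a dict of queues of (packet, size).

    Each visit adds the queue's quantum to its deficit and serves the head
    packet if the deficit covers its size. Returns the first packet served.
    """
    for name, queue in queues.items():
        if not queue:
            continue
        deficits[name] += quanta[name]
        if queue[0][1] <= deficits[name]:
            packet, size = queue.popleft()
            deficits[name] -= size
            return packet
    return None

queues = {"q0": deque([("p1", 300), ("p2", 700)]), "q1": deque([("p3", 1500)])}
deficits = {"q0": 0, "q1": 0}
quanta = {"q0": 500, "q1": 1500}
print(drr_pick(queues, deficits, quanta))   # 'p1' (300 <= 500)
```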
  • Step 105 When i is less than N, i is incremented by 1 and steps 102 to 104 are repeated; when i equals N, the packet is output.
  • Since N QoS operations need to be performed on the packet, after each QoS is executed it is first checked whether all N have been performed, i.e. whether i is greater than or equal to N; when i is less than N, the next QoS is executed. For example, when N is 2, after the e8 scheduling has been executed i = 1 < 2, so i becomes 2 and steps 102 to 104 are executed again, this time as T-cont scheduling. Once all N QoS operations have been completed, the packet is output and sent to the following unit for subsequent processing, which is not described further in the embodiments of the present invention.
  • In this way, when the communication network implements multiple QoS operations it does not need to deploy multiple sets of QoS devices each executing one QoS in turn; instead, a single QoS multiplexing device implements multiple QoS operations by cyclic execution, which saves the logic resources needed to implement QoS and reduces the complexity of the QoS chip.
  • the N is 2, and the two QoSs are e8 scheduling and T-cont scheduling, respectively.
  • Optionally, after the N queue numbers are assigned to the packet, they may be added to the descriptor of the packet, so that the queue number corresponding to each QoS can be conveniently retrieved in subsequent processing.
  • For example, if two QoS operations, e8 scheduling and T-cont scheduling, are performed on the packet, the e8 scheduling includes eight different queue numbers and the T-cont scheduling includes 256 different queue numbers; therefore, when the two queue numbers are assigned to the received packet, the e8-scheduling queue number can be selected from the eight different queue numbers and the T-cont-scheduling queue number from the 256 different queue numbers, according to the sending port of the packet.
  • In summary, the embodiment of the present invention provides a QoS multiplexing method that includes: assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet, where the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N; determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number; if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number; obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and, when i is less than N, incrementing i by 1 and repeating the above steps until i equals N, at which point the packet is output.
  • Compared with the prior art, when the communication network implements QoS it does not need to deploy multiple sets of QoS devices each executing one QoS in turn; instead, multiple QoS operations can be implemented with a single QoS multiplexing device by cyclic execution, which saves the logic resources needed to implement QoS and reduces the complexity of the QoS chip.
  • the QoS multiplexing method can implement e8 scheduling and T-cont scheduling by the following steps:
  • Step 201 Assign two queue numbers to the received message, which are respectively the queue number of the e8 schedule and the queue number of the T-cont schedule.
  • After the packet is received, its descriptor is first obtained, and the e8-scheduling queue number and the T-cont-scheduling queue number are assigned to the packet according to the port information in the descriptor; the port information indicates the sending port of the packet.
  • Specifically, the e8 scheduling includes eight different queue numbers, and a larger queue number indicates a higher priority. At initialization the QoS multiplexing device sets a different priority for each sending port, so after the packet is received its e8-scheduling queue number can be determined from the port information. Similarly, the T-cont scheduling includes 32 T-conts, each containing eight sequences, so a T-cont-scheduling queue number consists of two parts: the first part indicates the T-cont to which the packet corresponds and the second part indicates the sequence within that T-cont, giving 32 × 8 = 256 queue numbers in total, with different queue numbers corresponding to different processing times. At initialization a different processing time is set for each sending port, so after the packet is received its T-cont-scheduling queue number can likewise be determined from the port information.
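  • The two-part T-cont queue number described above can be pictured as a 3-bit sequence field packed under a 5-bit T-cont field (32 × 8 = 256 values). The helpers below only illustrate that arithmetic; the patent does not prescribe a particular bit layout.

```python
def tcont_queue_number(tcont_index, sequence):
    """Combine a T-cont index (0..31) and a sequence (0..7) into 0..255."""
    assert 0 <= tcont_index < 32 and 0 <= sequence < 8
    return tcont_index * 8 + sequence        # equivalently (tcont_index << 3) | sequence

def split_tcont_queue_number(queue_number):
    """Recover (T-cont index, sequence) -- the 'first part' and 'second part'."""
    return queue_number // 8, queue_number % 8

assert tcont_queue_number(31, 7) == 255
assert split_tcont_queue_number(102) == (12, 6)
```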
  • Step 202 Determine whether to discard the packet according to a comparison between the buffer space currently occupied by the e8-scheduling queue number of the packet and the buffer space pre-configured for that queue number.
  • After the e8-scheduling queue number and the T-cont-scheduling queue number have been assigned to the packet, they may be added to the descriptor of the packet to facilitate subsequent processing of the packet.
  • Specifically, different buffer spaces may be allocated in advance for the eight different e8-scheduling queue numbers: the larger the queue number, the larger the pre-configured buffer space, i.e. higher-priority packets correspond to a larger buffer space and lower-priority packets to a smaller one. When the buffer space currently occupied by the packet's e8-scheduling queue number is greater than or equal to the buffer space pre-configured for that queue number, a tail-drop or random-drop policy is applied to the buffer space. In this way, because high-priority packets are given a large pre-configured buffer space they are rarely discarded, while low-priority packets are given a small pre-configured buffer space and are therefore discarded first when the network is delayed or congested, ensuring that the network can recover.
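  • One way to realize "larger queue number, larger pre-configured buffer" is a simple table built at initialization, as sketched below; the base size and step are invented numbers, chosen only to show the shape of such a configuration.

```python
def build_e8_buffer_limits(base_bytes=16 * 1024, step_bytes=16 * 1024):
    """Pre-configure a buffer limit per e8 queue number (0..7).

    Higher queue numbers (higher priority) get more buffer space, so under
    congestion low-priority traffic hits its limit and is dropped first.
    """
    return {qn: base_bytes + qn * step_bytes for qn in range(8)}

limits = build_e8_buffer_limits()
assert limits[7] > limits[0]      # highest priority gets the largest buffer
```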
  • Step 203 If the packet is not discarded, the packet is stored in a queue corresponding to the queue number of the e8 schedule.
  • The packets can be read in the order in which they were buffered, i.e. a packet buffered earlier is read before a packet buffered later; the read order is independent of the size of the pre-configured buffer space.
  • Since the e8 scheduling includes eight different queue numbers, a different queue can be set in advance for each of the eight queue numbers, i.e. one queue per queue number, and the packet is then stored in the queue corresponding to its e8-scheduling queue number.
  • Step 204 Obtain the packet from a queue corresponding to the queue number scheduled by the e8 according to a preset rule.
  • the preset rule may be any one of the WFQ, the DRR, the SP, or the RR.
  • the WFQ, the DRR, the SP, and the RR are all in the prior art, and are not described herein again.
  • Step 205 Determine whether to discard the packet according to a comparison between a buffer space that is currently occupied by the T-cont scheduled queue number of the packet and a cache space configured for the queue number scheduled by the T-cont.
  • Specifically, different buffer spaces may be allocated in advance for the 256 different T-cont-scheduling queue numbers; when the buffer space currently occupied by the packet's T-cont-scheduling queue number is greater than or equal to the buffer space pre-configured for that queue number, a tail-drop or random-drop policy can be applied to that buffer space, so that when the network is delayed or congested the packets buffered in a full buffer space are discarded first and the network can recover.
  • Step 206 If the packet is not discarded, the packet is stored in a queue corresponding to the queue number of the T-cont scheduling.
  • Since the T-cont scheduling includes 256 different queue numbers, a different queue can be set in advance for each of the 256 queue numbers, i.e. one queue per queue number. Specifically, 32 queue codes A are first set for the 32 T-conts, and 8 different queue codes B are set under each queue code A. The queue code A is determined from the first part of the packet's T-cont-scheduling queue number and the queue code B from its second part; the combination of queue code A and queue code B is the queue code corresponding to the T-cont-scheduling queue number, and the packet can then be stored in the queue corresponding to that queue code.
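  • A compact way to organize those 256 queues is a two-level structure keyed by queue code A and queue code B, as in the sketch below; the data structure is our illustration, and hardware would typically use linked lists in a shared packet memory instead.

```python
from collections import deque

# 32 queue codes A, each holding 8 queue codes B -> 256 distinct queues.
tcont_queues = {a: {b: deque() for b in range(8)} for a in range(32)}

def store_by_tcont(packet, tcont_queue_number):
    """Split the T-cont queue number into code A (first part) and code B
    (second part) and append the packet to the corresponding queue."""
    code_a, code_b = tcont_queue_number // 8, tcont_queue_number % 8
    tcont_queues[code_a][code_b].append(packet)
    return code_a, code_b

assert store_by_tcont("pkt", 102) == (12, 6)
```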
  • Step 207 Acquire the packet from a queue corresponding to the queue number scheduled by the T-cont according to the preset rule.
  • the preset rule may be any one of the WFQ, the DRR, the SP, or the RR.
  • the WFQ, the DRR, the SP, and the RR are all in the prior art, and are not described herein again.
  • Step 208 Output the message.
  • After the packet has been obtained from the queue corresponding to the T-cont-scheduling queue number according to the above rule, the QoS for both the e8 scheduling and the T-cont scheduling of the packet is complete, so the packet can be output, i.e. sent to the following unit for subsequent processing, which is not described further in the embodiments of the present invention.
  • Steps 205 to 207 are a cyclic repetition of steps 202 to 204: steps 202 to 204 correspond to the e8 scheduling and steps 205 to 207 to the T-cont scheduling. Thus, when the communication network implements QoS, it does not need to deploy multiple sets of QoS devices each executing one QoS in turn; multiple QoS operations are implemented with a single QoS multiplexing device by cyclic execution, which saves the logic resources needed to implement QoS and reduces the complexity of the QoS chip.
  • the embodiment of the present invention provides a QoS multiplexing device 30, the QoS multiplexing device 30 is capable of performing N QoS, and the N is an integer greater than 1.
  • the QoS multiplexing device 30 includes: Queue mapping unit 301, cache management unit 302, queue management unit 303, scheduling and shaping unit 304.
  • the queue mapping unit 301 is configured to allocate N queue numbers to the received packets, respectively corresponding to the N QoSs, where the ith queue number corresponds to the ith QoS, and the i is greater than 0, less than or An integer equal to N.
  • The cache management unit 302 is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for the different queue numbers of the i-th QoS.
  • the queue management unit 303 is configured to store the message to a queue corresponding to the i-th queue number; the queue management unit 303 sets a different queue for different queue numbers of the i-th QoS.
  • The scheduling and shaping unit 304 is configured to obtain the packet from the queue corresponding to the i-th queue number according to a preset rule, and is further configured to send the packet to the cache management unit 302, so that the cache management unit 302 determines whether to discard the packet according to a comparison between the buffer space currently occupied by the (i+1)-th queue number of the packet and the buffer space pre-configured for the (i+1)-th queue number.
  • In operation, a communication network usually needs to schedule the quality of service of a packet in several respects, that is, N QoS operations need to be performed, each QoS representing one aspect of quality-of-service scheduling. Commonly used QoS includes scheduling for users of different priorities set according to their used bandwidth, or scheduling for the different services currently used by different users; other QoS operations may also be involved in practice, which is not limited in the embodiments of the present invention.
  • In this way, when the communication network implements multiple QoS operations it does not need to deploy multiple sets of QoS devices; instead, after the queue mapping unit of the device assigns a queue number for each of the N QoS operations, one QoS is executed per cycle, and this cyclic operation multiplexes the cache management unit, the queue management unit, and the scheduling and shaping unit, so that multiple QoS operations are implemented with only one set of QoS devices, saving the logic resources needed to implement QoS and reducing the complexity of the QoS chip.
  • the QoS multiplexing device 10 is capable of performing two QoSs, e8 scheduling and T-cont scheduling, respectively. As shown in FIG. 4, the QoS multiplexing apparatus 10 can implement e8 scheduling and T-cont scheduling by the following steps:
  • Step 401 The queue mapping unit 301 allocates two queue numbers for the received message, which are respectively the queue number of the e8 scheduling and the queue number of the T-cont scheduling.
  • After receiving the packet, the queue mapping unit 301 first obtains the descriptor of the packet and assigns the e8-scheduling queue number and the T-cont-scheduling queue number to the packet according to the port information in the descriptor; the port information indicates the sending port of the packet.
  • the e8 scheduling includes eight different queue numbers. The larger the queue number, the higher the priority.
  • At initialization the QoS multiplexing device 30 sets a different priority for each sending port, so after the packet is received its e8-scheduling queue number can be determined from the port information.
  • Similarly, the QoS multiplexing device 30 sets a different processing time for each sending port, so after the packet is received its T-cont-scheduling queue number can be determined from the port information.
  • Step 402 The queue mapping unit 301 sends the message to the cache management unit 302.
  • After the e8-scheduling queue number and the T-cont-scheduling queue number have been assigned to the packet, they may be added to the descriptor of the packet, and then either the packet including the modified descriptor or the modified descriptor alone is sent to the cache management unit 302; in practice this can be chosen according to the specific situation, which is not limited in the embodiments of the present invention.
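  • The descriptor hand-off between units can be pictured as a small record that travels with (or instead of) the packet; the field names below are ours, chosen to mirror the text, and a real hardware descriptor would be a packed bit field rather than a Python object.

```python
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    """Per-packet metadata passed from the queue mapping unit onwards."""
    sending_port: int
    e8_queue_number: int        # added by the queue mapping unit (0..7)
    tcont_queue_number: int     # added by the queue mapping unit (0..255)
    length_bytes: int
    buffer_address: int = 0     # where the payload sits in packet memory

desc = PacketDescriptor(sending_port=1, e8_queue_number=5,
                        tcont_queue_number=102, length_bytes=1500)
```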
  • Step 403 The cache management unit 302 determines whether to discard the packet according to a comparison between the buffer space currently occupied by the e8-scheduling queue number of the packet and the buffer space pre-configured for that queue number.
  • After receiving the packet including the modified descriptor, or the modified descriptor alone, the cache management unit 302 first obtains the e8-scheduling queue number and the T-cont-scheduling queue number from the descriptor, and then determines whether to discard the packet according to a comparison between the buffer space currently occupied by the e8-scheduling queue number and the buffer space pre-configured for that queue number.
  • Specifically, taking the case where the cache management unit 302 receives the packet including the modified descriptor as an example, the cache management unit 302 configures different buffer spaces for the eight different e8-scheduling queue numbers: the larger the queue number, the larger the pre-configured buffer space, i.e. higher-priority packets correspond to a larger buffer space and lower-priority packets to a smaller one. When the buffer space currently occupied by the packet's e8-scheduling queue number is greater than or equal to the buffer space pre-configured for that queue number, a tail-drop or random-drop policy is applied to the buffer space. In this way, because high-priority packets are given a large pre-configured buffer space they are rarely discarded, while low-priority packets are given a small pre-configured buffer space and are therefore discarded first when the network is delayed or congested, ensuring that the network can recover.
  • Step 404 If the packet is not discarded, the cache management unit 302 sends the packet to the queue management unit 303.
  • The cache management unit 302 can read packets in the order in which they were buffered, i.e. a packet buffered earlier is read before a packet buffered later, and the read order is independent of the size of the pre-configured buffer space. After reading the packet according to its buffering time, the cache management unit 302 sends the packet to the queue management unit 303.
  • Step 405 The queue management unit 303 stores the message in a queue corresponding to the queue number of the e8 schedule.
  • Since the e8 scheduling includes eight different queue numbers, the queue management unit 303 sets a different queue for each of the eight queue numbers, i.e. one queue per queue number. After receiving the packet sent by the cache management unit 302, it stores the packet in the queue corresponding to the packet's e8-scheduling queue number and then notifies the scheduling and shaping unit 304 of the number of packets stored in each e8 queue.
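  • The "notify the scheduler of the per-queue packet count" step can be sketched as below; the callback style is an assumption made for readability, whereas a chip would more likely expose occupancy counters that the scheduling and shaping unit reads directly.

```python
from collections import deque

class QueueManagementUnit:
    """Stores packets per e8 queue number and reports queue depths."""
    def __init__(self, notify_scheduler):
        self.queues = {qn: deque() for qn in range(8)}
        self.notify_scheduler = notify_scheduler   # callable(queue_number, depth)

    def store(self, packet, e8_queue_number):
        self.queues[e8_queue_number].append(packet)
        self.notify_scheduler(e8_queue_number, len(self.queues[e8_queue_number]))

depths = {}
qmu = QueueManagementUnit(lambda qn, depth: depths.__setitem__(qn, depth))
qmu.store("pkt", 5)
assert depths[5] == 1
```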
  • Step 406 The scheduling and shaping unit 304 obtains the packet from the queue corresponding to the e8-scheduling queue number according to a preset rule.
  • the preset rule may be any one of the WFQ, the DRR, the SP, or the RR.
  • the WFQ, the DRR, the SP, and the RR are all in the prior art, and are not described herein again.
  • Step 407 the scheduling and shaping unit 304 sends the message to the cache management unit 302.
  • The scheduling and shaping unit 304 obtains the packet from the queue corresponding to the e8-scheduling queue number according to the above rule and then sends the packet to the cache management unit 302, so that the cache management unit 302 performs the buffering decision according to the T-cont-scheduling queue number.
  • Step 408 The cache management unit 302 determines whether to discard the packet according to a comparison between the buffer space currently occupied by the T-cont-scheduling queue number of the packet and the buffer space pre-configured for that queue number.
  • After receiving the packet sent by the scheduling and shaping unit 304, the cache management unit 302 first obtains the descriptor of the packet and then determines whether to discard the packet according to a comparison between the buffer space currently occupied by the T-cont-scheduling queue number in the descriptor and the buffer space pre-configured for that queue number.
  • Specifically, the cache management unit 302 may allocate different buffer spaces in advance for the 256 different T-cont-scheduling queue numbers; when the buffer space currently occupied by the T-cont-scheduling queue number is greater than or equal to the buffer space pre-configured for that queue number, a tail-drop or random-drop policy may be applied to that buffer space. In this way, when the network is delayed or congested, the packets buffered in a full buffer space are discarded first, so that the network can recover.
  • Step 409 If the packet is not discarded, the cache management unit 302 sends the packet to the queue management unit 303.
  • Step 410 The queue management unit 303 stores the packet in a queue corresponding to the queue number of the T-cont schedule.
  • Since the T-cont scheduling includes 256 different queue numbers, the queue management unit 303 can set a different queue in advance for each of the 256 queue numbers, i.e. one queue per queue number. Specifically, the queue management unit 303 first sets 32 queue codes A for the 32 T-conts and sets 8 different queue codes B under each queue code A. It determines the queue code A from the first part of the packet's T-cont-scheduling queue number and the queue code B from the second part; the combination of queue code A and queue code B is the queue code corresponding to the T-cont-scheduling queue number, and the packet can be stored in the queue corresponding to that queue code. The queue management unit 303 then notifies the scheduling and shaping unit 304 of the number of packets stored in each T-cont queue.
  • Step 411 The scheduling and shaping unit 304 obtains the packet from the queue corresponding to the queue number scheduled by the T-cont according to the preset rule.
  • the preset rule may be any one of the WFQ, the DRR, the SP, or the RR.
  • the WFQ, the DRR, the SP, and the RR are all in the prior art, and are not described herein again.
  • Step 412 the scheduling and shaping unit 304 outputs the message.
  • After the scheduling and shaping unit 304 has obtained the packet according to the above rule, the QoS for both the e8 and the T-cont aspects of the packet is complete, so the scheduling and shaping unit 304 can output the packet, i.e. send the obtained packet to the following unit for subsequent processing, which is not described further in the embodiments of the present invention.
  • In practical applications, the queue mapping unit 301, the cache management unit 302, the queue management unit 303, and the scheduling and shaping unit 304 may each be implemented by a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) located in the QoS multiplexing device.
  • The embodiment of the present invention thus provides a QoS multiplexing device capable of performing N QoS operations, N being an integer greater than 1. The device includes a queue mapping unit, a cache management unit, a queue management unit, and a scheduling and shaping unit. The queue mapping unit is configured to assign N queue numbers to a received packet, corresponding respectively to the N QoS operations, where the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N. The cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for the different queue numbers of the i-th QoS. The queue management unit is configured to store the packet in the queue corresponding to the i-th queue number and sets a different queue for each queue number of the i-th QoS. The scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the i-th queue number according to a preset rule and to send the packet back to the cache management unit, so that the cache management unit buffers the packet according to the (i+1)-th queue number of the packet.
  • Compared with the prior art, when the communication network implements multiple QoS operations it does not need to deploy multiple sets of QoS devices; after the queue mapping unit assigns a queue number for each of the N QoS operations, one QoS is executed per cycle, and this cyclic operation multiplexes the cache management unit, the queue management unit, and the scheduling and shaping unit, so that multiple QoS operations are implemented with a single set of QoS multiplexing devices, saving the logic resources needed to implement QoS and reducing the complexity of the QoS chip.
  • Embodiments of the present invention also provide a computer storage medium in which computer executable instructions are stored, the computer executable instructions being configured to perform the QoS multiplexing method described above.
  • Embodiments of the present invention provide a QoS multiplexing solution in which the QoS multiplexing method includes: assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet; determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number; if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number; obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and, when i is less than N, incrementing i by 1 and repeating the steps from determining whether to discard the packet to obtaining the packet, until i equals N, at which point the packet is output.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present invention disclose a quality of service (QoS) multiplexing method, the method including: assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet, where the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N; determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number; if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number; obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and, when i is less than N, incrementing i by 1 and repeating the steps from determining whether to discard the packet to obtaining the packet, until i equals N, at which point the packet is output. Embodiments of the present invention further disclose a QoS multiplexing apparatus and a computer storage medium.

Description

一种服务质量复用方法及装置、计算机存储介质 技术领域
本发明涉及通信领域的服务质量(Quality of Service,QoS)调度技术,尤其涉及一种QoS复用方法及装置、计算机存储介质。
背景技术
由于使用通信网络的用户越来越多,对于通信资源的竞争也越来越激烈。为满足用户对不同应用不同服务质量的要求,需要通信网络能根据用户的要求分配和调度资源,对不同的数据流提供不同的服务,所以QoS应运而生。
通常,采用QoS进行两个方面的服务质量监控:第一方面,在通信网络中,总的带宽是有限的,所以在初始化时,网络可以与不同的用户协定不同的使用带宽,用户根据协定的使用带宽的大小进行付费,网络根据不同用户的使用带宽为用户设定优先级,使用带宽越大的用户优先级越高。当通信网络出现网络延迟或拥塞等问题时,优先处理优先级高的用户发送的报文,并尽量保证及时处理优先级低的用户发送的报文,使优先级高的用户的合法使用带宽得到保证,同时最大程度的保证优先级低的用户的QoS。
第二方面,由于通信网络中不同的用户使用的业务不同,每个业务对网络延时的要求是不同的,例如,对普通上网用户而言,网络页面的响应时间可以较长,但是,对使用视频通信或网络电话等业务的用户而言,若网络延时较长,则会影响通话质量或画面质量。因此,当通信网络出现网络延迟或拥塞等问题时,优先处理使用通话类业务的用户发送的报文,并尽量保证及时处理普通上网用户发送的报文,使得在不影响视频通信或网 络电话等业务的通话质量或画面质量的基础上,尽量保证普通上网的QoS。
为了实现上述两种QoS,现有通信网络中通常需要设置两套QoS装置,分别对接收到的报文进行第一方面的QoS的监控和第二方面的QoS的监控,因此,需要的逻辑资源较大,QoS芯片的设计复杂性也相应的增加。
发明内容
为解决上述技术问题,本发明实施例期望提供一种QoS复用方法及装置、计算机存储介质,能够节约实现QoS的逻辑资源,降低QoS芯片的复杂性。
本发明实施例的技术方案是这样实现的:
一方面,本发明实施例提供一种QoS复用方法,所述方法包括:
为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS,其中,第i个队列号对应第i个QoS,所述i为大于0小于或等于N的整数;
根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文;
若不丢弃所述报文,将所述报文存储到第i个队列号对应的队列;
根据预设规则从所述第i个的队列号对应的队列中获取所述报文;
i小于N时,i加1且重复执行上述确定报文是否丢弃到获取所述报文的步骤,直至i等于N时,输出所述报文。
可选的,所述N为2,所述两个QoS,分别为e8调度和T-cont调度;
所述为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS包括:
为接收到的所述报文分配两个队列号,分别为e8调度的队列号和T-cont调度的队列号。
可选的,所述预设规则为加权公平排队法(Weighted Fair Queuing, WFQ)、亏空轮循算法(Deficit Round Robin,DRR)、严格优先级(Subspace Pursuit,SP)、或者轮询调度算法(Round-Robin,RR)。
可选的,在所述为接收到的报文分配N个队列号之后,所述方法还包括:
将为所述报文分配的N个队列号添加在所述报文的描述符中。
可选的,所述e8调度包括八个不同的队列号,所述T-cont调度包括256个不同的队列号;
所述为接收到的所述报文分配两个队列号包括:
根据所述报文的发送端口从所述八个不同的队列号中为所述报文分配e8调度的队列号,从所述256个不同的队列号中为所述报文分配T-cont调度的队列号。
另一方面,本发明实施例提供一种QoS复用装置,所述QoS复用装置能执行N个QoS,所述N为大于1的整数;所述QoS复用装置包括:队列映射单元、缓存管理单元、队列管理单元、调度和整形单元;
所述队列映射单元,配置为给接收到的报文分配N个队列号,分别对应所述N个QoS,其中,第i个队列号对应第i个QoS,所述i为大于0,小于或等于N的整数;
所述缓存管理单元,配置为根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文,并当不丢弃所述报文时,将所述报文发送给队列管理单元;所述缓存管理单元为所述第i个QoS的不同队列号,预先配置不同的缓存空间;
所述队列管理单元,配置为将所述报文存储到所述第i个的队列号对应的队列;所述队列管理单元为所述第i个QoS的不同队列号,设置不同的队列;
所述调度和整形单元,配置为根据预设规则从所述第i个的队列号对应的队列中获取所述报文,还配置为将所述报文发送给所述缓存管理单元,使所述缓存管理单元根据所述报文的第i+1个队列号当前已经占有的缓存空间与预先为所述第i+1个队列号配置的缓存空间的比较,确定是否丢弃所述报文。
可选的,所述N为2,所述QoS复用装置执行两个QoS,分别为e8调度和T-cont调度;
所述队列映射单元配置为给接收到的报文分配两个队列号,分别为e8调度的队列号和T-cont调度的队列号;
所述缓存管理单元配置为根据所述报文的e8调度的队列号当前已经占有的缓存空间与预先为所述e8调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文,并在不丢弃所述报文时,将所述报文发送给所述队列管理单元;所述缓存管理单元为所述e8调度的不同队列号,预先配置不同的缓存空间;
所述队列管理单元配置为将所述报文存储到所述e8调度的队列号对应的队列;所述队列管理单元为所e8调度的不同队列号,设置不同的队列;
所述调度和整形单元配置为根据所述预设规则从所述e8调度的队列号对应的队列中获取所述报文,并将所述报文发送给所述缓存管理单元;
所述缓存管理单元配置为根据所述报文的T-cont调度的队列号当前已经占有的缓存空间与预先为所述T-cont调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文,并当不丢弃所述报文时,将报文发送给所述队列管理单元;所述缓存管理单元为所述T-cont调度的不同队列号,预先配置不同的缓存空间;
所述队列管理单元配置为将所述报文存储到所述T-cont调度的队列号对应的队列;所述队列管理单元为所T-cont调度的不同队列号,设置不同 的队列;
所述调度和整形单元配置为根据所述预设规则从所述T-cont调度的队列号对应的队列中获取所述报文。
可选的,所述预设规则为WFQ、DRR、SP、或者RR。
可选的,所述队列映射单元配置为将为所述报文分配的N个队列号添加在所述报文的描述符中,并将包括所述描述符的报文或者所述描述符发送给所述缓存管理单元。
可选的,所述e8调度包括八个不同的队列号,所述T-cont调度包括256个不同的队列号;
所述队列映射单元配置为根据所述报文的发送端口从所述八个不同的队列号中为所述报文分配e8调度的队列号,从所述256个不同的队列号中为所述报文分配T-cont调度的队列号。
所述队列映射单元、所述缓存管理单元、所述队列管理单元、所述调度和整形单元在执行处理时,可以采用中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Singnal Processor)或可编程逻辑阵列(FPGA,Field-Programmable Gate Array)实现。
本发明实施例还提供一种计算机存储介质,其中存储有计算机可执行指令,该计算机可执行指令配置执行上述QoS复用方法。
本发明实施例提供了一种QoS复用方法及装置,所述QoS复用方法包括:为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS,根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文;若不丢弃所述报文,将所述报文存储到第i个队列号对应的队列;根据预设规则从所述第i个的队列号对应的队列中获取所述报文;i小于N时,i加1且重复执行上述确定报文是否丢弃到获取所述报文的步骤,直至i等于N时,输出 所述报文。相较于现有技术,在通信网络实现QoS时,不需要设置多套QoS装置,依次执行每一个QoS,而是分别为接收到的报文分配与N个QoS对应的N个队列号,然后通过循环执行的方法,每循环一次执行一个QoS,因此采用一套QoS复用装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
附图说明
图1为本发明实施例提供的一种QoS复用方法的流程图;
图2为本发明实施例提供的另一种QoS复用方法的流程图;
图3为本发明实施例提供的一种QoS复用装置结构示意图;
图4为本发明实施例提供的一种QoS复用装置的交互图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述。
本发明实施例提供一种QoS复用方法,如图1所示,所述方法包括:
步骤101、为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS。
其中,第i个队列号对应第i个QoS,所述i为大于0,小于或等于N的整数。通常通信网络在运作时需要对一个报文进行多个方面的服务质量调度,即对接收到的报文执行N个QoS,每个QoS表示一个方面的服务质量调度,常用的QoS涉及根据使用带宽设定的不同优先级的用户的服务质量调度,或者针对不同用户当前使用的不同业务的服务质量调度,实际应用中还可以包括其他QoS,本发明实施例再次不做限定。
示例的,对报文执行两个QoS,分别为e8调度和T-cont调度,在接收到报文之后,首先获取报文的描述符,根据描述符中的端口信息,为所述 报文分配e8调度的队列号和T-cont调度的队列号。由于e8调度共有八个队列号,因此可以根据描述符中的端口信息,从所述八个队列号中为报文分配e8调度的队列号;T-cont调度共有256个队列号,因此,可以根据描述符中的端口信息,从所述256个队列号中为报文分配T-cont调度的队列号。
步骤102、根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文。
示例的,可以首先为第i个QoS的不同队列号,预先配置不同的缓存空间,然后,根据报文的第i个队列号,比较当前第i个队列号已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的大小,若当前第i个队列号已经占有的缓存空间大于或等于预先为所述第i个队列号配置的缓存空间,则对所述报文进行丢弃,例如,可以对所述缓存空间执行随机丢弃或尾部丢弃策略。
步骤103、若不丢弃所述报文,将所述报文存储到第i个队列号对应的队列。
示例的,可以预先为第i个QoS的不同的队列号设置不同的队列,然后根据报文的第i个队列号,将所述报文存储到第i个队列号对应的队列。
例如,由于e8调度包括八个不同的队列号,所以可以为8个不同的队列号设置不同的队列,即一个队列号对应一个队列。然后根据报文的e8调度的队列号将所述报文存储在对应的队列。
步骤104、根据预设规则从所述第i个的队列号对应的队列中获取所述报文。
所述预设规则可以为WFQ、DRR、SP、或者RR中的任意一种,其中WFQ,DRR,SP和RR均为现有技术,本发明实施例在此不做赘述。
步骤105、i小于N时,i加1且重复执行步骤102至104,直至i等于 N时,输出所述报文。
由于需要对所述报文执行N个QoS,所以在每执行完一个QoS之后,需要首先判断是否执行完N个QoS,即判断i是否大于或等于N,并且在i小于N时,继续执行下一个QoS,直到将N个QoS均执行完成后,将所述报文输出,发送给后级单元进行后续处理,本发明实施例再次不做赘述。
示例的,当N等于2时,在执行完e8调度之后,此时i=1<2,所以令i=i+1=1+1=2,继续执行“根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文”,此时i=2,即执行对所述报文的T-cont调度,所述报文的T-cont调度与e8调度的过程相同,可参考前述e8调度的描述,本发明实施例在此不做详述。直到执行完T-cont调度之后,由于i=2=N,说明所有的QoS均已经执行完,此时可以将所述报文输出给后级单元进行后续处理。
这样一来,通信网络实现多个QoS时,不需要设置多套QoS装置,依次执行每一个QoS,而是通过循环执行的方法,采用一套QoS复用装置来实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
可选的,所述N为2,所述两个QoS,分别为e8调度和T-cont调度;
所述为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS包括:为接收到的所述报文分配两个队列号,分别为e8调度的队列号和T-cont调度的队列号。
可选的,在为报文分配N个队列号之后,可以将所分配的N个队列号添加在所述报文的描述符中,以便于在后续处理中,可以较为方便的获取该报文的每个QoS对应的队列号。
示例的,若为报文分别执行2个QoS,分别为e8调度和T-cont调度,由于所述e8调度包括八个不同的队列号,所述T-cont调度包括256个不同的队列号;因此在为接收到的所述报文分配两个队列号时,可以根据所述 报文的发送端口从所述八个不同的队列号中为所述报文分配e8调度的队列号,从所述256个不同的队列号中为所述报文分配T-cont调度的队列号。
本发明实施例提供一种QoS复用方法,包括为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS,其中,第i个队列号对应第i个QoS,所述i为大于0小于或等于N的整数;根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文;若不丢弃所述报文,将所述报文存储到第i个队列号对应的队列;根据预设规则从所述第i个的队列号对应的队列中获取所述报文;i小于N时,i加1且重复执行上述确定报文是否丢弃到获取所述报文的步骤,直至i等于N时,输出所述报文。相较于现有技术,在通信网络实现QoS时,不需要设置多套QoS装置,依次执行每一个QoS,而是通过循环执行的方法,采用一套QoS复用装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
示例的,假设所述报文执行两个QoS,分别为e8调度和T-cont调度。如图2所示,所述QoS复用方法可以通过如下步骤实现e8调度和T-cont调度:
步骤201、为接收到的报文分配两个队列号,分别为e8调度的队列号和T-cont调度的队列号。
在接收到报文之后,首先获取报文的描述符,根据描述符中的端口信息,为所述报文分配e8调度的队列号和T-cont调度的队列号,所述端口信息指示了所述报文的发送端口。
具体的,e8调度包括八个不同的队列号,队列号越大,表示优先级越高,初始化时,QoS复用装置10为不同的发送端口设置不同的优先级,在接收到报文之后,根据端口信息即可确定该报文对应的e8调度的队列号。同理,T-cont调度包括32个T-cont,每个T-cont包括八个序列,即T-cont 调度的队列号包括两个部分,第一部分用于指示所述报文对应的T-cont,第二部分用于指示所述报文在对应的T-cont下对应的序列,因此T-cont调度包括32*8=256个队列号,其中,不同的队列号对应的处理时间不同。初始化时,为不同的发送端口设置不同的处理时间,所以在接收到报文之后,根据端口信息即可确定该报文对应的T-cont调度的队列号。
步骤202、根据所述报文的e8调度的队列号当前已经占有的缓存空间与预先为所述e8调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
在为报文分配e8调度的队列号和T-cont调度的队列号之后,可以将所述e8调度的队列号和T-cont调度的队列号添加在所述报文的描述符中,便于后续对所述报文的处理。
根据报文的描述符获取描述符中的e8调度的队列号和T-cont调度的队列号,然后根据e8调度的队列号当前已经占有的缓存空间与预先为所述e8调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
具体的,可以预先为e8调度的8个不同的队列号分配不同的缓存空间,队列号越大的预先配置的缓存空间越大,即优先级高的报文对应的缓存空间大,优先级低的报文对应的缓存空间小,当报文e8调度的队列号当前已经占有的缓存空间大于或等于预先为所述e8调度的队列号配置的缓存空间时,对缓存空间实行尾部丢弃策略或随机丢弃策略。这样一来,由于优先级高的报文预先配置的缓存空间大,使得优先级高的报文尽量不会被丢弃,优先级低的报文预先配置的缓存空间小,使得在网络延迟或阻塞时,首先丢弃优先级低的报文,保证网络能够恢复通畅。
步骤203、若不丢弃所述报文,将所述报文存储到所述e8调度的队列号对应的队列。
可以按照报文的缓存时间读取报文,即缓存较早的报文先读取,缓存 较晚的报文后读取,读取的顺序与预先配置的缓存空间的大小无关。
由于e8调度包括8个不同的队列号,可以预先为8个不同的队列号分别设置不同的队列,即一个队列号对应一个队列。然后根据报文的e8调度的队列号将所述报文存储在对应的队列。
步骤204、根据预设规则从所述e8调度的队列号对应的队列中获取所述报文。
所述预设规则可以为WFQ,DRR,SP或者RR中的任意一种,其中WFQ,DRR,SP和RR均为现有技术,本发明实施例在此不做赘述。
步骤205、根据所述报文的T-cont调度的队列号当前已经占有的缓存空间与预先为所述T-cont调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
从所述报文的描述符中获取所述报文的T-cont调度的队列号,然后根据T-cont调度的队列号当前已经占有的缓存空间与预先为所述T-cont调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
具体的,可以预先为T-cont调度的256个不同的队列号分配不同的缓存空间,当T-cont调度的队列号当前已经占有的缓存空间大于或等于预先为所述T-cont调度的队列号配置的缓存空间时,可以对该缓存空间实行尾部丢弃策略或随机丢弃策略。这样一来,当网络延迟或阻塞时,首先丢弃缓存已满的缓存空间中缓存的报文,保证网络能够恢复通畅。
步骤206、若不丢弃所述报文,将所述报文存储到所述T-cont调度的队列号对应的队列。
由于T-cont调度包括256个不同的队列号,可以预先为256个不同的队列号分别设置不同的队列,即一个队列号对应一个队列。具体的,首先为32个T-cont设置32个队列编码A,每个队列编码A下设置8个不同的队列编码B。根据报文的T-cont调度的队列号的第一部分确定队列编码A, 根据T-cont调度的队列号的第二部分确定队列编码B,队列编码A与队列编码B合并即为T-cont调度的队列号对应的队列编码,进而可以将报文存储到所述队列编码对应的队列。
步骤207、根据所述预设规则从所述T-cont调度的队列号对应的队列中获取所述报文。
所述预设规则可以为WFQ,DRR,SP或者RR中的任意一种,其中WFQ,DRR,SP和RR均为现有技术,本发明实施例在此不做赘述。
步骤208、输出所述报文。
根据上述规则从所述T-cont调度的队列号对应的队列中获取该报文,此时对于报文的e8调度和T-cont调度两个方面的QoS均已完成,即可以输出所述报文,将获取的报文发送给后级单元进行后续处理,本发明实施例再次不做赘述。
其中步骤205至207是步骤202至204的循环重复,步骤202至204对应e8调度,步骤205至207对应T-cont调度,这样一来,在通信网络实现QoS时,不需要设置多套QoS装置,依次执行每一个QoS,而是通过循环执行的方法,采用一套QoS复用装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
本发明实施例提供一种QoS复用装置30,所述QoS复用装置30能够执行N个QoS,所述N为大于1的整数;如图3所示,所述QoS复用装置30包括:队列映射单元301、缓存管理单元302、队列管理单元303、调度和整形单元304。
所述队列映射单元301配置为给接收到的报文分配N个队列号,分别对应所述N个QoS,其中,第i个队列号对应第i个QoS,所述i为大于0,小于或等于N的整数。
所述缓存管理单元302配置为根据所述报文的第i个队列号当前已经占 有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文,并当不丢弃所述报文时,将所述报文发送给队列管理单元;所述缓存管理单元为所述第i个QoS的不同队列号,预先配置不同的缓存空间。
所述队列管理单元303配置为将所述报文存储到所述第i个的队列号对应的队列;所述队列管理单元303为所述第i个QoS的不同队列号,设置不同的队列。
所述调度和整形单元304配置为根据预设规则从所述第i个的队列号对应的队列中获取所述报文,还配置为将所述报文发送给所述缓存管理单元302,使得所述缓存管理单元302根据所述报文的第i+1个队列号当前已经占有的缓存空间与预先为所述第i+1个队列号配置的缓存空间的比较,确定是否丢弃所述报文。
通常通信网络在运作时需要对一个报文进行多个方面的服务质量调度,即需要执行N个QoS,每个QoS表示一个方面的服务质量调度,常用的QoS涉及根据使用带宽设定的不同优先级的用户的服务质量调度,或者针对不同用户当前使用的不同业务的服务质量调度,实际应用中还可以包括其他QoS,本发明实施例再次不做限定。
这样一来,通信网络实现多个QoS时,不需要设置多套QoS装置,而是在QoS装置的队列映射单元分别为N个QoS分配队列号之后,每循环一次执行一个QoS,通过循环的方式实现了缓存管理单元、队列管理单元和调度和整形单元的复用,从而仅用一套QoS装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
示例的,所述QoS复用装置10能够执行两个QoS,分别为e8调度和T-cont调度。如图4所示,所述QoS复用装置10可以通过如下步骤实现e8调度和T-cont调度:
步骤401、队列映射单元301为接收到的报文分配两个队列号,分别为e8调度的队列号和T-cont调度的队列号。
所述队列映射单元301接收到报文之后,首先获取报文的描述符,根据描述符中的端口信息,为所述报文分配e8调度的队列号和T-cont调度的队列号,所述端口信息指示了所述报文的发送端口。
具体的,e8调度包括八个不同的队列号,队列号越大,表示优先级越高,初始化时,QoS复用装置30为不同的发送端口设置不同的优先级,在接收到报文之后,根据端口信息即可确定该报文对应的e8调度的队列号。同理,T-cont调度包括32个T-cont,每个T-cont包括八个序列,即T-cont调度的队列号包括两个部分,第一部分用于指示所述报文对应的T-cont,第二部分用于指示所述报文在对应的T-cont下对应的序列,因此T-cont调度包括32*8=256个队列号,其中,不同的队列号对应的处理时间不同。初始化时,QoS复用装置30为不同的发送端口设置不同的处理时间,所以在接收到报文之后,根据端口信息即可确定该报文对应的T-cont调度的队列号。
步骤402、队列映射单元301将报文发送给缓存管理单元302。
在为报文分配e8调度的队列号和T-cont调度的队列号之后,可以将所述e8调度的队列号和T-cont调度的队列号添加在所述报文的描述符中,然后将包括修改后的描述符的报文发送给缓存管理单元302,或者也可以直接将修改后的描述符发送给缓存管理单元302,实际应用中可以根据具体情况进行设置,本发明实施例对此不做限定。
步骤403、缓存管理单元302根据所述报文的e8调度的队列号当前已经占有的缓存空间与预先为所述e8调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
缓存管理单元302在接收到包括修改后描述符的报文或者修改后的描述符之后,首先获取描述符中的e8调度的队列号和T-cont调度的队列号, 然后根据所述报文的e8调度的队列号当前已经占有的缓存空间与预先为所述e8调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
具体的,本发明实施例以缓存管理单元302接收到包括修改后描述符的报文为例进行说明,缓存管理单元302为e8调度的8个不同的队列号配置不同的缓存空间,队列号越大的预先配置的缓存空间越大,即优先级高的报文对应的缓存空间大,优先级低的报文对应的缓存空间小,当报文e8调度的队列号当前已经占有的缓存空间大于或等于预先为所述e8调度的队列号配置的缓存空间时,对缓存空间实行尾部丢弃策略或随机丢弃策略。这样一来,由于优先级高的报文预先配置的缓存空间大,使得优先级高的报文尽量不会被丢弃,优先级低的报文预先配置的缓存空间小,使得在网络延迟或阻塞时,首先丢弃优先级低的报文,保证网络能够恢复通畅。
步骤404、若不丢弃所述报文,缓存管理单元302将所述报文发送给所述队列管理单元303。
在缓存管理单元302可以按照报文的缓存时间读取报文,即缓存较早的报文先读取,缓存较晚的报文后读取,读取的顺序与预先配置的缓存空间的大小无关。在按照报文的缓存时间读取报文之后,将报文发送给队列管理单元303。
步骤405、队列管理单元303将所述报文存储到所述e8调度的队列号对应的队列。
由于e8调度包括8个不同的队列号,所以队列管理单元303为8个不同的队列号分别设置了不同的队列,即一个队列号对应一个队列。在接收到缓存管理单元302发送的报文之后,根据报文的e8调度的队列号将所述报文存储在对应的队列。然后通知调度和整形单元304每个e8队列存储的报文数量。
步骤406、调度和整形单元304用于根据预设规则从所述e8调度的队 列号对应的队列中获取所述报文。
所述预设规则可以为WFQ,DRR,SP或者RR中的任意一种,其中WFQ,DRR,SP和RR均为现有技术,本发明实施例在此不做赘述。
步骤407、调度和整形单元304将所述报文发送给所述缓存管理单元302。
调度和整形单元304根据上述规则从所述e8调度的队列号对应的队列中获取该报文,然后将该报文在此发送给缓存管理单元302,使得缓存管理单元302按照T-cont调度的队列号进行缓存判断。
步骤408、缓存管理单元302根据所述报文的T-cont调度的队列号当前已经占有的缓存空间与预先为所述T-cont调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
缓存管理单元302接收到调度和整形单元304发送的报文之后,首先获取该报文的描述符,然后根据描述符中的T-cont调度的队列号当前已经占有的缓存空间与预先为所述T-cont调度的队列号配置的缓存空间的比较,确定是否丢弃所述报文。
具体的,缓存管理单元302可以预先为T-cont调度的256个不同的队列号分配不同的缓存空间,当T-cont调度的队列号当前已经占有的缓存空间大于或等于预先为所述T-cont调度的队列号配置的缓存空间时,可以对该缓存空间实行尾部丢弃策略或随机丢弃策略。这样一来,当网络延迟或阻塞时,首先丢弃缓存已满的缓存空间中缓存的报文,保证网络能够恢复通畅。
步骤409、若不丢弃所述报文,缓存管理单元302将所述报文发送给所述队列管理单元303。
步骤410、队列管理单元303将所述报文存储到所述T-cont调度的队列号对应的队列。
由于T-cont调度包括256个不同的队列号,所以队列管理单元303可以预先为256个不同的队列号分别设置了不同的队列,即一个队列号对应一个队列。具体的,队列管理单元303首先为32个T-cont设置32个队列编码A,每个队列编码A下设置8个不同的队列编码B。队列管理单元303根据报文的T-cont调度的队列号的第一部分确定队列编码A,根据T-cont调度的队列号的第二部分确定队列编码B,队列编码A与队列编码B合并即为T-cont调度的队列号对应的队列编码,进而可以将报文存储到所述队列编码对应的队列。然后队列管理单元303通知调度和整形单元304每个T-cont队列存储的报文数量。
步骤411、调度和整形单元304根据所述预设规则从所述T-cont调度的队列号对应的队列中获取所述报文。
所述预设规则可以为WFQ,DRR,SP或者RR中的任意一种,其中WFQ,DRR,SP和RR均为现有技术,本发明实施例在此不做赘述。
步骤412、调度和整形单元304输出所述报文。
调度和整形单元304根据上述规则从所述e8调度的队列号对应的队列中获取该报文,此时对于报文的e8和T-cont两个方面的QoS均已完成,所以调度和整形单元304可以输出所述报文,即将获取的报文发送给后级单元进行后续处理,本发明实施例再次不做赘述。
在实际应用中,所述队列映射单元301、缓存管理单元302、队列管理单元303、调度和整形单元304均可由位于QoS复用装置中的中央处理器(CPU,Central Processing Unit)、微处理器(MPU,Micro Processor Unit)、数字信号处理器(DSP,Digital Signal Processor)、或现场可编程门阵列(FPGA,Field Programmable Gate Array)等实现。
本发明实施例提供了一种QoS复用装置,所述装置能够执行N个QoS,所述N为大于1的整数;所述装置包括:队列映射单元,缓存管理单元, 队列管理单元,调度和整形单元;所述队列映射单元配置为给接收到的报文分配N个队列号,分别对应所述N个QoS,其中,第i个队列号对应第i个QoS,所述i为大于0,小于或等于N的整数;所述缓存管理单元配置为根据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文,并当不丢弃所述报文时,将所述报文发送给队列管理单元;所述缓存管理单元为所述第i个QoS的不同队列号,预先配置不同的缓存空间;所述缓存管理单元为所述第i个QoS的不同队列号,预先配置不同的缓存空间;所述队列管理单元配置为将所述报文存储到所述第i个的队列号对应的队列;所述队列管理单元为所述第i个QoS的不同队列号,设置不同的队列;所述调度和整形单元配置为根据预设规则从所述第i个的队列号对应的队列中获取所述报文,还配置为将所述报文发送给所述缓存管理单元,使得所述缓存管理单元根据所述报文的第i+1个队列号缓存所述报文。相较于现有技术,通信网络实现多个QoS时,不需要设置多套QoS装置,而是在QoS装置的队列映射单元分别为N个QoS分配队列号之后,每循环一次执行一个QoS,通过循环的方式实现了缓存管理单元、队列管理单元和调度和整形单元的复用,从而仅用一套QoS复用装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。
本发明实施例还提供一种计算机存储介质,其中存储有计算机可执行指令,该计算机可执行指令配置执行上述QoS复用方法。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。
工业实用性
本发明实施例提供了一种QoS复用方案,所述QoS复用方法包括:为接收到的报文分配N个队列号,分别对应对所述报文执行的N个QoS,根 据所述报文的第i个队列号当前已经占有的缓存空间与预先为所述第i个队列号配置的缓存空间的比较,确定是否丢弃所述报文;若不丢弃所述报文,将所述报文存储到第i个队列号对应的队列;根据预设规则从所述第i个的队列号对应的队列中获取所述报文;i小于N时,i加1且重复执行上述确定报文是否丢弃到获取所述报文的步骤,直至i等于N时,输出所述报文。相较于现有技术,在通信网络实现QoS时,不需要设置多套QoS装置,依次执行每一个QoS,而是分别为接收到的报文分配与N个QoS对应的N个队列号,然后通过循环执行的方法,每循环一次执行一个QoS,因此采用一套QoS复用装置即可实现多个QoS,节约了实现QoS的逻辑资源,降低了QoS芯片的复杂性。

Claims (11)

  1. A quality of service (QoS) multiplexing method, the method comprising:
    assigning N queue numbers to a received packet, corresponding respectively to N QoS operations performed on the packet, wherein the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N;
    determining whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number;
    if the packet is not discarded, storing the packet in the queue corresponding to the i-th queue number;
    obtaining the packet from the queue corresponding to the i-th queue number according to a preset rule; and
    when i is less than N, incrementing i by 1 and repeating the steps from determining whether to discard the packet to obtaining the packet, until i equals N, at which point the packet is output.
  2. The QoS multiplexing method according to claim 1, wherein N is 2 and the two QoS operations are e8 scheduling and T-cont scheduling respectively;
    assigning N queue numbers to the received packet, corresponding respectively to the N QoS operations performed on the packet, comprises:
    assigning two queue numbers to the received packet, namely an e8-scheduling queue number and a T-cont-scheduling queue number.
  3. The QoS multiplexing method according to claim 1 or 2, wherein the preset rule is Weighted Fair Queuing (WFQ), Deficit Round Robin (DRR), Strict Priority (SP), or Round-Robin (RR).
  4. The QoS multiplexing method according to claim 1 or 2, wherein,
    after the N queue numbers are assigned to the received packet, the method further comprises:
    adding the N queue numbers assigned to the packet to the descriptor of the packet.
  5. The QoS multiplexing method according to claim 2, wherein
    the e8 scheduling comprises eight different queue numbers and the T-cont scheduling comprises 256 different queue numbers;
    assigning two queue numbers to the received packet comprises:
    assigning the packet an e8-scheduling queue number from the eight different queue numbers and a T-cont-scheduling queue number from the 256 different queue numbers, according to the sending port of the packet.
  6. A quality of service (QoS) multiplexing apparatus, capable of performing N QoS operations, N being an integer greater than 1, the QoS multiplexing apparatus comprising a queue mapping unit, a cache management unit, a queue management unit, and a scheduling and shaping unit, wherein:
    the queue mapping unit is configured to assign N queue numbers to a received packet, corresponding respectively to the N QoS operations, wherein the i-th queue number corresponds to the i-th QoS and i is an integer greater than 0 and less than or equal to N;
    the cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the i-th queue number of the packet and the buffer space pre-configured for the i-th queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for different queue numbers of the i-th QoS;
    the queue management unit is configured to store the packet in the queue corresponding to the i-th queue number; the queue management unit sets different queues for different queue numbers of the i-th QoS;
    the scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the i-th queue number according to a preset rule, and is further configured to send the packet to the cache management unit, so that the cache management unit determines whether to discard the packet according to a comparison between the buffer space currently occupied by the (i+1)-th queue number of the packet and the buffer space pre-configured for the (i+1)-th queue number.
  7. The QoS multiplexing apparatus according to claim 6, wherein N is 2 and the QoS multiplexing apparatus performs two QoS operations, namely e8 scheduling and T-cont scheduling;
    the queue mapping unit is configured to assign two queue numbers to the received packet, namely an e8-scheduling queue number and a T-cont-scheduling queue number;
    the cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the e8-scheduling queue number of the packet and the buffer space pre-configured for the e8-scheduling queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for different e8-scheduling queue numbers;
    the queue management unit is configured to store the packet in the queue corresponding to the e8-scheduling queue number; the queue management unit sets different queues for different e8-scheduling queue numbers;
    the scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the e8-scheduling queue number according to the preset rule and to send the packet to the cache management unit;
    the cache management unit is configured to determine whether to discard the packet according to a comparison between the buffer space currently occupied by the T-cont-scheduling queue number of the packet and the buffer space pre-configured for the T-cont-scheduling queue number, and to send the packet to the queue management unit when the packet is not discarded; the cache management unit pre-configures different buffer spaces for different T-cont-scheduling queue numbers;
    the queue management unit is configured to store the packet in the queue corresponding to the T-cont-scheduling queue number; the queue management unit sets different queues for different T-cont-scheduling queue numbers;
    the scheduling and shaping unit is configured to obtain the packet from the queue corresponding to the T-cont-scheduling queue number according to the preset rule.
  8. The QoS multiplexing apparatus according to claim 6 or 7, wherein the preset rule is Weighted Fair Queuing (WFQ), Deficit Round Robin (DRR), Strict Priority (SP), or Round-Robin (RR).
  9. The QoS multiplexing apparatus according to claim 6 or 7, wherein
    the queue mapping unit is configured to add the N queue numbers assigned to the packet to the descriptor of the packet, and to send the packet including the descriptor, or the descriptor, to the cache management unit.
  10. The QoS multiplexing apparatus according to claim 7, wherein
    the e8 scheduling comprises eight different queue numbers and the T-cont scheduling comprises 256 different queue numbers;
    the queue mapping unit is configured to assign the packet an e8-scheduling queue number from the eight different queue numbers and a T-cont-scheduling queue number from the 256 different queue numbers, according to the sending port of the packet.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being configured to perform the QoS multiplexing method according to any one of claims 1 to 5.
PCT/CN2016/082148 2015-08-26 2016-05-13 一种服务质量复用方法及装置、计算机存储介质 WO2017032075A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510532990.8A CN106487713A (zh) 2015-08-26 2015-08-26 一种服务质量复用方法及装置
CN201510532990.8 2015-08-26

Publications (1)

Publication Number Publication Date
WO2017032075A1 true WO2017032075A1 (zh) 2017-03-02

Family

ID=58099576

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/082148 WO2017032075A1 (zh) 2015-08-26 2016-05-13 一种服务质量复用方法及装置、计算机存储介质

Country Status (2)

Country Link
CN (1) CN106487713A (zh)
WO (1) WO2017032075A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114727340A (zh) * 2021-01-06 2022-07-08 华为技术有限公司 传输报文的方法和装置
CN113347116B (zh) * 2021-06-16 2022-05-27 杭州迪普科技股份有限公司 QoS调度延迟抖动处理方法及装置
CN114024923A (zh) * 2021-10-30 2022-02-08 江苏信而泰智能装备有限公司 一种多线程报文捕获方法、电子设备及计算机存储介质

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090028096A1 (en) * 2006-02-07 2009-01-29 Ntt Docomo, Inc. Upper station, base station, mobile station, and transmission control method
CN102132511A (zh) * 2008-08-27 2011-07-20 思科技术公司 用于虚拟机的虚拟交换机服务质量
CN104378309A (zh) * 2013-08-16 2015-02-25 中兴通讯股份有限公司 OpenFlow网络中实现QoS的方法、系统和相关设备

Also Published As

Publication number Publication date
CN106487713A (zh) 2017-03-08

Similar Documents

Publication Publication Date Title
US11706149B2 (en) Packet sending method, network node, and system
US10326713B2 (en) Data enqueuing method, data dequeuing method, and queue management circuit
US8230110B2 (en) Work-conserving packet scheduling in network devices
US11968111B2 (en) Packet scheduling method, scheduler, network device, and network system
US7606250B2 (en) Assigning resources to items such as processing contexts for processing packets
US7843940B2 (en) Filling token buckets of schedule entries
US10263906B2 (en) Flow scheduling device and method
WO2018195728A1 (zh) 一种客户业务传输方法和装置
WO2017000872A1 (zh) 缓存分配方法及装置
JP7487316B2 (ja) サービスレベル構成方法および装置
US8379518B2 (en) Multi-stage scheduler with processor resource and bandwidth resource allocation
WO2017032075A1 (zh) 一种服务质量复用方法及装置、计算机存储介质
WO2020142867A1 (zh) 一种流量整形方法及相关设备
JP2020072336A (ja) パケット転送装置、方法、及びプログラム
US8929216B2 (en) Packet scheduling method and apparatus based on fair bandwidth allocation
US7065091B2 (en) Method and apparatus for scheduling and interleaving items using quantum and deficit values including but not limited to systems using multiple active sets of items or mini-quantum values
CN109905331B (zh) 队列调度方法及装置、通信设备、存储介质
WO2012171461A1 (zh) 报文转发方法及装置
US9083617B2 (en) Reducing latency of at least one stream that is associated with at least one bandwidth reservation
Patel et al. Design and implementation of low latency weighted round Robin (LL-WRR) scheduling for high speed networks
JP2020022023A (ja) パケット転送装置、方法、及びプログラム
US9128755B2 (en) Method and apparatus for scheduling resources in system architecture
JP2002344509A (ja) ルータとパケットの読み出しレート制御方法およびその処理プログラム
EP4099649A1 (en) Integrated scheduler for iec/ieee 60802 end-stations
US7599381B2 (en) Scheduling eligible entries using an approximated finish delay identified for an entry based on an associated speed group

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16838336

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16838336

Country of ref document: EP

Kind code of ref document: A1