WO2015113405A1 - 数据流量限制方法及装置 (Data traffic limiting method and apparatus) - Google Patents

Data traffic limiting method and apparatus

Info

Publication number
WO2015113405A1
WO2015113405A1 (application PCT/CN2014/087395)
Authority
WO
WIPO (PCT)
Prior art keywords
token
bucket
tokens
credit
current
Prior art date
Application number
PCT/CN2014/087395
Other languages
English (en)
French (fr)
Inventor
李毅
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to KR1020167023586A priority Critical patent/KR101738657B1/ko
Priority to JP2016549033A priority patent/JP6268623B2/ja
Priority to EP14880535.1A priority patent/EP3101851B1/en
Publication of WO2015113405A1 publication Critical patent/WO2015113405A1/zh
Priority to US15/224,232 priority patent/US10560395B2/en

Classifications

    • H04L47/70: Traffic control in data switching networks; Admission control; Resource allocation
    • H04L12/6418: Data switching networks; Hybrid switching systems; Hybrid transport
    • H04L45/245: Routing or path finding of packets; Multipath; Link aggregation, e.g. trunking
    • H04L47/215: Flow control; Congestion control using token-bucket
    • H04L45/16: Routing or path finding of packets; Multipoint routing
    • H04L45/28: Routing or path finding of packets using route fault recovery
    • Y02D30/50: Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the present invention relates to the field of network communication technologies, and in particular, to a data traffic limiting method and apparatus.
  • the data traffic of a link is likely to exceed the service level agreement between the link and the service provider.
  • when that happens, the link occupies the bandwidth of other links in the communication network, so that the other links cannot obtain their normal bandwidth service.
  • the link data traffic needs to be restricted at the entrance of the communication network to ensure that the link data traffic does not exceed the bandwidth service signed by the service level agreement.
  • a token bucket is a commonly used data traffic limiting technology.
  • a token bucket is a storage pool inside a network device.
  • a token is a virtual packet that fills a token bucket at a given rate. The token is placed in the token bucket according to the data traffic allocated to the link, and the token is taken from the token bucket according to the actual data traffic of the link. When there is no token in the token bucket, the data cannot be sent to the link.
  • the network device configures one token bucket for each board chipset. If tokens are placed at a rate greater than the actual data traffic, tokens arriving after the token bucket is full are discarded; if the token placement rate stays below the actual data traffic, then once the token bucket is emptied, data can no longer be sent. This per-bucket scheme makes poor use of the remaining data traffic of the other links in the link aggregation group interface.
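For reference, the conventional single token bucket described above can be sketched as follows; overflowing tokens are simply discarded, which is exactly the waste the credit-bucket scheme is meant to address. The class and method names are illustrative, not from the patent.

```python
import time

class TokenBucket:
    """Minimal token bucket sketch: tokens fill at `rate` (units/sec) up to
    `capacity`; sending a packet of `size` units consumes that many tokens."""

    def __init__(self, rate, capacity):
        self.rate = rate             # token placement rate
        self.capacity = capacity     # bucket space (CBS)
        self.tokens = capacity       # start full
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # Tokens arriving past `capacity` overflow and are discarded here;
        # the scheme described in this document collects them instead.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, size):
        self._refill()
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False        # no tokens: the packet cannot be sent
```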
  • Figure 1 shows the network topology of a link aggregation group interface whose member ports span board chipsets.
  • the link aggregation group interface consists of two member ports.
  • the links go up to two different chipsets in the stacking device.
  • the traditional data traffic restriction methods include the following two methods:
  • one method synchronizes the total CIR (Committed Information Rate) set for the link aggregation group interface to each member port of the interface; take the link aggregation group shown in Figure 1 as an example.
  • the total CIR configured for the interface is 100M.
  • the data traffic allowed on each member port of the link aggregation group interface is then also 100M.
  • the total data traffic allowed on the two member ports is therefore 200M, which deviates from the overall CIR of 100M.
  • the other method sets weights in a static estimation mode.
  • with the overall CIR of the link aggregation group interface shown in Figure 1 at 100M and the two member ports weighted 1:1, each member port is limited to 50M of data traffic.
  • a data traffic limiting method and apparatus are provided to solve the technical problem that the data traffic limiting mode in the prior art cannot accurately control the data traffic according to the overall data traffic requirement preset by the link aggregation group interface.
  • the present invention provides a data traffic limiting method, applied to a network device in which the member ports of a link aggregation group interface are located on different board chipsets; a credit bucket is configured for each link aggregation group interface in the network device.
  • the credit bucket stores tokens that overflow from the token bucket corresponding to each member port, and tokens overflowing from different token buckets are marked differently; the bucket parameters of the credit bucket and each token bucket are pre-configured, where the bucket parameters include the token placement rate of each token bucket, the space of each token bucket, and the space of the credit bucket;
  • the data flow restriction method includes:
  • when the number of tokens in the current token bucket does not reach the number needed to send the to-be-sent packet, it is determined whether the sum of the tokens in the current token bucket and the tokens in the credit bucket not overflowed by the current token bucket reaches the number of tokens needed to send the to-be-sent packet;
  • if so, the to-be-sent packet is sent, and the number of tokens in the current token bucket and in the credit bucket is correspondingly reduced.
  • determining whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that were not overflowed by the current token bucket reaches the number of tokens needed to send the to-be-sent packet includes: determining the token difference between the number of tokens in the current token bucket and the number needed to send the packet, and judging whether the number of non-current-token-bucket overflow tokens in the credit bucket is not less than that difference.
  • the method further includes: when the sum of the number of tokens in the current token bucket and the number of tokens in the credit bucket not overflowed by the current token bucket does not reach the number needed to send the to-be-sent packet, discarding the to-be-sent packet or re-marking the to-be-sent packet.
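A minimal sketch of this decision, with the credit bucket represented as a map from mark (color) to token count; the representation and all names are assumptions for illustration, not taken from the patent text.

```python
def try_send(packet_size, current, credit, current_mark):
    """Decide whether a packet can go out, borrowing from the credit bucket.

    `current` is the token count of the current token bucket; `credit` maps a
    mark (e.g. a color) to the count of tokens that overflowed from the bucket
    carrying that mark.
    """
    if current >= packet_size:                       # enough local tokens
        return current - packet_size, credit, "sent"
    # Only tokens overflowed by OTHER token buckets may be borrowed.
    borrowable = sum(n for mark, n in credit.items() if mark != current_mark)
    deficit = packet_size - current
    if borrowable >= deficit:
        # Empty the current bucket and take the deficit from foreign tokens.
        remaining = deficit
        new_credit = dict(credit)
        for mark, n in new_credit.items():
            if mark == current_mark or remaining == 0:
                continue
            take = min(n, remaining)
            new_credit[mark] = n - take
            remaining -= take
        return 0, new_credit, "sent"
    return current, credit, "dropped-or-remarked"
```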
  • the pre-setting the bucket parameters of the credit bucket and each token bucket includes:
  • the token placement rate of each token bucket is determined according to a preset token placement rate determining method, and the sum of the token placement rates of the token buckets is equal to the overall committed information rate or peak information rate of the link aggregation group interface, where the preset token placement rate determining method includes a rate halving determining method or a rate weight determining method;
  • the space of each token bucket is determined according to a preset space determining method, and the sum of the spaces of the token buckets is equal to the overall committed burst size or peak burst size of the link aggregation group interface, where the preset space determining method includes a space halving determining method or a space weight determining method;
  • the space of the credit bucket is equal to the overall committed burst size or peak burst size.
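The parameter constraints above (rates summing to the overall CIR/PIR, spaces summing to the overall CBS/PBS, credit-bucket space equal to the overall burst size) can be sketched as follows. The function name and the weight-based split are illustrative; equal weights reproduce the halving method.

```python
def configure_buckets(overall_cir, overall_cbs, weights):
    """Split the overall CIR/CBS of a link aggregation group interface across
    the member-port token buckets according to per-port weights."""
    total = sum(weights)
    rates = [overall_cir * w / total for w in weights]    # sum == overall CIR
    spaces = [overall_cbs * w / total for w in weights]   # sum == overall CBS
    credit_space = overall_cbs    # credit bucket space equals overall CBS/PBS
    return rates, spaces, credit_space
```

With the Figure 1 numbers (overall CIR 100M, two equally weighted ports), each token bucket gets a 50M placement rate.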
  • the method further includes: when the credit bucket meets a preset condition, discarding tokens in the credit bucket;
  • discarding the tokens in the credit bucket includes:
  • when a token in the credit bucket has been stored for longer than a preset duration, discarding it according to a preset fairness algorithm; or
  • discarding tokens stored in the credit bucket that overflowed from the token buckets before the current period.
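One way to realize the "overflowed before the current period" rule is to tag stored token batches with the period in which they overflowed; the tagging scheme below is an assumption for illustration, not from the text.

```python
def expire_credit_tokens(credit, current_period):
    """Drop credit-bucket tokens that overflowed before the current period.

    `credit` maps (mark, period) -> token count, i.e. each stored batch
    remembers both its color mark and the period it overflowed in.
    """
    return {k: n for k, n in credit.items() if k[1] == current_period}
```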
  • the method further includes:
  • a debit bucket is pre-configured for each board chipset, used to store tokens overflowing from the token buckets belonging to the same board chipset, so that the credit bucket stores tokens uniformly taken out of each debit bucket; the space of each debit bucket is set according to a preset space determining method, and the sum of the spaces of the debit buckets is equal to the overall committed burst size or peak burst size,
  • where the preset space determining method includes a space halving determining method or a space weight determining method.
  • the method further includes:
  • if, in a preset latest period, the borrowing information of the current token bucket borrowing tokens from the credit bucket meets a preset bucket-expansion condition, the space of the current token bucket is increased according to its borrowing information; the borrowing information includes the number of borrowed tokens and/or the number of borrowings;
  • the spaces of the corresponding non-current token buckets are reduced according to the mark distribution information of the tokens the current token bucket borrowed from the credit bucket, and the sum of the reduced spaces is equal to the sum of the increased space;
  • the new token placement rate of each token bucket is then determined from the re-determined space of each token bucket and the token placement rate calculation method.
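The space rebalancing step can be sketched as follows, assuming the shrink applied to each lender bucket is proportional to how many tokens it lent; that proportional rule and all names are assumptions, since the exact apportioning is not spelled out here.

```python
def rebalance_spaces(spaces, borrowed_by_mark, current, grow_by):
    """Grow the heavy borrower's space by `grow_by` and shrink the lender
    buckets in proportion to how many tokens each lent, keeping the total
    space (the overall CBS/PBS) unchanged."""
    total_lent = sum(borrowed_by_mark.values())
    new = dict(spaces)
    new[current] += grow_by
    for mark, lent in borrowed_by_mark.items():
        # The reduction mirrors the mark distribution of the borrowed tokens.
        new[mark] -= grow_by * lent / total_lent
    return new
```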
  • the method further includes:
  • if the borrowing information of the current token bucket borrowing tokens from the credit bucket meets the preset bucket-expansion condition, the token placement rate of the current token bucket is increased from its initial token placement rate to a new token placement rate according to the number of tokens it borrowed; the borrowing information includes the number of borrowed tokens and/or the number of borrowings.
  • the method further includes:
  • the present invention further provides a data traffic limiting apparatus, including: a first pre-configuration unit, a receiving unit, a first determining unit, a second determining unit, a sending unit, and a token management unit;
  • the first pre-configuration unit is configured to pre-configure a credit bucket for each link aggregation group interface in the network device, where the credit bucket is used to store tokens overflowed by each token bucket, and tokens overflowed by different token buckets are marked differently; it also pre-configures the bucket parameters of the credit bucket and each token bucket, where the bucket parameters include the token placement rate of each token bucket, the space of each token bucket, and the space of the credit bucket;
  • the receiving unit is configured to receive a message to be sent
  • the first determining unit is configured to determine whether the number of tokens in the current token bucket corresponding to the to-be-sent packet reaches the number of tokens that send the to-be-sent packet;
  • the second determining unit is configured to determine, when the number of tokens in the current token bucket does not reach the number of tokens to send the to-be-sent packet, the token in the current token bucket and the credit Whether the sum of the number of tokens that are not overflowed by the current token bucket in the bucket reaches the number of tokens that send the to-be-sent packet;
  • the sending unit is configured to send the to-be-sent packet when the sum of the number of tokens in the current token bucket and in the credit bucket reaches the number of tokens needed to send the to-be-sent packet;
  • a token management unit configured to correspondingly reduce the number of tokens in the current token bucket and the credit bucket.
  • the second determining unit includes:
  • a determining subunit configured to determine the token difference between the number of tokens in the current token bucket and the number of tokens needed to send the to-be-sent packet;
  • a judging subunit configured to judge whether the number of non-current-token-bucket overflow tokens in the credit bucket is not less than the token difference;
  • a first determining subunit configured to determine, when the number of non-current-token-bucket overflow tokens in the credit bucket is not less than the token difference, that the sum of the tokens in the current token bucket and in the credit bucket reaches the number of tokens needed to send the to-be-sent packet;
  • a second determining subunit configured to determine, when the number of non-current-token-bucket overflow tokens in the credit bucket is less than the token difference, that the sum of the tokens in the current token bucket and in the credit bucket does not reach the number of tokens needed to send the to-be-sent packet.
  • the device further includes: a message processing unit;
  • the packet processing unit is configured to discard the to-be-sent packet when the sum of the tokens in the current token bucket and the tokens in the credit bucket not overflowed by the current token bucket does not reach the number of tokens needed to send the to-be-sent packet;
  • the first pre-configured unit includes:
  • a token placement rate configuration unit configured to determine the token placement rate of each token bucket according to a preset token placement rate determining method, such that the sum of the token placement rates of the token buckets is equal to the overall committed information rate or peak information rate of the link aggregation group interface, where the preset token placement rate determining method includes a rate halving determining method or a rate weight determining method;
  • a first space configuration unit configured to determine the space of each token bucket according to a preset space determining method, such that the sum of the spaces of the token buckets is equal to the overall committed burst size or peak burst size of the link aggregation group interface, where the preset space determining method includes a space halving determining method or a space weight determining method;
  • a second space configuration unit configured to configure a space of the credit bucket as the overall commitment burst size or a peak burst size.
  • the device further includes:
  • a second pre-configuration unit configured to pre-configure a debit bucket for each board chipset and to set the space of each debit bucket according to a preset space determining method, such that the sum of the spaces of the debit buckets is equal to the overall committed burst size or peak burst size, where the preset space determining method includes a space halving determining method or a space weight determining method; the debit bucket is used to store tokens overflowed by the token buckets belonging to the same board chipset, so that the credit bucket stores tokens uniformly taken out of each debit bucket according to a preset condition.
  • the device further includes:
  • a first space adjustment unit configured to increase the space of the current token bucket according to its borrowing information if the number of tokens the current token bucket borrowed from the credit bucket exceeds a first threshold within a preset latest period;
  • a second space adjustment unit configured to reduce the spaces of the corresponding non-current token buckets according to the mark distribution information of the tokens the current token bucket borrowed from the credit bucket, where the sum of the reduced spaces is equal to the sum of the increased space;
  • the first token placement rate adjustment unit is configured to determine a new token placement rate of each token bucket according to the re-determined space of each token bucket and a token placement rate calculation method.
  • the device further includes:
  • a second token placement rate adjustment unit configured to increase the token placement rate of the current token bucket from an initial token placement rate to a new token placement rate, according to the borrowing information of the current token bucket, if the number of tokens the current token bucket borrowed from the credit bucket exceeds a first threshold within a preset latest period.
  • the device further includes:
  • a third space adjustment unit configured to adjust a space of the current token bucket according to a new token placement rate of the current token bucket, and a correspondence between a token placement rate and a space of the token bucket;
  • a fourth space adjustment unit configured to determine the space adjustments of the corresponding non-current token buckets according to the mark distribution information of the tokens the current token bucket borrowed from the credit bucket, so that the sum of the reduced spaces is equal to the sum of the increased space.
  • the present invention provides a network device, including any one of the data traffic limiting devices described in the second aspect.
  • the data traffic limiting method provided by the embodiment of the present invention first configures a credit bucket for each link aggregation group interface in the network device, used to store tokens overflowed by each token bucket, with tokens overflowed by different token buckets marked distinctly; when a to-be-sent packet is received, it is determined whether the tokens in the token bucket corresponding to the packet are sufficient to send it; if the number of tokens in that token bucket is not sufficient, tokens not overflowed by the current token bucket are borrowed from the credit bucket.
  • FIG. 1 is a schematic structural diagram of an inter-frame link aggregation group interface
  • FIG. 2 is a schematic flowchart of a data traffic limiting method according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of a specific application scenario of FIG. 1 according to an embodiment of the present invention.
  • FIG. 4 is a graph showing a data flow change curve for the link aggregation group interface shown in FIG. 1;
  • FIG. 5 is a schematic diagram of another specific application scenario of FIG. 1 according to an embodiment of the present invention.
  • FIG. 6 is a schematic flowchart diagram of another data flow limiting method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a data traffic limiting apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of the first pre-configuration unit shown in FIG. 7 according to the present invention.
  • FIG. 9 is a schematic structural diagram of another data traffic limiting apparatus according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of another data flow limiting apparatus according to an embodiment of the present invention.
  • the data traffic limiting method provided by the embodiment of the present invention is applied to a link aggregation group interface in a network device (for example, a box device or a stacking device), where the member ports of the link aggregation group interface span boards, that is, the member ports are located in different board chipsets.
  • the data traffic limit parameter of the link aggregation group interface is for the entire link aggregation group interface, that is, the sum of the data traffic allowed by all member ports of the link aggregation group interface.
  • the overall data traffic set by the service provider for the link aggregation group interface is for the entire link aggregation group interface, that is, the sum of the data traffic allowed by all member ports of the link aggregation group interface.
  • the credit bucket is used to store tokens overflowing from the token buckets of the board chipsets of the link aggregation group interface, and tokens overflowed by different token buckets are marked (for example, tokens overflowed by different token buckets may be dyed different colors);
  • the credit bucket discards part or all of its tokens according to a preset condition;
  • when tokens from different token buckets must be discarded, the credit bucket discards them according to a preset fairness algorithm, thereby ensuring that
  • the tokens in the credit bucket accurately reflect the remaining data traffic of each member port of the link aggregation group interface.
  • the preset condition may include the following conditions:
  • when a token in the credit bucket has been stored for one period, it is discarded according to a preset fairness algorithm; when the tokens in the credit bucket come from different token buckets, tokens from the different token buckets are discarded according to the preset fairness algorithm; the period may be a preset multiple of a clock cycle of the CPU.
  • the bucket parameters of the credit bucket and each token bucket are pre-configured, and the bucket parameters include a token placement rate of the token bucket, a space of the token bucket, and a space of the credit bucket.
  • suppose the number of token buckets on the board chipsets (for example, the board chips of a frame device or the chips of a box device) of the link aggregation group interface is M,
  • the overall CIR (or PIR) value of the link aggregation group interface is N1, the overall CBS (or PBS) value is N2, the initial space of the i-th token bucket is CBSi (or PBSi), and the initial token placement rate of the i-th token bucket is CIRi (or PIRi); then the bucket parameters satisfy the following formulas:
  • CIR1 + CIR2 + ... + CIRM = N1 (or PIR1 + PIR2 + ... + PIRM = N1)
  • CBS1 + CBS2 + ... + CBSM = N2 (or PBS1 + PBS2 + ... + PBSM = N2)
  • CBS: Committed Burst Size; PBS: Peak Burst Size
  • the space of the credit bucket is equal to the sum of the spaces of the token buckets, that is, the space of the credit bucket is equal to N2.
  • the process of configuring token bucket parameters can include the following process:
  • FIG. 2 is a schematic flowchart of a data traffic limiting method according to an embodiment of the present invention. As shown in FIG. 2, the method may include the following steps:
  • in step S11, when a to-be-sent packet is received on a member port, the board chipset corresponding to the member port is determined, and from it the corresponding token bucket. It is then judged whether the tokens in that token bucket satisfy the number of tokens needed to send the packet; if yes, in step S12, tokens in the token bucket are used to send the to-be-sent packet, and the corresponding number of tokens is deducted from the token bucket.
  • when the number of tokens in the current token bucket does not reach the number needed to send the packet, step S13 determines whether the sum of the tokens in the current token bucket and the tokens in the credit bucket overflowed by non-current token buckets reaches the number of tokens needed to send the to-be-sent packet; if yes, step S14 is performed; if not, step S15 is performed.
  • the determining process of step S13 may include the following steps:
  • step 3: determining that the sum of the tokens in the current token bucket and the green tokens in the credit bucket reaches the number of tokens needed to send the to-be-sent packet;
  • step 4: determining that the sum of the tokens in the current token bucket and the green tokens in the credit bucket does not reach the number of tokens needed to send the to-be-sent packet.
  • in step S14, the to-be-sent packet is sent, and the number of tokens in the current token bucket and in the credit bucket is correspondingly reduced.
  • specifically, the tokens in the current token bucket are cleared, and at the same time the non-current-token-bucket overflow tokens in the credit bucket are reduced by N.
  • when the sum of the number of tokens in the current token bucket and in the credit bucket does not reach the number of tokens needed to send the to-be-sent packet, the packet is discarded or re-marked in step S15.
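Steps S11 to S15 can be condensed into one routine; a sketch, with the bucket and credit state held in simple dicts keyed by color mark (the representation and names are illustrative, not from the text).

```python
def handle_packet(need, bucket, credit, mark):
    """Walk the S11-S15 flow: S11 check the current bucket, S12 send locally,
    S13 check bucket plus foreign credit tokens, S14 send with borrowing,
    S15 drop or re-mark.  State is mutated in place."""
    if bucket[mark] >= need:                  # S11: enough local tokens?
        bucket[mark] -= need                  # S12: send from current bucket
        return "S12:sent"
    foreign = sum(n for m, n in credit.items() if m != mark)
    if bucket[mark] + foreign >= need:        # S13: borrowable tokens suffice?
        need -= bucket[mark]
        bucket[mark] = 0                      # S14: clear current bucket ...
        for m in credit:                      # ... and take the rest on credit
            if m == mark:
                continue
            take = min(credit[m], need)
            credit[m] -= take
            need -= take
        return "S14:sent"
    return "S15:dropped-or-remarked"          # S15
```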
  • the data traffic limiting method first configures a credit bucket for each link aggregation group interface in the network device to store the tokens overflowed by each token bucket, with distinguishing marks for tokens overflowed by different token buckets;
  • when a to-be-sent packet is received, it is determined whether the number of tokens in the token bucket corresponding to the packet reaches the number needed to send it.
  • if not, tokens not overflowed by the current token bucket are borrowed from the credit bucket. If the tokens of the current token bucket plus the borrowed tokens are sufficient to send the to-be-sent packet, the packet is sent, and the corresponding numbers of tokens in the current token bucket and in the credit bucket are reduced.
  • FIG. 3 is a schematic diagram of a specific application scenario corresponding to the embodiment shown in FIG. 1.
  • the two member ports of the current link aggregation group are located on the board chipset 1 and the board chipset 2.
  • the token buckets configured on the two board chipsets are token bucket 11 and token bucket 21, and a credit bucket 3 is configured for the link aggregation group interface.
  • tokens overflowing from token bucket 11 into credit bucket 3 are dyed blue, and tokens overflowing from token bucket 21 are dyed green.
  • the link aggregation group interface shown in this example includes two links.
  • the data traffic restriction method provided in this embodiment of the present application can be applied to a link aggregation group interface including any multiple links.
  • the overall CIR of the link aggregation group interface is 100M, and the CIRs of the token bucket 11 and the token bucket 21 are both set to 50M data traffic.
  • initially, each token bucket is full of tokens; tokens are taken from the token bucket according to the actual data traffic of the member port, and tokens are then placed into the token bucket at the preset token placement rate.
  • Figure 4 shows how the data traffic of the two member ports of the link aggregation group interface changes with time; the horizontal axis is time and the vertical axis is member-port data traffic.
  • one curve shows the data traffic of the member port corresponding to board chipset 1,
  • and the other curve shows the data traffic of the member port corresponding to board chipset 2; it is assumed that the interval between every two adjacent time points
  • is one period, that is, the figure shows a data traffic curve over nine periods, where a period is a preset multiple of the clock cycle of the network device's CPU.
  • the data traffic of the member port corresponding to chipset 1 is 30M (not exceeding the 50M limit), and the data traffic of the member port corresponding to chipset 2 is 40M (not exceeding the 50M limit).
  • the total data traffic of the link aggregation group interface is 70M.
  • the token placement rate (50M) of the token bucket 11 is greater than the data traffic rate (30M)
  • the token placement rate (50M) of the token bucket 21 is greater than the data traffic rate (40M).
  • the overflowed tokens are dyed their corresponding colors by a fairness algorithm: tokens overflowed by token bucket 11 are dyed blue and placed in credit bucket 3,
  • and tokens overflowing from token bucket 21 are dyed green and placed in credit bucket 3. When tokens stored in credit bucket 3 have been held for the preset duration,
  • the tokens of the different colors are discarded according to the preset fairness algorithm; assuming the preset duration is one period, all the blue tokens and green tokens in credit bucket 3 are discarded.
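The arithmetic of this first period can be checked directly from the figures quoted above (50M placement rates against 30M and 40M of actual traffic):

```python
# Worked numbers for the first period of Figure 4, taken from the text above.
rate_11, rate_21 = 50, 50           # token placement rates (M) of buckets 11, 21
traffic_1, traffic_2 = 30, 40       # actual member-port data traffic (M)

overflow_blue = rate_11 - traffic_1    # tokens bucket 11 spills per period
overflow_green = rate_21 - traffic_2   # tokens bucket 21 spills per period
total = traffic_1 + traffic_2          # total interface traffic

print(overflow_blue, overflow_green, total)   # → 20 10 70
```

Both ports stay under their 50M shares, the interface total (70M) stays under the overall 100M CIR, and 20M of blue plus 10M of green tokens accumulate in credit bucket 3 each period.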
  • from time t2, the data traffic load-shared to the member port corresponding to chipset 2 increases from 40M to 60M, while the data traffic of the member port corresponding to chipset 1 is unchanged.
  • that is, from time t2,
  • the data traffic of the member port corresponding to chipset 2 is 60M.
  • the token placement rate (50M) of token bucket 11 is still greater than its data traffic rate (30M), so token bucket 11 overflows, and the overflowed tokens are taken into credit
  • bucket 3 and dyed blue. From time t2 to time t3, token bucket 11 places its remaining 20M*(t3-t2) worth of tokens into credit bucket 3.
  • the token placement rate of token bucket 21 is now smaller than its data traffic; that is, when the member port corresponding to chipset 2 sends packets, it needs to borrow from credit bucket 3 the tokens overflowed by token bucket 11, i.e. the blue tokens in the credit bucket.
  • the remaining 10M*(t3-t2) worth of blue tokens in the credit bucket is discarded.
  • the data traffic of the two chipsets is the same as the data traffic during the period from t2 to t3.
  • the token allocation process is the same and will not be described here.
  • from time t7 to time t8, the data traffic load-shared to the member port corresponding to chipset 1 increases from 30M to 50M, so the token placement rate of token bucket 11 equals the rate at which tokens are taken out, and
  • no extra tokens are placed into credit bucket 3; during this time period, the data traffic load-shared to the member port of chipset 2 is still 60M, that is, the token placement rate of token bucket 21 is smaller than the rate at which tokens are taken out.
  • since token bucket 11 has no extra tokens to place into the credit bucket, there is no token in the credit bucket that satisfies the borrowing condition.
  • FIG. 5 is a schematic diagram of another application scenario of the embodiment of the present application.
  • a debit bucket can also be configured for each board chipset. After the debit buckets are set up, the packet-sending and token-handling process is the same as the corresponding process in the embodiment shown in FIG. 1 and is not described again here.
  • the two member ports of the link aggregation group interface are disposed on single-board chipset 1 and single-board chipset 2 respectively.
  • token bucket 11 and debit bucket 12 are configured for single-board chipset 1;
  • token bucket 21 and debit bucket 22 are configured for single-board chipset 2.
  • a debit bucket is used to store the tokens overflowed by the token bucket in the same single-board chipset; when a debit bucket is full, further tokens overflowed by the token bucket are directly discarded.
  • credit bucket 3 is configured for the link aggregation group interface of the network device. After a preset trigger condition is met, credit bucket 3 takes tokens from each debit bucket according to a preset fairness algorithm, which ensures that within each period T the credit bucket uniformly removes tokens from the debit buckets corresponding to all overflowing token buckets (that is, the mix of token tags in the credit bucket is guaranteed).
  • if the data traffic of any member port in the link aggregation group interface suddenly increases while the remaining member ports have spare data traffic, the member port with increased traffic can borrow tokens from the credit bucket. This avoids the situation in which the credit bucket holds tokens of only a single colour so that a token bucket cannot borrow tokens of other colours.
  • precise control of the overall data traffic is thereby achieved for a link aggregation group interface whose member ports span chipsets.
  • the preset trigger condition may include the following conditions:
  • the credit bucket takes tokens uniformly from each debit bucket according to the fairness algorithm at intervals of a first preset duration;
  • when the number of tokens in a debit bucket reaches a preset value, the credit bucket takes tokens from each debit bucket according to the fairness algorithm.
  • the preset value can be freely set according to requirements.
  • the sum of the spaces of the debit buckets is equal to the overall CBS or overall PBS of the link aggregation group interface, and the space of each debit bucket can be determined according to the equal-division method or the weight method; of course, other methods can also be used.
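The two space-determination methods named above, equal division and weight determination, can be sketched as below. The function names and the remainder-handling policy are illustrative assumptions, not part of the patent text.

```python
# Split an overall CBS/PBS budget among n debit buckets.

def split_equally(total, n):
    """Equal-division method: each of n debit buckets gets total/n,
    with the integer remainder spread over the first buckets."""
    base, rem = divmod(total, n)
    return [base + (1 if i < rem else 0) for i in range(n)]

def split_by_weight(total, weights):
    """Weight method: bucket i gets total * w_i / sum(w); any rounding
    remainder is assigned to the first bucket so the sizes sum to total."""
    s = sum(weights)
    sizes = [total * w // s for w in weights]
    sizes[0] += total - sum(sizes)
    return sizes
```

Either way, the sizes sum exactly to the overall CBS/PBS, which is the invariant the text requires.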
  • tokens are placed into token bucket 11 and token bucket 21 at the specified token placement rate (50M). From time t1 to t2, the token placement rate (50M) of token bucket 11 is greater than its data traffic (30M), so tokens overflow and the overflowed tokens are placed into debit bucket 12; at the same time, the token placement rate (50M) of token bucket 21 is greater than its data traffic (40M), so tokens overflow and the overflowed tokens are placed into debit bucket 22.
  • when the trigger condition is met, credit bucket 3 takes tokens from debit bucket 12 and debit bucket 22 according to a certain fairness algorithm, and dyes the tokens from the two debit buckets different colours.
  • the fairness algorithm may be any fair algorithm; this is not limited in this application.
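Since the text leaves the fairness algorithm open, one plausible choice is a token-by-token round-robin draw across the debit buckets, sketched here with illustrative names; nothing about this particular algorithm is prescribed by the patent.

```python
# Round-robin fair draw: take one token at a time from each debit bucket
# in turn, so no single chipset's surplus monopolises the credit bucket.

def fair_take(debit_buckets, capacity):
    """debit_buckets: {name: tokens available}. Returns {name: taken},
    drawing round-robin until `capacity` tokens are taken or the
    debit buckets are empty."""
    remaining = dict(debit_buckets)
    taken = {name: 0 for name in debit_buckets}
    left = capacity
    while left > 0 and any(remaining.values()):
        for name in remaining:
            if left == 0:
                break
            if remaining[name] > 0:
                remaining[name] -= 1
                taken[name] += 1
                left -= 1
    return taken
```

When both buckets have surplus, the draw is even; when one runs dry, the remainder comes from the other, which matches the "uniform removal" goal described above.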
  • in this link aggregation group interface structure, a debit bucket is configured for each single-board chipset in the link aggregation group interface and is used to store the tokens overflowed by the token bucket of the same single-board chipset; the credit bucket then takes tokens from the debit buckets only at the trigger time points, instead of fetching tokens from a token bucket every time that token bucket overflows.
  • this avoids frequent interaction between the main control board corresponding to the credit bucket and the boards corresponding to the token buckets, lowers the performance requirements on the main control board corresponding to the credit bucket, and reduces the cost of the network device.
  • FIG. 6 is a schematic flowchart of another data traffic limiting method in an embodiment of the present application. On the basis of the embodiment shown in FIG. 2, the following steps are further included:
  • step S16: count the tag distribution information of the lent tokens. The tag distribution information includes the tags of the lent tokens and the corresponding lent quantities; that is, the tokens under each colour tag and the corresponding lent quantity are counted. For example, 20M of green tokens have been lent.
  • when the borrowing information of the tokens the current token bucket borrows from the credit bucket satisfies a preset bucket-tuning condition, the bucket parameters of the corresponding token buckets are adjusted according to the borrowing information of the current token bucket and the tag distribution information; the adjusted bucket parameters of the token buckets must still meet the data traffic setting parameters of the link aggregation group interface, where the bucket parameters include the token placement rate and the space.
  • the borrowing information includes the number of borrowed tokens, the number of borrowing occurrences, and/or the borrowing ratio; the borrowing ratio refers to the ratio of the number of tokens the token bucket borrows to the number of tokens required to send the to-be-sent packet.
  • the preset bucket-tuning condition may include: the number of tokens a token bucket continuously borrows from the credit bucket exceeds a first threshold, and/or a token bucket has continuously borrowed from the credit bucket a preset number of times. When the condition is met, it indicates that the traffic of one member port always exceeds the data traffic set for it while the traffic of the other member ports is always smaller than their set data traffic; at this time, dynamic bucket tuning is triggered.
  • the first threshold can be freely set as needed.
  • the process of adjusting the bucket parameters of the current token bucket may include the following steps:
  • after the adjustment, the token placement rate of each token bucket matches the data traffic of its member port during the period from t4 to t7, and tokens no longer need to be borrowed from the credit bucket. The data traffic of the member ports is thus dynamically controlled while the overall data traffic of the link aggregation group interface is still met, improving the control accuracy of the overall data traffic.
  • the difference from the space-first dynamic bucket-tuning process is that the token placement rate is adjusted first, and then the space of the token bucket is adjusted according to the correspondence between the token placement rate and the space of the token bucket. The final adjustment strategy may include the following steps:
  • optionally, the space of the debit buckets can also be dynamically adjusted, in proportion to the adjustment ratio of the corresponding token buckets.
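The space-first tuning strategy described above can be sketched as follows, under the assumption (not stated numerically in the text) that the borrowing bucket's space grows by the amount it borrowed and the lending buckets shrink by the same total, keeping the overall CBS fixed; the function names and proportional rate rule are illustrative.

```python
# Space-first dynamic bucket tuning: move space from lenders to the
# borrower, then re-derive per-bucket placement rates from the new spaces.

def retune_spaces(spaces, borrower, lent_by):
    """spaces: {bucket: space}; lent_by: {lender: tokens it lent}.
    Returns the adjusted spaces; the total (overall CBS) is unchanged."""
    new = dict(spaces)
    moved = sum(lent_by.values())
    new[borrower] += moved                   # grow the hungry bucket
    for lender, n in lent_by.items():
        new[lender] -= n                     # shrink lenders by what they lent
    assert sum(new.values()) == sum(spaces.values())
    return new

def rates_from_spaces(spaces, total_rate):
    """One possible rate calculation method: rates proportional to the
    re-determined spaces, summing to the overall CIR/PIR."""
    total_space = sum(spaces.values())
    return {b: total_rate * s / total_space for b, s in spaces.items()}
```

After the move, the port whose bucket kept borrowing gets both more burst space and a higher placement rate, so it no longer needs the credit bucket in steady state.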
  • in the data traffic limiting method provided in this embodiment, when the borrowing information of the tokens that a certain token bucket in the link aggregation group interface borrows from the credit bucket meets the preset bucket-tuning condition, the bucket parameters of the corresponding token buckets are adjusted according to the borrowing information of the current token bucket; the bucket parameters include the token placement rate and the space. The bucket parameters of the token buckets of the links in the link aggregation group interface are thereby dynamically adjusted, so that the token placement rate of each token bucket matches the data traffic of its corresponding member port, which improves the control accuracy of the overall data traffic of the link aggregation group interface.
  • through the description of the foregoing embodiments, a person skilled in the art can clearly understand that the present invention can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on such understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • corresponding to the foregoing method embodiments, the present invention also provides data traffic limiting apparatus embodiments.
  • FIG. 7 is a schematic structural diagram of a data traffic limiting apparatus.
  • the data traffic limiting apparatus is applied to a network device, and the network device may be a frame device or a stacking device.
  • the apparatus includes: a first pre-configuration unit 110, a receiving unit 120, a first judging unit 130, a second judging unit 140, a sending unit 150, and a token management unit 160.
  • the first pre-configuration unit 110 is configured to pre-configure a credit bucket for each link aggregation group interface in the network device, where the credit bucket is used to store the tokens overflowed by each token bucket, with the tokens overflowed by different token buckets marked distinctly; and to pre-configure the bucket parameters of the credit bucket and of each token bucket, where the bucket parameters include the token placement rate of a token bucket, the space of a token bucket, and the space of the credit bucket.
  • the receiving unit 120 is configured to receive a message to be sent.
  • the first determining unit 130 is configured to determine whether the number of tokens in the current token bucket corresponding to the to-be-sent packet reaches the number of tokens required to send the to-be-sent packet.
  • the second determining unit 140 is configured to determine, when the number of tokens in the current token bucket does not reach the number of tokens required to send the to-be-sent packet, whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket reaches the number of tokens required to send the to-be-sent packet.
  • the second determining unit may include the following subunits:
  • a determining subunit, configured to determine the token difference between the number of tokens in the current token bucket and the number of tokens required to send the to-be-sent packet;
  • a judging subunit, configured to judge whether the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference;
  • a first determining subunit, configured to determine, when the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference, that the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet;
  • a second determining subunit, configured to determine, when the number of tokens in the credit bucket that did not overflow from the current token bucket is less than the token difference, that the tokens in the current token bucket and said tokens in the credit bucket do not reach the number of tokens required to send the to-be-sent packet.
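The decision logic split across the judging subunits above amounts to a two-stage check: charge the packet against its own bucket first, then cover any shortfall (the "token difference") with credit-bucket tokens of other colours. A compact sketch, with names assumed for illustration:

```python
# Two-stage admission check performed per to-be-sent packet.

def can_send(pkt_tokens, own_tokens, credit_other_tokens):
    """pkt_tokens: tokens required to send the packet.
    own_tokens: tokens in the packet's own (current) token bucket.
    credit_other_tokens: credit-bucket tokens NOT overflowed by that bucket.
    Returns (sendable, taken_from_own, borrowed_from_credit)."""
    if own_tokens >= pkt_tokens:
        return True, pkt_tokens, 0          # first judging unit suffices
    diff = pkt_tokens - own_tokens          # the token difference
    if credit_other_tokens >= diff:
        return True, own_tokens, diff       # borrow exactly the shortfall
    return False, 0, 0                      # drop or re-mark the packet
```

On success, the caller reduces both counters by the returned amounts, which is exactly the token management unit's job.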
  • the sending unit 150 is configured to send the to-be-sent packet when the sum of the tokens in the current token bucket and the tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet.
  • the token management unit 160 is configured to reduce the number of tokens in the current token bucket and the credit bucket accordingly.
  • the apparatus embodiment shown in FIG. 7 may further include a packet processing unit 170, configured to discard the to-be-sent packet when the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket does not reach the number of tokens required to send the to-be-sent packet; or to re-mark the to-be-sent packet in that case and then send it.
  • the data traffic limiting apparatus shown in FIG. 7 may further add a second pre-configuration unit 180.
  • the second pre-configuration unit 180 is connected to the first pre-configuration unit 110 and is configured to pre-configure a debit bucket for each single-board chipset, and to set the space of each debit bucket according to a preset space determination method such that the sum of the spaces of the debit buckets is equal to the overall CBS or overall PBS, where the preset space determination method includes a space equal-division method or a space weight method; the debit bucket is used to store the tokens overflowed by the token buckets of the same single-board chipset, so that the credit bucket stores the tokens uniformly taken from the debit buckets according to preset conditions.
  • the data traffic limiting apparatus provided in this embodiment first configures a credit bucket for each link aggregation group interface in the network device to store the tokens overflowed by each token bucket, with the tokens overflowed by different token buckets marked distinctly.
  • the tokens of the corresponding token bucket are used to send the to-be-sent packet. If the number of tokens in that token bucket is insufficient, tokens in the credit bucket that did not overflow from the current token bucket are borrowed; if the tokens of the current token bucket plus the borrowed tokens are sufficient to send the to-be-sent packet, the packet is sent, and the numbers of tokens in the current token bucket and the credit bucket are reduced accordingly.
  • the first pre-configuration unit may include the following units:
  • a token placement rate configuration unit 111, configured to determine the token placement rate of each token bucket according to a preset token placement rate determination method, such that the sum of the token placement rates of the token buckets is equal to the overall committed information rate (CIR) or peak information rate (PIR) of the link aggregation group interface, where the preset token placement rate method includes a rate equal-division method or a rate weight method;
  • a first space configuration unit 112, configured to determine the space of each token bucket according to a preset space determination method, such that the sum of the spaces of the token buckets is equal to the overall committed burst size (CBS) or peak burst size (PBS) of the link aggregation group interface, where the space determination method includes a space equal-division method or a space weight method;
  • a second space configuration unit 113, configured to configure the space of the credit bucket as the overall CBS or the overall PBS.
  • FIG. 9 is a schematic structural diagram of another data traffic limiting apparatus according to an embodiment of the present invention. On the basis of the apparatus shown in FIG. 7, it further includes a first dynamic adjustment unit 200, configured to, within a preset number of latest periods, when the borrowing information of the tokens the current token bucket borrows from the credit bucket meets a preset bucket-tuning condition, adjust the bucket parameters of the token buckets according to the borrowing information of the current token bucket; the adjusted bucket parameters of the token buckets must still meet the data traffic setting parameters of the link aggregation group interface, and the bucket parameters include the token placement rate and the space.
  • the borrowing information includes the number of borrowed tokens, the borrowing ratio, and/or the number of borrowing occurrences.
  • the first dynamic adjustment unit may include: a first space adjustment unit, a second space adjustment unit, and a first token placement rate adjustment unit.
  • a first space adjustment unit, configured to, if the number of tokens the current token bucket borrows from the credit bucket exceeds a first threshold within a preset number of latest periods, increase the space of the current token bucket according to the borrowing information of the current token bucket;
  • a second space adjustment unit, configured to reduce the space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, where the sum of the reduced spaces is equal to the sum of the increased spaces;
  • the tag distribution information includes the tags of the borrowed tokens and the corresponding borrowed quantities.
  • the first token placement rate adjustment unit is configured to determine a new token placement rate of each token bucket according to the re-determined space of each token bucket and a token placement rate calculation method.
  • alternatively, the first dynamic adjustment unit may include: a second token placement rate adjustment unit, a third space adjustment unit, and a fourth space adjustment unit.
  • a second token placement rate adjustment unit, configured to, if the number of tokens the current token bucket borrows from the credit bucket exceeds a first threshold within a preset number of latest periods, raise the token placement rate of the current token bucket from the initial token placement rate to a new token placement rate according to the borrowing information of the current token bucket, where the new token placement rate is greater than the initial placement rate;
  • a third space adjustment unit, configured to adjust the space of the current token bucket according to the new token placement rate of the current token bucket and the correspondence between a token placement rate and the space of a token bucket;
  • a fourth space adjustment unit, configured to determine the adjustment space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, so that the sum of the reduced spaces is equal to the sum of the increased spaces.
  • the apparatus shown in FIG. 9 may further include a second dynamic adjustment unit 300, configured to dynamically adjust the space of the debit buckets in proportion to the adjustment ratio of the corresponding token buckets.
  • in the apparatus provided in this embodiment, when the borrowing information of the tokens that a certain token bucket in the link aggregation group interface borrows from the credit bucket meets the preset bucket-tuning condition, the bucket parameters of the token bucket are adjusted according to its borrowing information; the bucket parameters include the token placement rate and the space. The bucket parameters of the token buckets of the links in the link aggregation group interface are thereby dynamically adjusted, so that the token placement rate of each token bucket matches the data traffic on its corresponding member port, which improves the control precision of the overall data traffic of the link aggregation group interface.
  • FIG. 10 shows still another data traffic limiting apparatus, which includes: a processor 101, a memory 102, a transmitter 103, and a receiver 104.
  • the receiver 104 is configured to receive a message to be sent.
  • the memory 102 is configured to store the credit bucket configured in advance for each link aggregation group interface in the network device and the bucket parameters of each token bucket, where the credit bucket is used to store the tokens overflowed by each token bucket, with the tokens overflowed by different token buckets marked distinctly. The memory also stores operation instructions executable by the processor 101, and the processor reads the operation instructions in the memory 102 to implement the following functions:
  • when a to-be-sent packet is received, judging whether the number of tokens in the current token bucket corresponding to the to-be-sent packet reaches the number of tokens required to send the to-be-sent packet; when the number of tokens in the current token bucket does not reach the number of tokens required to send the to-be-sent packet, judging whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket reaches the number of tokens required to send the to-be-sent packet; when the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet, correspondingly reducing the numbers of tokens in the current token bucket and in the credit bucket, and sending the to-be-sent packet through the transmitter 103.
  • the memory 102 further stores operation instructions executable by the processor 101, so that the processor reads the operation instructions in the memory 102 to implement the following function: when the sum of the tokens in the current token bucket and the number of tokens in the credit bucket that did not overflow from the current token bucket does not reach the number required to send the to-be-sent packet, discarding the to-be-sent packet or re-marking the to-be-sent packet.
  • the memory 102 further includes an operation instruction that the processor 101 can execute, so that the processor reads the operation instruction in the memory 102 for implementing the following functions:
  • pre-configuring a debit bucket for each single-board chipset, where the debit bucket is used to store the tokens overflowed by the token buckets belonging to the same single-board chipset, so that the credit bucket stores the tokens uniformly taken from the debit buckets; and setting the space of each debit bucket according to a preset space determination method, with the sum of the spaces of the debit buckets equal to the overall CBS/PBS, where the preset space determination method includes a space equal-division method or a space weight method.
  • the memory 102 further includes an operation instruction that the processor 101 can execute, so that the processor reads the operation instruction in the memory 102 for implementing the following functions:
  • increasing the space of the current token bucket according to the borrowing information of the current token bucket; reducing the space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, with the sum of the reduced spaces equal to the sum of the increased spaces; and determining a new token placement rate of each token bucket according to the re-determined space of each token bucket and a token placement rate calculation method.
  • the borrowing information includes the number of borrowed tokens and/or the number of borrowing occurrences; the tag distribution information includes the tags of the lent tokens and the corresponding lent quantities.
  • the memory 102 further includes an operation instruction that the processor 101 can execute, so that the processor reads the operation instruction in the memory 102 for implementing the following functions:
  • raising the token placement rate of the current token bucket from the initial token placement rate to a new token placement rate according to the borrowing information of the current token bucket; adjusting the space of the current token bucket according to the new token placement rate of the current token bucket and the correspondence between a token placement rate and the space of a token bucket; and determining the adjustment space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, so that the sum of the reduced spaces is equal to the sum of the increased spaces.
  • the present invention also provides a network device, including the data traffic limiting apparatus of any one of the foregoing embodiments.
  • the embodiments in this specification are described in a progressive manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others.
  • in particular, the description of the apparatus embodiments is relatively brief; for relevant parts, refer to the description of the method embodiments.
  • the apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement them without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention discloses a data traffic limiting method and apparatus, applied to a network device in which the member ports of a link aggregation group interface are located on different single-board chipsets. First, a credit bucket is configured for each link aggregation group interface, used to store the tokens overflowed by the token buckets corresponding to the member ports, with the tokens overflowed by different token buckets marked distinctly. When a to-be-sent packet is received, if the number of tokens in the token bucket is insufficient to send the packet, tokens that did not overflow from the current token bucket are borrowed from the credit bucket; if the tokens of the current token bucket plus the borrowed tokens are sufficient to send the to-be-sent packet, the packet is sent, and the numbers of tokens in the current token bucket and the credit bucket are reduced accordingly. This achieves dynamic limiting of the data traffic on the multiple links of each link aggregation group interface in the network device, so that the sum of the data traffic of the member ports of the link aggregation group interface better matches the overall data traffic limit set for the interface.

Description

Data traffic limiting method and apparatus
This application claims priority to Chinese Patent Application No. 201410043708.5, filed with the Chinese Patent Office on January 29, 2014 and entitled "Data traffic limiting method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of network communication technologies, and in particular, to a data traffic limiting method and apparatus.
Background
In a communication network, the data traffic of a link may well exceed the service level agreement signed between that link and the service provider. In that case, the link occupies the bandwidth of other links in the communication network, so that the other links cannot obtain normal bandwidth service. In such a situation, to ensure normal operation of the communication network, link data traffic must be limited at the ingress of the communication network, so that it does not exceed the bandwidth service agreed in the service level agreement.
The token bucket is a commonly used data traffic limiting technique. A token bucket is a storage pool inside a network device, and a token is a virtual information packet that fills the token bucket at a given rate. Tokens are placed into the token bucket according to the data traffic allocated to the link, and tokens are taken from the token bucket according to the actual data traffic of the link. When there is no token in the token bucket, data cannot be sent on the link. Usually, a network device configures one token bucket for each single-board chipset. If the token placement rate is greater than the actual data traffic, then once the token bucket is full, further received tokens are discarded; if the token placement rate stays smaller than the actual data traffic, then once the tokens in the bucket are exhausted, data can no longer be sent. This token access mode cannot make good use of the data traffic left over by the other links of a link aggregation group interface.
FIG. 1 is a schematic network topology diagram of a link aggregation group interface (Eth-Trunk) whose member ports span stacking devices (span boards). The link aggregation group interface includes two member ports, which go upstream over links to two different chipsets in the stacking devices. For the link aggregation group interface shown in FIG. 1, traditional data traffic limiting methods include the following two:
One method synchronously spreads the overall CIR (Committed Information Rate) set for the link aggregation group interface to each member port of the interface. Assume the overall CIR configured for the link aggregation group interface shown in FIG. 1 is 100M; then the data traffic each member port of the interface is allowed to pass is also 100M, and the total data traffic the two member ports are allowed to pass is 200M, which does not match the configured overall CIR of 100M.
The other method sets weights by static estimation. Assume the overall CIR set for the link aggregation group interface shown in FIG. 1 is 100M and the weights of the two member ports are 1:1, that is, the limited data traffic of each member port is 50M. If the data traffic shared to one member port is 60M and the data traffic shared to the other member port is 40M, then because the maximum data traffic the first member port is allowed to pass is 50M, the actual data traffic of the two member ports is 50M + 40M = 90M, which fails to meet the service provider's expected overall data traffic of 100M. With this data traffic limiting method, the actual data traffic of each member port in the link aggregation group interface cannot exceed the data traffic allocated to it; whenever the actual data traffic of a member port is lower than its limit, the overall data traffic of the whole link aggregation group interface falls below the configured overall data traffic.
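The shortfall in the static-weight example can be checked directly; `admitted` below is an illustrative helper, not part of the patent:

```python
# Admitted traffic under static per-port caps: each port passes at most
# min(cap, offered load), so an overloaded port's excess is simply lost
# even when another port has headroom.

def admitted(limits, offered):
    return sum(min(l, o) for l, o in zip(limits, offered))
```

With 50M/50M caps and 60M/40M offered loads, the interface admits only 90M of the 100M overall CIR, which is exactly the problem the credit-bucket scheme addresses.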
In summary, with the two traditional data traffic limiting methods, the actual sharing of data traffic among the member ports of a link aggregation group interface can hardly stay consistent with the overall CIR or overall PIR (Peak Information Rate) of the interface.
Summary
Embodiments of the present invention provide a data traffic limiting method and apparatus, to solve the technical problem in the prior art that existing data traffic limiting approaches cannot precisely control data traffic according to the overall data traffic requirement preset for a link aggregation group interface.
To solve the foregoing technical problem, the embodiments of the present invention disclose the following technical solutions:
According to a first aspect, the present invention provides a data traffic limiting method, applied to a network device in which the member ports of a link aggregation group interface are located on different single-board chipsets. A credit bucket is pre-configured for each link aggregation group interface in the network device, where the credit bucket is used to store the tokens overflowed by the token buckets corresponding to the member ports, and the tokens overflowed by different token buckets are marked distinctly; and the bucket parameters of the credit bucket and of each token bucket are pre-configured, where the bucket parameters include the token placement rate of a token bucket, the space of a token bucket, and the space of the credit bucket.
The data traffic limiting method includes:
when a to-be-sent packet is received, judging whether the number of tokens in the current token bucket corresponding to the to-be-sent packet reaches the number of tokens required to send the to-be-sent packet;
when the number of tokens in the current token bucket does not reach the number of tokens required to send the to-be-sent packet, judging whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket reaches the number of tokens required to send the to-be-sent packet;
when the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet, sending the to-be-sent packet, and correspondingly reducing the numbers of tokens in the current token bucket and in the credit bucket.
With reference to the first aspect, in a first possible implementation of the first aspect, judging whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket reaches the number of tokens required to send the to-be-sent packet includes:
determining the token difference between the number of tokens in the current token bucket and the number of tokens required to send the to-be-sent packet;
judging whether the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference;
when the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference, determining that the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet;
when the number of tokens in the credit bucket that did not overflow from the current token bucket is less than the token difference, determining that the tokens in the current token bucket and said tokens in the credit bucket do not reach the number of tokens required to send the to-be-sent packet.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the method further includes: when the sum of the number of tokens in the current token bucket and the number of tokens in the credit bucket that did not overflow from the current token bucket does not reach the number required to send the to-be-sent packet, discarding the to-be-sent packet or re-marking the to-be-sent packet.
With reference to the first aspect, in a third possible implementation of the first aspect, pre-configuring the bucket parameters of the credit bucket and of each token bucket includes:
determining the token placement rate of each token bucket according to a preset token placement rate determination method, where the sum of the token placement rates of the token buckets is equal to the overall committed information rate or peak information rate of the link aggregation group interface, and the preset token placement rate determination method includes a rate equal-division method or a rate weight method;
determining the space of each token bucket according to a preset space determination method, where the sum of the spaces of the token buckets is equal to the overall committed burst size or peak burst size of the link aggregation group interface, and the preset space determination method includes a space equal-division method or a space weight method;
the space of the credit bucket is equal to the overall committed burst size or peak burst size.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the method further includes: discarding tokens in the credit bucket when the credit bucket meets a preset condition;
where discarding tokens in the credit bucket when the credit bucket meets the preset condition includes:
when tokens in the credit bucket have been stored longer than a preset duration, discarding the tokens according to a preset fairness algorithm;
when it is detected that a token bucket has no token, discarding the tokens stored in the credit bucket that overflowed from that token bucket before the current period.
With reference to the first aspect, in a fifth possible implementation of the first aspect, the method further includes:
pre-configuring a debit bucket for each token bucket, where the debit bucket is used to store the tokens overflowed by the token buckets belonging to the same single-board chipset, so that the credit bucket stores the tokens taken from the debit buckets according to a preset fairness algorithm; and setting the space of each debit bucket according to a preset space determination method, where the sum of the spaces of the debit buckets is equal to the overall committed burst size or peak burst size, and the preset space determination method includes a space equal-division method or a space weight method.
With reference to the first aspect, or the first, third, fourth, or fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the method further includes:
within a preset number of latest periods, if the borrowing information of the tokens the current token bucket borrows from the credit bucket meets a preset bucket-tuning condition, increasing the space of the current token bucket according to the borrowing information of the current token bucket, where the borrowing information includes the number of borrowed tokens and/or the number of borrowing occurrences;
reducing the space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, where the sum of the reduced spaces is equal to the sum of the increased spaces;
determining a new token placement rate of each token bucket according to the re-determined space of each token bucket and a token placement rate calculation method.
With reference to the first aspect, or the first, third, fourth, or fifth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the method further includes:
within a preset number of latest periods, if the borrowing information of the tokens the current token bucket borrows from the credit bucket meets the preset bucket-tuning condition, raising the token placement rate of the current token bucket from the initial token placement rate to a new token placement rate according to the number of tokens the current token bucket borrows, where the borrowing information includes the number of borrowed tokens and/or the number of borrowing occurrences.
With reference to the seventh possible implementation of the first aspect, in an eighth possible implementation of the first aspect, the method further includes:
adjusting the space of the current token bucket according to the new token placement rate of the current token bucket and the correspondence between a token placement rate and the space of a token bucket;
determining the adjustment space of the corresponding non-current token buckets according to the tag distribution information of the tokens the current token bucket borrows from the credit bucket, so that the sum of the reduced spaces is equal to the sum of the increased spaces.
According to a second aspect, the present invention further provides a data traffic limiting apparatus, including: a first pre-configuration unit, a receiving unit, a first judging unit, a second judging unit, a sending unit, and a token management unit;
the first pre-configuration unit is configured to pre-configure a credit bucket for each link aggregation group interface in the network device, where the credit bucket is used to store the tokens overflowed by each token bucket, and the tokens overflowed by different token buckets are marked distinctly; and to pre-configure the bucket parameters of the credit bucket and of each token bucket, where the bucket parameters include the token placement rate of a token bucket, the space of a token bucket, and the space of the credit bucket;
the receiving unit is configured to receive a to-be-sent packet;
the first judging unit is configured to judge whether the number of tokens in the current token bucket corresponding to the to-be-sent packet reaches the number of tokens required to send the to-be-sent packet;
the second judging unit is configured to judge, when the number of tokens in the current token bucket does not reach the number of tokens required to send the to-be-sent packet, whether the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket reaches the number of tokens required to send the to-be-sent packet;
the sending unit is configured to send the to-be-sent packet when the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet;
the token management unit is configured to correspondingly reduce the numbers of tokens in the current token bucket and in the credit bucket.
With reference to the second aspect, in a first possible implementation of the second aspect, the second judging unit includes:
a determining subunit, configured to determine the token difference between the number of tokens in the current token bucket and the number of tokens required to send the to-be-sent packet;
a judging subunit, configured to judge whether the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference;
a first determining subunit, configured to determine, when the number of tokens in the credit bucket that did not overflow from the current token bucket is not less than the token difference, that the sum of the tokens in the current token bucket and said tokens in the credit bucket reaches the number of tokens required to send the to-be-sent packet;
a second determining subunit, configured to determine, when the number of tokens in the credit bucket that did not overflow from the current token bucket is less than the token difference, that the tokens in the current token bucket and said tokens in the credit bucket do not reach the number of tokens required to send the to-be-sent packet.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the apparatus further includes: a packet processing unit;
the packet processing unit is configured to discard the to-be-sent packet when the sum of the tokens in the current token bucket and the tokens in the credit bucket that did not overflow from the current token bucket does not reach the number required to send the to-be-sent packet;
or is configured to re-mark the to-be-sent packet when the sum of the number of tokens in the current token bucket and the number of tokens in the credit bucket that did not overflow from the current token bucket does not reach the number required to send the to-be-sent packet.
With reference to the second aspect or the first possible implementation of the second aspect, in a third possible implementation of the second aspect, the first pre-configuration unit includes:
a token placement rate configuration unit, configured to determine the token placement rate of each token bucket according to a preset token placement rate determination method, where the sum of the token placement rates of the token buckets is equal to the overall committed information rate or peak information rate of the link aggregation group interface, and the preset token placement rate method includes a rate equal-division method or a rate weight method;
a first space configuration unit, configured to determine the space of each token bucket according to a preset space determination method, where the sum of the spaces of the token buckets is equal to the overall committed burst size or peak burst size of the link aggregation group interface, and the space determination method includes a space equal-division method or a space weight method;
a second space configuration unit, configured to configure the space of the credit bucket as the overall committed burst size or peak burst size.
结合第二方面,在第二方面的第四种可能的实现方式中,所述装置还包括:
第二预配置单元,用于预先为每个单板芯片组配置一个借记桶,并根据预设空间确定法设置所述借记桶的空间,且使各个所述借记桶的空间之和等于所述整体承诺突发尺寸或峰值突发尺寸,所述预设空间确定方法包括空间等分确定法或空间权重确定法,其中,所述借记桶用于存储属于同一单板芯片组的令牌桶溢出的令牌,以使所述贷记桶存储按照预设条件均匀从各个所述借记桶中取出的令牌。
结合第二方面、第二方面的第一种可能的实现方式或第二方面的第四种可能的实现方式,在第二方面的第五种可能的实现方式中,所述装置还包括:
第一空间调整单元,用于在预设个最新周期内,若当前令牌桶从所述贷记桶中借贷的令牌数量超过第一阈值,按照所述当前令牌桶的借贷信息增长所述当前令牌桶的空间;
第二空间调整单元,用于根据当前令牌桶从贷记桶借贷令牌的标记分布信息,减少相应的非当前令牌桶的空间,减小的空间之和等于增加的空间之和;
第一令牌放置速率调整单元,用于根据重新确定的各个令牌桶的空间及令牌放置速率计算方法,确定各个令牌桶的新的令牌放置速率。
结合第二方面、第二方面的第一种可能的实现方式或第二方面的第四种可能的实现方式,在第二方面的第六种可能的实现方式中,所述装置还包括:
第二令牌放置速率调整单元,用于在预设个最新周期内,若当前令牌桶向所述贷记桶借贷的令牌数量超过第一阈值,按照所述当前令牌桶的借贷信息,将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率。
结合第二方面的第六种可能的实现方式,在第二方面的第七种可能的实现方式中,所述装置还包括:
第三空间调整单元,用于根据所述当前令牌桶的新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间;
第四空间调整单元,用于根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息,确定相应非当前令牌桶的调整空间,以使减小的空间之和等于增加的空间之和。
第三方面,本发明提供一种网络设备,包括第二方面所述的任意一种数据流量限制装置。
由以上技术方案可见，本发明实施例提供的数据流量限制方法，首先为网络设备内的每个链路聚合组接口配置一个贷记桶，用于存储各个令牌桶溢出的令牌，且为不同的令牌桶溢出的令牌区分标记；当接收到待发送报文时，判断待发送报文对应的令牌桶内的令牌是否足以发送待发送报文，如果所述令牌桶中的令牌数量不满足发送待发送报文所需的数量，则从贷记桶中借贷非当前令牌桶溢出的令牌，若当前令牌桶的令牌与借贷的令牌足够发送待发送报文，则发送所述待发送报文，同时，减少当前令牌桶及贷记桶内的令牌数量，以实现对网络设备中每个链路聚合组接口的多个链路上的数据流量的动态限制，使该链路聚合组接口内各个成员端口的数据流量之和能够更好地与链路聚合组接口所限制的整体数据流量相匹配。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案，下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍。显而易见地，对于本领域普通技术人员而言，在不付出创造性劳动的前提下，还可以根据这些附图获得其它的附图。
图1为一种跨框链路聚合组接口的结构示意图;
图2为本发明实施例提供的一种数据流量限制方法的流程示意图;
图3为本发明实施例针对图1的一种具体应用场景示意图;
图4为针对图1所示的链路聚合组接口的数据流量变化曲线图;
图5为本发明实施例针对图1的另一种具体应用场景示意图;
图6为本发明实施例另一种数据流量限制方法的流程示意图;
图7为本发明实施例一种数据流量限制装置的结构示意图;
图8为本发明图7所示的所述第一预配置单元的结构示意图;
图9为本发明实施例另一种数据流量限制装置的结构示意图；
图10为本发明实施例再一种数据流量限制装置的结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本发明中的技术方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都应当属于本发明保护的范围。
本发明实施例提供的数据流量限制方法，应用于网络设备（例如，框式设备或堆叠设备等）内的链路聚合组接口，其中，所述链路聚合组接口为成员端口跨单板或跨堆叠设备的链路聚合组接口(Eth-Trunk)，即链路聚合组接口的成员端口位于不同的单板芯片组中。对于服务提供商而言，此种链路聚合组接口的数据流量的限制参数是针对整个链路聚合组接口而言的，即该链路聚合组接口的全部成员端口允许通过的数据流量之和为服务提供商针对该链路聚合组接口所设定的整体数据流量。
在进行数据流量限制之前,首先为网络设备内每个链路聚合组接口(网络设备上可能存在多个链路聚合组接口)配置一个贷记桶。所述贷记桶用于存储链路聚合组接口的各个单板芯片组对应令牌桶溢出的令牌,且为不同令牌桶溢出的令牌区分标记(例如,可以将不同的令牌桶溢出的令牌染成不同颜色);
可选地,贷记桶按照预设条件丢弃贷记桶内的部分或全部令牌,而且,贷记桶丢弃令牌时按照预设公平算法丢弃来自不同令牌桶的令牌,从而保证贷记桶内的令牌能够准确反映链路聚合组接口各个成员端口的剩余数据流量的情况。所述预设条件可以包括以下情况:
①在令牌桶未从贷记桶中借贷令牌的周期内，当贷记桶内令牌的存放时间超过预设时长时，按照预设公平算法丢弃所述令牌，而且，当所述贷记桶内的令牌来自不同的令牌桶时，按照预设公平算法丢弃来自不同令牌桶的所述令牌；其中，所述周期可以是CPU的时钟周期的预设倍数。
②当检测到令牌桶内无令牌时,则丢弃贷记桶内存放的该令牌桶在当前周期之前溢出的令牌。
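上述条件①的老化丢弃过程可以用如下 Python 片段示意（仅为简化草图，数据结构与函数名均为说明用的假设，并非本发明的实际实现）：

```python
def drop_expired(credit_bucket, now, max_age):
    """逐颜色丢弃存放时间超过预设时长 max_age 的令牌。

    credit_bucket: {颜色标记: [(令牌数量, 入桶时刻), ...]}
    逐颜色处理各自的过期批次, 对来自不同令牌桶的令牌一视同仁,
    可视为最简单的一种公平丢弃方式。
    """
    for color, batches in credit_bucket.items():
        credit_bucket[color] = [(n, t) for (n, t) in batches
                                if now - t <= max_age]
    return credit_bucket

bucket = {"blue": [(20, 0)], "green": [(10, 0), (5, 2)]}
drop_expired(bucket, now=3, max_age=2)
# 时刻 0 入桶的令牌已超过预设时长被丢弃, 仅保留时刻 2 入桶的 5 个绿色令牌
```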
以及,预先配置贷记桶及各个令牌桶的桶参数,所述桶参数包括令牌桶的令牌放置速率、令牌桶的空间和贷记桶的空间。
假设链路聚合组接口拥有的单板芯片组(例如,框式设备的单板芯片或盒式设备的芯片)的令牌桶个数为M,链路聚合组接口整体CIR(或PIR)值为N1,整体CBS或PBS值为N2,第i个令牌桶的初始空间为CBSi(或PBSi),第i个令牌桶的初始令牌放置速率为CIRi(或PIRi),则各个令牌桶参数满足以下公式:
CIR1+CIR2+…+CIRM=N1（或 PIR1+PIR2+…+PIRM=N1）    (式1)

CBS1+CBS2+…+CBSM=N2（或 PBS1+PBS2+…+PBSM=N2）    (式2)
其中,CBS(Committed Burst Size,承诺突发尺寸),PBS(Peak Burst Size,峰值突发尺寸)。
贷记桶的空间等于各个令牌桶的空间之和,即贷记桶的空间等于N2。
配置令牌桶参数的过程可以包括以下过程:
按照预设令牌放置速率确定方法确定令牌放置速率CIRi，比如，可以按照速率等分确定法，即CIRi=N1/M；也可以按照成员端口的权重值确定，即速率权重确定法，假设第i个成员端口的权重值为Pi，即
CIRi=N1×Pi/(P1+P2+…+PM)
当然，还可以利用其它的方法确定令牌放置速率，本申请对此不作限制。
所述令牌桶的初始空间CBSi可以利用空间确定方法确定，比如，空间等分确定法，即CBSi=N2/M；也可以按照空间权重确定法，假设第i个成员端口的权重值为Pi，即
CBSi=N2×Pi/(P1+P2+…+PM)
当然，还可以利用其它方法确定令牌桶的空间。
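式1、式2以及上述等分/权重确定法可以用如下 Python 片段示意（函数名与变量名均为说明用的假设）：

```python
def configure_buckets(n1, n2, weights):
    """按权重确定法为 M 个令牌桶配置令牌放置速率与空间。

    n1: 链路聚合组接口的整体 CIR(或 PIR), 即式1中的 N1
    n2: 整体 CBS(或 PBS), 即式2中的 N2
    weights: 各成员端口的权重值 Pi; 权重全部相等时即退化为等分确定法
    """
    total = sum(weights)
    cir = [n1 * p / total for p in weights]  # CIRi = N1*Pi/(P1+...+PM)
    cbs = [n2 * p / total for p in weights]  # CBSi = N2*Pi/(P1+...+PM)
    credit_space = n2                        # 贷记桶的空间等于整体 CBS/PBS
    return cir, cbs, credit_space

# 等分确定法: 两个成员端口权重相同, 即 CIRi = N1/M = 50M
cir, cbs, space = configure_buckets(100, 100, [1, 1])
```

按此配置，各令牌桶的速率之和与空间之和恰好等于式1、式2中的 N1 与 N2。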
参见图2,为本发明实施例提供的一种数据流量限制方法的流程示意图,如图2所示,所述方法可以包括以下步骤:
S10,接收待发送报文。
S11,判断所述待发送报文对应的成员端口的当前令牌桶内的令牌数量是否达到发送所述待发送报文的令牌数量;如果否,则执行步骤S13;如果是,则执行步骤S12。
当接收到待发送报文时，确定所述待发送报文是哪个成员端口的报文，并确定与该成员端口对应的单板芯片组，从而确定对应的令牌桶。确定令牌桶后，判断所述令牌桶内的令牌是否满足发送待发送报文的令牌数量，如果满足，则在步骤S12，利用所述令牌桶内的令牌发送待发送报文，并减少所述令牌桶内相应数量的令牌。
当所述当前令牌桶内的令牌数量未达到发送所述待发送报文的令牌数量时,在步骤S13,判断当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和是否达到发送所述待发送报文的令牌数量;如果是,执行步骤S14;如果否,则执行步骤S15。
在本发明的一个实施例中,步骤S13的判断过程可以包括以下步骤:
1),确定当前令牌桶内的令牌数量与发送所述待发送报文的令牌数量之间的令牌差额;
2),判断贷记桶内非当前令牌桶溢出的令牌的数量是否不小于所述令牌差额;
当所述贷记桶内非当前令牌桶溢出令牌的数量不小于所述令牌差额时，在步骤3)，确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌的数量总和达到发送所述待发送报文的令牌数量；
当所述贷记桶内非当前令牌桶溢出令牌的数量小于所述令牌差额时，在步骤4)，确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌未达到发送所述待发送报文的令牌数量。
当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和达到发送所述待发送报文的令牌数量时,在步骤S14,发送所述待发送报文,并相应减少当前令牌桶及所述贷记桶内的令牌数量。
假设发送所述待发送报文缺少的令牌数量为N,则在发送所述待发送报文后,当前令牌桶内的令牌清空,同时,贷记桶内的非当前令牌桶溢出的令牌减少N。
当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和未达到发送所述待发送报文的令牌数量时,在步骤S15,丢弃报文或重新标记所述报文。
当贷记桶内非当前令牌桶溢出的令牌小于所述令牌差额时,即没有足够数量的令牌发送所述待发送报文,丢弃所述待发送报文,或者,重新标记所述待发送报文的IP Precedence、DSCP或EXP等字段的值,然后再发送重新标记后的报文。此时,令牌桶和贷记桶内的令牌数量均不发生变化。
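步骤S11至S15的判断与借贷流程可以概括为如下 Python 片段（简化示意，假设令牌数量以整数计，函数名为说明用的假设，不构成实际实现）：

```python
def try_send(pkt_tokens, current, credit_other):
    """返回 (动作, 新的当前令牌桶令牌数, 新的贷记桶可借令牌数)。

    pkt_tokens:   发送待发送报文所需的令牌数量
    current:      当前令牌桶内的令牌数量
    credit_other: 贷记桶内由非当前令牌桶溢出的令牌数量
    """
    if current >= pkt_tokens:                       # S11->S12: 本桶令牌足够
        return "send", current - pkt_tokens, credit_other
    deficit = pkt_tokens - current                  # 令牌差额
    if credit_other >= deficit:                     # S13->S14: 借贷后足够
        return "send", 0, credit_other - deficit    # 本桶清空, 贷记桶减少差额N
    return "drop_or_remark", current, credit_other  # S15: 丢弃或重新标记

# 本桶缺 4 个令牌, 贷记桶内有 5 个非当前令牌桶溢出的令牌, 借贷后可发送
print(try_send(10, 6, 5))  # ('send', 0, 1)
```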
本实施例提供的数据流量限制方法，首先为网络设备内的每个链路聚合组接口配置一个贷记桶，用于存储各个令牌桶溢出的令牌，且为不同的令牌桶溢出的令牌区分标记；当接收到待发送报文时，首先判断待发送报文对应令牌桶中的令牌数量是否满足发送待发送报文所需的数量，如果所述令牌桶中的令牌数量不满足发送待发送报文所需的数量，则从贷记桶中借贷非当前令牌桶溢出的令牌，若当前令牌桶的令牌与借贷的令牌足够发送待发送报文，则发送所述待发送报文，同时，减少当前令牌桶及贷记桶内相应令牌的令牌数量，以实现对网络设备中每个链路聚合组接口的多个链路上的数据流量的动态限制，使该链路聚合组接口内各个成员端口的数据流量之和能够更好地与链路聚合组接口所限制的整体数据流量相匹配。
如图3所示,为与图1所示实施例对应的一种具体应用场景的示意图。
如图3所示,当前链路聚合组接口的两个成员端口分别位于单板芯片组1和单板芯片组2,两个单板芯片组配置的令牌桶分别为令牌桶11和令牌桶21,为该链路聚合组接口配置贷记桶3。贷记桶3将令牌桶11溢出的令牌染成蓝色,将令牌桶21溢出的令牌染成绿色。本实例中所示的链路聚合组接口包含2个链路,本申请实施例所提供的数据流量限制方法能够应用到包含任意多个链路的链路聚合组接口中。
为方便说明,假设该链路聚合组接口的整体CIR为100M,令牌桶11和令牌桶21的CIR均设置为50M的数据流量。而且,从链路聚合组建立后可以正常工作的初始时刻t0,各个令牌桶是满的,即令牌桶内已经装满令牌,根据成员端口的实际数据流量从令牌桶中取出令牌,然后按照预设的令牌放置速率向令牌桶中放置令牌。
如图4所示,该链路聚合组接口两个成员端口的数据流量随时间的变化曲线示意图;图4中,横轴为时间轴,纵轴为成员端口的数据流量,图中○所在的曲线表示单板芯片组1对应的成员端口的数据流量变化曲线,图中的□所在的曲线表示单板芯片组2对应的成员端口的数据流量变化曲线,其中,假设每两个时刻点之间的时间间隔为一个周期,即图中给出了九个周期的数据流量变化曲线示意图,其中,所述周期为网络设备的CPU的时钟周期的预设倍数。
从t1到t2时间段内，负载分担到芯片组1对应的成员端口的数据流量为30M(没有超过限制的数据流量50M)，分担到芯片组2对应的成员端口的数据流量为40M(没有超过限制的数据流量50M)，则该链路聚合组接口的总数据流量为70M。该时间段内，令牌桶11的令牌放置速率(50M)大于数据流量速率(30M)，令牌桶21的令牌放置速率(50M)大于数据流量速率(40M)。即两者均有令牌溢出，溢出的令牌被贷记桶按某种公平算法取走后染成对应的颜色，令牌桶11溢出的令牌被染成蓝色放入贷记桶3内，令牌桶21溢出的令牌被染成绿色放入贷记桶3内，贷记桶3内存放的令牌达到预设时长后，按照预设公平算法丢弃不同颜色的令牌，假设所述预设时长为一个周期，贷记桶3内的蓝色令牌和绿色令牌全部被丢弃。
在t2时刻,负载分担到芯片组2对应的成员端口的数据流量从40M增加到60M,芯片组1对应的成员端口的数据流量未变,从t2到t3时间段内,负载分担到芯片组2对应的成员端口的数据流量为60M,此时,令牌桶11的令牌放置速率(50M)大于数据流量速率(30M),即令牌桶11的令牌溢出,溢出的令牌被贷记桶3取走并染成蓝色,从t2到t3时间段内,令牌桶11剩余20M*(t3-t2)的令牌放入贷记桶3中。而令牌桶21的令牌放置速率小于数据流量,即芯片组2对应的成员端口在发送报文时,需要向贷记桶3借贷令牌桶11溢出的令牌,贷记桶中的蓝色令牌数量为20M*(t3-t2)大于(60-50)M*(t3-t2),因此芯片组2对应的成员端口允许通过60M数据流量,该链路聚合组接口的整体数据流量为30M+60M=90M,未超过整体数据流量100M。到t3时刻时,贷记桶内剩余的10M*(t3-t2)的蓝色令牌被丢弃。从t3到t7时间段内,两个芯片组的数据流量与t2到t3时间段内的数据流量变化情况相同,令牌的分配过程相同,此处不再赘述。
在t7时刻，负载分担到芯片组1对应成员端口的数据流量从30M增加到50M，从t7到t8时间段内，令牌桶11的令牌放置速率与取出令牌的速率相同，没有多余的令牌放入贷记桶3内；此时间段内，负载分担到芯片组2所在成员端口的数据流量仍为60M，即令牌桶21的令牌放置速率小于取出令牌的速率，需要从贷记桶中借贷10M的令牌才能通过60M的数据流量，但是，由于令牌桶11没有多余的令牌放入贷记桶中，因此，贷记桶内没有满足条件的令牌，故芯片组2所在成员端口只能通过50M的数据流量，两个成员端口的总数据流量之和为50M+50M=100M，等于为链路聚合组接口设置的整体CIR或整体PIR。
请参见图5,示出了本申请实施例另一种应用场景的示意图,在图3所示的应用场景中,还可以为每个单板芯片组配置一个借记桶。设置借记桶后,发送报文和令牌的分配流程与图1所示实施例中的相应流程相同,此处不再赘述。
所述链路聚合组接口的两个成员端口分别设置在单板芯片组1和单板芯片组2。为单板芯片组1配置令牌桶11和借记桶12，为单板芯片组2配置令牌桶21和借记桶22。其中，借记桶用于存储同一单板芯片组内令牌桶溢出的令牌，当借记桶满时，令牌桶溢出的令牌直接丢弃。为网络设备的该链路聚合组接口配置贷记桶3，当满足预设触发条件后，贷记桶3按照预设公平算法从各个借记桶取走令牌，确保在周期T内，贷记桶3均匀从全部有令牌溢出的令牌桶对应的借记桶中取走令牌(即保证了贷记桶内令牌的标记分布均匀，或者，令牌颜色分布满足预设比例)，进而保证链路聚合组接口中任意一个成员端口的数据流量突然增加，而且其余成员端口的数据流量有剩余时，数据流量增加的成员端口能够从贷记桶中借贷出令牌，避免贷记桶内的令牌颜色单一，无法使令牌桶借贷其它颜色令牌的现象发生，实现对成员端口跨芯片组的链路聚合组接口整体数据流量精确控制。
其中,所述预设触发条件可以包括以下情况:
①在没有令牌桶从贷记桶借贷令牌的周期内,贷记桶按照第一预设时长,并按照公平算法均匀从各个借记桶中取走令牌;
②当有令牌桶从贷记桶借贷令牌时,且距离贷记桶上一次从借记桶内取走令牌超过第二预设时长时,按照公平算法均匀从各个借记桶中取走令牌;
③当贷记桶内的令牌数量低于预设值时,贷记桶按照公平算法从各个借记桶中取走令牌。所述预设值可以根据需求自由设定。
其中,各个借记桶的空间之和等于链路聚合组接口的整体CBS或整体PBS,每个借记桶的空间可以按照等分确定法或权重确定法确定,当然也可以采用其它的方法确定。
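贷记桶在触发时间点按公平算法从各个借记桶均匀取令牌的过程，可以用如下 Python 片段示意（轮询等量取走仅是公平算法的一种假设实现，变量名均为说明用的假设）：

```python
def refill_credit(credit, debit_buckets, amount_each):
    """从每个借记桶最多取走 amount_each 个令牌, 放入贷记桶并按来源标记,
    以保证贷记桶内令牌的标记分布尽量均匀。"""
    for name in debit_buckets:
        take = min(debit_buckets[name], amount_each)
        debit_buckets[name] -= take
        credit[name] = credit.get(name, 0) + take   # 按来源"染色"
    return credit

debit = {"chip1": 20, "chip2": 5}
credit = refill_credit({}, debit, amount_each=10)
# chip1 被取走 10 个; chip2 只有 5 个, 全部取走
```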
延用图4中的数据流量变化实例,按照指定的令牌放置速率(50M)向令牌桶11和令牌桶21中放令牌,从t1到t2时间段内,令牌桶11的令牌放置速率(50M)大于数据流量(30M),令牌溢出,溢出的令牌放入借记桶12内;同时,令牌桶21的令牌放置速率(50M)大于数据流量(40M),令牌溢出,溢出的令牌放入借记桶22内。此时,贷记桶3按照某种公平算法从借记桶12和借记桶22中取走令牌,并将两个借记桶取出的令牌染成不同的颜色。其中,所述公平算法可以是任意一种公平算法,本申请对此并不限制。
本实施例提供的链路聚合组接口结构,为链路聚合组接口内的每个单板芯片组配置一个借记桶,利用借记桶存储同一单板芯片组的令牌桶溢出的令牌,然后,贷记桶在触发时间点从各个借记桶取走令牌,避免令牌桶一有令牌溢出,贷记桶就要从令牌桶中取走令牌,即避免了与贷记桶对应的主控板与令牌桶对应的单板之间的频繁交互,从而降低了对贷记桶对应的主控板性能参数的要求,降低了网络设备的成本。
请参见图6,示出了本申请实施例另一种数据流量限制方法的流程示意图,在图2所示实施例的基础上,还包括以下步骤:
在步骤S14之后，执行步骤S16，统计借出令牌的标记分布信息；所述标记分布信息包括令牌的标记及对应的借贷数量，即统计各个颜色标记的令牌及对应的借出数量，例如，绿色令牌借出20M。
S17,当前令牌桶从所述贷记桶中借贷令牌的借贷信息满足调桶预设条件时,按照所述当前令牌桶的令牌借贷信息及所述标记分布信息调整对应的令牌桶的桶参数;而且调整后的各个令牌桶的桶参数仍需满足所述链路聚合组接口的数据流量设定参数,所述桶参数包括令牌放置速率和空间。所述借贷信息包括借贷令牌的数量、借贷令牌的次数和/或借贷比例,所述借贷比例是指某一令牌桶借贷令牌的数量与发送所述待发送报文所需的令牌的数量的比值。
所述调桶预设条件可以包括：某一令牌桶连续从贷记桶借贷的令牌数量超过第一阈值，和/或，某一令牌桶连续从贷记桶借贷令牌的次数达到预设次数，表明某一成员端口的流量总是超过为其设定的数据流量，且其它成员端口的流量总是小于设定的数据流量，此时，触发动态调桶：增加总是从贷记桶借贷令牌的令牌桶的令牌放置速率和空间，减小总是溢出令牌的令牌桶的令牌放置速率和空间，且满足减小的令牌放置速率之和等于增加的令牌放置速率之和，即保证调整后的各个令牌桶的令牌放置速率之和仍等于该链路聚合组接口的整体CIR或整体PIR，以及，满足减小的空间之和等于增加的空间之和，即保证调整后的各个令牌桶的空间之和仍为该链路聚合组接口的整体CBS或整体PBS。所述第一阈值可以根据需要自由设定。
在本发明的一个实施例中,调整当前令牌桶的桶参数过程可以包括以下步骤:
11),按照所述当前令牌桶的借贷信息(例如,借贷比例)增长所述当前令牌桶的空间;
12),按照当前令牌桶从贷记桶借贷令牌的标记分布信息,减少相应的非当前令牌桶的空间,减小的空间之和等于增加的空间之和;
13),根据重新确定的各个令牌桶的空间及令牌放置速率计算方法,确定各个令牌桶的新的令牌放置速率。
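步骤11)至13)的动态调桶过程可以用如下 Python 片段示意（其中假设令牌放置速率与空间成正比，即按空间占比重新分配整体CIR；函数名与变量名均为说明用的假设）：

```python
def adjust_buckets(spaces, current, borrowed, n1, n2):
    """先按借贷量调整空间, 再按新空间重算各令牌桶的令牌放置速率。

    spaces:   {令牌桶: 空间}
    borrowed: 标记分布信息 {被借出令牌的令牌桶: 借出数量}
    n1, n2:   链路聚合组接口的整体 CIR(PIR) 与整体 CBS(PBS)
    """
    spaces[current] += sum(borrowed.values())   # 11) 增长当前令牌桶的空间
    for src, amt in borrowed.items():           # 12) 按标记分布减少相应桶空间
        spaces[src] -= amt                      #     减小之和等于增加之和
    assert sum(spaces.values()) == n2           # 空间之和仍为整体 CBS/PBS
    rates = {b: n1 * s / n2 for b, s in spaces.items()}  # 13) 重算放置速率
    return spaces, rates

# 对应下文实例: 令牌桶21每周期从贷记桶借出 10M 来自令牌桶11的令牌
spaces, rates = adjust_buckets({"b11": 50, "b21": 50}, "b21", {"b11": 10},
                               n1=100, n2=100)
# spaces -> {"b11": 40, "b21": 60}, rates -> {"b11": 40.0, "b21": 60.0}
```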
延用图4所示的流量变化实例，假设在从t2到t4的时间段内，负载分担到单板芯片组1对应的成员端口的数据流量均为30M，负载分担到单板芯片组2对应的成员端口的数据流量均为60M，令牌桶21持续从贷记桶3借出令牌，且每个周期，令牌桶21从贷记桶中借出的令牌速率为10M，且借出的令牌均来自令牌桶11。则计算出所需调整空间，以及调整策略包括：令牌桶11的空间调整为50M*T-10M*T=40M*T，对应的令牌放置速率调整为40M；同时，令牌桶21的空间调整为50M*T+10M*T=60M*T，对应的令牌放置速率调整为60M。在动态调整令牌桶的桶参数之后，从t4到t7的时间段内，各个令牌桶的令牌放置速率和该成员端口的数据流量相符，无需从贷记桶借出令牌，最终实现动态控制成员端口的数据流量，且符合链路聚合组接口的整体数据流量，提高了整体数据流量的控制精度。
在本发明的另一个实施例中,与上述的动态调桶过程的区别在于,首先确定需调整令牌放置速率,再由令牌放置速率与令牌桶的空间之间的关系,确定需调整的令牌桶的空间及最终的调整策略。调整当前令牌桶的桶参数的过程可以包括以下步骤:
21),按照所述当前令牌桶的借贷信息,将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率。其中,所述新的令牌放置速率大于所述初始令牌放置速率。
22),根据所述当前令牌桶新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间。
23),根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息,确定相应非当前令牌桶的调整空间,以使减小的空间之和等于增加的空间之和。
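步骤21)至23)的速率优先调整方式可以用如下 Python 片段示意（假设令牌放置速率与令牌桶空间的对应关系为"空间=速率×周期T"，与上文 50M*T 的记法一致；仅为草图，变量名均为假设）：

```python
def adjust_rate_first(rates, spaces, current, new_rate, borrowed, period):
    """先提高当前令牌桶的令牌放置速率, 再按对应关系调整空间,
    并按标记分布信息等量减少相应非当前令牌桶的空间。

    borrowed: 标记分布信息 {被借出令牌的令牌桶: 借出数量}
    """
    delta_space = (new_rate - rates[current]) * period  # 速率提升带来的空间增量
    rates[current] = new_rate                           # 21) 提高放置速率
    spaces[current] += delta_space                      # 22) 按对应关系调整空间
    total = sum(borrowed.values())
    for src, amt in borrowed.items():                   # 23) 按标记分布比例减少
        spaces[src] -= delta_space * amt / total        #     其它桶的空间
    return rates, spaces

rates, spaces = adjust_rate_first({"b11": 50, "b21": 50}, {"b11": 50, "b21": 50},
                                  "b21", 60, {"b11": 10}, period=1)
# 减小的空间之和(10)等于增加的空间之和(10)
```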
优选地，对于图5所示的应用场景，在动态调整令牌桶的桶参数之后，还可以根据令牌桶的调整比例，动态调整借记桶的空间。
本实施例提供的数据流量限制方法，当链路聚合组接口中的某一令牌桶从所述贷记桶中借贷令牌的借贷信息满足调桶预设条件时，按照所述当前令牌桶的借贷信息调整相应的令牌桶的桶参数，所述桶参数包括令牌放置速率和空间。从而最终实现动态调整链路聚合组接口内各个链路的令牌桶的桶参数，使各个令牌桶的令牌放置速率与其所对应的成员端口上的数据流量相匹配，提高了链路聚合组接口的整体数据流量控制精度。
通过以上的方法实施例的描述,所属领域的技术人员可以清楚地了解到本发明可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:只读存储器(ROM)、随机存取存储器(RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
与本发明提供的数据流量限制方法实施例相对应，本发明还提供了数据流量限制装置实施例。
请参见图7,示出了本发明实施例一种数据流量限制装置的结构示意图,所述数据流量限制装置应用于网络设备,所述网络设备可以是框式设备或堆叠设备等。如图7所示,所述装置包括:第一预配置单元110、接收单元120、第一判断单元130、第二判断单元140、发送单元150和令牌管理单元160。
所述第一预配置单元110,用于预先为网络设备内的每个链路聚合组接口配置一个贷记桶,所述贷记桶用于存储各个令牌桶溢出的令牌,且为不同令牌桶溢出的令牌进行区分标记;以及,预先配置贷记桶及各个令牌桶的桶参数,所述桶参数包括令牌桶的令牌放置速率、令牌桶的空间和贷记桶的空间。
所述接收单元120,用于接收待发送报文。
所述第一判断单元130,用于判断所述待发送报文对应的当前令牌桶内的令牌数量是否达到发送所述待发送报文的令牌数量。
所述第二判断单元140,用于当所述当前令牌桶内的令牌数量未达到发送所述待发送报文的令牌数量时,判断当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和是否达到发送所述待发送报文的令牌数量。
在本发明的一个实施例中,所述第二判断单元可以包括以下子单元:
确定子单元,用于确定当前令牌桶内的令牌数量与发送所述待发送报文的令牌数量之间的令牌差额。
判断子单元,用于判断所述贷记桶内非当前令牌桶溢出令牌的数量是否不小于所述令牌差额。
第一确定子单元,用于当所述贷记桶内非当前令牌桶溢出令牌的数量不小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌的数量总和达到发送所述待发送报文的令牌数量。
第二确定子单元,用于当所述贷记桶内非当前令牌桶溢出令牌的数量小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌未达到发送所述待发送报文的令牌数量。
所述发送单元150,用于当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和达到发送所述待发送报文的令牌数量时,发送所述待发送报文。
令牌管理单元160,用于相应减少所述当前令牌桶及所述贷记桶内的令牌数量。
优选地，图7所示的装置实施例还可以包括：报文处理单元170，用于当所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和未达到发送所述待发送报文的数量时，丢弃所述待发送报文；或者，当所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和未达到发送所述待发送报文的数量时，对所述待发送报文进行重新标记，然后，再发送。
优选地,图7所示的数据流量限制装置还可以增设第二预配置单元180。
第二预配置单元180，与所述第一预配置单元110相连，用于预先为每个单板芯片组配置一个借记桶，并根据预设空间确定方法设置所述借记桶的空间，且使各个所述借记桶的空间之和等于所述整体CBS或整体PBS，所述预设空间确定方法包括空间等分确定法或空间权重确定法，其中，所述借记桶用于存储属于同一单板芯片组的令牌桶溢出的令牌，以使所述贷记桶存储按照预设条件均匀从各个所述借记桶中取出的令牌。
本实施例提供的数据流量限制装置，首先为网络设备内的每个链路聚合组接口配置一个贷记桶，用于存储各个令牌桶溢出的令牌，且为不同的令牌桶溢出的令牌区分标记；当接收到待发送报文时，优先使用对应的令牌桶的令牌发送待发送报文，如果所述令牌桶中的令牌数量不够发送待发送报文，则从贷记桶中借贷非当前令牌桶溢出的令牌，若当前令牌桶的令牌与借贷的令牌足够发送待发送报文，则发送所述待发送报文，同时，减少当前令牌桶及贷记桶内的令牌数量，以实现对网络设备中每个链路聚合组接口的多个链路上的数据流量的动态限制，使该链路聚合组接口内各个成员端口的数据流量之和能够更好地与链路聚合组接口所限制的整体数据流量相匹配。
请参见图8,示出了图7所示的所述第一预配置单元的结构示意图,所述第一预配置单元可以包括以下单元:
令牌放置速率配置单元111，用于根据预设令牌放置速率确定方法确定各个令牌桶的令牌放置速率，且各个令牌桶的令牌放置速率之和等于所述链路聚合组接口的整体承诺信息速率CIR或峰值信息速率PIR，其中，所述预设令牌放置速率确定方法包括速率等分确定法或速率权重确定法；
第一空间配置单元112，用于根据预设空间确定方法确定各个令牌桶的空间，且各个所述令牌桶的空间之和等于所述链路聚合组接口的整体承诺突发尺寸或峰值突发尺寸，其中，所述空间确定方法包括空间等分确定法或空间权重确定法。
第二空间配置单元113,用于将所述贷记桶的空间配置为所述整体CBS或整体PBS。
请参见图9，示出了本发明实施例又一种数据流量限制装置的结构示意图，在图7对应实施例的基础上，还包括：第一动态调整单元200，用于在预设个最新周期内，若当前令牌桶从所述贷记桶中借贷令牌的借贷信息满足调桶预设条件时，按照所述当前令牌桶的借贷信息调整对应的令牌桶的桶参数，且调整后的各个令牌桶的桶参数仍需满足所述链路聚合组接口的数据流量设定参数，所述桶参数包括令牌放置速率和空间。所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数。
在本发明的一个实施例中,所述第一动态调整单元可以包括:第一空间调整单元、第二空间调整单元和第一令牌放置速率调整单元。
第一空间调整单元,用于在预设个最新周期内,若当前令牌桶从所述贷记桶中借贷的令牌数量超过第一阈值,按照所述当前令牌桶的借贷信息增长所述当前令牌桶的空间;
第二空间调整单元,用于根据当前令牌桶从贷记桶借贷令牌的标记分布信息,减少相应的非当前令牌桶的空间,减小的空间之和等于增加的空间之和;所述标记分布信息包括令牌的标记及对应的借贷数量。
第一令牌放置速率调整单元,用于根据重新确定的各个令牌桶的空间及令牌放置速率计算方法,确定各个令牌桶新的令牌放置速率。
在本发明的另一个实施例中,所述第一动态调整单元可以包括:第二令牌放置速率调整单元、第三空间调整单元、第四空间调整单元。
第二令牌放置速率调整单元，用于在预设个最新周期内，若当前令牌桶向所述贷记桶借贷的令牌数量超过第一阈值，按照所述当前令牌桶的借贷信息，将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率。其中，所述新的令牌放置速率大于所述初始令牌放置速率。
第三空间调整单元,用于根据所述当前令牌桶新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间;
第四空间调整单元,用于根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息,确定相应非当前令牌桶的调整空间,以使减小的空间之和等于增加的空间之和。
可选地,图9所示的装置还可以包括:第二动态调整单元300,用于动态调整借记桶的空间,可以根据令牌桶的调整比例,调整借记桶的空间。
本实施例提供的数据流量限制装置，当链路聚合组接口中的某一令牌桶从所述贷记桶中借贷令牌的借贷信息满足调桶预设条件时，按照所述当前令牌桶的借贷信息调整令牌桶的桶参数，所述桶参数包括令牌放置速率和空间。从而最终实现动态调整链路聚合组接口内各个链路的令牌桶的桶参数，使各个令牌桶的令牌放置速率与其所对应的成员端口上的数据流量相匹配，提高了链路聚合组接口的整体数据流量的控制精度。
请参见图10,示出了本发明实施例再一种数据流量限制装置的结构示意图,所述装置包括:处理器101、存储器102、发送器103和接收器104。
接收器104,用于接收待发送报文。
所述存储器102,用于存储预先为网络设备内的每个链路聚合组接口配置的贷记桶,以及各个令牌桶的桶参数;其中,所述贷记桶用于存储各个令牌桶溢出的令牌,并为不同令牌桶溢出的令牌区分标记。以及,存储有处理器101能够执行的操作指令,处理器读取存储器102内的操作指令用于实现以下功能:
当接收到待发送报文时,判断所述待发送报文对应的当前令牌桶内的令牌数量是否达到发送所述待发送报文的令牌数量;当所述当前令牌桶内的令牌数量未达到发送所述待发送报文的令牌数量时,判断当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和是否达到发送所述待发送报文的令牌数量;当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和达到发送所述待发送报文的令牌数量时,相应减少所述当前令牌桶及所述贷记桶内的令牌数量;并由所述发送器103发送所述待发送报文。
优选地,在本发明的一个具体实施例中,存储器102中还包括处理器101能够执行的操作指令,以使处理器读取存储器102内的操作指令用于实现以下功能:当所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量之和未达到发送所述待发送报文的数量时,丢弃所述待发送报文或对所述待发送报文进行重新标记。
优选地,在本发明的一个具体实施例中,存储器102中还包括处理器101能够执行的操作指令,以使处理器读取存储器102内的操作指令用于实现以下功能:
预先为每个单板芯片组配置一个借记桶，所述借记桶用于存储属于同一单板芯片组的令牌桶溢出的令牌，以使所述贷记桶存储按照预设条件均匀从各个所述借记桶中取出的令牌，并根据预设空间确定方法设置所述借记桶的空间，且各个所述借记桶的空间之和等于所述整体CBS/PBS，其中，所述预设空间确定方法包括空间等分确定法，或者，空间权重确定法。
可选地,在本发明的一个具体实施例中,存储器102中还包括处理器101能够执行的操作指令,以使处理器读取存储器102内的操作指令用于实现以下功能:
在预设个最新周期内，若当前令牌桶从所述贷记桶中借贷的令牌数量超过第一阈值，按照所述当前令牌桶的借贷信息增长所述当前令牌桶的空间；根据当前令牌桶从贷记桶借贷令牌的标记分布信息，减少相应的非当前令牌桶的空间，减小的空间之和等于增加的空间之和；根据重新确定的各个令牌桶的空间及令牌放置速率计算方法，确定各个令牌桶新的令牌放置速率。所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数；所述标记分布信息包括令牌的标记及对应的借贷数量。
可选地,在本发明的一个具体实施例中,存储器102中还包括处理器101能够执行的操作指令,以使处理器读取存储器102内的操作指令用于实现以下功能:
在预设个最新周期内,若当前令牌桶向所述贷记桶借贷的令牌数量超过第一阈值,按照所述当前令牌桶的借贷信息,将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率;根据所述当前令牌桶新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间;根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息,确定相应非当前令牌桶的调整空间,以使减小的空间之和等于增加的空间之和。
另一方面,本发明还提供一种网络设备,包括上述数据流量限制装置实施例中的任意一种。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其它实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
需要说明的是,在本文中,诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其它变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其它要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅是本发明的具体实施方式，应当指出，对于本技术领域的普通技术人员来说，在不脱离本发明原理的前提下，还可以做出若干改进和润饰，这些改进和润饰也应视为本发明的保护范围。

Claims (18)

  1. 一种数据流量限制方法，其特征在于，应用于链路聚合组接口的成员端口位于不同单板芯片组上的网络设备，预先为网络设备内的每个链路聚合组接口配置一个贷记桶，所述贷记桶用于存储各个成员端口对应的令牌桶溢出的令牌，且为不同令牌桶溢出的令牌进行区分标记；以及，预先配置贷记桶及各个令牌桶的桶参数，所述桶参数包括令牌桶的令牌放置速率、令牌桶的空间和贷记桶的空间；
    所述数据流量限制方法包括:
    当接收到待发送报文时,判断所述待发送报文对应的当前令牌桶内的令牌数量是否达到发送所述待发送报文的令牌数量;
    当所述当前令牌桶内的令牌数量未达到发送所述待发送报文的令牌数量时,判断所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和是否达到发送所述待发送报文的令牌数量;
    当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和达到发送所述待发送报文的令牌数量时,发送所述待发送报文,并相应减少所述当前令牌桶及所述贷记桶内的令牌数。
  2. 根据权利要求1所述的方法,其特征在于,判断所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量总和是否达到发送所述待发送报文的令牌数量,包括:
    确定所述当前令牌桶内的令牌数量与发送所述待发送报文的令牌数量之间的令牌差额;
    判断所述贷记桶内非当前令牌桶溢出的令牌的数量是否不小于所述令牌差额;
    当所述贷记桶内非当前令牌桶溢出的令牌的数量不小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌的数量总和达到发送所述待发送报文的令牌数量;
    当所述贷记桶内非当前令牌桶溢出令牌的数量小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌未达到发送所述待发送报文的令牌数量。
  3. 根据权利要求1或2所述的方法,其特征在于,还包括:当所述当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌的数量之和未达到发送所述待发送报文的数量时,丢弃所述待发送报文或对所述待发送报文进行重新标记。
  4. 根据权利要求1所述的方法,其特征在于,所述预先设置贷记桶及各个令牌桶的桶参数,包括:
    根据预设令牌放置速率确定方法确定各个令牌桶的令牌放置速率，且各个令牌桶的令牌放置速率之和等于所述链路聚合组接口的整体承诺信息速率或峰值信息速率，其中，所述预设令牌放置速率确定方法包括速率等分确定法或速率权重确定法；
    根据预设空间确定方法确定各个令牌桶的空间,且各个所述令牌桶的空间之和等于所述链路聚合组接口的整体承诺突发尺寸或峰值突发尺寸,其中,所述预设空间确定方法包括空间等分确定法或空间权重确定法;
    所述贷记桶的空间等于所述整体承诺突发尺寸或峰值突发尺寸。
  5. 根据权利要求4所述的方法,其特征在于,还包括:当贷记桶满足预设条件时,丢弃所述贷记桶内的令牌;
    其中,所述当贷记桶满足预设条件时,丢弃所述贷记桶内的令牌包括:
    当所述贷记桶内的令牌超过预设时长时,按照预设公平算法丢弃所述令牌;
    当检测到令牌桶内无令牌时,则丢弃贷记桶内存放的该令牌桶在当前周期之前溢出的令牌。
  6. 根据权利要求1所述的方法,其特征在于,还包括:
    预先为每个令牌桶配置一个借记桶,所述借记桶用于存储属于同一单板芯片组的令牌桶溢出的令牌,以使所述贷记桶按照预设公平算法从各个所述借记桶中取出令牌,并根据预设空间确定方法设置所述借记桶的空间,且各个所述借记桶的空间之和等于所述整体承诺突发尺寸或峰值突发尺寸,其中,所述预设空间确定方法包括空间等分确定法或空间权重确定法。
  7. 根据权利要求1、2、4、5或6所述的方法,其特征在于,还包括:
    在预设个最新周期内,若当前令牌桶从所述贷记桶中借贷令牌的借贷信息满足调桶预设条件,按照所述当前令牌桶的借贷信息增长所述当前令牌桶的空间;其中,所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数;
    根据当前令牌桶从贷记桶借贷令牌的标记分布信息,减少相应的非当前令牌桶的空间,减小的空间之和等于增加的空间之和;
    根据重新确定的各个令牌桶的空间,以及令牌放置速率计算方法,确定各个令牌桶的新的令牌放置速率。
  8. 根据权利要求1、2、4、5或6所述的方法,其特征在于,还包括:
    在预设个最新周期内,若当前令牌桶从所述贷记桶借贷令牌的借贷信息满足调桶预设条件,按照所述当前令牌桶的借贷信息,将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率;其中,所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数。
  9. 根据权利要求8所述的方法,其特征在于,还包括:
    根据所述当前令牌桶的新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间;
    根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息，确定相应非当前令牌桶的调整空间，以使减小的空间之和等于增加的空间之和。
  10. 一种数据流量限制装置,其特征在于,包括:第一预配置单元、接收单元、第一判断单元、第二判断单元、发送单元和令牌管理单元;
    所述第一预配置单元,用于预先为网络设备内的每个链路聚合组接口配置一个贷记桶,所述贷记桶用于存储各个令牌桶溢出的令牌,且为不同令牌桶溢出的令牌进行区分标记;以及,预先配置贷记桶及各个令牌桶的桶参数,所述桶参数包括令牌桶的令牌放置速率、令牌桶的空间和贷记桶的空间;
    所述接收单元,用于接收待发送报文;
    所述第一判断单元,用于判断所述待发送报文对应的当前令牌桶内的令牌数量是否达到发送所述待发送报文的令牌数量;
    所述第二判断单元,用于当所述当前令牌桶内的令牌数量未达到发送所述待发送报文的令牌数量时,判断当前令牌桶内的令牌及所述贷记桶内非当前令牌桶溢出的令牌数量是否达到发送所述待发送报文的令牌数量;
    所述发送单元,用于当所述当前令牌桶内的令牌及所述贷记桶的所述令牌的数量总和达到发送所述待发送报文的令牌数量时,发送所述待发送报文;
    令牌管理单元,用于相应减少所述当前令牌桶及所述贷记桶内的令牌数量。
  11. 根据权利要求10所述的装置,其特征在于,所述第二判断单元包括:
    确定子单元,用于确定当前令牌桶内的令牌数量与发送所述待发送报文的令牌数量之间的令牌差额;
    判断子单元,用于判断所述贷记桶内非当前令牌桶溢出令牌的数量是否不小于所述令牌差额;
    第一确定子单元,用于当所述贷记桶内非当前令牌桶溢出令牌的数量不小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌的数量总和达到发送所述待发送报文的令牌数量;
    第二确定子单元,用于当所述贷记桶内非当前令牌桶溢出令牌的数量小于所述令牌差额时,确定所述当前令牌桶内令牌及所述贷记桶内的所述令牌未达到发送所述待发送报文的令牌数量。
  12. 根据权利要求10或11所述的装置,其特征在于,还包括:报文处理单元;
    所述报文处理单元,用于当所述当前令牌桶内的令牌数量及所述贷记桶内非当前令牌桶溢出的令牌数量之和未达到发送所述待发送报文的数量时,丢弃所述待发送报文;
    或者,用于当所述当前令牌桶内的令牌数量及所述贷记桶内非当前令牌桶溢出的令牌数量之和未达到发送所述待发送报文的数量时,对所述待发送报文进行重新标记。
  13. 根据权利要求10或11所述的装置,其特征在于,所述第一预配置单元包括:
    令牌放置速率配置单元，用于根据预设令牌放置速率确定方法确定各个令牌桶的令牌放置速率，且各个令牌桶的令牌放置速率之和等于所述链路聚合组接口的整体承诺信息速率或峰值信息速率，其中，所述预设令牌放置速率确定方法包括速率等分确定法或速率权重确定法；
    第一空间配置单元,用于根据预设空间确定方法确定各个令牌桶的空间,且各个所述令牌桶的空间之和等于所述链路聚合组接口的整体承诺突发尺寸或峰值突发尺寸,其中,所述空间确定方法包括空间等分确定法或空间权重确定法;
    第二空间配置单元,用于将所述贷记桶的空间配置为所述整体承诺突发尺寸或峰值突发尺寸。
  14. 根据权利要求10所述的装置,其特征在于,还包括:
    第二预配置单元，用于预先为每个单板芯片组配置一个借记桶，并根据预设空间确定方法设置所述借记桶的空间，且使各个所述借记桶的空间之和等于所述整体承诺突发尺寸或峰值突发尺寸，所述预设空间确定方法包括空间等分确定法或空间权重确定法，其中，所述借记桶用于存储属于同一单板芯片组的令牌桶溢出的令牌，以使所述贷记桶存储按照预设条件均匀从各个所述借记桶中取出的令牌。
  15. 根据权利要求10、11或14所述的装置,其特征在于,还包括:
    第一空间调整单元，用于在预设个最新周期内，若当前令牌桶从所述贷记桶中借贷的令牌数量超过第一阈值，按照所述当前令牌桶的借贷信息增长所述当前令牌桶的空间，所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数；
    第二空间调整单元,用于根据当前令牌桶从贷记桶借贷令牌的标记分布信息,减少相应的非当前令牌桶的空间,减小的空间之和等于增加的空间之和;
    第一令牌放置速率调整单元,用于根据重新确定的各个令牌桶的空间及令牌放置速率计算方法,确定各个令牌桶的新的令牌放置速率。
  16. 根据权利要求10、11或14所述的装置，其特征在于，还包括：
    第二令牌放置速率调整单元,用于在预设个最新周期内,若当前令牌桶向所述贷记桶借贷的令牌数量超过第一阈值,按照所述当前令牌桶的借贷信息,将所述当前令牌桶的令牌放置速率从初始令牌放置速率提高至新的令牌放置速率,所述借贷信息包括借贷令牌的数量、借贷比例和/或借贷令牌的次数。
  17. 根据权利要求16所述的装置,其特征在于,还包括:
    第三空间调整单元,用于根据所述当前令牌桶新的令牌放置速率,以及令牌放置速率与令牌桶的空间之间的对应关系,调整所述当前令牌桶的空间;
    第四空间调整单元,用于根据所述当前令牌桶从贷记桶借贷令牌的标记分布信息,确定相应非当前令牌桶的调整空间,以使减小的空间之和等于增加的空间之和。
  18. 一种网络设备,其特征在于,包括权利要求10-17任一项所述的数据流量限制装置。
PCT/CN2014/087395 2014-01-29 2014-09-25 数据流量限制方法及装置 WO2015113405A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020167023586A KR101738657B1 (ko) 2014-01-29 2014-09-25 데이터 트래픽 제한을 위한 방법 및 장치
JP2016549033A JP6268623B2 (ja) 2014-01-29 2014-09-25 データトラフィック制限のための方法および装置
EP14880535.1A EP3101851B1 (en) 2014-01-29 2014-09-25 Method and apparatus for data flow restriction
US15/224,232 US10560395B2 (en) 2014-01-29 2016-07-29 Method and apparatus for data traffic restriction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410043708.5A CN103763208B (zh) 2014-01-29 2014-01-29 数据流量限制方法及装置
CN201410043708.5 2014-01-29

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/224,232 Continuation US10560395B2 (en) 2014-01-29 2016-07-29 Method and apparatus for data traffic restriction

Publications (1)

Publication Number Publication Date
WO2015113405A1 true WO2015113405A1 (zh) 2015-08-06

Family

ID=50530370

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/087395 WO2015113405A1 (zh) 2014-01-29 2014-09-25 数据流量限制方法及装置

Country Status (6)

Country Link
US (1) US10560395B2 (zh)
EP (1) EP3101851B1 (zh)
JP (1) JP6268623B2 (zh)
KR (1) KR101738657B1 (zh)
CN (1) CN103763208B (zh)
WO (1) WO2015113405A1 (zh)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103763208B (zh) 2014-01-29 2017-08-29 华为技术有限公司 数据流量限制方法及装置
CN104702528B (zh) * 2015-04-09 2018-07-10 深圳中兴网信科技有限公司 流量控制方法和流量控制系统
CN104980779B (zh) * 2015-04-10 2018-04-20 腾讯科技(成都)有限公司 视频数据的传输控制方法和装置
CN104768188B (zh) * 2015-04-23 2018-07-20 新华三技术有限公司 一种流量控制方法和装置
CN107295572B (zh) * 2016-04-11 2021-10-01 北京搜狗科技发展有限公司 一种动态自适应限流方法及电子设备
CN106302211B (zh) * 2016-07-18 2018-04-03 网易无尾熊(杭州)科技有限公司 一种网络资源的请求量控制方法和装置
US10917353B2 (en) * 2018-02-28 2021-02-09 Microsoft Technology Licensing, Llc Network traffic flow logging in distributed computing systems
CN110275670B (zh) 2018-03-16 2021-02-05 华为技术有限公司 控制存储设备中数据流的方法、装置、存储设备及存储介质
CN108667545B (zh) * 2018-04-17 2020-06-02 迈普通信技术股份有限公司 一种串口带宽同步方法及装置
GB2573573B (en) * 2018-05-11 2022-08-17 Cambridge Broadband Networks Group Ltd A system and method for distributing packets in a network
CN109194765B (zh) * 2018-09-26 2023-07-28 中国平安人寿保险股份有限公司 请求控制方法、装置、计算机设备和存储介质
TWI673613B (zh) * 2018-10-17 2019-10-01 財團法人工業技術研究院 伺服器及其資源調控方法
CN111078391A (zh) * 2018-10-22 2020-04-28 阿里巴巴集团控股有限公司 一种业务请求处理方法、装置及设备
CN109862069B (zh) * 2018-12-13 2020-06-09 百度在线网络技术(北京)有限公司 消息处理方法和装置
CN109714268B (zh) * 2019-01-23 2022-06-07 平安科技(深圳)有限公司 一种虚拟私有云的流量控制方法及相关装置
CN110213173B (zh) * 2019-06-06 2023-03-24 北京百度网讯科技有限公司 流量控制方法及装置、系统、服务器、计算机可读介质
CN110166376B (zh) * 2019-06-06 2022-11-01 北京百度网讯科技有限公司 流量控制方法及装置、系统、服务器、计算机可读介质
CN111404839B (zh) * 2020-03-20 2023-05-26 国家计算机网络与信息安全管理中心 报文处理方法和装置
CN111835655B (zh) * 2020-07-13 2022-06-28 北京轻网科技有限公司 共享带宽限速方法、装置及存储介质
US11652753B2 (en) * 2020-07-29 2023-05-16 Hewlett Packard Enterprise Development Lp Link aggregation group optimization
CN112631928A (zh) * 2020-12-30 2021-04-09 上海中通吉网络技术有限公司 基于令牌桶的性能测试方法、装置及设备
CN115080657A (zh) * 2021-03-10 2022-09-20 中国移动通信集团山东有限公司 一种应用于分布式存储的操作令牌分配方法、系统及设备
CN113726684A (zh) * 2021-07-12 2021-11-30 新华三信息安全技术有限公司 通信方法及装置
CN114172848B (zh) * 2021-11-18 2024-02-09 新华三技术有限公司合肥分公司 通信方法及装置
CN114401226B (zh) * 2022-02-21 2024-02-27 李超 一种流媒体数据的路由流量控制方法及系统
CN114860334B (zh) * 2022-04-24 2024-01-26 曙光信息产业(北京)有限公司 虚拟机启动风暴的处理方法、装置、设备及介质
CN117437038B (zh) * 2023-12-21 2024-03-26 恒丰银行股份有限公司 一种基于服务组件化的银行风控业务处理方法及设备
CN117579564B (zh) * 2024-01-19 2024-05-24 成都智明达电子股份有限公司 一种基于fpga和令牌桶算法的多路流量调度系统及方法

Citations (8)

Publication number Priority date Publication date Assignee Title
US20040008625A1 (en) * 2002-04-24 2004-01-15 Jae-Hwoon Lee Method for monitoring traffic in packet switched network
CN1534926A (zh) * 2003-04-01 2004-10-06 华为技术有限公司 一种基于承诺接入速率的带宽统计复用方法
EP1705851A1 (en) * 2005-03-22 2006-09-27 Alcatel Communication traffic policing apparatus and methods
CN1859207A (zh) * 2006-03-24 2006-11-08 华为技术有限公司 一种剩余带宽复用的方法及网络设备
CN1859206A (zh) * 2006-03-24 2006-11-08 华为技术有限公司 一种剩余带宽复用的方法及网络设备
CN101834786A (zh) * 2010-04-15 2010-09-15 华为技术有限公司 队列调度的方法和装置
CN102368741A (zh) * 2011-12-05 2012-03-07 盛科网络(苏州)有限公司 支持层次化队列调度和流量整形的方法及装置
CN103763208A (zh) * 2014-01-29 2014-04-30 华为技术有限公司 数据流量限制方法及装置

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
US5790521A (en) * 1994-08-01 1998-08-04 The University Of Iowa Research Foundation Marking mechanism for controlling consecutive packet loss in ATM networks
US5596576A (en) 1995-11-03 1997-01-21 At&T Systems and methods for sharing of resources
US8032653B1 (en) * 2000-09-08 2011-10-04 Juniper Networks, Inc. Guaranteed bandwidth sharing in a traffic shaping system
US7292594B2 (en) * 2002-06-10 2007-11-06 Lsi Corporation Weighted fair share scheduler for large input-buffered high-speed cross-point packet/cell switches
US7680049B2 (en) * 2005-02-08 2010-03-16 Cisco Technology, Inc. Methods and apparatus for allowing promotion in color-based policers
US7646718B1 (en) * 2005-04-18 2010-01-12 Marvell International Ltd. Flexible port rate limiting
US20070070895A1 (en) * 2005-09-26 2007-03-29 Paolo Narvaez Scaleable channel scheduler system and method
WO2007076879A1 (en) * 2005-12-30 2007-07-12 Telefonaktiebolaget Lm Ericsson (Publ) Scheduling strategy for packet switched traffic
KR20080062215A (ko) 2006-12-29 2008-07-03 삼성전자주식회사 네트워크 프로세서에서 예외 패킷에 대한 속도 제한 장치및 방법
JP4823187B2 (ja) * 2007-09-26 2011-11-24 アラクサラネットワークス株式会社 帯域監視装置、帯域監視方法
US20110083175A1 (en) * 2009-10-06 2011-04-07 Sonus Networks, Inc. Methods and Apparatuses for Policing and Prioritizing of Data Services
CN102185777B (zh) * 2011-05-11 2014-04-30 烽火通信科技股份有限公司 多级层次化带宽管理的方法
US8908522B2 (en) * 2012-04-30 2014-12-09 Fujitsu Limited Transmission rate control
CN103326953B (zh) * 2013-03-28 2016-06-29 华为技术有限公司 一种基于令牌桶的流量限制方法和装置
US9450881B2 (en) * 2013-07-09 2016-09-20 Intel Corporation Method and system for traffic metering to limit a received packet rate

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US20040008625A1 (en) * 2002-04-24 2004-01-15 Jae-Hwoon Lee Method for monitoring traffic in packet switched network
CN1534926A (zh) * 2003-04-01 2004-10-06 华为技术有限公司 一种基于承诺接入速率的带宽统计复用方法
EP1705851A1 (en) * 2005-03-22 2006-09-27 Alcatel Communication traffic policing apparatus and methods
CN1859207A (zh) * 2006-03-24 2006-11-08 华为技术有限公司 一种剩余带宽复用的方法及网络设备
CN1859206A (zh) * 2006-03-24 2006-11-08 华为技术有限公司 一种剩余带宽复用的方法及网络设备
CN101834786A (zh) * 2010-04-15 2010-09-15 华为技术有限公司 队列调度的方法和装置
CN102368741A (zh) * 2011-12-05 2012-03-07 盛科网络(苏州)有限公司 支持层次化队列调度和流量整形的方法及装置
CN103763208A (zh) * 2014-01-29 2014-04-30 华为技术有限公司 数据流量限制方法及装置

Non-Patent Citations (1)

Title
See also references of EP3101851A4 *

Also Published As

Publication number Publication date
JP2017505065A (ja) 2017-02-09
CN103763208B (zh) 2017-08-29
KR20160113273A (ko) 2016-09-28
JP6268623B2 (ja) 2018-01-31
US20160337259A1 (en) 2016-11-17
EP3101851A4 (en) 2017-02-15
CN103763208A (zh) 2014-04-30
EP3101851A1 (en) 2016-12-07
KR101738657B1 (ko) 2017-05-22
US10560395B2 (en) 2020-02-11
EP3101851B1 (en) 2018-01-10

Similar Documents

Publication Publication Date Title
WO2015113405A1 (zh) 数据流量限制方法及装置
US10447594B2 (en) Ensuring predictable and quantifiable networking performance
WO2017016360A1 (zh) 一种带宽调整方法及相关设备
US9112809B2 (en) Method and apparatus for controlling utilization in a horizontally scaled software application
EP2466824B1 (en) Service scheduling method and device
US9264367B2 (en) Method and system for controlling packet traffic
CN103999414B (zh) 一种归因针对相应用户寄存器的共享资源的拥塞贡献的方法和装置
US11929911B2 (en) Shaping outgoing traffic of network packets in a network management system
EP3310093B1 (en) Traffic control method and apparatus
US9319325B2 (en) Adaptive method and system of regulation of yellow traffic in a network
WO2020078390A1 (zh) 一种流量监管方法、设备、装置和计算机存储介质
CN108173780A (zh) 数据处理方法、装置、计算机及存储介质
WO2017211252A1 (zh) 业务流调度方法及装置、设备、存储介质
CN103747488B (zh) 载波均衡的方法、装置及系统
CN109450813A (zh) 云计算系统中流量控制方法和装置
CN103139097B (zh) Cpu过载控制方法、装置及系统
US10075380B2 (en) Probabilistic metering
Balogh et al. Average delay and queue length model for WRRPQ
CN114071471A (zh) 低速率小区的确定方法、装置及计算设备
CN107852336A (zh) 一种统计报文拥塞的方法和装置
CN106330538A (zh) 一种数据流传输控制方法及其设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14880535; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016549033; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
REEP Request for entry into the european phase (Ref document number: 2014880535; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2014880535; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 20167023586; Country of ref document: KR; Kind code of ref document: A)