CN115037693A - Distributed current limiting method and distributed current limiting device based on token bucket - Google Patents


Info

Publication number: CN115037693A (granted publication: CN115037693B)
Application number: CN202210537854.8A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: current limiting, user, token, service request, distributed
Inventors: 胡云森, 何渝君, 王翔, 舒忠玲
Applicant and current assignee: Hanyun Technology Co Ltd
Legal status: Granted; active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/215: Flow control; Congestion control using token-bucket


Abstract

The application provides a distributed current limiting method and a distributed current limiting device based on a token bucket. The distributed current limiting method comprises the following steps: receiving an access request sent by a user, and determining the request source address of the user; determining a current limiting strategy corresponding to the user based on the request source address; based on the current limiting strategy, putting tokens corresponding to the strategy into the token bucket to which the user belongs at the starting moment of each current limiting interval time; receiving a service request sent by the user, and determining whether the token can be acquired from the token bucket for the service request; if so, acquiring the token and assigning it to the service request, so that the service request is executed with the token; if not, rejecting the service request. The distributed current limiting method and device solve the problem that distributed current limiting cannot be achieved in the prior art.

Description

Distributed current limiting method and distributed current limiting device based on token bucket
Technical Field
The present application relates to the technical field of computer communications, and in particular to a distributed current limiting method and a distributed current limiting device based on a token bucket.
Background
With the development of computer technology, more and more technologies are applied in the financial and IT fields, and the security and real-time requirements of these industries place ever higher demands on them. In data traffic management, current limiting is often used to protect a server or an application backend from being overwhelmed when the access volume to a particular server or application becomes too large. Current limiting is a means by which a service or an application protects itself: it safeguards its own load by limiting or rejecting the caller's traffic. In the traditional approach, the current limiting state is kept in the memory of a single machine, so multiple gateway servers cannot share it; as the number of open gateways in use keeps growing, realizing distributed current limiting across multiple machines places higher demands on the prior art.
Disclosure of Invention
In view of the above, an object of the present application is to provide a distributed current limiting method and a distributed current limiting device based on a token bucket, in which the current limiting strategy corresponding to a user is determined from the user's request source address and tokens corresponding to that strategy are put into the token bucket, so as to limit the user's traffic. Because each user has its own current limiting strategy and is limited by that strategy, the problem that distributed current limiting cannot be realized in the prior art is solved, the accuracy and real-time performance of user current limiting are improved, and distributed current limiting for different users is achieved.
In a first aspect, an embodiment of the present application provides a distributed current limiting method based on a token bucket, where the distributed current limiting method includes:
receiving an access request sent by a user, and determining a request source address of the user;
determining a current limiting strategy corresponding to the user based on the request source address of the user; wherein the current limiting strategy characterizes the number of tokens that the user can use in each current limiting interval time;
based on the current limiting strategy, putting tokens corresponding to the current limiting strategy into the token bucket to which the user belongs at the starting moment of each current limiting interval time;
receiving a service request sent by the user, and determining whether the token can be acquired from the token bucket for the service request;
if so, acquiring the token and assigning it to the service request, so that the service request is executed with the token;
if not, rejecting the service request.
Further, before receiving an access request sent by a user, the distributed current limiting method further includes:
and, for each current limiting level, setting, for the current limiting strategy corresponding to that level, the current limiting interval time, the number of tokens to be put into the token bucket in each current limiting interval time, and the number of tokens consumed by the user in each service request.
Further, the determining, based on the request source address of the user, the current limiting policy corresponding to the user includes:
judging whether the user is a white list user based on the request source address of the user;
if not, determining the current limiting level of the user based on the request source address, and determining the current limiting strategy corresponding to the user based on the current limiting level.
Further, whether the token can be obtained from the token bucket for the service request is determined by:
judging, within each current limiting interval time, whether the current number of tokens in the token bucket is greater than or equal to the number of tokens consumed by the service request;
if so, obtaining the token for the service request from the token bucket;
if not, the token cannot be obtained for the service request from the token bucket.
Further, when it is determined that the token can be obtained for the service request from the token bucket, the distributed current limiting method further includes:
determining the number of tokens consumed by the user within the present current limiting interval time;
and at the starting moment of the next current limiting interval time, putting a number of tokens corresponding to the consumed amount into the token bucket.
Further, when it is determined that the token cannot be obtained for the service request from the token bucket, the distributed current limiting method further includes:
and at the starting moment of the next current limiting interval time, putting tokens corresponding to the current limiting strategy into the token bucket to which the user belongs, based on the current limiting strategy.
Further, after rejecting the service request, the distributed current limiting method further includes:
determining the user as an excess access user, and determining user information of the excess access user; the user information comprises a request source address, a user name, user contact information and an API path;
sending the current limiting strategies corresponding to the different current limiting levels to the excess access user according to the user contact information of the excess access user;
when an ordering operation of the excess access user on a current limiting strategy is detected, acquiring the order level placed by the excess access user, and determining the current limiting strategy corresponding to the order level as the current limiting strategy of the excess access user, so that the next time the excess access user sends a service request, tokens corresponding to that strategy are provided to the excess access user; wherein the order level is any one of the current limiting levels.
In a second aspect, an embodiment of the present application further provides a distributed current limiting apparatus based on a token bucket, where the distributed current limiting apparatus includes:
the receiving module is used for receiving an access request sent by a user and determining a request source address of the user;
a current limiting strategy determining module, configured to determine a current limiting strategy corresponding to the user based on the request source address of the user; wherein the current limiting policy is used to characterize the number of tokens that the user is able to use per current limiting interval;
the token placing module is used for placing tokens corresponding to the current limiting strategies into a token bucket to which the user belongs at the starting moment of each current limiting interval time based on the current limiting strategies;
the judging module is used for receiving a service request sent by a user and judging whether the token can be acquired from the token bucket for the service request;
the request execution module is used for, if so, acquiring the token and assigning it to the service request, so that the service request is executed with the token;
and the request rejection module is used for rejecting the service request if not.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is operating, the machine-readable instructions when executed by the processor performing the steps of the token bucket based distributed throttling method as described above.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the token bucket-based distributed throttling method as described above.
The distributed current limiting method based on a token bucket provided by the embodiments of the application first receives an access request sent by a user and determines the request source address of the user; then determines, based on that address, the current limiting strategy corresponding to the user, the strategy characterizing the number of tokens the user can use in each current limiting interval time; based on the strategy, puts tokens corresponding to it into the token bucket to which the user belongs at the starting moment of each current limiting interval time; and finally receives a service request sent by the user and determines whether a token can be acquired from the token bucket for the service request; if so, the token is acquired and assigned to the service request so that the request is executed with it; if not, the service request is rejected.
Compared with current limiting methods in the prior art, the distributed current limiting method provided by the application determines the current limiting strategy corresponding to the user from the user's request source address and puts the corresponding tokens into the token bucket according to that strategy, thereby limiting the user's traffic. Because each user has its own current limiting strategy and is limited by that strategy, the problem that distributed current limiting cannot be realized in the prior art is solved, the accuracy and real-time performance of user current limiting are improved, and distributed current limiting for different users is achieved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a distributed throttling method based on token buckets according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for determining a current limiting policy according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a distributed current limiting apparatus based on a token bucket according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art without creative effort based on the embodiments of the present application fall within the protection scope of the present application.
With the development of computer technology, more and more technologies are applied in the financial and IT fields, and the security and real-time requirements of these industries place ever higher demands on them. In data traffic management, current limiting is often used to protect a server or an application backend from being overwhelmed when the access volume to a particular server or application becomes too large. Current limiting is a means by which a service or an application protects itself: it safeguards its own load by limiting or rejecting the caller's traffic. In the traditional approach, the current limiting state is kept in the memory of a single machine, so multiple gateway servers cannot share it; as the number of open gateways in use keeps growing, realizing distributed current limiting across multiple machines places higher demands on the prior art.
In recent years, user throttling has been an unavoidable topic in high-concurrency applications. As the number of users of a system grows, the API gateway faces higher requirements on security and performance, and considerable challenges in code maintainability and robustness. A high-demand network system has to carry billions of calls every day, so the smooth operation of its interfaces and the performance cost each interface adds in front of the back-end services are very important. Throttling guarantees the availability of the API services for all users and also helps defend against network attacks.
Based on this, the embodiments of the application provide a distributed current limiting method based on a token bucket, in which each user has its own current limiting strategy and is limited by that strategy. This solves the problem that distributed current limiting cannot be realized in the prior art, improves the accuracy and real-time performance of user current limiting, and achieves distributed current limiting for different users.
Referring to fig. 1, fig. 1 is a flowchart illustrating a distributed throttling method based on token buckets according to an embodiment of the present disclosure. As shown in fig. 1, a distributed throttling method based on a token bucket according to an embodiment of the present application includes:
s101, receiving an access request sent by a user, and determining a request source address of the user according to the access request.
It should be noted that the access request is the request a user sends to a platform when the user prepares to access the platform that provides services to the user. The request source address is the address from which the user sends the access request; it may be, for example, the user's IP address (Internet Protocol address), which is not particularly limited here. An IP address is the uniform address format provided by the Internet Protocol, which allocates a logical address to every network and host on the Internet so as to mask differences in physical addresses.
In a specific implementation of step S101, when a user accesses the platform that provides services to the user, the access request sent by the user is received, and the request source address of the user is determined from that access request.
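The patent does not tie step S101 to any particular gateway framework. A minimal sketch of resolving the request source address, assuming a servlet-style request object and the conventional X-Forwarded-For header (both assumptions for illustration, not from the patent), could look as follows.

```java
import javax.servlet.http.HttpServletRequest;

/** Sketch of step S101: resolve the request source address of a user. */
public final class SourceAddressResolver {

    private SourceAddressResolver() { }

    /** Returns the caller's IP, preferring the X-Forwarded-For header set by proxies. */
    public static String resolve(HttpServletRequest request) {
        String forwarded = request.getHeader("X-Forwarded-For");
        if (forwarded != null && !forwarded.isEmpty()) {
            // The header may hold a chain "client, proxy1, proxy2"; the first entry is the client.
            return forwarded.split(",")[0].trim();
        }
        return request.getRemoteAddr();
    }
}
```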
S102, determining a current limiting strategy corresponding to the user based on the request source address of the user.
It should be noted that the current limiting strategy specifies how many services the user can request in each current limiting interval time, that is, it characterizes the number of tokens the user can use per current limiting interval time. The current limiting interval time is a preset interval, and tokens are provided to the user once per interval; for example, it may be set to 1 minute, although the application is not limited to this. The token bucket (Token Bucket) is a common traffic-measurement technique, widely used for rate limiting and traffic shaping, that can measure both the rate and the burstiness of traffic; the distributed current limiting method provided by the application is built on the token bucket technique. A token is an item stored in the token bucket and consumed by the corresponding user's service requests, and the number of tokens is the number of tokens currently in the bucket. For example, the number of tokens the user can use in one minute may be set to 100, although the application is not limited to this.
In a specific implementation of step S102, after the request source address of the user is determined in step S101, the current limiting strategy corresponding to the user is determined from that address. Continuing the previous example, the current limiting strategy may be that the user can use 100 tokens per minute.
As an optional implementation manner, before receiving the access request sent by the user, the distributed current limiting method provided by the present application further includes:
for each current limiting level, setting, for the current limiting strategy corresponding to that level, the current limiting interval time, the number of tokens to be put into the token bucket in each current limiting interval time, and the number of tokens consumed by the user in each service request.
It should be noted that a current limiting level is a preset level, each level corresponding to a different current limiting strategy. In an embodiment of the application the levels may include a first, a second and a third current limiting level, and so on; the application does not limit this. The number of tokens to be put into the token bucket in each current limiting interval time is, for example, 100 when the interval is 1 minute, and the number of tokens consumed by the user in each service request is, for example, 1; neither value is specifically limited in this application.
In a specific implementation, for each current limiting level, the current limiting interval time, the number of tokens to be put into the token bucket in each current limiting interval time, and the number of tokens consumed by the user in each service request are set for the current limiting strategy corresponding to that level. Continuing the previous example: for the first current limiting level, the interval time is set to 1 minute, 100 tokens are put into the token bucket in each interval, and each service request consumes 1 token; for the second current limiting level, the interval time is 1 minute, 500 tokens are put into the bucket in each interval, and each service request consumes 1 token; for the third current limiting level, the interval time is 1 minute, 1000 tokens are put into the bucket in each interval, and each service request consumes 1 token.
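A minimal sketch of such a configuration, using the three example levels and figures given above (1-minute interval; 100/500/1000 tokens per interval; 1 token per request). The type name CurrentLimitLevel and the field names are illustrative assumptions, not taken from the patent.

```java
/** Illustrative configuration of the three example current limiting levels. */
public enum CurrentLimitLevel {
    FIRST(60, 100, 1),
    SECOND(60, 500, 1),
    THIRD(60, 1000, 1);

    /** Current limiting interval time, in seconds. */
    public final int intervalSeconds;
    /** Number of tokens put into the token bucket at the start of each interval. */
    public final int tokensPerInterval;
    /** Number of tokens consumed by each service request. */
    public final int tokensPerRequest;

    CurrentLimitLevel(int intervalSeconds, int tokensPerInterval, int tokensPerRequest) {
        this.intervalSeconds = intervalSeconds;
        this.tokensPerInterval = tokensPerInterval;
        this.tokensPerRequest = tokensPerRequest;
    }
}
```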
In this way, different current limiting strategies can be set for different current limiting levels before the user accesses the platform; the strategy to which the user belongs is then determined from the user's request source address, the user is limited according to that strategy, the availability of the platform for all users is guaranteed in real time, and malicious network attacks can be effectively prevented.
It should be noted that the current limiting levels above and the strategies corresponding to them are merely examples; in practice they are not limited to those given.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for determining a current limiting policy according to an embodiment of the present disclosure. As shown in fig. 2, the determining, based on the request source address of the user, a current limiting policy corresponding to the user includes:
s201, based on the request source address of the user, judging whether the user is a white list user.
It should be noted that the white list user refers to a user who does not need to perform current limiting.
In a specific implementation of step S201, whether the user is a white list user is determined from the request source address: the request source address is compared with the white list IP addresses configured in Nacos. If the address is in the white list, the user is treated as a white list user; otherwise it is not. For a white list user, the normal API call flow is followed, that is, no current limiting strategy needs to be set for the user and the token bucket does not limit the number of the user's service requests; the requests are executed normally no matter how many are sent. If the user is not a white list user, step S202 is executed.
S202, if not, determining the current limiting level of the user based on the request source address, and determining the current limiting strategy corresponding to the user based on the current limiting level.
In a specific implementation of step S202, if the user is not a white list user, the current limiting level of the user is determined from the request source address, and the corresponding current limiting strategy is determined from that level. In the embodiment provided by the application, if the request source address shows that the user is accessing the platform for the first time, the user's current limiting level defaults to the lowest level, that is, the first current limiting level in the example above. If the user is not accessing the platform for the first time, the current limiting level corresponding to the user is determined from the request source address; for example, if the user previously purchased the strategy of the second current limiting level, the user's level is determined to be the second current limiting level. Once the user's current limiting level is determined, the corresponding current limiting strategy is determined from it, and current limiting service is provided to the user according to that strategy.
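A minimal sketch of the white list check and level lookup of steps S201/S202, reusing the CurrentLimitLevel enum sketched earlier. How the white list and the address-to-level table are pulled from Nacos is omitted; they are simply assumed to be injected.

```java
import java.util.Map;
import java.util.Optional;
import java.util.Set;

/** Sketch of steps S201/S202: map a request source address to a current limiting strategy. */
public final class CurrentLimitPolicyResolver {

    private final Set<String> whiteListIps;
    private final Map<String, CurrentLimitLevel> levelBySourceAddress;

    public CurrentLimitPolicyResolver(Set<String> whiteListIps,
                                      Map<String, CurrentLimitLevel> levelBySourceAddress) {
        this.whiteListIps = whiteListIps;
        this.levelBySourceAddress = levelBySourceAddress;
    }

    /**
     * Returns an empty Optional for white list users (no throttling), otherwise the
     * user's current limiting level; first-time users default to the lowest level.
     */
    public Optional<CurrentLimitLevel> resolve(String requestSourceAddress) {
        if (whiteListIps.contains(requestSourceAddress)) {
            return Optional.empty();
        }
        return Optional.of(
                levelBySourceAddress.getOrDefault(requestSourceAddress, CurrentLimitLevel.FIRST));
    }
}
```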
S103, based on the current limiting strategy, putting tokens corresponding to the current limiting strategy into a token bucket to which the user belongs at the starting moment of each current limiting interval time.
It should be noted that the starting moment of a current limiting interval time is the moment at which that interval begins. For example, when the current limiting interval time is one minute, the starting moment is the beginning of each minute.
In a specific implementation of step S103, based on the user's determined current limiting strategy, tokens corresponding to the strategy are put into the token bucket to which the user belongs at the starting moment of each current limiting interval time. Continuing the previous example, when the user's current limiting level is the first level, the interval time is 1 minute and 100 tokens are to be put into the bucket per interval, so 100 tokens are put into the user's bucket at the beginning of each minute. Specifically, when tokens are put into the bucket, they are persisted in a Redisson cache, and the logic is implemented with a Redis Lua script, i.e., the Redis commands are executed through a Lua script (Lua being an executable dynamic scripting language supported by Redis). This guarantees atomicity of the operation, achieves atomic current limiting of user interface access in a cluster environment, avoids the token count of the bucket being set incorrectly under concurrent access, and lets multiple gateway services share the same limit. When the token bucket is full, the tokens currently being deposited are discarded.
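The patent states that tokens are persisted in a Redisson cache and that the deposit logic runs as a Redis Lua script for atomicity, but gives no code. A minimal sketch under those assumptions follows; the key layout, the script text and the use of Redisson's RScript API are illustrative choices, not taken from the patent.

```java
import java.util.Collections;

import org.redisson.api.RScript;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;

/** Sketch of step S103: deposit tokens into a user's bucket atomically via a Redis Lua script. */
public final class TokenDepositor {

    /** Adds ARGV[1] tokens but never exceeds the capacity ARGV[2]; excess tokens are discarded. */
    private static final String DEPOSIT_SCRIPT =
            "local current = tonumber(redis.call('GET', KEYS[1]) or '0') " +
            "local updated = math.min(current + tonumber(ARGV[1]), tonumber(ARGV[2])) " +
            "redis.call('SET', KEYS[1], updated) " +
            "return updated";

    private final RedissonClient redisson;

    public TokenDepositor(RedissonClient redisson) {
        this.redisson = redisson;
    }

    /** Called at the starting moment of each current limiting interval time for the given user. */
    public long deposit(String requestSourceAddress, long tokens, long capacity) {
        String bucketKey = "token_bucket:" + requestSourceAddress; // hypothetical key layout
        RScript script = redisson.getScript(StringCodec.INSTANCE);
        Long updated = script.eval(RScript.Mode.READ_WRITE, DEPOSIT_SCRIPT,
                RScript.ReturnType.INTEGER,
                Collections.singletonList(bucketKey),
                String.valueOf(tokens), String.valueOf(capacity));
        return updated;
    }
}
```

Because the whole read-modify-write happens inside one script on the Redis server, every gateway instance sees the same bucket state and the count cannot be set incorrectly under concurrent deposits.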
S104, receiving a service request sent by a user, and determining whether the token can be acquired from the token bucket for the service request.
It should be noted that a service request is a request by which the user asks for a service to be performed; for example, it may be a request for data in a database, which is not particularly limited in this application.
In a specific implementation of step S104, after a service request sent by the user is received, it is determined whether a token can be obtained for the service request from the token bucket. If so, step S105 is executed; if not, step S106 is executed.
S105, if so, acquiring the token and assigning it to the service request, so that the service request is executed with the token.
In a specific implementation of step S105, when it is determined that the token can be obtained from the token bucket, the token is obtained and assigned to the service request so that the request is executed with it. The number of tokens obtained equals the number of tokens consumed per service request in the user's current limiting strategy: if that number is 1, one token is obtained for the service request; if it is 2, two tokens are obtained. The acquired tokens are then assigned to the service request so that it is executed with them. When a service request sent by the user reaches the token bucket node and enough tokens remain in the bucket to respond to it, the tokens are obtained directly from the bucket, the number obtained equals the number consumed by the request, and the token count of the bucket is reduced accordingly.
S106, if not, rejecting the service request.
In a specific implementation of step S106, when it is determined that the token cannot be obtained from the token bucket, the service request sent by the user is rejected. If the bucket holds too few tokens, or none, a service request that cannot obtain enough tokens is rejected and the token count of the bucket does not change.
As an optional implementation manner, regarding step S104, whether the token can be obtained from the token bucket for the service request is determined by the following steps:
Step 1041, determining, within each current limiting interval time, whether the current number of tokens in the token bucket is greater than or equal to the number of tokens consumed by the service request.
Step 1042, if so, obtaining the token for the service request from the token bucket.
Step 1043, if not, the token cannot be obtained for the service request from the token bucket.
It should be noted that the current number of tokens is the number of tokens remaining in the token bucket when the user's service request is received within a given current limiting interval time.
In a specific implementation of steps 1041-1043, within each current limiting interval time, when a service request sent by the user is received, the current number of tokens in the user's token bucket is determined and compared with the number of tokens the request consumes. If it is greater than or equal to that number, step 1042 is executed and the token can be obtained from the token bucket for the request; otherwise step 1043 is executed and the token cannot be obtained.
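Steps 1041-1043 can likewise be collapsed into a single Lua script so that the comparison and the decrement happen atomically across all gateway instances. The following sketch mirrors the depositor above and uses the same assumed key layout; it is an illustration, not the patent's own code.

```java
import java.util.Collections;

import org.redisson.api.RScript;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;

/** Sketch of steps 1041-1043: atomically check the token count and consume tokens. */
public final class TokenAcquirer {

    /** Returns 1 and decrements the bucket when enough tokens remain, otherwise returns 0. */
    private static final String ACQUIRE_SCRIPT =
            "local current = tonumber(redis.call('GET', KEYS[1]) or '0') " +
            "local cost = tonumber(ARGV[1]) " +
            "if current >= cost then " +
            "  redis.call('DECRBY', KEYS[1], cost) " +
            "  return 1 " +
            "else " +
            "  return 0 " +
            "end";

    private final RedissonClient redisson;

    public TokenAcquirer(RedissonClient redisson) {
        this.redisson = redisson;
    }

    /** True means the service request may be executed; false means it must be rejected. */
    public boolean tryAcquire(String requestSourceAddress, long tokensPerRequest) {
        String bucketKey = "token_bucket:" + requestSourceAddress; // hypothetical key layout
        RScript script = redisson.getScript(StringCodec.INSTANCE);
        Long granted = script.eval(RScript.Mode.READ_WRITE, ACQUIRE_SCRIPT,
                RScript.ReturnType.INTEGER,
                Collections.singletonList(bucketKey),
                String.valueOf(tokensPerRequest));
        return granted != null && granted == 1L;
    }
}
```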
As an optional implementation manner, when it is determined that the token can be obtained for the service request from the token bucket, the distributed current limiting method further includes:
A: determining the number of tokens consumed by the user within the present current limiting interval time.
It should be noted that the token consumption amount is the number of tokens the user has used within the present current limiting interval time.
In a specific implementation of step A, when a token can be obtained from the token bucket for a service request, the number of tokens consumed by the user within the present current limiting interval time is determined; for example, it may be 70, which is not specifically limited in this application.
B: at the starting moment of the next current limiting interval time, putting a number of tokens corresponding to the consumed amount into the token bucket.
It should be noted that the next current limiting interval time is the interval immediately following the present one.
In a specific implementation of step B, at the starting moment of the next current limiting interval time, tokens corresponding to the consumed amount are put into the token bucket. Continuing the example in step A, when the user can use 100 tokens per current limiting interval time and has consumed 70 in the present interval, only 70 tokens are added at the beginning of the next interval, which brings the token bucket back to its maximum capacity.
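A usage sketch of this refill behaviour: because the Lua script above caps the bucket at its capacity, topping it up by the full per-interval quota at every interval start is equivalent in effect to putting back exactly the tokens consumed in the previous interval (70 in the example). The patent does not say how the interval boundary is triggered; a plain scheduler is assumed here purely for illustration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch of the per-interval refill, reusing the TokenDepositor sketched earlier. */
public final class RefillScheduler {

    private RefillScheduler() { }

    public static void schedule(TokenDepositor depositor, String requestSourceAddress,
                                long tokensPerInterval, long intervalSeconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Depositing the full quota while the script caps the bucket at its capacity
        // restores exactly the amount consumed in the previous interval.
        scheduler.scheduleAtFixedRate(
                () -> depositor.deposit(requestSourceAddress, tokensPerInterval, tokensPerInterval),
                0, intervalSeconds, TimeUnit.SECONDS);
    }
}
```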
As an optional implementation manner, when it is determined that the token cannot be obtained for the service request from the token bucket, the distributed current limiting method further includes:
and at the starting moment of the next current limiting interval time, putting tokens corresponding to the current limiting strategy into the token bucket to which the user belongs, based on the current limiting strategy.
In a specific implementation, when the user has exhausted all tokens within the present current limiting interval time, no further tokens are put into the token bucket during that interval; at the starting moment of the next current limiting interval time, tokens corresponding to the current limiting strategy are put into the user's token bucket based on the strategy corresponding to the user.
As an optional implementation manner, after rejecting the service request, the distributed current limiting method further includes:
A: determining the user as an excess access user, and determining the user information of the excess access user.
It should be noted that an excess access user is a user who exhausts all tokens in the token bucket within a certain current limiting interval time. The user information is the basic information of the excess access user and here includes the request source address, user name, user contact information and API path. The contact information may include the user's email address or mobile phone number, which is not particularly limited in this application. As an alternative embodiment, the user information may also include other information about the excess access user, such as the number of tokens invoked and the number of tokens exceeded, which is likewise not specifically limited.
In a specific implementation, after the user's service request is rejected, the user is determined to be an excess access user and the user information of that user is determined, namely the request source address, user name, contact information and API path.
B: sending the current limiting strategies corresponding to the different current limiting levels to the excess access user according to the user contact information of the excess access user.
In a specific implementation of step B, the current limiting strategies corresponding to the different current limiting levels are sent to the excess access user according to the user's contact information, so that the user can choose among the strategies of the different levels.
C: when an ordering operation of the excess access user on a current limiting strategy is detected, acquiring the order level placed by the excess access user, and determining the current limiting strategy corresponding to that order level as the current limiting strategy of the excess access user, so that the next time the excess access user sends a service request, tokens corresponding to that strategy are provided to the user according to it.
It should be noted that an ordering operation is the order the user places when purchasing a current limiting strategy, and the order level is the current limiting level corresponding to that order; the order level is any one of the preset current limiting levels.
In a specific implementation of step C, when an ordering operation of the excess access user on a current limiting strategy is detected, the order level is acquired and the strategy corresponding to it is determined as the user's current limiting strategy, so that tokens corresponding to that strategy are provided the next time the user sends a service request. Continuing the earlier example, if the excess access user orders the third current limiting level, the strategy corresponding to the third level becomes the user's strategy, so that the next time the user sends a service request, tokens are provided according to that strategy, i.e. 1000 tokens are put into the token bucket each minute and each service request consumes 1 token.
In this way, when a user sends service requests and the number of tokens consumed within the current limiting interval time exceeds the number allowed by the user's current limiting strategy, the existing strategies of the different levels can be sent to the user, and the user can choose one according to its needs; the next time the user sends a service request, tokens are provided according to the strategy of the level the user selected.
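A heavily simplified sketch of steps A-C for the excess access user, reusing the CurrentLimitLevel enum from earlier. The notifier interface, the message format and the strategy table are assumptions; the patent only describes the overall flow.

```java
import java.util.Map;

/** Sketch of the excess-access handling: notify the user and apply the ordered level. */
public final class ExcessAccessHandler {

    /** Assumed notification channel (email, SMS, ...); not specified by the patent. */
    public interface Notifier {
        void send(String contact, String message);
    }

    private final Notifier notifier;
    private final Map<String, CurrentLimitLevel> levelBySourceAddress;

    public ExcessAccessHandler(Notifier notifier,
                               Map<String, CurrentLimitLevel> levelBySourceAddress) {
        this.notifier = notifier;
        this.levelBySourceAddress = levelBySourceAddress;
    }

    /** Step B: send the available current limiting strategies to the excess access user. */
    public void notifyExcessAccess(String contact) {
        StringBuilder message = new StringBuilder("Available current limiting strategies:\n");
        for (CurrentLimitLevel level : CurrentLimitLevel.values()) {
            message.append(level.name())
                   .append(": ").append(level.tokensPerInterval)
                   .append(" tokens per ").append(level.intervalSeconds).append("s\n");
        }
        notifier.send(contact, message.toString());
    }

    /** Step C: apply the ordered level so the next request is served under the new strategy. */
    public void applyOrder(String requestSourceAddress, CurrentLimitLevel orderedLevel) {
        levelBySourceAddress.put(requestSourceAddress, orderedLevel);
    }
}
```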
The distributed current limiting method based on a token bucket provided by the embodiments of the application first receives an access request sent by a user and determines the request source address of the user; then determines, based on that address, the current limiting strategy corresponding to the user, the strategy characterizing the number of tokens the user can use in each current limiting interval time; based on the strategy, puts tokens corresponding to it into the token bucket to which the user belongs at the starting moment of each current limiting interval time; and finally receives a service request sent by the user and determines whether a token can be acquired from the token bucket for the service request; if so, the token is acquired and assigned to the service request so that the request is executed with it; if not, the service request is rejected.
Compared with current limiting methods in the prior art, the distributed current limiting method provided by the application determines the current limiting strategy corresponding to the user from the user's request source address and puts the corresponding tokens into the token bucket according to that strategy, thereby limiting the user's traffic. Because each user has its own current limiting strategy and is limited by that strategy, the problem that distributed current limiting cannot be realized in the prior art is solved, the accuracy and real-time performance of user current limiting are improved, and distributed current limiting for different users is achieved.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a distributed current limiting apparatus based on token bucket according to an embodiment of the present disclosure. As shown in fig. 3, the distributed current limiting apparatus 300 includes:
a receiving module 301, configured to receive an access request sent by a user, and determine a request source address of the user;
a current limiting policy determining module 302, configured to determine a current limiting policy corresponding to the user based on the request source address of the user, wherein the current limiting policy characterizes the number of tokens that the user can use in each current limiting interval time;
a token placing module 303, configured to place a token corresponding to the current limiting policy into a token bucket to which the user belongs at a start time of each current limiting interval time based on the current limiting policy;
a determining module 304, configured to receive a service request sent by a user, and determine whether the token can be obtained for the service request from the token bucket;
a request executing module 305, configured to, if so, obtain the token and assign it to the service request, so that the service request is executed with the token;
a request rejecting module 306, configured to reject the service request if not.
Further, the distributed current limiting apparatus 300 further includes a current limiting policy setting module, where the current limiting policy setting module is configured to:
and, for each current limiting level, setting, for the current limiting strategy corresponding to that level, the current limiting interval time, the number of tokens to be put into the token bucket in each current limiting interval time, and the number of tokens consumed by the user in each service request.
Further, when determining the current limiting policy corresponding to the user based on the request source address of the user, the current limiting policy determining module 302 is further configured to:
judging whether the user is a white list user based on the request source address of the user;
if not, determining the current limiting level of the user based on the request source address, and determining the current limiting strategy corresponding to the user based on the current limiting level.
Further, the determining module 304 determines whether the token can be obtained from the token bucket for the service request by:
judging, within each current limiting interval time, whether the current number of tokens in the token bucket is greater than or equal to the number of tokens consumed by the service request;
if so, obtaining the token for the service request from the token bucket;
if not, the token cannot be obtained for the service request from the token bucket.
Further, the distributed current limiting apparatus 300 further includes a first placing module, and when it is determined that the token can be obtained for the service request from the token bucket, the first placing module is configured to:
determining the number of tokens consumed by the user within the present current limiting interval time;
and at the starting moment of the next current limiting interval time, putting a number of tokens corresponding to the consumed amount into the token bucket.
Further, the distributed current limiting apparatus 300 further includes a second placing module, and when it is determined that the token cannot be obtained for the service request from the token bucket, the second placing module is configured to:
and at the starting moment of the next current limiting interval time after the current limiting interval time, putting tokens corresponding to the current limiting strategy into a token bucket belonging to the user based on the current limiting strategy.
Further, the distributed current limiting apparatus 300 further includes a current limiting policy sending module, and after rejecting the service request, the current limiting policy sending module is configured to:
determining the user as an excess access user, and determining user information of the excess access user; the user information comprises a request source address, a user name, user contact information and an API path;
sending the current limiting strategies corresponding to the different current limiting levels to the excess access user according to the user contact information of the excess access user;
when an ordering operation of the excess access user on a current limiting strategy is detected, acquiring the order level placed by the excess access user, and determining the current limiting strategy corresponding to the order level as the current limiting strategy of the excess access user, so that the next time the excess access user sends a service request, tokens corresponding to that strategy are provided to the excess access user; wherein the order level is any one of the current limiting levels.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 4, the electronic device 400 includes a processor 410, a memory 420, and a bus 430.
The memory 420 stores machine-readable instructions executable by the processor 410, when the electronic device 400 runs, the processor 410 communicates with the memory 420 through the bus 430, and when the machine-readable instructions are executed by the processor 410, the steps of the distributed current limiting method based on the token bucket in the method embodiments shown in fig. 1 and fig. 2 may be performed.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the token bucket-based distributed current limiting method in the method embodiments shown in fig. 1 and fig. 2 may be executed.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solutions of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that like reference numbers and letters refer to like items in the following figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present application is not limited to these embodiments. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications, changes or equivalent substitutions of some features can still be made to the technical solutions described in the foregoing embodiments within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the exemplary embodiments of the present application and are intended to be covered by its protection scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A distributed throttling method based on token buckets, the distributed throttling method comprising:
receiving an access request sent by a user, and determining a request source address of the user;
determining a current limiting strategy corresponding to the user based on the request source address of the user; wherein the current limiting strategy characterizes the number of tokens that the user can use in each current limiting interval time;
based on the current limiting strategy, putting tokens corresponding to the current limiting strategy into the token bucket to which the user belongs at the starting moment of each current limiting interval time;
receiving a service request sent by the user, and determining whether the token can be acquired from the token bucket for the service request;
if so, acquiring the token and assigning it to the service request, so that the service request is executed with the token;
if not, rejecting the service request.
2. The distributed current limiting method of claim 1, wherein prior to said receiving an access request sent by a user, the distributed current limiting method further comprises:
and, for each current limiting level, setting, for the current limiting strategy corresponding to that level, the current limiting interval time, the number of tokens to be put into the token bucket in each current limiting interval time, and the number of tokens consumed by the user in each service request.
3. The distributed current limiting method according to claim 2, wherein the determining the current limiting policy corresponding to the user based on the request source address of the user includes:
judging whether the user is a white list user based on the request source address of the user;
if not, determining the current limiting level of the user based on the request source address, and determining the current limiting strategy corresponding to the user based on the current limiting level.
4. The distributed current limiting method of claim 2, wherein whether the token can be obtained from the token bucket for the service request is determined by:
judging, within each current limiting interval time, whether the current number of tokens in the token bucket is greater than or equal to the number of tokens consumed by the service request;
if so, obtaining the token for the service request from the token bucket;
if not, the token cannot be obtained for the service request from the token bucket.
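
The judgment in claim 4 reduces to a comparison between the bucket's current token count and the per-request token cost, as in this small restatement of the check already embedded in the sketch under claim 1; the function name is an assumption.

def can_acquire_token(current_tokens, tokens_per_request):
    """Claim 4: the token can be acquired only if the bucket currently holds
    at least as many tokens as the service request consumes."""
    return current_tokens >= tokens_per_request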
5. The distributed current limiting method of claim 4, wherein, when it is determined that the token can be acquired for the service request from the token bucket, the distributed current limiting method further comprises:
determining the number of tokens consumed by the user within the current limiting interval time;
and at the starting moment of the next current limiting interval time, putting tokens corresponding to the consumed number back into the token bucket.
6. The distributed current limiting method of claim 4, wherein, when it is determined that the token cannot be acquired for the service request from the token bucket, the distributed current limiting method further comprises:
at the starting moment of the next current limiting interval time, putting tokens corresponding to the current limiting strategy into the token bucket belonging to the user based on the current limiting strategy.
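
Claims 5 and 6 describe what happens at the start of the next current limiting interval time: if tokens were consumed, the consumed number is put back (claim 5); if the token could not be acquired, the bucket is refilled according to the strategy (claim 6). The two helpers below sketch these cases separately; ThrottleStrategy is reused from the sketch under claim 1, and the helper names are assumptions.

def refill_after_consumption(current_tokens, consumed_in_interval):
    """Claim 5: at the next interval start, put back exactly the number of
    tokens the user consumed in the previous current limiting interval."""
    return current_tokens + consumed_in_interval


def refill_after_rejection(strategy):
    """Claim 6: at the next interval start, refill the bucket with the number
    of tokens defined by the current limiting strategy."""
    return strategy.tokens_per_interval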
7. The distributed current limiting method of claim 1, wherein after rejecting the service request, the distributed current limiting method further comprises:
determining the user as an excess access user, and determining user information of the excess access user; wherein the user information comprises a request source address, a user name, user contact information, and an API path;
sending the current limiting strategy corresponding to each current limiting grade to the excess access user according to the contact information of the excess access user;
and when an ordering operation of the excess access user on a current limiting strategy is detected, acquiring the ordered grade of the excess access user, and determining the current limiting strategy corresponding to the ordered grade as the current limiting strategy corresponding to the excess access user, so that, when the excess access user next sends a service request, a token is provided to the excess access user according to the current limiting strategy corresponding to the excess access user; wherein the ordered grade is any one of the current limiting grades.
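
A sketch of the upgrade flow in claim 7: after a rejection the user is recorded as an excess access user, the strategies for the different current limiting grades are sent to the recorded contact information, and a subsequent order switches the user's grade so that the next service request is limited under the new strategy. ExcessUserRegistry and the notify callback are assumptions of this sketch; STRATEGY_BY_GRADE and the address-to-grade mapping are reused from the earlier sketches.

class ExcessUserRegistry:
    """Tracks excess access users and applies a strategy change after an order."""

    def __init__(self, grade_by_address, notify):
        self._grade_by_address = grade_by_address  # shared with the limiter's grade lookup
        self._notify = notify                      # callback: (contact, strategies) -> None
        self._excess_users = {}                    # source address -> user information

    def record_rejection(self, source_address, user_name, contact, api_path):
        # Record the excess access user's information (claim 7) ...
        self._excess_users[source_address] = {
            "user_name": user_name, "contact": contact, "api_path": api_path,
        }
        # ... and send the strategies for each current limiting grade to the contact.
        self._notify(contact, STRATEGY_BY_GRADE)

    def place_order(self, source_address, ordered_grade):
        # When an ordering operation is detected, the ordered grade becomes the
        # user's grade, so the next service request is limited under it.
        if ordered_grade in STRATEGY_BY_GRADE:
            self._grade_by_address[source_address] = ordered_grade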
8. A distributed current limiting apparatus based on token buckets, the distributed current limiting apparatus comprising:
the receiving module is used for receiving an access request sent by a user and determining a request source address of the user;
a current limiting strategy determining module, configured to determine a current limiting strategy corresponding to the user based on the request source address of the user; wherein the current limiting strategy characterizes the number of tokens that the user is able to use in each current limiting interval time;
the token placing module is used for placing tokens corresponding to the current limiting strategies into a token bucket to which the user belongs at the starting moment of each current limiting interval time based on the current limiting strategies;
the judging module is used for receiving a service request sent by a user and judging whether the token can be acquired from the token bucket for the service request or not;
the request execution module is used for, if the token can be acquired, acquiring the token and assigning it to the service request, so as to execute the service request carrying the token;
and the request rejection module is used for rejecting the service request if the token cannot be acquired.
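
Claim 8 restates the method of claim 1 as an apparatus of modules. One possible code outline of that decomposition is shown below; the module names mirror the claim, while everything else, including the class name and method signatures, is an assumption. The method bodies would delegate to the helpers sketched under the method claims.

class DistributedCurrentLimiter:
    """Module outline mirroring claim 8; each method stands for one module."""

    def receive_access_request(self, request): ...          # receiving module
    def determine_strategy(self, source_address): ...       # current limiting strategy determining module
    def place_tokens(self, source_address, strategy): ...   # token placing module
    def judge_token_available(self, service_request): ...   # judging module
    def execute_request(self, service_request, token): ...  # request execution module
    def reject_request(self, service_request): ...          # request rejection module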
9. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the distributed current limiting method based on a token bucket according to any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the distributed current limiting method based on a token bucket according to any one of claims 1 to 7.
CN202210537854.8A 2022-05-17 2022-05-17 Distributed current limiting method and distributed current limiting device based on token bucket Active CN115037693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210537854.8A CN115037693B (en) 2022-05-17 2022-05-17 Distributed current limiting method and distributed current limiting device based on token bucket

Publications (2)

Publication Number Publication Date
CN115037693A true CN115037693A (en) 2022-09-09
CN115037693B CN115037693B (en) 2023-05-26

Family

ID=83121575

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210537854.8A Active CN115037693B (en) 2022-05-17 2022-05-17 Distributed current limiting method and distributed current limiting device based on token bucket

Country Status (1)

Country Link
CN (1) CN115037693B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10659371B1 (en) * 2017-12-11 2020-05-19 Amazon Technologies, Inc. Managing throttling limits in a distributed system
WO2020211222A1 (en) * 2019-04-15 2020-10-22 厦门市美亚柏科信息股份有限公司 Method and device for providing micro-service based on data service platform, and storage medium
CN110276182A (en) * 2019-06-10 2019-09-24 必成汇(成都)科技有限公司 The implementation method of API distribution current limliting
CN110611623A (en) * 2019-08-30 2019-12-24 江苏苏宁物流有限公司 Current limiting method and device
CN111447150A (en) * 2020-02-29 2020-07-24 中国平安财产保险股份有限公司 Access request current limiting method, server and storage medium
CN111901249A (en) * 2020-07-31 2020-11-06 深圳前海微众银行股份有限公司 Service current limiting method, device, equipment and storage medium
CN112929291A (en) * 2021-02-18 2021-06-08 欧冶云商股份有限公司 Distributed current limiting method based on redis, computer equipment and storage medium

Also Published As

Publication number Publication date
CN115037693B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
CN108683604B (en) Concurrent access control method, terminal device, and medium
CN109831504B (en) Micro service request processing method, device and equipment
CN111556059A (en) Abnormity detection method, abnormity detection device and terminal equipment
CN108306874B (en) Service interface access current limiting method and device
CN111901249A (en) Service current limiting method, device, equipment and storage medium
JPWO2018220709A1 (en) Resource management system, management device, method and program
US10749867B1 (en) Systems and methods for device detection and registration
CN106034138A (en) Remote service calling method and remote service calling device
CN114257551A (en) Distributed current limiting method and system and storage medium
CN105337783B (en) Monitor the method and device of communication equipment non-normal consumption flow
US20170289354A1 (en) System and Method for Allocation And Management Of Shared Virtual Numbers
CN111371841B (en) Data monitoring method and device
CN114223177A (en) Access control method, device, server and computer readable medium
CN115934202A (en) Data management method, system, data service gateway and storage medium
CN110750761A (en) Applet access control method and device
CN114422439A (en) Interface current limiting method and device, computer equipment and storage medium
CN108629582B (en) Service processing method and device
CN111953650A (en) Service account logout method, device, equipment and storage medium
US7778660B2 (en) Mobile communications terminal, information transmitting system and information receiving method
CN115037693A (en) Distributed current limiting method and distributed current limiting device based on token bucket
CN111597041A (en) Calling method and device of distributed system, terminal equipment and server
CN112417402B (en) Authority control method, authority control device, authority control equipment and storage medium
CN113590180B (en) Detection strategy generation method and device
CN110245016B (en) Data processing method, system, device and terminal equipment
CN110765426A (en) Equipment permission setting method, device, equipment and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant