CN111488135A - Current limiting method and device for high-concurrency system, storage medium and equipment - Google Patents
- Publication number
- CN111488135A CN111488135A CN201910079932.2A CN201910079932A CN111488135A CN 111488135 A CN111488135 A CN 111488135A CN 201910079932 A CN201910079932 A CN 201910079932A CN 111488135 A CN111488135 A CN 111488135A
- Authority
- CN
- China
- Prior art keywords
- resource access
- priority
- access request
- access requests
- current limiting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/20—Software design
Abstract
The invention provides a current limiting method and device, a storage medium, and equipment for a high-concurrency system. The method comprises: judging whether the resource access volume of the high-concurrency system meets a preset current limiting condition; if it does, dividing the received resource access requests by priority and caching each request into the cache queue of its corresponding priority; and processing the resource access requests in each cache queue in turn according to the priority order of the queues. The invention can limit flow during highly concurrent access to the system, ensure normal operation of the core system, and improve user experience.
Description
Technical Field
The invention relates to the technical field of system design, and in particular to a current limiting method and device, a storage medium, and equipment for a high-concurrency system.
Background
In high-concurrency system design, current limiting is one of the most common means of protecting the system so that it keeps operating normally. Common current limiting algorithms include the counter, token bucket, and leaky bucket algorithms, where:
Token bucket current limiting: the token bucket is a bucket of fixed capacity that stores tokens. Tokens are added to the bucket at a fixed rate, and once the bucket is full, newly added tokens are discarded. Whether a request is processed depends on whether enough tokens remain in the bucket; when the token count drops to zero, new requests are rejected. The token bucket allows a certain degree of burst traffic: as long as tokens remain, requests can be handled, and a single request may consume multiple tokens at once. What the token bucket holds is tokens.
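A minimal token bucket consistent with this description might look like the following sketch; the class name, parameters, and use of `time.monotonic` are illustrative choices, not part of the patent.

```python
import time

class TokenBucket:
    """Token-bucket limiter: tokens refill at a fixed rate up to a fixed
    capacity; a request is admitted only if enough tokens remain."""

    def __init__(self, capacity, refill_rate):
        self.capacity = capacity          # maximum tokens the bucket holds
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow(self, tokens_needed=1):
        now = time.monotonic()
        # Add tokens for the elapsed interval; overflow is discarded.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= tokens_needed:
            self.tokens -= tokens_needed  # a request may consume several tokens
            return True
        return False                      # not enough tokens: reject
```

Because refills accrue continuously, a burst can drain a full bucket at once, which is exactly the burst tolerance the text describes.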
Leaky bucket current limiting: the leaky bucket has a fixed capacity, requests flow out at a fixed constant rate, and the inflow rate is arbitrary; when the accumulated inflow reaches the bucket's capacity, new incoming requests are rejected. A leaky bucket can be viewed as a queue with a fixed capacity and a fixed egress rate, so it limits the egress rate of requests. What the leaky bucket holds is requests.
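The leaky bucket can be sketched in the same style; here the bucket tracks queued "water" that drains at a constant rate, and the names and parameters are again illustrative.

```python
import time

class LeakyBucket:
    """Leaky-bucket limiter: requests queue up to a fixed capacity and
    drain at a constant rate; arrivals beyond capacity are rejected."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity      # maximum queued requests
        self.leak_rate = leak_rate    # requests drained per second
        self.water = 0.0              # current queue occupancy
        self.last_leak = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Drain at the constant outflow rate since the last check.
        self.water = max(0.0, self.water - (now - self.last_leak) * self.leak_rate)
        self.last_leak = now
        if self.water < self.capacity:
            self.water += 1           # admit: the request joins the queue
            return True
        return False                  # bucket full: reject the new arrival
```

Unlike the token bucket, the constant drain smooths output and absorbs no bursts beyond the queue's capacity.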
Counter current limiting: a counter is sometimes used for limiting, mainly to cap the total concurrency within a period of time, for example in a database connection pool, a thread pool, or the concurrency of a flash sale. Counter limiting rejects requests once the total number within the period exceeds a set threshold; it is simple, coarse total-count limiting rather than average-rate limiting.
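A fixed-window counter matching this description might be sketched as follows; the window length and threshold are illustrative parameters.

```python
import time

class WindowCounter:
    """Counter-based limiting: reject once the total number of requests in
    the current fixed window exceeds a threshold. This is crude total-count
    limiting, not average-rate limiting."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.count = 0
        self.window_start = time.monotonic()

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start = now   # new window: reset the counter
            self.count = 0
        if self.count < self.threshold:
            self.count += 1
            return True
        return False                  # threshold reached within this window
```

The window reset is what makes this "rough": a burst straddling a window boundary can briefly pass twice the threshold.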
Current limiting is commonly applied at the gateway level or the application level. Gateway-level limiting places a proxy server (Nginx or similar) at the system's main entry point and filters excess requests by setting an upper bound on resource access through the proxy. Application-level limiting applies a current limiting algorithm inside the application system to reject excess resource access requests.
These current limiting algorithms limit flow simply and crudely by discarding excess requests. Inevitably, some important core resource access requests then go unanswered, so the core system cannot operate normally, and a serious flaw remains in a real production environment.
Disclosure of Invention
In view of the above problems, the present invention provides a current limiting method and device, a storage medium, and equipment for a high-concurrency system, which can limit flow during highly concurrent access and ensure normal operation of the core system, thereby improving user experience.
In one aspect of the present invention, a current limiting method for a high concurrency system is provided, the method comprising:
judging whether the resource access amount of the high concurrency system meets a preset current limiting condition or not;
if the resource access amount of the high concurrency system meets a preset current limiting condition, performing priority division on the received resource access requests, and caching the resource access requests to cache queues with corresponding priorities according to the priorities after the resource access requests are divided;
and sequentially processing the resource access requests in each buffer queue according to the priority order of the buffer queues.
Optionally, the prioritizing the received resource access requests includes:
and dividing the priority of each resource access request according to a preset priority division standard and the service type of the received resource access request, wherein the priority division standard comprises the corresponding relation between the service type and the priority to which the service type belongs.
Optionally, the prioritizing the received resource access requests includes:
and extracting the priority identification carried in each received resource access request, and dividing the priority of the corresponding resource access request according to the extracted priority identification.
Optionally, the caching the resource access requests to the cache queues of the corresponding priorities according to the priorities after the resource access requests are divided includes:
and for resource access requests of the same priority, preferentially caching the requests that have waited longer into the cache queue of the corresponding priority.
Optionally, the method further comprises:
when the resource access requests in the cache queue with the lowest priority are being processed, if the number of resource access requests waiting to be cached in the cache queue with the highest priority exceeds the queue cache threshold, discarding the resource access requests in the lowest-priority cache queue and processing the resource access requests in the highest-priority cache queue.
Optionally, the determining whether the resource access amount of the high concurrency system meets a preset current limiting condition includes:
and judging whether the current resource access amount of the high concurrency system is larger than a preset current limiting threshold value or not.
In another aspect of the present invention, there is provided a current limiting apparatus for a high concurrency system, including:
the judging module is used for judging whether the resource access amount of the high concurrency system meets a preset current limiting condition or not;
the flow limiting control module is used for carrying out priority division on the received resource access requests when the resource access amount of the high concurrency system meets a preset flow limiting condition, and caching the resource access requests to cache queues with corresponding priorities according to the priorities after the resource access requests are divided;
and the processing module is used for sequentially processing the resource access requests in each buffer queue according to the priority order of the buffer queues.
Optionally, the flow limiting control module is specifically configured to prioritize the resource access requests according to a preset prioritization standard and a service type of a received resource access request, where the prioritization standard includes a correspondence between the service type and a priority to which the service type belongs, or,
and extracting the priority identification carried in each received resource access request, and dividing the priority of the corresponding resource access request according to the extracted priority identification.
Optionally, the flow limiting control module is further specifically configured to, for resource access requests of the same priority, preferentially cache the requests that have waited longer into the cache queue of the corresponding priority.
Optionally, the processing module is further configured to, when processing the resource access requests in the lowest-priority cache queue, discard those requests and process the requests in the highest-priority cache queue if the number of requests waiting to be cached in the highest-priority queue exceeds the queue caching threshold.
Furthermore, the invention also provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as described above.
Furthermore, the present invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method as described above when executing the program.
According to the current limiting method and device, storage medium, and equipment for the high-concurrency system, when the system receives highly concurrent access, all resource access requests are divided by priority and cached into the cache queues of their corresponding priorities. High-priority requests are then processed first, following the priority order of the cache queues, so that important core resource access requests are answered quickly, normal operation of the core system is guaranteed, and user experience is improved.
The foregoing description is only an overview of the technical solutions of the present invention. To make the technical means of the present invention more clearly understood, and to make the above and other objects, features, and advantages more readily apparent, embodiments of the invention are described below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flow chart illustrating a current limiting method of a high concurrency system according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a method for limiting current in a high concurrency system according to another embodiment of the invention;
fig. 3 is a schematic structural diagram of a current limiting device of a high concurrency system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms such as those defined in commonly used dictionaries should be interpreted as having a meaning consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 schematically shows a flow chart of a current limiting method of a high concurrency system according to an embodiment of the present invention. Referring to fig. 1, the method for limiting current of a high concurrency system according to the embodiment of the present invention specifically includes steps S11-S13, as follows:
and S11, judging whether the resource access amount of the high concurrency system meets a preset current limiting condition.
In this embodiment, the current limiting condition is configured first, as the condition for activating the current limiting function, and the principle for prioritizing resource access requests is also configured, so that the received resource access requests can later be divided by priority.
Specifically, whether the resource access amount of the high concurrency system meets the preset current limiting condition can be judged through the configured current limiting condition.
And S12, if the resource access amount of the high concurrency system meets the preset current limiting condition, performing priority division on the received resource access requests, and caching the resource access requests to cache queues with corresponding priorities according to the priorities after the resource access requests are divided.
And S13, sequentially processing the resource access requests in each buffer queue according to the priority order of the buffer queues.
According to the current limiting method for the high-concurrency system, when the system receives highly concurrent access, all resource access requests are divided by priority and cached into the cache queues of their corresponding priorities. High-priority requests are then processed first, following the priority order of the cache queues, so that important core resource access requests are answered quickly, normal operation of the core system is guaranteed, and user experience is improved.
In another embodiment of the present invention, referring to fig. 2, whether the system's current resource access volume meets the preset current limiting condition is determined by checking whether it exceeds a preset current limiting threshold. If the number of current resource access requests reaches the threshold, the current limiting function is activated: the received resource access requests are divided by priority, and once each request's priority is determined, it is cached into the queue of the corresponding priority. High-priority requests go into the high-priority queue, second-priority requests into the secondary queue, and the lowest-priority requests into the lowest-priority queue. The requests in the high-priority cache queue are then processed first, followed by those in the secondary queue, and finally those with the lowest priority. If the system's resource access volume does not meet the preset condition, that is, the number of current requests has not reached the threshold, the received requests are processed through the normal flow.
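The overall flow of fig. 2 — threshold check, priority division, buffering, and priority-ordered draining — might be sketched as below; the threshold value, priority labels, and function names are assumptions for illustration.

```python
from collections import deque

LIMIT_THRESHOLD = 3                       # illustrative current limiting threshold
PRIORITIES = ("high", "mid", "low")       # queue order: drained high to low
queues = {p: deque() for p in PRIORITIES}

def handle(requests, process):
    """requests: iterable of (priority, payload); process: handler callback."""
    requests = list(requests)
    if len(requests) <= LIMIT_THRESHOLD:
        for _, payload in requests:       # below threshold: normal processing
            process(payload)
        return
    for priority, payload in requests:    # limiting active: classify and buffer
        queues[priority].append(payload)
    for priority in PRIORITIES:           # drain queues in priority order
        while queues[priority]:
            process(queues[priority].popleft())
```

A real system would run the draining loop continuously rather than per batch; the batch form just makes the two branches of the flow chart visible.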
In an embodiment of the present invention, the received resource access request is prioritized, which is specifically implemented as follows: and dividing the priority of each resource access request according to a preset priority division standard and the service type of the received resource access request, wherein the priority division standard comprises the corresponding relation between the service type and the priority to which the service type belongs.
The service types of the resource access request comprise resource addition, resource modification, resource deletion, resource query and the like.
In a specific embodiment, each resource access request may be labeled with a level according to the urgency of its service type. Specifically, requests whose service type is resource addition, resource modification, or resource deletion take the highest priority, and requests for resource query take the second priority. Requests at the lowest priority level may even be dropped.
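The service-type rule above might be captured by a lookup table like the following sketch; the type strings, numeric levels, and the fallback rule for unknown types are illustrative assumptions.

```python
# 0 = highest priority; larger numbers mean lower priority.
PRIORITY_BY_SERVICE_TYPE = {
    "resource_add": 0,      # writes take the highest priority
    "resource_modify": 0,
    "resource_delete": 0,
    "resource_query": 1,    # queries take the second priority level
}

def classify(request):
    # Unrecognized service types fall to the lowest level,
    # which under the embodiment's rule may even be dropped.
    return PRIORITY_BY_SERVICE_TYPE.get(request["service_type"], 2)
```

Keeping the correspondence in a table makes the "prioritization standard" of the claims a piece of configuration rather than code.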
In another embodiment of the present invention, the received resource access request is prioritized, which is specifically implemented as follows: and extracting the priority identification carried in each received resource access request, and dividing the priority of the corresponding resource access request according to the extracted priority identification.
In this embodiment, the priority of a resource access request may be marked by the requester. Specifically, the requester marks a priority identifier in the information carried in the request, and the priorities of requests are distinguished by a special identifier or by several different identifiers.
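As one hedged illustration of requester-marked priorities, the identifier could travel in a request header; the header name `X-Priority` and the numeric convention are assumptions, since the patent only requires some carried identifier.

```python
def priority_from_request(headers, default=2):
    """Extract a requester-supplied priority mark from request metadata.
    Convention assumed here: "0" = highest, larger numbers = lower."""
    tag = headers.get("X-Priority")
    if tag is None:
        return default        # unmarked requests get the default level
    try:
        return int(tag)
    except ValueError:
        return default        # malformed marks fall back to the default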
In this embodiment of the present invention, caching the resource access requests into the cache queues of the corresponding priorities specifically includes: for resource access requests of the same priority, preferentially caching the requests that have waited longer into the cache queue of the corresponding priority.
When the system receives highly concurrent access, all resource access requests are cached separately by priority; the highest-level requests are processed first and the lowest-level requests last. Among requests of the same priority, those that have waited longer are cached preferentially into the corresponding queue. In particular, for low-priority requests, the longer-aged ones can be preferentially cached into the low-priority queue while short-aged ones are discarded directly, which ensures normal operation of the core system and quick responses to important core resource access requests.
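Under one reading of this rule — older requests within a level are kept and served first, while fresh arrivals are dropped once the level's queue is full — a per-level buffer might look like the sketch below; the class name, heap layout, and size limit are all assumptions.

```python
import heapq
import time

class AgedQueue:
    """One priority level's buffer: oldest request out first,
    newest arrivals dropped when the buffer is full."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._heap = []       # min-heap keyed by arrival time: oldest first

    def offer(self, request, arrived_at=None):
        arrived_at = time.monotonic() if arrived_at is None else arrived_at
        if len(self._heap) >= self.max_size:
            return False      # full: the short-aged new arrival is discarded
        heapq.heappush(self._heap, (arrived_at, request))
        return True

    def take(self):
        return heapq.heappop(self._heap)[1] if self._heap else None
```

If arrivals are enqueued in order, a plain FIFO deque gives the same behavior; the heap only matters when arrival timestamps can come in out of order.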
Further, in the embodiment of the present invention, when the resource access requests in the lowest-priority cache queue are being processed, if the number of requests waiting to be cached in the highest-priority queue exceeds the queue cache threshold, the requests in the lowest-priority queue are discarded and the requests in the highest-priority queue are processed.
Specifically, the resource access request handler processes the requests in the high-priority cache queue first, then those in the secondary queue, and then those with the lowest priority. If, while the lowest-priority requests are being processed, the highest-priority queue is found to be full, the requests in the current queue are discarded and those in the highest-priority queue are processed instead, which further ensures normal operation of the core system and quick responses to important core resource access requests.
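The preemption rule described above — abandon the lowest-priority queue when the highest-priority queue backs up past its cache threshold — can be sketched as follows; the function name and threshold parameter are illustrative.

```python
from collections import deque

def drain_lowest(low_q: deque, high_q: deque, high_threshold: int, process):
    """Drain the lowest-priority queue, but yield to the highest-priority
    queue if it has backed up past its cache threshold."""
    while low_q:
        if len(high_q) > high_threshold:
            low_q.clear()                 # drop the backlogged low-priority work
            while high_q:                 # serve the high-priority backlog
                process(high_q.popleft())
            return
        process(low_q.popleft())
```

Checking the high-priority backlog once per processed request keeps the preemption latency bounded by a single request's processing time.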
For simplicity of explanation, the method embodiments are described as a series of acts or combinations, but those skilled in the art will appreciate that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently with other steps in accordance with the embodiments of the invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Fig. 3 is a schematic view illustrating a structure of a current limiting apparatus of a high concurrency system according to an embodiment of the present invention. Referring to fig. 3, the current limiting apparatus of the high concurrency system according to the embodiment of the present invention specifically includes a determining module 301, a current limiting control module 302, and a processing module 303, where:
the judging module 301 is configured to judge whether the resource access amount of the high concurrency system meets a preset current limiting condition;
the flow limiting control module 302 is configured to, when the resource access amount of the high concurrency system meets a preset flow limiting condition, perform priority division on the received resource access request, and cache the resource access request to a cache queue of a corresponding priority according to the priority after the resource access request is divided;
the processing module 303 is configured to sequentially process the resource access requests in each buffer queue according to the priority order of the buffer queues.
In this embodiment of the present invention, the flow restriction control module 302 is specifically configured to divide the priority of each resource access request according to a preset priority division standard and the service type of the received resource access request, where the priority division standard includes a correspondence between the service type and the priority to which the service type belongs.
In another embodiment of the present invention, the flow restriction control module 302 is specifically configured to extract a priority identifier carried in each received resource access request, and divide the priority of the corresponding resource access request according to the extracted priority identifier.
In this embodiment of the present invention, the flow limiting control module 302 is further specifically configured to, among resource access requests of the same priority, preferentially cache the requests that have waited longer into the cache queue of the corresponding priority.
In this embodiment of the present invention, the processing module 303 is further configured to, when processing the resource access requests in the lowest-priority cache queue, drop those requests and process the requests in the highest-priority cache queue if the number of requests waiting to be cached in the highest-priority queue exceeds the queue caching threshold.
In this embodiment of the present invention, the determining module 301 is specifically configured to determine whether a current resource access amount of the system is greater than a preset current limit threshold.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
According to the current limiting method and device, storage medium, and equipment for the high-concurrency system, when the system receives highly concurrent access, all resource access requests are divided by priority and cached into the cache queues of their corresponding priorities. High-priority requests are then processed first, following the priority order of the cache queues, so that important core resource access requests are answered quickly, normal operation of the core system is guaranteed, and user experience is improved.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the method as described above.
In this embodiment, the module/unit integrated with the current limiting device of the high concurrency system may be stored in a computer readable storage medium if it is implemented in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
The electronic device provided by the embodiment of the present invention includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps in the above-mentioned embodiments of the current limiting method for the high concurrency system when executing the computer program, for example, S11 shown in fig. 1, and determines whether the resource access amount of the high concurrency system satisfies a preset current limiting condition. And S12, if the resource access amount of the high concurrency system meets the preset current limiting condition, performing priority division on the received resource access requests, and caching the resource access requests to cache queues with corresponding priorities according to the priorities after the resource access requests are divided. And S13, sequentially processing the resource access requests in each buffer queue according to the priority order of the buffer queues. Alternatively, the processor implements the functions of the modules/units in the current limiting device embodiments of the high concurrency systems when executing the computer program, such as the determining module 301, the current limiting control module 302, and the processing module 303 shown in fig. 3.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program in the current limiting device of the high concurrency system. For example, the computer program may be divided into a decision module 301, a current limit control module 302, and a processing module 303.
The electronic device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing device. The electronic device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that the electronic device in this embodiment may include more or fewer components, or combine certain components, or different components, for example, the electronic device may also include an input-output device, a network access device, a bus, etc.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the electronic device and connects the parts of the whole device through various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the device (such as audio data or a phonebook), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
Those skilled in the art will appreciate that while some embodiments herein include some features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (12)
1. A current limiting method for a high-concurrency system, the method comprising:
judging whether the resource access amount of the high-concurrency system satisfies a preset current limiting condition;
if the resource access amount of the high-concurrency system satisfies the preset current limiting condition, prioritizing the received resource access requests, and caching each resource access request to a cache queue of the corresponding priority according to its assigned priority;
and sequentially processing the resource access requests in each cache queue according to the priority order of the cache queues.
2. The method of claim 1, wherein prioritizing the received resource access requests comprises:
assigning a priority to each resource access request according to a preset prioritization criterion and the service type of the received resource access request, wherein the prioritization criterion comprises a correspondence between service types and the priorities to which they belong.
3. The method of claim 1, wherein prioritizing the received resource access requests comprises:
extracting the priority identifier carried in each received resource access request, and assigning the priority of the corresponding resource access request according to the extracted priority identifier.
4. The method according to any one of claims 1 to 3, wherein caching each resource access request to a cache queue of the corresponding priority according to its assigned priority comprises:
for resource access requests of the same priority, preferentially caching the requests that have waited longer into the cache queue of the corresponding priority.
5. The method according to any one of claims 1-3, further comprising:
when processing the resource access requests in the cache queue with the lowest priority, if the number of resource access requests to be cached in the cache queue with the highest priority is greater than a queue cache threshold, discarding the resource access requests in the lowest-priority cache queue and processing the resource access requests in the highest-priority cache queue.
6. The method of claim 1, wherein judging whether the resource access amount of the high-concurrency system satisfies a preset current limiting condition comprises:
judging whether the current resource access amount of the high-concurrency system is greater than a preset current limiting threshold.
7. A current limiting device for a high-concurrency system, comprising:
a judging module, configured to judge whether the resource access amount of the high-concurrency system satisfies a preset current limiting condition;
a current limiting control module, configured to prioritize the received resource access requests when the resource access amount of the high-concurrency system satisfies the preset current limiting condition, and to cache each resource access request to a cache queue of the corresponding priority according to its assigned priority;
and a processing module, configured to sequentially process the resource access requests in each cache queue according to the priority order of the cache queues.
8. The device according to claim 7, wherein the current limiting control module is specifically configured to: prioritize the resource access requests according to a preset prioritization criterion and the service type of each received resource access request, where the prioritization criterion comprises a correspondence between service types and the priorities to which they belong; or
extract the priority identifier carried in each received resource access request, and assign the priority of the corresponding resource access request according to the extracted priority identifier.
9. The device according to claim 7 or 8, wherein the current limiting control module is further configured to, for resource access requests of the same priority, preferentially cache the requests that have waited longer into the cache queue of the corresponding priority.
10. The device according to claim 7 or 8, wherein the processing module is further configured to, when processing the resource access requests in the cache queue with the lowest priority, discard the resource access requests in the lowest-priority cache queue and process the resource access requests in the highest-priority cache queue if the number of resource access requests to be cached in the highest-priority cache queue is greater than a queue cache threshold.
11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
12. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the program.
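The refinements in claims 4 and 5 can be illustrated with a minimal sketch. The queue names, contents, and threshold value below are assumptions for illustration, not the patented design:

```python
from collections import deque

QUEUE_CACHE_THRESHOLD = 2  # assumed highest-priority queue cache threshold

# Claim 4: using a FIFO deque per priority level means that, within the same
# priority, requests that arrived earlier (have waited longer) are served first.
high = deque(["h1", "h2", "h3"])  # highest-priority cache queue (hypothetical data)
low = deque(["l1", "l2"])         # lowest-priority cache queue (hypothetical data)

def serve_lowest(high, low):
    # Claim 5: while processing the lowest-priority queue, if the backlog for
    # the highest-priority queue exceeds the cache threshold, discard the
    # lowest-priority requests and serve the highest-priority queue instead.
    if len(high) > QUEUE_CACHE_THRESHOLD:
        low.clear()               # drop lowest-priority work under pressure
        return high.popleft()
    return low.popleft() if low else None

served = serve_lowest(high, low)
print(served, list(low))  # h1 [] — low-priority requests were discarded
```

Clearing the entire lowest-priority queue is one possible reading of "discarding the resource access requests in the lowest-priority cache queue"; an implementation could also drop only enough requests to relieve the backlog.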
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910079932.2A CN111488135A (en) | 2019-01-28 | 2019-01-28 | Current limiting method and device for high-concurrency system, storage medium and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910079932.2A CN111488135A (en) | 2019-01-28 | 2019-01-28 | Current limiting method and device for high-concurrency system, storage medium and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111488135A true CN111488135A (en) | 2020-08-04 |
Family
ID=71795885
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910079932.2A Pending CN111488135A (en) | 2019-01-28 | 2019-01-28 | Current limiting method and device for high-concurrency system, storage medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111488135A (en) |
2019-01-28: Application CN201910079932.2A filed in China (CN); published as CN111488135A; status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150195371A1 (en) * | 2012-08-24 | 2015-07-09 | Google Inc. | Changing a cache queue based on user interface pointer movement |
CN105068864A (en) * | 2015-07-24 | 2015-11-18 | 北京京东尚科信息技术有限公司 | Method and system for processing asynchronous message queue |
CN107391268A (en) * | 2016-05-17 | 2017-11-24 | 阿里巴巴集团控股有限公司 | service request processing method and device |
Non-Patent Citations (1)
Title |
---|
荒城9510: "Service Rate Limiting in Architecture Design" (架构设计之服务限流), 《HTTPS://WWW.JIANSHU.COM/P/908FD3396DE7》 *
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112217878A (en) * | 2020-09-23 | 2021-01-12 | 上海维信荟智金融科技有限公司 | High-concurrency request distribution method and system |
CN113722062A (en) * | 2021-08-10 | 2021-11-30 | 上海浦东发展银行股份有限公司 | Request processing method and device, computer equipment and storage medium |
CN114363263A (en) * | 2021-12-24 | 2022-04-15 | 深圳市紫金支点技术股份有限公司 | Bandwidth control method of financial service network and related equipment |
CN114500403A (en) * | 2022-01-24 | 2022-05-13 | 中国联合网络通信集团有限公司 | Data processing method and device and computer readable storage medium |
CN114666284A (en) * | 2022-05-23 | 2022-06-24 | 阿里巴巴(中国)有限公司 | Flow control method and device, electronic equipment and readable storage medium |
WO2023226948A1 (en) * | 2022-05-23 | 2023-11-30 | 阿里巴巴(中国)有限公司 | Traffic control method and apparatus, electronic device and readable storage medium |
CN115174479B (en) * | 2022-07-19 | 2023-10-13 | 天翼云科技有限公司 | Flow control method and device |
CN115174479A (en) * | 2022-07-19 | 2022-10-11 | 天翼云科技有限公司 | Flow control method and device |
CN115242729A (en) * | 2022-09-22 | 2022-10-25 | 沐曦集成电路(上海)有限公司 | Cache query system based on multiple priorities |
CN115242729B (en) * | 2022-09-22 | 2022-11-25 | 沐曦集成电路(上海)有限公司 | Cache query system based on multiple priorities |
CN116208680B (en) * | 2023-05-04 | 2023-07-14 | 成都三合力通科技有限公司 | Server access management system and method |
CN116208680A (en) * | 2023-05-04 | 2023-06-02 | 成都三合力通科技有限公司 | Server access management system and method |
CN116757796A (en) * | 2023-08-22 | 2023-09-15 | 深圳硬之城信息技术有限公司 | Shopping request response method based on nginx and related device |
CN116757796B (en) * | 2023-08-22 | 2024-01-23 | 深圳硬之城信息技术有限公司 | Shopping request response method based on nginx and related device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111488135A (en) | Current limiting method and device for high-concurrency system, storage medium and equipment | |
CN111198759B (en) | Memory optimization method, system, terminal equipment and readable storage medium | |
CN109525500B (en) | Information processing method and information processing device capable of automatically adjusting threshold | |
CN111324427B (en) | Task scheduling method and device based on DSP | |
CN107347039B (en) | Management method and device for shared cache space | |
CN109981702B (en) | File storage method and system | |
CN111597040B (en) | Resource allocation method, device, storage medium and electronic equipment | |
CN110659151A (en) | Data verification method and device and storage medium | |
WO2021068205A1 (en) | Access control method and apparatus, and server and computer-readable medium | |
CN113472681A (en) | Flow rate limiting method and device | |
RU2641250C2 (en) | Device and method of queue management | |
CN111385214B (en) | Flow control method, device and equipment | |
CN111625358A (en) | Resource allocation method and device, electronic equipment and storage medium | |
CN111159009A (en) | Pressure testing method and device for log service system | |
WO2017070869A1 (en) | Memory configuration method, apparatus and system | |
CN110990148A (en) | Method, device and medium for optimizing storage performance | |
CN116204293A (en) | Resource scheduling method, device, computer equipment and storage medium | |
CN113961334A (en) | Task processing method, device, equipment and storage medium | |
CN115499513A (en) | Data request processing method and device, computer equipment and storage medium | |
CN113688107A (en) | Metadata caching method, device, terminal and storage medium for distributed file system | |
CN113347110A (en) | Flow control method, flow control device, storage medium and equipment | |
CN112231090A (en) | Application process management method and device and terminal equipment | |
CN105306578A (en) | Method and device for storing content | |
CN116204328B (en) | Off-base load sharing processing method and system | |
CN116346729B (en) | Data log reporting current limiting method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200804 |
|