CN110557341A - Method and device for data throttling - Google Patents

Method and device for data throttling

Info

Publication number
CN110557341A
CN110557341A CN201810550766.5A
Authority
CN
China
Prior art keywords
data
pointer
write pointer
read
read pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810550766.5A
Other languages
Chinese (zh)
Inventor
戚华南
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810550766.5A
Publication of CN110557341A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/21 Flow control; Congestion control using leaky-bucket
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/215 Flow control; Congestion control using token-bucket
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/22 Traffic shaping
    • H04L 47/225 Determination of shaping rate, e.g. using a moving window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/29 Flow control; Congestion control using a combination of thresholds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Systems (AREA)

Abstract

The invention discloses a method and a device for data throttling, and relates to the field of computer technology. One embodiment of the method comprises: storing data to be processed in a ring data structure, wherein the ring data structure comprises an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read; respectively acquiring the logical values of the current positions of the write pointer and the read pointer, and calculating the logical difference between the two; and throttling the data according to the logical difference. This embodiment can throttle according to the real-time processing capability and resource usage of the system, improve or optimize system performance more accurately, throttle data more reasonably, and has the advantages of being real-time, efficient, and dynamic.

Description

Method and device for data throttling
Technical Field
The invention relates to the field of computer technology, and in particular to a method and an apparatus for data throttling.
Background
With the rapid development of information technology, the number of network users keeps growing, and the impact of burst traffic on the network becomes more and more severe. At present, throttling (rate-limiting) technology is commonly used on the Internet to control the rate at which a network interface receives and transmits communication data, so as to optimize system performance, reduce delay, improve bandwidth utilization, and so on. The throttling techniques in common use are mostly implemented based on two algorithms: the token bucket algorithm and/or the leaky bucket algorithm.
Leaky bucket algorithm: its main purpose is to control the rate at which data is injected into the network, smoothing bursty traffic. The leaky bucket algorithm provides a mechanism by which bursty traffic can be shaped so that a steady flow is presented to the network. A leaky bucket can be viewed as a single-server queue with a constant service time; if the leaky bucket (packet buffer) overflows, packets are discarded.
Token bucket algorithm: it is used to control the amount of data sent onto the network and allows bursts of data to be sent. The principle is that the system puts tokens into a bucket at a constant rate; if there is data to be transmitted, a token must first be taken from the bucket, and when there are not enough tokens in the bucket, service is denied. The token bucket algorithm makes it easy to change the transmission rate: whenever the rate needs to be increased, the rate at which tokens are put into the bucket is increased accordingly. Usually a fixed number of tokens is added to the bucket at regular intervals (e.g., every 100 ms), but some variants (e.g., patent 03109091.5, "method for message throttling using a leaky bucket") calculate the number of tokens to add in real time. That patent provides a method of dynamically calculating the number of available tokens: after a message is received, the number of tokens to inject into the token leaky bucket is calculated from the time interval between this message and the previous message, and it is then judged whether the number of tokens in the bucket after the injection is sufficient to transmit the message, in contrast to other methods that add tokens on a fixed timer. That patent overcomes, to a certain extent, the defect that a token bucket cannot evaluate the resource utilization of a networked software system in real time; however, because the number of tokens added to the bucket is calculated from the interval between the current message and the previous one, it still cannot accurately reflect the actual processing capability and resource state of the downstream system.
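For illustration, a minimal Java sketch of the token bucket mechanism just described follows; the class, field, and rate names are assumptions for the example and are not taken from the patent text. The tryAcquire method shows the dynamic variant in which tokens are injected according to the interval since the previous arrival rather than by a fixed timer.

```java
// A minimal token-bucket sketch (illustrative only; names and rates are assumptions).
public class TokenBucket {
    private final long capacity;       // maximum number of tokens the bucket can hold
    private final double tokensPerMs;  // refill rate, i.e. tokens injected per millisecond
    private double tokens;             // tokens currently in the bucket
    private long lastRefillMs;         // time of the previous refill/arrival

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.tokensPerMs = tokensPerSecond / 1000.0;
        this.tokens = capacity;
        this.lastRefillMs = System.currentTimeMillis();
    }

    // Invoked when a message arrives. Tokens are injected according to the interval
    // since the previous arrival; service is denied when too few tokens remain.
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        tokens = Math.min(capacity, tokens + (now - lastRefillMs) * tokensPerMs);
        lastRefillMs = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;   // enough tokens: the message may be sent
        }
        return false;      // not enough tokens: deny service
    }
}
```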
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
1. In some cases, the leaky bucket algorithm cannot use network resources efficiently. Because the leak rate of a leaky bucket is a fixed parameter, even when there is no resource conflict in the network (no congestion), the leaky bucket algorithm cannot let a single data stream burst up to the port rate. The leaky bucket algorithm is therefore inefficient for traffic with bursty characteristics;
2. Existing throttling methods such as the token bucket algorithm, and variants based on it, throttle by means of external prediction and cannot accurately reflect the real-time processing capability and resource usage inside the system, so the system's processing capability and resources cannot be fully used for reasonable throttling.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for data throttling, which can throttle according to the real-time processing capability and resource usage of the system, improve or optimize system performance more accurately, throttle data more reasonably, and have the advantages of being real-time, efficient, and dynamic.
To achieve the above object, according to one aspect of the embodiments of the present invention, a method of data throttling is provided.
A method of data throttling, comprising: storing data to be processed in a ring data structure, wherein the ring data structure comprises an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read; respectively acquiring the logical values of the current positions of the write pointer and the read pointer, and calculating the logical difference between the current positions of the write pointer and the read pointer; and throttling the data according to the logical difference value.
Optionally, the size of the array is determined according to the maximum processing capacity of the system.
Optionally, when data is written into the ring data structure, the data is written into the position currently pointed to by the write pointer, and the position pointed to by the write pointer is then updated to the next position where data can be written; when data is read from the ring data structure, the data is read from the position currently pointed to by the read pointer, and the position pointed to by the read pointer is then updated to the next position where data can be read.
Optionally, the write pointer and the read pointer advance in the same direction, the write pointer does not point to a position holding unread data, and the read pointer does not overtake the write pointer.
Optionally, the step of throttling data according to the logical difference value includes: when the logical difference value is greater than a preset first threshold, reducing the throttling rate and/or raising the system's service processing rate; when the logical difference value is smaller than a preset second threshold, raising the throttling rate and/or reducing the system's service processing rate; wherein the first threshold is greater than the second threshold.
According to another aspect of the embodiments of the present invention, an apparatus for data throttling is provided.
An apparatus for data throttling, comprising: a data storage module configured to store data to be processed in a ring data structure, the ring data structure comprising an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read; a logical operation module configured to respectively obtain the logical values of the current positions of the write pointer and the read pointer and to calculate the logical difference between them; and a throttling adjustment module configured to throttle data according to the logical difference value.
Optionally, the size of the array is determined according to the maximum processing capacity of the system.
Optionally, when data is written into the ring data structure, the data is written into the position currently pointed to by the write pointer, and the position pointed to by the write pointer is then updated to the next position where data can be written; when data is read from the ring data structure, the data is read from the position currently pointed to by the read pointer, and the position pointed to by the read pointer is then updated to the next position where data can be read.
Optionally, the write pointer and the read pointer advance in the same direction, the write pointer does not point to a location where unread data is stored, and the read pointer does not go beyond the write pointer.
Optionally, the throttling adjustment module is further configured to: when the logical difference value is greater than a preset first threshold, reduce the throttling rate and/or raise the system's service processing rate; when the logical difference value is smaller than a preset second threshold, raise the throttling rate and/or reduce the system's service processing rate; wherein the first threshold is greater than the second threshold.
According to another aspect of the embodiments of the present invention, an electronic device for data throttling is provided.
An electronic device for data throttling, comprising: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data throttling method provided by the embodiments of the present invention.
According to yet another aspect of embodiments of the present invention, a computer-readable medium is provided.
A computer-readable medium, on which a computer program is stored, which, when executed by a processor, implements the data throttling method provided by the embodiments of the present invention.
One embodiment of the above invention has the following advantages or beneficial effects: the data is stored in a ring data structure, and data throttling is performed according to the logical difference between the current positions of the write pointer and the read pointer; because this logical difference actually reflects the traffic pressure and the real-time processing capability of the current system, reasonable throttling is performed according to the real-time processing capability and resource usage of the system. By acquiring the system's current actual processing-capability parameter from inside the system in real time and feeding it back to the system thread pool and the like for service optimization, and/or to an outer leaky bucket or token bucket mechanism for dynamically adjusting the throttling rate, the invention improves or optimizes system performance more accurately and throttles data more reasonably, and has the advantages of being real-time, efficient, and dynamic. In addition, because the invention uses a ring data structure to store data and obtains the real-time position of the read pointer through the volatile keyword when reading data, resource contention and repeated data acquisition can be avoided without locking the data, and the implementation is simple.
Further effects of the above-mentioned non-conventional alternatives will be described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of a main flow of a method of data throttling according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an initial state of a ring data structure according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a state of writing data to a ring data structure according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a state of reading data from a ring data structure according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the state of the ring data structure in the optimal state of data throttling according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a ring data structure when the throttling rate needs to be adjusted according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a ring data structure when the throttling rate needs to be adjusted according to another embodiment of the present invention;
FIG. 8 is a schematic diagram of the main modules of an apparatus for data throttling in accordance with an embodiment of the present invention;
FIG. 9 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
FIG. 10 is a schematic block diagram of a computer system suitable for use in implementing a terminal device or server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In view of the defects that the existing leaky bucket algorithm, token bucket algorithm, and variant algorithms that mix the two cannot accurately reflect the real-time processing capability and resource state of a system and therefore cannot throttle data reasonably according to them, the invention provides a data throttling method. The data to be processed is stored in a ring data structure (RingBuffer), and whether throttling adjustment is needed is judged according to the logical difference between the current positions of the write pointer and the read pointer of the ring data structure; this logical difference represents the real-time processing capability and traffic pressure of the current system, so that data is processed in time according to the real-time processing capability of the system server, keeping the server system at its best processing capability and state, and the real-time processing capability of the system is fed back to the throttling rate used by the leaky bucket or token bucket algorithm.
Fig. 1 is a schematic diagram of the main flow of a method of data throttling according to an embodiment of the present invention. As shown in fig. 1, the data throttling method of the present invention mainly includes the following steps S101 to S103.
Step S101: store the data to be processed in a ring data structure, wherein the ring data structure comprises an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read.
According to the technical solution of the embodiment of the invention, in order to better achieve reasonable data throttling according to the real-time processing capability and resource state of the system, the size of the array of the ring data structure is related to the maximum processing capacity of the system; specifically, the size of the array is determined according to the maximum processing capacity of the system.
According to the embodiment of the invention, a stress test can be applied to the system to obtain its maximum processing capacity QPS (Queries Per Second, a measure of how much traffic a query server handles within a specified period of time), and the maximum QPS of the system is then saved in a configuration file or database of the system so that it can be read directly when needed. A stress test, also called a strength test, evaluates the performance, reliability, and stability of the system under test by simulating the software and hardware environment of the actual application and the system load during use, and running the test software for a long time or under excessive load. Stress testing is used to determine the system's bottleneck or the point of unacceptable performance, i.e., the maximum level of service the system can provide.
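As an aside, a minimal sketch of reading the stress-tested peak QPS back from such a configuration file might look as follows; the file path and the property key max.qps are illustrative assumptions, not values from the patent.

```java
// A minimal sketch of loading the stress-tested maximum QPS from a properties file.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public final class CapacityConfig {
    // Loads the maximum processing capacity (QPS) measured by the stress test,
    // e.g. a line "max.qps=1000" in the properties file.
    public static int loadMaxQps(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return Integer.parseInt(props.getProperty("max.qps"));
    }
}
```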
Then, according to the QPS value obtained by the stress test, the outer-layer service (for example, the memory of the server where the service system is located, or the service proxy layer nginx, apache, etc. of the distributed service system to which the service system belongs) sets a throttling rate to complete preliminary throttling; for example, a leaky bucket algorithm or a token bucket algorithm may be used, setting the leak rate of the leaky bucket or the rate at which tokens are put into the bucket. To ensure that the leaky bucket is large enough, or that enough tokens are available in the token bucket, so that the inner-layer system (for example, the hard disk of the server where the service system is located, or the service middleware tomcat, jboss, etc. that executes specific services in the distributed service system) can obtain data to be processed (for example, request messages sent by users) at any time while the outer-layer service does not collapse within its tolerance range, the throttling rate (that is, the leak rate or the token-injection rate) may be set to 1 to 2 times the QPS value; for example, the throttling rate is set to QPS × 1.5.
After the inner-layer system obtains the data to be processed, it saves the data into a ring data structure (a RingBuffer structure) whose ends are joined, so that the service-logic processing flow of the system can read the data to be processed from the ring data structure and process it. The ring data structure of the present invention includes an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read. The write pointer and the read pointer may be implemented, for example, as variables declared with the volatile keyword provided by Java, to ensure that the current positions of the write pointer and the read pointer obtained are real-time and accurate.
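Before turning to the drawings, a minimal Java skeleton of such a RingBuffer structure might look as follows; the class and field names are assumptions, and positions are counted from 0 here whereas the figures below number them from 1. The two indices are declared volatile, matching the Java mechanism mentioned above, so that their latest values are always visible when the logical difference is computed.

```java
// A minimal skeleton of the RingBuffer structure (illustrative; names are assumptions).
public class RingBuffer {
    final Object[] slots;          // array holding the data to be processed
    final int cachesize;           // array size, derived from the system's peak capacity
    volatile long writeIndex = 0;  // index of the write pointer (monotonically increasing)
    volatile long readIndex = 0;   // index of the read pointer (monotonically increasing)

    public RingBuffer(int maxQps) {
        // Size the array from the peak QPS measured by the stress test, e.g. QPS * 1.5.
        this.cachesize = (int) (maxQps * 1.5);
        this.slots = new Object[cachesize];
    }
}
```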
A specific implementation of the ring data structure of the present invention is described below with reference to the accompanying drawings.
FIG. 2 is a diagram illustrating an initial state of a ring data structure according to an embodiment of the present invention. As shown in fig. 2, the number of positions (i.e., the size of an array) on the constructed ring data structure (RingBuffer structure) is set according to the maximum processing capacity QPS of the system, for example, QPS × 1.5, so as to ensure that the service logic processing flow of the system can acquire the data to be processed at any time. In the embodiment of the present invention, the array size is 1024 as an example.
As shown in fig. 2, in the initial state the write pointer and the read pointer point to the same position, which is taken as the head of the ring data structure, and the logical value of this position is set to 1. In the embodiment of the invention, the write pointer and the read pointer must advance in the same direction, and the read pointer follows the write pointer but never overtakes it; this ensures that the read pointer reads exactly the data that has been written and avoids the situation where data cannot be read. Meanwhile, the write pointer does not point to a position that holds unread data; that is, when writing data, the write pointer may not advance onto the position pointed to by the read pointer or onto any subsequent position that still holds data.
FIG. 3 is a diagram illustrating a state of writing data into a ring data structure according to an embodiment of the present invention. As shown in fig. 3, after the initialization of the ring data structure is completed, when data needs to be written into the ring data structure, the system writes the data into the current location pointed by the write pointer (logical value is 1), and then updates the location pointed by the write pointer to the next location where data can be written (logical value is 2). For convenience of description, the logical value of the position pointed to by the pointer is increased by 1 every time the pointer moves in the present invention.
FIG. 4 is a diagram illustrating a state of reading data from a ring data structure according to an embodiment of the present invention. As shown in fig. 4, after the system writes data into the location with the logical value 2, the write pointer points to the location with the logical value 3, and at this time, when data needs to be read from the ring data structure, the system reads data according to the location currently pointed by the read pointer (logical value 1), and then updates the location pointed by the read pointer to the location where the data can be read next (logical value 2).
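Continuing the RingBuffer skeleton above, the write and read operations just illustrated might be sketched as follows. The sketch assumes a single writer thread and a single reader thread, so the volatile indices can be advanced without locks, consistent with the lock-free reading described later in the text; positions are counted from 0 here while the figures number them from 1.

```java
// Write/read operations for the RingBuffer skeleton (illustrative sketch).
public boolean tryWrite(Object data) {
    // The write pointer must not advance onto a slot that still holds unread data,
    // i.e. it may be at most cachesize positions ahead of the read pointer.
    if (writeIndex - readIndex >= cachesize) {
        return false;                                 // buffer full: caller may throttle upstream
    }
    slots[(int) (writeIndex % cachesize)] = data;     // write at the position the write pointer points to
    writeIndex = writeIndex + 1;                      // then move the write pointer to the next writable position
    return true;
}

public Object tryRead() {
    // The read pointer follows the write pointer but never overtakes it.
    if (readIndex >= writeIndex) {
        return null;                                  // nothing to read yet
    }
    Object data = slots[(int) (readIndex % cachesize)];  // read at the position the read pointer points to
    readIndex = readIndex + 1;                           // then move the read pointer to the next readable position
    return data;
}
```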
Step S102: respectively acquire the logical values of the current positions of the write pointer and the read pointer, and calculate the logical difference between the current positions of the write pointer and the read pointer.
In the embodiment of the present invention, a ring data structure is used to store data, so when the logical values of the current positions of the write pointer and the read pointer are obtained and compared, it is necessary to determine whether the two pointers are on the same circle of the ring. In the embodiment of the present invention, the logical value of the current position of the write pointer or the read pointer is obtained by taking the pointer's index value modulo the array size of the ring data structure, and is used to identify the pointer's position on the ring data structure, where the index value of a pointer increases monotonically as the pointer advances. When the system writes one piece of data into the ring data structure, the write pointer advances one position and its index value increases by 1; when the system reads one piece of data from the ring data structure, the read pointer advances one position and its index value increases by 1. The number of data items written into and read from the ring data structure can thus be counted from the pointers' index values, and the overall performance of the system can be evaluated.
Assume that the array size of the ring data structure is cachesize, the index value of the write pointer is Writeindex, and the index value of the read pointer is Readindex. Then the logical value Nextwriteindex of the current position of the write pointer is: Nextwriteindex = Writeindex % cachesize; and the logical value Nextreadindex of the current position of the read pointer is: Nextreadindex = Readindex % cachesize. Clearly, when a pointer (write pointer or read pointer) is on the first circle of the ring data structure, the logical value of its current position equals its index value. By performing logical operations on the two values Nextwriteindex and Nextreadindex, out-of-order reads and write overwrites can be prevented, namely: the read pointer must follow the write pointer but cannot overtake it, and the write pointer cannot point to a position that holds unread data.
When calculating the logical difference between the current positions of the write pointer and the read pointer, it is first determined whether the write pointer and the read pointer are on the same circle of the ring data structure. If they are, the logical difference is the logical value of the write pointer's current position minus the logical value of the read pointer's current position, i.e. (Nextwriteindex - Nextreadindex). Otherwise, since the write pointer is always ahead of (or coincides with) the read pointer, the logical difference is the logical value of the write pointer's current position plus the array size of the ring data structure, minus the logical value of the read pointer's current position, i.e. (Nextwriteindex + cachesize - Nextreadindex).
To make it easy to determine whether the write pointer and the read pointer are on the same circle of the ring data structure, a ring-crossing flag may be set: when the write pointer moves from the tail of the ring data structure (e.g., the position with logical value 1024 in fig. 2) to its head (e.g., the position with logical value 1 in fig. 2), the ring-crossing flag is added to the write pointer to indicate that the two pointers are not on the same circle; when the read pointer likewise moves from the tail to the head, the ring-crossing flag on the write pointer is cleared to indicate that the two pointers are on the same circle again. Alternatively, a "ring-crossing count" attribute may be added to the write pointer and the read pointer respectively, incremented by 1 each time the corresponding pointer moves from the tail to the head of the ring data structure, so that whether the two pointers are on the same circle can be judged by comparing the two counts. Similarly, the judgment can be made by comparing the logical values of the current positions of the write pointer and the read pointer: when the two pointers are on the same circle, the logical value of the write pointer's current position is greater than or equal to that of the read pointer; when they are not on the same circle, because the write pointer never points to a position holding unread data, the logical value of the write pointer's current position is smaller than that of the read pointer. Therefore, whether the write pointer and the read pointer are on the same circle of the ring data structure can be judged from the logical values of their current positions. According to the implementation principle of the present invention, other schemes may also be adopted for this judgment, and the present invention is not limited in this respect.
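Putting the modulo formulas and the same-circle judgment together, step S102 might be sketched as the following method of the RingBuffer skeleton above; it uses the logical-value comparison as the same-circle test, one of the alternatives just described, and is illustrative only.

```java
// A sketch of step S102: obtain the logical values and compute their difference.
public long logicalDifference() {
    long nextWriteIndex = writeIndex % cachesize;  // logical value of the write pointer's current position
    long nextReadIndex  = readIndex % cachesize;   // logical value of the read pointer's current position
    if (nextWriteIndex >= nextReadIndex) {
        // The two pointers are on the same circle of the ring data structure.
        return nextWriteIndex - nextReadIndex;
    }
    // The write pointer has wrapped onto the next circle.
    return nextWriteIndex + cachesize - nextReadIndex;
}
```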
According to the technical solution of the invention, the logical value Nextwriteindex of the write pointer's current position and the logical value Nextreadindex of the read pointer's current position on the ring data structure can be used, respectively, to measure the rate at which the current system writes data into the ring data structure and the rate at which it reads data out, and also to measure the spare capacity the current system has to accept external data and how busy the system is internally. The logical difference between the current positions of the write pointer and the read pointer actually reflects the traffic pressure and the real-time processing capability of the current system; by notifying the system to adjust its internal logical processing, or notifying an external throttling module to adjust the throttling rate, according to this logical difference, system performance can be improved or optimized more accurately and the throttling rate can be controlled more precisely.
In addition, because the invention uses a ring data structure to store data and obtains the real-time position of the read pointer through the volatile keyword when reading data, resource contention and repeated data acquisition can be avoided without locking the data, and the implementation is simple.
Step S103: throttle the data according to the logical difference value.
Step S103 may specifically include the following two cases:
When the logical difference value is greater than a preset first threshold, reduce the throttling rate and/or raise the system's service processing rate;
When the logical difference value is smaller than a preset second threshold, raise the throttling rate and/or reduce the system's service processing rate;
Wherein the first threshold is greater than the second threshold.
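A minimal sketch of this threshold comparison is given below; the 4/5 and 1/5 threshold fractions follow the examples discussed with the figures that follow and are merely illustrative, as are the class and enum names.

```java
// Illustrative sketch of the threshold comparison in step S103.
public final class ThrottleAdjuster {
    public enum Adjustment { LOWER_LIMIT_RATE, RAISE_LIMIT_RATE, NONE }

    public static Adjustment decide(long logicalDifference, int cachesize) {
        long firstThreshold  = cachesize * 4L / 5;   // preset first threshold, e.g. 4/5 of the array size
        long secondThreshold = cachesize / 5L;       // preset second threshold, e.g. 1/5 of the array size
        if (logicalDifference > firstThreshold) {
            // Writing outpaces reading: lower the outer throttling rate and/or raise the service processing rate.
            return Adjustment.LOWER_LIMIT_RATE;
        }
        if (logicalDifference < secondThreshold) {
            // Reading outpaces writing: raise the outer throttling rate and/or lower the service processing rate.
            return Adjustment.RAISE_LIMIT_RATE;
        }
        return Adjustment.NONE;                      // roughly half full: near the optimal state, no adjustment
    }
}
```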
FIG. 5 is a diagram illustrating the state of the ring data structure in the optimal throttling state according to an embodiment of the present invention. As can be seen from fig. 5, when the logical difference between the current positions of the write pointer and the read pointer is about half the array size of the ring data structure (in a specific implementation a different value may be set as required), the rate at which the system writes data into the ring data structure is approximately the same as the rate at which it reads data out; at this time the system's service processing rate and the throttling rate are roughly equal, the stability and processing efficiency of the system are both at their best, and no throttling adjustment is needed.
FIG. 6 is a diagram illustrating the ring data structure when the throttling rate needs to be adjusted according to an embodiment of the present invention. As shown in fig. 6, when the logical difference between the current positions of the write pointer and the read pointer (1023 + 1024 - 1024 = 1023) is greater than a preset first threshold (e.g., 4/5 of the array size, which can be set flexibly as required), the rate at which the system writes data into the ring data structure is greater than the rate at which it reads data out; the write pointer may then be about to catch up with the read pointer, and data can only be written after waiting for the read pointer to read. That is, the processing capability of the system may have reached its peak and the system is already overloaded. Throttling adjustment is then needed. The rate of incoming data can be reduced by lowering the outer-layer throttling rate, relieving the pressure on the system and achieving reasonable throttling. The service processing rate of the system can also be raised, for example by reducing the number of threads in the thread pool to cut the time and space overhead of resource contention and switching between threads, thereby increasing the system's service processing speed, optimizing its processing capability, and achieving reasonable throttling. The system may also both reduce the throttling rate and raise its service processing rate, relieving the pressure on the system and optimizing its processing capability, thereby achieving reasonable throttling. In a specific implementation, different measures can be chosen according to different scenario requirements to achieve reasonable throttling.
FIG. 7 is a diagram illustrating the ring data structure when the throttling rate needs to be adjusted according to another embodiment of the present invention. As shown in fig. 7, when the logical difference between the current positions of the write pointer and the read pointer (1023 - 1022 = 1) is smaller than a preset second threshold (e.g., 1/5 of the array size, which can be set flexibly as required), the rate at which the system writes data into the ring data structure is smaller than the rate at which it reads data out; the read pointer may then be about to catch up with the write pointer, and data can only be read after waiting for the write pointer to write. That is, the system's processing capability is good and it is in an unsaturated state, which wastes system resources. Throttling adjustment is then needed. The rate of incoming data can be increased by raising the outer-layer throttling rate, so that the system can read enough data to process, avoiding the waste of system resources and achieving reasonable throttling. The service processing rate of the system can also be reduced, for example by reducing the number of threads in the thread pool of the service system; because fewer threads occupy fewer system resources, the resource occupancy of the current system is reduced, and the thread pools of other systems deployed on the same machine can be expanded, so that hardware resources, network bandwidth, and storage resources are used more reasonably and fully, achieving reasonable throttling. The system may also both raise the throttling rate and reduce its service processing rate to make better use of system resources, thereby optimizing system efficiency and achieving reasonable throttling. In a specific implementation, different measures can be chosen according to different scenario requirements, so that high-performance processing is achieved inside the system while the throttling result is optimized more reasonably outside it.
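As an illustration of adjusting the service processing rate, the following sketch resizes a worker pool through the standard java.util.concurrent API; the initial pool sizes are arbitrary examples and not values from the patent. As described in the two scenarios above, shrinking the pool can either cut contention and switching overhead under overload or free resources for other systems on the same machine when the system is under-utilised.

```java
// Illustrative sketch: changing the service processing rate by resizing a thread pool.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class ServiceRateTuner {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 8, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

    // Resizes the worker pool; the order of the two calls keeps core <= max at all times.
    public void resizeTo(int nThreads) {
        int n = Math.max(1, nThreads);
        if (n < pool.getCorePoolSize()) {
            pool.setCorePoolSize(n);
            pool.setMaximumPoolSize(n);
        } else {
            pool.setMaximumPoolSize(n);
            pool.setCorePoolSize(n);
        }
    }

    public ThreadPoolExecutor pool() {
        return pool;
    }
}
```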
Fig. 8 is a schematic diagram of the main modules of an apparatus for data throttling according to an embodiment of the present invention. As shown in fig. 8, the data throttling apparatus 800 of the embodiment of the present invention mainly includes a data saving module 801, a logical operation module 802, and a throttling adjustment module 803.
The data saving module 801 is configured to save data to be processed into a ring data structure, where the ring data structure includes an array for saving the data to be processed, a write pointer pointing to a location where data can be written currently, and a read pointer pointing to a location where data can be read currently;
The logical operation module 802 is configured to respectively obtain the logical values of the current positions of the write pointer and the read pointer, and to calculate the logical difference value between the current positions of the write pointer and the read pointer;
The throttling adjustment module 803 is configured to throttle data according to the logical difference value.
Wherein the size of the array is determined according to the maximum processing capacity of the system.
According to the embodiment of the invention, when data is written into the ring data structure, the data is written into the position pointed by the write pointer currently, and then the position pointed by the write pointer is updated to the next position where the data can be written;
When data is read from the ring data structure, the data is read according to the position currently pointed to by the read pointer, and then the position pointed to by the read pointer is updated to the position where the data can be read next.
According to an embodiment of the invention, the write pointer and the read pointer advance in the same direction, and the write pointer does not point to a location where unread data is stored, and the read pointer does not overtake the write pointer.
According to the technical solution of the embodiment of the present invention, the throttling adjustment module 803 may be further configured to:
When the logical difference value is greater than a preset first threshold, reduce the throttling rate and/or raise the system's service processing rate;
When the logical difference value is smaller than a preset second threshold, raise the throttling rate and/or reduce the system's service processing rate;
Wherein the first threshold is greater than the second threshold.
According to the technical solution of the embodiment of the present invention, the data is stored in a ring data structure, and data throttling is performed according to the logical difference between the current positions of the write pointer and the read pointer; because this logical difference actually reflects the traffic pressure and the real-time processing capability of the current system, reasonable throttling is performed according to the real-time processing capability and resource usage of the system. By acquiring the system's current actual processing-capability parameter from inside the system in real time and feeding it back to the system thread pool and the like for service optimization, and/or to an outer leaky bucket or token bucket mechanism for dynamically adjusting the throttling rate, the invention improves or optimizes system performance more accurately and throttles data more reasonably, and has the advantages of being real-time, efficient, and dynamic. In addition, because the invention uses a ring data structure to store data and obtains the real-time position of the read pointer through the volatile keyword when reading data, resource contention and repeated data acquisition can be avoided without locking the data, and the implementation is simple.
Fig. 9 illustrates an exemplary system architecture 900 of a data throttling method or apparatus to which embodiments of the invention may be applied.
As shown in fig. 9, the system architecture 900 may include terminal devices 901, 902, 903, a network 904, and a server 905. The network 904 is the medium used to provide communication links between the terminal devices 901, 902, 903 and the server 905. The network 904 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 901, 902, 903 to interact with the server 905 over the network 904 to receive or send messages and the like. Various client applications may be installed on the terminal devices 901, 902, 903, such as (by way of example only) shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients, and social platform software.
The terminal devices 901, 902, 903 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 905 may be a server providing various services, such as a background management server (for example only) providing support for shopping websites browsed by users using the terminal devices 901, 902, 903. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the data throttling method provided by the embodiment of the present invention is generally executed by the server 905, and accordingly the data throttling apparatus is generally disposed in the server 905.
It should be understood that the number of terminal devices, networks, and servers in fig. 9 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 10, a block diagram of a computer system 1000 suitable for use with a terminal device or server implementing an embodiment of the invention is shown. The terminal device or the server shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 10, the computer system 1000 includes a central processing unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage section 1008 into a random access memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the system 1000 are also stored. The CPU 1001, ROM 1002, and RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read from it is installed into the storage section 1008 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1009 and/or installed from the removable medium 1011. When executed by the central processing unit (CPU) 1001, the computer program performs the above-described functions defined in the system of the present invention.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software or by hardware. The described units or modules may also be provided in a processor, and may, for example, be described as: a processor comprising a data saving module, a logical operation module, and a throttling adjustment module. The names of these units or modules do not in some cases constitute a limitation of the unit or module itself; for example, the data saving module may also be described as a "module for saving data to be processed into a ring data structure".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: store data to be processed in a ring data structure, wherein the ring data structure comprises an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read; respectively obtain the logical values of the current positions of the write pointer and the read pointer, and calculate the logical difference between them; and throttle the data according to the logical difference value.
According to the technical solution of the embodiment of the present invention, the data is stored in a ring data structure, and data throttling is performed according to the logical difference between the current positions of the write pointer and the read pointer; because this logical difference actually reflects the traffic pressure and the real-time processing capability of the current system, reasonable throttling is performed according to the real-time processing capability and resource usage of the system. By acquiring the system's current actual processing-capability parameter from inside the system in real time and feeding it back to the system thread pool and the like for service optimization, and/or to an outer leaky bucket or token bucket mechanism for dynamically adjusting the throttling rate, the invention improves or optimizes system performance more accurately and throttles data more reasonably, and has the advantages of being real-time, efficient, and dynamic. In addition, because the invention uses a ring data structure to store data and obtains the real-time position of the read pointer through the volatile keyword when reading data, resource contention and repeated data acquisition can be avoided without locking the data, and the implementation is simple.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. A method of data throttling, comprising:
storing data to be processed in a ring data structure, wherein the ring data structure comprises an array for holding the data to be processed, a write pointer pointing to the position where data can currently be written, and a read pointer pointing to the position where data can currently be read;
respectively acquiring the logical values of the current positions of the write pointer and the read pointer, and calculating the logical difference value between the current positions of the write pointer and the read pointer;
and throttling the data according to the logical difference value.
2. The method of claim 1, wherein the size of the array is determined based on the maximum processing capacity of the system.
3. The method of claim 1,
when data is written into the ring data structure, the data is written into the position currently pointed to by the write pointer, and the position pointed to by the write pointer is then updated to the next position where data can be written;
when data is read from the ring data structure, the data is read from the position currently pointed to by the read pointer, and the position pointed to by the read pointer is then updated to the next position where data can be read.
4. A method according to claim 1 or 3, wherein the write pointer and the read pointer proceed in the same direction, and the write pointer does not point to a location where unread data is held, and the read pointer does not overtake the write pointer.
5. The method of claim 1, wherein the step of throttling the data according to the logical difference value comprises:
when the logical difference value is greater than a preset first threshold, reducing the throttling rate and/or raising the system's service processing rate;
when the logical difference value is smaller than a preset second threshold, raising the throttling rate and/or reducing the system's service processing rate;
wherein the first threshold is greater than the second threshold.
6. An apparatus for data throttling, comprising:
The data storage module is used for storing data to be processed into a ring data structure, and the ring data structure comprises an array for storing the data to be processed, a write pointer pointing to the position of the currently writable data and a read pointer pointing to the position of the currently readable data;
the logical operation module is used for respectively acquiring the logical values of the current positions of the write pointer and the read pointer and calculating the logical difference value of the current positions of the write pointer and the read pointer;
And the current limiting adjustment module is used for carrying out data current limiting according to the logic difference value.
7. The apparatus of claim 6, wherein the size of the array is determined according to the maximum processing capacity of the system.
8. The apparatus of claim 6, wherein:
when data is written into the ring data structure, the data is written into the position currently pointed to by the write pointer, and the write pointer is then updated to point to the next position at which data can be written; and
when data is read from the ring data structure, the data is read from the position currently pointed to by the read pointer, and the read pointer is then updated to point to the next position from which data can be read.
9. The apparatus of claim 6 or 8, wherein the write pointer and the read pointer advance in the same direction, the write pointer does not point to a position holding unread data, and the read pointer does not overtake the write pointer.
10. The apparatus of claim 6, wherein the throttling adjustment module is further configured to:
lower the rate limit and/or increase the service processing rate of the system when the logical difference is greater than a preset first threshold; and
raise the rate limit and/or reduce the service processing rate of the system when the logical difference is less than a preset second threshold;
wherein the first threshold is greater than the second threshold.
11. An electronic device for data throttling, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810550766.5A 2018-05-31 2018-05-31 Method and device for limiting data current Pending CN110557341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810550766.5A CN110557341A (en) 2018-05-31 2018-05-31 Method and device for limiting data current


Publications (1)

Publication Number Publication Date
CN110557341A true CN110557341A (en) 2019-12-10

Family

ID=68734247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810550766.5A Pending CN110557341A (en) 2018-05-31 2018-05-31 Method and device for limiting data current

Country Status (1)

Country Link
CN (1) CN110557341A (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5765187A (en) * 1991-04-05 1998-06-09 Fujitsu Limited Control system for a ring buffer which prevents overrunning and underrunning
JPH05289847A (en) * 1992-04-06 1993-11-05 Toshiba Corp Ring buffer control device
US6636523B1 (en) * 1999-01-27 2003-10-21 Advanced Micro Devices, Inc. Flow control using rules queue monitoring in a network switching system
US6977897B1 (en) * 2000-05-08 2005-12-20 Crossroads Systems, Inc. System and method for jitter compensation in data transfers
CN101674479A (en) * 2008-09-11 2010-03-17 索尼株式会社 Information processing apparatus and method
CN101800867A (en) * 2010-01-19 2010-08-11 深圳市同洲电子股份有限公司 Method, device and digital-television receiving terminal for realizing ring buffer
US8782355B1 (en) * 2010-11-22 2014-07-15 Marvell International Ltd. Method and apparatus to prevent FIFO overflow and underflow by adjusting one of a write rate and a read rate
CN105337891A (en) * 2015-11-02 2016-02-17 北京百度网讯科技有限公司 Traffic control method and traffic control device for distributed cache system
CN107491398A (en) * 2017-08-04 2017-12-19 歌尔科技有限公司 Method of data synchronization, device and electronic equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379844A (en) * 2020-11-25 2021-02-19 深圳市华宝电子科技有限公司 Data protection method and device, electronic terminal and storage medium
CN113779019A (en) * 2021-01-14 2021-12-10 北京沃东天骏信息技术有限公司 Current limiting method and device based on annular linked list
CN113779019B (en) * 2021-01-14 2024-05-17 北京沃东天骏信息技术有限公司 Circular linked list-based current limiting method and device
CN116431395A (en) * 2023-06-07 2023-07-14 成都云祺科技有限公司 Cache dynamic balance method, system and storage medium based on volume real-time backup
CN116431395B (en) * 2023-06-07 2023-09-08 成都云祺科技有限公司 Cache dynamic balance method, system and storage medium based on volume real-time backup

Similar Documents

Publication Publication Date Title
CN110545246A (en) Token bucket-based current limiting method and device
CN109040230B (en) File downloading method, device, equipment/terminal/server and storage medium
CN110430142B (en) Method and device for controlling flow
CN110557341A (en) Method and device for limiting data current
CN114595043A (en) IO (input/output) scheduling method and device
WO2019109902A1 (en) Queue scheduling method and apparatus, communication device, and storage medium
CN116303173B (en) Method, device and system for reducing RDMA engine on-chip cache and chip
CN111596864A (en) Method, device, server and storage medium for data delayed deletion
CN114374657B (en) Data processing method and device
CN113726885B (en) Flow quota adjusting method and device
CN114785770A (en) Mirror layer file sending method and device, electronic equipment and computer readable medium
CN114006871A (en) Flow control method, flow control device, container and storage medium
CN110502891B (en) Method, device, medium and electronic equipment for acquiring process memory leakage
CN112163176A (en) Data storage method and device, electronic equipment and computer readable medium
CN112784139A (en) Query method, query device, electronic equipment and computer readable medium
CN110896391B (en) Message processing method and device
CN113132480B (en) Data transmission method, device and system
CN115993942B (en) Data caching method, device, electronic equipment and computer readable medium
CN118585472A (en) Data transmission method, device, electronic equipment and computer readable medium
US9787564B2 (en) Algorithm for latency saving calculation in a piped message protocol on proxy caching engine
EP4231155A1 (en) Performance testing method and apparatus, electronic device, and storage medium
CN112422342B (en) Method and device for acquiring service data
CN115994120B (en) Data file merging method, device, electronic equipment and computer readable medium
CN116155808B (en) Network flow control method, device, electronic equipment and computer readable medium
CN114995764A (en) Data storage method and device based on stream computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20191210