CN115643309A - Dynamic flow control method and system based on request scheduling queue length - Google Patents


Info

Publication number: CN115643309A
Application number: CN202211328698.0A
Authority: CN (China)
Prior art keywords: storage, water level, request, request scheduling, client
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 高彦平, 王国庆, 吴江, 张京城, 谢鹏
Current and original assignee: Xtao Co ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Xtao Co ltd

Classifications

  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a dynamic flow control method and system based on request scheduling queue length. The dynamic flow control method comprises the following steps: each storage front end calculates the water level state of the request scheduling queue of every storage back end; the storage front end adjusts its front-end threshold according to those water level states, where the front-end threshold limits the number of concurrent requests from the clients connected to that storage front end; on receiving a data request from a client, the storage front end judges whether the client's number of data requests exceeds the front-end threshold; if it does not, the storage front end sends the client's data request to the scheduling queue of the corresponding storage back end; and if it does, the storage front end refuses the client's data request. The technical scheme of the invention addresses the difficulty, in the prior art, of accurately and efficiently limiting the request traffic reaching the storage back end while still exploiting the back end's full data processing capacity.

Description

Dynamic flow control method and system based on request scheduling queue length
Technical Field
The invention relates to the technical field of distributed storage, and in particular to a dynamic flow control method and system based on request scheduling queue length.
Background
Distributed storage is a data storage technology that uses, via a network, the disk space scattered across the computers of an enterprise and combines these distributed storage resources into one virtual storage device, so that the data in the distributed storage system is stored across machines throughout the enterprise.
Most distributed storage systems comprise a "storage front end" and a "storage back end". The storage front end receives clients' data requests, handles the protocol, and distributes the requests; the requests are actually served by one or more storage back ends, which interact with the storage media to perform the real "access" to storage resources. As shown in Fig. 1, an access proceeds in four steps: 1. IO request: the client sends an IO request to a storage front end. 2. IO forwarding: the storage front end forwards the IO request to the corresponding storage back end. 3. IO return: the storage back end feeds the IO response for the request back to the storage front end. 4. IO response: the storage front end returns the IO response to the client.
Although the storage back end actually serves the data requests, when it hits a capacity bottleneck (for example, an overloaded, busy disk) while the storage front end keeps accepting client requests, a large number of data requests cannot be digested and completed in time; the backlog occupies excessive storage resources (for example, memory) and can even exhaust the memory of the distributed storage system, causing it to crash. To avoid this, many distributed storage systems adopt a QoS or threshold-based throttling scheme that limits, at the storage front end, the client traffic or the number of new requests accepted at one time, so as to keep the storage back end from being overloaded.
Limiting the clients' network data traffic is not without defects. When read and write requests carry large payloads, limiting client traffic may indeed bound the load on the distributed storage system to some extent. However, a fixed traffic limit is hard to choose: the hardware configurations of the storage front ends and back ends differ from batch to batch, so their processing capacities differ greatly, and it is difficult for the distributed storage system to quantify how much traffic, or how many concurrent requests, the limit should allow.
Disclosure of Invention
The invention provides a dynamic flow control scheme based on request scheduling queue length, aiming to solve the problems that the prior-art method of limiting clients' network data traffic to keep the storage back end from overload can neither accurately and efficiently limit the traffic or number of concurrent requests between the storage front and back ends, nor exploit the full data processing capacity of the storage back end.
To achieve the above object, according to a first aspect of the present invention, the invention provides a dynamic flow control method based on request scheduling queue length. The method is used in a distributed storage system that comprises a plurality of storage front ends and a plurality of storage back ends, each storage back end corresponding to one request scheduling queue. The dynamic flow control method comprises the following steps:
each storage front end calculates the water level state of the request scheduling queue of every storage back end;
the storage front end adjusts its front-end threshold according to the water level states of the request scheduling queues, where the front-end threshold limits the number of concurrent requests of the clients connected to that storage front end;
on receiving a data request from a client, the storage front end judges whether the client's number of data requests exceeds the front-end threshold;
if the number of data requests does not exceed the front-end threshold, the storage front end sends the client's data request to the scheduling queue of the corresponding storage back end;
and if the number of data requests exceeds the front-end threshold, the storage front end refuses the client's data request.
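Purely as an illustrative sketch, and not as part of the claimed subject matter, the admission check in the steps above might look as follows. The class and function names (FrontEnd, on_request, dispatch) are hypothetical; the patent does not prescribe any implementation:

```python
# Sketch of the per-client admission check at a storage front end.
# All names here are hypothetical illustrations, not the patented design.

class FrontEnd:
    def __init__(self, threshold):
        self.threshold = threshold   # front-end threshold: per-client concurrency cap
        self.in_flight = {}          # client id -> number of outstanding requests

    def on_request(self, client_id, dispatch):
        """Accept the request only while the client is under the threshold."""
        count = self.in_flight.get(client_id, 0)
        if count >= self.threshold:
            return False             # threshold exceeded: refuse the request
        self.in_flight[client_id] = count + 1
        dispatch()                   # forward to the back end's scheduling queue
        return True

    def on_response(self, client_id):
        """Called when the storage back end completes a request."""
        self.in_flight[client_id] -= 1

fe = FrontEnd(threshold=2)
sent = []
assert fe.on_request("c1", lambda: sent.append("io1"))
assert fe.on_request("c1", lambda: sent.append("io2"))
assert not fe.on_request("c1", lambda: sent.append("io3"))  # over threshold: refused
fe.on_response("c1")                                        # one request completed
assert fe.on_request("c1", lambda: sent.append("io3"))      # slot freed: accepted
```

In this sketch, refusal simply returns False to the caller; a real front end would instead leave the request unread on the connection or return a retryable error.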
Preferably, in the dynamic flow control method, the step of the storage front end adjusting its front-end threshold according to the water level state of the request scheduling queues comprises:
the storage front end judges whether the request scheduling queue of any storage back end it accesses is in the high water level state;
and if the request scheduling queue of any such storage back end is in the high water level state, the storage front end shrinks the front-end thresholds of all its clients by a preset proportion.
Preferably, in the dynamic flow control method, after the step of the storage front end shrinking the front-end thresholds of all clients by the preset proportion, the method further comprises:
the other storage front ends among the plurality of storage front ends judge whether the request scheduling queues of the storage back ends they access are all below the high water level;
and if the storage back ends they access all have request scheduling queues below the high water level, those other storage front ends maintain or increase their own front-end thresholds.
Preferably, in the dynamic flow control method, the step of the storage front end adjusting its front-end threshold according to the water level state of the request scheduling queues comprises:
the storage front end judges whether the request scheduling queues of all the storage back ends are in the low water level state;
and if the request scheduling queues of all the storage back ends are in the low water level state, the storage front end increases the front-end thresholds of all its clients by a fixed step.
Preferably, in the dynamic flow control method, the step of the storage front end adjusting its front-end threshold according to the water level state of the request scheduling queues comprises:
the storage front end judges whether the request scheduling queues of all the storage back ends are in the normal water level state;
and if the request scheduling queues of all the storage back ends are in the normal water level state, the storage front end maintains the front-end thresholds of all its clients.
Preferably, in the dynamic flow control method, the step of judging, on receiving a client's data request, whether the client's number of data requests exceeds the front-end threshold comprises:
the storage front end counts the client's data requests with an IO counter;
and the storage front end judges, from the count held by the IO counter, whether the client's number of data requests exceeds the front-end threshold.
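The IO counter above is not specified further in this text; as a hedged sketch, a per-client counter might be kept thread-safe with a lock (the lock and the names IOCounter, try_acquire, release are assumptions, since concurrent clients are implied but no mechanism is stated):

```python
import threading

class IOCounter:
    """Illustrative per-client counter of outstanding data requests."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}            # client id -> outstanding request count

    def try_acquire(self, client_id, threshold):
        """Atomically count a new request; refuse it at the front-end threshold."""
        with self._lock:
            if self._counts.get(client_id, 0) >= threshold:
                return False
            self._counts[client_id] = self._counts.get(client_id, 0) + 1
            return True

    def release(self, client_id):
        """Count a completed request so the slot can be reused."""
        with self._lock:
            self._counts[client_id] -= 1

counter = IOCounter()
assert counter.try_acquire("client-a", threshold=1)
assert not counter.try_acquire("client-a", threshold=1)  # at the threshold: refused
counter.release("client-a")
assert counter.try_acquire("client-a", threshold=1)      # completed IO frees a slot
```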
Preferably, in the dynamic flow control method, the step of each storage front end calculating the water level states of the request scheduling queues of all the storage back ends comprises:
when the storage front end sends a data request to a storage back end, the storage front end obtains the water level state of the request scheduling queue;
and when the storage back end finishes processing the data request, the storage back end feeds the water level state of the request scheduling queue back to the storage front end.
Preferably, in the dynamic flow control method, the step of the storage back end feeding the water level state of the request scheduling queue back to the storage front end comprises:
the storage back end samples the water level state of the request scheduling queue a preset number of consecutive times within a preset time;
and only when the request scheduling queue is in the same water level state for the preset number of consecutive samples does the storage back end feed that water level state back to the storage front end.
According to a second aspect of the present invention, the invention further provides a dynamic flow control system based on request scheduling queue length, comprising:
a plurality of storage front ends and a plurality of storage back ends, each storage back end corresponding to one request scheduling queue; wherein
each storage front end is used to calculate the water level states of the request scheduling queues of all the storage back ends;
the storage front end is further used to adjust its front-end threshold according to the water level states of the request scheduling queues, where the front-end threshold limits the number of concurrent requests of the clients connected to that storage front end;
the storage front end is further used to judge, on receiving a client's data request, whether the client's number of data requests exceeds the front-end threshold;
the storage front end is further used to send the client's data requests to the scheduling queue of the corresponding storage back end when the number of data requests does not exceed the front-end threshold;
and the storage front end is further used to refuse the client's data requests when the number of data requests exceeds the front-end threshold.
Preferably, in the dynamic flow control system, the storage front end is specifically used to judge whether the request scheduling queue of any storage back end is in the high water level state;
and the storage front end is specifically used to shrink the front-end thresholds of all its clients by a preset proportion when the request scheduling queue of any storage back end is in the high water level state.
Preferably, in the dynamic flow control system, the storage front end is further specifically used to accept clients' data requests up to the front-end threshold and forward them to the storage back ends whose request scheduling queues are at the low or normal water level;
the other storage front ends among the plurality of storage front ends are specifically used to judge whether the request scheduling queues of the storage back ends they access are below the high water level;
and the other storage front ends are specifically used to maintain or increase their own front-end thresholds if the storage back ends they access all have request scheduling queues below the high water level.
In summary, in the dynamic flow control scheme based on request scheduling queue length provided by the invention, each storage front end calculates the water level states of the request scheduling queues of all the storage back ends, and then adjusts its front-end threshold according to those states; since the front-end threshold bounds the number of concurrent requests of the clients connected to that front end, the storage front end can dynamically adapt the clients' concurrent data requests to the load of the storage back ends. On receiving a client's data request, the storage front end judges whether the client's number of data requests exceeds the front-end threshold, and only when it does not does the front end send the request to the scheduling queue of the corresponding storage back end, so that the load condition of the storage back end limits the clients' concurrent requests. When the number of data requests exceeds the front-end threshold, the storage front end refuses the client's data requests.
In this way, the front-end threshold at which a storage front end accepts client data requests adjusts itself automatically to the back-end load in different scenarios, and the clients' concurrent data requests are regulated dynamically, which improves the adaptability of the storage system, guarantees quality of service, and prevents system overload. This solves the prior-art problem that, because the hardware configurations of storage front ends and back ends differ, their processing capacities differ greatly and the distributed storage system can hardly quantify the traffic or number of concurrent requests at which to throttle.
Drawings
To illustrate the embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed in their description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can derive other drawings from the structures shown without creative effort.
Fig. 1 is a flowchart of an access method of a distributed storage system according to an embodiment of the invention;
Fig. 2 is a schematic diagram of dynamic flow control according to an embodiment of the invention;
Fig. 3 is a flowchart of a dynamic flow control method based on request scheduling queue length according to an embodiment of the invention;
Fig. 4 is a flowchart of a method for calculating the water level state of a request scheduling queue according to the embodiment shown in Fig. 3;
Fig. 5 is a flowchart of a method for feeding back the water level state of a request scheduling queue according to the embodiment shown in Fig. 4;
Fig. 6 is a flowchart of a first method for adjusting the front-end threshold of a storage front end according to the embodiment shown in Fig. 3;
Fig. 7 is a flowchart of a second method for adjusting the front-end threshold of a storage front end according to the embodiment shown in Fig. 3;
Fig. 8 is a flowchart of a third method for adjusting the front-end threshold of a storage front end according to the embodiment shown in Fig. 3;
Fig. 9 is a flowchart of a method for determining a client's number of data requests according to the embodiment shown in Fig. 3;
Fig. 10 is a flowchart of a method for sending a client's data request according to the embodiment shown in Fig. 3;
Fig. 11 is a schematic structural diagram of a dynamic flow control system based on request scheduling queue length according to an embodiment of the invention.
The present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiments of the invention mainly solve the following technical problem:
in distributed storage systems, the real data request is actually handled by one or more "storage backend" that is responsible for interacting with the storage media to achieve the real "access" of the storage resource. Although the "storage back end" is actually responsible for the data request, when the "storage back end" reaches a capacity bottleneck (for example, a disk is busy and overloaded), if the "storage front end" continuously receives requests from the client, a large number of data requests cannot be "digested" and completed in time even if entering the distributed storage system, the backlog of the data requests will occupy too many storage resources (for example, memory), and even will exhaust memory resources of the distributed storage system, resulting in a crash of the distributed storage system. In order to solve the above problems, the existing distributed storage system achieves the purpose of preventing the storage back end from being overloaded by limiting the flow of the client at the storage front end or limiting the number of new requests received at the same time. However, this method may also cause a large difference in processing capabilities between the front end and the back end due to different hardware configurations of the storage back end, and it is difficult for the distributed storage system to quantify the number of flow or concurrent requests for decision throttling.
To solve the above problem, refer to Fig. 2, a schematic diagram of dynamic flow control provided in an embodiment of the invention; it shows a client 1, multiple storage front ends 2, request scheduling queues 3, and multiple storage back ends 4, where each storage back end 4 corresponds to one request scheduling queue 3. The water level state of a request scheduling queue 3 therefore directly reflects the load of its storage back end 4: the higher the water level, the higher the back end's load and the weaker its capacity to process further data requests. By setting and adjusting the front-end threshold of a storage front end 2 according to the water level states of the request scheduling queues, the number of concurrent requests of the clients connected to that front end can be bounded. With this dynamic flow control principle, the front-end threshold of the storage front end 2 is limited accurately and efficiently according to the load of the storage back end 4; that is, the client 1 data traffic that the storage front end 2 accepts and processes is limited so that the processing capacities of the storage front end 2 and the storage back end 4 match, and the traffic or number of concurrent requests at which to throttle is quantified.
To achieve the above object, refer specifically to Fig. 3, a flowchart of a dynamic flow control method based on request scheduling queue length according to an embodiment of the invention. As shown in Fig. 3, the method is used in a distributed storage system comprising a plurality of storage front ends and a plurality of storage back ends, each storage back end corresponding to one request scheduling queue; the dynamic flow control method comprises the following steps:
s110: and each storage front end respectively calculates the water level states of all the storage back ends corresponding to the request scheduling queues.
The request scheduling queues correspond one-to-one with the storage back ends, and the length of a queue is an important measure of its back end's load. The storage back end is the "consumer" of its request scheduling queue: if its capacity is strong enough, its consumption rate exceeds the rate at which data requests are produced onto the queue, and the queue accumulates no backlog. Conversely, if the storage back end is busy and its processing capacity has reached its upper limit, consumption falls below production, data requests back up, and the request scheduling queue grows long. At any point in time, the queue lengths at the back ends of different storage nodes differ; some are long and some are short. Three constant water lines are set for the request scheduling queue of each storage back end: high water level, normal water level, and low water level. The high water level indicates that the storage back end is busy, the normal water level indicates that its load is normal, and the low water level indicates that it is idle and can take more data requests. The three water lines can divide the length of the request scheduling queue into equal parts.
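The three watermarks above can be sketched as a classification of queue length. This is only an illustration: the text says the water lines "can divide the length of the request scheduling queue into equal parts", so equal thirds are assumed here, and the function name and capacity parameter are hypothetical:

```python
def water_level(queue_length, queue_capacity):
    """Classify a request scheduling queue against the three watermarks.

    Equal thirds of the queue capacity are an assumption; the patent only
    says the water lines can divide the queue length equally.
    """
    if queue_capacity <= 0:
        raise ValueError("queue_capacity must be positive")
    ratio = queue_length / queue_capacity
    if ratio < 1 / 3:
        return "LOW"      # back end idle: can take more data requests
    if ratio < 2 / 3:
        return "NORMAL"   # back end load is normal
    return "HIGH"         # back end busy: requests are backing up

assert water_level(10, 90) == "LOW"
assert water_level(40, 90) == "NORMAL"
assert water_level(80, 90) == "HIGH"
```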
Specifically, as a preferred embodiment, as shown in Fig. 4, step S110, in which each storage front end calculates the water level states of the request scheduling queues of all the storage back ends, comprises the following steps:
s111: when the storage front end sends a data request to the storage back end, the storage front end acquires the water level state of a request scheduling queue; and the number of the first and second groups,
s112: and when the storage back end finishes processing the data request, the storage back end feeds back the water level state of the request scheduling queue to the storage front end.
When the storage front end sends a data request, it computes from the request which storage back ends hold the corresponding resource; since several back ends may hold it, the front end can obtain the request scheduling queues of several back ends and read the water level states of all of them. Moreover, every time a data request travels from the storage front end to a storage back end, that is, every time a request enters the back end's request scheduling queue, the front end judges the back end's load from the queue's current water level. And when the storage back end finishes processing a data request, it responds to the storage front end and carries the water level state of its request scheduling queue back to the front-end node.
Specifically, as a preferred embodiment, as shown in Fig. 5, the step of the storage back end feeding the water level state of the request scheduling queue back to the storage front end comprises:
S1121: the storage back end samples the water level state of the request scheduling queue a preset number of consecutive times within a preset time.
S1122: only when the request scheduling queue is in the same water level state for the preset number of consecutive samples does the storage back end feed that water level state back to the storage front end.
In the technical solution provided in this embodiment, to prevent a momentary, accidental jump in the length of the request scheduling queue from misrepresenting the real water level, the storage back end must observe the queue in the same water level state for a preset number of consecutive samples within a specific period before it may report that state. For example, if the storage back end observes the high water level three consecutive times within a period, it reports the high water level state.
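The consecutive-sample rule above is a debounce. A minimal sketch, with hypothetical names and the count of three taken from the example in the text:

```python
class WaterLevelDebouncer:
    """Report a water level only after it has been observed a preset number
    of consecutive times, so a momentary spike in queue length does not
    flip the reported state. Illustrative sketch; names are assumptions."""

    def __init__(self, required=3):
        self.required = required
        self._last = None
        self._streak = 0
        self.reported = None   # state most recently fed back to the front end

    def sample(self, state):
        if state == self._last:
            self._streak += 1
        else:
            self._last = state
            self._streak = 1
        if self._streak >= self.required:
            self.reported = state
        return self.reported

d = WaterLevelDebouncer(required=3)
assert d.sample("HIGH") is None        # one sample is not enough
assert d.sample("HIGH") is None
assert d.sample("HIGH") == "HIGH"      # third consecutive sample: report it
assert d.sample("LOW") == "HIGH"       # a single dip does not change the report
```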
After the water level states of the request scheduling queues of all the storage back ends are calculated, the dynamic flow control method based on request scheduling queue length provided by the embodiment shown in Fig. 3 further comprises:
S120: the storage front end adjusts its front-end threshold according to the water level states of the request scheduling queues. The front-end threshold limits the number of concurrent requests of the clients connected to the storage front end. As described above, the water level state of a request scheduling queue directly reflects the load of its storage back end, and the front-end threshold is the upper limit on the concurrent requests of the clients connected to the front end; the front-end threshold can therefore be adjusted according to the back-end load condition so as to limit the concurrency of the clients connected to the storage front end.
Specifically, as a preferred embodiment, as shown in Fig. 6, the step of the storage front end adjusting its front-end threshold according to the water level state of the request scheduling queues comprises:
S121: the storage front end judges whether the request scheduling queue of any storage back end connected to it is in the high water level state.
S122: if the request scheduling queue of any such storage back end is in the high water level state, the storage front end shrinks the front-end thresholds of all its clients by a preset proportion.
In this method of adjusting the front-end threshold, whenever a storage front end receives a data request, it checks the water level state of every storage back end it involves; as soon as the request scheduling queue of any involved back end is at the high water level, the front-end thresholds of all clients connected to that front end are contracted to half their previous value. This ensures that the distributed storage system clamps the front-end threshold quickly when busy, so as to quickly keep the storage back end from being overloaded.
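The aggressive halving described above might be sketched as follows. Halving matches this embodiment (the claims say only "a preset proportion"), and the floor of 1 is an added assumption to keep the threshold usable:

```python
def shrink_on_high_water(threshold, backend_states, floor=1):
    """Halve the front-end threshold as soon as any involved back end
    reports a high water level. Illustrative sketch; the floor of 1 is
    an assumption not stated in the text."""
    if "HIGH" in backend_states:
        return max(floor, threshold // 2)
    return threshold

assert shrink_on_high_water(64, ["NORMAL", "HIGH", "LOW"]) == 32
assert shrink_on_high_water(64, ["NORMAL", "LOW"]) == 64
```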
As shown in Fig. 7, as a preferred embodiment, in the dynamic flow control method, step S120, in which the storage front end adjusts its front-end threshold according to the water level state of the request scheduling queues, comprises:
S123: the storage front end judges whether the request scheduling queues of all the storage back ends are in the low water level state.
S124: if the request scheduling queues of all the storage back ends are in the low water level state, the storage front end increases the front-end thresholds of all its clients by a fixed step.
In the technical solution provided in this embodiment, if all involved back-end water levels stay in the low water level state for a fixed period (e.g. 3 s), the front-end thresholds of all clients connected to the storage front end are increased by a fixed step n, meaning those clients may send more data requests; the threshold becomes T = T + n (n << T). When a certain proportion of the storage back ends connected to the front end reach the normal water level state, the increase stops and the front-end thresholds of the clients connected to the storage front end are held unchanged.
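The conservative additive increase T = T + n might be sketched as below; the step value of 4 is an assumed illustration of "n << T", and the function name is hypothetical:

```python
def grow_on_low_water(threshold, backend_states, step=4):
    """Additively raise the front-end threshold (T = T + n, n << T) only
    while every request scheduling queue the front end uses is at the low
    watermark. Illustrative sketch; step=4 is an assumed value."""
    if backend_states and all(s == "LOW" for s in backend_states):
        return threshold + step
    return threshold   # some back ends left the low watermark: stop growing

assert grow_on_low_water(100, ["LOW", "LOW"]) == 104
assert grow_on_low_water(100, ["LOW", "NORMAL"]) == 100
```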
As shown in Fig. 8, as a preferred embodiment, in the dynamic flow control method, step S120, in which the storage front end adjusts its front-end threshold according to the water level state of the request scheduling queues, comprises:
S125: the storage front end judges whether the request scheduling queues of all the storage back ends are in the normal water level state.
S126: if the request scheduling queues of all the storage back ends are in the normal water level state, the storage front end maintains the front-end thresholds of all its clients.
In the technical scheme provided in this embodiment of the application, when the request scheduling queues of all storage back ends connected to the storage front end stay in the normal water level state for a certain time period, the front-end thresholds of all clients connected to the storage front end are maintained, ensuring that client data requests are sent at a steady rate.
In addition, when some of the storage back ends' request scheduling queues are in the low water level state while the rest are in the normal water level state, the ratio between the two decides the action: for example, if (back ends at low water level) / (back ends at normal water level) is less than or equal to 1/2, the front-end thresholds of all clients connected to the storage front end are held unchanged; if the ratio is higher than that, the front-end thresholds are increased by the fixed step.
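The mixed-state rule above can be sketched as follows, under the assumption that the compared ratio is (back ends at low water) / (back ends at normal water) with 1/2 as the example limit; the function name and `ratio_limit` parameter are illustrative:

```python
# Sketch of the mixed low/normal decision: at or below the ratio limit
# the threshold is held, above it the threshold grows by the fixed step.

def adjust_for_mixed_levels(threshold, levels, step, ratio_limit=0.5):
    low = sum(1 for lvl in levels if lvl == "low")
    normal = sum(1 for lvl in levels if lvl == "normal")
    if normal == 0:            # degenerate case: all low, treat as growth
        return threshold + step
    if low / normal <= ratio_limit:
        return threshold       # maintain: enough back ends already loaded
    return threshold + step    # mostly idle back ends: grow conservatively
```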
In the dynamic flow control method provided by this embodiment of the application, the front-end threshold of the storage front end is adjusted dynamically according to the water level states of the storage back ends' request scheduling queues, using strategies such as contracting the threshold aggressively and increasing it conservatively by small steps. The threshold can thus be limited rapidly when the distributed storage system is busy, preventing storage back-end overload, and raised gradually and slowly when the system is idle. Sampling each storage back end several times in succession within a given time period expresses the back-end water level state more accurately, so the trend of the back-end water level is not decided by a single instantaneous reading. All of these strategies prevent the front-end threshold of the distributed storage system from changing violently and the adjustment from causing the system to oscillate.
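The repeated-sampling idea can be sketched as a small debouncer: a back end reports a water level only after the same reading occurs a fixed number of consecutive times, so one instantaneous spike cannot flip the reported state. The class name and its API are assumptions made for this illustration:

```python
from collections import deque

class DebouncedWaterLevel:
    """Report a water-level change only after `required_samples`
    identical consecutive readings of the scheduling queue length."""

    def __init__(self, required_samples):
        self.samples = deque(maxlen=required_samples)
        self.required = required_samples
        self.reported = None  # last state fed back to the storage front end

    def observe(self, level):
        self.samples.append(level)
        if len(self.samples) == self.required and len(set(self.samples)) == 1:
            self.reported = level  # stable for the whole window: report it
        return self.reported
```

A single "low" reading after a run of "high" readings leaves the reported state unchanged, which is exactly the anti-oscillation behavior described above.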
After adjusting the front-end threshold of the storage front-end according to the water level status of the request scheduling queue, the technical solution provided by the embodiment shown in fig. 3 further includes:
s130: when receiving the data request of the client, the storage front end judges whether the data request quantity of the client exceeds a front end threshold.
The storage front end integrates the water level states of all involved storage back ends to adjust the front-end threshold; specifically, it counts the number of data requests from each client connected to it with an IO counter.
Specifically, as a preferred embodiment, as shown in fig. 9, when the storage front end receives a data request from a client, the step of determining whether the number of data requests from the client exceeds a front end threshold includes:
s131: and the storage front end calculates the data request quantity of the corresponding client by using the IO counter.
S132: and the storage front end judges whether the data request quantity of the client exceeds a front end threshold according to the data request quantity calculated by the IO counter.
According to the technical scheme provided by this embodiment of the application, the front-end threshold of the storage front end caps the number of concurrent requests of each client connected to it. When a new data request arrives from a client, the IO counter for that client in the storage front end is incremented by 1; when the request finishes, that is, when the storage back end responds to the client, the counter is decremented by 1. Therefore, once the IO counter of a client exceeds the front-end threshold, the storage front end automatically stops accepting new data requests from that over-limit client, and accepts new requests again only after the counter drops below the threshold.
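The IO-counter admission logic can be sketched as below; the class and method names are illustrative, not part of the patent:

```python
# Minimal sketch of the per-client IO counter: +1 when a request arrives,
# -1 when the storage back end responds, and admission only while the
# counter is below the front-end threshold.

class IoCounterGate:
    def __init__(self, front_end_threshold):
        self.threshold = front_end_threshold
        self.in_flight = 0  # current IO counter for this client

    def try_admit(self):
        """Return True and count the request if it is under the threshold."""
        if self.in_flight >= self.threshold:
            return False     # over-limit client: reject the new request
        self.in_flight += 1
        return True

    def complete(self):
        """Called when the back end responds; frees one admission slot."""
        self.in_flight = max(0, self.in_flight - 1)
```

Because completions decrement the same counter that admissions check, a rejected client becomes eligible again as soon as one of its in-flight requests finishes.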
After determining whether the number of data requests of the client exceeds the front-end threshold, the technical solution provided by the embodiment shown in fig. 3 further includes:
s140: and if the quantity of the data requests does not exceed the front-end threshold, the storage front end sends the data requests of the client to a request scheduling queue corresponding to the storage back end. The storage front end can use the data request to calculate the position of a storage back end for storing data requested by the data request according to a certain algorithm, and when a plurality of storage back ends exist, the storage front end can send the data request of the client to a request scheduling queue corresponding to the idle storage back end and then send the data request to the storage back end through the request scheduling queue. Wherein the storage back end is idle, namely the request scheduling queue is in a low water level state.
Specifically, in order to approach the high-water line as closely as possible and exploit the load capacity of all the storage back ends, as a preferred embodiment shown in fig. 10, the step in which the storage front end sends the client's data request to the request scheduling queue corresponding to the storage back end comprises:
S141: The other storage front ends among the plurality of storage front ends judge whether they are accessing storage back ends whose corresponding request scheduling queues are in a non-high water level state.
S142: If the other storage front ends are accessing storage back ends whose corresponding request scheduling queues are in a non-high water level state, those storage front ends maintain or increase their front-end thresholds.
In the technical scheme provided by this embodiment of the application, because different storage front ends may have different front-end thresholds, when one or more storage back ends are busy and their request scheduling queues are at the high water level, some storage front ends contract their thresholds; this may throttle front ends whose traffic actually targets back ends whose queues are not at the high water level. To avoid this, a storage front end that is accessing back ends with non-high-water-level queues should maintain or increase its front-end threshold according to the adjustment method described above, so that the processing capacity of all storage back ends is exploited and their load approaches the high-water line as closely as possible.
After the storage front end sends the data request of the client to the request scheduling queue corresponding to the storage back end, the technical solution provided by the embodiment shown in fig. 3 further includes:
s150: and if the data request quantity exceeds the front-end threshold, the storage front end does not receive the data request of the client.
To sum up, in the dynamic flow control method based on the request scheduling queue length provided by the embodiment of the present invention, each storage front end calculates the water level states of the request scheduling queues corresponding to all storage back ends. The storage front end then adjusts its front-end threshold according to those water level states; the front-end threshold limits the upper bound on the number of concurrent requests of the clients connected to the storage front end, so the storage front end can dynamically adjust the clients' concurrent data requests according to the load states of the storage back ends. When receiving a client's data request, the storage front end judges whether the client's number of data requests exceeds the front-end threshold, and only when it does not does the storage front end send the request to the request scheduling queue of the corresponding storage back end; the front-end threshold, that is, the load condition of the storage back ends, thereby limits the clients' number of concurrent requests. When the number of data requests exceeds the front-end threshold, the storage front end stops receiving the client's data requests. In this way, the front-end threshold at which the storage front end accepts client data requests adjusts itself to the load of the storage back ends in different scenarios, and the number of concurrent client data requests is tuned dynamically, which improves storage adaptability, guarantees quality of service, prevents system overload, and reduces manual intervention.
Based on the same concept of the embodiment of the method, the embodiment of the present invention further provides a dynamic flow control system based on the request scheduling queue length, which is used for implementing the method of the present invention.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a dynamic flow control system based on the request scheduling queue length according to the present invention. As shown in fig. 11, the dynamic flow control system based on the request scheduling queue length includes:
a plurality of storage front ends 110 and a plurality of storage back ends 120. As shown in fig. 11, the storage front ends 110 include storage front end 1 and storage front end 2, and the storage back ends 120 include storage back end 1, storage back end 2, and storage back end 3. Each storage back end 120 corresponds to one request scheduling queue 130; wherein:
each storage front end 110 is used to calculate the water level status of the request dispatch queue 130 corresponding to all the storage back ends 120.
The storage front end 110 is further configured to adjust a front end threshold of the storage front end 110 according to the water level status of the request scheduling queue 130, where the front end threshold is used to limit an upper limit of a number of concurrent requests of a client connected to the storage front end 110.
The storage front end 110 is further configured to, when receiving a data request from the client 140, determine whether the number of data requests from the client 140 exceeds a front end threshold.
The storage front end 110 is further configured to send the data request of the client 140 to the request scheduling queue 130 corresponding to the storage back end 120 if the number of the data requests does not exceed the front end threshold.
The storage front-end 110 is further configured to not receive data requests from the client 140 if the number of data requests exceeds the front-end threshold.
In addition, as a preferred embodiment, as shown in fig. 11, in the dynamic flow control system, the storage front end 110 is specifically configured to determine whether the request scheduling queue 130 corresponding to any of the storage back ends 120 is in a high water level state;
the storage front end 110 is further configured to shrink the front end thresholds corresponding to all the clients according to a predetermined ratio if the request scheduling queue 130 corresponding to any of the storage back ends 120 is in a high water level state.
In addition, in the above preferred embodiment, as shown in fig. 11, in the dynamic flow control system, the storage front end 110 is specifically further configured to receive client data requests according to the front-end threshold, and to forward all the data requests to storage back ends 120 whose request scheduling queues 130 are in a low or normal water level state;
the other storage front ends among the plurality of storage front ends 110 are specifically further configured to determine whether they are accessing storage back ends 120 whose corresponding request scheduling queues 130 are in a non-high water level state;
the other storage front end 110, for example storage front end 2 in fig. 11, is specifically further configured to maintain or increase its front-end threshold if it is accessing a storage back end 120 whose corresponding request scheduling queue 130 is in a non-high water level state.
To sum up, in the dynamic flow control system based on the request scheduling queue 130 length provided by the embodiment of the present invention, each storage front end 110 calculates the water level states of the request scheduling queues 130 corresponding to all storage back ends 120. The storage front end 110 then adjusts its front-end threshold according to those water level states; the front-end threshold limits the upper bound on the number of concurrent requests of the clients connected to the storage front end 110, so the storage front end 110 can dynamically adjust the clients' concurrent data requests according to the load states of the storage back ends 120. When the storage front end 110 receives a client's data request, it judges whether the client's number of data requests exceeds the front-end threshold, and only when it does not does the storage front end 110 send the request to the request scheduling queue 130 of the corresponding storage back end 120; the front-end threshold, that is, the load condition of the storage back ends 120, thereby limits the clients' number of concurrent requests. The storage front end 110 does not receive client data requests when their number exceeds the front-end threshold. In this way, the front-end threshold at which the storage front end 110 accepts client data requests adjusts itself to the load of the storage back ends 120 in different scenarios, and the number of concurrent client data requests is tuned dynamically, which improves storage adaptability, guarantees quality of service, prevents system overload, and reduces manual intervention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second, third, etcetera does not indicate any ordering; these words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A dynamic flow control method based on request scheduling queue length is characterized in that the method is used for a distributed storage system, the distributed storage system comprises a plurality of storage front ends and a plurality of storage back ends, and each storage back end corresponds to a request scheduling queue; the dynamic flow control method comprises the following steps:
each storage front end respectively calculates the water level states of all the storage back ends corresponding to the request scheduling queues;
the storage front end adjusts a front end threshold of the storage front end according to the water level state of the request scheduling queue, wherein the front end threshold is used for limiting the upper limit of the quantity of concurrent requests of a client connected with the storage front end;
when the storage front end receives the data request of the client, judging whether the data request quantity of the client exceeds the front end threshold;
if the data request quantity does not exceed the front-end threshold, the storage front end sends the data request of the client to a request scheduling queue corresponding to the storage back end;
and if the data request quantity exceeds the front-end threshold, the storage front end does not receive the data request of the client.
2. The dynamic flow control method according to claim 1, wherein the step of the storage front end adjusting the front end threshold of the storage front end according to the water level status of the request scheduling queue comprises:
the storage front end judges whether any storage back end corresponding to the request scheduling queue accessed by the storage front end is in a high water level state;
and if any storage back end corresponding to the request scheduling queue is in a high water level state, the storage front end shrinks the front end thresholds corresponding to all the clients according to a preset proportion.
3. The dynamic flow control method according to claim 2, wherein after the step of the storage front end shrinking the front-end thresholds corresponding to all the clients by a predetermined ratio, the method further comprises:
the other storage front ends in the plurality of storage front ends judge whether the storage back end corresponding to the request scheduling queue in the non-high water level state is accessed;
and if the other storage front ends are accessing the storage back ends corresponding to the request scheduling queues in the non-high water level state, maintaining the front end thresholds of the other storage front ends or increasing the front end thresholds.
4. The dynamic flow control method according to claim 1, wherein the step of the storage front end adjusting the front end threshold of the storage front end according to the water level status of the request scheduling queue comprises:
the storage front end judges whether all storage back ends are in a low water level state corresponding to the request scheduling queues;
and if all the storage back-ends are in a low-water level state corresponding to the request scheduling queues, the storage front-end increases the front-end thresholds corresponding to all the clients by a fixed step length.
5. The dynamic flow control method according to claim 1, wherein the step of the storage front end adjusting the front end threshold of the storage front end according to the water level status of the request scheduling queue comprises:
the storage front end judges whether all the storage back ends are in a normal water level state corresponding to the request scheduling queues;
and if all the corresponding request scheduling queues of the storage back end are in a normal water level state, the storage front end maintains the front end threshold corresponding to all the clients.
6. The dynamic flow control method according to claim 1, wherein the step of determining, by the storage front end, whether the number of data requests of the client exceeds the front end threshold when receiving the data requests of the client includes:
the storage front end calculates the number of data requests of the corresponding client by using an IO counter;
and the storage front end judges whether the data request quantity of the client exceeds the front end threshold according to the data request quantity calculated by the IO counter.
7. The dynamic flow control method according to claim 1, wherein the step of each storage front end respectively calculating the water level states of the request scheduling queues corresponding to all the storage back ends comprises:
when the storage front end sends a data request to the storage back end, the storage front end acquires the water level state of the request scheduling queue; and
and when the storage back end finishes processing the data request, the storage back end feeds back the water level state of the request scheduling queue to the storage front end.
8. The dynamic flow control method according to claim 7, wherein the step of the storage back end feeding back the water level status of the request scheduling queue to the storage front end comprises:
the storage back end continuously detects the water level state of the request scheduling queue for a preset number of times in a preset time;
and when the request scheduling queues are in the same water level state for the predetermined times, the storage back end feeds back the water level states of the request scheduling queues to the storage front end.
9. A dynamic flow control system that schedules queue lengths based on requests, comprising:
the system comprises a plurality of storage front ends and a plurality of storage back ends, wherein each storage back end corresponds to a request scheduling queue; wherein:
each storage front end is respectively used for calculating the water level state of the corresponding request scheduling queue of all the storage back ends;
the storage front end is further configured to adjust a front end threshold of the storage front end according to a water level state of the request scheduling queue, where the front end threshold is used to limit an upper limit of a concurrent request quantity of a client connected to the storage front end;
the storage front end is further configured to determine whether the number of data requests of the client exceeds the front end threshold when receiving the data requests of the client;
the storage front end is further configured to send the data request of the client to a request scheduling queue corresponding to the storage back end if the number of the data requests does not exceed the front end threshold;
the storage front end is further configured to not receive the data requests of the client any more if the number of the data requests exceeds the front end threshold.
10. The dynamic flow control system according to claim 9, wherein the storage front-end is specifically configured to determine whether any storage back-end corresponding to the request scheduling queue is in a high water level state;
the storage front end is specifically used for contracting the front end thresholds corresponding to all the clients according to a preset proportion when any storage back end corresponding to the request scheduling queue is in a high water level state.
CN202211328698.0A 2022-10-27 2022-10-27 Dynamic flow control method and system based on request scheduling queue length Pending CN115643309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211328698.0A CN115643309A (en) 2022-10-27 2022-10-27 Dynamic flow control method and system based on request scheduling queue length

Publications (1)

Publication Number Publication Date
CN115643309A true CN115643309A (en) 2023-01-24

Family

ID=84946652

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211328698.0A Pending CN115643309A (en) 2022-10-27 2022-10-27 Dynamic flow control method and system based on request scheduling queue length

Country Status (1)

Country Link
CN (1) CN115643309A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100302950A1 (en) * 2009-05-30 2010-12-02 Zhao Ray R Timer adjustment for load control
WO2016068986A1 (en) * 2014-10-31 2016-05-06 Hewlett Packard Enterprise Development Lp Draining a write queue based on information from a read queue
CN109450803A (en) * 2018-09-11 2019-03-08 广东神马搜索科技有限公司 Traffic scheduling method, device and system
CN109688229A (en) * 2019-01-24 2019-04-26 江苏中云科技有限公司 Session keeps system under a kind of load balancing cluster
CN110120973A (en) * 2019-04-28 2019-08-13 华为技术有限公司 A kind of request control method, relevant device and computer storage medium
US20200183620A1 (en) * 2018-12-05 2020-06-11 International Business Machines Corporation Enabling compression based on queue occupancy
CN111552565A (en) * 2020-04-26 2020-08-18 深圳市鸿合创新信息技术有限责任公司 Multithreading screen projection method and device
CN111857992A (en) * 2020-06-24 2020-10-30 厦门网宿有限公司 Thread resource allocation method and device in Radosgw module
CN111949417A (en) * 2020-07-03 2020-11-17 福建天泉教育科技有限公司 Message transmission method and storage medium
CN111988234A (en) * 2019-05-23 2020-11-24 厦门网宿有限公司 Overload protection method, device, server and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination