CN117793188A - Resource flow limiting method, device, system, equipment and storage medium - Google Patents


Info

Publication number
CN117793188A
CN117793188A CN202211170305.8A
Authority
CN
China
Prior art keywords
target
resource
preset
current
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211170305.8A
Other languages
Chinese (zh)
Inventor
纪卓志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd filed Critical Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN202211170305.8A priority Critical patent/CN117793188A/en
Publication of CN117793188A publication Critical patent/CN117793188A/en
Pending legal-status Critical Current

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a resource flow limiting method, device, system, equipment and storage medium. In the method, in response to acquiring a current resource request, a target resource identifier corresponding to the current resource request is determined; a target storage shard is selected from a plurality of preset storage shards that store flow-limiting data corresponding to the target resource identifier; and the current resource request and the target resource identifier are sent to the target storage shard, so that the target storage shard executes the flow-limiting process.

Description

Resource flow limiting method, device, system, equipment and storage medium
Technical Field
The present invention relates to the field of cloud computing technologies, and in particular, to a method, an apparatus, a system, a device, and a storage medium for limiting resource flow.
Background
In the existing distributed flow limiting technology, all machines use the same distributed storage component to store flow-limiting data, which avoids inaccurate flow-limiting data caused by machine restarts and by machines being added or removed. However, because the same resource uses the same identifier, when a distributed storage component in cluster mode is adopted, every request for the same resource is routed to the same cluster shard in the distributed storage component, so the processing pressure on that shard becomes excessive and the distributed storage component suffers from single-point pressure.
In the process of implementing the present invention, the inventor found that the prior art has at least the following technical problem: all requests for the same resource may be routed to the same shard, so that the processing pressure on the shard becomes excessive; since the computing power a single shard can carry is limited, this creates a performance bottleneck.
Disclosure of Invention
The embodiment of the invention provides a resource flow limiting method, device, system, equipment and storage medium, which solve the single-point pressure problem in the prior art caused by routing all resource requests for the same resource to the same shard for flow limiting, and thereby remove the flow-limiting performance bottleneck.
According to an aspect of the embodiment of the present invention, there is provided a resource flow limiting method, including:
determining a target resource identifier corresponding to a current resource request in response to acquiring the current resource request;
determining a plurality of preset storage shards that store flow-limiting data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards;
and sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the flow-limiting data corresponding to the target resource identifier based on the target resource identifier and performs flow-limiting processing on the current resource request based on the flow-limiting data.
According to another aspect of the embodiment of the present invention, there is provided a resource flow limiting device, including:
a resource identifier determining module, configured to determine a target resource identifier corresponding to a current resource request in response to acquiring the current resource request;
a storage shard selection module, configured to determine a plurality of preset storage shards that store flow-limiting data corresponding to the target resource identifier, and to select a target storage shard from the plurality of preset storage shards;
and a resource request sending module, configured to send the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the flow-limiting data corresponding to the target resource identifier based on the target resource identifier and performs flow-limiting processing on the current resource request based on the flow-limiting data.
According to another aspect of an embodiment of the present invention, there is provided a resource flow limiting system, the system comprising at least one flow limiter and at least two preset storage shards, the preset storage shards comprising a target storage shard; wherein,
the flow limiter is configured to send a current resource request and a target resource identifier corresponding to the current resource request to the target storage shard based on the resource flow limiting method provided by any embodiment of the invention;
the target storage shard is configured to determine flow-limiting data corresponding to the target resource identifier based on the target resource identifier, and to perform flow-limiting processing on the current resource request based on the flow-limiting data.
According to another aspect of an embodiment of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the resource flow limiting method provided by any embodiment of the present invention.
According to another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the resource flow limiting method provided by any embodiment of the present invention.
One embodiment of the above invention has the following advantages or benefits:
in response to acquiring a current resource request, a target resource identifier corresponding to the current resource request is determined, and a target storage shard is selected from the preset storage shards that store the flow-limiting data corresponding to the target resource identifier, so that the current resource request and the target resource identifier are sent to the target storage shard and the target storage shard executes the flow-limiting process.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a flow chart of a resource flow limiting method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another resource flow limiting method according to an embodiment of the present invention;
FIG. 3 is a flow chart of another resource flow limiting method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a resource flow-limiting device according to an embodiment of the present invention;
FIG. 5A is a schematic diagram illustrating a resource flow-limiting system according to an embodiment of the present invention;
FIG. 5B is a process flow diagram of a resource flow limiting system provided according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "preset," "target," and the like in the description and claims of the present invention and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
Fig. 1 is a flow chart of a resource flow limiting method according to an embodiment of the present invention. The embodiment is applicable to performing flow-limiting processing on received resource requests for the same resource based on a plurality of preset storage shards that store the flow-limiting data of that resource. The method may be performed by a resource flow limiting device, which may be implemented in the form of hardware and/or software and may be configured in a monolithic server or in each node of a distributed system. As shown in fig. 1, the method includes:
s110, determining a target resource identifier corresponding to the current resource request in response to the current resource request.
Wherein the current resource request may be a client-generated resource request. By way of example, it may be a presentation request for a product detail page, a display request for article content, an account login request for a client, or an access request for a web page, etc. In particular, the resource request may be sent by the client or by the load balancer.
Specifically, according to the obtained current resource request, determining a target resource identifier corresponding to the current resource request, namely determining the identifier of the resource requested by the current resource request. For example, the target resource identifier corresponding to the current resource request may be determined according to the resource request information carried by the current resource request. The resource request information may be information describing the resource to be requested, such as a commodity identifier, a web address, or an article name.
It should be noted that, in the embodiment of the present invention, every resource request for the same resource should have the same corresponding resource identifier, so that the corresponding flow-limiting data can be queried based on the resource identifier. For example, if resource request A1 sent by client A is an access request for web page Web1, and resource request B1 sent by client B is also an access request for web page Web1, then the resource identifiers corresponding to resource request A1 and resource request B1 are the same.
S120, determining a plurality of preset storage shards that store the flow-limiting data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards.
The preset storage shards may be the cluster shards in the distributed storage component, that is, the machines that make up the distributed storage component. The number of preset storage shards is at least two. The preset storage shards store the flow-limiting data of the target resource corresponding to the target resource identifier, and a preset storage shard can perform flow-limiting processing on resource requests for the resources whose flow-limiting data it stores.
Specifically, the flow-limiting data may include the respondable amount of the resource and the currently responded amount of the resource, or the flow-limiting data may include the remaining respondable amount of the resource. The flow-limiting data may also include the time at which flow-limiting processing was last performed.
In the embodiment of the present invention, the flow-limiting data stored for the same resource may be distributed uniformly across the preset storage shards. For example, preset flow-limit information may be configured in advance for the target resource corresponding to the target resource identifier, and the flow-limiting data to be stored in each preset storage shard is then determined according to the number of preset storage shards. For example, if the number of preset storage shards is 10 and the preset flow-limit information of the target resource is 1000 requests per second, the flow-limiting data stored in each preset storage shard may be 100 requests per second.
Alternatively, the flow-limiting data stored for the same resource may be distributed unevenly across the preset storage shards. For example, the preset flow-limit information of the target resource corresponding to the target resource identifier is distributed to the preset storage shards based on the actual load capacity of each preset storage shard. Continuing the above example, the flow-limiting data stored in the preset storage shards may be 150, 50, 100, 30, 120, 150, 100, respectively.
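As a concrete illustration of the two distribution strategies above, the per-shard quotas can be computed as follows. This is a minimal Python sketch, not part of the patent; the function names and parameters are illustrative.

```python
def split_limit_uniform(total_limit, shard_count):
    """Evenly split a per-second request limit across storage shards.

    Any remainder is spread over the first shards so the per-shard
    quotas still sum to the total limit.
    """
    base, remainder = divmod(total_limit, shard_count)
    return [base + (1 if i < remainder else 0) for i in range(shard_count)]


def split_limit_weighted(total_limit, load_capacities):
    """Split the limit in proportion to each shard's actual load capacity."""
    total_capacity = sum(load_capacities)
    quotas = [total_limit * c // total_capacity for c in load_capacities]
    # Give any rounding remainder to the highest-capacity shard.
    quotas[load_capacities.index(max(load_capacities))] += total_limit - sum(quotas)
    return quotas
```

For example, `split_limit_uniform(1000, 10)` yields 100 requests per second for each of 10 shards, matching the uniform example above.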
It should be noted that a preset storage shard may simultaneously store flow-limiting data corresponding to multiple resources. The flow-limiting data corresponding to each resource is distinguished by the resource identifier of that resource; that is, a preset storage shard can determine the flow-limiting data corresponding to each resource according to the resource identifiers.
In the embodiment of the present invention, selecting a target storage shard from the plurality of preset storage shards may be: selecting the target storage shard from the plurality of preset storage shards based on a load balancing algorithm. Illustratively, selecting the target storage shard from the plurality of preset storage shards may be any of the following:
determining the target storage shard for flow-limiting the current resource request based on the storage shard that last performed flow-limiting processing for a resource request corresponding to the target resource identifier; calculating a current random parameter through a random function and determining the target storage shard based on the current random parameter; or acquiring the source address corresponding to the current resource request, calculating a current hash value based on the source address, and determining the target storage shard based on the current hash value.
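The three selection strategies above might be sketched as follows. This Python sketch is illustrative only: the function names, the use of MD5 as the hash function, and the round-robin reading of "based on the shard used last time" are assumptions, not taken from the patent.

```python
import hashlib
import random


def pick_round_robin(last_shard_index, shard_count):
    """Pick the shard after the one that handled the previous request
    for this resource identifier (one reading of 'based on the last shard')."""
    return (last_shard_index + 1) % shard_count


def pick_random(shard_count):
    """Pick a shard via a random function (current random parameter)."""
    return random.randrange(shard_count)


def pick_by_source_hash(source_address, shard_count):
    """Pick a shard from a hash of the request's source address."""
    digest = hashlib.md5(source_address.encode("utf-8")).hexdigest()
    return int(digest, 16) % shard_count
```

Note that the source-hash strategy pins all requests from one source to one shard, while the other two spread requests from the same source across shards.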
In a specific embodiment, the preset storage shards correspond to at least one preset virtual shard, and determining a plurality of preset storage shards that store the flow-limiting data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards, includes: determining each preset virtual shard, and selecting a target virtual shard from the preset virtual shards to obtain the identifier of the target virtual shard; and determining the number of preset storage shards that store the flow-limiting data corresponding to the target resource identifier, and determining the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
The preset virtual shard may be a preset virtual structure, or an entity structure, used for mapping to a preset storage shard. Each preset virtual shard may be mapped to one preset storage shard. It should be noted that the mapping between the preset virtual shards and the preset storage shards may be many-to-one or one-to-one; that is, one preset virtual shard is mapped to one preset storage shard, or multiple preset virtual shards are mapped to the same preset storage shard.
In the embodiment of the invention, a plurality of preset virtual shards may be configured in advance, with the number of preset virtual shards greater than the number of preset storage shards, so that the preset virtual shards and the preset storage shards satisfy a many-to-one or one-to-one mapping relationship. This avoids preset storage shards to which nothing is mapped, and thereby avoids idle preset storage shards that never perform flow-limiting processing. For example, the number of preset virtual shards may be an integer multiple of the number of preset storage shards, so that the preset storage shards are mapped uniformly and resource requests are distributed evenly across them.
Specifically, the number of preset virtual shards may be obtained, and the flow limit on each preset virtual shard may be calculated. If the number of preset virtual shards is 20 and the total cluster flow limit is 1000 requests per second, then the flow limit on each preset virtual shard is 50 requests per second. Further, a target virtual shard can be selected from the preset virtual shards according to the flow limit on each preset virtual shard and a load balancing algorithm, and the identifier of the target virtual shard is obtained. Load balancing algorithms include, but are not limited to, round-robin scheduling, weighted round-robin, random selection, consistent hashing, and the like.
Further, according to the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm, a mapping identifier corresponding to the identifier of the target virtual shard can be calculated, realizing the mapping from the target virtual shard to the target storage shard; the target storage shard is then determined according to the mapping identifier. For example, the mapping identifier may be the shard identifier of the target storage shard, and the target storage shard is determined directly from the mapping identifier.
The preset hash mapping algorithm may be a consistent hashing algorithm. Illustratively, determining the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and the preset hash mapping algorithm may be: determining a modulo value of the identifier of the target virtual shard with respect to the number of preset storage shards, and determining the identifier of the target storage shard based on the modulo value.
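The modulo mapping just described can be sketched in a line of Python (illustrative only; it assumes numeric virtual-shard identifiers):

```python
def map_virtual_to_storage(virtual_shard_id, storage_shard_count):
    """Map a numeric virtual-shard identifier to a storage shard by taking
    its value modulo the number of storage shards."""
    return virtual_shard_id % storage_shard_count
```

With 20 virtual shards mapped onto 10 storage shards this way, each storage shard receives exactly two virtual shards, so no storage shard sits idle.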
Alternatively, all preset storage shards may be mapped onto a preset hash ring in advance according to their respective identifiers, obtaining the shard mapping position of each preset storage shard on the preset hash ring; the current mapping position of the target virtual shard on the preset hash ring is then determined, and the target storage shard is determined from the preset storage shards according to the positional relationship between the current mapping position and the shard mapping positions.
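The hash-ring alternative might look like the following minimal sketch. It is not taken from the patent: the class name, the MD5-derived ring positions, and the clockwise "first storage shard at or after the virtual shard's position" rule are assumptions used for illustration.

```python
import bisect
import hashlib


def _ring_position(key):
    # Position on a 2**32-slot ring derived from a stable hash of the key.
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % (2 ** 32)


class HashRing:
    """Minimal consistent-hash ring: each storage shard is placed on the
    ring by its identifier, and a virtual shard is routed to the first
    storage shard at or after its own ring position (wrapping around)."""

    def __init__(self, storage_shard_ids):
        self._ring = sorted((_ring_position(s), s) for s in storage_shard_ids)
        self._positions = [pos for pos, _ in self._ring]

    def locate(self, virtual_shard_id):
        pos = _ring_position(virtual_shard_id)
        idx = bisect.bisect_left(self._positions, pos) % len(self._ring)
        return self._ring[idx][1]
```

A ring like this keeps most virtual-shard mappings stable when a storage shard is added or removed, which is the usual reason to prefer it over plain modulo mapping.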
Illustratively, the preset virtual shard is a preset data structure, a preset storage unit, a preset middleware, or a preset number. The preset data structure may be an in-memory data structure such as a stack or a queue. The preset storage unit may be a preset storage space in memory. The preset middleware may be a preset server or node, that is, an actually existing proxy node. The preset number is a preset numeric label, such as 01, 02, 03, and so on.
It should be noted that if the preset virtual shard is a preset middleware, the mapping logic to the preset storage shard may be executed by the preset virtual shard itself; that is, the target virtual shard may determine the target storage shard corresponding to it. The advantage of this arrangement is that the mapping from virtual shards to storage shards is performed by the virtual shards, reducing the processing logic that must run elsewhere. If the preset virtual shard is a preset number, a preset data structure, or a preset storage unit, no additional real nodes need to be deployed, which reduces the implementation cost of resource flow limiting.
In the above embodiment, determining the identifier of the target virtual shard from the preset virtual shards, and then determining the target storage shard from that identifier, has the following benefit: by introducing virtual shards to map onto the storage shards, the flow-limiting processing of resource requests is split up, so that flow-limiting processing of requests for the same resource is routed across the preset storage shards, effectively solving the single-point pressure problem of the distributed storage component in the prior art.
S130, sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the flow-limiting data corresponding to the target resource identifier based on the target resource identifier, and performs flow-limiting processing on the current resource request based on the flow-limiting data.
In the embodiment of the invention, the target storage shard can perform flow-limiting processing on the current resource request. Specifically, the current resource request and the target resource identifier of the target resource it requests are sent to the target storage shard; the target storage shard then queries the flow-limiting data corresponding to the target resource according to the target resource identifier, and performs flow-limiting processing on the current resource request according to the flow-limiting data.
Performing flow-limiting processing on the current resource request according to the flow-limiting data may be: determining the flow-limiting processing result of the current resource request according to the remaining respondable amount of the resource in the flow-limiting data; or determining the remaining respondable amount according to the respondable amount and the currently responded amount of the resource in the flow-limiting data, and determining the flow-limiting processing result of the resource request according to the remaining respondable amount.
Illustratively, determining the flow-limiting processing result of the current resource request according to the remaining respondable amount may be: determining the flow-limiting processing result of the current resource request according to the remaining respondable amount and a preset flow-limiting algorithm. Preset flow-limiting algorithms include the counter algorithm, the sliding window algorithm, the leaky bucket algorithm, and the token bucket algorithm.
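As one concrete instance of the listed algorithms, a token bucket might be sketched as follows. This is illustrative Python, not the patent's implementation; the class name and parameters are assumptions.

```python
import time


class TokenBucket:
    """Token-bucket flow limiter: tokens accrue at `rate` per second up to
    `capacity`; a request is admitted only if a whole token is available."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)      # start with a full bucket
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In the sharded setting described above, each storage shard would run such a limiter with its own quota (e.g. 100 requests per second out of a 1000-per-second total split over 10 shards).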
In a specific implementation, after the current resource request and the target resource identifier are sent to the target storage shard, the method provided by the embodiment of the invention further includes: obtaining the flow-limiting processing result fed back by the target storage shard, wherein the flow-limiting processing result is a refused response, a direct response, or a waiting response; and, based on the flow-limiting processing result, forwarding the current resource request to a resource service server for processing or returning a request failure signal to the client.
Specifically, if the flow-limiting processing result is a refused response, a request failure signal may be returned to the client to prompt the client to reinitiate the resource request after a set re-request time has elapsed. If it is a direct response, the current resource request can be forwarded to the resource service server for processing, so that the resource service server responds to the current resource request and provides the resource service to the client corresponding to the current resource request. If the flow-limiting processing result is a waiting response, a request failure signal may be returned to the client; or, after a set waiting duration has elapsed, the current resource request is forwarded to the resource service server for processing, where the set waiting duration may be carried in the flow-limiting processing result or be a preset uniform duration.
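The dispatch logic just described — refuse, respond directly, or wait and then forward — might be sketched like this (illustrative Python; the enum, the callback signatures, and the default waiting duration are assumptions):

```python
import time
from enum import Enum


class ThrottleResult(Enum):
    REFUSE = "refuse"   # refused response: return a request-failure signal
    DIRECT = "direct"   # direct response: forward to the resource service server
    WAIT = "wait"       # waiting response: forward after a set waiting duration


def dispatch(result, request, forward, fail, wait_seconds=0.5):
    """Route a request according to the shard's flow-limiting result."""
    if result is ThrottleResult.REFUSE:
        return fail(request)
    if result is ThrottleResult.DIRECT:
        return forward(request)
    time.sleep(wait_seconds)  # ThrottleResult.WAIT
    return forward(request)
```

Here `forward` would hand the request to the resource service server and `fail` would return the failure signal to the client.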
By obtaining the flow-limiting processing result fed back by the target storage shard, and determining according to that result whether the current resource request is forwarded to the resource service server for processing or is refused, resource flow limitation is realized and the resource service server is prevented from collapsing under load.
In the embodiment of the invention, the target storage shard can feed back the flow-limiting processing result directly; alternatively, the target storage shard can feed back the remaining respondable amount of the resource, and the flow-limiting processing result is then determined according to the remaining respondable amount.
In another specific embodiment, after the current resource request and the target resource identifier are sent to the target storage shard, the method provided by the embodiment of the present invention further includes: acquiring the remaining respondable amount corresponding to the target resource identifier fed back by the target storage shard; and, based on the remaining respondable amount, forwarding the current resource request to the resource service server for processing, or returning a request failure signal to the client.
Specifically, the target storage shard can return the remaining respondable amount; according to the remaining respondable amount fed back by the target storage shard and a preset flow-limiting algorithm, the flow-limiting processing result of the current resource request can be determined, and it is then determined according to that result whether the current resource request is forwarded to the resource service server for processing or a request failure signal is returned to the client.
Flow limitation is thus realized by acquiring the remaining respondable amount fed back by the target storage shard and deciding accordingly whether to forward the current resource request to the resource service server for processing or to refuse it. Because the target storage shard only feeds back the remaining respondable amount and does not need to execute the complete flow-limiting logic, the processing logic of the target storage shard is reduced and its flow-limiting response speed is further improved.
According to the technical scheme of this embodiment, in response to acquiring a current resource request, the target resource identifier corresponding to the current resource request is determined, a target storage shard is selected from the preset storage shards that store the flow-limiting data corresponding to the target resource identifier, and the current resource request and the target resource identifier are sent to the target storage shard so that the target storage shard executes the flow-limiting process.
Fig. 2 is a flow chart of another resource flow limiting method according to an embodiment of the present invention; on the basis of the above embodiment, this embodiment exemplarily describes the process of selecting a target storage shard from a plurality of preset storage shards. As shown in fig. 2, the method includes:
s210, determining a target resource identifier corresponding to the current resource request in response to the current resource request.
S220, determining each preset virtual shard, and selecting a target virtual shard from the preset virtual shards to obtain the identifier of the target virtual shard.
The target virtual shard can be selected from the preset virtual shards based on a load balancing algorithm, such as a round-robin algorithm, a random selection algorithm, a weighted round-robin algorithm, or a consistent hashing algorithm.
S230, sending the identifier of the target virtual shard to a preset middleware, so that the preset middleware determines the number of preset storage shards that store the flow-limiting data corresponding to the target resource identifier, and determines the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
Specifically, after the target virtual fragment is selected from each preset virtual fragment, the identifier of the target virtual fragment is sent to the preset middleware, so that the preset middleware performs mapping operation from the virtual fragment to the storage fragment.
The preset middleware is a target virtual partition or a main node corresponding to the preset storage partition. Specifically, if the preset middleware is a target virtual partition, the target virtual partition may be an independent proxy node, and the target virtual partition may determine the mapped target storage partition according to the identifier of the target virtual partition.
If the preset middleware is a master node corresponding to the preset storage slice, the preset middleware may be a node used for bearing the request for receiving in the distributed storage component to which the preset storage slice belongs, such as a master node or a leader node. Specifically, the identification of the target virtual shard may be sent to a master node in the distributed storage component, and the master node determines the target storage shard mapped by the target virtual shard.
It should be noted that, in the embodiment of the present invention, a preset middleware is set to determine, through the preset middleware, a target storage partition mapped by a target virtual partition, where the purpose is that: the mapping operation from the virtual fragments to the storage fragments is executed by the preset middleware, information such as the number of the preset storage fragments is not required to be paid attention to, and therefore the influence caused by capacity expansion and contraction of the distributed storage assembly can be shielded, and the processing logic of the distributed storage assembly is not required to be changed.
The determining the target storage slices corresponding to the target virtual slices based on the identification of the target virtual slices, the number of preset storage slices and the preset hash mapping algorithm may be specifically described in the above exemplary description, which is not repeated herein.
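The mapping step delegated to the preset middleware can be sketched as a hash-modulo computation. The concrete hash function (CRC32) and identifiers below are assumptions for illustration, since the patent only specifies "a preset hash mapping algorithm":

```python
# Sketch of the mapping the preset middleware performs: given the target
# virtual shard's identifier and the current number of storage shards,
# a hash-modulo mapping picks the target storage shard.

import zlib

def map_virtual_to_storage(virtual_shard_id: str, num_storage_shards: int) -> int:
    # crc32 gives a hash that is stable across processes and runs
    # (unlike Python's builtin hash(), which is randomized for strings).
    digest = zlib.crc32(virtual_shard_id.encode("utf-8"))
    return digest % num_storage_shards

# The limiter only names a virtual shard; the middleware resolves it, so a
# change in num_storage_shards (scale-out or scale-in) stays invisible
# to the limiter.
shard = map_virtual_to_storage("v7", num_storage_shards=4)
print(shard)  # an index in range(4)
```

The same identifier always maps to the same shard for a fixed shard count, which is what lets the middleware route repeat requests consistently.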
S240, sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the rate-limiting data corresponding to the target resource identifier based on the target resource identifier and performs rate-limiting processing on the current resource request based on the rate-limiting data.
In a specific embodiment, sending the current resource request and the target resource identifier to the target storage shard includes: sending the current resource request and the target resource identifier to the preset middleware, so that the preset middleware forwards the current resource request and the target resource identifier to the target storage shard.
Since the preset middleware can determine the target storage shard mapped to by the target virtual shard, the preset middleware forwards the current resource request and the target resource identifier to that target storage shard.
The advantage of this arrangement is as follows: considering that the individual preset storage shards in the distributed storage component may not be directly accessible, this embodiment of the present invention can use the master node in the distributed storage component or the target virtual shard as the preset middleware to build an intermediate layer. This avoids the situation in which the current resource request cannot be sent directly to a storage shard for lack of access rights, and also shields the effects of scaling the distributed storage component out or in.
In this technical scheme, the preset virtual shards are determined and a target virtual shard is selected from them; the identifier of the target virtual shard is sent to the preset middleware, so that the preset middleware determines the number of preset storage shards and determines the target storage shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm. Mapping via the target virtual shard is thus achieved, and because the mapping operation from virtual shards to storage shards is executed by the preset middleware, the effects of scaling the distributed storage component out or in can be shielded, and the rate limiter's own processing logic does not need to change with the number of storage shards.
Fig. 3 is a flow chart of another resource rate-limiting method according to an embodiment of the present invention, in which a step of receiving the current resource request sent by a load balancer based on a load balancing algorithm is added before responding to the current resource request. As shown in Fig. 3, the method includes:
S310, receiving a current resource request sent by a load balancer based on a load balancing algorithm, wherein the load balancer performs load-balanced scheduling of the resource requests sent by the clients.
The load balancer may be deployed on a separate service server. Specifically, each client can send resource requests to the service server, and the service server distributes the resource requests to different nodes through its built-in load balancer; after receiving a resource request, each node may perform rate-limiting processing on it using the resource rate-limiting method provided by this embodiment of the present invention.
In a specific embodiment, receiving the current resource request sent by the load balancer based on the load balancing algorithm includes: receiving a current resource request corresponding to a hot-spot resource sent by the load balancer based on the load balancing algorithm, wherein the hot-spot resource is determined by the load balancer based on the predicted traffic of each preset resource.
Specifically, the target resource corresponding to the current resource request is a hot-spot resource. That is, the load balancer may determine in advance the predicted traffic corresponding to each preset resource and determine the hot-spot resources from the preset resources according to that predicted traffic. Determining the hot-spot resources from the preset resources according to their predicted traffic may be: determining preset resources whose predicted traffic is greater than a preset traffic threshold as hot-spot resources; or determining the top N preset resources by predicted traffic as hot-spot resources; or determining preset resources whose predicted traffic exceeds their historical average traffic as hot-spot resources; or determining preset resources whose traffic generation rate within a set period exceeds a preset rate threshold as hot-spot resources.
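The first two of the strategies listed above can be sketched as follows; the resource names and traffic figures are invented for illustration:

```python
# Hypothetical sketch of two hot-spot detection strategies: a preset
# resource counts as hot if its predicted traffic exceeds a fixed
# threshold, or if it ranks in the top N by predicted traffic.

def hot_by_threshold(predicted, threshold):
    # All resources whose predicted traffic exceeds the threshold.
    return {r for r, flow in predicted.items() if flow > threshold}

def hot_by_top_n(predicted, n):
    # The N resources with the highest predicted traffic.
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    return set(ranked[:n])

predicted = {"sku-a": 900, "sku-b": 120, "sku-c": 5000, "sku-d": 40}
print(hot_by_threshold(predicted, 500))  # {'sku-a', 'sku-c'}
print(hot_by_top_n(predicted, 1))        # {'sku-c'}
```

The historical-average and traffic-rate strategies differ only in the predicate applied to each resource, so they slot into the same shape.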
Further, after receiving the resource requests for the hot-spot resources, the load balancer distributes each resource request to different nodes through load-balanced scheduling. Of course, the load balancer can also predict the hot-spot resources periodically, so as to perform rate-limiting processing on different hot-spot resources at different points in time.
The load balancer may determine at least one hot-spot resource in the current prediction period and send the preset rate-limiting data corresponding to the hot-spot resource to the distributed storage component, so that the storage shards in the distributed storage component store the preset rate-limiting data of the hot-spot resource. When the next prediction period arrives, the load balancer can predict the hot-spot resources again and send the corresponding preset rate-limiting data to the distributed storage component, so that the storage shards in the distributed storage component update the stored rate-limiting data.
In this embodiment, by receiving the current resource requests corresponding to the hot-spot resources sent by the load balancer based on the load balancing algorithm, rate-limiting processing can be applied to requests for hot-spot resources only, avoiding rate-limiting processing of requests for all resources and reducing the rate-limiting workload of each storage shard in the distributed storage component.
S320, in response to acquiring the current resource request, determining the target resource identifier corresponding to the current resource request, determining a plurality of preset storage shards storing the rate-limiting data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards.
S330, sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the rate-limiting data corresponding to the target resource identifier based on the target resource identifier and performs rate-limiting processing on the current resource request based on the rate-limiting data.
In this technical scheme, by receiving the current resource request sent by the load balancer based on the load balancing algorithm and performing rate-limited forwarding of the current resource request after load-balanced scheduling, multiple resource requests can be distributed to the preset storage shards through different nodes. This avoids the excessive pressure that would arise if a single node were responsible for distributing all the resource requests to the preset storage shards, and further improves the efficiency of rate-limiting processing for each resource request.
Fig. 4 is a schematic structural diagram of a resource rate-limiting device according to an embodiment of the present invention. As shown in Fig. 4, the device includes a resource identifier determination module 410, a storage shard selection module 420, and a resource request sending module 430.
The resource identifier determination module 410 is configured to determine, in response to acquiring a current resource request, a target resource identifier corresponding to the current resource request;
the storage shard selection module 420 is configured to determine a plurality of preset storage shards storing rate-limiting data corresponding to the target resource identifier and select a target storage shard from the plurality of preset storage shards;
the resource request sending module 430 is configured to send the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines, based on the target resource identifier, the rate-limiting data corresponding to the target resource identifier and performs rate-limiting processing on the current resource request based on the rate-limiting data.
In the technical scheme of this embodiment, the resource identifier determination module determines, in response to acquiring the current resource request, the target resource identifier corresponding to it; the storage shard selection module selects a target storage shard from the preset storage shards storing the rate-limiting data corresponding to the target resource identifier; and the resource request sending module sends the current resource request and the target resource identifier to the target storage shard, which executes the rate-limiting process. By selecting, from the preset storage shards, a target storage shard to process the current resource request, multiple resource requests for the same resource can be forwarded to different preset storage shards for rate-limiting processing, realizing distributed rate limiting of the resource requests. This solves the single-point-pressure problem in the prior art, in which every resource request for a given resource is routed to the same shard for rate limiting, and thereby removes the rate-limiting performance bottleneck.
On the basis of the foregoing embodiment, each preset storage shard corresponds to at least one preset virtual shard, and the storage shard selection module 420 is specifically configured to:
determine the preset virtual shards, and select a target virtual shard from them to obtain the identifier of the target virtual shard; determine the number of preset storage shards storing the rate-limiting data corresponding to the target resource identifier; and determine the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
On the basis of the foregoing embodiment, each preset storage shard corresponds to at least one preset virtual shard, and the storage shard selection module 420 is specifically configured to:
determine the preset virtual shards, and select a target virtual shard from them to obtain the identifier of the target virtual shard; and send the identifier of the target virtual shard to a preset middleware, so that the preset middleware determines the number of preset storage shards storing the rate-limiting data corresponding to the target resource identifier and determines the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm; wherein the preset middleware is the target virtual shard or a master node corresponding to the preset storage shards.
On the basis of the above embodiment, the resource request sending module 430 is specifically configured to: send the current resource request and the target resource identifier to the preset middleware, so that the preset middleware forwards the current resource request and the target resource identifier to the target storage shard.
On the basis of the above embodiment, a preset virtual shard is a preset data structure, a preset storage unit, a preset middleware, or a preset number.
On the basis of the above embodiment, the device further includes a resource request processing module configured to obtain a rate-limiting processing result fed back by the target storage shard, wherein the rate-limiting processing result includes a reject response, a direct response, or a wait response; and, based on the rate-limiting processing result, to forward the current resource request to the resource service server for processing or return a request failure signal to the client.
On the basis of the above embodiment, the device further includes a resource request processing module configured to obtain the remaining resource response quota corresponding to the target resource identifier, and, based on the remaining quota, to forward the current resource request to the resource service server for processing or return a request failure signal to the client.
On the basis of the above embodiment, the device further includes a resource request receiving module configured to receive the current resource request sent by a load balancer based on a load balancing algorithm, wherein the load balancer performs load-balanced scheduling of the resource requests sent by the clients.
On the basis of the foregoing embodiment, the resource request receiving module is specifically configured to:
receive a current resource request corresponding to a hot-spot resource sent by the load balancer based on a load balancing algorithm, wherein the hot-spot resource is determined by the load balancer based on the predicted traffic of each preset resource.
The resource rate-limiting device provided by this embodiment of the present invention can execute the resource rate-limiting method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Fig. 5A is a schematic structural diagram of a resource rate-limiting system according to an embodiment of the present invention. The resource rate-limiting system includes at least one rate limiter 51 and at least two preset storage shards 52, the preset storage shards 52 including the target storage shard. The rate limiter 51 is configured to send the current resource request and the target resource identifier corresponding to it to the target storage shard according to the resource rate-limiting method provided by the embodiment of the present invention; the target storage shard is configured to determine the rate-limiting data corresponding to the target resource identifier based on the target resource identifier and to perform rate-limiting processing on the current resource request based on the rate-limiting data.
On the basis of the above embodiment, the rate limiter 51 is further configured to determine the preset virtual shards, select a target virtual shard from them to obtain its identifier, determine the number of preset storage shards storing the rate-limiting data corresponding to the target resource identifier, and determine the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
On the basis of the foregoing embodiment, the system provided by the embodiment of the present invention further includes a preset middleware, and the rate limiter 51 is further configured to determine the preset virtual shards, select a target virtual shard from them to obtain its identifier, and send the identifier of the target virtual shard to the preset middleware, so that the preset middleware determines the number of preset storage shards storing the rate-limiting data corresponding to the target resource identifier and determines the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
On the basis of the above embodiment, the rate limiter 51 is further configured to send the current resource request and the target resource identifier to the preset middleware, so that the preset middleware forwards the current resource request and the target resource identifier to the target storage shard.
Referring to Fig. 5B, which shows an exemplary process flow diagram of a resource rate-limiting system, the resource rate-limiting system includes a load balancer, at least one rate limiter, and at least two preset storage shards. Within its node, each rate limiter can execute the resource rate-limiting method provided by the embodiment of the present invention.
Specifically, the clients may send a large number of resource requests to the load balancer, and the load balancer may allocate the resource requests for the hot-spot resources to the rate limiters (that is, the nodes) through a load balancing algorithm. After obtaining the current resource request, a rate limiter determines the target virtual shard through a load balancing algorithm, or through a load balancer built into its node, then computes the target storage shard corresponding to the target virtual shard through a routing algorithm and sends the current resource request to the target storage shard.
The computation of the target storage shard corresponding to the target virtual shard through the routing algorithm, and the sending of the current resource request to the target storage shard, may be performed by the rate limiter or by the preset middleware (such as a virtual shard, or a master node in the distributed storage component). If the mapping process is executed by the preset middleware, the rate limiter only needs to determine the virtual shard and need not concern itself with the storage shards.
The rate limiter may add the identifier of the target virtual shard to the current resource request and then send the request to the preset middleware, so that the preset middleware determines the target storage shard according to the identifier of the target virtual shard carried in the current resource request.
It should be noted that, although Fig. 5B shows a one-to-one mapping between virtual shards and storage shards, the mapping relationship is not limited to one-to-one and may also be many-to-one, which this embodiment does not restrict. Setting a number of virtual shards greater than the number of storage shards ensures that when the distributed storage component is scaled out (that is, the number of preset storage shards is increased), the newly added storage shards can still be mapped to, thereby avoiding idle storage shards.
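The point about provisioning more virtual shards than storage shards can be illustrated with a simple modulo mapping; the modulo scheme itself is an assumption for illustration, since the patent leaves the concrete routing algorithm open:

```python
# Sketch of the many-to-one mapping: with more virtual shards than
# storage shards, several virtual shards share one storage shard, and a
# newly added storage shard picks up existing virtual shards instead of
# sitting idle after a scale-out.

def build_mapping(num_virtual, num_storage):
    # Map each virtual shard index to a storage shard index.
    return {v: v % num_storage for v in range(num_virtual)}

before = build_mapping(num_virtual=8, num_storage=3)  # many-to-one
after = build_mapping(num_virtual=8, num_storage=4)   # after scale-out

# Every storage shard is covered both before and after the scale-out,
# while the set of virtual shards the limiter sees never changes.
print(sorted(set(before.values())), sorted(set(after.values())))
```

Only the middleware's mapping table changes on scale-out; the rate limiters keep addressing the same eight virtual shards.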
In the resource rate-limiting system provided by this embodiment of the present invention, a large number of resource requests are split up by introducing virtual sharding, and the resource requests for the same resource are routed to different storage shards of the distributed storage component, effectively solving the single-point-pressure problem of the distributed storage component. Meanwhile, the virtual shards can serve as the preset middleware, mapping virtual shards to storage shards through the routing algorithm, so that the rate limiter need not concern itself with the effects of scaling the distributed storage component out or in.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the invention described and/or claimed herein.
As shown in Fig. 6, the electronic device 10 includes at least one processor 11, and a memory communicatively connected to the at least one processor 11, such as a read-only memory (ROM) 12 and a random access memory (RAM) 13. The memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the ROM 12 or loaded from the storage unit 18 into the RAM 13. The RAM 13 may also store various programs and data required for the operation of the electronic device 10. The processor 11, the ROM 12, and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to the bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the processor 11 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various specialized artificial intelligence (AI) computing chips, various processors running machine learning model algorithms, digital signal processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the resource rate-limiting method.
In some embodiments, the resource rate-limiting method may be implemented as a computer program tangibly embodied in a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into the RAM 13 and executed by the processor 11, one or more steps of the resource rate-limiting method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the resource rate-limiting method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for implementing the resource rate-limiting method of an embodiment of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. A computer program may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
An embodiment of the present invention also provides a computer-readable storage medium storing computer instructions for causing a processor to execute a resource rate-limiting method, the method comprising:
in response to acquiring a current resource request, determining a target resource identifier corresponding to the current resource request;
determining a plurality of preset storage shards storing rate-limiting data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards;
and sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the rate-limiting data corresponding to the target resource identifier based on the target resource identifier and performs rate-limiting processing on the current resource request based on the rate-limiting data.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solution of the present invention can be achieved; no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (11)

1. A resource throttling method, comprising:
in response to acquiring a current resource request, determining a target resource identifier corresponding to the current resource request;
determining a plurality of preset storage shards for storing the throttling data corresponding to the target resource identifier, and selecting a target storage shard from the plurality of preset storage shards; and
sending the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the throttling data corresponding to the target resource identifier based on the target resource identifier and performs throttling processing on the current resource request based on the throttling data.
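The flow of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and function names are hypothetical, and the token-bucket policy stands in for the unspecified throttling algorithm; the claim does not fix either.

```python
import hashlib
import time

class StorageShard:
    """One preset storage shard holding per-resource throttling data (token buckets)."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum bucket size
        self.buckets = {}         # resource id -> (tokens, last refill timestamp)

    def throttle(self, resource_id: str) -> bool:
        """Return True if the request may pass, False if it is throttled."""
        now = time.monotonic()
        tokens, last = self.buckets.get(resource_id, (self.capacity, now))
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        allowed = tokens >= 1.0
        if allowed:
            tokens -= 1.0
        self.buckets[resource_id] = (tokens, now)
        return allowed

def select_shard(resource_id: str, shards: list) -> StorageShard:
    """Select the target storage shard for a resource identifier by stable hashing."""
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

shards = [StorageShard(rate=5.0, capacity=5.0) for _ in range(4)]
shard = select_shard("order-service", shards)
print(shard.throttle("order-service"))  # first request for a fresh bucket passes -> True
```

Because the hash of the resource identifier is stable, every request for the same resource reaches the same shard, so that shard alone holds the authoritative throttling data for that resource.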
2. The method according to claim 1, wherein each preset storage shard corresponds to at least one preset virtual shard, and the determining a plurality of preset storage shards for storing the throttling data corresponding to the target resource identifier and selecting a target storage shard from the plurality of preset storage shards comprises:
determining each preset virtual shard, and selecting a target virtual shard from the preset virtual shards to obtain an identifier of the target virtual shard; and
determining the number of preset storage shards for storing the throttling data corresponding to the target resource identifier, and determining the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm.
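A minimal sketch of the two-level mapping in claim 2, under stated assumptions: the fixed virtual-shard count, the function names, and the use of CRC32 with modulo reduction are all illustrative; the claim's "preset hash mapping algorithm" may be any stable hash.

```python
import zlib

NUM_VIRTUAL_SHARDS = 1024  # assumed fixed count of preset virtual shards

def pick_virtual_shard(resource_id: str) -> int:
    """Select a target virtual shard and return its identifier."""
    return zlib.crc32(resource_id.encode()) % NUM_VIRTUAL_SHARDS

def map_to_storage_shard(virtual_id: int, num_storage_shards: int) -> int:
    """Hash-map a virtual shard identifier onto one of the preset storage shards."""
    return virtual_id % num_storage_shards

vid = pick_virtual_shard("inventory-api")
print(map_to_storage_shard(vid, num_storage_shards=4))
```

Keeping the virtual-shard count fixed while the physical shard count varies means only the second, cheap mapping changes when storage shards are added or removed.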
3. The method according to claim 1, wherein each preset storage shard corresponds to at least one preset virtual shard, and the determining a plurality of preset storage shards for storing the throttling data corresponding to the target resource identifier and selecting a target storage shard from the plurality of preset storage shards comprises:
determining each preset virtual shard, and selecting a target virtual shard from the preset virtual shards to obtain an identifier of the target virtual shard; and
sending the identifier of the target virtual shard to a preset middleware, so that the preset middleware determines the number of preset storage shards for storing the throttling data corresponding to the target resource identifier, and determines the target storage shard corresponding to the target virtual shard based on the identifier of the target virtual shard, the number of preset storage shards, and a preset hash mapping algorithm;
wherein the preset middleware is the target virtual shard or a master node corresponding to the preset storage shards.
4. The method of claim 1, wherein after the sending the current resource request and the target resource identifier to the target storage shard, the method further comprises:
obtaining a throttling result fed back by the target storage shard, wherein the throttling result comprises a rejection response, a direct response, or a wait response; and
forwarding the current resource request to a resource service server for processing, or returning a request failure signal to the client, based on the throttling result.
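The dispatch step of claim 4 can be sketched as a three-way branch on the shard's feedback. The enum values, the `dispatch` function, and the string return values are hypothetical placeholders for forwarding to the resource service server, queueing for retry, and returning a failure signal.

```python
from enum import Enum

class ThrottleResult(Enum):
    REJECT = "reject"  # rejection response: return a request failure signal
    DIRECT = "direct"  # direct response: forward to the resource service server
    WAIT = "wait"      # wait response: hold and retry after a delay

def dispatch(result: ThrottleResult, request: dict) -> str:
    """Route the current resource request according to the throttling result."""
    if result is ThrottleResult.DIRECT:
        return f"forwarded:{request['id']}"
    if result is ThrottleResult.WAIT:
        return f"queued:{request['id']}"
    return f"failed:{request['id']}"

print(dispatch(ThrottleResult.DIRECT, {"id": "r1"}))  # → forwarded:r1
```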
5. The method of claim 1, wherein after the sending the current resource request and the target resource identifier to the target storage shard, the method further comprises:
acquiring a remaining resource response quota, corresponding to the target resource identifier, fed back by the target storage shard; and
forwarding the current resource request to a resource service server for processing, or returning a request failure signal to the client, based on the remaining resource response quota.
6. The method of claim 1, wherein before the responding to acquiring the current resource request, the method further comprises:
receiving the current resource request sent by a load balancer based on a load balancing algorithm;
wherein the load balancer is configured to perform load-balanced scheduling of resource requests sent by the clients.
7. The method of claim 6, wherein the receiving the current resource request sent by the load balancer based on the load balancing algorithm comprises:
receiving a current resource request, corresponding to a hotspot resource, sent by the load balancer based on the load balancing algorithm, wherein the hotspot resource is determined by the load balancer based on the predicted traffic of each preset resource.
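The hotspot determination in claim 7 can be sketched as a threshold check over predicted per-resource traffic. The function name, the dict-of-QPS input, and the threshold policy are assumptions for illustration; the claim does not specify how predicted traffic is compared.

```python
def hotspot_resources(predicted_traffic: dict, threshold: float) -> set:
    """Mark resources whose predicted traffic exceeds a threshold as hotspots (assumed policy)."""
    return {rid for rid, qps in predicted_traffic.items() if qps > threshold}

print(sorted(hotspot_resources({"a": 1200.0, "b": 30.0, "c": 900.0}, threshold=500.0)))  # → ['a', 'c']
```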
8. A resource throttling apparatus, comprising:
a resource identifier determining module, configured to determine a target resource identifier corresponding to a current resource request in response to acquiring the current resource request;
a storage shard selection module, configured to determine a plurality of preset storage shards for storing the throttling data corresponding to the target resource identifier, and to select a target storage shard from the preset storage shards; and
a resource request sending module, configured to send the current resource request and the target resource identifier to the target storage shard, so that the target storage shard determines the throttling data corresponding to the target resource identifier based on the target resource identifier and performs throttling processing on the current resource request based on the throttling data.
9. A resource throttling system, wherein the system comprises at least one flow limiter and at least two preset storage shards, the preset storage shards comprising a target storage shard; wherein
the flow limiter is configured to send a current resource request and a target resource identifier corresponding to the current resource request to the target storage shard based on the resource throttling method according to any one of claims 1-7; and
the target storage shard is configured to determine the throttling data corresponding to the target resource identifier based on the target resource identifier, and to perform throttling processing on the current resource request based on the throttling data.
10. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the resource throttling method of any one of claims 1-7.
11. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the resource throttling method of any one of claims 1-7.
CN202211170305.8A 2022-09-22 2022-09-22 Resource flow limiting method, device, system, equipment and storage medium Pending CN117793188A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211170305.8A CN117793188A (en) 2022-09-22 2022-09-22 Resource flow limiting method, device, system, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117793188A true CN117793188A (en) 2024-03-29

Family

ID=90382276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211170305.8A Pending CN117793188A (en) 2022-09-22 2022-09-22 Resource flow limiting method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117793188A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination