CN114338816A - Concurrency control method, device, equipment and storage medium under server-free architecture - Google Patents


Info

Publication number
CN114338816A
CN114338816A
Authority
CN
China
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111580666.5A
Other languages
Chinese (zh)
Inventor
马思琦
王宏琦
常率
冯一博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Alibaba China Co Ltd
Priority: CN202111580666.5A
Publication: CN114338816A
Legal status: Pending

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The embodiment of the application provides a concurrency control method, apparatus, device and storage medium under a serverless architecture. The method comprises the following steps: counting, within a current time window, the success and failure results fed back by the management and control node for the service requests corresponding to the same concurrent thread pool, to obtain a statistical result of the same concurrent thread pool, wherein the threads in the same concurrent thread pool correspond one to one to the service requests belonging to the same service request set, and each thread is used to call an interface provided by the management and control node according to its corresponding service request, so that the management and control node allocates a corresponding instance to the service request; and, within the current time window, adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool. The method can reduce instability of the serverless architecture caused by a mismatch between the size threshold of the concurrent thread pool and the actual situation of the back end, thereby improving the stability of the serverless architecture.

Description

Concurrency control method, device, equipment and storage medium under server-free architecture
Technical Field
The present application relates to the field of computer technologies, and in particular, to a concurrency control method, apparatus, device, and storage medium under a server-less architecture.
Background
The Serverless architecture is an architecture in which management services are provided by a third party; it eliminates, for users, most of the need to maintain traditional online servers, and provides clients with services such as function computation, event triggering, and table storage.
A management and control node in the Serverless architecture may create an instance on a compute node to process a service request and thereby provide a service. After receiving a service request, a front-end node in the Serverless architecture may create a corresponding thread for the service request, and the thread requests an instance corresponding to the service request from the management and control node. The front-end node may control the concurrency amount of instances by limiting the size threshold of a concurrent thread pool. Usually, the size threshold of the concurrent thread pool is fixed; however, a fixed threshold does not take the actual situation of the back end into account, resulting in poor stability of the Serverless architecture.
Disclosure of Invention
The embodiment of the application provides a concurrency control method, apparatus, device and storage medium under a serverless architecture, and aims to solve the technical problem of poor stability of the serverless architecture in the prior art.
In a first aspect, an embodiment of the present application provides a concurrency control method in a serverless architecture, where the serverless architecture includes a front-end node and a management and control node, and the method is performed by the front-end node, and the method includes:
counting, within a current time window, the success and failure results fed back by the management and control node for the service requests corresponding to the same concurrent thread pool, to obtain a statistical result of the same concurrent thread pool, wherein the threads in the same concurrent thread pool correspond one to one to the service requests belonging to the same service request set, and each thread is used to call an interface provided by the management and control node according to its corresponding service request, so that the management and control node allocates a corresponding instance to the service request;
and in the current time window, adjusting the size threshold value of the same concurrent thread pool according to the statistical result of the same concurrent thread pool.
In a second aspect, an embodiment of the present application provides a concurrency control apparatus in a server-less architecture, where the server-less architecture includes a front-end node and a management and control node, and the apparatus is applied to the front-end node, and the apparatus includes:
the counting module is used for counting, within a current time window, the success and failure results fed back by the management and control node for the service requests corresponding to the same concurrent thread pool, to obtain the statistical result of the same concurrent thread pool, wherein the threads in the same concurrent thread pool correspond one to one to the service requests belonging to the same service request set, and each thread is used to call an interface provided by the management and control node according to its corresponding service request, so that the management and control node allocates a corresponding instance to the service request;
and the adjusting module is used for adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool in the current time window.
In a third aspect, an embodiment of the present application provides a computer device, including: a memory, a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the method of any of the first aspects.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when executed, implements the method according to any one of the first aspect.
Embodiments of the present application also provide a computer program, which is used to implement the method according to any one of the first aspect when the computer program is executed by a computer.
In the embodiment of the application, the success or failure results of the service requests corresponding to the same concurrent thread pool, fed back by the management and control node within the current time window, are counted to obtain the statistical result of the same concurrent thread pool, and within the current time window the size threshold of the same concurrent thread pool is adjusted according to that statistical result. In this way, the size threshold of the concurrent thread pool is adaptively adjusted according to the success or failure results fed back by the back end of the serverless architecture, so that the threshold takes the actual situation of the back end into account; this reduces the situations in which a mismatch between the size threshold of the concurrent thread pool and the actual situation of the back end makes the serverless architecture unstable, and thus improves the stability of the serverless architecture.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic view of an application scenario of a concurrency control method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a concurrency control method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a handover adjustment phase according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a concurrency control device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the embodiments of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" typically means at least two, without excluding the case of at least one.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in an article or system that includes the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
The concurrency control method provided by the embodiment of the application can be applied to a serverless architecture. As shown in fig. 1, the serverless architecture may include a front-end node 11, a management and control node 12, and a computing node 13. Service requests from clients or servers, for example service requests for a function computation service, may first arrive at the front-end node 11. After receiving a service request, the front-end node 11 may first determine whether a thread corresponding to the service request may be created; if so, it creates the thread, and the thread calls an interface provided by the management and control node 12 to request that the management and control node 12 allocate a corresponding instance to the service request. In response to the thread's request, the management and control node 12 may determine whether an idle instance capable of processing the service request exists on the computing node 13; if so, it may directly allocate the idle instance to the service request; if not, it may create and start an instance for processing the service request on the computing node 13 and allocate the started instance to the service request. After the management and control node 12 has allocated the corresponding instance to the service request, the thread may connect to that instance, so that the thread can pass the metadata (Metadata) of the service request to the instance and the instance can process the service request.
The front-end node 11 may determine whether the thread corresponding to the service request may be created by determining whether the current size of the concurrent thread pool corresponding to the service request set to which the service request belongs reaches a size threshold of the concurrent thread pool. Specifically, when the current size of a concurrent thread pool corresponding to a service request set to which the service request belongs is smaller than the size threshold of the concurrent thread pool, determining that a thread corresponding to the service request can be created; and when the current size of the concurrent thread pool corresponding to the service request set to which the service request belongs is equal to the size threshold of the concurrent thread pool, determining that the thread corresponding to the service request cannot be created.
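The admission check described above can be sketched as follows. This is a minimal illustrative Python sketch, not the patent's implementation; the class and method names are assumptions. A thread for a new service request is admitted only while the pool's current size is below its size threshold:

```python
import threading


class ConcurrentThreadPool:
    """Illustrative front-end admission check for one service request set."""

    def __init__(self, size_threshold: int):
        self.size_threshold = size_threshold  # adjustable per time window
        self.current_size = 0                 # threads currently in the pool
        self._lock = threading.Lock()

    def try_acquire(self) -> bool:
        """Return True if a thread for a new service request may be created."""
        with self._lock:
            if self.current_size < self.size_threshold:
                self.current_size += 1
                return True
            # current size equals the size threshold: the thread cannot be created
            return False

    def release(self) -> None:
        """Called when a request's thread finishes and leaves the pool."""
        with self._lock:
            self.current_size -= 1
```

A pool created with `ConcurrentThreadPool(2)` admits two requests and rejects a third until one of the first two releases its slot.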
Wherein one concurrent thread pool may correspond to one service request set. It should be noted that the granularity of the service request set can be flexibly implemented according to the requirement, and for example, the granularity of the service request set may include a client granularity, or a client + service granularity.
For example, assuming that the granularity of the service request sets includes client granularity, and the clients of the serverless architecture include client a and client b, one service request set may represent a set consisting of the service requests of client a, and another service request set may represent a set consisting of the service requests of client b.
For another example, assuming that the granularity of the service request set includes a client + service granularity, and the function computation service of client a of the serverless architecture includes a computation service for function x, a computation service for function y, and a computation service for function z, one service request set may represent a set consisting of service requests for function x and function y of client a, and another service request set may represent a set consisting of service requests for function z of client a.
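The mapping from a service request to its pool can be sketched as below. This is an assumption-laden illustration: the patent leaves the concrete keying scheme open, and the `client` and `service` request fields are invented for the example:

```python
def pool_key(request: dict, granularity: str) -> tuple:
    """Derive the concurrent-thread-pool key for a service request (illustrative)."""
    if granularity == "client":
        # one pool per client: all of a client's requests share a pool
        return (request["client"],)
    if granularity == "client+service":
        # one pool per (client, service) pair
        return (request["client"], request["service"])
    raise ValueError(f"unknown granularity: {granularity!r}")
```

Under client granularity, requests of client a for functions x and y land in the same pool; under client + service granularity they may be split into separate pools.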
Generally, the size threshold of the concurrent thread pool is fixed; in this way, however, the size threshold does not take the actual situation of the back end into account, resulting in poor stability of the serverless architecture.
In order to solve the technical problem in the prior art that the serverless architecture has poor stability because the size threshold of the concurrent thread pool is fixed and does not consider the actual situation of the back end, as shown in fig. 1, in the embodiment of the present application the success or failure results of the service requests corresponding to the same concurrent thread pool, fed back by the management and control node within the current time window, are counted to obtain the statistical result of the same concurrent thread pool, and within the current time window the size threshold of the same concurrent thread pool is adjusted according to that statistical result. In this way, the size threshold of the concurrent thread pool is adaptively adjusted according to the success or failure results fed back by the back end of the serverless architecture, so that the threshold takes the actual situation of the back end into account; this reduces instability caused by a mismatch between the size threshold of the concurrent thread pool and the actual situation of the back end, and thus improves the stability of the serverless architecture.
It should be understood that, relative to the front-end node 11, the management and control node 12 and the computing node 13 in the serverless architecture may be understood as the back end of the serverless architecture.
It should be noted that the number of the computing nodes 13 in fig. 1, and the number of instances on the computing nodes 13 are only examples.
It should be noted that the concurrency control method provided by the embodiment of the present application can be applied to any type of scenario in which a front-end node in a serverless architecture can control the concurrency amount of an instance by limiting the size threshold of a concurrency thread pool.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 2 is a flowchart illustrating a concurrency control method according to an embodiment of the present application, where an execution subject of the embodiment may be the front-end node 11 in fig. 1. As shown in fig. 2, the method of this embodiment may include:
step 21, counting success or failure results of the service requests corresponding to the same concurrent thread pool fed back by the management and control node in the current time window to obtain a statistical result of the same concurrent thread pool, wherein threads in the same concurrent thread pool correspond to the service requests belonging to the same service request set one by one, and the threads are used for calling an interface provided by the management and control node according to the corresponding service requests so that the management and control node allocates corresponding instances to the service requests;
and step 22, in the current time window, adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool.
In this embodiment of the application, the front-end node may use the concurrent thread pool as a statistical unit to count the success or failure result of the service request corresponding to the same concurrent thread pool fed back by the management and control node in the current time window. The number of concurrent thread pools for which success or failure results need to be counted in the current time window may be one or more.
Illustratively, the success or failure results may include: the success or failure result of the reservation and the success or failure result of the execution, that is, the success or failure result of the management and control node for a service request may be a success of the reservation, a failure of the reservation, a success of the execution or a failure of the execution. The success or failure result of the management and control node for a service request is reservation success, which can indicate that the management and control node successfully allocates an instance for the service request, and the instance allocated for the service request is successfully connected with the thread corresponding to the service request; the success or failure result of the management and control node for a service request is reservation failure, which may indicate that the management and control node fails to allocate an instance to the service request, or that the instance allocated to the service request in the management and control stage is not successfully connected to the thread corresponding to the service request; the success or failure result of the management and control node for a service request is the execution success, which can represent that the instance successfully processes the service request; the success or failure result of the management and control node for a service request is an execution failure, which may indicate that the instance has not successfully processed the service request. Based on this, the failure may include a reservation failure and an execution failure, the reason for the reservation failure may be a backend system error or the reaching of the resource upper limit, and the reason for the execution failure may be a backend system error or a client self error (e.g., a function code error). The backend system error may be, for example, insufficient backend resources, crash of the backend system, or the like.
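The four success-or-failure results above can be sketched as follows. This is a minimal Python sketch; the enum and member names are assumptions, not terms from the patent:

```python
from enum import Enum


class Outcome(Enum):
    """The four results the management and control node may feed back."""
    RESERVE_SUCCESS = "reserve_success"  # instance allocated and connected to the thread
    RESERVE_FAILURE = "reserve_failure"  # allocation or connection failed
    EXECUTE_SUCCESS = "execute_success"  # instance processed the request
    EXECUTE_FAILURE = "execute_failure"  # instance failed to process the request


def is_failure(outcome: Outcome) -> bool:
    """A failure is either a reservation failure or an execution failure."""
    return outcome in (Outcome.RESERVE_FAILURE, Outcome.EXECUTE_FAILURE)
```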
When the management and control node feeds back "reaching the upper limit of resources" for a service request corresponding to a concurrent thread pool, this may indicate that the number of instances corresponding to the concurrent thread pool on the compute node 13 has reached its upper limit. Taking the granularity of the service request set as the client granularity and the upper limit of the number of instances of a certain client as 300 as an example: if the management and control node receives a request to create a corresponding instance for a service request of the client when the number of the client's instances has already reached 300 and all 300 instances are non-idle, the management and control node may return "reaching the upper limit of resources" as the reason for the reservation failure.
In the embodiment of the present application, the statistical result of a concurrent thread pool may be used to characterize the failure rate of that pool. Illustratively, counting the success and failure results of the service requests corresponding to the same concurrent thread pool fed back by the management and control node within the current time window yields the number of successes and the number of failures for that pool, and the failure rate may be computed as: failure rate = number of failures / (number of failures + number of successes). Alternatively, the counting may directly yield the failure rate of the concurrent thread pool.
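The per-window counting and the failure-rate formula above can be sketched as follows (class and method names are assumptions; a sketch, not the patent's implementation):

```python
class WindowStats:
    """Counts success/failure results for one pool within one time window."""

    def __init__(self):
        self.successes = 0
        self.failures = 0

    def record(self, success: bool) -> None:
        """Record one success-or-failure result fed back within the window."""
        if success:
            self.successes += 1
        else:
            self.failures += 1

    def failure_rate(self) -> float:
        """failure rate = failures / (failures + successes); 0.0 with no samples."""
        total = self.successes + self.failures
        return self.failures / total if total else 0.0
```

Recording three successes and one failure within a window, for example, yields a failure rate of 0.25 for that pool in that window.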
Optionally, during the counting, failures caused by errors of the architecture itself may be counted while failures caused by the client's own errors are excluded, so that the statistical result accurately reflects the actual situation of the back end and the adaptive adjustment of the size threshold of the concurrent thread pool better matches that situation. The backend system errors and the reaching of the resource upper limit mentioned above can be understood as errors of the architecture itself, and the function code errors mentioned above as the client's own errors.
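A minimal sketch of this filtering, assuming string-valued failure causes (the cause names are illustrative, not terms from the patent):

```python
# Architecture-side causes feed the statistics; client-side causes do not.
ARCHITECTURE_CAUSES = {"backend_system_error", "resource_upper_limit"}


def counted_in_statistics(failure_cause: str) -> bool:
    """Only architecture-side failures feed the threshold adjustment."""
    return failure_cause in ARCHITECTURE_CAUSES
```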
In the embodiment of the application, the size of the time window can be chosen flexibly according to requirements. The smaller the time window, the more timely the adjustment of the size threshold of the concurrent thread pool, and the faster that threshold can change. In one embodiment, the size of the time window may be related to the time the back end needs to start instances; optionally, the time window may be smaller than the back-end instance start-up time. Considering that, in practical application, the management and control node may create and start multiple instances in response to a front-end node's request for an instance for a certain service request, keeping the time window smaller than the instance start-up time avoids the situation in which an overly large window causes additionally started instances to be put to use only later, leaving instance resources idle.
In this embodiment of the present application, after counting the success or failure result of the service request corresponding to the same concurrency thread pool fed back by the management and control node in the current time window and obtaining the statistical result of the same concurrency thread pool, the size threshold of the same concurrency thread pool may be adjusted according to the statistical result of the same concurrency thread pool in the current time window.
For example, assume the timeline is divided, from front to back, into time window t1, time window t2, time window t3, and so on, and that the concurrent thread pools include concurrent thread pool 1 and concurrent thread pool 2. When the current time is in time window t1, the success or failure results of the service requests corresponding to concurrent thread pool 1 and concurrent thread pool 2 fed back by the management and control node within t1 may be counted separately, to obtain within t1 a statistical result of concurrent thread pool 1 (denoted statistical result a1) and a statistical result of concurrent thread pool 2 (denoted statistical result b1); the size threshold of concurrent thread pool 1 is then adjusted according to statistical result a1, and the size threshold of concurrent thread pool 2 according to statistical result b1. When the current time is in time window t2, the same procedure yields statistical results a2 and b2 for the two pools within t2, and the two size thresholds are adjusted according to a2 and b2, respectively. When the current time is in time window t3, statistical results a3 and b3 are obtained within t3 and the size thresholds are adjusted accordingly; and so on for subsequent windows.
The adjusting process of the size threshold of a concurrency thread pool may specifically include: the size threshold of the concurrent thread pool is updated or kept unchanged. The updating of the size threshold of the concurrent thread pool may specifically be: the size threshold of the concurrent thread pool is increased or decreased.
Optionally, the adjustment processing of the size threshold of the concurrent thread pool may be performed according to a failure rate represented by a statistical result of the concurrent thread pool, and based on this, in an embodiment, the step 22 may specifically include: and adjusting the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool.
Further optionally, the size threshold of the concurrent thread pool may be adjusted by comparing the size relationship between the failure rate of the concurrent thread pool and the failure rate threshold, and based on this, in an embodiment, the adjusting the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool may specifically include: and when the failure rate of the same concurrent thread pool is greater than or equal to a first failure rate threshold value, reducing the size threshold value of the same concurrent thread pool.
The first failure rate threshold can be set flexibly according to requirements: the smaller it is, the more timely the adjustment of the size threshold of the concurrent thread pool, and the faster that threshold can change. A failure rate of a concurrent thread pool greater than or equal to the first failure rate threshold may indicate that the management and control node is feeding back many failures for the service requests corresponding to the pool, and that the probability that the back end cannot operate stably is high. Reducing the size threshold of the concurrent thread pool when its failure rate is greater than or equal to the first failure rate threshold reduces the probability that an overly large size threshold leads to many failure feedbacks and thus to instability of the serverless architecture, thereby improving the stability of the serverless architecture.
Further, the adjusting the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool may further include: when the failure rate of the same concurrent thread pool is less than or equal to a second failure rate threshold and the ratio of the current number of threads in the same concurrent thread pool to the size threshold of the same concurrent thread pool is greater than or equal to a first ratio threshold, increasing the size threshold of the same concurrent thread pool.
The second failure rate threshold is smaller than the first failure rate threshold. A failure rate of a concurrent thread pool less than or equal to the second failure rate threshold may indicate that the management and control node is feeding back very few failures for the service requests corresponding to the pool, and that the probability that the back end cannot operate stably is very low; for example, the second failure rate threshold may be 0. A ratio of the current number of threads in a concurrent thread pool to the size threshold of the pool greater than or equal to the first ratio threshold may indicate that the current number of threads has reached the maximum allowed concurrency of the pool, and that the size threshold of the pool may be too small; for example, the first ratio threshold may be 1. When the failure rate of a concurrent thread pool is less than or equal to the second failure rate threshold and the ratio of the current number of threads to the size threshold is greater than or equal to the first ratio threshold, increasing the size threshold of the concurrent thread pool makes it possible to satisfy the client's traffic demand as far as possible while keeping the back end as stable as possible.
For example, the size threshold of the concurrent thread pool may be reduced in a multiplicative reduction manner, and based on this, in an embodiment, the reducing the size threshold of the same concurrent thread pool may specifically include: multiplying a size threshold of the same concurrency thread pool by a first coefficient to reduce the size threshold of the same concurrency thread pool, the first coefficient being greater than 0 and less than 1. It should be noted that, when the result of multiplying the size threshold of the concurrent thread pool by the first coefficient is a non-integer, the updated size threshold of the concurrent thread pool may be obtained by rounding up or rounding down the result of the multiplication.
For example, the size threshold of the concurrent thread pool may be increased in an additive growth manner, and based on this, in an embodiment, the increasing the size threshold of the same concurrent thread pool may specifically include: adding a first number to the size threshold of the same concurrent thread pool to increase the size threshold of the same concurrent thread pool, wherein the first number is greater than or equal to 1.
Taking the first failure rate threshold as 0.05, the second failure rate threshold as 0, the first coefficient as α, and the first number as β as an example, the relationship between the size threshold of the concurrency thread pool before the adjustment processing (referred to as the current size threshold) and the size threshold of the concurrency thread pool after the adjustment processing (referred to as the subsequent size threshold) can be expressed by the following formula (1).
subsequent size threshold =
current size threshold × α (rounded up or down to an integer), when the failure rate of the concurrency thread pool is greater than or equal to 0.05;
current size threshold + β, when the failure rate of the concurrency thread pool is equal to 0 and the ratio of the current thread quantity to the current size threshold is greater than or equal to 1;
current size threshold, otherwise. (1)
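The piecewise adjustment that formula (1) describes can be sketched in Python as follows. The example thresholds 0.05, 0 and 1 come from the text; the function name, the concrete values of α and β, and the choice of rounding down are illustrative assumptions.

```python
import math

# Illustrative constants; only the three threshold values mirror the text.
ALPHA = 0.5                  # first coefficient, 0 < ALPHA < 1 (multiplicative decrease)
BETA = 1                     # first number, >= 1 (additive increase)
FIRST_FAILURE_RATE = 0.05    # first failure rate threshold
SECOND_FAILURE_RATE = 0.0    # second failure rate threshold
FIRST_RATIO = 1.0            # first ratio threshold

def adjust_threshold(current_threshold, failure_rate, current_threads):
    """Return the size threshold for the next time window per formula (1)."""
    if failure_rate >= FIRST_FAILURE_RATE:
        # Multiplicative decrease; round down here to keep the threshold an integer.
        return max(1, math.floor(current_threshold * ALPHA))
    if (failure_rate <= SECOND_FAILURE_RATE
            and current_threads / current_threshold >= FIRST_RATIO):
        # Additive increase when the pool is saturated and error-free.
        return current_threshold + BETA
    return current_threshold  # otherwise keep the threshold unchanged

print(adjust_threshold(100, 0.10, 80))   # high failure rate -> 50
print(adjust_threshold(100, 0.0, 100))   # saturated, no errors -> 101
print(adjust_threshold(100, 0.0, 40))    # neither condition -> 100
```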
It should be noted that, when the adjustment processing of the size threshold of the concurrent thread pool is performed according to reservation failures whose reason, as fed back by the management and control node, is "reaching the upper limit of the resource", if the management and control node feeds back many such reservation failures for the service requests corresponding to one concurrent thread pool, the front-end node may be triggered to reduce the size threshold of the concurrent thread pool, so that the size threshold of the concurrent thread pool converges to the "upper limit of the resource".
In the embodiment of the present application, when the size threshold of the concurrent thread pool is adjusted according to the statistical result of the concurrent thread pool, the size threshold may be adjusted slowly to improve the robustness of the system, so that the stability of the serverless architecture can be further improved. In this case, in order to meet the increased traffic demand of the client as soon as possible, before the size threshold of the same concurrent thread pool is adjusted according to the statistical result of the same concurrent thread pool, the size threshold of the same concurrent thread pool may first be increased for a period of time. Based on this, before step 22, the following steps A and B may also be performed.
Step A, in a current time window, increasing the size threshold of the same concurrent thread pool according to the current thread number of the same concurrent thread pool and the size threshold of the same concurrent thread pool;
step B, if a failure result fed back by the management and control node for a service request corresponding to the same concurrent thread pool is obtained, ending a first adjustment phase and entering a second adjustment phase for the same concurrent thread pool; the first adjustment phase refers to the phase of increasing the size threshold of the same concurrent thread pool, and the second adjustment phase refers to the phase of adjusting the size threshold of the same concurrent thread pool.
Optionally, if a failure result caused by an error of the architecture is obtained for a service request corresponding to the concurrency thread pool and fed back by the management and control node, the first adjustment phase may be ended and the second adjustment phase entered for the concurrency thread pool, so that the moment of entering the second adjustment phase accurately reflects the actual situation of the back end, and the adaptive adjustment of the size threshold of the concurrency thread pool can better adapt to the actual situation of the back end.
It should be understood that, before ending the first adjustment phase for a pool of concurrent threads, the adjustment phase in which the pool of concurrent threads is located may be the first adjustment phase; after the first adjustment phase is ended and the second adjustment phase is entered for the concurrent thread pool, the adjustment phase in which the concurrent thread pool is located may be the second adjustment phase. At the same time, the adjustment stage of a concurrency thread pool can be a first adjustment stage or a second adjustment stage.
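Steps A and B above can be sketched as a small phase state machine; the class, attribute, and method names below are illustrative assumptions rather than the patent's own interface.

```python
from enum import Enum

class Phase(Enum):
    FIRST = 1    # slow start: only grows the size threshold
    SECOND = 2   # steady state: adjusts the threshold from window statistics

class PoolState:
    """Tracks which adjustment phase a concurrent thread pool is in."""
    def __init__(self):
        self.phase = Phase.FIRST

    def on_feedback(self, succeeded: bool):
        # Step B: the first failure result fed back by the management and
        # control node ends the first adjustment phase.
        if not succeeded and self.phase is Phase.FIRST:
            self.phase = Phase.SECOND

state = PoolState()
state.on_feedback(True)
print(state.phase)        # still Phase.FIRST
state.on_feedback(False)
print(state.phase)        # Phase.SECOND
```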
For example, assume that a time window t11, a time window t12, a time window t13, a time window t14, and a time window t15 … are divided in order from front to back, and the concurrent thread pools include a concurrent thread pool 1. When the current time is in the time window t11, the size threshold of the concurrent thread pool 1 may be increased. If no failure result of a service request corresponding to the concurrent thread pool 1 fed back by the management and control node is obtained in the time window t11, the size threshold of the concurrent thread pool 1 may be increased again when the current time is in the time window t12; if no such failure result is obtained in the time window t12 either, the size threshold of the concurrent thread pool 1 may be increased once more when the current time is in the time window t13. If a failure result of a service request corresponding to the concurrent thread pool 1 fed back by the management and control node is obtained in the time window t13, the first adjustment phase is ended and the second adjustment phase is entered for the concurrent thread pool 1. Then, when the current time is in the time window t14, statistics may be performed on the success or failure results of the service requests corresponding to the concurrent thread pool 1 fed back by the management and control node in the time window t14 to obtain a statistical result of the concurrent thread pool 1 in the time window t14 (denoted as statistical result A4), and the size threshold of the concurrent thread pool 1 is then adjusted according to the statistical result A4; next, when the current time is in the time window t15, statistics may be performed in the same way to obtain a statistical result of the concurrent thread pool 1 in the time window t15 (denoted as statistical result A5), and the size threshold of the concurrent thread pool 1 is then adjusted according to the statistical result A5; and so on.
It should also be understood that the adjustment phase at which different concurrent thread pools are located may be the same or different at the same time. For example, at a certain time, the adjustment phase in which the concurrent thread pool 1 is located may be a first adjustment phase, and the adjustment phase in which the concurrent thread pool 2 is located may be a second adjustment phase.
In this embodiment of the present application, the first adjustment phase may be understood as a Slow Start phase, the second adjustment phase may be understood as a Steady State phase, and the increase of the size threshold of the concurrent thread pool in the first adjustment phase may be greater than the increase of the size threshold of the concurrent thread pool in the second adjustment phase.
The increasing process of the size threshold of the concurrency thread pool may specifically include: the size threshold of the concurrent thread pool is increased or kept unchanged.
For example, the size threshold of the concurrent thread pool may be increased according to a ratio relationship between the current number of threads of the concurrent thread pool and the size threshold of the concurrent thread pool. Based on this, in an embodiment, step a may specifically include: and judging whether the ratio of the current thread quantity of the same concurrent thread pool to the size threshold of the same concurrent thread pool is greater than or equal to a first ratio threshold, and if so, increasing the size threshold of the same concurrent thread pool.
The ratio of the current thread quantity of a concurrent thread pool to the size threshold of the concurrent thread pool being greater than or equal to the first ratio threshold may indicate that the current thread quantity is large relative to the size threshold of the concurrent thread pool, that is, the size threshold of the concurrent thread pool may be too small. Before a failure result of a service request corresponding to a concurrent thread pool is obtained, if the ratio of the current thread quantity of the concurrent thread pool to the size threshold of the concurrent thread pool is greater than or equal to the first ratio threshold, the size threshold of the concurrent thread pool is increased, so that the traffic demand of the client can be met quickly with a relatively large increase step, on the basis of ensuring the stability of the back end.
For example, the size threshold of the concurrent thread pool may be increased in an additive increase manner, and based on this, in an embodiment, the increasing the size threshold of the same concurrent thread pool in the first adjustment phase specifically may include: adding a second number to the size threshold of the same concurrent thread pool to increase the size threshold of the same concurrent thread pool, the second number being greater than or equal to 1.
The second number may be greater than the first number, so that the speed of increasing the size threshold of the concurrent thread pool in the slow start phase is greater than that in the steady state phase; in this way, the size threshold of the concurrent thread pool is increased rapidly in the slow start phase and slowly in the steady state phase. The second number may be related to, and in one embodiment may be equal to, the average number of instances started by the back end in a single start, so that the rate of increase of the size threshold of the concurrent thread pool can match the rate of increase of the instances.
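Step A in the slow start phase can be sketched as follows; the constants (a first ratio threshold of 1 and a second number of 4) and the function name are assumptions for illustration, not values fixed by the text.

```python
FIRST_RATIO = 1.0    # first ratio threshold
SECOND_NUMBER = 4    # e.g. the back end starts 4 instances at a time on average

def slow_start_increase(current_threads, size_threshold):
    """Step A: grow the size threshold while the pool is saturated."""
    if current_threads / size_threshold >= FIRST_RATIO:
        return size_threshold + SECOND_NUMBER
    return size_threshold  # otherwise keep the threshold unchanged

print(slow_start_increase(10, 10))  # saturated -> 14
print(slow_start_increase(3, 10))   # under-used -> 10
```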
Optionally, when the difference between the size threshold of the concurrent thread pool and the current thread quantity of the concurrent thread pool is excessively large, a sudden increase of client traffic is prone to impact the back end, which makes the stability of the serverless architecture poor. To avoid this problem, when the difference between the size threshold of the concurrent thread pool and the current thread quantity of the concurrent thread pool is large, the size threshold of the concurrent thread pool may be further reduced. Based on this, in one embodiment, the method provided by the embodiment of the present application may further include the following step C and step D.
Step C, if the first adjusting stage and the second adjusting stage do not trigger updating of the size threshold of the same concurrent thread pool in the current time window, judging whether the ratio of the current thread quantity of the same concurrent thread pool to the size threshold of the same concurrent thread pool is smaller than or equal to a second ratio threshold which is smaller than the first ratio threshold;
and D, if the judgment results of the time windows with the preset number are all smaller than or equal to the second ratio threshold, reducing the size threshold of the same concurrent thread pool.
There may be two cases when neither the first adjustment phase nor the second adjustment phase triggers an update of the size threshold of a concurrent thread pool within the current time window: in one case, the service request amount is stable, and the size threshold of the concurrent thread pool is not too large; in the other case, the service request amount is in a declining stage, and the size threshold of the concurrent thread pool is too large. In the latter case, when the traffic of a client suddenly increases later, the front-end node creates many threads in a short time, so that many threads all request instances from the management and control node in a short time; the back end is therefore easily impacted, and the stability of the serverless architecture is poor. Thus, for the latter case, the size threshold of the concurrent thread pool may be reduced to further improve the stability of the serverless architecture.
The ratio of the current thread quantity of a concurrent thread pool to the size threshold of the concurrent thread pool is less than or equal to the second ratio threshold, which can indicate that the condition that the size threshold of the concurrent thread pool is too large does not exist, that is, the probability of impact on the back end caused by sudden increase of service requests corresponding to the concurrent thread pool is low; the ratio of the current thread number of a concurrent thread pool to the size threshold of the concurrent thread pool is greater than the second ratio threshold, which may indicate that there is a situation where the size threshold of the concurrent thread pool is too large, that is, the probability of causing impact on the back end due to a sudden increase of service requests corresponding to the concurrent thread pool is high.
The preset number may be related to the size of the time window and the maximum duration (e.g., 5 minutes) for which an idle instance is allowed to exist at the back end, so as to ensure that the size threshold of the concurrent thread pool is reduced only after the idle instances corresponding to the concurrent thread pool have been released at the back end. This avoids the situation in which the size threshold of the concurrent thread pool is reduced prematurely, so that the resources of the still-existing idle instances cannot be used and the instance resources sit idle.
For example, in step D, the reducing of the size threshold of the same concurrent thread pool may specifically include: multiplying the size threshold of the same concurrency thread pool by a second coefficient to reduce the size threshold of the same concurrency thread pool, the second coefficient being greater than 0 and less than 1. The second coefficient may be smaller than the first coefficient, so that the speed of reducing the size threshold of the concurrent thread pool here may be greater than that in the steady state phase, achieving a rapid reduction of the size threshold of the concurrent thread pool. It should be noted that, when the result of multiplying the size threshold of the concurrent thread pool by the second coefficient is a non-integer, the updated size threshold of the concurrent thread pool may be obtained by rounding up or rounding down the result of the multiplication.
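Steps C and D can be sketched as the following window-end check. The derivation of the preset number from the window length and the maximum idle-instance lifetime follows the text; all constants, names, and the dict-based state are illustrative assumptions.

```python
import math

SECOND_RATIO = 0.5               # second ratio threshold, smaller than the first
SECOND_COEFFICIENT = 0.25        # second coefficient, smaller than the first one
WINDOW_SECONDS = 60              # assumed time window length
MAX_IDLE_SECONDS = 300           # back end keeps idle instances for 5 minutes
PRESET_WINDOWS = MAX_IDLE_SECONDS // WINDOW_SECONDS  # preset number: 5 windows

def end_of_window(state, current_threads, size_threshold):
    """Step C counts consecutive low-usage windows; step D shrinks the threshold."""
    if current_threads / size_threshold <= SECOND_RATIO:
        state["low_windows"] += 1
    else:
        state["low_windows"] = 0  # the streak of low-usage windows is broken
    if state["low_windows"] >= PRESET_WINDOWS:
        state["low_windows"] = 0
        # Multiplicative shrink, rounding down to keep the threshold an integer.
        return max(1, math.floor(size_threshold * SECOND_COEFFICIENT))
    return size_threshold

state = {"low_windows": 0}
threshold = 100
for _ in range(5):                     # five consecutive low-usage windows
    threshold = end_of_window(state, 10, threshold)
print(threshold)                       # shrunk to 25 after the fifth window
```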
Further optionally, the method provided in this embodiment may further include the following step E: if the judgment results of the preset number of time windows are all smaller than or equal to the second ratio threshold, and the current adjustment phase of the same concurrent thread pool is the second adjustment phase, the second adjustment phase may be ended and the first adjustment phase entered for the same concurrent thread pool. In this way, when the service requests corresponding to the concurrent thread pool increase, the increased traffic demand can be met as soon as possible.
A schematic diagram of the switching between the first adjustment phase and the second adjustment phase may be as shown in fig. 3. Referring to fig. 3, for a certain concurrent thread pool, the first adjustment phase 31 may be entered after the start. In the first adjustment phase 31 of the concurrent thread pool, the size threshold of the concurrent thread pool may be increased within the current time window by executing step A (corresponding to arrow ① in fig. 3); if step A keeps the size threshold of the concurrent thread pool unchanged, the size threshold of the concurrent thread pool may be decreased within the current time window by executing step C and step D (corresponding to arrow ② in fig. 3). In addition, in the first adjustment phase 31 of the concurrent thread pool, executing step B can end the first adjustment phase 31 and enter the second adjustment phase 32 for the concurrent thread pool (corresponding to arrow ③ in fig. 3).
In the second adjustment phase 32 of the concurrent thread pool, the size threshold of the concurrent thread pool may be adjusted within the current time window by executing step 21 and step 22 (corresponding to arrow ④ in fig. 3); if step 21 and step 22 keep the size threshold of the concurrent thread pool unchanged, the size threshold of the concurrent thread pool may be decreased within the current time window by executing step C and step D (corresponding to arrow ⑤ in fig. 3); and the second adjustment phase of the concurrent thread pool may be ended, and the first adjustment phase of the concurrent thread pool entered, by executing step E (corresponding to arrow ⑥ in fig. 3).
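The phase switching of fig. 3 can be sketched end to end as a compact state machine. This sketch combines steps A through E; for brevity it omits the steady-state multiplicative decrease on a high failure rate, and all constants and names are assumptions layered over the text.

```python
import math

FIRST_RATIO, SECOND_RATIO = 1.0, 0.5
BETA, SECOND_NUMBER, SECOND_COEFF, PRESET_WINDOWS = 1, 4, 0.25, 3

class Pool:
    def __init__(self, threshold):
        self.threshold = threshold
        self.slow_start = True   # first adjustment phase (slow start)
        self.low = 0             # consecutive low-usage windows (steps C and D)

    def end_window(self, threads, saw_failure):
        if self.slow_start and saw_failure:
            self.slow_start = False          # step B: enter the steady state
            self.low = 0
            return
        ratio = threads / self.threshold
        if ratio >= FIRST_RATIO:
            # Step A in slow start; the additive increase in the steady state.
            self.threshold += SECOND_NUMBER if self.slow_start else BETA
            self.low = 0
        elif ratio <= SECOND_RATIO:          # step C: count low-usage windows
            self.low += 1
            if self.low >= PRESET_WINDOWS:   # step D: multiplicative shrink
                self.threshold = max(1, math.floor(self.threshold * SECOND_COEFF))
                self.low = 0
                self.slow_start = True       # step E: back to slow start
        else:
            self.low = 0                     # the streak is broken

pool = Pool(10)
pool.end_window(10, False)      # slow start: threshold grows to 14
pool.end_window(14, True)       # first failure: switch to the steady state
for _ in range(3):
    pool.end_window(2, False)   # three low windows: shrink, re-enter slow start
print(pool.threshold, pool.slow_start)   # 3 True
```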
In the concurrency control method under the serverless architecture provided by this embodiment, the success or failure results of the service requests corresponding to the same concurrency thread pool fed back by the management and control node in the current time window are counted to obtain the statistical result of the same concurrency thread pool, and the size threshold of the same concurrency thread pool is adjusted in the current time window according to the statistical result of the same concurrency thread pool. In this way, in a serverless architecture, the size threshold of the concurrent thread pool is adaptively adjusted according to the success or failure results of the back end for the service requests, so that the size threshold of the concurrent thread pool takes the actual situation of the back end into consideration. This can reduce the situations in which the serverless architecture is unstable because the size threshold of the concurrent thread pool does not match the actual situation of the back end, thereby improving the stability of the serverless architecture.
Fig. 4 is a schematic structural diagram of a concurrency control device under a server-less architecture according to an embodiment of the present application; referring to fig. 4, the present embodiment provides an apparatus, which may perform the method provided by the above embodiment, and specifically, the apparatus may include:
a counting module 41, configured to count success/failure results of service requests corresponding to the same concurrent thread pool fed back by the management and control node in a current time window, to obtain a counting result of the same concurrent thread pool, where threads in the same concurrent thread pool correspond to service requests belonging to the same service request set one to one, and the threads are configured to call an interface provided by the management and control node according to the corresponding service requests, so that the management and control node allocates corresponding instances to the service requests;
and an adjusting module 42, configured to adjust the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool in the current time window.
Optionally, the adjusting module 42 is configured to adjust the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool, and specifically includes: and adjusting the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool.
Optionally, the adjusting module 42 is configured to adjust the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool, and specifically includes:
when the failure rate of the same concurrent thread pool is greater than or equal to a first failure rate threshold, reducing the size threshold of the same concurrent thread pool;
when the failure rate of the same concurrent thread pool is smaller than or equal to a second failure rate threshold value, and the ratio of the current thread quantity of the same concurrent thread pool to the size threshold value of the same concurrent thread pool is larger than or equal to a first ratio threshold value, increasing the size threshold value of the same concurrent thread pool; wherein the first failure rate threshold is greater than the second failure rate threshold.
Optionally, the adjusting module 42 is configured to reduce the size threshold of the same concurrent thread pool, and specifically includes: multiplying a size threshold of the same concurrency thread pool by a first coefficient to reduce the size threshold of the same concurrency thread pool, the first coefficient being greater than 0 and less than 1.
Optionally, the adjusting module 42 is configured to increase the size threshold of the same concurrent thread pool, and specifically includes: adding a first number to the size threshold of the same concurrent thread pool to increase the size threshold of the same concurrent thread pool, wherein the first number is greater than or equal to 1.
Optionally, the apparatus further comprises an increasing module, configured to:
in the current time window, increasing the size threshold value of the same concurrent thread pool according to the current thread quantity of the same concurrent thread pool and the size threshold value of the same concurrent thread pool;
if a failure result of the service request corresponding to the same concurrent thread pool fed back by the control node is obtained, ending a first adjusting stage and entering a second adjusting stage aiming at the same concurrent thread pool; the first adjustment phase refers to the phase of increasing the size threshold of the same concurrent thread pool, and the second adjustment phase refers to the phase of adjusting the size threshold of the same concurrent thread pool.
Optionally, the increasing module is configured to increase the size threshold of the same concurrent thread pool according to the current thread number of the same concurrent thread pool and the size threshold of the same concurrent thread pool, and specifically includes: and judging whether the ratio of the current thread quantity of the same concurrent thread pool to the size threshold of the same concurrent thread pool is greater than or equal to a first ratio threshold, and if so, increasing the size threshold of the same concurrent thread pool.
Optionally, the increasing module is configured to increase the size threshold of the same concurrent thread pool, and includes: adding a second number to the size threshold of the same concurrent thread pool to increase the size threshold of the same concurrent thread pool, the second number being greater than or equal to 1.
Optionally, the apparatus further comprises a reduction module configured to:
if the first adjusting stage and the second adjusting stage do not trigger updating of the size threshold of the same concurrent thread pool in the current time window, judging whether the ratio of the current thread quantity of the same concurrent thread pool to the size threshold of the same concurrent thread pool is smaller than or equal to a second ratio threshold which is smaller than the first ratio threshold;
and if the judgment results of the time windows with the preset number are all smaller than or equal to the second ratio threshold, reducing the size threshold of the same concurrent thread pool.
Optionally, the reducing module is configured to reduce the size threshold of the same concurrent thread pool, and specifically includes: multiplying a size threshold of the same concurrency thread pool by a second coefficient to reduce the size threshold of the same concurrency thread pool, the second coefficient being greater than 0 and less than 1.
Optionally, the reducing module is further configured to: and if the judgment results of the time windows with the preset number are all smaller than or equal to the second ratio threshold value and the current adjustment stage of the same concurrent thread pool is the second adjustment stage, ending the second adjustment stage and entering the first adjustment stage for the same concurrent thread pool.
The apparatus shown in fig. 4 can execute the method provided by the embodiment shown in fig. 2, and reference may be made to the related description of the embodiment shown in fig. 2 for a part not described in detail in this embodiment. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 2, and are not described herein again.
In one possible implementation, the structure of the apparatus shown in FIG. 4 may be implemented as a computer device. As shown in fig. 5, the computer apparatus may include: a processor 51 and a memory 52. Wherein the memory 52 is used for storing a program for supporting a computer device to execute the method provided by the embodiment shown in fig. 2, and the processor 51 is configured for executing the program stored in the memory 52.
The program comprises one or more computer instructions which, when executed by the processor 51, are capable of performing the steps of:
counting success and failure results of service requests, corresponding to the same concurrency thread pool, fed back by the control node in a current time window to obtain a statistical result of the same concurrency thread pool, wherein threads in the same concurrency thread pool correspond to the service requests belonging to the same service request set one by one, and the threads are used for calling an interface provided by the control node according to the corresponding service requests so that the control node allocates corresponding instances for the service requests;
and in the current time window, adjusting the size threshold value of the same concurrent thread pool according to the statistical result of the same concurrent thread pool.
Optionally, the processor 51 is further configured to perform all or part of the steps in the foregoing embodiment shown in fig. 2.
The computer device may further include a communication interface 53 for the computer device to communicate with other devices or a communication network.
In addition, the present application provides a computer storage medium for storing computer software instructions for a computer device, which includes a program for executing the method in the method embodiment shown in fig. 2.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement such a technique without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, and of course, can also be implemented by a combination of hardware and software. With this understanding in mind, the above-described technical solutions and/or portions thereof that contribute to the prior art may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media having computer-usable program code embodied therein (including but not limited to disk storage, CD-ROM, optical storage, etc.).
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. A method of concurrency control in a serverless architecture comprising a front-end node and a control node, the method being performed by the front-end node, the method comprising:
counting success and failure results, fed back by the control node within a current time window, of service requests corresponding to a same concurrent thread pool, to obtain a statistical result of the same concurrent thread pool, wherein threads in the same concurrent thread pool correspond one-to-one to service requests belonging to a same service request set, and each thread is used to call an interface provided by the control node according to its corresponding service request, so that the control node allocates a corresponding instance to the service request;
and within the current time window, adjusting a size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool.
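The counting step described in claim 1 can be pictured as a per-pool counter that accumulates success and failure results inside a fixed time window. The sketch below is a minimal illustration under assumed names and window mechanics; the claim does not prescribe this design.

```python
# Illustrative per-pool success/failure counter for a fixed time window,
# as described in claim 1. Class and method names are assumptions.
import threading
import time

class WindowCounter:
    """Counts success/failure results for one concurrent thread pool
    within the current time window and reports the failure rate."""

    def __init__(self, window_seconds: float = 10.0):
        self.window_seconds = window_seconds
        self._lock = threading.Lock()
        self._window_start = time.monotonic()
        self.successes = 0
        self.failures = 0

    def _roll_window(self) -> None:
        # Reset the counters once the current window has elapsed.
        now = time.monotonic()
        if now - self._window_start >= self.window_seconds:
            self._window_start = now
            self.successes = 0
            self.failures = 0

    def record(self, success: bool) -> None:
        # Called once per service-request result fed back by the control node.
        with self._lock:
            self._roll_window()
            if success:
                self.successes += 1
            else:
                self.failures += 1

    def failure_rate(self) -> float:
        with self._lock:
            total = self.successes + self.failures
            return self.failures / total if total else 0.0
```

The failure rate produced here is the "statistical result" that the adjustment step consumes at the end of each window.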
2. The method according to claim 1, wherein adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool comprises:
adjusting the size threshold of the same concurrent thread pool according to a failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool.
3. The method according to claim 2, wherein adjusting the size threshold of the same concurrent thread pool according to the failure rate of the same concurrent thread pool represented by the statistical result of the same concurrent thread pool comprises:
when the failure rate of the same concurrent thread pool is greater than or equal to a first failure rate threshold, reducing the size threshold of the same concurrent thread pool; and
when the failure rate of the same concurrent thread pool is less than or equal to a second failure rate threshold and the ratio of the current number of threads of the same concurrent thread pool to the size threshold of the same concurrent thread pool is greater than or equal to a first ratio threshold, increasing the size threshold of the same concurrent thread pool; wherein the first failure rate threshold is greater than the second failure rate threshold.
4. The method of claim 3, wherein reducing the size threshold of the same concurrent thread pool comprises: multiplying the size threshold of the same concurrent thread pool by a first coefficient to reduce the size threshold, the first coefficient being greater than 0 and less than 1.
5. The method of claim 3, wherein increasing the size threshold of the same concurrent thread pool comprises: adding a first number to the size threshold of the same concurrent thread pool to increase the size threshold, the first number being greater than or equal to 1.
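The rule in claims 3-5 combines multiplicative decrease (when the failure rate is high) with additive increase (when the failure rate is low and the pool is nearly full), similar in spirit to AIMD congestion control. A minimal sketch of that rule follows; the function name and all concrete threshold and coefficient values are illustrative assumptions, not values prescribed by the claims.

```python
# Illustrative sketch of the size-threshold adjustment in claims 3-5.
# All numeric defaults are assumptions chosen for the example.
def adjust_size_threshold(size_threshold: int,
                          current_threads: int,
                          failure_rate: float,
                          first_failure_rate: float = 0.5,
                          second_failure_rate: float = 0.1,
                          first_ratio: float = 0.8,
                          first_coefficient: float = 0.5,
                          first_number: int = 1) -> int:
    if failure_rate >= first_failure_rate:
        # Claim 4: shrink by multiplying with a coefficient in (0, 1),
        # never going below a single allowed thread.
        return max(1, int(size_threshold * first_coefficient))
    if (failure_rate <= second_failure_rate
            and current_threads / size_threshold >= first_ratio):
        # Claim 5: grow by adding a number >= 1, but only when the pool
        # is close to its current threshold (utilization is high).
        return size_threshold + first_number
    # Otherwise leave the threshold unchanged.
    return size_threshold
```

For example, with the assumed defaults, a 60% failure rate halves a threshold of 100 to 50, while a 5% failure rate with 90 of 100 threads in use raises it to 101.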
6. The method according to any one of claims 1-4, wherein before adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool, the method further comprises:
within the current time window, increasing the size threshold of the same concurrent thread pool according to the current number of threads of the same concurrent thread pool and the size threshold of the same concurrent thread pool; and
if a failure result of a service request corresponding to the same concurrent thread pool fed back by the control node is obtained, ending a first adjustment stage and entering a second adjustment stage for the same concurrent thread pool, wherein the first adjustment stage refers to the stage of increasing the size threshold of the same concurrent thread pool, and the second adjustment stage refers to the stage of adjusting the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool.
7. The method according to claim 6, wherein increasing the size threshold of the same concurrent thread pool according to the current number of threads of the same concurrent thread pool and the size threshold of the same concurrent thread pool within the current time window comprises:
determining whether the ratio of the current number of threads of the same concurrent thread pool to the size threshold of the same concurrent thread pool is greater than or equal to a first ratio threshold, and if so, increasing the size threshold of the same concurrent thread pool.
8. The method of claim 7, wherein increasing the size threshold of the same concurrent thread pool comprises: adding a second number to the size threshold of the same concurrent thread pool to increase the size threshold, the second number being greater than or equal to 1.
9. The method of claim 6, further comprising:
if neither the first adjustment stage nor the second adjustment stage triggers an update of the size threshold of the same concurrent thread pool in the current time window, determining whether the ratio of the current number of threads of the same concurrent thread pool to the size threshold of the same concurrent thread pool is less than or equal to a second ratio threshold, the second ratio threshold being smaller than the first ratio threshold; and
if the determination results for a preset number of time windows are all less than or equal to the second ratio threshold, reducing the size threshold of the same concurrent thread pool.
10. The method of claim 9, wherein reducing the size threshold of the same concurrent thread pool comprises: multiplying the size threshold of the same concurrent thread pool by a second coefficient to reduce the size threshold, the second coefficient being greater than 0 and less than 1.
11. The method of claim 9, further comprising: if the determination results for the preset number of time windows are all less than or equal to the second ratio threshold and the current adjustment stage of the same concurrent thread pool is the second adjustment stage, ending the second adjustment stage and entering the first adjustment stage for the same concurrent thread pool.
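Claims 6-11 together describe a two-stage controller: a ramp-up stage that grows the size threshold while the pool runs near capacity, a switch to the statistics-driven stage on the first failure, and a shrink-and-reset path after a preset number of consecutive underutilized windows. A compact sketch of that state machine follows; the class name, method names, and every numeric parameter are assumptions for illustration only.

```python
# Sketch of the two-stage control in claims 6-11. Stage 1 grows the size
# threshold while the pool is nearly full; the first failure switches to
# stage 2; a preset number of consecutive underutilized windows (in which
# neither stage updated the threshold) shrinks it and re-enters stage 1.
class TwoStageController:
    def __init__(self, size_threshold: int = 10):
        self.size_threshold = size_threshold
        self.stage = 1
        self.low_windows = 0           # consecutive underutilized windows
        self.first_ratio = 0.8         # claim 7's first ratio threshold (assumed)
        self.second_ratio = 0.2        # claim 9's second ratio threshold (assumed)
        self.preset_windows = 3        # claim 9's preset number of windows (assumed)
        self.second_number = 1         # claim 8's second number, >= 1 (assumed)
        self.second_coefficient = 0.5  # claim 10's second coefficient (assumed)

    def on_failure(self) -> None:
        # Claim 6: any failure result ends stage 1 and enters stage 2.
        self.stage = 2

    def on_window_end(self, current_threads: int, updated: bool) -> None:
        # `updated` indicates whether a stage-2 (failure-rate-driven)
        # adjustment already changed the threshold in this window.
        if self.stage == 1 and current_threads / self.size_threshold >= self.first_ratio:
            self.size_threshold += self.second_number  # claims 7-8
            return
        if not updated:
            # Claims 9-11: count consecutive underutilized windows.
            if current_threads / self.size_threshold <= self.second_ratio:
                self.low_windows += 1
            else:
                self.low_windows = 0
            if self.low_windows >= self.preset_windows:
                self.size_threshold = max(
                    1, int(self.size_threshold * self.second_coefficient))
                self.low_windows = 0
                if self.stage == 2:
                    self.stage = 1  # claim 11: return to the first stage
```

With the assumed defaults, a pool at 9 of 10 threads grows its threshold to 11 in stage 1; after a failure, three consecutive windows at low utilization halve the threshold and restart stage 1.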
12. A concurrency control apparatus under a serverless architecture including a front-end node and a control node, the apparatus being applied to the front-end node, the apparatus comprising:
a counting module, configured to count success and failure results, fed back by the control node within a current time window, of service requests corresponding to a same concurrent thread pool, to obtain a statistical result of the same concurrent thread pool, wherein threads in the same concurrent thread pool correspond one-to-one to service requests belonging to a same service request set, and each thread is used to call an interface provided by the control node according to its corresponding service request, so that the control node allocates a corresponding instance to the service request;
and an adjustment module, configured to adjust the size threshold of the same concurrent thread pool according to the statistical result of the same concurrent thread pool within the current time window.
13. A computer device, comprising: a memory and a processor, wherein the memory is configured to store one or more computer instructions which, when executed by the processor, implement the method of any one of claims 1 to 11.
14. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the method of any one of claims 1 to 11.
CN202111580666.5A 2021-12-22 2021-12-22 Concurrency control method, device, equipment and storage medium under server-free architecture Pending CN114338816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111580666.5A CN114338816A (en) 2021-12-22 2021-12-22 Concurrency control method, device, equipment and storage medium under server-free architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111580666.5A CN114338816A (en) 2021-12-22 2021-12-22 Concurrency control method, device, equipment and storage medium under server-free architecture

Publications (1)

Publication Number Publication Date
CN114338816A true CN114338816A (en) 2022-04-12

Family

ID=81053873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111580666.5A Pending CN114338816A (en) 2021-12-22 2021-12-22 Concurrency control method, device, equipment and storage medium under server-free architecture

Country Status (1)

Country Link
CN (1) CN114338816A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090307352A1 (en) * 2008-06-10 2009-12-10 International Business Machines Corporation Requester-Side Autonomic Governor
US20170359411A1 (en) * 2016-06-08 2017-12-14 International Business Machines Corporation Concurrency reduction service
CN109582455A (en) * 2018-12-03 2019-04-05 恒生电子股份有限公司 Multithreading task processing method, device and storage medium
CN109831474A (en) * 2018-11-26 2019-05-31 阿里巴巴集团控股有限公司 Keep-alive system, method, server and the readable storage medium storing program for executing of http long connection
CN110069337A (en) * 2018-01-24 2019-07-30 北京京东尚科信息技术有限公司 A kind of method and apparatus that disaster tolerance degrades
CN110730136A (en) * 2019-10-10 2020-01-24 腾讯科技(深圳)有限公司 Method, device, server and storage medium for realizing flow control
WO2020140369A1 (en) * 2019-01-04 2020-07-09 平安科技(深圳)有限公司 Data recovery control method, server and storage medium
CN111694669A (en) * 2020-06-12 2020-09-22 深圳前海微众银行股份有限公司 Task processing method and device
CN112256432A (en) * 2020-10-29 2021-01-22 北京达佳互联信息技术有限公司 Service overload processing method and device, electronic equipment and storage medium
CN112968951A (en) * 2021-02-02 2021-06-15 浙江大华技术股份有限公司 Service node connection method and device, storage medium and electronic device
US20210208944A1 (en) * 2020-01-02 2021-07-08 International Business Machines Corporation Thread pool management for multiple applications
CN113194040A (en) * 2021-04-28 2021-07-30 王程 Intelligent control method for instantaneous high-concurrency server thread pool congestion
CN113472879A (en) * 2021-06-29 2021-10-01 中国平安财产保险股份有限公司 Service request method, device, computer equipment and storage medium
CN113645153A (en) * 2021-08-11 2021-11-12 中国银行股份有限公司 Flow control method, device, equipment and medium


Similar Documents

Publication Publication Date Title
CN110489447B (en) Data query method and device, computer equipment and storage medium
KR102139410B1 (en) Time-based node selection method and apparatus
CN109936511B (en) Token obtaining method, device, server, terminal equipment and medium
US8719297B2 (en) System for managing data collection processes
US11316792B2 (en) Method and system of limiting traffic
CN109087055B (en) Service request control method and device
US10884667B2 (en) Storage controller and IO request processing method
US8676621B1 (en) System and method for managing requests for pooled resources during non-contention
CN107436835B (en) Access control method and device
US10496282B1 (en) Adaptive port management to meet service level objectives using reinforcement learning
US9037703B1 (en) System and methods for managing system resources on distributed servers
CN110505155A (en) Request degradation processing method, device, electronic equipment and storage medium
CN108958975B (en) Method, device and equipment for controlling data recovery speed
CN112165436A (en) Flow control method, device and system
CN112965823A (en) Call request control method and device, electronic equipment and storage medium
CN102137091A (en) Overload control method, device and system as well as client-side
CN114338816A (en) Concurrency control method, device, equipment and storage medium under server-free architecture
CN113992586A (en) Flow control method and device, computer equipment and storage medium
CN111966918A (en) Current limiting method, device and system for concurrent access requests
CN107491455A (en) Read method and device in a kind of distributed system
CN115174487A (en) High-concurrency current limiting method and device and computer storage medium
CN110955502A (en) Task scheduling method and device
US10992517B1 (en) Dynamic distributed execution budget management system
EP3214537A1 (en) Storage array operation method and device
US8051419B2 (en) Method of dynamically adjusting number of task request

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination