CN106533961B - Flow control method and device - Google Patents

Flow control method and device

Info

Publication number
CN106533961B
CN106533961B (application number CN201611264947.9A)
Authority
CN
China
Prior art keywords
pool
queue
flow
execution
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611264947.9A
Other languages
Chinese (zh)
Other versions
CN106533961A (en)
Inventor
杨全文
王仁重
王昭
李旭嘉
徐航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Agricultural Bank of China
Original Assignee
Agricultural Bank of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agricultural Bank of China filed Critical Agricultural Bank of China
Priority to CN201611264947.9A
Publication of CN106533961A
Application granted
Publication of CN106533961B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/12 - Avoiding congestion; Recovering from congestion
    • H04L 47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/50 - Queue scheduling

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention discloses a flow control method applied to a server. The method comprises the following steps: receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user; obtaining a flow pool corresponding to the user according to the identifier of the user, wherein the flow pool comprises an execution queue and a waiting queue, and the execution request is processed when it is in the execution queue; checking whether the execution queue in the obtained flow pool is full; and, when the execution queue in the obtained flow pool is full, adding the execution request to the waiting queue in the obtained flow pool. The flow control method provided by the embodiment of the invention sets an independent flow pool for each application system, so the flow of different application systems accessing the platform system can be controlled in a targeted manner, which also facilitates cluster load balancing.

Description

Flow control method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a flow control method and apparatus.
Background
Traffic refers to the number of requests with which a user accesses a system over a network at a given time. Flow control mainly refers to limiting and counting, from the service perspective, the maximum number of requests that can access the system. The main body of flow control is a platform system, and the accessed user generally refers to another application system accessing the platform system; for example, a file management platform mainly provides operations such as storing, querying, modifying and deleting files for various application systems. In this case, the maximum flow with which an application system accesses the file management platform at a given moment needs to be limited according to the access period, traffic volume, priority and the like of the different application systems, and the flow information needs to be counted in real time.
The existing flow control method generally uses the connection pool of server middleware to limit the number of requests that access a system simultaneously. For example, for a platform system deployed on a WebSphere server as a Web application, a certain degree of flow control can be achieved through the thread connection pool of the WebSphere server. This approach manages the number of requests accessing the application by controlling the number of threads at the JVM (Java Virtual Machine) level, and rejects subsequent access requests once the number of requests reaches or exceeds the configured limit. Because the object actually served by this flow control is the WebSphere server itself rather than the platform system, and the control is unrelated to the services of the accessing systems, the flow cannot be controlled and counted flexibly according to the role and identity of the requesting user; that is, flow control cannot be targeted at the service requirements of different application systems. Therefore, the existing flow control method cannot meet the flow control requirements of a platform system.
Disclosure of Invention
In view of this, the present invention provides a flow control method and device, which can solve the problem in the prior art that flow control cannot be targeted at the service requirements of different application systems.
The flow control method provided by the embodiment of the invention is applied to a server; the method comprises the following steps:
receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user;
obtaining a flow pool corresponding to the user according to the user identification;
the flow pool comprises an execution queue and a waiting queue, and when the execution request is in the execution queue, the execution request is processed;
checking whether an execution queue in the obtained flow pool is full;
and when the execution queue in the obtained traffic pool is full, adding the execution request into a waiting queue in the obtained traffic pool.
Optionally, the obtaining, according to the identifier of the user, a traffic pool corresponding to the user specifically includes:
searching whether a flow pool corresponding to the user exists or not according to the user identification;
if yes, directly obtaining a flow pool corresponding to the user;
and if not, creating a corresponding flow pool for the user according to the preset flow limit.
Optionally, the adding the execution request to the waiting queue in the obtained traffic pool specifically includes:
checking whether a waiting queue in the obtained traffic pool is full;
when the waiting queue in the obtained flow pool is not full, adding the execution request into the waiting queue in the obtained flow pool;
and rejecting the execution request when a waiting queue in the obtained traffic pool is full.
Optionally, the adding the execution request to the waiting queue in the obtained traffic pool further includes:
checking whether the execution request is the head of a waiting queue in the obtained flow pool;
when the execution request is the head of a waiting queue in the obtained traffic pool, checking whether an execution queue in the obtained traffic pool is full;
and when the execution queue in the obtained flow pool is not full, adding the execution request into the execution queue in the obtained flow pool.
Optionally, the method further includes:
receiving a traffic statistic request, wherein the traffic statistic request carries at least one identifier to be detected;
determining a flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected;
and acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
The flow control device provided by the embodiment of the invention is applied to a server. The device comprises: a receiving module, an obtaining module, a checking module and a processing module;
the receiving module is used for receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user;
the obtaining module is used for obtaining a flow pool corresponding to the user according to the identification of the user;
the flow pool comprises an execution queue and a waiting queue, and when the execution request is in the execution queue, the execution request is processed;
the checking module is used for checking whether the execution queue in the obtained flow pool is full;
and the processing module is used for adding the execution request into a waiting queue in the obtained flow pool when the checking module checks that the execution queue in the obtained flow pool is full.
Optionally, the obtaining module specifically includes: a searching sub-module, an obtaining sub-module and a creating sub-module;
the searching submodule is used for searching whether a flow pool corresponding to the user exists or not according to the identification of the user;
the obtaining sub-module is used for directly obtaining the flow pool corresponding to the user when the searching sub-module finds the flow pool corresponding to the user;
and the creating sub-module is used for creating a corresponding flow pool for the user according to the preset flow limit when the searching sub-module does not find a flow pool corresponding to the user.
Optionally, the processing module specifically includes: the checking submodule and the processing submodule;
the checking submodule is used for checking whether a waiting queue in the obtained flow pool is full;
the processing submodule is used for adding the execution request into the waiting queue in the obtained flow pool when the checking submodule checks that the waiting queue in the obtained flow pool is not full; and the checking submodule is also used for rejecting the execution request when the waiting queue in the obtained traffic pool is checked to be full.
Optionally,
the checking module is further configured to check whether the execution request is a head of a waiting queue in the obtained traffic pool; further for checking whether an execution queue in the obtained traffic pool is full when it is checked that the execution request is the head of a waiting queue in the obtained traffic pool;
the processing module is further configured to add the execution request to the execution queue in the obtained traffic pool when the checking module checks that the execution queue in the obtained traffic pool is not full.
Optionally, the method further includes: a determining module and a counting module;
the receiving module is further configured to receive a traffic statistics request, where the traffic statistics request carries at least one identifier to be detected;
the determining module is used for determining the flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected;
and the statistical module is used for acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
Compared with the prior art, the invention has at least the following advantages:
according to the flow control method provided by the embodiment of the invention, after an execution request triggered by an application system is received, a flow pool corresponding to a user is obtained according to the identifier of the application system carried in the execution request. Then, it is checked whether the execution queue in the obtained traffic pool is full. When the target execution queue is full, the execution request is added to a wait queue in the obtained traffic pool. And adding the execution instruction into the execution queue and executing the execution request triggered by the application system only when the execution queue in the obtained flow pool is not full. The flow control method provided by the embodiment of the invention is used for respectively setting independent flow pools corresponding to different application systems for the different application systems, can be used for controlling the flow of the different application systems accessing to the platform system in a targeted manner, and is favorable for cluster load balancing.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an embodiment of a flow control method provided in the present invention;
fig. 2 is a schematic diagram illustrating a correspondence relationship between an application system and a flow pool in a flow control method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of a flow control device provided by the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For convenience of understanding, technical terms related to the embodiments of the present invention are described below.
Flow: the number of requests with which a user accesses a system over a network at a given time.
Flow control: a function, realized by a certain technology, that limits the number of requests accessing a system at a given time; it generally focuses on setting an upper limit on the flow.
Application system: a user accessing the platform system, generally referring to other systems within an enterprise, such as archive, financial and business management systems.
Platform system: a system that provides a certain service, such as file management or single sign-on, for other application systems inside an enterprise; it is the main body that executes the flow control method provided by the embodiment of the present invention.
Distributed application coordination server: mainly used to solve the problem of highly reliable coordination in large-scale distributed systems; it generally requires no fewer than three servers running as a cluster, and it provides distributed systems with application scenarios such as unified naming service, load balancing, distributed locks, configuration management and fault recovery. Common products that can be used for distributed application coordination include ZooKeeper and Redis; ZooKeeper is the most widely used and is also an important component of the Hadoop big data ecosystem.
In an enterprise, there are multiple application systems, which correspond to different processing services and belong to different departments. These application systems often have commonalities in certain functions, such as storage and management of files, single sign-on functions, and the like. In order to avoid repeated construction of different application systems and reduce development and operation and maintenance costs, functions with commonality are often proposed to be processed by a common platform system, and other application systems realize the functions with commonality by interacting with the platform system.
However, the application systems accessing the platform system are of various types and have different requirements. For example, application system A may send requests more frequently during a certain period and require the platform system to provide more service resources, while application system B does not have a high resource demand at that time; the access traffic of A and B therefore needs to be coordinated over time. For another example, application systems A and C may both need to request many resources at the same time, but A has a higher priority, so the platform system should at that time allocate its limited traffic support preferentially to A. Considering these different requirement scenarios, and given that the processing capability of the platform system is limited, the platform system should be able to control the accessed traffic according to the situation of each application system, so as to make the best use of its processing capability.
The existing HTTP client connection pool is a connection control technology based on the transport-layer TCP protocol. Its basic principle is as follows: several connection pools are established according to the mapping of network routes, and each pool contains three connection queues: pending, available and leased. When an access request arrives, the connection pool manager assigns the request to the corresponding connection pool according to the route mapping of the request. If the available queue of that pool contains an available connection, the connection is returned to the request, which continues to execute; if the available queue has no available connection, the request is added to the pending queue and executed once a connection becomes available. After the request has been executed, the connection is released to the available or leased queue depending on whether the connection is to be reused. The HTTP connection pool is a fairly common connection management technology, and it focuses on managing connections from the perspective of the underlying protocol.
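For reference, the route-based pooling behaviour described above can be configured roughly as follows with Apache HttpClient 4.x. This is a minimal sketch for illustration only; the host name, port and limit values are assumptions rather than anything specified in the present application:

```java
import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class ConnectionPoolExample {
    public static CloseableHttpClient buildClient() {
        // One pool per route; each pool tracks leased, available and pending connections.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);          // upper bound across all routes
        cm.setDefaultMaxPerRoute(10); // default per-route limit
        // Give one specific route (hypothetical host) a larger pool.
        cm.setMaxPerRoute(new HttpRoute(new HttpHost("platform.example.com", 8080)), 20);
        return HttpClients.custom()
                .setConnectionManager(cm)
                .build();
    }
}
```

Each route gets its own bounded pool, which corresponds to the per-route pending, available and leased queues described above.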
In addition, the existing WebSphere server provides a function for controlling the number of threads created by the JVM, which can be used to control the maximum number of concurrent users accessing the WebSphere server and can therefore also play a role in traffic control to a certain extent. From a technical perspective, unlike the HTTP client connection pool, WebSphere manages the maximum concurrency by controlling the number of JVM threads.
It can be understood that both of the above prior-art techniques operate at the level of a single server node rather than at the cluster level, and cannot control the traffic of the whole cluster as a whole. Considering that the application environments of current platform systems are basically cluster environments, if these two techniques are used to control system flow, the number of connections of each node in the cluster can only be calculated and statically configured by those skilled in the art according to the cluster situation, so the overall flow of the cluster cannot be controlled dynamically and flexibly, which is unfavorable for cluster load balancing. In addition, in the prior art the traffic handling is unrelated to the accessing user: the information of the accessing user is not taken into account, and traffic cannot be managed at the level of individual users.
After an application system accesses the platform system, the platform system creates a corresponding independent traffic pool for the application system according to a preconfigured flow limit. The traffic pool includes an execution queue and a waiting queue, and the length limit of the execution queue (i.e. the maximum number of allowed requests) is configured separately for each application system. When the application system sends a request to the platform system, the platform system first checks whether the execution queue of the corresponding traffic pool is full; if it is not full, the platform system responds to the execution request, and if it is full, the request enters the waiting queue to wait for execution. When a request in the execution queue finishes executing and exits, the requests in the waiting queue are added to the execution queue in order and executed one by one. If the waiting queue is also full, the platform system directly rejects the execution request of the application system. By modifying the preconfigured flow limit, the flow of each application system can be limited dynamically, so the flow of different application systems accessing the platform system can be controlled in a targeted manner.
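As an illustration of the pool logic just described, the following minimal Java sketch models a single traffic pool with a bounded execution queue and a bounded waiting queue. The class and method names (TrafficPool, submit, complete) and the use of in-process collections are illustrative assumptions, not an implementation prescribed by the present application:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** One independent traffic pool for a single application system (user). Illustrative sketch. */
public class TrafficPool {
    private final int execLimit;  // preconfigured flow limit (max concurrent requests)
    private final int waitLimit;  // maximum length of the waiting queue
    private final Deque<String> executionQueue = new ArrayDeque<>();
    private final Deque<String> waitingQueue = new ArrayDeque<>();

    public TrafficPool(int execLimit, int waitLimit) {
        this.execLimit = execLimit;
        this.waitLimit = waitLimit;
    }

    /** Admit a request: execute it, queue it for later, or reject it. */
    public synchronized String submit(String requestId) {
        if (executionQueue.size() < execLimit) {   // execution queue not full
            executionQueue.addLast(requestId);
            return "EXECUTE";
        }
        if (waitingQueue.size() < waitLimit) {     // execution queue full, waiting queue not full
            waitingQueue.addLast(requestId);
            return "WAIT";
        }
        return "REJECT";                           // both queues full
    }

    /** Called when a request finishes; promotes the head of the waiting queue, if any. */
    public synchronized String complete(String requestId) {
        executionQueue.remove(requestId);
        if (!waitingQueue.isEmpty() && executionQueue.size() < execLimit) {
            String next = waitingQueue.pollFirst();
            executionQueue.addLast(next);
            return next;  // this request may now be processed
        }
        return null;
    }

    public synchronized int executionLength() { return executionQueue.size(); }
    public synchronized int waitingLength()   { return waitingQueue.size(); }
}
```

In a cluster deployment such as the one described later in this embodiment, the two queues would typically be shared state kept on the distributed application coordination server (for example, as znodes) rather than in-process collections.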
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, embodiments accompanying the drawings are described in detail below.
The embodiment of the method is as follows:
referring to fig. 1, the figure is a schematic flow chart of an embodiment of a flow control method provided in the present invention.
First, it should be noted that the flow control method provided in the embodiment of the present invention is applied to a server, and the server is used for controlling data interaction between the platform system and an application system.
The flow control method provided in this embodiment includes:
s101: receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user.
It is understood that, for the platform system, the user is other external application systems that can access the platform system, and the execution request is a request triggered by the application system and sent to the platform system. For example, for a file management platform system, a user is another application system that can access the file management platform system and operate on files therein, and an execution request is a request related to file management triggered by the application system.
In a specific implementation, when the execution request is received, the permission of the user is first verified according to the user identifier carried in the execution request, in order to judge whether the user is allowed to access before the execution request is responded to. If the user fails the verification, the execution request is directly rejected; if the user passes the verification, the subsequent steps are executed to judge whether the flow limit of the user has been exceeded.
S102: and obtaining a flow pool corresponding to the user according to the user identification.
The flow pool comprises an execution queue and a waiting queue, and when the execution request is in the execution queue, the execution request is processed.
Referring to fig. 2, a one-to-one correspondence of application systems to traffic pools is shown. The flow of each accessed application system (namely, user) is limited through different flow pools, so that the flow of the different application systems accessed to the platform system can be controlled in a targeted manner.
It should be noted here that the queue length of the execution queue is the maximum number of requests of the application system allowed by the platform system; execution requests exceeding this length are temporarily not processed or are directly rejected.
It can be understood that when an execution request sent by the user is in the execution queue of the corresponding traffic pool, the platform system responds to the request so that the user can use the functions provided by the platform system, and the request exits the execution queue after execution is completed. When an execution request sent by the user is in the waiting queue of the corresponding traffic pool, the user needs to wait until the execution queue becomes free in order to obtain a response from the platform system.
In a preferred embodiment of this embodiment, step S102 specifically includes: searching whether a flow pool corresponding to the user exists according to the identifier of the user; if yes, directly obtaining the flow pool corresponding to the user; and if not, creating a corresponding flow pool for the user according to the preset flow limit.
It should be noted that, in the implementation, the preconfigured traffic limits may be written in the configuration table. When a flow pool is created for a user, an execution queue and a waiting queue with corresponding lengths are created for the flow pool directly according to the preset flow limit acquired from the configuration table. It can be understood that, according to the actual situation, a person skilled in the art may specifically set the flow limit corresponding to the user to achieve targeted control of the flow, and the specific maximum flow number is not described in detail here.
In addition, a person skilled in the art can adjust the flow limitation in the configuration table according to the actual situation, dynamically and flexibly limit the flows of different application systems, and is favorable for load balancing of the cluster.
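A minimal sketch of this look-up-or-create step, driven by such a configuration table, is given below. The registry class name, the example limit values and the default entry are assumptions for illustration; the present application only requires that a pool be created from the preconfigured flow limit when none exists for the user:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative registry that maps user identifiers to their independent traffic pools. */
public class TrafficPoolRegistry {
    /** Pre-configured limits per user identifier: {execLimit, waitLimit}. Example values only. */
    private final Map<String, int[]> limitTable = Map.of(
            "APP_A", new int[] {20, 50},
            "APP_B", new int[] {5, 10});

    /** Fallback limits for users without an explicit entry in the configuration table (assumption). */
    private static final int[] DEFAULT_LIMITS = {10, 20};

    private final ConcurrentHashMap<String, TrafficPool> pools = new ConcurrentHashMap<>();

    /** Look up the traffic pool for this user; create it from the configured limits if absent. */
    public TrafficPool poolFor(String userId) {
        return pools.computeIfAbsent(userId, id -> {
            int[] limits = limitTable.getOrDefault(id, DEFAULT_LIMITS);
            return new TrafficPool(limits[0], limits[1]);
        });
    }
}
```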
S103: it is checked whether the execution queue in the obtained traffic pool is full.
S104: and when the execution queue in the obtained traffic pool is full, adding the execution request into a waiting queue in the obtained traffic pool.
It is understood that when the execution queue is full, it indicates that the platform system has reached the maximum processing amount of the request for the user, and the traffic of the user needs to be limited. At this time, the execution request is added into the waiting queue, and after the execution queue is idle, the execution request is added into the execution queue, so that the platform system responds to the execution request.
In some possible implementation manners of this embodiment, step S104 specifically includes: checking whether a waiting queue in the obtained traffic pool is full; when the waiting queue in the obtained flow pool is not full, adding the execution request into the waiting queue in the obtained flow pool; and rejecting the execution request when a waiting queue in the obtained traffic pool is full.
It will be appreciated that a wait queue may be provided to avoid the user waiting too long for the server to respond. When the wait queue is full, the execution request is denied. Those skilled in the art can specifically set the length of the waiting queue according to the actual situation, and details are not described here.
In a preferred embodiment of this embodiment, after step S104, the flow control method further includes: checking whether the execution request is the head of a waiting queue in the obtained flow pool; when the execution request is the head of a waiting queue in the obtained traffic pool, checking whether an execution queue in the obtained traffic pool is full; and when the execution queue in the obtained flow pool is not full, adding the execution request into the execution queue in the obtained flow pool.
As an example, in a specific implementation, a watcher can be registered on the execution queue in the obtained traffic pool according to the watcher mechanism provided by the distributed application coordination server, and the notification sent by the execution queue can then be awaited in order to check whether the execution queue is full.
When the notification sent by the execution queue is received and the execution queue in the obtained traffic pool is checked to be not full, the execution request can be added to the execution queue in the obtained traffic pool.
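By way of illustration, the sketch below registers a watch on an execution-queue znode with the ZooKeeper Java client and re-checks the queue length when a change notification arrives. The znode path layout, the connection string and the session timeout are assumptions made for the sketch and are not defined by the present application:

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ExecutionQueueWatcher {
    private final ZooKeeper zk;
    private final String execQueuePath;  // e.g. "/flowcontrol/APP_A/execution" (illustrative layout)
    private final int execLimit;

    public ExecutionQueueWatcher(String connectString, String execQueuePath, int execLimit)
            throws Exception {
        // Session-level watcher does nothing here; connection events are ignored in this sketch.
        this.zk = new ZooKeeper(connectString, 30_000, event -> { });
        this.execQueuePath = execQueuePath;
        this.execLimit = execLimit;
    }

    /** Register a watch on the execution queue and react when its children change. */
    public void watchForFreeSlot() throws Exception {
        Watcher watcher = new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                try {
                    // Re-read the queue; ZooKeeper watches are one-shot, so re-register if still full.
                    int length = zk.getChildren(execQueuePath, false).size();
                    if (length < execLimit) {
                        // Execution queue is not full: the head of the waiting queue can be promoted.
                        System.out.println("Free slot in execution queue: " + length + "/" + execLimit);
                    } else {
                        watchForFreeSlot();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        // getChildren with a Watcher fires once when the children of the execution-queue znode change.
        zk.getChildren(execQueuePath, watcher);
    }
}
```

Because ZooKeeper watches are one-shot, the sketch re-registers the watch whenever the queue is still full, which matches the wait-until-notified behaviour described above.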
In a preferred embodiment of this embodiment, the flow control method further includes a flow statistics step, which is specifically as follows: receiving a traffic statistic request, wherein the traffic statistic request carries at least one identifier to be detected; determining a flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected; and acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
In one example, suppose the traffic of application system A and application system B needs to be counted. First, a traffic statistics request carrying the identifiers of application system A and application system B (i.e. the identifiers to be measured) is received. Then, the flow pools to be measured, namely flow pool A corresponding to application system A and flow pool B corresponding to application system B, are obtained according to the identifiers carried in the traffic statistics request. The flow information of flow pool A and flow pool B is then counted in turn. Taking flow pool A as an example: it is first determined whether execution queue A of flow pool A is empty, and if it is not empty, the number of requests in execution queue A (i.e. the queue length of execution queue A) is counted; it is then determined whether waiting queue A of flow pool A is empty, and if it is not empty, the number of requests in waiting queue A (i.e. the queue length of waiting queue A) is counted. It can be understood that waiting queue A may also be counted before execution queue A, which is not limited by the present invention. The flow statistics of flow pool B are obtained in a similar way and are not described again here.
Preferably, the length of the execution queue or the waiting queue can be optionally counted, which is not described in detail herein.
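A minimal sketch of this statistics step, built on the illustrative TrafficPoolRegistry and TrafficPool classes sketched above (and therefore sharing their assumptions), might look as follows:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative traffic statistics over the registry sketched earlier. */
public class TrafficStatistics {
    private final TrafficPoolRegistry registry;

    public TrafficStatistics(TrafficPoolRegistry registry) {
        this.registry = registry;
    }

    /** For each identifier to be measured, report {execution-queue length, waiting-queue length}. */
    public Map<String, int[]> count(List<String> idsToMeasure) {
        Map<String, int[]> report = new LinkedHashMap<>();
        for (String id : idsToMeasure) {
            TrafficPool pool = registry.poolFor(id);  // the flow pool to be tested
            report.put(id, new int[] {pool.executionLength(), pool.waitingLength()});
        }
        return report;
    }
}
```

Counting application system A and application system B, for example with count(List.of("APP_A", "APP_B")), would return the execution-queue and waiting-queue lengths of flow pool A and flow pool B in order.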
It should be further noted that, in a specific implementation, the flow control method provided in this embodiment may be deployed on a distributed application coordination server in the form of a distributed system; such a server has very good processing performance, so system performance under high load can be fully guaranteed. Moreover, the high fault tolerance of the distributed application coordination server is fully utilized, and the server cluster ensures the integrity of the flow control system: even if one or more server hosts fail, the flow control system continues to operate normally as long as the minimum number of servers required by the cluster (generally no fewer than three) is still available.
Taking ZooKeeper as an example, a person skilled in the art may use a znode file structure of ZooKeeper to implement the flow control method and the flow statistics step described in the above steps.
A ZooKeeper server cluster mainly comprises a leader node (master node), follower nodes (slave nodes) and observer nodes, all of which take part in processing the read and write requests of the cluster. The differences are as follows: the leader node controls the server cluster and is generally produced by election among the follower nodes, and if the leader node fails, a new leader can be elected immediately as long as there are enough follower nodes (no fewer than three); the follower nodes elect the leader node and take part in voting on the write requests of the cluster; the observer nodes do not participate in voting and only receive the voting results, and are mainly used to scale out read requests.
In the flow control method provided in this embodiment, after an execution request triggered by an application system is received, the flow pool corresponding to the user is obtained according to the identifier of the application system carried in the execution request. It is then checked whether the execution queue in the obtained flow pool is full. When that execution queue is full, the execution request is added to the waiting queue in the obtained flow pool; only when the execution queue in the obtained flow pool is not full is the execution request added to the execution queue and the request triggered by the application system executed. The flow control method provided by this embodiment sets an independent flow pool for each application system, so the flow of different application systems accessing the platform system can be controlled in a targeted manner, which also facilitates cluster load balancing.
Based on the flow control method provided by the above embodiment, the embodiment of the invention also provides a flow control device.
The embodiment of the device is as follows:
referring to fig. 3, a schematic structural view of an embodiment of the flow control device provided by the present invention is shown.
It should be noted that, the flow control device provided in the embodiment of the present invention is applied to a server, and the server is used for controlling data interaction between the platform system and an application system.
The flow control device provided in this embodiment includes: a receiving module 100, an obtaining module 200, a checking module 300 and a processing module 400;
the receiving module 100 is configured to receive an execution request triggered by a user, where the execution request carries an identifier of the user.
The obtaining module 200 is configured to obtain a traffic pool corresponding to the user according to the identifier of the user.
The flow pool comprises an execution queue and a waiting queue, and when the execution request is in the execution queue, the execution request is processed.
In a preferred embodiment of this embodiment, the obtaining module 200 specifically includes: a search sub-module, an acquisition sub-module and a creation sub-module (none of which are shown in the figure);
and the searching submodule is used for searching whether a flow pool corresponding to the user exists or not according to the identification of the user.
The obtaining sub-module is configured to directly obtain the traffic pool corresponding to the user when the searching sub-module finds the traffic pool corresponding to the user.
And the creating sub-module is used for creating a corresponding flow pool for the user according to the preset flow limit when the searching sub-module does not find a flow pool corresponding to the user.
The checking module 300 is configured to check whether an execution queue in the obtained traffic pool is full.
The processing module 400 is configured to add the execution request to a waiting queue in the obtained traffic pool when the checking module 300 checks that the execution queue in the obtained traffic pool is full.
In some possible implementation manners of this embodiment, the processing module 400 specifically includes: an inspection sub-module and a processing sub-module (neither shown in the figure);
and the checking submodule is used for checking whether the waiting queue in the obtained flow pool is full.
The processing submodule is used for adding the execution request into the waiting queue in the obtained flow pool when the checking submodule checks that the waiting queue in the obtained flow pool is not full; and the checking submodule is also used for rejecting the execution request when the waiting queue in the obtained traffic pool is checked to be full.
In some possible implementation manners of this embodiment, the checking module 300 is further configured to check whether the execution request is a head of a waiting queue in the obtained traffic pool; further for checking whether an execution queue in the obtained traffic pool is full when it is checked that the execution request is the head of a waiting queue in the obtained traffic pool;
the processing module 400 is further configured to add the execution request to the execution queue in the obtained traffic pool when the checking module 300 checks that the execution queue in the obtained traffic pool is not full.
In a preferred embodiment of this embodiment, the flow control device further includes: a determining module and a statistical module (neither shown in the figure);
the receiving module is further configured to receive a traffic statistics request, where the traffic statistics request carries at least one identifier to be detected.
And the determining module is used for determining the flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected.
And the statistical module is used for acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
In the flow control apparatus provided in this embodiment, after the receiving module receives an execution request triggered by an application system, the obtaining module obtains the flow pool corresponding to the user according to the identifier of the application system carried in the execution request. The checking module then checks whether the execution queue in the obtained flow pool is full. When that execution queue is full, the processing module adds the execution request to the waiting queue in the obtained flow pool; only when the execution queue in the obtained flow pool is not full does the processing module add the execution request to the execution queue so that the request triggered by the application system is executed. The flow control device provided by this embodiment sets an independent flow pool for each application system, so the flow of different application systems accessing the platform system can be controlled in a targeted manner, which also facilitates cluster load balancing.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing is merely a preferred embodiment of the invention and is not intended to limit the invention in any way. Although the present invention has been described with reference to the preferred embodiments, they are not intended to be limiting. Those skilled in the art can, using the methods and technical content disclosed above, make many possible variations and modifications to the technical solution of the present invention, or modify it into equivalent embodiments, without departing from the scope of the technical solution of the present invention. Therefore, any simple modification, equivalent change or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (10)

1. A flow control method is characterized in that the method is applied to a server; the method comprises the following steps:
receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user;
obtaining an independent flow pool corresponding to the user according to the user identification; the flow pool comprises an execution queue and a waiting queue, the length of the execution queue is configured according to the preset flow limit corresponding to the user, and when the execution request is in the execution queue, the execution request is processed;
checking whether an execution queue in the obtained flow pool is full;
and when the execution queue in the obtained traffic pool is full, adding the execution request into a waiting queue in the obtained traffic pool.
2. The method for controlling flow according to claim 1, wherein the obtaining a flow pool corresponding to the user according to the identifier of the user specifically includes:
searching whether a flow pool corresponding to the user exists or not according to the user identification;
if yes, directly obtaining a flow pool corresponding to the user;
and if not, creating a corresponding flow pool for the user according to the preset flow limit.
3. The method according to claim 1, wherein the adding the execution request to a waiting queue in the obtained traffic pool specifically includes:
checking whether a waiting queue in the obtained traffic pool is full;
when the waiting queue in the obtained flow pool is not full, adding the execution request into the waiting queue in the obtained flow pool;
and rejecting the execution request when a waiting queue in the obtained traffic pool is full.
4. The flow control method according to claim 1, wherein the adding the execution request to a waiting queue in the obtained flow pool further comprises:
checking whether the execution request is the head of a waiting queue in the obtained flow pool;
when the execution request is the head of a waiting queue in the obtained traffic pool, checking whether an execution queue in the obtained traffic pool is full;
and when the execution queue in the obtained flow pool is not full, adding the execution request into the execution queue in the obtained flow pool.
5. The flow control method according to any one of claims 1 to 4, characterized by further comprising:
receiving a traffic statistic request, wherein the traffic statistic request carries at least one identifier to be detected;
determining a flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected;
and acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
6. A flow control device, applied to a server, the device comprising: a receiving module, an acquisition module, a checking module and a processing module;
the receiving module is used for receiving an execution request triggered by a user, wherein the execution request carries an identifier of the user;
the acquisition module is used for acquiring an independent flow pool corresponding to the user according to the user identifier; the flow pool comprises an execution queue and a waiting queue, the length of the execution queue is configured according to the preset flow limit corresponding to the user, and when the execution request is in the execution queue, the execution request is processed;
the checking module is used for checking whether the execution queue in the obtained flow pool is full;
and the processing module is used for adding the execution request into a waiting queue in the obtained flow pool when the checking module checks that the execution queue in the obtained flow pool is full.
7. The flow control device according to claim 6, wherein the obtaining module specifically includes: a searching submodule, an obtaining submodule and a creating submodule;
the searching submodule is used for searching whether a flow pool corresponding to the user exists or not according to the identification of the user;
the obtaining sub-module is used for directly obtaining the flow pool corresponding to the user when the searching sub-module finds the flow pool corresponding to the user;
and the creating sub-module is used for creating a corresponding flow pool for the user according to the preset flow limit when the searching sub-module does not search the flow pool corresponding to the user.
8. The flow control device according to claim 6, wherein the processing module specifically includes: the checking submodule and the processing submodule;
the checking submodule is used for checking whether a waiting queue in the obtained flow pool is full;
the processing submodule is used for adding the execution request into the waiting queue in the obtained flow pool when the checking submodule checks that the waiting queue in the obtained flow pool is not full; and the checking submodule is also used for rejecting the execution request when the waiting queue in the obtained traffic pool is checked to be full.
9. The flow control device of claim 6,
the checking module is further configured to check whether the execution request is a head of a waiting queue in the obtained traffic pool; further for checking whether an execution queue in the obtained traffic pool is full when it is checked that the execution request is the head of a waiting queue in the obtained traffic pool;
the processing module is further configured to add the execution request to the execution queue in the obtained traffic pool when the checking module checks that the execution queue in the obtained traffic pool is not full.
10. A flow control device according to any one of claims 6 to 9 further comprising: a determining module and a counting module;
the receiving module is further configured to receive a traffic statistics request, where the traffic statistics request carries at least one identifier to be detected;
the determining module is used for determining the flow pool corresponding to the identifier to be detected to obtain the flow pool to be detected;
and the statistical module is used for acquiring the queue length of the execution queue in the flow pool to be tested and the queue length of the waiting queue in the flow pool to be tested.
CN201611264947.9A 2016-12-30 2016-12-30 Flow control method and device Active CN106533961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611264947.9A CN106533961B (en) 2016-12-30 2016-12-30 Flow control method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611264947.9A CN106533961B (en) 2016-12-30 2016-12-30 Flow control method and device

Publications (2)

Publication Number Publication Date
CN106533961A CN106533961A (en) 2017-03-22
CN106533961B true CN106533961B (en) 2020-08-28

Family

ID=58336395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611264947.9A Active CN106533961B (en) 2016-12-30 2016-12-30 Flow control method and device

Country Status (1)

Country Link
CN (1) CN106533961B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109788450B (en) * 2019-01-15 2019-12-13 深圳市中天网景科技有限公司 traffic sharing method, system and terminal of Internet of things card
CN109947562A (en) * 2019-03-01 2019-06-28 上海七印信息科技有限公司 A kind of task distribution current limiting system and its method for allocating tasks
CN110097268B (en) * 2019-04-19 2022-08-19 北京金山安全软件有限公司 Task allocation method and device, electronic equipment and storage medium
CN113992587B (en) * 2021-12-27 2022-03-22 广东睿江云计算股份有限公司 Flow control method and device, computer equipment and storage medium
CN117827497A (en) * 2024-03-05 2024-04-05 中国电子科技集团公司第三十研究所 Method and device for flow statistics and real-time sequencing based on domestic multi-core processor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020156897A1 (en) * 2001-02-23 2002-10-24 Murthy Chintalapati Mechanism for servicing connections by disassociating processing resources from idle connections and monitoring the idle connections for activity
CN102035880A (en) * 2010-11-02 2011-04-27 中兴通讯股份有限公司 Method and device for maintaining connection
CN103583022A (en) * 2011-03-28 2014-02-12 思杰系统有限公司 Systems and methods for handling NIC congestion via NIC aware application
CN104572290A (en) * 2013-10-11 2015-04-29 中兴通讯股份有限公司 Method and device for controlling message processing threads
CN105681217A (en) * 2016-04-27 2016-06-15 深圳市中润四方信息技术有限公司 Dynamic load balancing method and system for container cluster

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101959236B (en) * 2009-07-13 2013-06-26 大唐移动通信设备有限公司 Traffic control method and device
CN103259743B (en) * 2012-02-15 2017-10-27 中兴通讯股份有限公司 The method and device of output flow control based on token bucket


Also Published As

Publication number Publication date
CN106533961A (en) 2017-03-22


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant