CN108089919B - Method and system for concurrently processing API (application program interface) requests

Info

Publication number
CN108089919B
CN108089919B
Authority
CN
China
Prior art keywords
current
task
triple
coroutine
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711395321.6A
Other languages
Chinese (zh)
Other versions
CN108089919A (en)
Inventor
向阳
金捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING YUNSHAN NETWORKS Inc
Original Assignee
BEIJING YUNSHAN NETWORKS Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING YUNSHAN NETWORKS Inc filed Critical BEIJING YUNSHAN NETWORKS Inc
Priority to CN201711395321.6A priority Critical patent/CN108089919B/en
Publication of CN108089919A publication Critical patent/CN108089919A/en
Application granted granted Critical
Publication of CN108089919B publication Critical patent/CN108089919B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5011Pool
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a method and a system for concurrently processing API requests. The method comprises the following steps: S1, if the current task corresponding to the current API request includes an I/O operation, generating a first triple of the current task by using the current service coroutine; S2, when the asynchronous waiting time of the current service coroutine reaches a preset duration, switching the scheduling logic of the main thread to other service coroutines meeting preset conditions, and acquiring the second triples generated by those service coroutines; S3, according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of the other tasks in the second triples, distributing the current task and the other tasks to waiting queues by using a scheduling thread; and S4, processing the current task and the other tasks by using task processing threads, and storing the processing results in the request result queues of the first triple and the second triples. The invention realizes order-preserving, high-concurrency processing of API requests.

Description

Method and system for concurrently processing API (application program interface) requests
Technical Field
The invention belongs to the technical field of network communication, and particularly relates to a method and a system for concurrently processing an API (application program interface) request.
Background
In order to achieve extensibility, module decoupling, and code reuse, a large software system generally adopts a service-oriented architecture, that is, the whole system is composed of API modules providing various services. The original API service architecture was implemented with a single process and a single thread. To improve the concurrency of services, multi-process, multi-thread, or coroutine implementations are commonly used.
The multi-process architecture suffers from complex inter-process communication and the difficulty of maintaining global resource locks, while the multi-thread architecture introduces context-switch overhead. In addition, both architectures face the problem of being unable to respond to API requests in order. For example, when different API requests for the same resource object arrive in sequence, the API server, lacking a context-switch control mechanism and a request queuing mechanism, is scheduled entirely at random by the operating system, so a later-arriving API request may be executed first, or requests may execute concurrently and interleaved, producing a result contrary to the caller's intent. The situation is more pronounced when I/O (Input/Output) processing is intensive; for example, for a time-consuming API that changes the state of a resource, the final state of the resource after two consecutive calls to the API is unpredictable.
A coroutine framework lets the programmer control the switching of execution logic, but when a large amount of asynchronous I/O is involved, such fine-grained control of the switching logic places high demands on the programmer and is error-prone. In addition, a large body of existing code, programs, and third-party libraries does not support asynchronous I/O and cannot be called from coroutines, which often causes a coroutine architecture to lose its concurrency.
Therefore, how to support order-preserving processing of API requests while maintaining the high concurrency of the API service is a problem that currently needs to be solved.
Disclosure of Invention
To overcome, or at least partially solve, the above problem that high-concurrency processing and order-preserving processing of API requests cannot be achieved simultaneously, the present invention provides a method and a system for concurrently processing API requests.
According to a first aspect of the present invention, there is provided a method for concurrently processing API requests, comprising:
S1, if the current task corresponding to the current API request includes an I/O operation, generating a first triple of the current task by using the current service coroutine, wherein the current service coroutine is created in advance for the current API request in a main thread;
S2, when the asynchronous waiting time of the current service coroutine reaches a preset duration, switching the scheduling logic of the main thread to other service coroutines meeting preset conditions, and acquiring second triples of other tasks corresponding to other API requests generated by those service coroutines;
S3, according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of the other tasks in the second triples, distributing the current task and the other tasks to waiting queues by using a pre-created scheduling thread;
and S4, processing the current task and the other tasks in each waiting queue by using pre-created task processing threads, and storing the processing results in the request result queues of the first triple and the second triples, wherein the task processing threads correspond one-to-one to the waiting queues.
Specifically, the step S1 is preceded by:
when the API service is started, creating a scheduling thread, a preset number of waiting queues and a preset number of task processing threads;
when a current API request reaches a main thread, a current service coroutine is created for the current API request in the main thread.
Specifically, the step S1 further includes:
and if the current task corresponding to the current API request does not comprise I/O operation, executing the current task by using the current service coroutine.
Specifically, the first triple includes the current task, a request result queue of the current API request, and an order-preserving identifier of the current task;
the second triple includes the other task, the request result queue of the other API request, and the order-preserving identifier of the other task.
Specifically, the step S2 specifically includes:
calculating the remaining waiting time of each other service coroutine;
and switching the scheduling logic of the main thread to the other service coroutine with the shortest remaining waiting time.
Specifically, the step S2 specifically includes:
detecting the request result queue in the first triple with the current service coroutine at every preset interval;
if the request result queue in the first triple contains a result, returning that result as the response to the API request, and terminating the current service coroutine; or
if the request result queue in the first triple contains no result, continuing to wait in the current service coroutine.
Specifically, the step S2 further includes:
respectively calculating the remaining waiting time of each other service coroutine and of the request coroutine in the main thread, wherein the request coroutine is used for receiving API requests;
and if the coroutine with the shortest remaining waiting time is the request coroutine, using the request coroutine to wait for and receive the next API request.
Specifically, the step S2 is followed by:
saving the first triple and the second triple to a scheduling queue;
correspondingly, the step S3 specifically includes:
and using the scheduling thread to sequentially acquire the order-preserving identifiers in the first triple and the second triple from the scheduling queue.
Specifically, the step S3 specifically includes:
if a first task whose order-preserving identifier matches one in the first triple and/or the second triple already exists in a waiting queue, distributing the current task or other task with the same order-preserving identifier as the first task to the waiting queue where the first task is located; or,
if a pre-created task processing thread is processing a second task whose order-preserving identifier matches one in the first triple and/or the second triple, distributing the current task or other task with the same order-preserving identifier as the second task to the waiting queue corresponding to the second task; or,
if no such first task exists in the waiting queues and no task processing thread is processing such a second task, distributing the current task and the other tasks to the waiting queue with the smallest length.
According to a second aspect of the present invention, there is provided a system for concurrently processing API requests, comprising:
the generating unit is used for generating a first triple of a current task by using a current service coroutine when the current task corresponding to the current API request comprises I/O operation, wherein the current service coroutine is created in a main thread for the current API request in advance;
the switching unit is used for switching the scheduling logic of the main thread to other service coroutines by using an asynchronous waiting mechanism when the asynchronous waiting time of the current service coroutine reaches a preset time length, and acquiring second triples of other tasks corresponding to other API requests generated by other service coroutines;
the allocation unit is used for allocating the current task and other tasks to each waiting queue by using a pre-established scheduling thread according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of other tasks in the second triple;
and the processing unit is used for processing the current task and other tasks in the corresponding waiting queues by using pre-created task processing threads, and storing the processing results in the request result queues of the first triple and the second triples, wherein the task processing threads correspond one-to-one to the waiting queues.
The invention provides a method and a system for concurrently processing API requests. The method combines coroutines with a thread-pool mechanism: the coroutine model lets the program actively control the scheduling logic, avoiding the out-of-order execution of API requests caused by random operating-system scheduling; coroutines are very lightweight with low switching overhead, and the constant size of the thread pool avoids the cost of repeated thread creation and destruction; the coroutine switching mechanism, together with the order-preserving identifier used in the thread pool, ensures that API requests on the same category of resources are processed without disorder; and the coroutines use only a single asynchronous wait function, without depending on asynchronous I/O library functions, thereby directly realizing order-preserving, high-concurrency processing of API requests.
Drawings
Fig. 1 is a schematic overall flowchart of a method for concurrently processing API requests according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an overall structure of a system for concurrently processing API requests according to an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail with reference to the accompanying drawings and examples. The following examples are intended to illustrate the invention but not to limit its scope.
In an embodiment of the present invention, a method for concurrently processing API requests is provided. Fig. 1 is a schematic overall flowchart of the method for concurrently processing API requests provided in the embodiment of the present invention. The method includes: S1, if the current task corresponding to the current API request includes an I/O operation, generating a first triple of the current task by using the current service coroutine, wherein the current service coroutine is created in advance for the current API request in a main thread; S2, when the asynchronous waiting time of the current service coroutine reaches a preset duration, switching the scheduling logic of the main thread to other service coroutines meeting preset conditions, and acquiring second triples of other tasks corresponding to other API requests generated by those service coroutines; S3, according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of the other tasks in the second triples, distributing the current task and the other tasks to waiting queues by using a pre-created scheduling thread; and S4, processing the current task and the other tasks in each waiting queue by using pre-created task processing threads, and storing the processing results in the request result queues of the first triple and the second triples, wherein the task processing threads correspond one-to-one to the waiting queues.
Specifically, in S1, the current API request is the API request currently received by the main thread. The current task corresponding to the current API request is the task that must be executed to produce the response to that request. Whether the current task includes an I/O operation is first judged, where I/O operations include file reads and writes, database reads and writes, network reads and writes, and the like. If the task includes an I/O operation, the current service coroutine generates the first triple of the current task, namely <current task F, request result queue Q, order-preserving identifier O>. The request result queue is used to store the result of the current API request. Order-preserving identifiers distinguish different categories of API requests; API requests of the same category share the same order-preserving identifier. The current service coroutine is created in advance for the current API request in the main thread.
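As a concrete illustration, the triple can be sketched in Python; this is a minimal sketch under the assumption of a Python implementation, and the names (Triple, make_triple) are illustrative rather than taken from the patent.

```python
import queue
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Triple:
    """<F, Q, O>: the unit handed from a service coroutine to the scheduler."""
    task: Callable[[], Any]      # F: the work the API request needs done
    order_key: str               # O: same key means same category of request
    results: queue.Queue = field(default_factory=queue.Queue)  # Q: request result queue

def make_triple(task: Callable[[], Any], order_key: str) -> Triple:
    """Step S1 sketch: wrap an I/O-bound task into a schedulable triple."""
    return Triple(task=task, order_key=order_key)
```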
In S2, when the asynchronous waiting time of the current service coroutine reaches a preset duration, for example after waiting a small time slice t, the asynchronous waiting mechanism switches the scheduling logic of the main thread to another service coroutine meeting the preset conditions, thereby achieving asynchronous processing of API requests. The other service coroutines are the service coroutines other than the current one; the second triples of the other tasks corresponding to the other API requests, generated by those coroutines, are acquired. The other API requests are the API requests other than the current one, and the other tasks are the tasks other than the current task.
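The switch can be sketched with Python's asyncio: awaiting a small time slice returns control to the event loop, which resumes whichever other service coroutine is ready. TIME_SLICE and the function name here are illustrative assumptions, not the patent's API.

```python
import asyncio

TIME_SLICE = 0.01  # the preset duration t; the value is only illustrative

async def service_coroutine(triple, schedule_queue):
    """Step S2 sketch: one service coroutine per API request."""
    schedule_queue.put(triple)           # hand the triple to the scheduling thread
    while triple.results.empty():        # request result queue still empty
        await asyncio.sleep(TIME_SLICE)  # asynchronous wait: the main thread's
                                         # scheduling logic switches to other coroutines
    return triple.results.get_nowait()   # result arrived: respond and terminate
```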
In S3, according to the order-preserving identifier of the current task in the first triple, the pre-created scheduling thread distributes the current task to the corresponding waiting queue; likewise, according to the order-preserving identifiers of the other tasks in the second triples, it distributes the other tasks to waiting queues. Since order-preserving identifiers distinguish different categories of API requests, current and other tasks of different categories are allocated to different waiting queues according to their identifiers.
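A dispatcher loop in this spirit might look like the sketch below. It uses a simplified sticky mapping from order-preserving identifier to queue index; the patent's finer three-case rule (first task queued, task in flight, otherwise shortest queue) is shown after the allocation rules below. All names are assumptions.

```python
import queue

def scheduler_loop(schedule_q: queue.Queue, wait_queues: list):
    """Step S3 sketch: a dedicated scheduling thread pops triples in arrival
    order and routes each to a wait queue keyed on its order-preserving id."""
    assignment = {}                                   # order_key -> queue index
    while True:
        triple = schedule_q.get()                     # blocks; keeps arrival order
        if triple is None:                            # sentinel to stop the sketch
            break
        idx = assignment.get(triple.order_key)
        if idx is None:                               # new category: shortest queue
            idx = min(range(len(wait_queues)),
                      key=lambda i: wait_queues[i].qsize())
            assignment[triple.order_key] = idx
        wait_queues[idx].put(triple)                  # same key, same queue, same order
```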
In S4, a task processing thread is created in advance for each waiting queue, with a one-to-one correspondence between task processing threads and waiting queues. Each task processing thread processes the current task and other tasks in its waiting queue, storing the processing result of the current task in the request result queue of the first triple and the processing results of the other tasks in the request result queues of the second triples.
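Each task processing thread can then be a plain worker loop over its own wait queue; because same-category tasks share a queue, they execute in arrival order. Again a hedged sketch with illustrative names.

```python
def worker_loop(wait_q):
    """Step S4 sketch: one task-processing thread per wait queue."""
    while True:
        triple = wait_q.get()
        if triple is None:            # sentinel to stop the sketch
            break
        result = triple.task()        # run the (possibly blocking) I/O task
        triple.results.put(result)    # deliver into the triple's result queue,
                                      # waking the waiting service coroutine
```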
For example, at API service process start-up, 1 scheduling thread, 16 waiting queues, and 16 task processing threads are created. Assume the current task corresponding to the current API request contains an I/O operation and is the first API request; its service coroutine generates a triple and adds it to the scheduling queue. Suppose the current API request is an API that configures the IP address of a network card: it is handled by service coroutine C1, and its order-preserving identifier O is set to the name of the network card. C1 asynchronously waits a small time slice t, and the asynchronous waiting mechanism switches the scheduling logic of the main thread to other coroutines, enabling concurrent processing of API requests. The main thread can then receive other requests, such as a host DNS configuration API request, for which service coroutine C2 is generated, with its order-preserving identifier O set to the string "DNS". The scheduling thread takes the triples <F, Q, O> of the current task and the other tasks from the scheduling queue in order and distributes each task to a waiting queue according to O. In this example the order-preserving identifiers of C1 and C2 differ, so the two tasks are allocated to different waiting queues and processed in parallel by different processing threads.
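Wiring the sketches above together, a toy run of this example (with the 16 queues reduced to 4 for brevity) might look like the following. It assumes the Triple, service_coroutine, scheduler_loop, and worker_loop definitions from the earlier sketches, and the two task functions merely stand in for the real network-card and DNS configuration APIs.

```python
import asyncio
import queue
import threading
import time

N = 4                                          # preset number of queues/workers
schedule_q = queue.Queue()
wait_queues = [queue.Queue() for _ in range(N)]
threading.Thread(target=scheduler_loop, args=(schedule_q, wait_queues),
                 daemon=True).start()          # the one scheduling thread
for wq in wait_queues:                         # one processing thread per queue
    threading.Thread(target=worker_loop, args=(wq,), daemon=True).start()

def set_ip():                                  # stand-in for the NIC IP API
    time.sleep(0.05)
    return "eth0: ip configured"

def set_dns():                                 # stand-in for the host DNS API
    time.sleep(0.05)
    return "host: dns configured"

async def main():
    c1 = service_coroutine(Triple(set_ip, order_key="eth0"), schedule_q)   # C1
    c2 = service_coroutine(Triple(set_dns, order_key="DNS"), schedule_q)   # C2
    print(await asyncio.gather(c1, c2))        # different keys: run in parallel

asyncio.run(main())
```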
In this embodiment, by combining coroutines with a thread-pool mechanism, the program actively controls the scheduling logic in the coroutine model, avoiding the out-of-order execution of API requests caused by random operating-system scheduling; coroutines are very lightweight with low switching overhead, and the constant size of the thread pool avoids the cost of repeated thread creation and destruction; the coroutine switching mechanism, together with the order-preserving identifier used in the thread pool, ensures that API requests on the same category of resources are processed without disorder; and the coroutines use only a single asynchronous wait function, without depending on asynchronous I/O library functions, thereby directly realizing order-preserving, high-concurrency processing of API requests.
On the basis of the foregoing embodiment, in this embodiment, the step S1 is preceded by: when the API service is started, creating a scheduling thread, a preset number of waiting queues, and the same preset number of task processing threads; and when the current API request reaches the main thread, creating the current service coroutine for the current API request in the main thread.
Specifically, at API service startup, multiple threads are created to achieve order-preserving, high-concurrency processing of API requests: one scheduling thread, a preset number of waiting queues, and the same preset number of task processing threads. The scheduling thread distributes the tasks corresponding to the API requests received by the main thread, including the current task and the other tasks, to the appropriate waiting queues. Each task processing thread processes the tasks in its corresponding waiting queue. The main thread waits for and receives the next API request; when the current API request arrives, a coroutine is generated for it in the main thread, denoted the current service coroutine C.
On the basis of the foregoing embodiment, step S1 in this embodiment further includes: if the current task corresponding to the current API request does not include an I/O operation, executing the current task by using the current service coroutine.
On the basis of the foregoing embodiments, in this embodiment, the first triple includes the current task, the request result queue of the current API request, and the order-preserving identifier of the current task; the second triple includes the other task, the request result queue of the other API request, and the order-preserving identifier of the other task.
On the basis of the foregoing embodiments, in this embodiment, the step S2 specifically includes: calculating the remaining waiting time of each other service coroutine; and switching the scheduling logic of the main thread to the other service coroutine with the shortest remaining waiting time.
Specifically, the remaining waiting duration of each other service coroutine is obtained by subtracting the time it has already waited from its preset total waiting duration.
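In code, the selection rule reduces to a minimum over remaining = total_wait - elapsed. The sketch below assumes each waiting coroutine is tracked with its preset total wait and the time it began waiting; the field names are illustrative. Under asyncio, this bookkeeping is essentially what the event loop's timer heap performs when several coroutines are sleeping.

```python
import time

def pick_next(waiters: list) -> dict:
    """Pick the coroutine entry with the shortest remaining waiting time.
    Each entry: {"coro": ..., "total_wait": seconds, "wait_start": monotonic}."""
    now = time.monotonic()
    def remaining(entry):
        # remaining = preset total wait minus time already waited, floored at 0
        return max(0.0, entry["total_wait"] - (now - entry["wait_start"]))
    return min(waiters, key=remaining)
```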
On the basis of the foregoing embodiments, in this embodiment, the step S2 specifically includes: detecting the request result queue in the first triple with the current service coroutine at every preset interval; if the request result queue in the first triple contains a result, returning that result as the response to the API request and terminating the current service coroutine; or, if the request result queue in the first triple contains no result, continuing to wait in the current service coroutine.
On the basis of the foregoing embodiments, in this embodiment, the step S2 further includes: respectively calculating the remaining waiting time of each other service coroutine and of the request coroutine in the main thread, wherein the request coroutine is used for receiving API requests; and if the coroutine with the shortest remaining waiting time is the request coroutine, using the request coroutine to wait for and receive the next API request.
Specifically, the main thread includes a request coroutine for receiving API requests and M service coroutines that are processing API requests, where the service coroutines include the current service coroutine and the other service coroutines. The remaining waiting times of each other service coroutine and of the request coroutine in the main thread are calculated respectively, and the main thread selects the coroutine with the shortest remaining waiting time. If that coroutine is the request coroutine, the request coroutine is used to wait for and receive the next API request.
On the basis of the foregoing embodiments, in this embodiment, after the step S2, the method further includes: saving the first triple and the second triple to a scheduling queue; correspondingly, the step S3 specifically includes: and using the scheduling thread to sequentially acquire the order-preserving identifiers in the first triple and the second triple from the scheduling queue.
On the basis of the foregoing embodiments, in this embodiment, the step S3 specifically includes: if a first task whose order-preserving identifier matches one in the first triple and/or the second triple already exists in a waiting queue, distributing the current task or other task with the same order-preserving identifier as the first task to the waiting queue where the first task is located; or, if a pre-created task processing thread is processing a second task whose order-preserving identifier matches one in the first triple and/or the second triple, distributing the current task or other task with the same order-preserving identifier as the second task to the waiting queue corresponding to the second task; or, if no such first task exists in the waiting queues and no task processing thread is processing such a second task, distributing the current task and the other tasks to the waiting queue with the smallest length.
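These three rules can be sketched as a queue-selection function. Here in_flight (a map from queue index to the order key its worker is currently running) and the peek at the queue's internal deque are illustrative shortcuts for a sketch, not production-safe code.

```python
import queue

def choose_queue(order_key: str, wait_queues: list, in_flight: dict) -> int:
    """Return the index of the wait queue a task with this order key goes to."""
    # Rule 1: a task with the same order-preserving identifier is already queued.
    for i, wq in enumerate(wait_queues):
        if any(t.order_key == order_key for t in list(wq.queue)):  # unlocked peek
            return i
    # Rule 2: a worker thread is currently processing a task with the same key.
    for i, key in in_flight.items():
        if key == order_key:
            return i
    # Rule 3: neither queued nor in flight, so the shortest queue wins.
    return min(range(len(wait_queues)), key=lambda i: wait_queues[i].qsize())
```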
In another embodiment of the present invention, a system for concurrently processing an API request is provided, and fig. 2 is a schematic diagram of an overall structure of the system for concurrently processing an API request provided in the embodiment of the present invention, where the system includes a generating unit 1, a switching unit 2, an allocating unit 3, and a processing unit 4, where:
The generating unit 1 is configured to generate a first triple of the current task by using the current service coroutine when the current task corresponding to the current API request includes an I/O operation, where the current service coroutine is created in advance for the current API request in the main thread; the switching unit 2 is configured to switch the scheduling logic of the main thread to other service coroutines by using the asynchronous waiting mechanism when the asynchronous waiting time of the current service coroutine reaches a preset duration, and to acquire the second triples of the other tasks corresponding to the other API requests generated by those coroutines; the allocation unit 3 is configured to allocate the current task and the other tasks to the waiting queues by using the pre-created scheduling thread, according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of the other tasks in the second triples; and the processing unit 4 is configured to process the current task and the other tasks in the corresponding waiting queues by using the pre-created task processing threads, and to store the processing results in the request result queues of the first triple and the second triples, where the task processing threads correspond one-to-one to the waiting queues.
Specifically, the current API request is the API request currently received by the main thread, and the current task is the task that must be executed to produce the response to that request. Whether the current task includes an I/O operation is judged, where I/O operations include file reads and writes, database reads and writes, network reads and writes, and the like. If it does, the generating unit 1 generates the first triple of the current task by using the current service coroutine, namely <current task F, request result queue Q, order-preserving identifier O>. The request result queue stores the result of the current API request. Order-preserving identifiers distinguish different categories of API requests, and API requests of the same category share the same identifier. The current service coroutine is created in advance for the current API request in the main thread.
When the asynchronous waiting time of the current service coroutine reaches a preset duration, for example after waiting a small time slice t, the switching unit 2 uses the asynchronous waiting mechanism to switch the scheduling logic of the main thread to another service coroutine meeting the preset conditions, thereby achieving asynchronous processing of API requests. The other service coroutines are the service coroutines other than the current one, and the second triples of the other tasks corresponding to the other API requests generated by those coroutines are acquired. The other API requests are the API requests other than the current one, and the other tasks are the tasks other than the current task.
The allocation unit 3 allocates the current task to a corresponding waiting queue by using a pre-created scheduling thread according to the order-preserving identifier of the current task in the first triple; and according to the order-preserving identifiers of other tasks in the second triple, distributing the other tasks to waiting queues by using a pre-created scheduling thread. Since the order-preserving identifiers are used for distinguishing different categories of API requests, different categories of current tasks and other tasks are allocated to different waiting queues according to the order-preserving identifiers.
A task processing thread is created in advance for each waiting queue, with a one-to-one correspondence between task processing threads and waiting queues. The processing unit 4 uses a task processing thread to process the current task and other tasks in its waiting queue, storing the processing result of the current task in the request result queue of the first triple and the processing results of the other tasks in the request result queues of the second triples.
In this embodiment, by combining coroutines with a thread-pool mechanism, the program actively controls the scheduling logic in the coroutine model, avoiding the out-of-order execution of API requests caused by random operating-system scheduling; coroutines are very lightweight with low switching overhead, and the constant size of the thread pool avoids the cost of repeated thread creation and destruction; the coroutine switching mechanism, together with the order-preserving identifier used in the thread pool, ensures that API requests on the same category of resources are processed without disorder; and the coroutines use only a single asynchronous wait function, without depending on asynchronous I/O library functions, thereby directly realizing order-preserving, high-concurrency processing of API requests.
On the basis of the foregoing embodiment, the system in this embodiment further includes a creating unit, configured to create a scheduling thread, a preset number of waiting queues, and the preset number of task processing threads when the API service is started; when a current API request reaches a main thread, a current service coroutine is created for the current API request in the main thread.
On the basis of the foregoing embodiment, in this embodiment, the generating unit is further configured to execute the current task by using the current service coroutine when the current task corresponding to the current API request does not include an I/O operation.
On the basis of the foregoing embodiments, in this embodiment, the first triple includes the current task, the request result queue of the current API request, and the order-preserving identifier of the current task; the second triple includes the other task, the request result queue of the other API request, and the order-preserving identifier of the other task.
On the basis of the foregoing embodiments, in this embodiment, the switching unit is specifically configured to: calculate the remaining waiting time of each other service coroutine; and switch the scheduling logic of the main thread to the other service coroutine with the shortest remaining waiting time.
On the basis of the foregoing embodiments, in this embodiment, the switching unit is specifically configured to: detect the request result queue in the first triple with the current service coroutine at every preset interval; if the request result queue in the first triple contains a result, return that result as the response to the API request and terminate the current service coroutine; or, if the request result queue in the first triple contains no result, continue waiting in the current service coroutine.
On the basis of the foregoing embodiments, in this embodiment, the switching unit is further configured to: respectively calculate the remaining waiting time of each other service coroutine and of the request coroutine in the main thread, wherein the request coroutine is used for receiving API requests; and if the coroutine with the shortest remaining waiting time is the request coroutine, use the request coroutine to wait for and receive the next API request.
On the basis of the foregoing embodiments, in this embodiment, the system further includes a storage unit, configured to store the first triple and the second triple in a scheduling queue; correspondingly, the allocating unit is specifically configured to use the scheduling thread to sequentially acquire the order-preserving identifiers in the first triple and the second triple from the scheduling queue.
On the basis of the foregoing embodiments, in this embodiment, the allocation unit is specifically configured to: when a first task whose order-preserving identifier matches one in the first triple and/or the second triple already exists in a waiting queue, distribute the current task or other task with the same order-preserving identifier as the first task to the waiting queue where the first task is located; or, when a pre-created task processing thread is processing a second task whose order-preserving identifier matches one in the first triple and/or the second triple, distribute the current task or other task with the same order-preserving identifier as the second task to the waiting queue corresponding to the second task; or, when no such first task exists in the waiting queues and no task processing thread is processing such a second task, distribute the current task and the other tasks to the waiting queue with the smallest length.
Finally, it should be noted that the above embodiments are only preferred embodiments of the present application and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in its protection scope.

Claims (9)

1. A method for concurrently processing API requests, comprising:
S1, if the current task corresponding to the current API request includes an I/O operation, generating a first triple of the current task by using the current service coroutine, wherein the current service coroutine is created in advance for the current API request in a main thread;
S2, when the asynchronous waiting time of the current service coroutine reaches a preset duration, switching the scheduling logic of the main thread to other service coroutines meeting preset conditions, and acquiring second triples of other tasks corresponding to other API requests generated by those service coroutines;
S3, according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of the other tasks in the second triples, distributing the current task and the other tasks to waiting queues by using a pre-created scheduling thread;
S4, processing the current task and the other tasks in each waiting queue using pre-created task processing threads, and storing the processing results in the request result queues of the first triple and the second triples, where the task processing threads correspond one-to-one to the waiting queues;
the step S1 is preceded by:
when the API service is started, creating a scheduling thread, a preset number of waiting queues and a preset number of task processing threads;
when a current API request reaches a main thread, a current service coroutine is created for the current API request in the main thread.
2. The method according to claim 1, wherein the step S1 further comprises:
and if the current task corresponding to the current API request does not comprise I/O operation, executing the current task by using the current service coroutine.
3. The method of claim 1 or 2, wherein the first triple comprises the current task, a request result queue of the current API request, and an order-preserving identifier of the current task;
the second triple includes the other task, the request result queue of the other API request, and the order-preserving identifier of the other task.
4. The method according to claim 1 or 2, wherein the step S2 specifically includes:
calculating the remaining waiting time of each other service coroutine;
and switching the scheduling logic of the main thread to the other service coroutine with the shortest remaining waiting time.
5. The method according to claim 1 or 2, wherein the step S2 specifically includes:
detecting the request result queue in the first triple with the current service coroutine at every preset interval;
if the request result queue in the first triple contains a result, returning that result as the response to the API request, and terminating the current service coroutine; or
if the request result queue in the first triple contains no result, continuing to wait in the current service coroutine.
6. The method according to claim 1 or 2, wherein the step S2 further comprises:
respectively calculating the remaining waiting time of each other service coroutine and of the request coroutine in the main thread, wherein the request coroutine is used for receiving API requests;
and if the coroutine with the shortest remaining waiting time is the request coroutine, using the request coroutine to wait for and receive the next API request.
7. The method according to claim 1 or 2, wherein the step S2 is further followed by:
saving the first triple and the second triple to a scheduling queue;
correspondingly, the step S3 specifically includes:
and using the scheduling thread to sequentially acquire the order-preserving identifiers in the first triple and the second triple from the scheduling queue.
8. The method according to claim 1 or 2, wherein the step S3 specifically includes:
if a first task whose order-preserving identifier matches one in the first triple and/or the second triple already exists in a waiting queue, distributing the current task or other task with the same order-preserving identifier as the first task to the waiting queue where the first task is located; or,
if a pre-created task processing thread is processing a second task whose order-preserving identifier matches one in the first triple and/or the second triple, distributing the current task or other task with the same order-preserving identifier as the second task to the waiting queue corresponding to the second task; or,
if no such first task exists in the waiting queues and no task processing thread is processing such a second task, distributing the current task and the other tasks to the waiting queue with the smallest length.
9. A system for concurrently processing API requests, comprising:
the generating unit is used for generating a first triple of a current task by using a current service coroutine when the current task corresponding to the current API request comprises I/O operation, wherein the current service coroutine is created in a main thread for the current API request in advance;
the switching unit is used for switching the scheduling logic of the main thread to other service coroutines by using an asynchronous waiting mechanism when the asynchronous waiting time of the current service coroutine reaches a preset time length, and acquiring second triples of other tasks corresponding to other API requests generated by other service coroutines;
the allocation unit is used for allocating the current task and other tasks to each waiting queue by using a pre-established scheduling thread according to the order-preserving identifier of the current task in the first triple and the order-preserving identifiers of other tasks in the second triple;
the processing unit is used for processing the current task and other tasks in the corresponding waiting queues by using pre-created task processing threads, and storing the processing results in the request result queues of the first triple and the second triples, wherein the task processing threads correspond one-to-one to the waiting queues;
the system also comprises a creating unit, a scheduling unit and a processing unit, wherein the creating unit is used for creating a scheduling thread, a preset number of waiting queues and a preset number of task processing threads when the API service is started; when a current API request reaches a main thread, a current service coroutine is created for the current API request in the main thread.
CN201711395321.6A 2017-12-21 2017-12-21 Method and system for concurrently processing API (application program interface) requests Active CN108089919B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711395321.6A CN108089919B (en) 2017-12-21 2017-12-21 Method and system for concurrently processing API (application program interface) requests

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711395321.6A CN108089919B (en) 2017-12-21 2017-12-21 Method and system for concurrently processing API (application program interface) requests

Publications (2)

Publication Number Publication Date
CN108089919A CN108089919A (en) 2018-05-29
CN108089919B true CN108089919B (en) 2021-01-15

Family

ID=62178035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711395321.6A Active CN108089919B (en) 2017-12-21 2017-12-21 Method and system for concurrently processing API (application program interface) requests

Country Status (1)

Country Link
CN (1) CN108089919B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990667B (en) * 2019-10-29 2023-06-23 内蒙古大学 Multi-end college student electronic file management system based on cooperative distance technology
CN114924849B (en) * 2022-04-27 2024-06-04 上海交通大学 High concurrency execution and resource scheduling method and device for industrial control system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6152612A (en) * 1997-06-09 2000-11-28 Synopsys, Inc. System and method for system level and circuit level modeling and design simulation using C++
CN102099826A (en) * 2008-07-14 2011-06-15 微软公司 Programming API for an extensible avatar system
CN104142858A (en) * 2013-11-29 2014-11-12 腾讯科技(深圳)有限公司 Blocked task scheduling method and device
CN105159774A (en) * 2015-07-08 2015-12-16 清华大学 API request order-preserving processing method and system
CN106980546A (en) * 2016-01-18 2017-07-25 阿里巴巴集团控股有限公司 A kind of task asynchronous execution method, apparatus and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8589925B2 (en) * 2007-10-25 2013-11-19 Microsoft Corporation Techniques for switching threads within routines

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6152612A (en) * 1997-06-09 2000-11-28 Synopsys, Inc. System and method for system level and circuit level modeling and design simulation using C++
CN102099826A (en) * 2008-07-14 2011-06-15 微软公司 Programming API for an extensible avatar system
CN104142858A (en) * 2013-11-29 2014-11-12 腾讯科技(深圳)有限公司 Blocked task scheduling method and device
CN105159774A (en) * 2015-07-08 2015-12-16 清华大学 API request order-preserving processing method and system
CN106980546A (en) * 2016-01-18 2017-07-25 阿里巴巴集团控股有限公司 A kind of task asynchronous execution method, apparatus and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于协程的高并发的分析与研究";刘书健;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170215(第02期);第I138-1416页 *

Also Published As

Publication number Publication date
CN108089919A (en) 2018-05-29

Similar Documents

Publication Publication Date Title
US10891158B2 (en) Task scheduling method and apparatus
US10003500B2 (en) Systems and methods for resource sharing between two resource allocation systems
CN112486648A (en) Task scheduling method, device, system, electronic equipment and storage medium
US7316017B1 (en) System and method for allocatiing communications to processors and rescheduling processes in a multiprocessor system
US8695004B2 (en) Method for distributing computing time in a computer system
US9448864B2 (en) Method and apparatus for processing message between processors
US9858241B2 (en) System and method for supporting optimized buffer utilization for packet processing in a networking device
CN103150213B (en) Balancing method of loads and device
US10686728B2 (en) Systems and methods for allocating computing resources in distributed computing
US20090165003A1 (en) System and method for allocating communications to processors and rescheduling processes in a multiprocessor system
KR101638136B1 (en) Method for minimizing lock competition between threads when tasks are distributed in multi-thread structure and apparatus using the same
CN113032125B (en) Job scheduling method, job scheduling device, computer system and computer readable storage medium
KR20150114444A (en) Method and system for providing stack memory management in real-time operating systems
JP2008186136A (en) Computer system
CN108089919B (en) Method and system for concurrently processing API (application program interface) requests
Reano et al. Intra-node memory safe gpu co-scheduling
US20140245308A1 (en) System and method for scheduling jobs in a multi-core processor
RU2494446C2 (en) Recovery of control of processing resource, which performs external context of execution
CN115495262A (en) Microkernel operating system and method for processing interprocess message
WO2021212967A1 (en) Task scheduling for distributed data processing
CN111290842A (en) Task execution method and device
CN115408117A (en) Coroutine operation method and device, computer equipment and storage medium
Hu et al. Real-time schedule algorithm with temporal and spatial isolation feature for mixed criticality system
JP2015141584A (en) information processing apparatus, information processing method and program
US20150363241A1 (en) Method and apparatus to migrate stacks for thread execution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant