CN111858002A - Concurrent processing method, system and device based on asynchronous IO - Google Patents
- Publication number
- CN111858002A CN111858002A CN202010686322.1A CN202010686322A CN111858002A CN 111858002 A CN111858002 A CN 111858002A CN 202010686322 A CN202010686322 A CN 202010686322A CN 111858002 A CN111858002 A CN 111858002A
- Authority
- CN
- China
- Prior art keywords
- service
- stack
- processing
- processing logic
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
Abstract
The invention discloses a concurrent processing method, system, and device based on asynchronous IO. After a service processing request is received, a service stack containing the corresponding service processing logic is created and added to a to-be-run service group. The service processing logic of each service stack is then run in turn, in the order in which the stacks were added to the to-be-run service group. When the service processing logic of a target service stack reaches a processing step that issues an IO request, the target service stack is suspended and the IO request is triggered asynchronously. After the IO request completes, a target service stack containing only the not-yet-run service processing logic is added back to the to-be-run service group, and this repeats until all of the target service stack's service processing logic has executed. Controlling the concurrency logic through asynchronous IO in this way exploits the system's service processing capacity as fully as possible and improves service processing efficiency.
Description
Technical Field
The present invention relates to the field of storage systems, and in particular, to a concurrent processing method, system and apparatus based on asynchronous IO.
Background
With the development of information technology, the volume of stored data keeps growing, which places ever higher demands on storage-system performance. At present, the technique commonly used to improve storage-system performance is to increase the IO (Input/Output) concurrency per unit time through multithreading combined with synchronous IO (that is, on top of multithreading, each individual thread issues its IO synchronously). However, when processing a synchronous IO request, a thread must wait for that request's result before it can handle the next request, which lengthens service processing time and limits the service processing capability.
How to solve the above technical problem is therefore an issue that those skilled in the art need to address.
Disclosure of Invention
The aim of the invention is to provide a concurrent processing method, system, and device based on asynchronous IO that control the concurrency logic through asynchronous IO: because an asynchronous IO request does not wait for its processing result before the next request is handled, the system's service processing capacity is exploited as fully as possible and service processing efficiency is improved.
In order to solve the technical problem, the invention provides a concurrent processing method based on asynchronous IO, which comprises the following steps:
after receiving a service processing request, creating a service stack containing the corresponding service processing logic and adding the service stack to a to-be-run service group;
running the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group;
when the service processing logic of a target service stack reaches a processing step that issues an IO request, suspending the target service stack and triggering the IO request asynchronously, the target service stack being any service stack in the to-be-run service group;
and after the IO request completes, adding a target service stack containing only the not-yet-run service processing logic to the to-be-run service group, until all of the target service stack's service processing logic has executed.
Preferably, running the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, includes:
running the service processing logic of each service stack in turn, in that order, using multiple threads.
Preferably, the concurrent processing method further includes:
when the target service stack is suspended, marking the target service stack as blocked, and deleting the stack marked as blocked from the to-be-run service group;
and when all of the target service stack's service processing logic has executed, marking the target service stack as finished, and deleting the stack marked as finished from the to-be-run service group.
Preferably, the to-be-run service group is a FIFO service queue.
Preferably, the concurrent processing method further includes:
when the target service stack is suspended, specifying the callback processing logic of the target service stack;
correspondingly, adding a target service stack containing only the not-yet-run service processing logic to the to-be-run service group after the IO request completes includes:
after the IO request completes, determining the not-yet-run service processing logic of the target service stack according to the callback processing logic, and adding a target service stack containing only that logic to the to-be-run service group.
Preferably, adding a target service stack containing only the not-yet-run service processing logic to the to-be-run service group includes:
adding the target service stack containing only the not-yet-run service processing logic to a FIFO queue;
and taking a service stack from the FIFO queue and adding it to the to-be-run service group.
In order to solve the above technical problem, the present invention further provides an asynchronous IO-based concurrent processing system, including:
a task management module, configured to create, after a service processing request is received, a service stack containing the corresponding service processing logic and add the service stack to a to-be-run service group;
a request asynchronous triggering module, configured to run the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, and, when the service processing logic of a target service stack reaches a processing step that issues an IO request, to suspend the target service stack and trigger the IO request asynchronously, the target service stack being any service stack in the to-be-run service group;
and a completed IO management module, configured to add, after the IO request completes, a target service stack containing only the not-yet-run service processing logic to the to-be-run service group, until all of the target service stack's service processing logic has executed.
Preferably, running the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, includes:
running the service processing logic of each service stack in turn, in that order, using multiple threads.
Preferably, the request asynchronous triggering module is further configured to specify the callback processing logic of the target service stack when the target service stack is suspended;
correspondingly, the completed IO management module is specifically configured to determine, after the IO request completes, the not-yet-run service processing logic of the target service stack according to the callback processing logic, and to add a target service stack containing only that logic to the to-be-run service group, until all of the target service stack's service processing logic has executed.
In order to solve the above technical problem, the present invention further provides a concurrent processing apparatus based on asynchronous IO, including:
a memory for storing a computer program;
a processor for implementing the steps of any one of the above asynchronous IO based concurrent processing methods when executing the computer program.
The invention provides a concurrent processing method based on asynchronous IO: after a service processing request is received, a service stack containing the corresponding service processing logic is created and added to a to-be-run service group; the service processing logic of each service stack is run in turn, in the order in which the stacks were added to the to-be-run service group; when the service processing logic of a target service stack reaches a processing step that issues an IO request, the target service stack is suspended and the IO request is triggered asynchronously; and after the IO request completes, a target service stack containing only the not-yet-run service processing logic is added back to the to-be-run service group, until all of the target service stack's service processing logic has executed. Controlling the concurrency logic through asynchronous IO in this way means the next request can be processed without waiting for the previous request's result, exploiting the system's service processing capacity as fully as possible and improving service processing efficiency.
The invention also provides a concurrent processing system and device based on asynchronous IO, and the concurrent processing system and device have the same beneficial effects as the concurrent processing method.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the prior art and the embodiments are briefly described below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a concurrent processing method based on asynchronous IO according to an embodiment of the present invention;
fig. 2 is a schematic diagram of concurrent processing based on asynchronous IO according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a concurrent processing method, system, and device based on asynchronous IO that control the concurrency logic through asynchronous IO: because an asynchronous IO request does not wait for its processing result before the next request is handled, the system's service processing capacity is exploited as fully as possible and service processing efficiency is improved.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a concurrent processing method based on asynchronous IO according to an embodiment of the present invention.
The concurrent processing method based on the asynchronous IO is applied to a storage system and comprises the following steps:
step S1: after receiving the service processing request, creating a service stack containing service processing logic according to the service processing request, and adding the service stack into the service group to be operated.
Specifically, after a service processing request is received, a service stack for processing that request is first created. The created service stack contains the service processing logic needed to handle the received service processing request.
After the service stack is created, it is added to the to-be-run service group (initially empty), from which the service processing logic of each service stack is subsequently run.
Step S2: run the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group.
Specifically, once created service stacks have been added to the to-be-run service group, the service processing logic of each stack in the group is run in the order in which the stacks were added: the stack added to the group first is processed first, and stacks added later are processed later.
For example, if service stack 1, service stack 2, and service stack 3 are added to the to-be-run service group in that order, i.e. the add order is service stack 1 → service stack 2 → service stack 3, then the processing order is also service stack 1 → service stack 2 → service stack 3.
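The add-order-equals-run-order behavior described above can be sketched in Python (the names and the `deque`-based group are illustrative assumptions, not the patent's implementation):

```python
from collections import deque

# Hypothetical to-be-run service group: stacks run in the order added.
to_be_run = deque(["service stack 1", "service stack 2", "service stack 3"])

processing_order = []
while to_be_run:
    # The earliest-added service stack is taken out and processed first.
    processing_order.append(to_be_run.popleft())
```

After the loop, `processing_order` matches the add order exactly.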
Step S3: when the service processing logic of the target service stack reaches a processing step that issues an IO request, suspend the target service stack and trigger the IO request asynchronously.
It should be noted that the target service stack refers to any service stack in the to-be-run service group.
Specifically, the service processing logic of a service stack generally contains several processing steps that issue IO requests; when such a step is run, it triggers the system to process the corresponding IO request.
On this basis, the asynchronous IO triggering process is explained using the target service stack as an example: when the service processing logic of the target service stack reaches a processing step that issues an IO request, the target service stack is suspended and the IO request is triggered asynchronously. Because an asynchronous IO request does not wait for its processing result, the next request can be handled immediately, exploiting the system's service processing capacity as fully as possible.
Step S4: after the IO request completes, add a target service stack containing only the not-yet-run service processing logic to the to-be-run service group, until all of the target service stack's service processing logic has executed.
Specifically, asynchronous IO requests, such as the deletion of an object in the storage system, are processed in the order in which they were triggered. After any asynchronous IO request completes, the service stack corresponding to that request is added back to the to-be-run service group so that its service processing logic can continue. Note that the stack added back contains only the not-yet-run service processing logic (which amounts to segmenting the service processing logic at the processing steps that issue IO requests), and this cycle repeats until all of the stack's service processing logic has executed.
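The segmentation of service processing logic at IO points can be sketched with Python generators (an illustrative analogy under assumed names, not the patent's implementation — each `yield` plays the role of a suspension at an IO request):

```python
from collections import deque

def service_logic(trace):
    # Each `yield` marks a processing step that issues an IO request;
    # the code after it is the not-yet-run part of the service stack.
    trace.append("segment 1")
    yield "io request 1"
    trace.append("segment 2")
    yield "io request 2"
    trace.append("segment 3")  # final segment: no further IO

trace = []
to_be_run = deque([service_logic(trace)])
while to_be_run:
    stack = to_be_run.popleft()
    try:
        io_request = next(stack)  # run until the stack suspends at an IO point
        # ... here the IO request would be triggered asynchronously ...
        to_be_run.append(stack)   # once the IO completes, re-add the remainder
    except StopIteration:
        pass                      # all service processing logic has executed
```

The loop runs each segment in turn, re-queuing only the remaining logic after every simulated IO completion.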
The invention provides a concurrent processing method based on asynchronous IO: after a service processing request is received, a service stack containing the corresponding service processing logic is created and added to a to-be-run service group; the service processing logic of each service stack is run in turn, in the order in which the stacks were added to the to-be-run service group; when the service processing logic of a target service stack reaches a processing step that issues an IO request, the target service stack is suspended and the IO request is triggered asynchronously; and after the IO request completes, a target service stack containing only the not-yet-run service processing logic is added back to the to-be-run service group, until all of the target service stack's service processing logic has executed. Controlling the concurrency logic through asynchronous IO in this way means the next request can be processed without waiting for the previous request's result, exploiting the system's service processing capacity as fully as possible and improving service processing efficiency.
On the basis of the above-described embodiment:
as an optional embodiment, the process of sequentially running the service processing logic of each service stack according to the adding order of the service stacks in the service group to be run includes:
and sequentially operating the service processing logic of each service stack by adopting a multithreading operation mode according to the adding sequence of the service stacks in the service group to be operated.
Specifically, the application can run the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, using multiple threads, which improves the processing efficiency of the stacks. For example, suppose service stack 1, service stack 2, and service stack 3 are added to the to-be-run service group in that order, and three threads (thread 1, thread 2, and thread 3) are used: after service stack 1 is added to the group, it is assigned to thread 1 for processing; after service stack 2 is added, it is assigned to thread 2; and after service stack 3 is added, it is assigned to thread 3.
Of course, the application can also run the service processing logic of each service stack in add order using a single thread. For example, with the same three service stacks and a single thread (thread 1): after service stack 1 is added to the to-be-run service group, it is assigned to thread 1 for processing; after service stack 2 is added and service stack 1 has been suspended, service stack 2 is assigned to thread 1; and after service stack 3 is added and service stack 2 has been suspended, service stack 3 is assigned to thread 1.
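The multithreaded dispatch can be sketched with a thread pool (a minimal sketch with hypothetical names; real service stacks would carry processing logic rather than return a string):

```python
from concurrent.futures import ThreadPoolExecutor

def run_stack(name):
    # Stand-in for running one service stack's processing logic.
    return name + ": done"

# Each stack taken from the to-be-run service group, in add order,
# is handed to one of three worker threads.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_stack, n)
               for n in ("service stack 1", "service stack 2", "service stack 3")]
    results = [f.result() for f in futures]  # collected in submission order
```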
As an optional embodiment, the concurrent processing method further includes:
when the target service stack is suspended, marking the target service stack as blocked, and deleting the target service stack marked as blocked from the to-be-run service group;
and when all of the target service stack's service processing logic has executed, marking the target service stack as finished, and deleting the target service stack marked as finished from the to-be-run service group.
Further, once the target service stack has been suspended, a target service stack containing only the not-yet-run service processing logic will later be added back to the to-be-run service group, and the copy of the target service stack previously added to the group is no longer used. The application therefore marks the target service stack as blocked when it is suspended and deletes the stack marked as blocked from the to-be-run service group.
Likewise, once all of the target service stack's processing logic has executed, the stack in the to-be-run service group is no longer used. The application therefore marks the target service stack as finished at that point and deletes the stack marked as finished from the to-be-run service group, releasing its stack resources.
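The two-state marking and removal described above can be sketched as follows (the enum values and dict-based group are illustrative assumptions):

```python
from enum import Enum

class StackState(Enum):
    BLOCKED = "blocked"    # suspended, waiting for an asynchronous IO request
    FINISHED = "finished"  # all service processing logic has executed

# Hypothetical to-be-run service group, keyed by stack name.
to_be_run = {"target stack": object()}

def mark_and_remove(name, state):
    # Mark the stack and delete it from the to-be-run service group;
    # for FINISHED stacks this also frees the stack's resources.
    to_be_run.pop(name, None)
    return state

state = mark_and_remove("target stack", StackState.BLOCKED)
```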
As an alternative embodiment, the to-be-run service group is a FIFO service queue.
Specifically, the to-be-run service group of the present application may be a First In, First Out (FIFO) service queue. The FIFO service queue has a fixed capacity: once the queue is full, adding a new element automatically evicts the element that entered the queue first. Thus, as service stacks (newly created stacks plus stacks with unprocessed logic) are added to the FIFO service queue, stacks that entered earlier and are no longer used are automatically removed, releasing stack resources.
As an optional embodiment, the concurrent processing method further includes:
when the target service stack is suspended, specifying the callback processing logic of the target service stack;
correspondingly, adding a target service stack containing only the not-yet-run service processing logic to the to-be-run service group after the IO request completes includes:
after the IO request completes, determining the not-yet-run service processing logic of the target service stack according to the callback processing logic, and adding a target service stack containing only that logic to the to-be-run service group.
Further, the present application may specify the callback processing logic of the target service stack when the stack is suspended, that is, specify where the target service stack's service processing logic will resume the next time it runs. On this basis, after an IO request corresponding to the target service stack completes, the application determines the not-yet-run service processing logic according to the callback processing logic specified at the last suspension, and adds a target service stack containing only that logic back to the to-be-run service group, so that processing of the target service stack's service processing logic continues.
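One simple way to model such a callback is as a recorded resume index (a hypothetical sketch; the patent does not prescribe this representation):

```python
class ServiceStack:
    def __init__(self, segments):
        self.segments = segments
        self.resume_at = 0  # callback-specified point: first not-yet-run segment

    def run_next_segment(self):
        result = self.segments[self.resume_at]()
        self.resume_at += 1  # the next resume starts after this segment
        return result

# Three segments standing in for logic split at IO-request points.
stack = ServiceStack([lambda: "a", lambda: "b", lambda: "c"])
outputs = [stack.run_next_segment() for _ in range(3)]
```

Each call resumes exactly where the previous suspension left off, which is what the callback processing logic records.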
As an alternative embodiment, adding a target service stack containing only the not-yet-run service processing logic to the to-be-run service group includes:
adding the target service stack containing only the not-yet-run service processing logic to the FIFO queue;
and taking a service stack from the FIFO queue and adding it to the to-be-run service group.
Specifically, after an IO request corresponding to the target service stack completes, the target service stack containing only the not-yet-run service processing logic may be added to the FIFO queue; following the first-in, first-out rule of the FIFO queue, the service stack that entered the queue first is then added to the to-be-run service group, and the next round of the processing cycle begins.
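The hand-off from the completed-IO FIFO back into the to-be-run service group can be sketched as (names are illustrative assumptions):

```python
from collections import deque

completed_io_fifo = deque()  # stacks whose IO request has just completed
to_be_run = deque()          # the to-be-run service group

# A stack remainder enters the FIFO when its IO completes...
completed_io_fifo.append("target stack (remaining logic)")
# ...and the dispatcher moves the FIFO head into the to-be-run group
# for the next round of the processing cycle.
to_be_run.append(completed_io_fifo.popleft())
```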
In summary, the concurrent processing method based on asynchronous IO can be implemented by programming a processor; an FPGA (Field-Programmable Gate Array) chip, with its high timing precision and flexible algorithm adjustment, can serve as the processor. Its functional principle is shown in fig. 2: the task management module creates, after a service processing request is received, a service stack containing the corresponding service processing logic and adds it to the to-be-run service group; the request asynchronous triggering module runs the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, and, when the service processing logic of the target service stack reaches a processing step that issues an IO request, suspends the target service stack and triggers the IO request asynchronously; the IO request processing module processes the IO requests from the request asynchronous triggering module and, after an IO request completes, hands the corresponding service stack containing only the not-yet-run service processing logic to the completed IO management module; and the completed IO management module adds that service stack to the to-be-run service group to enter the next round of the processing cycle.
The application also provides a concurrent processing system based on asynchronous IO, including:
a task management module, configured to create, after a service processing request is received, a service stack containing the corresponding service processing logic and add the service stack to a to-be-run service group;
a request asynchronous triggering module, configured to run the service processing logic of each service stack in turn, in the order in which the stacks were added to the to-be-run service group, and, when the service processing logic of the target service stack reaches a processing step that issues an IO request, to suspend the target service stack and trigger the IO request asynchronously, the target service stack being any service stack in the to-be-run service group;
and a completed IO management module, configured to add, after the IO request completes, a target service stack containing only the not-yet-run service processing logic to the to-be-run service group, until all of the target service stack's service processing logic has executed.
As an optional embodiment, the process of sequentially running the service processing logic of each service stack according to the adding order of the service stacks in the service group to be run includes:
and sequentially operating the service processing logic of each service stack by adopting a multithreading operation mode according to the adding sequence of the service stacks in the service group to be operated.
As an optional embodiment, the request asynchronous triggering module is further configured to specify a callback processing logic of the target service stack when the target service stack suspends operation;
correspondingly, the completed IO management module is specifically configured to, after the IO request is processed, determine the not-yet-run service processing logic corresponding to the target service stack according to the callback processing logic, and add the target service stack containing only that service processing logic to the service group to be run, until all the service processing logic of the target service stack has been executed.
For an introduction to the concurrent processing system provided in the present application, reference is made to the above embodiments of the concurrent processing method; details are not repeated herein.
The application also provides a concurrent processing device based on asynchronous IO, including:
a memory for storing a computer program;
a processor for implementing the steps of any one of the above asynchronous IO based concurrent processing methods when executing a computer program.
For an introduction to the concurrent processing apparatus provided in the present application, reference is made to the above embodiments of the concurrent processing method; details are not repeated herein.
It is further noted that, in the present specification, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A concurrent processing method based on asynchronous IO is characterized by comprising the following steps:
after receiving a service processing request, creating a service stack containing service processing logic according to the service processing request, and adding the service stack into a service group to be operated;
sequentially operating the service processing logic of each service stack according to the adding sequence of the service stacks in the service group to be operated;
when the service processing logic of a target service stack runs to the processing logic representing the sending of an IO request, suspending the target service stack, and triggering the processing of the IO request in an asynchronous IO manner; the target service stack is any service stack in the service group to be operated;
and after the IO request is processed, adding the target service stack containing only the service processing logic which has not been run to the service group to be operated, until all the service processing logic of the target service stack has been executed.
2. The asynchronous IO-based concurrent processing method according to claim 1, wherein a process of sequentially running the service processing logic of each service stack according to an addition order of the service stacks in the service group to be run comprises:
and sequentially operating the service processing logic of each service stack in a multithreading operation mode according to the adding sequence of the service stacks in the service group to be operated.
3. The asynchronous IO based concurrent processing method according to claim 1, wherein the concurrent processing method further comprises:
when the target service stack is suspended from running, marking the target service stack as a blocking state, and deleting the target service stack marked as the blocking state from the service group to be run;
and when the execution of all the service processing logics of the target service stack is finished, marking the target service stack as a finished state, and deleting the target service stack marked as the finished state from the service group to be operated.
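The state marking of claim 3 can be sketched as a small state table alongside the to-be-run group: a suspended stack is marked "blocking" and removed from the group; a stack whose logic has fully executed is marked "finished" and removed as well. The state names and stack identifiers are illustrative assumptions.

```python
from collections import deque

BLOCKING, FINISHED = "blocking", "finished"

to_run = deque(["stack-A", "stack-B"])   # service group to be operated
states = {}

def suspend(stack):
    # stack hit its IO-sending step: mark blocking, delete from the group
    states[stack] = BLOCKING
    to_run.remove(stack)

def finish(stack):
    # all service processing logic executed: mark finished, delete from the group
    states[stack] = FINISHED
    if stack in to_run:
        to_run.remove(stack)

suspend("stack-A")                       # stack-A waits on asynchronous IO
finish("stack-B")                        # stack-B ran all of its logic
```

Keeping blocked and finished stacks out of the to-be-run group means the scheduler only ever iterates over stacks that can actually make progress.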
4. The asynchronous IO based concurrent processing method according to claim 1, wherein the service group to be operated is a first-in-first-out (FIFO) service queue.
5. The asynchronous IO based concurrent processing method according to any one of claims 1 to 4, wherein the concurrent processing method further comprises:
when the target service stack is suspended from running, appointing a callback processing logic of the target service stack;
correspondingly, after the IO request is processed, adding a target service stack including only the non-running service processing logic to the service group to be run includes:
and after the IO request is processed, determining the non-running service processing logic corresponding to the target service stack according to the callback processing logic, and adding the target service stack only containing the non-running service processing logic into the service group to be run.
6. The asynchronous IO based concurrent processing method according to claim 5, wherein the process of adding the target service stack containing only the service processing logic which has not been run to the service group to be operated comprises:
adding a target service stack containing only non-running service processing logic into the FIFO queue;
and acquiring a service stack from the FIFO queue and adding the acquired service stack to the service group to be operated.
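The two-step handoff of claim 6 can be sketched with two FIFO queues: stacks whose IO has completed first enter an intermediate FIFO queue, which the scheduler then drains back into the to-be-run service group, preserving IO-completion order. Queue and stack names are illustrative assumptions.

```python
from collections import deque

completed_fifo = deque()            # intermediate FIFO queue of claim 6
to_run = deque()                    # service group to be operated

def on_io_done(stack):
    completed_fifo.append(stack)    # step 1: completed stack enters the FIFO queue

def drain():
    while completed_fifo:           # step 2: FIFO queue -> to-be-run service group
        to_run.append(completed_fifo.popleft())

on_io_done("stack-2")               # IO for stack-2 completed first
on_io_done("stack-1")
drain()
```

Using a separate completion queue decouples the IO-completion path from the scheduler's run loop: IO handlers only ever append, and the scheduler moves stacks over at a point of its own choosing.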
7. An asynchronous IO-based concurrent processing system, comprising:
the task management module is used for creating a service stack containing service processing logic according to the service processing request after receiving the service processing request, and adding the service stack into a service group to be operated;
the request asynchronous triggering module is used for sequentially running the service processing logic of each service stack according to the order in which the service stacks were added to the service group to be operated, suspending the target service stack when its service processing logic runs to the processing logic representing the sending of an IO request, and triggering the processing of the IO request in an asynchronous IO manner; the target service stack is any service stack in the service group to be operated;
and the completed IO management module is used for adding a target service stack only containing the service processing logic which is not operated to the service group to be operated after the IO request is processed, until the execution of all the service processing logics of the target service stack is finished.
8. The asynchronous IO based concurrent processing system according to claim 7, wherein the process of sequentially running the service processing logic of each service stack according to the adding order of the service stacks in the service group to be run comprises:
and sequentially running the service processing logic of each service stack in a multithreaded manner according to the order in which the service stacks were added to the service group to be operated.
9. The asynchronous IO based concurrent processing system according to claim 7, wherein the request asynchronous triggering module is further configured to specify the callback processing logic of the target service stack when the target service stack is suspended;
correspondingly, the completed IO management module is specifically configured to, after the IO request is processed, determine the not-yet-run service processing logic corresponding to the target service stack according to the callback processing logic, and add the target service stack containing only that service processing logic to the service group to be operated, until all the service processing logic of the target service stack has been executed.
10. An asynchronous IO-based concurrent processing apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the asynchronous IO based concurrent processing method according to any of claims 1 to 6 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010686322.1A CN111858002B (en) | 2020-07-16 | 2020-07-16 | Concurrent processing method, system and device based on asynchronous IO |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111858002A true CN111858002A (en) | 2020-10-30 |
CN111858002B CN111858002B (en) | 2022-12-23 |
Family
ID=72983644
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010686322.1A Active CN111858002B (en) | 2020-07-16 | 2020-07-16 | Concurrent processing method, system and device based on asynchronous IO |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111858002B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104932932A (en) * | 2014-03-20 | 2015-09-23 | 腾讯科技(深圳)有限公司 | Asynchronous business processing method, device and system |
CN106371900A (en) * | 2015-07-23 | 2017-02-01 | 腾讯科技(深圳)有限公司 | Data processing method and device for realizing asynchronous call |
CN111190727A (en) * | 2019-11-19 | 2020-05-22 | 腾讯科技(深圳)有限公司 | Asynchronous memory destructuring method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||