CN117873742A - Asynchronous calling method and system - Google Patents

Asynchronous calling method and system

Info

Publication number
CN117873742A
CN117873742A (application CN202311267712.5A)
Authority
CN
China
Prior art keywords
task
thread
threads
message queue
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311267712.5A
Other languages
Chinese (zh)
Inventor
胡卓
詹马俊
刘鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN202311267712.5A priority Critical patent/CN117873742A/en
Publication of CN117873742A publication Critical patent/CN117873742A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application relates to an asynchronous call method comprising the following steps: after receiving a connection request from a requester, a forward thread in a first plurality of threads listens for events of the requester and pushes a first task containing a file descriptor fd into a first message queue; a forward processing thread in a second plurality of threads waits to preempt the first task in the first message queue; and when the first task comprises an outbound call, the forward processing thread saves a context containing the file descriptor fd to a red-black tree and selects an available connection resource from a communication resource pool to send a request to an outbound system. The application also relates to an asynchronous call system, a computer-readable storage medium, and a server.

Description

Asynchronous calling method and system
Technical Field
The present application relates to the field of network technology, and more particularly, to an asynchronous call method and system, a computer-readable storage medium, and a server.
Background
The explosive growth of mobile payment has led numerous PC and mobile clients with payment needs to connect to payment systems, and the resulting flood of concurrent requests places excessive pressure on the payment system's back end. In particular, operations such as cross-system calls, database access, and disk reads and writes can incur long call waits that affect other normal processing logic and block the system.
In back-end applications, a common approach is to set a timeout and allocate a fixed amount of thread resources (generally thousands) for service processing. However, thread creation, context switching, and destruction all add considerable overhead, so overusing such asynchronous operations degrades system performance and leaves the server unable to cope with massively concurrent service scenarios.
Fig. 5 shows a structural diagram of a conventional asynchronous framework. As shown in fig. 5, the asynchronous framework is a thread pool + resource pool model. Specifically, whenever a request reaches the application framework, the framework fetches a free thread from the thread pool or creates a new one; the processing passes through several stages of logic but occupies that thread until processing completes; and the thread is blocked whenever disk reads and writes, database accesses, or cross-system calls occur.
It can be seen that this existing asynchronous scheme is not efficient enough to handle complex scenarios involving multiple connection timeouts. When the connection resources in the communication resource pool differ from one another and become unstable or frequently unavailable, the limited number of service threads means that an excess of timed-out connections easily blocks the service threads on long-unavailable communication resources, dragging down the whole application.
Disclosure of Invention
In response to one or more problems existing in the existing solutions described above, embodiments of the present application propose an asynchronous call solution based on a communication resource pool and read-write separation.
According to one aspect of the present application, there is provided an asynchronous call method, the method comprising: after receiving a connection request from a requester, a forward thread in a first plurality of threads monitors an event of the requester and pushes a first task containing a file descriptor fd into a first message queue; the forward processing thread in the second plurality of threads waits to preempt the first task in the first message queue; and when the first task comprises an outbound call, the forward processing thread saves the context comprising the file descriptor fd to a red-black tree and selects available connection resources from a pool of communication resources to send a request to an outbound system.
Additionally or alternatively to the above, in the above method, the forward thread corresponds to a connection request of the requestor, and the forward thread is configured to maintain and snoop on events of the requestor.
Additionally or alternatively to the above, in the above method, the forward processing thread waits to preempt the first task in the first message queue using a non-blocking epoll_wait function.
Additionally or alternatively to the above, in the above method, the forward processing thread is released after saving the context to the red-black tree and sending a request to an external system.
Additionally or alternatively to the above, the method further comprises: a backward thread in a third plurality of threads monitors events of the external system and pushes a second task to a second message queue after the external system gives a response; a backward processing thread in a fourth plurality of threads waits to preempt the second task in the second message queue and obtains the context of the first task by querying the red-black tree; and the backward processing thread replies to the requestor.
Additionally or alternatively to the above, in the above method, the second task includes the file descriptor fd, and the backward processing thread deletes a corresponding node in the red-black tree after acquiring a context of the first task.
Additionally or alternatively to the above, in the above method, the responding by the backward processing thread to the requestor includes: the backward processing thread responds directly to the requestor by writing to the file descriptor fd.
Additionally or alternatively to the above, in the above method, the communication resource pool includes a plurality of connection resources, and nodes of a min-heap (small-root heap) are used to store abnormal resources.
In addition or alternatively to the above solution, in the above method, the key of a node is the absolute time at which the cleaning thread should retrieve it, and the value of the node is the corresponding connection resource.
Additionally or alternatively to the above, in the above method, the absolute time is the current time plus a pending timeout period, wherein the pending timeout period differs according to the abnormal resource.
Additionally or alternatively to the above, in the above method, the cleaning thread is configured to obtain the root node from the min-heap and determine whether the root node needs to be processed by comparing the key in the root node with the current time.
According to another aspect of the present application, there is provided an asynchronous call system, the system comprising: a communication resource pool including a plurality of connection resources; a forward thread in a first plurality of threads, configured to monitor events of a requester after receiving a connection request from the requester, and push a first task including a file descriptor fd to a first message queue; the first message queue, used for storing the first task; and a forward processing thread in a second plurality of threads for waiting to preempt the first task in the first message queue, wherein when the first task comprises an outbound call, the forward processing thread is configured to save a context comprising the file descriptor fd to a red-black tree and select an available connection resource from the pool of communication resources for sending a request to an outbound system.
Additionally or alternatively to the above, in the above system, the forward thread corresponds to a connection request of the requestor, and the forward thread is configured to maintain and snoop on events of the requestor.
Additionally or alternatively to the above, in the above system, the forward processing thread is configured to utilize a non-blocking epoll_wait function to wait to preempt the first task in the first message queue.
Additionally or alternatively to the above, in the above system, the forward processing thread is released after saving the context to the red-black tree and sending a request to an external system.
Additionally or alternatively to the above, the system further comprises: the backward thread in the third plurality of threads is used for monitoring the event of the external system and pushing the second task to a second message queue after the external system gives a response; the second message queue is used for storing the second task; and a backward processing thread in a fourth plurality of threads, configured to wait for preempting a second task in the second message queue, acquire a context of the first task by querying the red-black tree, and answer to the requester.
Additionally or alternatively to the above, in the above system, the second task includes the file descriptor fd, and the backward processing thread is configured to delete a corresponding node in the red-black tree after acquiring the context of the first task.
Additionally or alternatively to the above, in the above system, the backward processing thread is configured to directly reply to the requestor by writing the file descriptor fd.
In addition or alternatively, in the above system, abnormal resources among the plurality of connection resources are stored via nodes of a min-heap.
In addition or alternatively to the above solution, in the above system, the key of a node is the absolute time at which the cleaning thread should retrieve it, and the value of the node is the corresponding connection resource.
According to yet another aspect of the present application, there is provided a computer readable storage medium comprising instructions that, when executed, perform the asynchronous call method as described above.
According to yet another aspect of the present application, a server is provided, the server comprising an asynchronous call system as described above.
Compared with the prior art, the asynchronous call scheme of the embodiments of the present application implements asynchronous calls based on message queues and a red-black tree, reduces the thread-switching overhead of conventional techniques, and offers extremely high execution efficiency and a clear performance advantage.
In addition, the asynchronous call scheme of the embodiments of the present application saves the execution context once the request has been issued and immediately releases the thread resource; after a response is received, it retrieves the original task (for example, by task ID) and assigns an idle thread to continue processing the subsequent business logic, making the scheme highly practical.
Furthermore, the asynchronous call scheme achieves read-write separation by having different threads handle the read and write events of different types of fd, thereby decoupling the outbound calls. This prevents business threads from blocking when a downstream system (such as an external system) times out on a large scale, improving the reliability and stability of the whole asynchronous call system.
Finally, in an embodiment of the present application, the asynchronous call scheme may isolate normal resources from abnormal ones by adding an error-resource min-heap, and may precisely control error-resource handling by supporting a different processing time for each kind of abnormal resource.
Drawings
The foregoing and other objects and advantages of the application will be apparent from the following detailed description taken in conjunction with the accompanying drawings in which like or similar elements are designated by the same reference numerals.
FIG. 1 illustrates a flow diagram of an asynchronous call method according to one embodiment of the present application;
FIG. 2 illustrates a schematic diagram of an asynchronous call system, according to one embodiment of the present application;
fig. 3 shows a schematic diagram of a communication resource pool according to an embodiment of the present application;
FIG. 4 illustrates a schematic diagram of an asynchronous call architecture, according to one embodiment of the present application; and
fig. 5 shows a structural diagram of a conventional asynchronous framework.
Detailed Description
The present application is described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the application are shown. This application may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the application to those skilled in the art.
In this specification, terms such as "comprising" and "including" do not exclude the presence of elements and steps not directly or explicitly recited in the description and claims.
Unless specifically stated otherwise, terms such as "first," "second," "third," and "fourth" do not denote a sequential order of elements in terms of time, space, size, etc., but rather are merely used to distinguish one element from another. Moreover, the terms "first," "second," and the like, are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In addition, it should be noted that, without conflict, the embodiments and features of the embodiments in the present application may be combined with each other.
Hereinafter, an asynchronous call scheme according to various exemplary embodiments of the present application will be described in detail with reference to the accompanying drawings.
FIG. 1 illustrates a flow diagram of an asynchronous call method 1000 according to one embodiment of the present application. As shown in fig. 1, the asynchronous call method 1000 includes:
IN step S110, after receiving a connection request (e.g., a new connection request) from a requester, a forward thread of a first plurality of threads listens for an event (e.g., an IN event) of the requester and pushes a first task including a file descriptor fd into a first message queue;
in step S120, a forward processing thread of the second plurality of threads waits to preempt the first task in the first message queue; and
in step S130, when the first task contains an outbound call, the forward processing thread saves the context containing the file descriptor fd to the red-black tree and selects an available connection resource from the communication resource pool to send a request to an outbound system.
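The three steps above can be sketched as a rough Python simulation. This is an illustration, not the patented implementation: queue.Queue stands in for the first message queue, an ordinary dict stands in for the red-black tree of saved contexts, and the resource pool and all names are hypothetical.

```python
import queue

first_message_queue = queue.Queue()   # stands in for the first message queue
context_tree = {}                     # fd -> saved context (red-black tree in the patent)
resource_pool = ["conn-A", "conn-B"]  # simplified communication resource pool

def forward_thread(fd):
    """S110: on a requester event, push a first task containing the fd."""
    first_message_queue.put({"fd": fd, "outbound": True})

def forward_handle_thread():
    """S120/S130: preempt a task; for an outbound call, save the context
    keyed by fd and select an available connection resource."""
    task = first_message_queue.get_nowait()
    if task["outbound"]:
        context_tree[task["fd"]] = {"task": task}  # save context before releasing
        conn = resource_pool.pop()                 # take a free connection resource
        return conn                                # "send" the request downstream
    return None

forward_thread(fd=7)
used = forward_handle_thread()
```

After the handler runs, the context for fd 7 remains retrievable even though no thread is still occupied by the call.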
In the context of this application, the term "thread" refers to the smallest unit of execution that an operating system can schedule. A thread is contained in a process and is the actual unit of operation within the process. A thread may be a kernel thread scheduled by the operating system kernel, such as a Win32 thread; a user thread scheduled by the user process itself, such as a POSIX thread on the Linux platform; or a thread scheduled by the kernel jointly with user processes, as in Windows 7. Multiple threads in the same process share all of the process's system resources, such as the virtual address space, file descriptors, and signal handling. However, each thread in a process has its own call stack, its own register context, and its own thread-local storage.
A forward thread is located in the first plurality of threads. For example, referring to FIG. 4, in one embodiment, the first plurality of threads includes two forward threads. These forward threads are used to snoop the event of the requestor (e.g., an IN event). Of course, those skilled in the art will appreciate that FIG. 4 only schematically illustrates the number of forward threads, and is not limiting. In one or more embodiments, the forward threads may have a greater number, depending on the actual needs.
The "fd" in the term "file descriptor fd" is an abbreviation of "file descriptor". A file descriptor is a non-negative integer, essentially an index value, that provides access to and control of an underlying operating-system file handle. For example, in Java, each open file is assigned a unique file descriptor that serves as the file's identifier in the operating system. As another example, when a program wants to operate on a resource, it calls the corresponding operating-system interface, which returns a file descriptor fd; the program can then operate on the resource through this fd.
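For instance, on a POSIX system a pipe yields two such small non-negative integers, and all operations on the underlying resource go through them:

```python
import os

# os.pipe() returns a pair of file descriptors: (read end, write end).
read_fd, write_fd = os.pipe()
assert isinstance(read_fd, int) and read_fd >= 0  # fds are non-negative integers

os.write(write_fd, b"hello")   # operate on the resource via its fd
data = os.read(read_fd, 5)     # read back through the other fd
os.close(read_fd)
os.close(write_fd)
```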
Threads are a low-level tool for creating concurrency and therefore have limitations, such as the difficulty of obtaining return values and the inconvenience of catching and handling exceptions. Unlike a thread, the term "task" denotes a concurrent operation that may or may not be backed by a dedicated thread. Tasks are composable: they can be chained together using continuations, can reduce start-up latency by using a thread pool, and can avoid multiple threads simultaneously waiting on I/O-intensive operations by using callbacks.
A forward-handling (forward-handle) thread is located in the second plurality of threads. For example, referring to FIG. 4, in one embodiment, the second plurality of threads includes four forward-handling threads. These forward-handling threads are used to wait to preempt the first task in the first message queue. Of course, those skilled in the art will appreciate that FIG. 4 only schematically illustrates the number of forward-handling threads, and is not intended to be limiting. In one or more embodiments, forward-handling threads may have a greater or lesser number, as may be desired.
The term "red-black tree" denotes a self-balancing binary search tree, a structure used in computer science to organize pieces of data such as numbers. A red-black tree preserves the balance of the binary search tree through specific operations during insertion and deletion, thereby achieving high search performance: it supports search, insertion, and deletion in O(log n) time, where n is the number of elements in the tree.
The above-mentioned asynchronous call method 1000 is based on the asynchronous call realized by message queue and red black tree, reduces the thread switching overhead of the traditional technology, and has extremely high execution efficiency and obvious performance advantage.
In one embodiment, a forward thread corresponds to a connection request of a requestor and is used to maintain and listen for events of the requestor. For example, when a new connection request arrives (from the requestor), the generated file descriptor fd is hashed and a corresponding forward thread is then assigned to be responsible for maintaining it and listening for IN events.
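A minimal way to picture this dispatch is to take the fd modulo the number of forward threads. This particular hash is an assumption for illustration; the patent does not specify the hash function:

```python
# Hypothetical dispatch: hash a new connection's fd to pick the forward
# thread responsible for maintaining it and listening for its IN events.
NUM_FORWARD_THREADS = 2  # matches the two forward threads shown in FIG. 4

def assign_forward_thread(fd: int) -> int:
    return fd % NUM_FORWARD_THREADS  # simple modulo hash; real code may differ

# Consecutive fds spread evenly across the two forward threads.
assignments = [assign_forward_thread(fd) for fd in (4, 5, 6, 7)]
```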
In one embodiment, a forward-handling (forward-handle) thread utilizes a non-blocking epoll_wait function to wait to preempt a first task in the first message queue. That is, when the epoll_wait function is used, it can be set to a non-blocking mode.
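The non-blocking pattern can be illustrated portably with Python's select (on Linux, epoll_wait with a timeout of 0 behaves analogously: it returns immediately whether or not events are ready). This sketch uses a socket pair as a stand-in for the queue's notification fd and is illustrative only:

```python
import select
import socket

# A connected socket pair stands in for the fd that signals queued tasks.
a, b = socket.socketpair()

# Timeout of 0 => non-blocking: returns immediately, possibly with no events.
ready, _, _ = select.select([b], [], [], 0)
no_event = (ready == [])          # nothing queued yet

a.sendall(b"task")                # an event becomes available
ready, _, _ = select.select([b], [], [], 0)
has_event = (b in ready)          # now reported without ever blocking

a.close(); b.close()
```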
In one embodiment, a forward-handle thread is released after saving the context to the red-black tree and sending the request to the external system. When a task involves an outbound call, quickly saving and restoring the scene is key to effective asynchronous performance. In this embodiment, the asynchronous call method 1000 handles asynchronous logic efficiently by saving the context in the red-black tree, allowing the scene to be saved and restored quickly.
Although not shown in fig. 1, in one embodiment, the method 1000 may further include: a backward thread in the third plurality of threads listens for an event (for example, an IN event) of the external system, and pushes a second task (i.e., a response task) to a second message queue after the external system gives a response; a backward-processing (backward-handle) thread in a fourth plurality of threads waits to preempt the second task in the second message queue and acquires the context of the first task by querying the red-black tree; and the backward-processing thread replies to the requestor.
A backward thread is located in the third plurality of threads. For example, referring to FIG. 4, in one embodiment, the third plurality of threads includes two backward threads. These backward threads are used to listen for events (e.g., IN events) of the available connection resources of downstream systems (i.e., external systems). Of course, those skilled in the art will appreciate that FIG. 4 only schematically illustrates the number of backward threads and is not limiting. In one or more embodiments, the backward threads may be greater in number, depending on actual needs.
Similarly, a backward-processing (backward-handle) thread is located in the fourth plurality of threads. For example, referring to FIG. 4, in one embodiment, the fourth plurality of threads includes four backward-processing (backward-handle) threads. These backward-handling threads are used to wait to preempt the second task (the reply task) in the second message queue. Of course, those skilled in the art will appreciate that FIG. 4 only schematically illustrates the number of backward-processing (backward-handle) threads and is not limiting. In one or more embodiments, the backward-processing (backward-handle) threads may be greater or fewer in number, as actually needed.
In one embodiment, the backward thread is responsible for listening for IN events on the external system's available connection resources; if the external system gives a response, the backward thread reads the buffered (buf) message and pushes it to the second message queue.
In one embodiment, a backward-processing (backward-handle) thread uses non-blocking epoll_wait to wait to preempt a second task in the second message queue. The second task may include the file descriptor fd, and the backward-processing (backward-handle) thread deletes the corresponding node in the red-black tree after acquiring the context of the first task.
In one embodiment, a backward-processing (backward-handle) thread replying to a requestor includes: the backward-processing (backward-handle) thread responds directly to the requestor by writing to the file descriptor fd. For example, the backward-processing (backward-handle) thread that obtained the second task continues task processing after obtaining the first task's context by querying the red-black tree and deleting the node; after processing completes, it responds directly to the requester by writing to the file descriptor fd.
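As a toy illustration of this reply path (a dict again standing in for the red-black tree, with a socket pair playing the requester connection; all names are hypothetical), the backward-handle step looks up and deletes the saved context by fd, then answers by writing to that same fd:

```python
import os
import socket

requester, server_side = socket.socketpair()
fd = server_side.fileno()

# dict stands in for the red-black tree of saved first-task contexts
context_tree = {fd: {"request": b"pay"}}

def backward_handle(task_fd: int) -> None:
    ctx = context_tree.pop(task_fd)  # query the tree and delete the node
    assert ctx["request"] == b"pay"  # context restored; continue processing
    os.write(task_fd, b"OK")         # reply directly by writing the fd

backward_handle(fd)
reply = requester.recv(2)            # the requester sees the answer
requester.close(); server_side.close()
```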
In one embodiment, the communication resource pool may include a plurality of connection resources and use nodes of a min-heap (small-root heap) to store abnormal resources. Referring to fig. 3, a schematic diagram of a communication resource pool is shown according to one embodiment of the present application. As shown in fig. 3, the service threads and the cleaning thread initialize and maintain a min-heap (the error-resource min-heap in fig. 3); the key of each node of the min-heap is the absolute time at which the cleaning thread should retrieve it, and the value of each node is the corresponding connection resource. When a service thread processes business involving a cross-system call, it obtains the corresponding resource from the communication resource pool and then initiates an outbound call. If the resource connects successfully, it is returned to the resource pool; otherwise a min-heap node is generated whose key is the current time plus the pending timeout period (an absolute time), and the node is inserted into the error-resource min-heap.
In one or more embodiments, the processing time of different exception resources is different, e.g., database connections and common service calls have different reconnection processing time requirements for the error resource.
The cleaning thread is configured to obtain the root node from the min-heap and determine whether the root node needs to be processed by comparing the key in the root node with the current time. For example, the cleaning thread may poll the min-heap: after obtaining the root node, it compares the node's key with the current time. If the key is greater than the current time, the loop exits; if the key is less than or equal to the current time, the timeout has been reached and the node must be processed: the node is taken from the min-heap and deleted, the min-heap is re-balanced, and the resource on the node is then handled. If the connection resource reconnects successfully, it is put back into the resource pool; if the connection fails, the node is re-inserted into the min-heap with an updated key, and the next node is then processed. This ensures that each faulty resource is handled and retried according to its processing time, avoiding the situation in which no resource is available to the service threads.
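Python's heapq module is itself a binary min-heap, so the error-resource heap and this cleaning loop can be sketched directly. The fake clock, the stubbed reconnect check, and the 10-unit retry delay are assumptions for illustration, not values from the patent:

```python
import heapq

# Each node is (absolute_time_key, connection_resource).
error_heap = []
heapq.heappush(error_heap, (105, "db-conn"))   # retry no earlier than t=105
heapq.heappush(error_heap, (103, "svc-conn"))  # retry no earlier than t=103

resource_pool = []

def clean(now: int, reconnect_ok) -> None:
    """Process every root node whose key <= now (its timeout has arrived)."""
    while error_heap and error_heap[0][0] <= now:
        key, res = heapq.heappop(error_heap)   # take and delete root; heap re-balances
        if reconnect_ok(res):
            resource_pool.append(res)          # success: back to the resource pool
        else:
            heapq.heappush(error_heap, (now + 10, res))  # failure: new absolute time

# At t=104 only svc-conn is due; its stubbed reconnect succeeds.
clean(now=104, reconnect_ok=lambda r: r == "svc-conn")
```

Nodes whose key lies in the future (db-conn at t=105) stay untouched at the heap root, which is exactly why the loop can stop at the first key greater than the current time.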
In the description of this specification, terms such as "some embodiments" and "examples" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and those skilled in the art may combine the various embodiments or examples described in this specification, and their features, provided they do not contradict one another.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Further implementations are included within the scope of the preferred embodiments of the present application, in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
With respect to the method flow diagrams of the embodiments of the present application, certain operations are described as distinct steps performed in a certain order. Such a flowchart is illustrative and not limiting. Some steps described herein may be grouped together and performed in a single operation, may be partitioned into multiple sub-steps, and may be performed in an order different than that shown herein. The various steps illustrated in the flowcharts may be implemented in any manner by any circuit structure and/or tangible mechanism (e.g., by software running on a computer device, hardware (e.g., processor or chip implemented logic functions), etc., and/or any combination thereof).
In addition, one skilled in the art will readily appreciate that the asynchronous call method 1000 provided by one or more of the above-described embodiments of the present application may be implemented by a computer program. For example, the computer program is embodied in a computer program product that when executed by a processor implements the asynchronous call method 1000 of one or more embodiments of the present application. For another example, when a computer-readable storage medium (e.g., a USB flash disk) storing the computer program is coupled to a computer, the computer program is run to perform the asynchronous call method 1000 of one or more embodiments of the present application.
Referring to fig. 2, fig. 2 illustrates a schematic diagram of an asynchronous call system 2000 according to one embodiment of the present application. The asynchronous call system 2000 includes: a communication resource pool 210, a forward thread 220 in a first plurality of threads, a first message queue 230, and a forward processing thread 240 in a second plurality of threads. The communication resource pool 210 comprises a plurality of connection resources; the forward thread 220 in the first plurality of threads is configured to, after receiving a connection request from a requester, monitor events of the requester and push a first task including a file descriptor fd to the first message queue 230; the first message queue 230 is configured to store the first task; and the forward processing thread 240 in the second plurality of threads waits to preempt the first task in the first message queue 230, wherein when the first task comprises an outbound call, the forward processing thread 240 is configured to save a context comprising the file descriptor fd to a red-black tree and select an available connection resource from the communication resource pool 210 for sending a request to an outbound system.
In the context of this application, the term "thread" refers to the smallest unit of execution that an operating system can schedule. A thread is contained within a process and is the actual unit of execution in the process. A thread may be a kernel thread scheduled by the operating system kernel, such as a Win32 thread; a user thread scheduled by the user process itself, such as a POSIX thread on the Linux platform; or a thread scheduled jointly by the kernel and the user process, as in Windows 7. Multiple threads in the same process share all of the process's system resources, such as the virtual address space, file descriptors, and signal handling. However, each thread in a process has its own call stack, its own register context, and its own thread-local storage.
The "fd" in the term "file descriptor fd" is an abbreviation of file descriptor. A file descriptor is a non-negative integer, essentially an index value, that provides access to and control of an underlying operating system file handle. For example, in Java, each open file is assigned a unique file descriptor that serves as the file's identifier in the operating system. For another example, when an application wants to operate on a resource, it calls the corresponding operating system interface, which returns a file descriptor fd; the resource can then be operated on through this fd.
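The notion above can be illustrated with a minimal sketch (not from the patent itself): asking the operating system for a resource yields a small non-negative integer, and all further operations on the resource go through that integer.

```python
import os

# os.pipe() asks the OS for a resource and returns two file descriptors --
# small non-negative integers that index into the process's open-file table.
r_fd, w_fd = os.pipe()
assert isinstance(r_fd, int) and r_fd >= 0

# All further operations on the resource go through the fd.
os.write(w_fd, b"hello")
data = os.read(r_fd, 5)
assert data == b"hello"

os.close(r_fd)
os.close(w_fd)
```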
Threads are a low-level tool for creating concurrency and therefore have limitations, such as the difficulty of obtaining return values and the hassle of capturing and handling exceptions. Unlike a thread, the term "task" denotes a concurrent operation that may or may not be backed by a dedicated thread. Tasks are composable: they can be chained together using continuations, they can reduce start-up latency by using a thread pool, and they can use callbacks to avoid multiple threads simultaneously blocking on I/O-intensive operations.
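The contrast drawn above can be sketched with Python's standard thread-pool tasks (an illustration, not the patent's implementation): a submitted task yields its return value easily and captures exceptions in its future, two things that are awkward with raw threads.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

captured = False
with ThreadPoolExecutor(max_workers=2) as pool:
    fut = pool.submit(square, 7)          # a task, not a dedicated thread
    assert fut.result() == 49             # return value is easy to obtain

    bad = pool.submit(lambda: 1 / 0)      # the exception is captured in the future
    try:
        bad.result()                      # re-raised only when the result is asked for
    except ZeroDivisionError:
        captured = True
assert captured
```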
The term "red-black tree" denotes a specific type of self-balancing binary search tree, a structure used in computer science to organize data such as numbers. A red-black tree keeps the binary search tree balanced through specific operations (rotations and recoloring) during insertion and deletion, thereby maintaining high search performance. It supports search, insertion, and deletion in O(log n) time, where n is the number of elements in the tree.
The asynchronous call system 2000 implements asynchronous calls based on a message queue and a red-black tree, which reduces the thread-switching overhead of conventional techniques and yields extremely high execution efficiency and a clear performance advantage.
In one embodiment, one forward thread 220 corresponds to one connection request of a requester, and the forward thread 220 is used to maintain the connection and listen for events of the requester. For example, when there is a new connection request (from the requester), the generated file descriptor fd is hashed and assigned to a corresponding forward thread 220, which is responsible for maintaining the connection and listening for IN events.
In one embodiment, a forward-handle thread 240 uses a non-blocking epoll_wait function to wait to preempt a first task in the first message queue 230. That is, epoll_wait can be invoked in a non-blocking mode (for example, with a zero timeout).
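A minimal sketch of this non-blocking wait (Linux-only, since epoll is a Linux API; Python's `select.epoll` wraps it): passing a zero timeout makes the wait return immediately instead of blocking until an event arrives.

```python
import os
import select
import sys

if sys.platform == "linux":
    r_fd, w_fd = os.pipe()
    ep = select.epoll()
    ep.register(r_fd, select.EPOLLIN)

    events = ep.poll(timeout=0)       # non-blocking: nothing readable yet
    assert events == []

    os.write(w_fd, b"x")              # now the read end has an IN event pending
    events = ep.poll(timeout=0)       # returns immediately with the ready fd
    assert events and events[0][0] == r_fd

    ep.close()
    os.close(r_fd)
    os.close(w_fd)
```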
In one embodiment, the forward-handle thread 240 is released after saving the context to the red-black tree and sending the request to the external system. When a task's execution involves an external call, quickly saving and restoring the context is key to good asynchronous performance. In this embodiment, by saving contexts in a red-black tree, the asynchronous call system 2000 can save and restore contexts quickly and thus process asynchronous logic efficiently.
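The save/restore step can be sketched as follows. The patent keys contexts (in effect) by the requester's fd in a red-black tree; Python's standard library has no red-black tree, so a plain dict stands in here — the interface (insert, look up, delete) is the same, and all names are illustrative.

```python
# Contexts saved before an external call, keyed by the requester fd.
contexts = {}

def save_context(fd, ctx):
    """Forward-handle thread: save the context, then the thread is released."""
    contexts[fd] = ctx

def restore_context(fd):
    """Backward-handle thread: query and delete the node in one step."""
    return contexts.pop(fd)

save_context(42, {"request": "pay", "state": "awaiting external system"})
ctx = restore_context(42)
assert ctx["request"] == "pay"
assert 42 not in contexts           # node removed once the context is restored
```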
In one embodiment, although not shown in FIG. 2, the system 2000 further comprises: a backward thread in a third plurality of threads, configured to listen for events of the external system and push a second task to a second message queue after the external system gives a response; the second message queue, configured to store the second task; and a backward-handle thread in a fourth plurality of threads, configured to wait to preempt a second task in the second message queue, acquire the context of the first task by querying the red-black tree, and answer the requester.
In one embodiment, the backward thread is responsible for listening for IN events on the available connection resources of the external system; if the external system gives a response, the backward thread acquires the buffered (buf) message and pushes it to the second message queue.
In one embodiment, a backward-handle thread uses a non-blocking epoll_wait to preempt a second task in the second message queue. The second task may include the file descriptor fd, and the backward-handle thread deletes the corresponding node in the red-black tree after acquiring the context of the first task.
In one embodiment, a backward-handle thread is configured to answer the requester directly by writing to the file descriptor fd. For example, the backward-handle thread that obtains the second task continues task processing after acquiring the first task's context by querying and deleting the corresponding red-black tree node, and, after processing is completed, answers the requester directly by writing to the file descriptor fd.
Fig. 3 shows a schematic diagram of a communication resource pool according to an embodiment of the present application. As shown in FIG. 3, a business thread and a cleanup thread are initialized, and together they maintain a small-root heap (i.e., a min-heap). The key of each node in the heap is the absolute time at which the cleanup thread should retrieve the node, and the value is the corresponding connection resource. When the business thread processes a service involving an inter-system call, it obtains the corresponding resource from the communication resource pool and then initiates the external call. If the connection succeeds, the resource is returned to the pool; otherwise, a heap node is generated whose key is the current time plus the pending timeout, and the node is inserted into the error-resource small-root heap.
The cleanup thread polls the small-root heap. After acquiring the root node, the cleanup thread compares the node's key with the current time. If the key is greater than the current time, the node has not yet timed out and the current polling round ends. If the key is less than or equal to the current time, the node is due for processing: it is removed from the heap, the heap is rebalanced, and the resource on the node is processed. If reconnecting the resource succeeds, it is returned to the resource pool; if the connection fails, the node's key is updated and the node is reinserted into the heap, after which the next node is processed. This ensures that every error resource is processed and reconnection is attempted at its scheduled time, avoiding the situation in which no resources are available to the business thread.
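The cleanup loop above can be sketched with Python's `heapq`, which is exactly a small-root (min-) heap. Keys are absolute deadlines (current time plus a per-resource timeout); the resource names are illustrative.

```python
import heapq
import time

heap = []
now = time.monotonic()
heapq.heappush(heap, (now - 1.0, "conn-A"))   # already due for processing
heapq.heappush(heap, (now + 60.0, "conn-B"))  # not due yet

recovered = []
while heap:
    deadline, resource = heap[0]              # peek at the root node
    if deadline > time.monotonic():
        break                                 # root not yet due: end this polling round
    heapq.heappop(heap)                       # remove the node; heapq rebalances
    recovered.append(resource)                # a real cleanup thread would retry the connection here

assert recovered == ["conn-A"]
assert heap[0][1] == "conn-B"                 # the future deadline stays in the heap
```

On a failed reconnection, a real implementation would push `(new_deadline, resource)` back onto the heap, matching the reinsertion step described above.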
FIG. 4 illustrates a schematic diagram of an asynchronous call architecture according to one embodiment of the present application. As shown in FIG. 4, when a new connection request arrives, the generated file descriptor fd is hashed and assigned a corresponding forward thread, which is responsible for maintaining the connection and listening for IN events. When a task enters the application, the forward thread takes the buffered (buf) message and pushes it to a message queue. All forward-handle threads wait to preempt tasks in the message queue. After the thread that acquires a task processes the business logic, if an external call is encountered, the context containing the requester's fd is saved to the red-black tree, an available connection resource is selected from the communication resource pool, the request is sent to the external system, and the thread moves on to process other tasks.
The backward thread is responsible for listening for IN events on the available connection resources of a downstream system (i.e., an external system); if the downstream system gives a response, the backward thread acquires the buf message and pushes it to the message queue. Backward-handle threads use a non-blocking epoll_wait to preempt tasks in the message queue; the thread that obtains a task continues processing it after acquiring the original task context by querying and deleting the corresponding red-black tree node, and after processing is completed, answers the requester directly by writing to the fd in the task.
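The end-to-end flow of FIG. 4 can be condensed into a hedged sketch: thread-safe queues stand in for the epoll-driven message queues, a dict stands in for the red-black tree, and all names are illustrative rather than taken from the patent.

```python
import queue
import threading

first_q, second_q = queue.Queue(), queue.Queue()
contexts, answers = {}, []

def forward_handle():
    fd, payload = first_q.get()           # preempt a first task from the queue
    contexts[fd] = payload                # save the context before the external call
    second_q.put(fd)                      # simulate the external system's response arriving

def backward_handle():
    fd = second_q.get()                   # preempt the second task
    ctx = contexts.pop(fd)                # query and delete the context node
    answers.append((fd, ctx + ":done"))   # answer the requester via its fd

first_q.put((7, "pay-request"))           # forward thread pushing a first task
t1 = threading.Thread(target=forward_handle)
t2 = threading.Thread(target=backward_handle)
t1.start(); t2.start()
t1.join(); t2.join()

assert answers == [(7, "pay-request:done")]
```

Note how the forward and backward sides share no state except the queues and the context map, which is the read-write decoupling the next paragraph describes.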
In this asynchronous mode, a small number of threads can handle a large volume of concurrent business. For both requester connections and external-system connections, the architecture decouples reading from writing, and threads are decoupled from one another through message queues, achieving efficient asynchronous concurrent processing; large-scale timeouts in the external system therefore do not affect the system's outward-facing service or its stability.
In one or more embodiments, the asynchronous call system 2000 illustrated in FIG. 2 is integrated in a server that is used, for example, to provide payment services. Because the asynchronous call system 2000 adopts an architecture that combines a read-write-separated asynchronous processing mechanism with a communication resource pool, the server can provide stable, high-performance, and highly efficient processing capability in high-concurrency business scenarios.
In summary, the asynchronous call scheme of the embodiments of the present application implements asynchronous calls based on a message queue and a red-black tree, reducing the thread-switching overhead of conventional techniques and achieving extremely high execution efficiency and a clear performance advantage. In addition, the asynchronous call scheme saves the context after the outgoing request is sent, i.e., releases the thread resources, retrieves the original task after receiving the response (for example, according to the task ID), and designates an idle thread to continue processing the subsequent business logic, making the scheme highly practical. Furthermore, by handling read and write events of different types of fd in different threads, the scheme achieves read-write separation, decouples the external call, prevents business threads from blocking when a downstream system (e.g., an external system) times out on a large scale, and improves the reliability and stability of the entire asynchronous call system. Finally, in an embodiment of the present application, the asynchronous call scheme may isolate normal resources from abnormal resources by adding an error-resource small-root heap, and may precisely control the processing time of error resources by supporting different processing times for different abnormal resources.
The above examples mainly illustrate asynchronous call schemes of embodiments of the present application. Although only a few embodiments of the present application have been described, those of ordinary skill in the art will appreciate that the present application may be embodied in many other forms without departing from the spirit or scope thereof. Accordingly, the illustrated examples and embodiments are to be considered as illustrative and not restrictive, and the application is intended to cover various modifications and substitutions without departing from the spirit and scope of the application as defined by the claims.

Claims (22)

1. An asynchronous call method, the method comprising:
after receiving a connection request from a requester, a forward thread in a first plurality of threads monitors an event of the requester and pushes a first task containing a file descriptor fd into a first message queue;
the forward processing thread in the second plurality of threads waits to preempt the first task in the first message queue; and
when the first task includes an external call, the forward processing thread saves the context including the file descriptor fd to a red-black tree and selects available connection resources from a pool of communication resources to send a request to an external system.
2. The method of claim 1, wherein the forward thread corresponds to a connection request of the requestor and is used to maintain and snoop events of the requestor.
3. The method of claim 1, wherein the forward processing thread utilizes a non-blocking epoll_wait function to wait to preempt a first task in the first message queue.
4. The method of claim 1, wherein the forward processing thread is released after saving the context to the red-black tree and sending the request to the external system.
5. The method of claim 1 or 4, further comprising:
the backward thread in the third plurality of threads monitors the event of the external system, and pushes a second task to a second message queue after the external system gives a response;
the backward processing thread in the fourth plurality of threads waits to preempt the second task in the second message queue and obtains the context of the first task by querying the red-black tree; and
the backward processing thread responds to the requestor.
6. The method of claim 5, wherein the second task includes the file descriptor fd, and the backward processing thread deletes a corresponding node in the red-black tree after obtaining the context of the first task.
7. The method of claim 5, wherein the backward processing thread answering the requester comprises:
the backward processing thread responds directly to the requestor by writing to the file descriptor fd.
8. The method of claim 1, wherein the communication resource pool comprises a plurality of connection resources and stores abnormal resources in nodes of a small-root heap.
9. The method of claim 8, wherein the key of the node is an absolute time at which a cleanup thread retrieves the node, and the value of the node is a corresponding connection resource.
10. The method of claim 9, wherein the absolute time is a current time plus a timeout to be processed, wherein the timeout to be processed varies according to the abnormal resource.
11. The method of claim 9, wherein the cleanup thread is configured to determine whether the root node requires processing by comparing the key in the root node with the current time after acquiring the root node from the small-root heap.
12. An asynchronous call system, the system comprising:
a communication resource pool including a plurality of connection resources;
a forward thread in a first plurality of threads, configured to monitor an event of a requester after receiving a connection request from the requester, and push a first task including a file descriptor fd to a first message queue;
the first message queue is used for storing the first task; and
a forward processing thread of a second plurality of threads for waiting to preempt a first task in the first message queue, wherein when the first task comprises an external call, the forward processing thread is configured to save a context comprising the file descriptor fd to a red-black tree and select an available connection resource from the pool of communication resources for sending a request to an external system.
13. The system of claim 12, wherein the forward thread corresponds to a connection request of the requestor and is to maintain and snoop events of the requestor.
14. The system of claim 12, wherein the forward processing thread is configured to utilize a non-blocking epoll_wait function to wait to preempt a first task in the first message queue.
15. The system of claim 12, wherein the forward processing thread is released after saving the context to the red-black tree and sending the request to the external system.
16. The system of claim 12 or 15, further comprising:
the backward thread in the third plurality of threads is used for monitoring the events of the external system and pushing a second task to a second message queue after the external system gives a response;
the second message queue is used for storing the second task; and
and the backward processing thread in the fourth plurality of threads is used for waiting to preempt the second task in the second message queue, acquiring the context of the first task by querying the red-black tree, and answering the requester.
17. The system of claim 16, wherein the second task includes the file descriptor fd, and the backward processing thread is configured to delete a corresponding node in the red-black tree after obtaining the context of the first task.
18. The system of claim 16, wherein the backward processing thread is configured to reply directly to the requestor by writing to the file descriptor fd.
19. The system of claim 12, wherein an abnormal resource of the plurality of connection resources is stored in a node of a small-root heap.
20. The system of claim 19, wherein the key of the node is an absolute time at which the cleanup thread retrieves the node, and the value of the node is a corresponding connection resource.
21. A computer readable storage medium, characterized in that the medium comprises instructions which, when run, perform the method of any one of claims 1 to 11.
22. A server comprising an asynchronous call system as claimed in any one of claims 12 to 20.
CN202311267712.5A 2023-09-27 2023-09-27 Asynchronous calling method and system Pending CN117873742A (en)

Publications (1)

Publication Number Publication Date
CN117873742A 2024-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination