CN111309494A - Multithreading event processing assembly - Google Patents

Multithreading event processing assembly

Info

Publication number
CN111309494A
CN111309494A (application CN201911250374.8A)
Authority
CN
China
Prior art keywords
event
module
thread
timer
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911250374.8A
Other languages
Chinese (zh)
Inventor
张海荣
李思昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Financial Futures Information Technology Co ltd
Original Assignee
Shanghai Financial Futures Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Financial Futures Information Technology Co ltd filed Critical Shanghai Financial Futures Information Technology Co ltd
Priority to CN201911250374.8A priority Critical patent/CN111309494A/en
Publication of CN111309494A publication Critical patent/CN111309494A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 — Arrangements for program control, e.g. control units
    • G06F9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 — Multiprogramming arrangements
    • G06F9/54 — Interprogram communication
    • G06F9/546 — Message passing systems or structures, e.g. queues
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04 — Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a multithreaded event processing component. By unifying the basic types used to define external events, encapsulating high-concurrency lock-free queues, and unifying the semaphore and IO-multiplexing wake-up mechanisms, the component achieves unified event management and provides low-latency, high-concurrency thread communication while reducing development difficulty and cost. The technical scheme is as follows: a unified event model decouples the underlying event management mechanism from the thread notification mechanism, reducing the cost of multithreaded event development for application developers and improving the application's event processing capability. Compared with existing event processing methods, the invention solves the consistency problem between underlying network communication and internal event communication; and, through mechanisms such as an embedded memory pool model and lock-free queues, it greatly improves the efficiency with which an application processes events, ensuring high concurrency and low latency while reducing system complexity.

Description

Multithreading event processing assembly
Technical Field
The invention relates to financial software technology, and in particular to a low-latency, high-concurrency multithreaded parallel event processing component applied in the field of financial futures.
Background
In the financial futures market, the need for low-latency, high-concurrency multithreaded parallel processing is pressing: on the one hand, latency must be minimized in ultra-fast trading scenarios; on the other hand, high-throughput demands, such as the bursts of order traffic generated during a trading session, must also be met. In terms of CPU and IO usage characteristics, there are two common classes of application scenario. One is CPU-intensive, arising mainly in computation-heavy workloads, where threads typically trigger message delivery through semaphore mechanisms. The other is IO-intensive, covering common workloads such as network data reads and writes and disk IO; in this scenario, thread wake-up is mainly triggered by an IO multiplexing mechanism to deliver messages in real time.
At present, common solutions such as the libevent open-source library provide a uniform, simple abstraction of events and solve event management under IO multiplexing, reducing code complexity to some extent for IO-intensive applications. For application developers, however, they have two shortcomings: first, the interface encapsulation is too rudimentary, so development and maintenance costs remain high; second, they cannot meet the development needs of CPU-intensive applications.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
The present invention is directed to solving the above problems. It provides a multithreaded event processing component that unifies the basic types used to define external events, encapsulates a high-concurrency lock-free queue, and unifies the semaphore and IO-multiplexing wake-up mechanisms, thereby achieving unified event management and low-latency, high-concurrency thread communication while reducing development difficulty and cost.
The technical scheme of the invention is as follows. The invention discloses a multithreaded event processing component comprising a network receiving thread module, an event thread module, a network event queue module, an internal event queue module, an event processing thread module, and a timer event module. The output of the network receiving thread module is connected to the input of the network event queue module; the output of the event thread module is connected to the input of the internal event queue module; and the outputs of the network event queue module and the internal event queue module are both connected to the input of the event processing thread module. Within the component:
the network receiving thread module is used for generating a network event and placing the generated network event into the network event queue module;
the network event queue module is used for storing at least one network event;
the event thread module is used for generating an internal event and placing the generated internal event into the internal event queue module;
the internal event queue module is used for storing at least one internal event;
the event processing thread module is used for reading and processing network events from the network event queue module, reading and processing internal events from the internal event queue module, generating and maintaining timer events in the timer event module, and processing timed-out events in the timer event module.
According to an embodiment of the multithreaded event processing component of the present invention, a network event corresponds to a network packet, an internal event corresponds to a transactional event, and a timer event is an event task that needs to be executed at a later time.
According to an embodiment of the multithreaded event processing component of the present invention, the internal event queue is divided into private event queues and public event queues, depending on whether the queue can be shared by multiple event processing threads.
According to an embodiment of the multithreaded event processing component of the present invention, a private event queue corresponds to, and is exclusively owned by, one event processing thread; after a private event is generated, it is placed into the corresponding private event queue and is read and processed only by the event processing thread that owns that queue.
According to an embodiment of the multithreaded event processing component of the present invention, all public events in the same public event queue are competitively accessed by the multiple event processing threads in the thread pool, and any given public event in the queue is read and processed by exactly one event processing thread in the pool.
According to an embodiment of the multithreaded event processing component of the present invention, different priorities can further be set for multiple public event queues, and a public event assigned a given priority is placed in the public event queue of the corresponding priority.
According to an embodiment of the multithreaded event processing component of the present invention, the network event queue and the internal event queue are implemented as high-concurrency lock-free queues at the underlying event management layer, and lock-free waiting during multithreaded event access is achieved through atomic CAS (compare-and-swap) operations.
According to an embodiment of the multithreaded event processing component of the present invention, the timer event module exposes a uniform timer-timeout callback interface for timer events. A timer registration interface is called to set the timeout, the callback function, and the timer type; when the timeout expires, the component automatically invokes the timer-timeout callback interface. A dynamic parameter binding mechanism further allows the user to pass context information into the callback, making the context conveniently accessible in the asynchronous call: the mechanism binds a function parameter list to the timer-timeout callback interface, and when the callback is invoked, the bound parameter list carries the context information.
According to an embodiment of the multithreaded event processing component of the present invention, the timer event module manages timer events through a hierarchical time wheel mechanism. Under this mechanism, several time wheels are maintained; each wheel represents a measurement range in a different time unit, with the ranges increasing from wheel to wheel. Within a single wheel, a circular linked list enables O(1) timer deletion. A timer event is placed on the circular linked list mounted in the slot of the wheel that corresponds to the timer's timeout. On each tick, the event processing thread module advances the pointer of each wheel in turn, and the timer events on the circular linked list in the slot the pointer reaches are triggered for execution.
According to an embodiment of the multithreaded event processing component of the present invention, the thread wake-up mechanism is uniformly encapsulated and abstracted using the observer design pattern, decoupling thread wake-up from the internal event queue and the network event queue. The mechanism treats each event processing thread as an observer and each network receiving thread or event thread as a notifier: when a private event arrives, only the observer thread listening for that private event is woken; when a public event arrives, all observer threads listening for that public event are woken.
Compared with the prior art, the invention has the following beneficial effects. Overall, the invention decouples the underlying event management mechanism from the thread notification mechanism through a unified event model, reducing the cost of multithreaded event development for application developers and improving the application's event processing capability. Compared with existing event processing methods, it solves the consistency problem between underlying network communication and internal event communication; and, through mechanisms such as an embedded memory pool model and lock-free queues, it greatly improves the efficiency with which an application processes events, ensuring high concurrency and low latency while reducing system complexity.
In detail, the innovation points of the invention are as follows:
1. The multithreaded event processing component uniformly encapsulates and simplifies event types, event processing modes, and event storage modes, providing both private event queues and public event queues for storing and distributing events; the public event queues support priority access, so application developers can rank events by importance. The event layer's underlying storage is a uniform lock-free queue, which guarantees reliability while increasing capacity and reducing latency.
2. The multithreaded event processing component applies a simplified, unified abstraction to timer event management and implements a hierarchical time wheel algorithm. Compared with the standard time wheel algorithm, the hierarchical variant is optimized at the data structure level, storing timer events in rings so that insertion, deletion, modification, and lookup all run in O(1) time. For timer execution, the invention provides a context-saving mechanism so that developers can conveniently retrieve, inside a delayed timing task, the program execution context from the time the timer was registered.
3. Through its event notification mechanism, the invention unifies the two common program notification patterns, CPU-intensive and IO-intensive, and uses the observer pattern to decouple notification from event storage, so that developers can process events in a streaming fashion when building network applications.
Drawings
The above features and advantages of the present disclosure will be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
FIG. 1 illustrates a schematic diagram of one embodiment of a multi-threaded event processing component of the present invention.
Fig. 2 shows the distribution of private event queues.
FIG. 3 illustrates the handling of a common event queue.
FIG. 4 illustrates the communication organization of a prioritized public event queue.
FIG. 5 illustrates an implementation of a hierarchical time-wheel mechanism of the timer event module.
FIG. 6 shows the unified abstraction over semaphore-driven CPU-intensive events and IO-multiplexing-driven IO-intensive events.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. It is noted that the aspects described below in connection with the figures and the specific embodiments are only exemplary and should not be construed as imposing any limitation on the scope of the present invention.
FIG. 1 illustrates a schematic diagram of one embodiment of a multi-threaded event processing component of the present invention. The multithreading event processing component of the embodiment comprises: the system comprises a network receiving thread module, an event thread module, a network event queue module, an internal event queue module, an event processing thread module and a timer event module.
The output end of the network receiving thread module is connected with the input end of the network event queue module, the output end of the event thread module is connected with the input end of the internal event queue module, and the output end of the network event queue module and the output end of the internal event queue module are both connected with the input end of the event processing thread module.
Three event types are uniformly abstracted in the multithreaded event processing component: network events, internal events, and timer events.
A network event corresponds to a network message. Internal events correspond to transactionally handled events, including but not limited to asynchronous log-flushing events and performance instrumentation events. A timer event is an event task to be executed later: it does not need to run immediately, a timeout is set for it, and it executes once the timeout expires.
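The patent publishes no source code, but the three event types lend themselves to a single unified base type. Below is a minimal, hypothetical C++ sketch of such a type; every name in it is an assumption of this illustration, not part of the patent.

#include <chrono>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical unified event type covering the three categories above.
enum class EventType { Network, Internal, Timer };

struct Event {
    EventType type;
    std::vector<uint8_t> payload;        // network events: raw message bytes
    std::function<void()> handler;       // internal events: ready-to-run task
    std::chrono::steady_clock::time_point deadline;  // timer events: due time
};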
Network events are generated by the network receiving thread module, which places them into the network event queue module; they are then read and processed by the event processing thread module. Internal events are generated by the event thread module, which places them into the internal event queue module; they too are read and processed by the event processing thread module. Timer events in the timer event module are generated and maintained by the event processing thread module, and timed-out events in the timer event module are processed by the event processing thread module.
The internal event queue comes in two types: private event queues and public event queues. Fig. 2 shows the distribution of private event queues. The owner of a private event queue is a single dedicated thread: each private thread corresponds to one private event queue, each private event queue belongs to exactly one processing thread (its private thread), and only the private thread that owns the queue may process its events. An event thread generates a private event (an internal event or a timer event), the private event is placed into the corresponding private event queue, and the queue's private thread processes it as the event processing thread.
The owner of a public event queue is a set of threads: the threads in a thread pool access the public event queue competitively, and any public event in the queue is processed by exactly one thread in the pool. Fig. 3 illustrates this: the event notification thread writes events into the public event queue, the three processing threads shown compete to read from it, and each public event is read only once, by the thread that then processes it. For example, if processing thread 1 in Fig. 3 reads an event from the public event queue, that event is processed by processing thread 1 and the other two processing threads do not process it.
For underlying event management, the network event queue and the internal event queues (both public and private) are implemented at the bottom layer as high-concurrency lock-free queues. Atomic CAS operations provide lock-free waiting during multithreaded event access, so multiple threads can safely read from and write to the event queues (private and public alike).
A CAS operation is an atomic operation performed by a CPU compare-and-swap instruction. It takes three operands: a memory address M, an expected original value A, and a new value B. When the operation executes, if the value at the memory address matches the expected original value A, the value at the address is updated to the new value B; otherwise no update occurs. The CPU guarantees that the operation is atomic, so no lock is needed.
The bottom layers of the network event queue and the internal event queues (private and public) are implemented as a lock-free queue data structure based on CAS. When an event is appended to the tail of the queue, a CAS operation detects whether another thread has appended to the tail in the meantime; if so, the CAS is retried until no other thread interferes, and only then is the event appended. The whole process requires no locking and still guarantees safe multithreaded access.
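As an illustration of the CAS retry loop just described, here is a hedged C++ sketch of a lock-free enqueue in which producers swing a shared tail pointer; std::atomic::compare_exchange_weak performs the CAS and, on failure, reloads the observed tail so the loop can retry, matching the detect-and-retry behaviour in the text. This is a sketch of the general technique, not the patent's actual queue; dequeue handling is elided.

#include <atomic>
#include <utility>

template <typename T>
class LockFreeQueue {
public:
    struct Node {
        T value;
        std::atomic<Node*> next{nullptr};
        explicit Node(T v) : value(std::move(v)) {}
    };

    void enqueue(T value) {
        Node* node = new Node(std::move(value));
        Node* prev = tail_.load(std::memory_order_relaxed);
        // CAS loop: succeeds only if no other producer moved the tail
        // since we last observed it; on failure, 'prev' is refreshed
        // with the current tail and the loop retries.
        while (!tail_.compare_exchange_weak(prev, node,
                                            std::memory_order_acq_rel,
                                            std::memory_order_relaxed)) {
        }
        // Link the displaced tail to the new node so consumers can follow.
        prev->next.store(node, std::memory_order_release);
    }

private:
    Node dummy_{T{}};                  // sentinel (requires default-constructible T)
    std::atomic<Node*> tail_{&dummy_};
};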
For application scenarios with event priorities, the multithreaded event processing component also provides public event queues with priority identifiers: public events are assigned different processing priorities, high-priority events are placed into the high-priority public event queue, and they are processed first. To preserve fairness, so that high-priority events do not monopolize processing while low-priority public event queues starve, the invention caps the number of events each public event queue may process in one pass at 1000. For example, if the high-priority public event queue always has data, then after 1000 of its events have been processed, processing resources are allocated to events in the other, lower-priority public event queues. Fig. 4 shows the communication organization of prioritized public event queues: three priority levels are shown, events in the higher-priority queue are processed first, and the queue at each priority level is read competitively by multiple threads. The event notification thread decides into which priority's public event queue an event is placed.
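The 1000-event cap can be pictured as a round-based dispatch loop. The following hedged C++ sketch drains the queues in descending priority order, at most kBatchLimit events per queue per round, so lower priorities are never starved; Queue and its try_pop interface are assumptions of this illustration, not the patent's API.

#include <cstddef>

constexpr std::size_t kBatchLimit = 1000;  // per-queue cap from the text

// Queue is assumed to expose value_type and bool try_pop(value_type&);
// Handler is any callable taking a value_type. queues[0] is highest priority.
template <typename Queue, typename Handler, std::size_t N>
void dispatch_round(Queue (&queues)[N], Handler&& handle) {
    for (std::size_t p = 0; p < N; ++p) {
        std::size_t handled = 0;
        typename Queue::value_type ev;
        while (handled < kBatchLimit && queues[p].try_pop(ev)) {
            handle(ev);
            ++handled;
        }
        // Fall through even if this queue is non-empty; the next round
        // revisits it, giving lower-priority queues their turn.
    }
}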
For timer events, a uniform timer-timeout callback interface is provided. When using a timer, the developer calls the timer registration interface to set the timeout, the callback function, and the timer type (single or cyclic execution); when the timeout expires, the multithreaded event processing component automatically invokes the timer-timeout callback. In addition, a dynamic parameter binding mechanism lets the user pass context information into the callback, making the context conveniently accessible in the asynchronous call.
The dynamic parameter binding mechanism binds a function parameter list to the timer-timeout callback interface; when the callback is invoked, the bound parameter list is used, and in this way the user can pass context information through.
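In C++ terms, the dynamic parameter binding described here corresponds naturally to a lambda capture (or std::bind) attaching the caller's context to the callback. A minimal sketch follows, in which register_timer and TimerType are hypothetical names standing in for the patent's registration interface:

#include <chrono>
#include <functional>
#include <string>

enum class TimerType { Once, Repeating };   // single vs. cyclic execution

// Assumed registration interface (declaration only for this sketch).
void register_timer(std::chrono::milliseconds timeout,
                    std::function<void()> callback, TimerType type);

void watch_order(const std::string& order_id) {
    // The capture binds the caller's context (order_id) to the timeout
    // callback, so the context is available when the component later
    // invokes the callback asynchronously.
    register_timer(std::chrono::milliseconds(500),
                   [order_id] { /* handle timeout for order_id */ },
                   TimerType::Once);
}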
In addition, the timer event module manages timer events through a hierarchical time wheel mechanism, so that insertion, deletion, modification, and lookup are all O(1). Under this mechanism, several time wheels are maintained; each wheel represents a measurement range in a different time unit, and the ranges increase from wheel to wheel. Within a single wheel, a circular linked list provides seamless timer deletion: a timer event is placed on the circular linked list mounted in the slot of the wheel corresponding to the timer's timeout. On each tick, the event processing thread module advances the pointer of each wheel in turn, and the timer events on the circular linked list in the slot the pointer reaches are triggered for execution. The minimum granularity supported by a single timer is 10 ms; for software applications, 10 ms is the finest granularity at which a timer can fire.
FIG. 5 illustrates the implementation of the hierarchical time wheel mechanism with five wheels in total: the first wheel covers the range 0-2.56 seconds, the second 2.56-163 seconds, the third 163 seconds-2.9 hours, the fourth 2.9 hours-7.76 days, and the fifth 7.76 days-497 days. A timer event is placed on the circular linked list mounted in the slot of the wheel corresponding to its timeout. Each cell of the first wheel has a precision of 10 ms, and each cell of wheels 2 through 5 spans the entire range of the preceding wheel. On each tick, the event processing thread module advances the pointer of each wheel in turn, and the timer events on the circular linked list in the cell the pointer reaches are triggered for execution.
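To make the mechanics concrete, here is a hedged C++ sketch of a single 10 ms wheel of the kind the first level describes (256 slots x 10 ms = 2.56 s); higher wheels with coarser units would cascade expiring entries down into it. std::list stands in for the circular linked list, giving O(1) insertion and removal at a known position; all names are illustrative.

#include <array>
#include <chrono>
#include <functional>
#include <list>

struct TimerTask {
    std::function<void()> callback;
};

class TimeWheel {
    static constexpr int kSlots = 256;               // 256 x 10 ms = 2.56 s
    std::array<std::list<TimerTask>, kSlots> slots_;
    int cursor_ = 0;                                 // advanced every 10 ms

public:
    // O(1) insert: drop the task into the slot where it becomes due.
    // Assumes timeout < 2.56 s; longer timeouts belong to a higher wheel.
    void add(TimerTask task, std::chrono::milliseconds timeout) {
        int offset = static_cast<int>(timeout.count() / 10);
        if (offset < 1) offset = 1;                  // clamp to one tick
        slots_[(cursor_ + offset) % kSlots].push_back(std::move(task));
    }

    // One 10 ms tick: step the cursor and fire everything in that slot.
    void tick() {
        cursor_ = (cursor_ + 1) % kSlots;
        auto& due = slots_[cursor_];
        for (auto& t : due) t.callback();
        due.clear();
    }
};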
For applications that separate reading from writing, or reading from processing, the network receiving thread reads a network message and places it into the network event queue; the event processing thread then takes the network event from the lock-free queue and processes it. Throughout this process, this embodiment uses memory pool technology: a large region of memory is allocated in advance, and subsequent block requests are served directly from the pool's pre-allocated memory, avoiding frequent calls into the operating system's allocation interface and improving allocation efficiency. The network read thread requests a memory block from the pool, copies the network message into it, preprocesses the message into a network event, and places the event into the lock-free queue; the subsequent business processing thread reads the message data directly from that memory, eliminating further copies of the network payload.
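A minimal sketch of the pre-allocation idea, assuming fixed-size blocks: one large arena is carved up once, and allocate/release merely move pointers on a free list, with no system call on the hot path. This single-threaded sketch omits the synchronization (or per-thread pooling) a real component would need.

#include <cstddef>
#include <vector>

class MemoryPool {
    std::vector<char> arena_;        // one up-front allocation
    std::vector<char*> free_list_;   // blocks available for reuse
    std::size_t block_size_;

public:
    MemoryPool(std::size_t block_size, std::size_t block_count)
        : arena_(block_size * block_count), block_size_(block_size) {
        free_list_.reserve(block_count);
        for (std::size_t i = 0; i < block_count; ++i)
            free_list_.push_back(arena_.data() + i * block_size);
    }

    char* allocate() {               // O(1), no OS allocator involved
        if (free_list_.empty()) return nullptr;   // pool exhausted
        char* block = free_list_.back();
        free_list_.pop_back();
        return block;
    }

    void release(char* block) { free_list_.push_back(block); }
};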
To suit both CPU-intensive and IO-intensive applications, the thread wake-up mechanism is uniformly encapsulated and abstracted using the observer design pattern, and thread wake-up is decoupled from the internal event queue and the network event queue. The observer pattern is a software design pattern that defines a one-to-many relationship between objects: when the observed object's data changes, all objects that depend on it are notified. The thread wake-up mechanism treats an event processing thread as an observer and a network receiving thread or event thread as a notifier. For a private event, only the observer thread listening for that private event is woken when it arrives; for a public event, all observer threads listening for that public event are woken when it arrives.
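The decoupling can be sketched as follows in C++: the notifier never touches queue storage, it only signals subscribed observers. A condition variable stands in here for whichever primitive actually drives the wake-up (a semaphore for CPU-intensive use, an IO-multiplexing handle for IO-intensive use); the class names are hypothetical.

#include <condition_variable>
#include <mutex>
#include <vector>

class Observer {                       // one per event processing thread
    std::mutex m_;
    std::condition_variable cv_;
    bool signaled_ = false;

public:
    void wake() {
        { std::lock_guard<std::mutex> lk(m_); signaled_ = true; }
        cv_.notify_one();
    }
    void wait() {                      // block until an event is signaled
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return signaled_; });
        signaled_ = false;
    }
};

class Notifier {                       // a network receiving / event thread
    std::vector<Observer*> observers_;

public:
    void subscribe(Observer* o) { observers_.push_back(o); }
    // Private event: wake only the queue's owning observer.
    void notify_private(Observer* owner) { owner->wake(); }
    // Public event: wake every observer listening on the public queue.
    void notify_public() {
        for (Observer* o : observers_) o->wake();
    }
};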
FIG. 6 shows the unified abstraction over semaphore-driven CPU-intensive events and IO-multiplexing-driven IO-intensive events. event_queue is an abstract event template queue taking two template parameters. The first is the notification instance of the observer pattern, provided in two flavors, semaphore and interrupt, corresponding to semaphore-driven and IO-multiplexing-driven notification respectively. The second is the storage type, also provided in two flavors: a generalized function-object type and a generalized struct event type. An event queue of the function-object type can only be used in the private event queue access mode; an event queue of the struct type corresponds to the public event queue access mode, and there may be multiple public event queues. queue_service is the abstract, uniform access-management class for event queues, managing the different event types uniformly. event_service is the specialized CPU-intensive event management class exposed to users, and io_service is the specialized IO-intensive event management class exposed to users.
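The shape of Fig. 6 can be reconstructed in C++ under stated assumptions: the patent names event_queue, queue_service, event_service, and io_service, while the parameter and member names below are guesses of this illustration, and the bodies are stubs.

#include <functional>

struct semaphore_notifier { void notify(); void wait(); };  // semaphore-driven
struct interrupt_notifier { void notify(); void wait(); };  // IO-multiplexing-driven

// First parameter: the observer-pattern notification instance.
// Second parameter: the storage type (function object or struct event).
template <typename Notifier, typename Stored>
class event_queue {
    Notifier notifier_;

public:
    void push(Stored ev);    // lock-free enqueue, then notifier_.notify()
    bool pop(Stored& ev);    // blocks on notifier_.wait() when empty
};

// Specializations exposed to users, as the text describes:
using event_service = event_queue<semaphore_notifier,
                                  std::function<void()>>;   // CPU-intensive
struct net_event;            // generalized struct event type (definition elided)
using io_service = event_queue<interrupt_notifier, net_event>;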
While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of acts, it is to be understood and appreciated that the methodologies are not limited by the order of acts, as some acts may, in accordance with one or more embodiments, occur in different orders and/or concurrently with other acts from that shown and described herein or not shown and described herein, as would be understood by one skilled in the art.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk (disk) and disc (disc), as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and blu-ray disc where disks (disks) usually reproduce data magnetically, while discs (discs) reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A multithreaded event processing component, characterized in that it comprises a network receiving thread module, an event thread module, a network event queue module, an internal event queue module, an event processing thread module, and a timer event module, wherein the output of the network receiving thread module is connected to the input of the network event queue module, the output of the event thread module is connected to the input of the internal event queue module, and the outputs of the network event queue module and the internal event queue module are both connected to the input of the event processing thread module, and wherein:
the network receiving thread module is used for generating a network event and placing the generated network event into the network event queue module;
the network event queue module is used for storing at least one network event;
the event thread module is used for generating an internal event and placing the generated internal event into the internal event queue module;
the internal event queue module is used for storing at least one internal event;
the event processing thread module is used for reading and processing network events from the network event queue module, reading and processing internal events from the internal event queue module, generating and maintaining timer events in the timer event module, and processing timed-out events in the timer event module.
2. The multithreaded event processing component of claim 1, wherein network events correspond to network packets, internal events correspond to transactional events, and timer events are event tasks that need to be executed at a later time.
3. A multithreaded event processing component as in claim 1 wherein the internal event queue is divided into a private event queue and a public event queue depending on whether the internal event queue is shareable by multiple event processing threads.
4. The multithreaded event processing component of claim 3, wherein a private event queue corresponds to, and is exclusively owned by, one event processing thread, and a private event, once generated, is placed into the corresponding private event queue to be read and processed by the event processing thread that owns that queue.
5. The multithreaded event processing component of claim 3, wherein all public events in the same public event queue are competitively accessed by multiple event processing threads in the thread pool, and any public event in the public event queue is read and processed by exactly one event processing thread in the pool.
6. A multithreaded event processing component as in claim 5 wherein different priorities are set for the plurality of public event queues, public events set to different priorities being placed in the public event queues of corresponding priorities.
7. The multithreaded event processing component of claim 3, wherein the network event queue and the internal event queue are implemented as high-concurrency lock-free queues in the underlying event management layer, and lock-free waiting during multithreaded event access is realized through atomic CAS operations.
8. The multithreaded event processing component of claim 1, wherein the timer event module sets a uniform timer-timeout callback interface for timer events; a timer registration interface is called to set the timeout, a callback function, and a timer type; when the timeout expires, the component automatically invokes the timer-timeout callback interface; and a dynamic parameter binding mechanism further supports the user in passing context information through the timer-timeout callback interface, facilitating access to the context in asynchronous calls, wherein the dynamic parameter binding mechanism binds a function parameter list to the timer-timeout callback interface and, when the callback is invoked, the bound parameter list is used to pass the context information.
9. The multithreaded event processing component of claim 1, wherein the timer event module manages timer events through a hierarchical time wheel mechanism under which several time wheels are maintained, each time wheel representing a measurement range in a different time unit, with the ranges increasing from wheel to wheel; a circular linked list within each wheel supports seamless timer deletion; a timer event is placed on the circular linked list mounted in the slot of the wheel corresponding to the timer's timeout; and on each tick the event processing thread module advances the pointer of each wheel in turn, triggering execution of the timer events on the circular linked list in the slot the pointer reaches.
10. The multithreaded event processing component of claim 3, wherein a thread wake-up mechanism is uniformly encapsulated and abstracted using the observer design pattern, decoupling thread wake-up from the internal event queue and the network event queue, the thread wake-up mechanism treating an event processing thread as an observer and a network receiving thread or an event thread as a notifier: when a private event arrives, only the observer thread listening for that private event is woken; when a public event arrives, all observer threads listening for that public event are woken.
CN201911250374.8A 2019-12-09 2019-12-09 Multithreading event processing assembly Pending CN111309494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911250374.8A CN111309494A (en) 2019-12-09 2019-12-09 Multithreading event processing assembly

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911250374.8A CN111309494A (en) 2019-12-09 2019-12-09 Multithreading event processing assembly

Publications (1)

Publication Number Publication Date
CN111309494A (en) 2020-06-19

Family

ID=71150756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911250374.8A Pending CN111309494A (en) 2019-12-09 2019-12-09 Multithreading event processing assembly

Country Status (1)

Country Link
CN (1) CN111309494A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030233485A1 * 2002-06-13 2003-12-18 Microsoft Corporation Event queue
CN103092682A (en) * 2011-10-28 2013-05-08 浙江大华技术股份有限公司 Asynchronous network application program processing method
CN104951282A (en) * 2015-05-21 2015-09-30 中国人民解放军理工大学 Timer management system and method
CN110532067A (en) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 Event-handling method, device, equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
PATRICK SHAUGHNESSY: "Latest DSP Technology: the 'DaVinci' System, Framework and Components" (《最新DSP技术:"达芬奇"系统、框架和组件》), Beihang University Press, pages 170-193 *
PATRICK SHAUGHNESSY: "Latest DSP Technology: the 'DaVinci' System, Framework and Components" (《最新DSP技术:"达芬奇"系统、框架和组件》), Huazhong University of Science and Technology Press *
RYO MIZUTANI: "A Design and Implementation Method for Embedded Systems Using Communicating Sequential Processes with an Event-Driven and Multi-Thread Processor", 2012 International Conference on Cyberworlds, 25 October 2012 *
CHENG HAIYANG (程海洋): "Design and Implementation of the Event Mechanism Subsystem in a Distributed Cache" (分布式缓存中事件机制子系统的设计与实现), China Masters' Theses Full-text Database, Information Science and Technology, vol. 2018, no. 4, 15 April 2018 *
HAN BIAO (韩彪) et al.: "An Event-Driven Architecture Suited to Master-Slave Network Computing" (一种适于主-从模式网络计算的事件驱动架构), Journal of Xi'an Jiaotong University, no. 02, 10 February 2010 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040317A (en) * 2020-08-21 2020-12-04 海信视像科技股份有限公司 Event response method and display device
CN112040317B (en) * 2020-08-21 2022-08-09 海信视像科技股份有限公司 Event response method and display device
CN112732657A (en) * 2020-12-30 2021-04-30 广州金越软件技术有限公司 Method for efficiently reading large number of small files in ftp service scene

Similar Documents

Publication Publication Date Title
US10606653B2 (en) Efficient priority-aware thread scheduling
US8763012B2 (en) Scalable, parallel processing of messages while enforcing custom sequencing criteria
EP2893444B1 (en) Quota-based resource management
US8914805B2 (en) Rescheduling workload in a hybrid computing environment
US7802255B2 (en) Thread execution scheduler for multi-processing system and method
US7689998B1 (en) Systems and methods that manage processing resources
CN102541661B (en) Realize the method and apparatus of wait on address synchronization interface
CN113504985B (en) Task processing method and network equipment
US20240070121A1 (en) Thread safe lock-free concurrent write operations for use with multi-threaded in-line logging
US20100211954A1 (en) Practical contention-free distributed weighted fair-share scheduler
JP2013506179A (en) Execution management system combining instruction threads and management method
CN111309494A (en) Multithreading event processing assembly
CN111459622B (en) Method, device, computer equipment and storage medium for scheduling virtual CPU
WO2023011249A1 (en) I/o multiplexing method, medium, device and operation system
US9229716B2 (en) Time-based task priority boost management using boost register values
Wang et al. Real-time middleware for cyber-physical event processing
CN106997304B (en) Input and output event processing method and device
Parikh et al. Performance parameters of RTOSs; comparison of open source RTOSs and benchmarking techniques
CN115658278A (en) Micro task scheduling machine supporting high concurrency protocol interaction
CN108255515A (en) A kind of method and apparatus for realizing timer service
US9201688B2 (en) Configuration of asynchronous message processing in dataflow networks
JP2021060707A (en) Synchronization control system and synchronization control method
Liu et al. RTeX: an Efficient and Timing-Predictable Multi-threaded Executor for ROS 2
TWI748513B (en) Data processing method, system, electronic device and storage media
WO2022174442A1 (en) Multi-core processor, multi-core processor processing method, and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200619