WO2019223596A1 - Method, device, and apparatus for event processing, and storage medium - Google Patents
- Publication number
- WO2019223596A1, PCT/CN2019/087219 (CN2019087219W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- event
- queue
- type
- events
- queues
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/466—Transaction processing
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
Definitions
- The present application relates to the field of big data technology, and in particular, to an event processing method, apparatus, device, and storage medium.
- A distributed system is a system composed of a group of node devices that communicate and coordinate work through a network to complete common tasks. At each stage of task execution, the distributed system generates corresponding events as tasks are executed. For example, during a task that performs statistics on license plate data, the distributed system generates an event of writing license plate data to the storage system. The distributed system needs to process the generated events in order to complete the task.
- In the related art, the Spark architecture includes a client (English: client) node.
- The client node can include an event processing module for processing events.
- The client node creates an event queue during initialization, and the event queue is used to buffer events sent to the event processing module. During task processing, whenever any event is generated, the client node enqueues the event into the event queue. When the event reaches the head of the queue, the client node dequeues the event from the event queue and sends the event to the event processing module, which processes the event.
- However, the capacity of a single event queue is limited. Once the events in this event queue reach the capacity limit, new events cannot be accommodated, and the distributed system cannot continue to process new events, which affects the processing performance of the distributed system.
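The capacity limitation above can be illustrated with a short sketch (the queue size of 3 and the event names are invented for illustration; nothing here is prescribed by the patent):

```python
import queue

# A bounded queue standing in for the single event queue of the related art.
event_queue = queue.Queue(maxsize=3)

def try_enqueue(event):
    """Enqueue without blocking; return False when the queue is full."""
    try:
        event_queue.put_nowait(event)
        return True
    except queue.Full:
        return False

# Once the capacity limit is reached, new events can no longer be accommodated.
accepted = [try_enqueue(f"event-{i}") for i in range(5)]
print(accepted)  # → [True, True, True, False, False]
```

The last two events are simply rejected, which is the event-loss scenario the embodiments below set out to avoid.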
- The embodiments of the present application provide an event processing method, apparatus, device, and storage medium, which can solve the technical problem in the related art that the limited capacity of a single event queue results in low processing performance of the distributed system.
- the technical solution is as follows:
- In one aspect, an event processing method includes:
- when an event is generated in a distributed system, acquiring an event type of the event;
- determining a target event queue from a plurality of event queues based on the event type, where the plurality of event queues are respectively used to buffer events of multiple event types;
- enqueuing the event into the target event queue; and
- when the event is dequeued from the target event queue, processing the event.
- determining the target event queue from a plurality of event queues based on the event type includes:
- querying routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes multiple event types and corresponding multiple event queue identifiers; and
- the event queue corresponding to the event queue identifier is used as the target event queue.
- Optionally, the depth of any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- Optionally, before the obtaining of the event type of the event, the method further includes: generating the multiple event queues for the multiple event types.
- Optionally, the method further includes: for any event queue in the multiple event queues, concurrently sending events in the event queue to the event processing module through multiple threads;
- the number of threads corresponding to any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- Optionally, the processing of the event includes: sending the event to an event processing module, and processing the event through the event processing module.
- Optionally, there are at least two event processing modules corresponding to any one of the multiple event types.
- Optionally, the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- Optionally, the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
- In another aspect, an event processing apparatus is provided, where the apparatus includes:
- an acquisition module configured to acquire an event type of an event when the event is generated in the distributed system;
- a determining module configured to determine a target event queue from a plurality of event queues based on the event type, where the plurality of event queues are respectively used to buffer events of multiple event types;
- an enqueuing module configured to enqueue the event into the target event queue; and
- an event processing module configured to process the event when the event is dequeued from the target event queue.
- the determining module includes:
- a query submodule configured to query routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes multiple event types and corresponding multiple event queue identifiers;
- a determining submodule is configured to use an event queue corresponding to the event queue identifier as the target event queue.
- Optionally, the depth of any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- the apparatus further includes:
- a generating module is configured to generate multiple event queues for the multiple event types.
- the apparatus further includes:
- the sending module is configured to send an event in the event queue to the event processing module concurrently through multiple threads for any event queue in the multiple event queues.
- the number of threads corresponding to any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- the apparatus further includes:
- a matching module configured to match the event with at least one sub-event type under the event type to obtain the sub-event type matched by the event;
- the sending module is configured to send the event to an event processing module corresponding to the sub-event type.
- Optionally, there are at least two event processing modules corresponding to any one of the multiple event types.
- Optionally, the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- Optionally, the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
- In another aspect, a computer device includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the foregoing event processing method.
- a computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor to implement the foregoing event processing method.
- In another aspect, a computer program product containing instructions, when run on a computer device, enables the computer device to implement the event processing method described above.
- In another aspect, a chip includes a processor and/or program instructions; when the chip runs, the foregoing event processing method is implemented.
- The method, apparatus, device, and storage medium provided in the embodiments of the present application introduce a multi-queue event caching mechanism for a distributed system.
- Each event is enqueued into the corresponding event queue according to its event type. This increases the number of event queues and thereby the total capacity of the event queues, further improving the ability of the distributed system to cache events and greatly improving the performance of the distributed system.
- When the distributed system faces highly concurrent access, this mechanism can meet the distributed system's need to cache a large number of events.
- the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which prevents the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
- different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
- FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
- FIG. 2 is a flowchart of an event processing method according to an embodiment of the present application.
- FIG. 3 is a flowchart of an event processing method according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of an event processing method according to an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application.
- FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
- FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
- the implementation environment includes a master node 101, at least one slave node 102, and a client node 103.
- the master node 101, the at least one slave node 102, and the client node 103 are connected through a network.
- the master node 101, the at least one slave node 102, and the client node 103 can form a distributed system and work together to complete tasks.
- the client node 103 can generate a task to be executed, and send the task to be executed to the master node 101.
- The master node 101 can assign a task to each slave node 102; each slave node 102 can execute the task and send the task processing result to the client node 103.
- the architecture of the distributed system includes, but is not limited to, various architectures such as a spark architecture, a flink architecture, a mapreduce architecture, and a storm architecture.
- For example, the client node 103 may be a driver (English: Driver) node in the Spark architecture, the master node 101 may be a cluster manager (English: Cluster Manager) node, that is, the master (English: Master) node, in the Spark architecture, and the slave node 102 may be a worker (English: Worker) node in the Spark architecture.
- the client node 103 may be a computer device, such as a terminal or a server, and may include a personal computer, a notebook computer, a mobile phone, and the like.
- the master node 101 and at least one slave node 102 may include a server, a terminal, and the like.
- FIG. 2 is a flowchart of an event processing method provided by an embodiment of the present application. The method is executed by a computer device. The method includes the following steps:
- when an event is generated in the distributed system, obtaining an event type of the event;
- determining a target event queue from a plurality of event queues based on the event type, where the plurality of event queues are respectively used to buffer events of multiple event types;
- enqueuing the event into the target event queue; and
- when the event is dequeued from the target event queue, processing the event.
- the method provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
- Each event is enqueued into the corresponding event queue according to its event type.
- This increases the number of event queues and thereby the total capacity of the event queues, improving the ability of the distributed system to cache events and greatly improving the performance of the distributed system.
- the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
- different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
- Optionally, determining the target event queue from the multiple event queues based on the event type includes: querying routing information to obtain an event queue identifier corresponding to the event type, and using the event queue corresponding to the event queue identifier as the target event queue, where the routing information includes multiple event types and corresponding multiple event queue identifiers.
- Optionally, the depth of any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- Optionally, before obtaining the event type of the event, the method further includes: generating the multiple event queues for the multiple event types.
- Optionally, the method further includes: for any event queue in the multiple event queues, concurrently sending events in the event queue to the event processing module through multiple threads;
- the number of threads corresponding to any one of the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- Optionally, the processing of the event includes: sending the event to an event processing module, and processing the event through the event processing module.
- Optionally, there are at least two event processing modules corresponding to any one of the multiple event types.
- Optionally, the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- Optionally, the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
- FIG. 3 is a flowchart of an event processing method provided by an embodiment of the present application.
- the method is executed by a computer device, and the computer device may be a node device where an event queue is located in a distributed system.
- the computer device may be a driver node, that is, a client node.
- the method includes:
- a computer device generates multiple event queues for multiple event types.
- a multi-queue event cache mechanism is designed, and multiple event queues are generated so that events of different event types can be cached through multiple event queues, respectively.
- the computer device can obtain multiple event types. For each of the multiple event types, the computer device can generate a corresponding event queue for the event type, thereby obtaining multiple event queues. Among them, each event queue is used to cache events of a corresponding event type.
- The division into multiple event types can be determined according to the business needs of the distributed system.
- The multiple event types can include at least two of a heartbeat event type, a resource monitoring event type, a resource application event type, a system file event type, a job event type, and other event types.
- The heartbeat event type includes various heartbeat events, for example, heartbeat events between the master node and each slave node, heartbeat events between the master service and each slave service, heartbeat events between the client node and each slave node, heartbeat events between the client node and the master node, etc.
- The resource monitoring event type includes various events that obtain information on the use of resources in the distributed system.
- The resources can include CPU (Central Processing Unit), memory, disk I/O (Input/Output), network bandwidth, etc.
- The resource application event type includes various events for applying for resources and events for reclaiming resources, for example, an event that applies for resources such as CPU and memory when a job is submitted to the distributed system, an event that triggers the system to perform GC (Garbage Collection), and an event that reclaims the resources requested by a task after the task execution is completed.
- The system file event type includes various events that interact with the storage system.
- The interaction includes writing data to the storage system and reading data from the storage system.
- The storage system can include local storage, HDFS, databases, hard disks, cloud storage, etc., and the data can include logs.
- The job event type includes events in which a client submits a job to the distributed system.
- The distributed system splits the job into multiple job stages, and then splits each job stage into multiple tasks; for example, DAG (Directed Acyclic Graph), Job, Stage (job stage), and Task events in the Spark architecture are merged into this event type.
- Correspondingly, the multiple event queues generated by the computer device may include at least two of: an event queue corresponding to the heartbeat event type, an event queue corresponding to the resource monitoring event type, an event queue corresponding to the resource application event type, an event queue corresponding to the system file event type, an event queue corresponding to the job event type, and an event queue corresponding to other event types.
- the event queue corresponding to the heartbeat event type is used to buffer events belonging to the heartbeat event type.
- the event queue corresponding to the resource monitoring event type is used to cache events belonging to the resource monitoring event type;
- the event queue corresponding to the resource application event type is used to cache events belonging to the resource application event type
- the event queue corresponding to the system file event type is used to cache events belonging to the system file event type;
- the event queue corresponding to the job event type is used to cache events belonging to the job event type
- and the event queues corresponding to other event types are used to cache events belonging to other event types.
- the event queue can be expressed as eventQueue
- For example, the computer device can generate 6 event queues for 6 event types, which are in turn eventQueue1, eventQueue2, ..., eventQueue6, where eventQueue1 is the event queue corresponding to the heartbeat event type, eventQueue2 is the event queue corresponding to the resource monitoring event type, and so on.
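Generating one queue per event type can be sketched as follows, following the eventQueue1..eventQueue6 naming used above (the default depth of 100 is an assumption for illustration):

```python
import queue

# The six example event types named in this embodiment.
EVENT_TYPES = ["heartbeat", "resource monitoring", "resource application",
               "system file", "job", "other"]

def generate_event_queues(event_types, depth=100):
    # Each event type gets its own bounded queue for caching its events.
    return {f"eventQueue{i}": queue.Queue(maxsize=depth)
            for i, _ in enumerate(event_types, start=1)}

queues = generate_event_queues(EVENT_TYPES)
print(sorted(queues))
```

Because each type has its own queue, the total caching capacity grows with the number of types, which is the effect described in the next paragraphs.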
- multiple event queues are generated for multiple event types.
- Adding event queues increases the number of event queues, so the total capacity of the event queues is increased, which also improves the ability of the distributed system to cache events and further improves the performance and scalability of the distributed system.
- When the distributed system faces highly concurrent access, this meets the distributed system's need to cache a large number of events and improves the processing performance of the distributed computing system.
- The probability of event loss is greatly reduced, which avoids the frequent loss of events by the distributed system and also removes the hidden danger of system instability or unavailability caused by lost resource cleanup events, thereby improving the stability and availability of the distributed system.
- Each event queue is dedicated to caching events of the corresponding event type without needing to accommodate events of other event types, which alleviates the storage pressure on a single event queue.
- the computer device may obtain the depths of multiple event queues and generate multiple event queues according to the depths of the multiple event queues, so that the capacity of each event queue can meet business requirements.
- the depth of the event queue is used to indicate the number of events that the event queue can hold.
- the depth of the event queue can be equal to the number of events that the event queue can hold.
- Alternatively, the depth of the event queue can be determined from the number of events that the event queue can hold and a threshold coefficient, which may be 80%, 60%, or the like.
- Optionally, the depth of each event queue in the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- The depth of each event queue can be designed in conjunction with the time taken to process events: the more time-consuming it is to process a certain type of event, the deeper the corresponding event queue, and the more events it can cache, which improves the ability to cache such events. Similarly, the faster a certain type of event is processed, the shallower the corresponding event queue can be.
- In a specific implementation, the depth of the event queue corresponding to an event type can be configured in advance according to the time spent processing events of that event type. If processing events of the event type takes longer, the depth of the event queue for that event type can be configured to be larger; if processing events of the event type takes less time, the depth of the event queue can be configured to be smaller. In this way, the computer device can obtain the depth configured for each event queue, and after generating the event queues according to the configured depths, the effect that the depth of each event queue is positively related to the time taken to process events of the corresponding event type is achieved.
- For example, since the time taken to process events of the heartbeat event type is usually short, the depth of the event queue for the heartbeat event type can be set small; since the time taken to process events of the job event type is usually long, the event queue for the job event type can be set to a greater depth.
- the depth of each event queue may also be a default value or an empirical value. This is not limited.
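The depth configuration described above can be sketched as follows. The millisecond figures and the linear formula are invented for illustration; the patent only requires that depth be positively related to processing time:

```python
# Assumed average processing time per event type, in milliseconds (hypothetical).
ASSUMED_AVG_PROCESSING_MS = {"heartbeat": 1, "job": 50}

def configured_depth(event_type, base_depth=100, per_ms=10):
    # A type whose events take longer to process gets a deeper queue.
    return base_depth + per_ms * ASSUMED_AVG_PROCESSING_MS[event_type]

depths = {t: configured_depth(t) for t in ASSUMED_AVG_PROCESSING_MS}
print(depths)  # → {'heartbeat': 110, 'job': 600}
```

Any monotone mapping from processing time to depth would satisfy the stated property; the linear form here is merely the simplest choice.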
- the computer device obtains an event type of the event.
- Distributed systems can generate various events during operation.
- the distributed system triggers a corresponding event due to the execution of the task.
- For example, when a client submits a job, the Driver node will establish a connection with the Cluster Manager node, register with the Cluster Manager node, and apply for resources.
- Each Worker node can send heartbeats to the Driver node.
- After the Driver node gets the job, it can build a DAG graph, decompose the DAG graph into multiple job stages, and decompose each job stage into multiple tasks.
- the distributed system can also generate other events in other scenarios. This embodiment does not limit the scenarios that generate events and the specific types of events.
- the computer device can obtain the event type of the event. Specifically, multiple event types may be pre-configured. When an event is generated, the computer device may obtain an event type that matches the event from the pre-configured multiple event types.
- For example, a correspondence relationship between event types and event names may be set in advance, where each event type corresponds to at least one event name.
- The computer device may obtain the name of the event, query the correspondence relationship, and obtain the event type corresponding to the event name.
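The name-to-type correspondence can be sketched as a lookup table; the event names in this table are hypothetical examples, not names defined by the patent:

```python
# Pre-configured correspondence between event names and event types (illustrative).
EVENT_NAME_TO_TYPE = {
    "ExecutorHeartbeat": "heartbeat event type",
    "WriteToStorage": "system file event type",
    "SubmitJob": "job event type",
}

def event_type_of(event_name):
    # Names not registered in the correspondence fall back to the catch-all type.
    return EVENT_NAME_TO_TYPE.get(event_name, "other event types")

print(event_type_of("SubmitJob"), "/", event_type_of("SomethingElse"))
```

The fallback to "other event types" mirrors the catch-all queue described in step 301.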
- the computer device determines a target event queue from a plurality of event queues based on the event type.
- The computer device can establish the correspondence between event types and event queues in advance by means of event routing. After the event type of the event is determined, the event queue corresponding to the event type can be determined from the multiple event queues based on the event type and the pre-established correspondence, and that event queue is used as the target event queue, so that the generated event is enqueued into the target event queue.
- the correspondence between the event type and the event queue may be indicated by routing information.
- the computer device can use the event type as an index, query the routing information, obtain the event queue identifier corresponding to the event type, and use the event queue corresponding to the event queue identifier as the target event queue.
- the routing information is used to indicate the correspondence between the event type and the event queue.
- the routing information includes multiple event types and corresponding multiple event queue identifiers.
- The event queue identifier is used to identify the corresponding event queue and may be the queue's name, number, etc. With reference to the six exemplary event types in step 301, the routing information can be shown in Table 1 below:
- Table 1:
- Event type | Event queue identifier
- Heartbeat event type | eventQueue1
- Resource monitoring event type | eventQueue2
- Resource application event type | eventQueue3
- System file event type | eventQueue4
- Job event type | eventQueue5
- Other event types | eventQueue6
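The routing information of Table 1 can be rendered directly as a lookup table keyed by event type:

```python
# Routing information: event type -> event queue identifier (from Table 1).
ROUTING_INFO = {
    "heartbeat event type": "eventQueue1",
    "resource monitoring event type": "eventQueue2",
    "resource application event type": "eventQueue3",
    "system file event type": "eventQueue4",
    "job event type": "eventQueue5",
    "other event types": "eventQueue6",
}

def target_queue_id(event_type):
    # Query the routing information using the event type as the index.
    return ROUTING_INFO[event_type]

print(target_queue_id("heartbeat event type"))  # → eventQueue1
```

A query is a single dictionary lookup, so routing cost does not grow with the number of cached events.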
- the computer device queues the event into the target event queue.
- Enqueue refers to sending an event to a queue, that is, inserting an event into the queue, so that the event is queued in the queue, and the event is cached.
- The event queue can be a FIFO (First In, First Out) queue, and enqueuing can be inserting the event at the tail of the event queue.
- After the computer device determines the target event queue of the event, it can enqueue the event into the target event queue corresponding to the event type, that is, send the event to the target event queue by inserting the event at the tail of the target event queue. After that, the event queues in the target event queue; when all events that precede it have been dequeued from the target event queue, the event reaches the head of the target event queue, waiting to be dequeued.
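The FIFO behavior described above can be sketched with a double-ended queue: enqueuing inserts at the tail, and dequeuing removes from the head (event names are illustrative):

```python
from collections import deque

target_event_queue = deque()

for event in ["e1", "e2", "e3"]:
    target_event_queue.append(event)       # enqueue at the tail

first_out = target_event_queue.popleft()   # dequeue from the head
print(first_out, list(target_event_queue))  # → e1 ['e2', 'e3']
```

The earliest-enqueued event is always the first to be dequeued, so events of one type are processed in generation order.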
- the function of event routing can be implemented by querying routing information and enqueuing events to an event queue, that is, each event generated can be routed to a corresponding event queue to achieve The effect of events listed by type.
- the foregoing steps 303 and 304 may be encapsulated into a routing module, and the function of event routing is implemented by the routing module.
- the computer device may execute the foregoing steps 303 to 304 by running the routing module.
- various heartbeat events may be routed to an event queue corresponding to the heartbeat event type according to the heartbeat event type.
- events such as write data events and read events can be routed to the event queue corresponding to the system file event type according to the system file event type, and so on.
- Compared with unifying all events into a single event queue, a refined event caching mechanism is provided, in which various events are routed to their corresponding event queues. By routing different types of events to different event queues, at least the following technical effects can be achieved:
- each event can be sent to the corresponding event queue according to the event type, which realizes the function of each event being listed separately according to the event type.
- In the single-queue solution, if the event processing module that processes log events is currently busy, a log event cannot be dequeued and blocks the event queue.
- Short-lived events, such as heartbeat events, are then also blocked in the event queue, unable to dequeue from the event queue and unable to be sent to the corresponding event processing module, which affects the event processing efficiency of the entire distributed system.
- In the embodiments of the present application, congestion of the event queue for log events will not interfere with the event queue for heartbeat events. Even if log events are blocked in the event queue corresponding to the log event type, heartbeat events can still normally enqueue into and dequeue from the event queue corresponding to the heartbeat event type, thereby improving the event processing efficiency of the entire distributed system.
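The isolation property above can be demonstrated with two small queues (capacities and event names are invented for illustration): filling the queue for one event type does not prevent another type from enqueuing and dequeuing normally.

```python
import queue

log_queue = queue.Queue(maxsize=1)
heartbeat_queue = queue.Queue(maxsize=1)

log_queue.put_nowait("log-1")              # the log queue is now full ("blocked")
log_blocked = log_queue.full()

heartbeat_queue.put_nowait("heartbeat-1")  # the heartbeat queue is unaffected
heartbeat_out = heartbeat_queue.get_nowait()

print(log_blocked, heartbeat_out)  # → True heartbeat-1
```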
- the computer device sends the event to the event processing module.
- Each event in the target event queue will move from the tail of the queue to the head of the queue.
- the computer device will send the event to the event processing module.
- the event processing module can also be called event handler, listener, etc.
- the event processing module is used to process events. It can be a virtual program module and can be executed by a thread, object, process or other program execution unit in a computer device.
- the event processing module encapsulates a method for processing events, and the event processing module can call the encapsulated method to process the event.
- the computer device may generate a thread for sending an event to the event processing module, and send the event to the event processing module through the thread.
- the thread refers to the execution flow of the program, and is the basic unit for CPU execution.
- The thread used to send events can be a daemon thread; the daemon thread can listen to the target event queue, and when an event is dequeued from the target event queue, the daemon thread can obtain the event and send it to the event processing module.
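The daemon dispatch thread described above can be sketched as follows. The `None` sentinel used to stop the loop is an artifact of this demo, not part of the patent's design, and the module names are illustrative:

```python
import queue
import threading

target_event_queue = queue.Queue()
processed = []

def event_processing_module(event):
    processed.append(event)

def dispatch_loop():
    # Listen on the queue and forward each dequeued event to the module.
    while True:
        event = target_event_queue.get()   # block until an event is available
        if event is None:                  # demo-only stop signal
            break
        event_processing_module(event)

daemon = threading.Thread(target=dispatch_loop, daemon=True)
daemon.start()

for e in ["heartbeat-1", "job-1"]:
    target_event_queue.put(e)
target_event_queue.put(None)               # stop the loop for this demo
daemon.join()
print(processed)  # → ['heartbeat-1', 'job-1']
```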
- events can be distributed to the event processing module concurrently through multiple threads.
- Concurrency refers to a mechanism in which multiple threads execute tasks in turn. For example, when thread A, thread B, and thread C execute tasks concurrently, thread A executes a task first, then thread B executes a task, and then thread C executes a task.
- the multi-threaded concurrency mechanism can greatly improve the overall efficiency of task execution.
- events in the event queue can be sent to the event processing module concurrently through multiple threads. That is, multiple threads will send events to the event processing module in turn.
- after one thread starts sending an event to the event processing module, there is no need to wait for that send to finish; instead, the next thread continues to send the next event to the event processing module.
- the process of sending these two events through the two threads can include the following two steps:
- Step 1: When the first event is dequeued from the event queue, the first event is sent to the event processing module through the first thread.
- Step 2: When the second event is dequeued from the event queue, the second event is sent to the event processing module through the second thread, where the second thread is different from the first thread.
- the second event, which follows the first event, moves to the head of the queue and is dequeued from the event queue. At this point there is no need to wait for the first thread to finish sending the first event; the second event can be sent directly through the second thread.
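The two-step, multi-thread sending scheme above can be sketched with a pool of sender threads. This is a minimal illustration only, and the function `send_to_handler` is a hypothetical stand-in for delivering an event to the event processing module:

```python
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

processed = []
lock = threading.Lock()

def send_to_handler(event):
    # Hypothetical stand-in for sending one event to the event
    # processing module; each send may take arbitrary time.
    with lock:
        processed.append(event)

event_queue = queue.Queue()
for i in range(10):
    event_queue.put(f"event-{i}")

# Two sender threads: as soon as an event is dequeued it is handed to
# a free thread, and the next dequeue does not wait for the previous
# send to finish.
with ThreadPoolExecutor(max_workers=2) as senders:
    while not event_queue.empty():
        senders.submit(send_to_handler, event_queue.get())
```

The `with` block waits for all submitted sends to complete, so by the end every event has been delivered, though not necessarily in queue order.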
- the number of the multiple threads may be two or more, and the specific number is determined according to business requirements, which is not limited in this embodiment.
- a single thread may be used in a distributed system to serially send events to the event processing module; that is, one fixed thread sends the events in the event queue to the event processing module.
- with a single thread, when the current event is dequeued from the event queue, the thread must first obtain the previous event, send it to the event processing module, and wait for that send to finish; only then can the thread continue to send the next event, so the efficiency of sending events is very low.
- multiple threads can send events concurrently, and multiple threads can send each event in the event queue in turn.
- the multi-thread mechanism greatly increases the speed of sending events, and the sending of one event in the event queue does not block the sending of the next, thereby greatly improving the efficiency of sending events.
- the number of threads sending events for each event queue may be designed in combination with the time consumption of processing events.
- the number of threads corresponding to any event queue among the multiple event queues is positively related to the time taken to process events of the corresponding event type; that is, the more time it takes to process a certain type of event, the greater the number of threads the corresponding event queue uses to send such events, improving the ability to send them.
- conversely, the faster a certain type of event is processed, the fewer threads are used to send events from that event queue (for example, a single thread), thereby saving system resources.
- the number of threads for the event queue corresponding to an event type can be configured in advance according to the time taken to process events of that type. If processing events of the type takes a long time, more threads can be configured for its event queue; if processing takes a short time, fewer threads can be configured. In this way, the computer device can obtain the number of threads configured for each event queue and, after generating the corresponding threads for each event queue based on that configuration, the number of threads is positively related to the time taken to process events of the corresponding event type.
- for the event queue of the heartbeat event type, it can be set so that events are still sent serially through a single thread.
- multiple threads can be set to send events in the event queue concurrently.
- events in an event queue of time-consuming events can be sent through multiple threads, while events in an event queue of quick events can be sent through a single thread.
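The configuration rule above (more sender threads for time-consuming event types, a single thread for quick ones) might be sketched as follows. The average processing times and the 50-ms-per-thread ratio are invented for this illustration, not taken from the embodiment:

```python
# Hypothetical average time to process one event of each type, in ms.
avg_processing_ms = {
    "heartbeat": 1,      # quick: one sender thread is enough
    "system_file": 50,
    "job": 200,          # time-consuming: more sender threads
}

def sender_thread_count(event_type, ms_per_thread=50, max_threads=8):
    # Thread count grows with processing time: roughly one sender
    # thread per 50 ms of average processing time, at least one.
    ms = avg_processing_ms[event_type]
    return max(1, min(max_threads, -(-ms // ms_per_thread)))  # ceiling division
```

Under this rule the heartbeat queue keeps a single serial sender while the job queue gets several, matching the positive correlation described above.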
- this improves the flexibility of sending events through threads and significantly improves the ability to send time-consuming events, so that events in time-consuming event queues can be handled in a targeted manner.
- multiple event processing modules may be introduced to process events concurrently, that is, for each event type of multiple event types, all events of the event type may be collectively processed through multiple event processing modules.
- at least two event processing modules may correspond to any one of the multiple event types. While one event processing module is processing an event, there is no need to wait for it to finish; another event processing module can continue to process events.
- the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type; that is, the longer events of an event type take to process, the greater the number of event processing modules corresponding to that event type, and the stronger the distributed system's ability to handle such events. For example, the time-consuming job event type may have a larger number of event processing modules, while the quickly processed heartbeat event type may have a smaller number.
- the degree of concurrency of processing events can be improved, thereby improving the concurrent performance and availability of the distributed system.
- designing the number of event processing modules per event type improves the flexibility of processing events and at the same time significantly improves the ability to process time-consuming events, so that such events can be handled in a targeted manner.
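A simplified sketch of keeping several event processing module instances per event type. Here the instances are rotated round-robin for determinism; real concurrent processing would run each instance on its own execution unit. The handler functions and class name are invented for the example:

```python
import itertools

class HandlerPool:
    """Hypothetical pool of event processing modules for one event type:
    several instances share the load so that one slow event does not
    hold up processing of the others."""

    def __init__(self, handlers):
        self._next_handler = itertools.cycle(handlers)

    def process(self, event):
        # Pick the next handler instance for this event (round-robin).
        return next(self._next_handler)(event)

# A time-consuming event type gets more handler instances.
job_pool = HandlerPool([
    lambda e: ("handler-1", e),
    lambda e: ("handler-2", e),
])
```

Successive events of the same type land on alternating instances, mirroring the idea that one module need not finish before another takes the next event.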
- At least one sub-event type under the event type may be determined, the event is matched against the at least one sub-event type under the event type to obtain the sub-event type matched by the event, and the event is sent to the event processing module corresponding to that sub-event type.
- Regarding event types and sub-event types: an event type can be thought of as a large class and a sub-event type as a small class. A sub-event type is a more specific and detailed type than an event type and belongs to the event type. Each event type can include one or more sub-event types.
- the job event type may include a type of starting a job, a type of ending a job, a type of starting a task, a type of ending a task, and the like.
- all sub-event types can be classified under their corresponding event types in advance, and each event type is stored on the computer device together with all sub-event types under it. After the computer device determines the event type of the event, it can obtain at least one sub-event type under that event type.
- the event can be matched against each sub-event type in turn. For example, all sub-event types under the event type can be traversed; for the currently traversed sub-event type, it is determined whether the event matches it, and when the event matches, that sub-event type is taken as the sub-event type matched by the event.
- the names of the sub-event types can be stored in advance, and whether the name of the event is the same as the name of a sub-event type determines whether the event matches that sub-event type.
- the correspondence between sub-event types and event processing modules can be established in advance. After the sub-event type matched by the event is obtained, the corresponding event processing module can be determined according to the pre-established correspondence.
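A sketch of the pre-established correspondence between sub-event types and event processing modules, using the name-based matching described above. All sub-event type names and handler functions here are hypothetical, not identifiers from the embodiment:

```python
# Pre-established correspondence: sub-event type name -> handler.
handler_by_subtype = {
    "job_started": lambda e: f"start:{e}",
    "job_finished": lambda e: f"finish:{e}",
    "task_started": lambda e: f"task:{e}",
}

def dispatch(event_name, payload):
    # Match the event's name against the stored sub-event type names,
    # then send the event to the corresponding event processing module.
    handler = handler_by_subtype.get(event_name)
    if handler is None:
        raise KeyError(f"no handler registered for sub-event type {event_name!r}")
    return handler(payload)
```

A dictionary lookup replaces the large-range traversal criticized later in the text: matching is a single hash lookup instead of comparing against every sub-event type in turn.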
- the computer device processes the event through the event processing module.
- after the event processing module receives the event, it can call its own method to process the event and obtain the processing result of the event.
- the event cache mechanism in current distributed systems has the following drawbacks:
- a lost event may be an event that triggers resource reclamation or another such event, for example a GC event;
- an OOM (OutOfMemoryError, out-of-memory) mechanism may then be triggered, that is, the operating system kills the process to release memory, affecting the node device where the event queue is located;
- the normal operation of the node device is affected, and the node device may even crash or become paralyzed, so that it cannot communicate with other node devices in the distributed system, affecting the operation of the distributed system. That is, the loss of events is very likely to affect the stability of the distributed system and can easily render the system unavailable.
- each event has to wait for all events ahead of it in the event queue to be sent before it can be sent, so the efficiency of sending events is extremely low. In addition, if processing a certain type of event is time-consuming and the event queue becomes congested, head-of-line blocking will delay the dequeuing of other types of events in the queue, reducing the efficiency with which the distributed system processes events.
- the event needs to be matched against all sub-event types one by one in order to find a matching event processing module for processing.
- this matching method requires traversal over a large range, which not only reduces the efficiency with which the distributed system processes events but may also cause the system to crash or become paralyzed.
- the embodiments of the present application solve the above-mentioned technical problems and propose an optimized event processing scheme.
- in this solution, by reclassifying events in the distributed system, creating different event queues for different event types, and introducing an event routing method between events and event queues, the efficiency of event distribution and processing is improved. It solves not only the insufficient event queue capacity of current distributed systems and the low overall event processing efficiency caused by a single time-consuming operation, but also the insufficient performance caused by sending events through a single thread, as well as the operating-system OOM caused by lost events, which in turn leads to communication problems between node devices and to system crashes or paralysis.
- the concurrency and stability of the distributed system are further improved.
- the method provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
- each event is enqueued into the corresponding event queue according to its event type.
- Increasing the number of event queues increases the total capacity of the event queues, thereby improving the ability of the distributed system to cache events and greatly improving its performance.
- the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
- different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
- FIG. 5 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application.
- the apparatus includes: an obtaining module 501, a determining module 502, an enqueuing module 503, and an event processing module 504.
- An obtaining module 501 configured to obtain an event type of an event when an event is generated in the distributed system
- a determining module 502 configured to determine a target event queue from a plurality of event queues based on the event type, and the multiple event queues are respectively used to buffer events of multiple event types;
- the enqueuing module 503 is configured to enqueue the event into the target event queue
- An event processing module 504 is configured to process the event when the event is dequeued from the target event queue.
- the device provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
- each event is individually enqueued into the corresponding event queue according to its event type.
- Increasing the number of event queues increases the total capacity of the event queues, thereby improving the ability of the distributed system to cache events and greatly improving its performance.
- the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
- different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
- the determining module 502 includes:
- the query submodule is used to query routing information to obtain an event queue identifier corresponding to the event type, and the routing information includes multiple event types and corresponding multiple event queue identifiers;
- a determining submodule is configured to use the event queue corresponding to the event queue identifier as the target event queue.
- the depth of any one of the plurality of event queues is positively related to the time taken to process events of the corresponding event type.
- the apparatus further includes:
- a generating module is configured to generate multiple event queues for the multiple event types.
- the apparatus further includes:
- the sending module is configured to send events in the event queue to the event processing module 504 concurrently through multiple threads for any event queue in the multiple event queues.
- the number of threads corresponding to any event queue among the multiple event queues is positively related to the time taken to process events of the corresponding event type.
- the apparatus further includes:
- a matching module configured to match the event with at least one sub-event type under the event type to obtain a sub-event type matched by the event;
- the sending module is configured to send the event to an event processing module 504 corresponding to the sub-event type.
- the event processing module 504 corresponding to any one of the multiple event types includes at least two.
- the number of the event processing modules 504 corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- the multiple event queues include an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, At least two of the event queues corresponding to the job event type and the event queues corresponding to other event types.
- the event processing device provided in the foregoing embodiment is illustrated only by the division of the foregoing functional modules when it processes events.
- in practical applications, the above functions may be allocated to different functional modules as required.
- that is, the internal structure of the event processing device is divided into different functional modules to complete all or part of the functions described above.
- the event processing device and the event processing method embodiments provided by the foregoing embodiments belong to the same concept. For specific implementation processes, refer to the method embodiments, and details are not described herein again.
- FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
- the computer device 600 may vary greatly in configuration or performance, and may include one or more central processing units (CPUs) 601 and one or more memories 602, where at least one instruction is stored in the memory 602 and is loaded and executed by the processor 601 to implement the event processing methods provided by the foregoing method embodiments.
- the computer device may also have components such as a wired or wireless network interface and an input/output interface.
- the computer device may also include other components for implementing the functions of the device, and details are not described herein.
- a computer-readable storage medium, such as a memory including instructions, is also provided; the instructions can be executed by a processor in a computer device to complete the event processing method in the foregoing embodiments.
- the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
- the present application further provides a computer program product containing instructions, which when executed on a computer device, enables the computer device to implement the event processing method in the foregoing embodiment.
- the present application further provides a chip that includes a processor and/or program instructions.
- when the chip runs, the event processing method in the foregoing embodiments is implemented.
- the program may be stored in a computer-readable storage medium.
- the storage medium may be a read-only memory, a magnetic disk or an optical disk.
Abstract
Description
Event type | Event queue ID |
---|---|
Heartbeat event type | eventQueue1 |
Resource monitoring event type | eventQueue2 |
Resource request event type | eventQueue3 |
System file event type | eventQueue4 |
Job event type | eventQueue5 |
Other event types | eventQueue6 |
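The routing information in the table above can be read as a simple lookup from event type to event queue identifier. A minimal sketch follows; only the queue identifiers come from the table, while the English event-type keys and function names are illustrative:

```python
# Routing information: event type -> event queue identifier.
routing = {
    "heartbeat": "eventQueue1",
    "resource_monitoring": "eventQueue2",
    "resource_request": "eventQueue3",
    "system_file": "eventQueue4",
    "job": "eventQueue5",
    "other": "eventQueue6",
}

queues = {queue_id: [] for queue_id in routing.values()}

def enqueue(event_type, event):
    # Query the routing info for the target queue identifier, then
    # enqueue the event into that target event queue.
    queue_id = routing.get(event_type, routing["other"])
    queues[queue_id].append(event)
    return queue_id
```

Event types without a dedicated queue fall back to the queue for other event types, matching the "other event types" row of the table.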
Claims (22)
- 1. An event processing method, wherein the method comprises: when an event is generated in a distributed system, obtaining an event type of the event; determining a target event queue from a plurality of event queues based on the event type, the plurality of event queues being used to respectively cache events of multiple event types; enqueuing the event into the target event queue; and when the event is dequeued from the target event queue, processing the event.
- 2. The method according to claim 1, wherein determining the target event queue from the plurality of event queues based on the event type comprises: querying routing information to obtain an event queue identifier corresponding to the event type, the routing information comprising multiple event types and the corresponding event queue identifiers; and using the event queue corresponding to the event queue identifier as the target event queue.
- 3. The method according to claim 1, wherein the depth of any one of the plurality of event queues is positively related to the time taken to process events of the corresponding event type.
- 4. The method according to claim 1, wherein before obtaining the event type of the event, the method further comprises: generating the plurality of event queues for the multiple event types.
- 5. The method according to claim 1, wherein the method further comprises: for any event queue among the plurality of event queues, concurrently sending events in the event queue to an event processing module through multiple threads; and processing the events through the event processing module.
- 6. The method according to claim 5, wherein the number of threads corresponding to any event queue among the plurality of event queues is positively related to the time taken to process events of the corresponding event type.
- 7. The method according to claim 1, wherein processing the event comprises: matching the event with at least one sub-event type under the event type to obtain the sub-event type matched by the event; sending the event to an event processing module corresponding to the sub-event type; and processing the event through the event processing module.
- 8. The method according to any one of claims 5 to 7, wherein at least two event processing modules correspond to any one of the multiple event types.
- 9. The method according to claim 8, wherein the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- 10. The method according to claim 1, wherein the plurality of event queues comprise at least two of an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource request event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
- 11. An event processing apparatus, wherein the apparatus comprises: an obtaining module, configured to obtain an event type of an event when the event is generated in a distributed system; a determining module, configured to determine a target event queue from a plurality of event queues based on the event type, the plurality of event queues being used to respectively cache events of multiple event types; an enqueuing module, configured to enqueue the event into the target event queue; and an event processing module, configured to process the event when the event is dequeued from the target event queue.
- 12. The apparatus according to claim 11, wherein the determining module comprises: a query submodule, configured to query routing information to obtain an event queue identifier corresponding to the event type, the routing information comprising multiple event types and the corresponding event queue identifiers; and a determining submodule, configured to use the event queue corresponding to the event queue identifier as the target event queue.
- 13. The apparatus according to claim 11, wherein the depth of any event queue among the plurality of event queues is positively related to the time taken to process events of the corresponding event type.
- 14. The apparatus according to claim 11, wherein the apparatus further comprises: a generating module, configured to generate the plurality of event queues for the multiple event types.
- 15. The apparatus according to claim 11, wherein the apparatus further comprises: a sending module, configured to, for any event queue among the plurality of event queues, concurrently send events in the event queue to the event processing module through multiple threads.
- 16. The apparatus according to claim 15, wherein the number of threads corresponding to any event queue among the plurality of event queues is positively related to the time taken to process events of the corresponding event type.
- 17. The apparatus according to claim 11, wherein the apparatus further comprises: a matching module, configured to match the event with at least one sub-event type under the event type to obtain the sub-event type matched by the event; and a sending module, configured to send the event to an event processing module corresponding to the sub-event type.
- 18. The apparatus according to any one of claims 14 to 17, wherein at least two event processing modules correspond to any one of the multiple event types.
- 19. The apparatus according to claim 18, wherein the number of event processing modules corresponding to any one of the multiple event types is positively related to the time taken to process events of the corresponding event type.
- 20. The apparatus according to claim 11, wherein the plurality of event queues comprise at least two of an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource request event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
- 21. A computer device, wherein the computer device comprises a processor and a memory, the memory storing at least one instruction, and the instruction is loaded and executed by the processor to implement the method steps of any one of claims 1 to 10.
- 22. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the at least one instruction is executed by a processor to implement the method steps of any one of claims 1 to 10.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810545759.6 | 2018-05-25 | ||
CN201810545759.6A CN110532067A (en) | 2018-05-25 | 2018-05-25 | Event-handling method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019223596A1 true WO2019223596A1 (en) | 2019-11-28 |
Family
ID=68617357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/087219 WO2019223596A1 (en) | 2018-05-25 | 2019-05-16 | Method, device, and apparatus for event processing, and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110532067A (en) |
WO (1) | WO2019223596A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4086768A1 (en) * | 2021-05-03 | 2022-11-09 | TeleNav, Inc. | Computing system with message ordering mechanism and method of operation thereof |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111092865B (en) * | 2019-12-04 | 2022-08-19 | 全球能源互联网研究院有限公司 | Security event analysis method and system |
CN111309494A (en) * | 2019-12-09 | 2020-06-19 | 上海金融期货信息技术有限公司 | Multithreading event processing assembly |
CN111461198B (en) * | 2020-03-27 | 2023-10-13 | 杭州海康威视数字技术股份有限公司 | Action determining method, system and device |
CN112040317B (en) * | 2020-08-21 | 2022-08-09 | 海信视像科技股份有限公司 | Event response method and display device |
CN112102063A (en) * | 2020-08-31 | 2020-12-18 | 深圳前海微众银行股份有限公司 | Data request method, device, equipment, platform and computer storage medium |
CN112416632B (en) * | 2020-12-14 | 2023-01-17 | 五八有限公司 | Event communication method and device, electronic equipment and computer readable medium |
CN112860400A (en) * | 2021-02-09 | 2021-05-28 | 山东英信计算机技术有限公司 | Method, system, device and medium for processing distributed training task |
CN113254466B (en) * | 2021-06-18 | 2022-03-01 | 腾讯科技(深圳)有限公司 | Data processing method and device, electronic equipment and storage medium |
CN113608842B (en) * | 2021-09-30 | 2022-02-18 | 苏州浪潮智能科技有限公司 | Container cluster and component management method, device, system and storage medium |
CN115391058B (en) * | 2022-08-05 | 2023-07-25 | 安超云软件有限公司 | SDN-based resource event processing method, resource creation method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102457578A (en) * | 2011-12-16 | 2012-05-16 | 中标软件有限公司 | Distributed network monitoring method based on event mechanism |
US20120239372A1 (en) * | 2011-03-14 | 2012-09-20 | Nec Laboratories America, Inc. | Efficient discrete event simulation using priority queue tagging |
WO2013097248A1 (en) * | 2011-12-31 | 2013-07-04 | 华为技术有限公司 | Distributed task processing method, device and system based on message queue |
CN105302638A (en) * | 2015-11-04 | 2016-02-03 | 国家计算机网络与信息安全管理中心 | MPP (Massively Parallel Processing) cluster task scheduling method based on system load |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7111001B2 (en) * | 2003-01-27 | 2006-09-19 | Seiko Epson Corporation | Event driven transaction state management with single cache for persistent framework |
CN104133724B (en) * | 2014-04-03 | 2015-08-19 | 腾讯科技(深圳)有限公司 | Concurrent tasks dispatching method and device |
CN105337896A (en) * | 2014-07-25 | 2016-02-17 | 华为技术有限公司 | Message processing method and device |
US10515326B2 (en) * | 2015-08-28 | 2019-12-24 | Exacttarget, Inc. | Database systems and related queue management methods |
CN106095535B (en) * | 2016-06-08 | 2019-11-08 | 东华大学 | A kind of thread management system for supporting Data Stream Processing under multi-core platform |
- 2018
  - 2018-05-25: CN application CN201810545759.6A filed; published as CN110532067A (status: pending)
- 2019
  - 2019-05-16: PCT application PCT/CN2019/087219 filed; published as WO2019223596A1
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120239372A1 (en) * | 2011-03-14 | 2012-09-20 | Nec Laboratories America, Inc. | Efficient discrete event simulation using priority queue tagging |
CN102457578A (en) * | 2011-12-16 | 2012-05-16 | 中标软件有限公司 | Distributed network monitoring method based on event mechanism |
WO2013097248A1 (en) * | 2011-12-31 | 2013-07-04 | 华为技术有限公司 | Distributed task processing method, device and system based on message queue |
CN105302638A (en) * | 2015-11-04 | 2016-02-03 | 国家计算机网络与信息安全管理中心 | MPP (Massively Parallel Processing) cluster task scheduling method based on system load |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4086768A1 (en) * | 2021-05-03 | 2022-11-09 | TeleNav, Inc. | Computing system with message ordering mechanism and method of operation thereof |
Also Published As
Publication number | Publication date |
---|---|
CN110532067A (en) | 2019-12-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019223596A1 (en) | Method, device, and apparatus for event processing, and storage medium | |
US8381230B2 (en) | Message passing with queues and channels | |
US10248175B2 (en) | Off-line affinity-aware parallel zeroing of memory in non-uniform memory access (NUMA) servers | |
EP2618257B1 (en) | Scalable sockets | |
Sengupta et al. | Scheduling multi-tenant cloud workloads on accelerator-based systems | |
US8881161B1 (en) | Operating system with hardware-enabled task manager for offloading CPU task scheduling | |
CN111367630A (en) | Multi-user multi-priority distributed cooperative processing method based on cloud computing | |
US9378047B1 (en) | Efficient communication of interrupts from kernel space to user space using event queues | |
WO2021022964A1 (en) | Task processing method, device, and computer-readable storage medium based on multi-core system | |
WO2023274278A1 (en) | Resource scheduling method and device and computing node | |
US8543722B2 (en) | Message passing with queues and channels | |
JP4183712B2 (en) | Data processing method, system and apparatus for moving processor task in multiprocessor system | |
Hussain et al. | A counter-based approach for reducer placement with augmented Hadoop rack awareness | |
Lin et al. | RingLeader: Efficiently Offloading Intra-Server Orchestration to NICs | |
JP6283376B2 (en) | System and method for supporting work sharing multiplexing in a cluster | |
WO2016187831A1 (en) | Method and device for accessing file, and storage system | |
US20230393782A1 (en) | Io request pipeline processing device, method and system, and storage medium | |
CN113076180B (en) | Method for constructing uplink data path and data processing system | |
CN113076189B (en) | Data processing system with multiple data paths and virtual electronic device constructed using multiple data paths | |
CN115098220A (en) | Large-scale network node simulation method based on container thread management technology | |
EP3387529A1 (en) | Method and apparatus for time-based scheduling of tasks | |
CN110955461A (en) | Processing method, device and system of computing task, server and storage medium | |
US20140237149A1 (en) | Sending a next request to a resource before a completion interrupt for a previous request | |
Park et al. | OCTOKV: An Agile Network-Based Key-Value Storage System with Robust Load Orchestration | |
US11334246B2 (en) | Nanoservices—a programming design pattern for managing the state of fine-grained object instances |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19806835 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19806835 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 160621) |
|