WO2019223596A1 - Method, device, and apparatus for event processing, and storage medium - Google Patents

Method, device, and apparatus for event processing, and storage medium

Info

Publication number
WO2019223596A1
WO2019223596A1 (PCT/CN2019/087219, CN2019087219W)
Authority
WO
WIPO (PCT)
Prior art keywords
event
queue
type
events
queues
Prior art date
Application number
PCT/CN2019/087219
Other languages
French (fr)
Chinese (zh)
Inventor
李锐
邓长春
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 filed Critical 杭州海康威视数字技术股份有限公司
Publication of WO2019223596A1 publication Critical patent/WO2019223596A1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/466: Transaction processing
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues

Definitions

  • the present application relates to the field of big data technology, and in particular, to an event processing method, apparatus, device, and storage medium.
  • a distributed system is a system composed of a group of node devices that communicate and coordinate work through the network to complete common tasks. At each stage of task execution, the distributed system will generate corresponding events due to the process of executing the task. For example, during the task of performing statistics on license plate data, the distributed system will generate an event of writing license plate data to the storage system. Distributed systems need to process the events that are generated in order to complete the task.
  • the Spark architecture includes a client (English: client) node.
  • the client node can include an event processing module for processing events.
  • the client node creates an event queue during initialization, and the event queue is used to buffer events sent to the event processing module. During the processing of tasks, whenever any event is generated, the client node enqueues the event into the event queue. When the event reaches the head of the queue, the client node dequeues the event from the event queue and sends it to the event processing module, and the event processing module processes the event.
  • the capacity of a single event queue is small. Once the events in this event queue reach the capacity limit, new events cannot be accommodated, and the distributed system cannot continue to process new events, which affects the processing performance of the distributed system.
  • the embodiments of the present application provide an event processing method, device, device, and storage medium, which can solve the technical problem that the capacity of a single event queue in the related technology is limited, resulting in low processing performance of the distributed system.
  • the technical solution is as follows:
  • an event processing method includes:
  • when the event is dequeued from the target event queue, the event is processed.
  • determining the target event queue from a plurality of event queues based on the event type includes:
  • querying routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes multiple event types and corresponding multiple event queue identifiers;
  • the event queue corresponding to the event queue identifier is used as the target event queue.
  • the depth of any one of the multiple event queues is positively related to the time-consuming duration of processing events of the corresponding event type.
  • before the obtaining of the event type of the event, the method further includes:
  • multiple event queues are generated.
  • the method further includes:
  • the number of threads corresponding to any of the event queues in the multiple event queues is positively related to the time consumed for processing events of the corresponding event type.
  • the processing the event includes:
  • the event processing module processes the event.
  • there are at least two event processing modules corresponding to any one of the multiple event types.
  • the number of event processing modules corresponding to any one of the multiple event types is positively related to the time-consuming duration of processing events of the corresponding event type.
  • the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  • an event processing apparatus where the apparatus includes:
  • An acquisition module configured to acquire an event type of the event when an event is generated in the distributed system
  • a determining module configured to determine a target event queue from a plurality of event queues based on the event type, and the plurality of event queues are respectively used to buffer events of multiple event types;
  • Enqueuing module configured to enqueue the event into the target event queue
  • An event processing module is configured to process the event when the event is dequeued from the target event queue.
  • the determining module includes:
  • a query submodule configured to query routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes multiple event types and corresponding multiple event queue identifiers;
  • a determining submodule is configured to use an event queue corresponding to the event queue identifier as the target event queue.
  • the depth of any one of the multiple event queues is positively related to the time-consuming duration of processing events of the corresponding event type.
  • the apparatus further includes:
  • a generating module is configured to generate multiple event queues for the multiple event types.
  • the apparatus further includes:
  • the sending module is configured to send an event in the event queue to the event processing module concurrently through multiple threads for any event queue in the multiple event queues.
  • the number of threads corresponding to any of the event queues in the multiple event queues is positively related to the time consumed for processing events of the corresponding event type.
  • the apparatus further includes:
  • a matching module configured to match the event with at least one sub-event type under the event type to obtain the sub-event type matched by the event;
  • the sending module is configured to send the event to an event processing module corresponding to the sub-event type.
  • there are at least two event processing modules corresponding to any one of the multiple event types.
  • the number of event processing modules corresponding to any one of the multiple event types is positively related to the time-consuming duration of processing events of the corresponding event type.
  • the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  • In another aspect, a computer device is provided, including a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the foregoing event processing method.
  • a computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor to implement the foregoing event processing method.
  • In another aspect, a computer program product containing instructions is provided, which, when run on a computer device, enables the computer device to implement the event processing method described above.
  • In another aspect, a chip is provided, including a processor and/or program instructions; when the chip runs, the event processing method is implemented.
  • the method, apparatus, device, and storage medium provided in the embodiments of the present application introduce a multi-queue event caching mechanism for a distributed system.
  • each event is separately enqueued into its corresponding event queue according to its event type.
  • Enqueuing events into their corresponding event queues increases the number of event queues, thereby increasing the total capacity of the event queues, further improving the ability of the distributed system to cache events, and greatly improving the performance of the distributed system.
  • When the distributed system faces high concurrent access, this can meet the distributed system's need to cache a large number of events.
  • the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which prevents the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
  • different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • FIG. 2 is a flowchart of an event processing method according to an embodiment of the present application.
  • FIG. 3 is a flowchart of an event processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an event processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application.
  • the implementation environment includes a master node 101, at least one slave node 102, and a client node 103.
  • the master node 101, the at least one slave node 102, and the client node 103 are connected through a network.
  • the master node 101, the at least one slave node 102, and the client node 103 can form a distributed system and work together to complete tasks.
  • the client node 103 can generate a task to be executed, and send the task to be executed to the master node 101.
  • the master node 101 can assign a task to each slave node 102, each slave node 102 can execute the task, and the result of the task processing is sent to the client node 103.
  • the architecture of the distributed system includes, but is not limited to, various architectures such as a spark architecture, a flink architecture, a mapreduce architecture, and a storm architecture.
  • the client node 103 may be a Driver node in the Spark architecture, the master node 101 may be a Cluster Manager node in the Spark architecture (that is, the master node), and the slave node 102 may be a Worker node in the Spark architecture.
  • the client node 103 may be a computer device, such as a terminal or a server, and may include a personal computer, a notebook computer, a mobile phone, and the like.
  • the master node 101 and at least one slave node 102 may include a server, a terminal, and the like.
  • FIG. 2 is a flowchart of an event processing method provided by an embodiment of the present application. The method is executed by a computer device. The method includes the following steps:
  • determining a target event queue from a plurality of event queues, where the plurality of event queues are respectively used to buffer events of multiple event types.
  • the method provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
  • each event is enqueued into its corresponding event queue according to its event type.
  • Increasing the number of event queues increases the total capacity of the event queues, thereby improving the ability of the distributed system to cache events and greatly improving the performance of the distributed system.
  • the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
  • different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
  • determining the target event queue from multiple event queues based on the event type includes:
  • the routing information includes multiple event types and corresponding multiple event queue identifiers.
  • the depth of any one of the plurality of event queues is positively related to the time consuming time for processing events of the corresponding event type.
  • before obtaining the event type of the event, the method further includes:
  • the method further includes:
  • for any event queue in the multiple event queues, multiple threads are used to concurrently send events in the event queue to the event processing module;
  • the number of threads corresponding to any of the event queues in the multiple event queues is positively related to the time consumed for processing events of the corresponding event type.
  • the processing of the event includes:
  • the event processing module processes the event.
  • there are at least two event processing modules corresponding to any one of the multiple event types.
  • the number of event processing modules corresponding to any one of the multiple event types is positively related to the time-consuming time for processing events of the corresponding event type.
  • the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  • FIG. 3 is a flowchart of an event processing method provided by an embodiment of the present application.
  • the method is executed by a computer device, and the computer device may be a node device where an event queue is located in a distributed system.
  • the computer device may be a driver node, that is, a client node.
  • the method includes:
  • a computer device generates multiple event queues for multiple event types.
  • a multi-queue event cache mechanism is designed, and multiple event queues are generated so that events of different event types can be cached through multiple event queues, respectively.
  • the computer device can obtain multiple event types. For each of the multiple event types, the computer device can generate a corresponding event queue for the event type, thereby obtaining multiple event queues. Among them, each event queue is used to cache events of a corresponding event type.
  • the division into multiple event types can be determined according to the business needs of the distributed system.
  • the multiple event types can include at least two of a heartbeat event type, a resource monitoring event type, a resource application event type, a system file event type, a job event type, and other event types.
  • The heartbeat event type includes various heartbeat events; for example, it can include heartbeat events between the master node and each slave node, heartbeat events between the master service and each slave service, heartbeat events between the client node and each slave node, heartbeat events between the client node and the master node, and so on.
  • The resource monitoring event type includes various events that obtain information on the use of resources in the distributed system.
  • the resources can include CPU (Central Processing Unit), memory, disk IO (Input/Output), network bandwidth, and so on.
  • The resource application event type includes various events for applying for resources and events for reclaiming resources; for example, when a job is submitted to the distributed system, an event of applying for resources such as CPU and memory, an event that triggers the system to perform gc (Garbage Collection), and an event of reclaiming the resources requested by a task after the task execution is completed.
  • The system file event type includes various events that interact with the storage system.
  • the interaction includes writing data to the storage system and reading data from the storage system.
  • the storage system can include local storage, HDFS, databases, hard disks, cloud storage, and so on; the data can include logs.
  • The job event type includes events where a client submits a job to the distributed system.
  • the distributed system will split the job into multiple job stages and then split each job stage into multiple tasks; for example, DAG (Directed Acyclic Graph), Job, Stage (job stage), and Task events in the Spark architecture are classified into this type of event.
  • the multiple event queues generated by the computer device may include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  • the event queue corresponding to the heartbeat event type is used to buffer events belonging to the heartbeat event type.
  • the event queue corresponding to the resource monitoring event type is used to cache events belonging to the resource monitoring event type
  • the event queue corresponding to the resource application event type is used to cache events belonging to the resource application event type
  • the event queue corresponding to the system file event type is used to cache system file events Type of event
  • the event queue corresponding to the job event type is used to cache events belonging to the job event type
  • the event queues corresponding to other event types are used to cache events belonging to other event types.
  • the event queue can be expressed as eventQueue
  • the computer device can generate 6 event queues for 6 event types, which are in turn eventQueue1, eventQueue2, ..., eventQueue6, where eventQueue1 is the event queue corresponding to the heartbeat event type, eventQueue2 is the event queue corresponding to the resource monitoring event type, and so on.
  • multiple event queues are generated for multiple event types.
  • the number of event queues is increased by adding event queues, so the total capacity of the event queues is increased, and the ability of the distributed system to cache events is also improved, which further improves the performance and scalability of the distributed system.
  • When the distributed system faces high concurrent access, this can meet the distributed system's need to cache a large number of events and improve the processing performance of the distributed computing system.
  • the probability of event loss is greatly reduced and frequent loss of events by the distributed system is avoided, which also eliminates the hidden danger of system instability or unavailability caused by lost resource cleanup events, thereby improving the stability and availability of the distributed system.
  • Each event queue is dedicated to caching events of the corresponding event type without needing to accommodate events of other event types, alleviating the storage pressure on a single event queue.
  • the computer device may obtain the depths of multiple event queues and generate multiple event queues according to the depths of the multiple event queues, so that the capacity of each event queue can meet business requirements.
  • the depth of the event queue is used to indicate the number of events that the event queue can hold.
  • the depth of the event queue can be equal to the number of events that the event queue can hold.
  • Alternatively, the depth of the event queue can be the ratio between the number of events that the event queue can hold and a threshold coefficient, which may be 80%, 60%, or the like.
  • the depth of each event queue in the multiple event queues is positively related to the time-consuming duration of processing events of the corresponding event type.
  • the depth of each event queue can be designed in conjunction with the time consumed to process events: the more time it takes to process a certain type of event, the deeper the corresponding event queue and the more events it can cache, which improves the ability to cache such events. Similarly, the faster the processing of a certain type of event, the shallower the corresponding event queue.
  • the depth of the event queue corresponding to an event type can be configured in advance according to the time spent processing events of that event type: if processing events of the event type takes longer, the depth of the event queue of the event type can be configured to be larger; if processing events of the event type takes less time, the depth of the event queue can be configured to be smaller. In this way, the computer device can obtain the depth configured for each event queue and, after generating the event queues according to the configured depths, achieve the effect that the depth of each event queue is positively related to the time consumed to process events of the corresponding event type.
  • Since the time consumed to process events of the heartbeat event type is usually short, the depth of the event queue of the heartbeat event type can be set small; since the time consumed to process events of the job event type is usually long, the depth of the event queue of the job event type can be set larger.
  • the depth of each event queue may also be a default value or an empirical value. This is not limited.
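  • The following Java sketch is provided for illustration only and is not part of the original disclosure; the EventType enum, the Event class, and the concrete depth values are assumptions. It shows one way the per-type event queues with configurable depths described above could be generated, using one bounded FIFO queue per event type.

        import java.util.EnumMap;
        import java.util.Map;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Hypothetical event model; the patent does not prescribe concrete classes.
        enum EventType { HEARTBEAT, RESOURCE_MONITORING, RESOURCE_APPLICATION, SYSTEM_FILE, JOB, OTHER }

        class Event {
            final EventType type;
            final String name;
            Event(EventType type, String name) { this.type = type; this.name = name; }
        }

        class EventQueueFactory {
            // Example depths: deeper queues for event types whose events take longer to
            // process (e.g. job events), shallower queues for short-lived ones (e.g. heartbeats).
            private static final Map<EventType, Integer> DEPTHS = new EnumMap<>(EventType.class);
            static {
                DEPTHS.put(EventType.HEARTBEAT, 128);
                DEPTHS.put(EventType.RESOURCE_MONITORING, 512);
                DEPTHS.put(EventType.RESOURCE_APPLICATION, 512);
                DEPTHS.put(EventType.SYSTEM_FILE, 2048);
                DEPTHS.put(EventType.JOB, 4096);
                DEPTHS.put(EventType.OTHER, 256);
            }

            // One bounded FIFO queue per event type, sized by the configured depth.
            static Map<EventType, BlockingQueue<Event>> createQueues() {
                Map<EventType, BlockingQueue<Event>> queues = new EnumMap<>(EventType.class);
                for (EventType type : EventType.values()) {
                    queues.put(type, new ArrayBlockingQueue<>(DEPTHS.get(type)));
                }
                return queues;
            }
        }
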
  • the computer device obtains an event type of the event.
  • Distributed systems can generate various events during operation.
  • the distributed system triggers a corresponding event due to the execution of the task.
  • when a client submits a job, the Driver node will establish a connection with the Cluster Manager node, register with the Cluster Manager node, and apply for resources.
  • each Worker node can send a heartbeat to the Driver node.
  • after the Driver node gets the job, it can build a DAG graph, decompose the DAG graph into multiple job stages, and decompose each job stage into multiple tasks.
  • the distributed system can also generate other events in other scenarios. This embodiment does not limit the scenarios that generate events and the specific types of events.
  • the computer device can obtain the event type of the event. Specifically, multiple event types may be pre-configured. When an event is generated, the computer device may obtain an event type that matches the event from the pre-configured multiple event types.
  • a correspondence relationship between an event type and an event name may be set in advance, and each event type corresponds to at least one event name.
  • the computer device may obtain the name of the event, query the correspondence relationship, and obtain the event type corresponding to the name of the event.
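  • As a hedged illustration of the name-to-type lookup described above (reusing the Event and EventType types from the earlier sketch; the event names here are made up and not taken from the patent), a minimal resolver could look like this:

        import java.util.HashMap;
        import java.util.Map;

        class EventTypeResolver {
            // Pre-configured correspondence between event names and event types;
            // each event type corresponds to at least one event name.
            private final Map<String, EventType> nameToType = new HashMap<>();

            EventTypeResolver() {
                nameToType.put("WorkerHeartbeat", EventType.HEARTBEAT);              // illustrative names
                nameToType.put("ExecutorMetricsUpdate", EventType.RESOURCE_MONITORING);
                nameToType.put("ResourceRequest", EventType.RESOURCE_APPLICATION);
                nameToType.put("LogWrite", EventType.SYSTEM_FILE);
                nameToType.put("JobSubmitted", EventType.JOB);
            }

            // Query the correspondence by event name; unmatched events fall into the "other" type.
            EventType resolve(Event event) {
                return nameToType.getOrDefault(event.name, EventType.OTHER);
            }
        }
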
  • the computer device determines a target event queue from a plurality of event queues based on the event type.
  • the computer device can establish the correspondence between event types and event queues in advance by means of event routing. After the event type of the event is determined, the event queue corresponding to the event type can be determined from the multiple event queues based on the event type and the pre-established correspondence,
  • and the event queue corresponding to the event type is used as the target event queue, so that the generated event is enqueued into the target event queue.
  • the correspondence between the event type and the event queue may be indicated by routing information.
  • the computer device can use the event type as an index, query the routing information, obtain the event queue identifier corresponding to the event type, and use the event queue corresponding to the event queue identifier as the target event queue.
  • the routing information is used to indicate the correspondence between the event type and the event queue.
  • the routing information includes multiple event types and corresponding multiple event queue identifiers.
  • the event queue identifier is used to identify the corresponding event queue and can be a name, a number, or the like. With reference to the six exemplary event types in step 301, the routing information can be shown in Table 1 below:
  • Table 1:
    Event type                        Event queue ID
    Heartbeat event type              eventQueue1
    Resource monitoring event type    eventQueue2
    Resource request event type       eventQueue3
    System file event type            eventQueue4
    Job event type                    eventQueue5
    Other event types                 eventQueue6
  • the computer device queues the event into the target event queue.
  • Enqueue refers to sending an event to a queue, that is, inserting an event into the queue, so that the event is queued in the queue, and the event is cached.
  • the event queue can be a FIFO (First In First Out) queue,
  • and enqueuing can be inserting the event at the tail of the event queue.
  • After the computer device determines the target event queue of the event, it can enqueue the event into the target event queue corresponding to the event type, that is, send the event to the target event queue by inserting the event at the tail of the target event queue. After that, the event will queue in the target event queue; when all events that precede the event have been dequeued from the target event queue, the event will be at the head of the target event queue, waiting to be dequeued.
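  • A minimal sketch (an assumption, not the patent's own code) of steps 303 and 304: looking up the routing information by event type to find the target event queue and inserting the event at the tail of that FIFO queue. It reuses the types from the sketches above.

        import java.util.Map;
        import java.util.concurrent.BlockingQueue;

        class EventRouter {
            // Routing information: event type -> event queue (cf. Table 1, where each
            // queue is identified by a name such as eventQueue1 .. eventQueue6).
            private final Map<EventType, BlockingQueue<Event>> routingInfo;

            EventRouter(Map<EventType, BlockingQueue<Event>> queues) {
                this.routingInfo = queues;
            }

            // Determine the target event queue for the event's type and enqueue the
            // event at the tail of that queue; returns false if the queue is full.
            boolean route(Event event) {
                BlockingQueue<Event> targetQueue = routingInfo.get(event.type);
                return targetQueue.offer(event);
            }
        }
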
  • the function of event routing can be implemented by querying routing information and enqueuing events into event queues; that is, each generated event can be routed to its corresponding event queue, achieving the effect of events being enqueued by type.
  • the foregoing steps 303 and 304 may be encapsulated into a routing module, and the function of event routing is implemented by the routing module.
  • the computer device may execute the foregoing steps 303 to 304 by running the routing module.
  • various heartbeat events may be routed to an event queue corresponding to the heartbeat event type according to the heartbeat event type.
  • events such as write data events and read events can be routed to the event queue corresponding to the system file event type according to the system file event type, and so on.
  • a refined event caching mechanism is provided: rather than unifying all events into a single event queue, the various events are routed to their corresponding event queues. By routing different types of events to different event queues, at least the following technical effects can be achieved:
  • each event can be sent to the corresponding event queue according to the event type, which realizes the function of each event being listed separately according to the event type.
  • In the related art, if the event processing module that processes log events is busy and a log event cannot be dequeued, the log event remains in the event queue,
  • so that short-lived events, such as heartbeat events, are also blocked in the event queue, unable to dequeue from the event queue, and cannot be sent to the corresponding event processing module, which affects the event processing efficiency of the entire distributed system.
  • In this embodiment, congestion in the event queue of log events will not interfere with the event queue of heartbeat events. Even if log events are blocked in the event queue corresponding to the log event type, heartbeat events can still be enqueued and dequeued normally from the event queue corresponding to the heartbeat event type, thereby improving the event processing efficiency of the entire distributed system.
  • the computer device sends the event to the event processing module.
  • each event in the target event queue will move from the tail of the queue to the head of the queue.
  • the computer device will send the event to the event processing module.
  • the event processing module can also be called event handler, listener, etc.
  • the event processing module is used to process events. It can be a virtual program module and can be executed by a thread, object, process or other program execution unit in a computer device.
  • the event processing module encapsulates a method for processing events, and the event processing module can call the encapsulated method to process the event.
  • the computer device may generate a thread for sending an event to the event processing module, and send the event to the event processing module through the thread.
  • the thread refers to the execution flow of the program, and is the basic unit for CPU execution.
  • the thread used to send events can be a daemon thread. The daemon thread can listen to the target event queue, and when the event is dequeued from the target event queue, the daemon thread can obtain the event and send it to the event processing module.
  • events can be distributed to the event processing module concurrently through multiple threads.
  • Concurrency refers to a mechanism in which multiple threads execute tasks in turn. For example, for thread A, thread B, and thread C, these three threads execute tasks concurrently; that is, thread A executes a task first, then thread B executes a task, and then thread C executes a task.
  • the multi-threaded concurrency mechanism can greatly improve the overall efficiency of task execution.
  • events in the event queue can be sent to the event processing module concurrently through multiple threads. That is, multiple threads will send events to the event processing module in turn.
  • After a thread sends an event to the event processing module, there is no need to wait for that thread to finish sending; instead, the next thread continues to send events to the event processing module.
  • Taking two events dequeued successively from the event queue as an example, the process of sending these two events through two threads can include the following steps one and two:
  • Step 1: When the first event is dequeued from the event queue, the first event is sent to the event processing module through the first thread.
  • Step 2 When the second event is dequeued from the event queue, the second event is sent to the event processing module through the second thread, where the second thread is different from the first thread.
  • After the first event, the second event moves to the head of the queue and is dequeued from the event queue. At this time, there is no need to wait for the first thread to finish sending the first event; the second event can be sent directly through the second thread.
  • the number of the multiple threads may be two or more, and the number of threads is specifically determined according to business requirements, which is not limited in this embodiment.
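  • The following sketch is illustrative only; the EventHandler interface stands in for the event processing module and is an assumption. It shows one possible way to dispatch events from an event queue to the event processing module concurrently through several daemon threads, as described above; a queue for time-consuming events could be given a larger threadCount than, say, the heartbeat queue.

        import java.util.concurrent.BlockingQueue;

        interface EventHandler {
            void handle(Event event);   // stands in for the event processing module
        }

        class QueueDispatcher {
            // Start threadCount daemon threads that each take events from the same
            // event queue and hand them to the event processing module.
            static void startSenders(BlockingQueue<Event> queue, EventHandler handler, int threadCount) {
                for (int i = 0; i < threadCount; i++) {
                    Thread sender = new Thread(() -> {
                        try {
                            while (true) {
                                Event event = queue.take();   // blocks until an event can be dequeued
                                handler.handle(event);        // send the event to the event processing module
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                    sender.setDaemon(true);   // daemon thread listening on the target event queue
                    sender.start();
                }
            }
        }
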
  • In the related art, a single thread is used in the distributed system to serially send events to the event processing module; that is, a fixed thread sends the events in the event queue to the event processing module.
  • When the current event is dequeued from the event queue, the thread must first obtain the previous event, send the previous event to the event processing module, and wait for the previous event to finish sending before it can continue to send the next event, so the efficiency of sending events is very low.
  • In this embodiment, multiple threads can send events concurrently, and the multiple threads can send each event in the event queue in turn.
  • the multi-thread mechanism greatly improves the speed of sending events, and the sending process of a previous event in the event queue will not block the sending process of the next event, thereby greatly improving the efficiency of sending events.
  • the number of threads sending events for each event queue may be designed in combination with the time consumption of processing events.
  • the number of threads corresponding to any event queue in the multiple event queues is positively related to the time spent processing events of the corresponding event type; that is, the more time it takes to process a certain type of event,
  • the greater the number of threads that send events for the event queue of that type, improving the ability to send such events.
  • Conversely, the faster the processing of a certain type of event, the smaller the number of threads that send events for the event queue of that type (for example, a single thread), thereby saving system resources.
  • the number of threads of the event queue corresponding to an event type can be configured in advance according to the time spent processing events of that event type: if processing events of the event type takes longer, more threads can be configured for the event queue of that event type; if processing events of the event type takes less time, fewer threads can be configured for the event queue of that event type. In this way, the computer device can obtain the number of threads configured for each event queue and, after generating the corresponding threads for each event queue based on the configured number, achieve the effect that the number of threads is positively related to the time consumed to process events of the corresponding event type.
  • For example, for the event queue of the heartbeat event type, the event queue can be set to still send its events serially through a single thread.
  • For the event queue of a time-consuming event type, multiple threads can be set to send the events in the event queue concurrently.
  • That is, the events of an event queue for time-consuming events can be sent through multiple threads, and the events of an event queue for non-time-consuming events can be sent through a single thread.
  • In this way, the flexibility of the process of sending events through threads is improved, the ability to send time-consuming events is significantly improved, and the events in the time-consuming event queues can be handled in a targeted manner.
  • multiple event processing modules may be introduced to process events concurrently, that is, for each event type of multiple event types, all events of the event type may be collectively processed through multiple event processing modules.
  • there may be at least two event processing modules corresponding to any one of the multiple event types. While one event processing module is processing an event, there is no need to wait for that event processing module to finish processing; another event processing module can continue to process events.
  • the number of event processing modules corresponding to any one of the multiple event types is positively related to the time consumed to process events of the corresponding event type; that is, the longer it takes to process events of an event type, the greater the number of event processing modules corresponding to that event type, and the stronger the ability of the distributed system to handle such events. For example, the number of event processing modules of the job event type is larger, and the number of event processing modules of the heartbeat event type is smaller.
  • the degree of concurrency of processing events can be improved, thereby improving the concurrent performance and availability of the distributed system.
  • the number of event processing modules for each event type is designed in combination with the time consumed to process events, which improves the flexibility of processing events and at the same time significantly improves the ability to process time-consuming events, so that time-consuming events can be handled in a targeted manner.
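  • A possible sketch (an assumption, not taken from the patent; it reuses the Event class and EventHandler interface from the earlier sketches) of configuring several event processing modules for one event type, with events handed out among them so that one busy module does not make the others wait; more instances would be configured for event types whose events take longer to process.

        import java.util.List;
        import java.util.concurrent.atomic.AtomicLong;

        class HandlerPool {
            // Multiple event processing modules for the same event type.
            private final List<EventHandler> handlers;
            private final AtomicLong counter = new AtomicLong();

            HandlerPool(List<EventHandler> handlers) {
                this.handlers = handlers;
            }

            // Round-robin hand-off: successive events (arriving from different sender
            // threads) go to different modules, so processing one event does not block
            // the processing of the next.
            void process(Event event) {
                int index = (int) (counter.getAndIncrement() % handlers.size());
                handlers.get(index).handle(event);
            }
        }
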
  • At least one sub-event type under the event type may be determined, the event is matched with the at least one sub-event type under the event type to obtain the sub-event type matched by the event, and the event is sent to the event processing module corresponding to that sub-event type.
  • Regarding event types and sub-event types: an event type can be thought of as a broad class and a sub-event type as a narrower class. A sub-event type is a more specific, detailed type than the event type dimension and belongs to an event type. Each event type can include one or more sub-event types.
  • the job event type may include a type of starting a job, a type of ending a job, a type of starting a task, a type of ending a task, and the like.
  • all sub-event types can be classified into corresponding event types in advance, and the event type and all sub-event types under the event type are stored on the computer device correspondingly. After the device determines the event type of the event, it can obtain at least one sub-event type under the event type.
  • the event can be matched with each sub-event type in turn. For example, all sub-event types under the event type can be traversed; during the traversal, for the currently traversed sub-event type, it is determined whether the event matches that sub-event type, and when the event matches the sub-event type, that sub-event type is used as the sub-event type matched by the event.
  • the name of each sub-event type can be stored in advance, and it is determined whether the name of the event is the same as the name of the sub-event type.
  • If the names are the same, the event matches the sub-event type.
  • the correspondence between the sub-event type and the event processing module can be established in advance. After the event-matched sub-event type is obtained, the sub-event type can be determined according to the pre-established correspondence. Corresponding event processing module.
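  • One hedged sketch of the sub-event-type matching described above (the sub-event type names are illustrative assumptions, and the Event class and EventHandler interface come from the earlier sketches): each sub-event type under an event type is mapped in advance to its event processing module, and the event's name is compared with the stored sub-event type names to find the module to send it to.

        import java.util.LinkedHashMap;
        import java.util.Map;

        class SubTypeDispatcher {
            // Correspondence, established in advance, between sub-event types (e.g.
            // "JobSubmitted", "JobCompleted", "TaskStarted", "TaskEnded" under the job
            // event type) and their event processing modules.
            private final Map<String, EventHandler> subTypeHandlers = new LinkedHashMap<>();

            void register(String subEventType, EventHandler handler) {
                subTypeHandlers.put(subEventType, handler);
            }

            // Traverse the sub-event types under the event's type; matching here is by
            // comparing the event name with the stored sub-event type name.
            void dispatch(Event event) {
                for (Map.Entry<String, EventHandler> entry : subTypeHandlers.entrySet()) {
                    if (entry.getKey().equals(event.name)) {
                        entry.getValue().handle(event);   // send to the matching event processing module
                        return;
                    }
                }
            }
        }
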
  • the computer device processes the event through the event processing module.
  • After the event processing module receives the event, it can call its own method to process the event and obtain the processing result of the event.
  • the event caching mechanism in current distributed systems has the following problems:
  • If the lost new event is an event that triggers resource reclamation or another such event (for example, a gc event),
  • an OOM (OutOfMemoryError) mechanism will be triggered, that is, the operating system will kill the process to release memory, affecting the node device where the event queue is located;
  • this affects the normal operation of the device and may even cause the node device to crash or be paralyzed, resulting in the node device not being able to communicate with other node devices in the distributed system and affecting the operation of the distributed system. That is, the loss of events will, with a great probability, affect the stability of the distributed system and easily cause the distributed system to become unavailable.
  • each event has to wait for all events before it in the event queue to finish sending before it can be sent, so the efficiency of sending events is extremely low. In addition, if processing a certain type of event takes time and congestion occurs in the event queue, the head-of-line blocking effect will affect the dequeuing of other types of events in the event queue, reducing the efficiency with which the entire distributed system processes events.
  • In addition, the event needs to be matched one by one against all sub-event types in order to find a matching event processing module for processing.
  • This matching method requires a large range of traversal. Not only does it affect the efficiency of the distributed system in processing events, it may also cause the distributed system to crash or paralyze.
  • the embodiments of the present application solve the above-mentioned technical problems, and propose an optimized scheme for processing events.
  • In this solution, by reclassifying the events in the distributed system, creating different event queues for different event types, and introducing an event routing method between events and event queues, the efficiency of event distribution and processing is improved. Furthermore, it not only solves the problem of insufficient event queue capacity in current distributed systems and the low efficiency of event processing of the entire system caused by a single time-consuming operation, but also solves the problem of insufficient performance caused by a single thread sending events,
  • and avoids the operating system OOM caused by these problems, which in turn causes communication problems between node devices and system crashes or paralysis.
  • the concurrency and stability of the distributed system is further improved.
  • the method provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
  • each event is enqueued into its corresponding event queue according to its event type.
  • Increasing the number of event queues increases the total capacity of the event queues, thereby improving the ability of the distributed system to cache events and greatly improving the performance of the distributed system.
  • the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
  • different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
  • FIG. 5 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application.
  • the apparatus includes: an obtaining module 501, a determining module 502, an enqueuing module 503, and an event processing module 504.
  • An obtaining module 501 configured to obtain an event type of an event when an event is generated in the distributed system
  • a determining module 502 configured to determine a target event queue from a plurality of event queues based on the event type, and the multiple event queues are respectively used to buffer events of multiple event types;
  • the enqueuing module 503 is configured to enqueue the event into the target event queue
  • An event processing module 504 is configured to process the event when the event is dequeued from the target event queue.
  • the device provided in the embodiment of the present application introduces a multi-queue event cache mechanism for a distributed system.
  • each event is individually enqueued into its corresponding event queue according to its event type.
  • Increasing the number of event queues increases the total capacity of the event queues, thereby improving the ability of the distributed system to cache events and greatly improving the performance of the distributed system.
  • the capacity of the event queue is expanded, and a large number of events can be cached through multiple event queues, which avoids the frequent loss of events caused by insufficient event queue capacity, thereby improving the stability and availability of the distributed system.
  • different event queues are used to cache events of different event types, so that a large number of events in the distributed system can be cached separately. The processing of different types of events will not interfere with each other, which improves the processing efficiency of the entire distributed system.
  • the determining module 502 includes:
  • the query submodule is used to query routing information to obtain an event queue identifier corresponding to the event type, and the routing information includes multiple event types and corresponding multiple event queue identifiers;
  • a determining submodule is configured to use the event queue corresponding to the event queue identifier as the target event queue.
  • the depth of any one of the plurality of event queues is positively related to the time consuming time for processing events of the corresponding event type.
  • the apparatus further includes:
  • a generating module is configured to generate multiple event queues for the multiple event types.
  • the apparatus further includes:
  • the sending module is configured to send events in the event queue to the event processing module 504 concurrently through multiple threads for any event queue in the multiple event queues.
  • the number of threads corresponding to any of the event queues in the multiple event queues is positively related to the time consumed for processing events of the corresponding event type.
  • the apparatus further includes:
  • a matching module configured to match the event with at least one sub-event type under the event type to obtain a sub-event type matched by the event;
  • the sending module is configured to send the event to an event processing module 504 corresponding to the sub-event type.
  • there are at least two event processing modules 504 corresponding to any one of the multiple event types.
  • the number of the event processing modules 504 corresponding to any one of the multiple event types is positively related to the time consuming time for processing the events of the corresponding event type.
  • the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  • when the event processing apparatus provided in the foregoing embodiment processes events, the division of the foregoing functional modules is merely used as an example for description.
  • In practical applications, the above functions may be allocated to different functional modules as required;
  • that is, the internal structure of the event processing device is divided into different functional modules to complete all or part of the functions described above.
  • the event processing device and the event processing method embodiments provided by the foregoing embodiments belong to the same concept. For specific implementation processes, refer to the method embodiments, and details are not described herein again.
  • FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • the computer device 600 may vary greatly due to different configurations or performance, and may include one or more processors (central processing units, CPU) 601 and one or more memories 602, where at least one instruction is stored in the memory 602, and the at least one instruction is loaded and executed by the processor 601 to implement the event processing methods provided by the foregoing method embodiments.
  • the computer device may also have components such as a wired or wireless network interface and an input-output interface for input and output.
  • the computer device may also include other components for implementing the functions of the device, and details are not described herein.
  • a computer-readable storage medium such as a memory including instructions, which can be executed by a processor in a computer device to complete the event processing method in the foregoing embodiment.
  • the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
  • the present application further provides a computer program product containing instructions, which when executed on a computer device, enables the computer device to implement the event processing method in the foregoing embodiment.
  • the present application further provides a chip that includes a processor and/or program instructions.
  • When the chip runs, the event processing method in the foregoing embodiment is implemented.
  • the program may be stored in a computer-readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk or an optical disk.

Abstract

A method, device, and equipment for event processing, and a storage medium, related to the technical field of big data. When an event is generated in a distributed system, the event type of the event is acquired (201); a target event queue is determined among multiple event queues (202); the event is enqueued to the target event queue (203); when the event is dequeued from the target event queue, the event is processed (204). By designing multiple event types and multiple event queues, events are enqueued to corresponding event queues on the basis of the event type of each, the number of event queues is increased, thus increasing the total capacity of the event queues, increasing the capacity of the distributed system in buffering events, and increasing the stability and availability of the distributed system.

Description

事件处理方法、装置、设备及存储介质Event processing method, device, equipment and storage medium
本申请要求于2018年05月25日提交的申请号为201810545759.6、发明名称为“事件处理方法、装置、设备及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims priority from a Chinese patent application filed on May 25, 2018 with an application number of 201810545759.6 and an invention name of "Event Processing Method, Apparatus, Equipment, and Storage Medium", the entire contents of which are incorporated herein by reference. .
技术领域Technical field
本申请涉及大数据技术领域,特别涉及一种事件处理方法、装置、设备及存储介质。The present application relates to the field of big data technology, and in particular, to an event processing method, device, device, and storage medium.
背景技术Background technique
分布式系统是指一组通过网络进行通信、协调工作从而完成共同的任务的节点设备组成的系统。在任务执行的各个阶段,分布式系统会由于执行任务的过程,产生相应的事件,例如在执行统计车牌数据的任务的过程中,分布式系统会产生向存储系统写入车牌数据的事件。分布式系统需要对产生的事件进行处理,以便完成任务。A distributed system is a system composed of a group of node devices that communicate and coordinate work through the network to complete common tasks. At each stage of task execution, the distributed system will generate corresponding events due to the process of executing the task. For example, during the task of performing statistics on license plate data, the distributed system will generate an event of writing license plate data to the storage system. Distributed systems need to process the events that are generated in order to complete the task.
以分布式系统基于Spark架构运行为例,Spark架构中包括客户端(英文:client)节点,client节点可以包括用于处理事件的事件处理模块,基于Spark架构,client节点在初始化时会创建一个事件队列,事件队列用于缓存向事件处理模块发送的事件,在处理任务的过程中,每当任一事件产生时,client节点就会将事件入列至该事件队列,当事件队列中的事件排在队首时,client节点会将事件从事件队列中出列,将该事件发送给事件处理模块,通过事件处理模块,可以对事件进行处理。Take the distributed system running based on the Spark architecture as an example. The Spark architecture includes a client (English: client) node. The client node can include an event processing module for processing events. Based on the Spark architecture, the client node creates an event during initialization. Queues and event queues are used to buffer events sent to the event processing module. During the processing of tasks, whenever any event occurs, the client node will list the events into the event queue. At the head of the team, the client node dequeues the event from the event queue and sends the event to the event processing module. Through the event processing module, the event can be processed.
在实现本发明的过程中,发明人发现相关技术至少存在以下问题:In the process of implementing the present invention, the inventors found that the related technology has at least the following problems:
单个事件队列的容量很小,一旦这一个事件队列中的事件达到容量上限时,就无法容纳新事件,分布式系统也就无法继续处理新事件,影响了分布式系统的处理性能。The capacity of a single event queue is small. Once the events in this event queue reach the capacity limit, new events cannot be accommodated, and the distributed system cannot continue to process new events, which affects the processing performance of the distributed system.
发明内容Summary of the Invention
The embodiments of the present application provide an event processing method, apparatus, device, and storage medium, which can solve the technical problem in the related art that the limited capacity of a single event queue results in low processing performance of a distributed system. The technical solutions are as follows:
In one aspect, an event processing method is provided. The method includes:
when an event is generated in a distributed system, obtaining an event type of the event;
determining a target event queue from a plurality of event queues based on the event type, where the plurality of event queues are used to respectively buffer events of a plurality of event types;
enqueuing the event into the target event queue; and
when the event is dequeued from the target event queue, processing the event.
In a possible implementation manner, the determining a target event queue from a plurality of event queues based on the event type includes:
querying routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes a plurality of event types and a plurality of corresponding event queue identifiers; and
using the event queue corresponding to the event queue identifier as the target event queue.
In a possible implementation manner, the depth of any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, before the obtaining an event type of the event, the method further includes:
generating the plurality of event queues for the plurality of event types.
In a possible implementation manner, the method further includes:
for any event queue of the plurality of event queues, sending events in the event queue to an event processing module concurrently through a plurality of threads; and
processing the events through the event processing module.
In a possible implementation manner, the number of threads corresponding to any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
The processing of the event includes:
matching the event against at least one sub-event type under the event type to obtain a sub-event type matched by the event;
sending the event to an event processing module corresponding to the sub-event type; and
processing the event through the event processing module.
In a possible implementation manner, there are at least two event processing modules corresponding to any one of the plurality of event types.
In a possible implementation manner, the number of event processing modules corresponding to any one of the plurality of event types is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the plurality of event queues include at least two of an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
In another aspect, an event processing apparatus is provided. The apparatus includes:
an obtaining module, configured to obtain an event type of an event when the event is generated in a distributed system;
a determining module, configured to determine a target event queue from a plurality of event queues based on the event type, where the plurality of event queues are used to respectively buffer events of a plurality of event types;
an enqueuing module, configured to enqueue the event into the target event queue; and
an event processing module, configured to process the event when the event is dequeued from the target event queue.
In a possible implementation manner, the determining module includes:
a query submodule, configured to query routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes a plurality of event types and a plurality of corresponding event queue identifiers; and
a determining submodule, configured to use the event queue corresponding to the event queue identifier as the target event queue.
In a possible implementation manner, the depth of any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the apparatus further includes:
a generating module, configured to generate the plurality of event queues for the plurality of event types.
In a possible implementation manner, the apparatus further includes:
a sending module, configured to, for any event queue of the plurality of event queues, send events in the event queue to the event processing module concurrently through a plurality of threads.
In a possible implementation manner, the number of threads corresponding to any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the apparatus further includes:
a matching module, configured to match the event against at least one sub-event type under the event type to obtain a sub-event type matched by the event; and
a sending module, configured to send the event to an event processing module corresponding to the sub-event type.
In a possible implementation manner, there are at least two event processing modules corresponding to any one of the plurality of event types.
In a possible implementation manner, the number of event processing modules corresponding to any one of the plurality of event types is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the plurality of event queues include at least two of an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
In another aspect, a computer device is provided. The computer device includes a processor and a memory, where the memory stores at least one instruction that is loaded and executed by the processor to implement the foregoing event processing method.
In another aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction that is executed by a processor to implement the foregoing event processing method.
In another aspect, a computer program product containing instructions is provided, which, when run on a computer device, enables the computer device to implement the foregoing event processing method.
In another aspect, a chip is provided. The chip includes a processor and/or program instructions, and the foregoing event processing method is implemented when the chip runs.
The beneficial effects brought by the technical solutions provided in the embodiments of the present application include at least the following:
The method, apparatus, device, and storage medium provided in the embodiments of the present application introduce a multi-queue event caching mechanism for a distributed system. By designing a plurality of event types and a plurality of event queues, and enqueuing each event into the corresponding event queue according to its event type, the number of event queues is increased, which raises the total capacity of the event queues, improves the distributed system's ability to cache events, and greatly improves the performance of the distributed system. In particular, in scenarios where the distributed system faces highly concurrent access, this meets the system's need to cache a large number of events. Moreover, with the expanded event queue capacity, a large number of events can be cached across multiple event queues, avoiding the frequent loss of events caused by insufficient queue capacity and thereby improving the stability and availability of the distributed system. At the same time, by caching events of different event types in different event queues, the large number of events in the distributed system are cached by category, the processing of different types of events does not interfere with one another, and the processing efficiency of the entire distributed system is improved.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show merely some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment according to an embodiment of the present application;
FIG. 2 is a flowchart of an event processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of an event processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an event processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an event processing apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description of the Embodiments
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application. Apparently, the described embodiments are merely some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative efforts shall fall within the protection scope of the present application.
FIG. 1 is a schematic diagram of an implementation environment according to an embodiment of the present application. The implementation environment includes a master node 101, at least one slave node 102, and a client node 103.
The master node 101, the at least one slave node 102, and the client node 103 are connected through a network and may form a distributed system that completes tasks through coordinated work. For example, the client node 103 may generate a task to be executed and send the task to be executed to the master node 101; the master node 101 may assign the task to each slave node 102; and each slave node 102 may execute the task and send the result of the task processing to the client node 103.
The architecture of the distributed system includes, but is not limited to, the Spark architecture, the Flink architecture, the MapReduce architecture, the Storm architecture, and other architectures. For example, when the distributed system runs on the Spark architecture, the client node 103 may be the Driver node in the Spark architecture, the master node 101 may be the Cluster Manager node in the Spark architecture, that is, the master node, and the slave node 102 may be a Worker node in the Spark architecture.
The client node 103 may be a computer device, for example a terminal or a server, and may include a personal computer, a laptop computer, a mobile phone, and the like. The master node 101 and the at least one slave node 102 may include servers, terminals, and the like.
FIG. 2 is a flowchart of an event processing method according to an embodiment of the present application. The method is executed by a computer device and includes the following steps:
201. When an event is generated in a distributed system, obtain an event type of the event.
202. Based on the event type, determine a target event queue from a plurality of event queues, where the plurality of event queues are used to respectively buffer events of a plurality of event types.
203. Enqueue the event into the target event queue.
204. When the event is dequeued from the target event queue, process the event.
The method provided in the embodiments of the present application introduces a multi-queue event caching mechanism for a distributed system. By designing a plurality of event types and a plurality of event queues, and enqueuing each event into the corresponding event queue according to its event type, the number of event queues is increased, which raises the total capacity of the event queues, improves the distributed system's ability to cache events, and greatly improves the performance of the distributed system. In particular, in scenarios where the distributed system faces highly concurrent access, this meets the system's need to cache a large number of events. Moreover, with the expanded event queue capacity, a large number of events can be cached across multiple event queues, avoiding the frequent loss of events caused by insufficient queue capacity and thereby improving the stability and availability of the distributed system. At the same time, by caching events of different event types in different event queues, the large number of events in the distributed system are cached by category, the processing of different types of events does not interfere with one another, and the processing efficiency of the entire distributed system is improved.
In a possible implementation manner, the determining a target event queue from a plurality of event queues based on the event type includes:
querying routing information to obtain an event queue identifier corresponding to the event type, where the routing information includes a plurality of event types and a plurality of corresponding event queue identifiers; and
using the event queue corresponding to the event queue identifier as the target event queue.
In a possible implementation manner, the depth of any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, before the obtaining an event type of the event, the method further includes:
generating the plurality of event queues for the plurality of event types.
In a possible implementation manner, the method further includes:
for any event queue of the plurality of event queues, sending events in the event queue to an event processing module concurrently through a plurality of threads; and
processing the events through the event processing module.
In a possible implementation manner, the number of threads corresponding to any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the processing of the event includes:
matching the event against at least one sub-event type under the event type to obtain a sub-event type matched by the event;
sending the event to an event processing module corresponding to the sub-event type; and
processing the event through the event processing module.
In a possible implementation manner, there are at least two event processing modules corresponding to any one of the plurality of event types.
In a possible implementation manner, the number of event processing modules corresponding to any one of the plurality of event types is positively correlated with the time taken to process events of the corresponding event type.
In a possible implementation manner, the plurality of event queues include at least two of an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
FIG. 3 is a flowchart of an event processing method according to an embodiment of the present application. The method is executed by a computer device, which may be the node device where the event queues are located in the distributed system; for example, in the Spark architecture, the computer device may be the Driver node, that is, the client node. The method includes the following steps:
301. The computer device generates a plurality of event queues for a plurality of event types.
In this embodiment, a multi-queue event caching mechanism is designed: a plurality of event queues are generated so that events of different event types can be cached in separate event queues.
Specifically, the computer device may obtain a plurality of event types and, for each of the plurality of event types, generate a corresponding event queue, thereby obtaining a plurality of event queues. Each event queue is used to cache events of its corresponding event type.
Regarding the plurality of event types: the division of event types may be determined according to the business requirements of the distributed system. In a possible implementation, the plurality of event types may include at least two of a heartbeat event type, a resource monitoring event type, a resource application event type, a system file event type, a job event type, and other event types.
The above event types are described in (1) to (6) below:
(1) Heartbeat event type: includes various heartbeat events, for example heartbeat events between the master node and each slave node, heartbeat events between the master service and each slave service, heartbeat events between the client node and each slave node, heartbeat events between the client node and the master node, and so on.
(2) Resource monitoring event type: includes various events for obtaining usage information of resources in the distributed system, where the resources may include the CPU (Central Processing Unit), memory, disk I/O (Input/Output), network bandwidth, and so on.
(3) Resource application event type: includes various events for applying for resources and events for reclaiming resources, for example, events for applying for resources such as CPU and memory when a job is submitted to the distributed system, events that trigger the system to perform GC (Garbage Collection), and events for reclaiming the resources applied for by a task after the task finishes executing.
(4) System file event type: includes various events for exchanging data with a storage system, where the data exchange includes writing data to the storage system and reading data from the storage system. The storage system may include local memory, HDFS, a database, a hard disk, cloud storage, and so on, and the data may include logs.
(5) Job event type: includes events in which a client submits a job to the distributed system. After a client submits a job, the distributed system splits the job into a plurality of job stages and then splits each job stage into a plurality of tasks. For example, the DAG (Directed Acyclic Graph), Job, Stage, and Task events in the Spark architecture are grouped into this event type.
(6) Other event types: all events that do not belong to the above event categories may be grouped into other event types.
With reference to the event types described in (1) to (6) above, the plurality of event queues generated by the computer device may include at least two of an event queue corresponding to the heartbeat event type, an event queue corresponding to the resource monitoring event type, an event queue corresponding to the resource application event type, an event queue corresponding to the system file event type, an event queue corresponding to the job event type, and an event queue corresponding to other event types. The event queue corresponding to the heartbeat event type is used to cache events belonging to the heartbeat event type, the event queue corresponding to the resource monitoring event type is used to cache events belonging to the resource monitoring event type, the event queue corresponding to the resource application event type is used to cache events belonging to the resource application event type, the event queue corresponding to the system file event type is used to cache events belonging to the system file event type, the event queue corresponding to the job event type is used to cache events belonging to the job event type, and the event queue corresponding to other event types is used to cache events belonging to other event types.
For example, referring to FIG. 4, an event queue may be denoted as eventQueue, and the computer device may generate six event queues for six event types, namely eventQueue1, eventQueue2, ..., eventQueue6, where eventQueue1 is the event queue corresponding to the heartbeat event type, eventQueue2 is the event queue corresponding to the resource monitoring event type, and so on.
In this step, a plurality of event queues are generated for the plurality of event types. On one hand, adding event queues increases the number of event queues and therefore the total capacity of the event queues, which improves the distributed system's ability to cache events and in turn improves the performance and scalability of the distributed system. In particular, in scenarios where the distributed system faces highly concurrent access, this meets the system's need to cache a large number of events and improves the processing performance of the distributed computing system. On the other hand, caching events in a plurality of event queues greatly reduces the probability of event loss and avoids the frequent loss of events; this also avoids the hidden danger of system instability or unavailability caused by losing resource-cleanup events, thereby improving the stability and availability of the distributed system. Furthermore, by generating different event queues to cache events of different event types, the large number of events in the distributed system are cached by category; each event queue is dedicated to caching events of its corresponding event type without having to handle events of other event types, which relieves the storage pressure on any single event queue.
Optionally, the computer device may obtain the depths of the plurality of event queues and generate the plurality of event queues according to those depths, so that the capacity of each event queue meets the business requirements. The depth of an event queue indicates the maximum number of events the event queue can hold; for example, the depth may be equal to the maximum number of events the event queue can hold, or the depth may be equal to the ratio between the maximum number of events the event queue can hold and a threshold coefficient, where the threshold coefficient may be 80%, 60%, and so on.
Regarding the depth of each event queue, optionally, the depth of each of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type. Specifically, the depth of each event queue may be designed in light of how time-consuming it is to process its events: the more time-consuming a certain type of event is to process, the deeper the corresponding event queue, and the more events that queue can cache, which improves the ability to cache such events. Likewise, the faster a certain type of event is processed, the shallower the corresponding event queue.
In a possible implementation, for each of the plurality of event types, the depth of the event queue corresponding to the event type may be configured in advance according to the time taken to process events of that type. If processing events of the event type takes a long time, the depth of the event queue for that type may be configured to be larger; if processing events of the event type takes a short time, the depth may be configured to be smaller. In this way, the computer device can obtain the configured depth of each event queue and generate the event queues according to the configured depths, achieving the effect that the depth of an event queue is positively correlated with the time taken to process events of the corresponding event type.
For example, processing an event of the heartbeat event type usually takes a short time, so the depth of the event queue for the heartbeat event type may be set to a small value; processing an event of the job event type usually takes a long time, so the depth of the event queue for the job event type may be set to a large value.
It should be noted that the above description only takes as an example the case where the depth of an event queue is positively correlated with the time taken to process events; optionally, the depth of each event queue may also be a default value or an empirical value, which is not limited in this embodiment.
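As a minimal sketch of step 301, the following example shows how per-type bounded queues might be created, with depths that grow with the expected processing time of each event type. The Event class, the EventType enumeration, and the concrete depth values are assumptions made for this illustration only and are not prescribed by this embodiment.

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical event wrapper; the embodiment does not prescribe a concrete event class.
class Event {
    final String name;
    Event(String name) { this.name = name; }
}

// The six exemplary event types of step 301.
enum EventType { HEARTBEAT, RESOURCE_MONITOR, RESOURCE_REQUEST, SYSTEM_FILE, JOB, OTHER }

public class EventQueueFactory {
    // Illustrative depths only: types whose events take longer to process get deeper queues.
    private static final Map<EventType, Integer> CONFIGURED_DEPTH = new EnumMap<>(EventType.class);
    static {
        CONFIGURED_DEPTH.put(EventType.HEARTBEAT, 1_000);        // fast to process, shallow queue
        CONFIGURED_DEPTH.put(EventType.RESOURCE_MONITOR, 2_000);
        CONFIGURED_DEPTH.put(EventType.RESOURCE_REQUEST, 5_000);
        CONFIGURED_DEPTH.put(EventType.SYSTEM_FILE, 10_000);
        CONFIGURED_DEPTH.put(EventType.JOB, 20_000);              // slow to process, deep queue
        CONFIGURED_DEPTH.put(EventType.OTHER, 5_000);
    }

    // Step 301: generate one bounded queue per event type, sized by its configured depth.
    public static Map<EventType, BlockingQueue<Event>> createQueues() {
        Map<EventType, BlockingQueue<Event>> queues = new EnumMap<>(EventType.class);
        for (Map.Entry<EventType, Integer> entry : CONFIGURED_DEPTH.entrySet()) {
            queues.put(entry.getKey(), new ArrayBlockingQueue<>(entry.getValue()));
        }
        return queues;
    }
}
```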
302. When an event is generated in the distributed system, the computer device obtains an event type of the event.
A distributed system can generate various events while running. In an exemplary scenario, during the execution of a task, the distributed system triggers corresponding events as a result of executing the task. For example, in the Spark architecture, after a client submits a job, the Driver node establishes a connection with the Cluster Manager node, registers with it, and applies for resources; as another example, during task execution, each Worker node may send heartbeats to the Driver node; as yet another example, after the Driver node receives the job, it may construct a DAG, decompose the DAG into a plurality of job stages, and decompose each job stage into a plurality of tasks. Of course, the distributed system may also generate other events in other scenarios; this embodiment does not limit the scenarios in which events are generated or the specific types of events.
After an event is generated, the computer device can obtain the event type of the event. Specifically, a plurality of event types may be configured in advance, and when an event is generated, the computer device may obtain, from the pre-configured event types, the event type matching the event.
For example, a correspondence between event types and event names may be set in advance, where each event type corresponds to at least one event name. When an event is generated, the computer device may obtain the name of the event and query the correspondence to obtain the event type corresponding to the name of the event.
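A minimal sketch of step 302, reusing the Event and EventType definitions from the previous example: a pre-configured name-to-type table is queried with the event's name to obtain its event type. The event names listed here are purely illustrative assumptions and are not names defined by the Spark architecture or by this embodiment.

```java
import java.util.HashMap;
import java.util.Map;

public class EventTypeResolver {
    // Illustrative name-to-type correspondence (step 302); a real deployment would
    // register the event names actually produced by its cluster.
    private static final Map<String, EventType> NAME_TO_TYPE = new HashMap<>();
    static {
        NAME_TO_TYPE.put("WorkerHeartbeat", EventType.HEARTBEAT);
        NAME_TO_TYPE.put("ExecutorMetricsUpdate", EventType.RESOURCE_MONITOR);
        NAME_TO_TYPE.put("RequestExecutors", EventType.RESOURCE_REQUEST);
        NAME_TO_TYPE.put("LogWrite", EventType.SYSTEM_FILE);
        NAME_TO_TYPE.put("JobSubmitted", EventType.JOB);
    }

    // Look up the event type by the event's name; unrecognized names fall into OTHER.
    public static EventType resolve(Event event) {
        return NAME_TO_TYPE.getOrDefault(event.name, EventType.OTHER);
    }
}
```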
303. The computer device determines a target event queue from the plurality of event queues based on the event type.
The computer device may establish, in advance and by way of event routing, a correspondence between event types and event queues. After the event type of the event is determined, the computer device may determine, from the plurality of event queues and based on the event type and the pre-established correspondence, the event queue corresponding to the event type, and use that event queue as the target event queue so that the generated event can be enqueued into the target event queue.
In a possible implementation, the correspondence between event types and event queues may be indicated by routing information. With reference to the routing information, for the specific process of determining the target event queue, the computer device may use the event type as an index to query the routing information, obtain the event queue identifier corresponding to the event type, and use the event queue corresponding to that event queue identifier as the target event queue.
The routing information is used to indicate the correspondence between event types and event queues. The routing information includes a plurality of event types and a plurality of corresponding event queue identifiers; an event queue identifier identifies the corresponding event queue and may be the name, number, or the like of the event queue. With reference to the six exemplary event types in step 301 above, the routing information may be as shown in Table 1 below:
Table 1
Event type                      | Event queue identifier
Heartbeat event type            | eventQueue1
Resource monitoring event type  | eventQueue2
Resource application event type | eventQueue3
System file event type          | eventQueue4
Job event type                  | eventQueue5
Other event types               | eventQueue6
304. The computer device enqueues the event into the target event queue.
Enqueuing refers to sending an event to a queue, that is, inserting the event into the queue so that the event waits in the queue, thereby caching the event. The event queue may be a FIFO (First In First Out) queue, in which case enqueuing means inserting the event at the tail of the event queue.
After determining the target event queue for the event, the computer device may enqueue the event into the target event queue corresponding to the event type, that is, send the event to the target event queue by inserting it at the tail of the target event queue. The event then waits in the target event queue; after all the events ahead of it have been dequeued from the target event queue, the event reaches the head of the queue and can be dequeued.
Optionally, with reference to steps 303 and 304 above, the event routing function can be implemented by querying the routing information and enqueuing the event into an event queue; that is, each generated event can be routed to its corresponding event queue, achieving the effect of enqueuing events by type. In a possible embodiment, steps 303 and 304 above may be encapsulated into a routing module that implements the event routing function, and the computer device may perform steps 303 to 304 by running the routing module.
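The routing module described above might be sketched as follows, again reusing the earlier Event, EventType, and EventTypeResolver definitions as assumptions. The routing table is represented as a map from event type to its queue; offer() implements the "insert at the tail" behavior of step 304 and simply reports failure when the target queue is full.

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;

public class EventRouter {
    // Routing information: event type -> target event queue.
    private final Map<EventType, BlockingQueue<Event>> routingTable;

    public EventRouter(Map<EventType, BlockingQueue<Event>> routingTable) {
        this.routingTable = routingTable;
    }

    // Steps 303-304: resolve the event type, look up the target queue in the routing
    // table, and append the event to the tail of that queue. offer() returns false
    // instead of blocking when the target queue is full.
    public boolean route(Event event) {
        EventType type = EventTypeResolver.resolve(event);
        BlockingQueue<Event> targetQueue = routingTable.get(type);
        return targetQueue != null && targetQueue.offer(event);
    }
}
```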
In an exemplary scenario, whenever a node device in the distributed system generates a heartbeat event, the various heartbeat events can be routed, according to the heartbeat event type, to the event queue corresponding to the heartbeat event type; whenever a node device in the distributed system reads data from or writes data to the storage system, events such as data write events and data read events can be routed, according to the system file event type, to the event queue corresponding to the system file event type, and so on.
This embodiment provides a refined event caching mechanism: instead of all events uniformly entering a single event queue, the various events are routed to their respective event queues. Routing different types of events to different event queues achieves at least the following technical effects:
On one hand, by introducing an event routing process between event distribution and the event queues, each event can be sent to the corresponding event queue according to its event type, implementing the function of enqueuing events separately by event type.
On the other hand, in the related art, because a single event queue caches all events, different types of events are prone to interfering with one another due to the head-of-line blocking effect. In this embodiment, different types of events are queued separately in different event queues, which prevents different types of events from interfering with one another due to head-of-line blocking; it thus avoids the situation where time-consuming processing of one type of event delays the processing progress of other types of events, improving the efficiency of event queuing.
In an exemplary scenario, if a log event is at the head of the event queue and, because log events are time-consuming and the event processing module that handles log events is currently busy, the log event cannot be dequeued, then short events such as heartbeat events behind the log event are also blocked in the event queue; they cannot be dequeued and therefore cannot be sent to their corresponding event processing modules, affecting the event processing efficiency of the entire distributed system.
In the embodiments of the present application, however, different types of events are queued separately. Congestion in the event queue for log events does not interfere with the event queue for heartbeat events: even if log events are blocked in the event queue corresponding to the log event type, heartbeat events can still queue and dequeue normally from the event queue corresponding to the heartbeat event type, thereby improving the event processing efficiency of the entire distributed system.
305. When the event is dequeued from the target event queue, the computer device sends the event to an event processing module.
As time passes, each event in the target event queue moves from the tail toward the head. When the generated event reaches the head of the target event queue, the computer device sends the event to the event processing module.
The event processing module may also be called an event handler, a listener, and so on. The event processing module is used to process events and may be a virtual program module executed by a thread, object, process, or other program execution unit within the computer device. The event processing module encapsulates a method for processing events and can invoke the encapsulated method to process an event.
Regarding the process of sending an event to the event processing module, the computer device may create a thread for sending events to the event processing module and send the event to the event processing module through that thread. A thread is the execution flow of a program and the basic unit scheduled and executed by the CPU. For example, the thread used for sending events may be a daemon thread that monitors the target event queue; when an event is dequeued from the target event queue, the daemon thread may obtain the event and send it to the event processing module.
Optionally, events may be distributed to the event processing module concurrently through a plurality of threads. Concurrency refers to a mechanism in which a plurality of threads take turns executing tasks. For example, for thread A, thread B, and thread C, concurrent execution means thread A executes a task first, then thread B executes a task, and then thread C executes a task; because the time interval for switching between threads is extremely short, the tasks can be regarded as executing simultaneously. The multi-thread concurrency mechanism can greatly improve the overall efficiency of task execution.
With the multi-thread concurrency mechanism, for any event queue of the plurality of event queues, the events in the event queue can be sent to the event processing module concurrently through a plurality of threads. That is, a plurality of threads take turns sending events to the event processing module: after one thread sends an event to the event processing module, there is no need to wait for that thread to finish sending before the next thread continues to send events to the event processing module.
For example, suppose events are sent concurrently through a first thread and a second thread. For any two adjacent events in the event queue, call the earlier event the first event and the later event the second event. Sending these two events through the two threads may include the following step 1 and step 2:
Step 1: when the first event is dequeued from the event queue, send the first event to the event processing module through the first thread.
Step 2: when the second event is dequeued from the event queue, send the second event to the event processing module through the second thread, where the second thread is different from the first thread.
After the first event is dequeued from the event queue, the second event, which follows the first event, moves to the head of the queue and is dequeued from the event queue. At this point, there is no need to wait for the first thread to finish sending the first event; the second event can be sent directly through the second thread.
It should be noted that the number of the plurality of threads may be two or more, and the specific number of threads is determined according to business requirements, which is not limited in this embodiment.
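A sketch of the dispatch side of step 305, under the same assumptions as the earlier examples: each event queue gets its own pool of daemon sender threads, and the pool size is the per-queue thread count discussed above. The EventHandler interface is a hypothetical stand-in for the event processing module introduced for this sketch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical stand-in for the "event processing module" of this embodiment.
interface EventHandler {
    void handle(Event event);
}

public class QueueDispatcher {
    private final ExecutorService senders;

    // threads: the number of concurrent sender threads for this queue; queues whose
    // event type is more time-consuming to process would be configured with more.
    public QueueDispatcher(BlockingQueue<Event> queue, int threads, EventHandler handler) {
        this.senders = Executors.newFixedThreadPool(threads, runnable -> {
            Thread t = new Thread(runnable);
            t.setDaemon(true);                    // daemon sender threads, as in step 305
            return t;
        });
        for (int i = 0; i < threads; i++) {
            senders.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        Event event = queue.take();   // dequeue the event at the head of the queue
                        handler.handle(event);        // forward it to the event processing module
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}
```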
Distributing events through a plurality of threads achieves at least the following technical effects:
In the related art, a distributed system uses a single thread to send events to the event processing module serially. That is, one fixed thread sends the events in the event queue to the event processing module: when an event is dequeued from the event queue, the thread must obtain that event, send it to the event processing module, and wait until the sending finishes before it can send the next event, so the efficiency of sending events is very low.
In the embodiments of the present application, events are sent concurrently through a plurality of threads, and the threads can take turns sending each event in the event queue. The multi-thread mechanism greatly increases the speed of sending events, and the sending of one event in the event queue does not block the sending of the next event, greatly improving the efficiency of sending events.
Optionally, on the basis of multi-thread event sending, the number of threads that send events for each event queue may be designed in light of how time-consuming it is to process the events. Specifically, the number of threads corresponding to any one of the plurality of event queues is positively correlated with the time taken to process events of the corresponding event type; that is, the more time-consuming a certain type of event is to process, the more threads send events for the event queue of that type, improving the ability to send such events. Likewise, if a certain type of event is processed quickly, fewer threads, for example a single thread, send events for the event queue of that type, saving system resources.
In a possible implementation, for each of the plurality of event types, the number of threads for the event queue corresponding to the event type may be configured in advance according to the time taken to process events of that type. If processing events of the event type takes a long time, more threads may be configured for that event queue; if processing events of the event type takes a short time, fewer threads may be configured. In this way, the computer device can obtain the number of threads configured for each event queue and create the corresponding threads for each event queue according to the configured number, achieving the effect that the number of threads is positively correlated with the time taken to process events of the corresponding event type.
For example, processing an event of the heartbeat event type usually takes a short time, so for the event queue of the heartbeat event type, events may still be sent serially by a single thread; processing an event of the job event type usually takes a long time, so for the event queue of the job event type, the events in the queue may be sent concurrently by a plurality of threads.
In this step, events can be sent by a plurality of threads for the event queues of time-consuming events, and by a single thread for the event queues of events that are not time-consuming. This improves the flexibility of sending events through threads, significantly improves the ability to send time-consuming events, and allows the events in time-consuming event queues to be handled in a targeted, dedicated manner.
Optionally, a plurality of event processing modules may be introduced to process events concurrently; that is, for each of the plurality of event types, all events of the event type may be processed jointly by a plurality of event processing modules. Specifically, there may be at least two event processing modules corresponding to any one of the plurality of event types: while one event processing module is processing an event, there is no need to wait for it to finish before the next event processing module continues processing events.
Optionally, the number of event processing modules corresponding to any one of the plurality of event types is positively correlated with the time taken to process events of the corresponding event type; that is, the longer the processing time corresponding to an event type, the more event processing modules correspond to that event type, and the stronger the distributed system's ability to process such events. For example, the job event type may correspond to a larger number of event processing modules, and the heartbeat event type to a smaller number.
In this step, introducing a plurality of event processing modules to process events increases the concurrency of event processing, thereby improving the concurrency performance and availability of the distributed system. Further, designing the number of event processing modules for each type of event in light of how time-consuming its processing is improves the flexibility of event processing, significantly improves the ability to process time-consuming events, and allows time-consuming events to be handled in a targeted, dedicated manner.
Optionally, for the process of determining the event processing module, at least one sub-event type under the event type may be determined, the event is matched against the at least one sub-event type under the event type to obtain the sub-event type matched by the event, and the event is then sent to the event processing module corresponding to that sub-event type.
Regarding event types and sub-event types: an event type can be regarded as a broad category and a sub-event type as a narrow one. A sub-event type is a more specific, finer-grained type than the event type dimension and belongs to an event type; each event type may include one or more sub-event types. For example, the job event type may include a job-start type, a job-end type, a task-start type, a task-end type, and so on.
Regarding the process of determining the at least one sub-event type under the event type: all sub-event types may be grouped under their corresponding event types in advance, and each event type and all the sub-event types under it may be stored correspondingly on the computer device. After the computer device determines the event type of the event, it can obtain the at least one sub-event type under that event type.
Regarding the process of matching the event against the sub-event types under the event type: the event may be matched against each sub-event type in turn. For example, all sub-event types under the event type may be traversed; during the traversal, for the sub-event type currently being visited, it is determined whether the event matches that sub-event type, and when the event matches that sub-event type, that sub-event type is taken as the sub-event type matched by the event.
In a possible implementation, the name of each sub-event type may be stored in advance, and it is determined whether the name of the event is the same as the name of a sub-event type; when the name of the event is the same as the name of a sub-event type, it is determined that the event matches that sub-event type.
Regarding the process of determining the event processing module according to the sub-event type: a correspondence between sub-event types and event processing modules may be established in advance, and after the sub-event type matched by the event is obtained, the event processing module corresponding to the sub-event type can be determined according to the pre-established correspondence.
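The sub-event matching could be sketched as follows, continuing the assumptions of the previous examples: one matcher is kept per event type, and it only holds the sub-event types registered under that type (identified here by event name), so the lookup never scans sub-event types belonging to other event types. A hash lookup stands in for the per-type traversal described above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SubTypeMatcher {
    // Only the sub-event types registered under one event type live here,
    // each mapped to its event processing module (EventHandler).
    private final Map<String, EventHandler> handlersBySubType = new LinkedHashMap<>();

    public void register(String subEventTypeName, EventHandler handler) {
        handlersBySubType.put(subEventTypeName, handler);
    }

    // Match the event only against the sub-event types of its own event type
    // (a much smaller search space than all sub-event types in the system),
    // then forward it to the corresponding event processing module.
    public void dispatch(Event event) {
        EventHandler handler = handlersBySubType.get(event.name);   // match by name, as described above
        if (handler != null) {
            handler.handle(event);
        }
    }
}
```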
The technical effects that can at least be achieved by the above process of determining the event processing module are described below.
In the related art, when an event is generated in a distributed system, all sub-event types serve as the traversal range: the event is matched against every sub-event type in turn before the matching sub-event type, and hence a suitable event processing module, can be found. The traversal range is therefore very large, and it takes a long time to find the matching sub-event type and event processing module, which delays event processing and cannot cope with the increasingly diverse events in distributed systems.
In an exemplary scenario, assume the distributed system has 1000 sub-event types in total. When an event of the 1000th sub-event type occurs, the traversal must run from the 1st sub-event type to the 1000th sub-event type, matching the event against the 1000 sub-event types one by one; only on the 1000th comparison can the matching event processing module be determined.
In this embodiment, by dividing the large number of events into multiple event types and matching the event only against the at least one sub-event type under its event type, the traversal range is narrowed from all sub-event types to the sub-event types under one event type. That is, the event does not need to be matched against all sub-event types in turn; it only needs to be matched against the sub-event types under one type. This narrows the traversal range, improves matching efficiency, and allows the matching event processing module to be found quickly.
In an exemplary scenario, assume the distributed system has 1000 sub-event types in total, divided into 10 event types, where the 900th to the 1000th sub-event types belong to the 10th event type. If an event of the 1000th sub-event type occurs, then after determining that the event belongs to the 10th event type, the event does not need to be matched against the 1st to the 1000th sub-event types; it only needs to be matched against the 900th to the 1000th sub-event types.
306. The computer device processes the event through the event processing module.
After receiving the event, the event processing module may call its own method to process the event and obtain a processing result of the event.
The event caching mechanism in current distributed systems (for example, Spark) has the following characteristics:
First, all types of events are cached in a single fixed-length event queue.
Therefore, when the number of events in this single event queue reaches the upper limit of its capacity, a newly generated event cannot be enqueued, so the new event is lost and the task fails.
Further, if the lost event is one that triggers resource reclamation such as GC, the OOM (OutOfMemoryError, out-of-memory) mechanism may be triggered; that is, the operating system kills the process to free memory. This affects the normal operation of the node device where the event queue resides, and may even cause the node device to crash or hang, so that it can no longer communicate with other node devices in the distributed system, affecting the operation of the distributed system. In other words, the loss of events is very likely to affect the stability of the distributed system and can easily render it unavailable.
Second, all events are sent to the event processing module serially through a single thread; only after the thread finishes sending the previous event can it send the next one.
Therefore, each event has to wait until all events ahead of it in the event queue have been sent before it can be sent, so the sending efficiency is extremely low. Moreover, if events of a certain type are time-consuming to process and the event queue becomes congested, head-of-line blocking delays the dequeuing of events of other types, so those events cannot be processed, which degrades the event-processing efficiency of the whole distributed system.
Third, before an event reaches an event processing module, it has to be matched against all sub-event types one by one to find a matching event processing module for processing. This matching approach requires an extremely large traversal range, which not only lowers the event-processing efficiency of the distributed system but may also cause it to crash or hang.
The embodiments of the present application solve the above technical problems and propose an optimized event-processing scheme. In this scheme, events in the distributed system are reclassified, different event queues are created for different event types, and an event-routing method is introduced between events and event queues, improving the efficiency of event distribution and processing. Further, the scheme solves not only the low event-processing efficiency of the whole system caused by insufficient event-queue capacity or by a single time-consuming operation, and the performance shortfall of sending events on a single thread, but also the operating-system OOM triggered by these problems and the resulting inter-node communication failures and system crashes or hangs. Meanwhile, the mechanisms of multiple threads and multiple event processing modules further improve the concurrency and stability of the distributed system.
The method provided by the embodiments of the present application introduces a multi-queue event caching mechanism for the distributed system. By designing multiple event types and multiple event queues and enqueuing each event into its corresponding event queue according to its event type, the number of event queues is increased, and therefore the total capacity of the event queues is increased, which improves the distributed system's ability to cache events and greatly improves its performance. In particular, in scenarios where the distributed system faces highly concurrent access, the need to cache a large number of events can be met. Moreover, because the event-queue capacity is expanded and a large number of events can be cached across multiple event queues, frequent event loss due to insufficient queue capacity is avoided, improving the stability and availability of the distributed system. At the same time, caching events of different event types in different event queues allows the large number of events in the distributed system to be cached by category, so that the processing of different types of events does not interfere with one another, improving the processing efficiency of the whole distributed system.
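As a rough illustration of the multi-queue idea, the sketch below keeps one bounded queue per event type and routes each posted event to the queue of its type. The enum values follow the queue categories listed in this application, but the class and method names are assumptions for illustration and do not reflect an actual implementation such as Spark's internal listener bus.

```java
// Minimal sketch, assuming the queue categories listed in this application; not an actual Spark API.
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

enum EventType { HEARTBEAT, RESOURCE_MONITOR, RESOURCE_REQUEST, SYSTEM_FILE, JOB, OTHER }

final class MultiQueueEventBus {
    private final Map<EventType, BlockingQueue<Object>> queues = new EnumMap<>(EventType.class);

    MultiQueueEventBus(int capacityPerQueue) {
        for (EventType type : EventType.values()) {
            // One dedicated queue per event type instead of a single shared fixed-length queue.
            queues.put(type, new ArrayBlockingQueue<>(capacityPerQueue));
        }
    }

    // Route the event to the target queue selected by its event type.
    boolean post(EventType type, Object event) {
        return queues.get(type).offer(event); // non-blocking; false only if that one queue is full
    }

    // Dequeue for the dispatch threads serving this event type.
    Object take(EventType type) throws InterruptedException {
        return queues.get(type).take();
    }
}
```

In such a sketch, congestion in one queue leaves the other queues unaffected, which is the separation this embodiment aims for.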
FIG. 5 is a schematic structural diagram of an event processing apparatus provided by an embodiment of the present application. Referring to FIG. 5, the apparatus includes: an obtaining module 501, a determining module 502, an enqueuing module 503, and an event processing module 504.
The obtaining module 501 is configured to obtain, when an event is generated in the distributed system, the event type of the event;
the determining module 502 is configured to determine a target event queue from multiple event queues based on the event type, the multiple event queues being used to cache events of multiple event types respectively;
the enqueuing module 503 is configured to enqueue the event into the target event queue;
the event processing module 504 is configured to process the event when the event is dequeued from the target event queue.
The apparatus provided by the embodiments of the present application introduces a multi-queue event caching mechanism for the distributed system. By designing multiple event types and multiple event queues and enqueuing each event into its corresponding event queue according to its event type, the number of event queues is increased, and therefore the total capacity of the event queues is increased, which improves the distributed system's ability to cache events and greatly improves its performance. In particular, in scenarios where the distributed system faces highly concurrent access, the need to cache a large number of events can be met. Moreover, because the event-queue capacity is expanded and a large number of events can be cached across multiple event queues, frequent event loss due to insufficient queue capacity is avoided, improving the stability and availability of the distributed system. At the same time, caching events of different event types in different event queues allows the large number of events in the distributed system to be cached by category, so that the processing of different types of events does not interfere with one another, improving the processing efficiency of the whole distributed system.
In a possible implementation, the determining module 502 includes:
a query submodule, configured to query routing information to obtain an event queue identifier corresponding to the event type, the routing information including multiple event types and the corresponding multiple event queue identifiers;
a determining submodule, configured to take the event queue corresponding to the event queue identifier as the target event queue.
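A hedged sketch of this two-step lookup (event type to queue identifier, then identifier to queue) follows; the field and method names are assumptions made for illustration only.

```java
// Sketch of the routing-information lookup; identifiers are illustrative assumptions.
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

final class EventRouter {
    // Routing information: event type -> event queue identifier.
    private final Map<String, String> queueIdByType = new HashMap<>();
    // Queue registry: event queue identifier -> event queue.
    private final Map<String, Queue<Object>> queueById = new HashMap<>();

    void addRoute(String eventType, String queueId) {
        queueIdByType.put(eventType, queueId);
        queueById.computeIfAbsent(queueId, id -> new ConcurrentLinkedQueue<>());
    }

    // Query the routing information, then take the identified queue as the target queue.
    Queue<Object> targetQueueFor(String eventType) {
        String queueId = queueIdByType.get(eventType);
        return queueId == null ? null : queueById.get(queueId);
    }
}
```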
In a possible implementation, the depth of any one of the multiple event queues is positively correlated with the time consumed to process events of the corresponding event type.
In a possible implementation, the apparatus further includes:
a generating module, configured to generate the multiple event queues for the multiple event types.
In a possible implementation, the apparatus further includes:
a sending module, configured to send, for any one of the multiple event queues, the events in that event queue to the event processing module 504 concurrently through multiple threads.
In a possible implementation, the number of threads corresponding to any one of the multiple event queues is positively correlated with the time consumed to process events of the corresponding event type.
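One way to realize the two correlations above (queue depth and sender-thread count both growing with per-event processing time) is sketched below; the scaling constants and all names are assumptions for illustration, not values from the embodiment.

```java
// Hedged sketch: per-type queue depth and sender threads sized by processing cost.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

interface EventConsumer { void accept(Object event); }

final class PerTypeDispatcher {
    final BlockingQueue<Object> queue;
    final ExecutorService senders;
    final int threadCount;

    PerTypeDispatcher(long avgProcessingMillis) {
        // Assumption: slower event types get a deeper queue and more sender threads.
        int depth = (int) Math.max(100L, avgProcessingMillis * 10);
        this.threadCount = (int) Math.max(1L, avgProcessingMillis / 50);
        this.queue = new ArrayBlockingQueue<>(depth);
        this.senders = Executors.newFixedThreadPool(threadCount);
    }

    // Several threads drain the same queue and hand events to the processing module concurrently.
    void start(EventConsumer processingModule) {
        for (int i = 0; i < threadCount; i++) {
            senders.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        processingModule.accept(queue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}
```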
In a possible implementation, the apparatus further includes:
a matching module, configured to match the event against at least one sub-event type under the event type to obtain the sub-event type matched by the event;
a sending module, configured to send the event to the event processing module 504 corresponding to the sub-event type.
In a possible implementation, there are at least two event processing modules 504 corresponding to any one of the multiple event types.
In a possible implementation, the number of event processing modules 504 corresponding to any one of the multiple event types is positively correlated with the time consumed to process events of the corresponding event type.
In a possible implementation, the multiple event queues include at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
All of the above optional technical solutions may be combined in any manner to form optional embodiments of the present disclosure, which are not described one by one here.
It should be noted that, when the event processing apparatus provided in the above embodiment processes events, the division into the above functional modules is used only as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the event processing apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the event processing apparatus provided in the above embodiment and the event processing method embodiments belong to the same concept; for its specific implementation process, refer to the method embodiments, which are not repeated here.
FIG. 6 is a schematic structural diagram of a computer device provided by an embodiment of the present application. The computer device 600 may vary greatly depending on its configuration or performance, and may include one or more processors (central processing units, CPU) 601 and one or more memories 602, where the memory 602 stores at least one instruction, and the at least one instruction is loaded and executed by the processor 601 to implement the event processing methods provided by the foregoing method embodiments. Of course, the computer device may also have components such as a wired or wireless network interface and an input/output interface for input and output, and may further include other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is further provided, for example a memory including instructions, where the instructions can be executed by a processor in a computer device to perform the event processing method in the foregoing embodiments. For example, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, the present application further provides a computer program product containing instructions which, when run on a computer device, enables the computer device to implement the event processing method in the foregoing embodiments.
In an exemplary embodiment, the present application further provides a chip, which includes a processor and/or program instructions; when the chip runs, the event processing method in the foregoing embodiments is implemented.
A person of ordinary skill in the art may understand that all or part of the steps of the foregoing embodiments may be implemented by hardware, or by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present application and are not intended to limit the present application. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (22)

  1. An event processing method, characterized in that the method comprises:
    when an event is generated in a distributed system, obtaining an event type of the event;
    determining a target event queue from multiple event queues based on the event type, wherein the multiple event queues are used to cache events of multiple event types respectively;
    enqueuing the event into the target event queue; and
    processing the event when the event is dequeued from the target event queue.
  2. The method according to claim 1, characterized in that the determining a target event queue from multiple event queues based on the event type comprises:
    querying routing information to obtain an event queue identifier corresponding to the event type, wherein the routing information comprises multiple event types and corresponding multiple event queue identifiers; and
    taking the event queue corresponding to the event queue identifier as the target event queue.
  3. The method according to claim 1, characterized in that a depth of any one of the multiple event queues is positively correlated with a time consumed to process events of the corresponding event type.
  4. The method according to claim 1, characterized in that before the obtaining an event type of the event, the method further comprises:
    generating the multiple event queues for the multiple event types.
  5. The method according to claim 1, characterized in that the method further comprises:
    for any one of the multiple event queues, sending the events in the event queue to an event processing module concurrently through multiple threads; and
    processing the event through the event processing module.
  6. The method according to claim 5, characterized in that the number of threads corresponding to any one of the multiple event queues is positively correlated with a time consumed to process events of the corresponding event type.
  7. The method according to claim 1, characterized in that the processing the event comprises:
    matching the event against at least one sub-event type under the event type to obtain a sub-event type matched by the event;
    sending the event to an event processing module corresponding to the sub-event type; and
    processing the event through the event processing module.
  8. The method according to any one of claims 5 to 7, characterized in that there are at least two event processing modules corresponding to any one of the multiple event types.
  9. The method according to claim 8, characterized in that the number of event processing modules corresponding to any one of the multiple event types is positively correlated with a time consumed to process events of the corresponding event type.
  10. The method according to claim 1, characterized in that the multiple event queues comprise at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  11. An event processing apparatus, characterized in that the apparatus comprises:
    an obtaining module, configured to obtain an event type of an event when the event is generated in a distributed system;
    a determining module, configured to determine a target event queue from multiple event queues based on the event type, wherein the multiple event queues are used to cache events of multiple event types respectively;
    an enqueuing module, configured to enqueue the event into the target event queue; and
    an event processing module, configured to process the event when the event is dequeued from the target event queue.
  12. The apparatus according to claim 11, characterized in that the determining module comprises:
    a query submodule, configured to query routing information to obtain an event queue identifier corresponding to the event type, wherein the routing information comprises multiple event types and corresponding multiple event queue identifiers; and
    a determining submodule, configured to take the event queue corresponding to the event queue identifier as the target event queue.
  13. The apparatus according to claim 11, characterized in that a depth of any one of the multiple event queues is positively correlated with a time consumed to process events of the corresponding event type.
  14. The apparatus according to claim 11, characterized in that the apparatus further comprises:
    a generating module, configured to generate the multiple event queues for the multiple event types.
  15. The apparatus according to claim 11, characterized in that the apparatus further comprises:
    a sending module, configured to send, for any one of the multiple event queues, the events in the event queue to the event processing module concurrently through multiple threads.
  16. The apparatus according to claim 15, characterized in that the number of threads corresponding to any one of the multiple event queues is positively correlated with a time consumed to process events of the corresponding event type.
  17. The apparatus according to claim 11, characterized in that the apparatus further comprises:
    a matching module, configured to match the event against at least one sub-event type under the event type to obtain a sub-event type matched by the event; and
    a sending module, configured to send the event to an event processing module corresponding to the sub-event type.
  18. The apparatus according to any one of claims 14 to 17, characterized in that there are at least two event processing modules corresponding to any one of the multiple event types.
  19. The apparatus according to claim 18, characterized in that the number of event processing modules corresponding to any one of the multiple event types is positively correlated with a time consumed to process events of the corresponding event type.
  20. The apparatus according to claim 11, characterized in that the multiple event queues comprise at least two of: an event queue corresponding to a heartbeat event type, an event queue corresponding to a resource monitoring event type, an event queue corresponding to a resource application event type, an event queue corresponding to a system file event type, an event queue corresponding to a job event type, and an event queue corresponding to other event types.
  21. A computer device, characterized in that the computer device comprises a processor and a memory, the memory storing at least one instruction, and the instruction being loaded and executed by the processor to implement the method steps of any one of claims 1-10.
  22. A computer-readable storage medium, characterized in that the storage medium stores at least one instruction, and the at least one instruction is executed by a processor to implement the method steps of any one of claims 1-10.
PCT/CN2019/087219 2018-05-25 2019-05-16 Method, device, and apparatus for event processing, and storage medium WO2019223596A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810545759.6 2018-05-25
CN201810545759.6A CN110532067A (en) 2018-05-25 2018-05-25 Event-handling method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2019223596A1 true WO2019223596A1 (en) 2019-11-28

Family

ID=68617357

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/087219 WO2019223596A1 (en) 2018-05-25 2019-05-16 Method, device, and apparatus for event processing, and storage medium

Country Status (2)

Country Link
CN (1) CN110532067A (en)
WO (1) WO2019223596A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4086768A1 (en) * 2021-05-03 2022-11-09 TeleNav, Inc. Computing system with message ordering mechanism and method of operation thereof

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111092865B (en) * 2019-12-04 2022-08-19 全球能源互联网研究院有限公司 Security event analysis method and system
CN111309494A (en) * 2019-12-09 2020-06-19 上海金融期货信息技术有限公司 Multithreading event processing assembly
CN111461198B (en) * 2020-03-27 2023-10-13 杭州海康威视数字技术股份有限公司 Action determining method, system and device
CN112040317B (en) * 2020-08-21 2022-08-09 海信视像科技股份有限公司 Event response method and display device
CN112102063A (en) * 2020-08-31 2020-12-18 深圳前海微众银行股份有限公司 Data request method, device, equipment, platform and computer storage medium
CN112416632B (en) * 2020-12-14 2023-01-17 五八有限公司 Event communication method and device, electronic equipment and computer readable medium
CN112860400A (en) * 2021-02-09 2021-05-28 山东英信计算机技术有限公司 Method, system, device and medium for processing distributed training task
CN113254466B (en) * 2021-06-18 2022-03-01 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and storage medium
CN113608842B (en) * 2021-09-30 2022-02-18 苏州浪潮智能科技有限公司 Container cluster and component management method, device, system and storage medium
CN115391058B (en) * 2022-08-05 2023-07-25 安超云软件有限公司 SDN-based resource event processing method, resource creation method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102457578A (en) * 2011-12-16 2012-05-16 中标软件有限公司 Distributed network monitoring method based on event mechanism
US20120239372A1 (en) * 2011-03-14 2012-09-20 Nec Laboratories America, Inc. Efficient discrete event simulation using priority queue tagging
WO2013097248A1 (en) * 2011-12-31 2013-07-04 华为技术有限公司 Distributed task processing method, device and system based on message queue
CN105302638A (en) * 2015-11-04 2016-02-03 国家计算机网络与信息安全管理中心 MPP (Massively Parallel Processing) cluster task scheduling method based on system load

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7111001B2 (en) * 2003-01-27 2006-09-19 Seiko Epson Corporation Event driven transaction state management with single cache for persistent framework
CN104133724B (en) * 2014-04-03 2015-08-19 腾讯科技(深圳)有限公司 Concurrent tasks dispatching method and device
CN105337896A (en) * 2014-07-25 2016-02-17 华为技术有限公司 Message processing method and device
US10515326B2 (en) * 2015-08-28 2019-12-24 Exacttarget, Inc. Database systems and related queue management methods
CN106095535B (en) * 2016-06-08 2019-11-08 东华大学 A kind of thread management system for supporting Data Stream Processing under multi-core platform

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120239372A1 (en) * 2011-03-14 2012-09-20 Nec Laboratories America, Inc. Efficient discrete event simulation using priority queue tagging
CN102457578A (en) * 2011-12-16 2012-05-16 中标软件有限公司 Distributed network monitoring method based on event mechanism
WO2013097248A1 (en) * 2011-12-31 2013-07-04 华为技术有限公司 Distributed task processing method, device and system based on message queue
CN105302638A (en) * 2015-11-04 2016-02-03 国家计算机网络与信息安全管理中心 MPP (Massively Parallel Processing) cluster task scheduling method based on system load

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4086768A1 (en) * 2021-05-03 2022-11-09 TeleNav, Inc. Computing system with message ordering mechanism and method of operation thereof

Also Published As

Publication number Publication date
CN110532067A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
WO2019223596A1 (en) Method, device, and apparatus for event processing, and storage medium
US8381230B2 (en) Message passing with queues and channels
US10248175B2 (en) Off-line affinity-aware parallel zeroing of memory in non-uniform memory access (NUMA) servers
EP2618257B1 (en) Scalable sockets
Sengupta et al. Scheduling multi-tenant cloud workloads on accelerator-based systems
US8881161B1 (en) Operating system with hardware-enabled task manager for offloading CPU task scheduling
CN111367630A (en) Multi-user multi-priority distributed cooperative processing method based on cloud computing
US9378047B1 (en) Efficient communication of interrupts from kernel space to user space using event queues
WO2021022964A1 (en) Task processing method, device, and computer-readable storage medium based on multi-core system
WO2023274278A1 (en) Resource scheduling method and device and computing node
US8543722B2 (en) Message passing with queues and channels
JP4183712B2 (en) Data processing method, system and apparatus for moving processor task in multiprocessor system
Hussain et al. A counter based approach for reducer placement with augmented Hadoop rackawareness
Lin et al. RingLeader: Efficiently Offloading Intra-Server Orchestration to NICs
JP6283376B2 (en) System and method for supporting work sharing multiplexing in a cluster
WO2016187831A1 (en) Method and device for accessing file, and storage system
US20230393782A1 (en) Io request pipeline processing device, method and system, and storage medium
CN113076180B (en) Method for constructing uplink data path and data processing system
CN113076189B (en) Data processing system with multiple data paths and virtual electronic device constructed using multiple data paths
CN115098220A (en) Large-scale network node simulation method based on container thread management technology
EP3387529A1 (en) Method and apparatus for time-based scheduling of tasks
CN110955461A (en) Processing method, device and system of computing task, server and storage medium
US20140237149A1 (en) Sending a next request to a resource before a completion interrupt for a previous request
Park et al. OCTOKV: An Agile Network-Based Key-Value Storage System with Robust Load Orchestration
US11334246B2 (en) Nanoservices—a programming design pattern for managing the state of fine-grained object instances

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19806835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19806835

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 160621)
