CN116302420A - Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium - Google Patents

Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium

Info

Publication number
CN116302420A
Authority
CN
China
Prior art keywords
event
target
message bus
processed
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310234006.4A
Other languages
Chinese (zh)
Inventor
何峰权
夏阳
徐晗
吴永亮
鲁莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Genus Information Technology Co ltd
Original Assignee
Shanghai Genus Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Genus Information Technology Co ltd filed Critical Shanghai Genus Information Technology Co ltd
Priority to CN202310234006.4A
Publication of CN116302420A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/54 - Interprogram communication
    • G06F 9/542 - Event management; broadcasting; multicasting; notifications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/54 - Interprogram communication
    • G06F 9/546 - Message passing systems or structures, e.g. queues
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Multi Processors (AREA)

Abstract

The embodiment of the invention provides a concurrent scheduling method and apparatus, a computer device, and a computer-readable storage medium, belonging to the field of computer technology. Each service type is defined as a specific event, with one message bus corresponding to all events of one service type. The computer device queues each acquired event to be processed into the event queue of the target message bus corresponding to the scheme number of the service type to which the event belongs. When the distribution condition is satisfied, the event at the front of the queue is encapsulated into a target execution task by the processing function corresponding to the target message bus, and the task is submitted to a target thread in the thread pool for execution. Because all events of the same service type are scheduled and distributed by the same message bus, frequent locking, unlocking, and switching of CPU resources are unnecessary, which greatly improves the processing performance of the computer device.

Description

Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a concurrent scheduling method, apparatus, computer device, and computer readable storage medium.
Background
When a computer device processes highly concurrent, complex services, multiple service requests are generally handled in parallel by multiple threads, with each thread independently executing its own processing task. Guaranteeing thread safety, however, has long been a difficulty in programming and development.
A common resource access control mechanism based on lock synchronization performs passive control only when concurrent access to a resource actually occurs. Under high concurrency, the resulting frequent locking, unlocking, and switching of CPU resources degrade the processing performance of the computer device.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a concurrent scheduling method, apparatus, computer device, and computer readable storage medium, which can improve processing performance when a computer device such as a server performs concurrent tasks of multiple services.
In order to achieve the above object, the technical scheme adopted by the embodiment of the invention is as follows:
in a first aspect, an embodiment of the present invention provides a concurrent scheduling method, applied to a computer device, where a thread pool and a plurality of message buses are deployed on the computer device, where the thread pool includes a plurality of threads, and each message bus corresponds to a processing function and a scheme number of a service type, where the method includes:
acquiring any event to be processed, and determining an event identifier of the event to be processed, where the event identifier includes the scheme number corresponding to the topic of the service type to which the event to be processed belongs;
queuing the event to be processed, according to the event identifier, into the event queue of the target message bus corresponding to the scheme number;
when it is determined that the distribution condition is satisfied, taking the event to be processed at the front of the event queue as the target event; and
encapsulating the target event into a target execution task through the processing function corresponding to the target message bus, and submitting the target execution task to a target thread in the thread pool for execution.
Further, the method further comprises:
for each message bus, calculating a target thread index based on a preset thread index calculation rule according to a scheme number and the total number of threads corresponding to the message bus, and taking a thread corresponding to the target thread index as a dedicated thread of the message bus;
the step of submitting the target execution task to a target thread in the thread pool for execution includes:
and submitting the target execution task to a dedicated thread of the target message bus for execution.
Further, the thread index calculation rule includes:
Index = Math.abs(groupId.hashCode()) % executors.length (I)
where Index denotes the thread index, groupId.hashCode() denotes the hash value of the scheme number, and executors.length denotes the total number of threads in the thread pool.
Further, the step of determining that the distribution condition is satisfied includes:
judging whether the last event to be processed that was distributed from the event queue has finished executing; if so, the distribution condition is satisfied.
Further, an event agent is deployed on the computer equipment;
before the step of acquiring any event to be processed and determining the event identification of the event to be processed, the method further comprises:
performing topic registration on the event agent with the scheme number corresponding to the topic of each service type, and constructing a message bus uniquely corresponding to each scheme number; and
registering the processing functions of each service type on the corresponding message bus by means of function annotations, so as to configure the corresponding processing functions for each message bus.
Further, the method further comprises:
acquiring and parsing an event expansion instruction to obtain a scheme number to be expanded and a processing function to be expanded; and
performing topic registration on the event agent according to the scheme number to be expanded, constructing a message bus corresponding to the scheme number to be expanded, and registering the processing function to be expanded on the corresponding message bus.
Further, the step of encapsulating the target event as a target execution task includes:
encapsulating the processing logic of the target event into a target execution task by using the processing function corresponding to the target message bus, where the target execution task includes a plurality of ordered execution operations.
In a second aspect, an embodiment of the present invention provides a concurrent scheduling apparatus, applied to a computer device, where a thread pool and a plurality of message buses are deployed on the computer device, where the thread pool includes a plurality of threads, and each message bus corresponds to a processing function and a scheme number of a service type, and the concurrent scheduling apparatus includes a preprocessing module, a first distribution module, and a second distribution module;
the preprocessing module is configured to acquire any event to be processed and determine an event identifier of the event, where the event identifier includes the scheme number corresponding to the topic of the service type to which the event belongs;
the first distribution module is configured to queue the event to be processed, according to the event identifier, into the event queue of the target message bus corresponding to the scheme number;
the second distribution module is configured to, when it is determined that the distribution condition is satisfied, take the event to be processed at the front of the event queue as the target event; and
the second distribution module is further configured to encapsulate the target event into a target execution task through the processing function corresponding to the target message bus, and submit the target execution task to a target thread in the thread pool for execution.
In a third aspect, an embodiment of the present invention provides a computer device, including a processor and a memory storing machine executable instructions executable by the processor to implement the concurrent scheduling method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the concurrent scheduling method according to the first aspect.
In the concurrent scheduling method and apparatus, computer device, and computer-readable storage medium provided by the embodiments of the invention, each service type is defined as a specific event by having one message bus correspond to the processing functions and scheme number of one service type. Any acquired event to be processed is queued into the event queue of the target message bus corresponding to the scheme number of the service type to which it belongs. When the distribution condition is satisfied, the event at the front of the event queue is encapsulated into a target execution task by the processing function corresponding to the target message bus, and the task is submitted to a target thread in the thread pool for execution. All events of the same service type are thus scheduled and distributed by the same message bus, which guarantees the ordering of event distribution, achieves highly concurrent scheduling without frequent locking, unlocking, and switching of CPU resources, and greatly improves the processing performance of the computer device.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered limiting in scope; a person skilled in the art may obtain other related drawings from these drawings without inventive effort.
Fig. 1 shows a block schematic diagram of a concurrent scheduling system according to an embodiment of the present invention.
Fig. 2 shows the first flowchart of a concurrent scheduling system according to an embodiment of the present invention.
Fig. 3 shows the second flowchart of a concurrent scheduling system according to an embodiment of the present invention.
Fig. 4 shows a schematic configuration diagram of a computer device according to an embodiment of the present invention.
Fig. 5 shows the third flowchart of a concurrent scheduling system according to an embodiment of the present invention.
Fig. 6 shows the fourth flowchart of a concurrent scheduling system according to an embodiment of the present invention.
Fig. 7 is a block diagram of a concurrent scheduling apparatus according to an embodiment of the present invention.
Fig. 8 shows a block schematic diagram of a computer device according to an embodiment of the present invention.
Reference numerals: 100-concurrent scheduling system; 110-external device; 120-computer device; 130-concurrent scheduling apparatus; 140-preprocessing module; 150-first distribution module; 160-second distribution module; 170-configuration module; 180-update expansion module.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention.
It is noted that relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The resource access control mechanism based on lock synchronization performs passive control only when concurrent access to a resource occurs. Under high concurrency, however, such a mechanism must frequently lock, unlock, and switch CPU resources, which degrades the processing performance of computer devices such as service servers. In addition, typical transaction services require strict ordering guarantees, and the uncertainty of ordinary thread competition under lock synchronization adds complexity to service implementation, resulting in low processing efficiency.
Based on the above consideration, the embodiments of the present invention provide a concurrent scheduling method, apparatus, computer device, and computer readable storage medium, which can improve the processing performance of the computer device and the service processing efficiency while ensuring high concurrency capability of service processing. Hereinafter, this scheme will be described.
The concurrent scheduling method provided by the embodiment of the present invention may be applied to the concurrent scheduling system 100 shown in fig. 1, where the concurrent scheduling system 100 includes a computer device 120 and a plurality of external devices 110, and any external device 110 may be communicatively connected to the computer device 120 through wired or wireless modes such as ethernet, serial communication, and the like.
The computer device 120 includes, but is not limited to: an independent server, a server cluster, or a terminal device. The external devices 110 include, but are not limited to: servers, personal computers, notebook computers, and wearable mobile devices.
Through pre-configuration, an event agent, a thread pool, and a plurality of message buses are deployed on the computer device 120. The thread pool includes a plurality of threads, and each message bus corresponds to the processing functions of one service type (essentially the set of all processing functions of that service type) and a scheme number. That is, each message bus is constructed specifically for one service type, and all events of the event types under the same service type can be processed by that service type's processing functions.
The external device 110 is configured to generate a to-be-processed event and send the to-be-processed event to the computer device 120.
The computer device 120 is configured to obtain any event to be processed, whether generated locally or sent by an external device 110, and to implement the concurrent scheduling method provided by the embodiment of the present invention.
Before use, the computer device needs to be configured as follows: perform topic registration on the event agent with the scheme number corresponding to the topic of each service type, and construct a message bus uniquely corresponding to each scheme number; then register the processing functions of each service type on the corresponding message bus by means of function annotations, so as to configure the corresponding processing functions for each message bus.
It should be understood that a service type includes a plurality of events (messages) and each event (message) has a corresponding processing function, and thus, a message bus corresponds to a set of processing functions that includes the processing functions of all events under the service type.
Further, the computer device is further configured to: for each message bus, based on a preset thread index calculation rule, calculating a target thread index according to a scheme number and the total number of threads corresponding to the message bus, and taking a thread corresponding to the target thread index as a dedicated thread of the message bus.
It should be noted that there are multiple service types, each service type includes multiple event types, and the events to be processed of all event types under the same service type are scheduled and distributed by the message bus corresponding to that service type.
Through this configuration, each service type is defined as a specific event, a dedicated message bus is constructed for each service type, and each message bus is configured with the processing functions, the scheme number, and a dedicated thread for its service type, thereby enabling the concurrent scheduling method provided by the embodiment of the present invention.
The concurrent scheduling system 100 realizes that all events to be processed of the same service type are scheduled and distributed by the same message bus, ensures the sequence of event distribution, can realize high concurrent scheduling without frequently locking, unlocking and switching CPU resources, and greatly improves the processing performance of the computer device 120.
In a possible implementation manner, an embodiment of the present invention provides a concurrent scheduling method, and referring to fig. 2, the concurrent scheduling method may include the following steps. In the present embodiment, the concurrent scheduling method is applied to the computer device 120 in fig. 1 for illustration.
S11, any event to be processed is acquired, and an event identification of the event to be processed is determined.
In this embodiment, the event identifier includes the scheme number corresponding to the topic of the service type to which the event to be processed belongs.
S13, queuing the event to be processed to an event queue of a target message bus corresponding to the scheme number according to the event identification.
S15, when it is determined that the distribution condition is satisfied, taking the event to be processed at the front of the event queue as the target event.
S17, packaging the target event into a target execution task through a processing function corresponding to the target message bus, and submitting the target execution task to a target thread in a thread pool for execution.
When the external device 110 or the computer device 120 generates an event to be processed, an event identifier (containing the scheme number of the service type) is added to the event according to the service type to which it belongs, and the event is then sent to the service processing engine of the computer device 120. When the service processing engine receives the event, it parses the event identifier from the event and queues (distributes) the event into the event queue of the target message bus corresponding to the scheme number.
For the event queue of the target message bus, when the distribution condition is satisfied (for example, when the last distributed event in the queue has finished executing), the event queued at the front of the queue is taken as the target event.
The target event is then encapsulated into a target execution task by the processing function corresponding to the target message bus. For example, if the number of the target message bus is 001 and its corresponding processing-function set is set 001, the target event is encapsulated into the target execution task by a function from that set, and the task is then submitted to a target thread in the thread pool for execution.
It should be understood that the target message bus may be any one of the message buses deployed on the computer device 120. When the computer device 120 receives events of different service types at the same time, steps S11 to S17 can be applied to each of them in order to achieve highly concurrent service processing.
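The flow of steps S11 to S17 can be sketched as follows. This is a minimal illustrative sketch, not code from the patent: the class, method, and queue names are assumptions, and the "processing function" is reduced to a print statement.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of S11-S17: one FIFO event queue per scheme number (per message bus);
// events of the same service type are always serialized through the same queue.
public class DispatchSketch {
    static final Map<String, Queue<String>> busQueues = new ConcurrentHashMap<>();
    static final ExecutorService pool = Executors.newFixedThreadPool(4);

    // S11 + S13: route an event to the queue of the bus matching its scheme number.
    static void enqueue(String groupId, String event) {
        busQueues.computeIfAbsent(groupId, k -> new ConcurrentLinkedQueue<>()).add(event);
    }

    // S15 + S17: when the bus may distribute, take the head event, wrap it in a
    // task via the bus's processing function, and submit it to the thread pool.
    static Future<?> distribute(String groupId) {
        Queue<String> q = busQueues.get(groupId);
        String target = (q == null) ? null : q.poll();
        if (target == null) return null;
        Runnable task = () -> System.out.println("bus " + groupId + " executes " + target);
        return pool.submit(task);
    }

    public static void main(String[] args) throws Exception {
        enqueue("001", "orderCreated#1");
        enqueue("001", "orderPaid#1");   // same service type, same bus, FIFO order
        distribute("001").get();          // wait: next distribution only after completion
        distribute("001").get();
        pool.shutdown();
    }
}
```

Because each queue is drained one event at a time, ordering within a service type is preserved without any explicit locks in the business logic.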
Compared with the traditional resource access control mechanism based on lock synchronization, the concurrent scheduling method provided by the embodiment of the invention defines each service type as a specific event by having one message bus correspond to all events of one service type. All events to be processed of the same service type are scheduled and distributed by the same message bus, which guarantees the ordering of event distribution and achieves highly concurrent scheduling without frequent locking, unlocking, and switching of CPU resources, greatly improving the processing performance of the computer device. In addition, because the events of each service type are scheduled and distributed in strict order and executed by a specific thread, ordinary thread competition is avoided, the complexity of service implementation is reduced, and service processing efficiency is improved.
Further, in order to implement the method of steps S11 to S17 above, in one possible implementation the concurrent scheduling method introduces the configuration of service definitions, message buses, and processors. Referring to fig. 3, the configuration process may include the following steps.
S21, performing topic registration on the event agent with the scheme number corresponding to the topic of each service type, and constructing a message bus uniquely corresponding to each scheme number.
S23, registering the processing functions of each service type on the corresponding message bus by means of function annotations, so as to configure the corresponding processing functions for each message bus.
It should be appreciated that the processing functions for each service type are pre-stored on the computer device. Each service type has a unique corresponding topic, and each topic has a unique corresponding scheme number (GroupID). Referring to fig. 4, the service processing engine of the computer device 120 performs topic registration for each service type on the event agent with its scheme number and constructs a message bus uniquely corresponding to each scheme number, thereby defining each service type as a specific event. Furthermore, all processing functions and the scheme number of each service type are registered on the corresponding message bus using function annotations. For example, if the scheme number of a service type is 001, the corresponding message bus is numbered 001; if the service type has processing functions 1-8, all of them are registered on bus 001 through function annotations so as to bind functions 1-8 to bus 001, thereby configuring the corresponding processing functions for that bus.
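Annotation-based registration of this kind can be sketched as below. The @Handler annotation, the handler class, and the event-type names are illustrative assumptions; the patent does not specify an annotation API.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

// Sketch of S23: binding a service type's processing functions to its bus
// by scanning for an (assumed) runtime annotation.
public class AnnotationRegistration {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Handler { String eventType(); }   // illustrative annotation

    // Processing functions for one service type (scheme number "001").
    public static class OrderHandlers {
        @Handler(eventType = "orderCreated")
        public void onCreated(Object event) { /* business logic */ }

        @Handler(eventType = "orderPaid")
        public void onPaid(Object event) { /* business logic */ }
    }

    // Bind every annotated method to the bus, keyed by its event type.
    public static Map<String, Method> register(Object handlers) {
        Map<String, Method> bus = new HashMap<>();
        for (Method m : handlers.getClass().getMethods()) {
            Handler h = m.getAnnotation(Handler.class);
            if (h != null) bus.put(h.eventType(), m);
        }
        return bus;
    }

    public static void main(String[] args) {
        Map<String, Method> bus001 = register(new OrderHandlers());
        System.out.println(bus001.keySet());   // event types bound to bus 001
    }
}
```

This mirrors the pattern used by annotation-driven event buses, where handler discovery happens once at configuration time rather than on every dispatch.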
Through the above steps S21 and S23, each service type is defined as a specific event, a dedicated message bus is constructed for each service type, and the processing functions and scheme number of the corresponding service type are bound to each message bus, on which basis steps S11 to S17 can be implemented.
Considering that new service types may arise in actual application, for which a new message bus must be added to the computer device and the relevant configuration performed, the concurrent scheduling method provided by the embodiment of the invention introduces a service expansion step. In one possible embodiment, referring to fig. 5, the following steps may be included.
S31, acquiring and parsing an event expansion instruction to obtain a scheme number to be expanded and a processing function to be expanded.
S33, performing topic registration on the event agent according to the scheme number to be expanded, constructing a message bus corresponding to the scheme number to be expanded, and registering the processing function to be expanded on the corresponding message bus.
A developer can package the scheme number corresponding to the topic of a new service type, together with the processing functions of that service type, into an event expansion instruction, issue the instruction to the computer device, and store the new processing functions on it. The service processing engine of the computer device then parses the event expansion instruction to obtain the scheme number to be expanded and the processing functions to be expanded. Following the same principle as steps S21 and S23, the computer device first performs topic registration, constructs a message bus, and registers the new service type's processing functions on that bus, thereby completing the addition of service processing for the new service type.
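Steps S31 and S33 can be sketched as runtime expansion of the broker's bus registry. The EventBroker class and its method names are assumptions for illustration, not an API from the patent.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of S31/S33: adding a new service type (scheme number + handlers) at runtime.
public class BusExpansionSketch {
    public static class EventBroker {
        // scheme number -> (event type -> processing function)
        final Map<String, Map<String, Consumer<String>>> buses = new HashMap<>();

        // Topic registration: create a bus uniquely keyed by the scheme number.
        public void registerTopic(String groupId) {
            buses.putIfAbsent(groupId, new HashMap<>());
        }

        // Register a processing function on the bus for the given scheme number.
        public void registerHandler(String groupId, String eventType, Consumer<String> fn) {
            buses.get(groupId).put(eventType, fn);
        }
    }

    public static void main(String[] args) {
        EventBroker broker = new EventBroker();
        // Values below stand in for what would be parsed from an expansion instruction.
        String groupIdToExpand = "002";
        broker.registerTopic(groupIdToExpand);
        broker.registerHandler(groupIdToExpand, "reportGenerated",
                e -> System.out.println("handle " + e));
        System.out.println(broker.buses.get("002").keySet());
    }
}
```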
For step S15, the manner of determining whether the distribution condition is satisfied may be set flexibly. For example, the condition may be deemed satisfied when a preset time has elapsed since the last event was distributed, or when the dedicated thread is idle; this embodiment does not limit it specifically.
In a possible implementation, determining in step S15 that the distribution condition is satisfied may be implemented as follows: judge whether the last event to be processed distributed from the event queue has finished executing; if so, the distribution condition is satisfied; otherwise, it is not.
For example, for the event queue of message bus 001, if the event most recently distributed from the queue has finished executing, the distribution condition is satisfied; otherwise it is not. This avoids, to some extent, out-of-order execution caused by conflicting events being processed on the dedicated thread.
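This "last event finished" check can be sketched per bus as follows. Tracking the last distributed event as a Future is an illustrative implementation choice, not text from the patent.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

// Sketch of the distribution condition: a bus may only distribute its next
// event after the previously distributed event has finished executing.
public class DistributionCondition {
    private Future<?> lastDistributed;  // last event distributed from this bus's queue

    public synchronized boolean mayDistribute() {
        // No event distributed yet, or the last one has finished executing.
        return lastDistributed == null || lastDistributed.isDone();
    }

    public synchronized void onDistributed(Future<?> task) {
        lastDistributed = task;
    }

    public static void main(String[] args) {
        DistributionCondition c = new DistributionCondition();
        System.out.println(c.mayDistribute());                     // true: nothing in flight
        c.onDistributed(new CompletableFuture<Void>());            // still running
        System.out.println(c.mayDistribute());                     // false
        c.onDistributed(CompletableFuture.completedFuture(null));  // finished
        System.out.println(c.mayDistribute());                     // true again
    }
}
```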
Further, in order to reduce the complexity caused by competition for thread resources and to improve service processing efficiency, the target thread to which the target execution task is submitted in step S17 may be the dedicated thread of the target message bus. The manner of determining each message bus's dedicated thread may be chosen flexibly; for example, the thread may be allocated in advance, or determined from how busy the threads are. This embodiment does not limit it specifically.
In one possible implementation, a thread index calculation rule is introduced to configure a specific thread for each message bus's event queue, reducing thread contention. Referring to fig. 4 and 6, the configuration process may further include step S25.
S25: for each message bus, calculate a target thread index from the scheme number corresponding to the message bus and the total number of threads, based on a preset thread index calculation rule, and take the thread corresponding to the target thread index as the dedicated thread of that message bus.
Via step S25, a dedicated thread is determined for each message bus. It should be appreciated that after a new message bus is added by expansion, step S25 may also be performed to determine the dedicated thread of the new message bus.
Further, the thread index calculation rule may include:
Index=executors[Math.abs(groupId.hashCode())%executors.length] (I)
wherein Index characterizes the dedicated thread selected from the thread array, groupId.hashCode() characterizes the hash value of the scheme number, and executors.length characterizes the total number of threads in the thread pool.
Through the thread index calculation rule, a dedicated thread is configured for each message bus to execute the events to be processed distributed by that message bus, which improves the ordering of the event processing stages of the same service type and reduces thread contention.
In other embodiments, during the initialization configuration of the service definitions, message buses and processors, each message bus may be bound to the dedicated thread determined by the thread index calculation rule, so that in actual application the message bus only needs to dispatch its events to be processed to the bound thread.
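Formula (I) can be written in Java roughly as follows. The array name executors follows the formula; the class ThreadIndexRule, the method names, and the use of single-thread executors for the pool are assumptions for illustration. Note that Math.abs(Integer.MIN_VALUE) is still negative in Java, so hardened code may prefer masking the hash with 0x7fffffff:

```java
import java.util.concurrent.ExecutorService;

// Sketch of thread index calculation rule (I): a message bus is pinned to
// one thread of the pool by hashing its scheme number (groupId).
// All names besides "executors" are illustrative assumptions.
public class ThreadIndexRule {

    /** Target thread index for a scheme number, per formula (I). */
    public static int indexFor(String groupId, int totalThreads) {
        return Math.abs(groupId.hashCode()) % totalThreads;
    }

    /** Dedicated (single-thread) executor of the message bus. */
    public static ExecutorService dedicatedExecutor(String groupId,
                                                    ExecutorService[] executors) {
        // Index = executors[Math.abs(groupId.hashCode()) % executors.length]
        return executors[indexFor(groupId, executors.length)];
    }
}
```

Because the index depends only on the scheme number, every event of the same service type lands on the same thread, which is what yields the per-bus ordering and reduced contention described above.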
Further, for step S17, encapsulating the target event as a target execution task may be implemented as follows: the processing logic of the target event is encapsulated into a target execution task by the processing function corresponding to the target message bus, wherein the target execution task comprises a plurality of ordered execution operations.
Since a message bus corresponds to a set of multiple processing functions, in order to quickly determine the processing function of the target event among them, an event type may be introduced, such that each event and its corresponding processing function share the same event type. The processing logic of the target event can then be encapsulated by invoking, according to the event type of the target event, the processing function with that same event type.
In one possible implementation, any event to be processed may be identified by [topic, event], where topic characterizes the service type to which the event to be processed belongs and event characterizes its event type, so that the corresponding message bus and processing function can be determined quickly.
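A minimal sketch of this [topic, event] identification is a small value class: topic selects the message bus (service type) and event selects the processing function. The class name EventId and the field names are assumptions:

```java
import java.util.Objects;

// Illustrative [topic, event] identifier for an event to be processed.
public final class EventId {
    public final String topic; // service type -> which message bus
    public final String event; // event type   -> which processing function

    public EventId(String topic, String event) {
        this.topic = topic;
        this.event = event;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof EventId)) return false;
        EventId other = (EventId) o;
        return topic.equals(other.topic) && event.equals(other.event);
    }

    @Override
    public int hashCode() {
        return Objects.hash(topic, event); // usable as a lookup key for buses/handlers
    }
}
```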
Based on the processing function corresponding to the target event, the target event is encapsulated into a target execution task consisting of a plurality of ordered execution operations, so that the target thread can execute these operations in sequence to complete the event to be processed.
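The "task of ordered execution operations" can be sketched as a Runnable that runs its operations strictly in sequence; the class name TargetExecutionTask and the use of Runnable for each operation are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a processing function turns a target event into a
// task made of ordered execution operations, which the dedicated thread
// then runs in order.
public class TargetExecutionTask implements Runnable {
    private final List<Runnable> orderedOperations;

    public TargetExecutionTask(List<Runnable> orderedOperations) {
        // Defensive copy so the operation order cannot change after submission.
        this.orderedOperations = new ArrayList<>(orderedOperations);
    }

    @Override
    public void run() {
        // The target thread executes each operation in sequence.
        for (Runnable op : orderedOperations) {
            op.run();
        }
    }
}
```

Submitting such a task to the bus's dedicated thread (e.g. a single-thread executor) preserves both the order of operations within one event and the order of events within one service type.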
According to the concurrent scheduling method provided by the embodiment of the present invention, the processing of every service type is encapsulated as events, the events to be processed of each service type are uniformly scheduled by the corresponding message bus, and concurrency conflicts within a highly cohesive service subdomain are actively controlled, thereby guaranteeing high-concurrency service processing capability, reducing the development difficulty of service processing, and improving service processing efficiency.
Based on the same inventive concept as the concurrent scheduling method described above, the embodiment of the present invention further provides a concurrent scheduling apparatus 130, where the concurrent scheduling apparatus 130 may be applied to the computer device 120 in fig. 1. In one possible implementation, referring to fig. 7, the concurrency scheduler 130 may include a preprocessing module 140, a first distribution module 150, and a second distribution module 160.
The preprocessing module 140 is configured to obtain any event to be processed and determine the event identifier of the event to be processed, where the event identifier comprises the scheme number corresponding to the topic of the service type to which the event belongs.
The first distribution module 150 is configured to queue the event to be processed, according to the event identifier, into the event queue of the target message bus corresponding to the scheme number.
The second distribution module 160 is configured to take the event to be processed queued foremost in the event queue as the target event when it is determined that the distribution condition is satisfied.
The second distribution module 160 is further configured to encapsulate the target event into a target execution task through the processing function corresponding to the target message bus, and submit the target execution task to a target thread in the thread pool for execution.
Further, the concurrency scheduler 130 may also include a configuration module 170 and an update extension module 180.
The configuration module 170 is configured to register topics on the event broker according to the scheme number corresponding to the topic of each service type and to construct a message bus uniquely corresponding to each scheme number, and to register the processing function of each service type on the corresponding message bus by means of function annotations, so as to configure a corresponding processor for each message bus.
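Function-annotation registration can be sketched in the spirit of annotation-driven event buses such as Guava's EventBus; the annotation @HandleEvent, the registry class, and the sample processor below are all assumptions for illustration, not the patent's API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of registering processing functions on a message bus
// by function annotation: each handler method declares the event type it
// processes, and the bus scans the processor class reflectively.
public class MessageBusRegistry {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface HandleEvent {
        String value(); // the event type this handler processes
    }

    private final Map<String, Method> handlersByEventType = new HashMap<>();

    /** Registers every annotated public method of the processor on this bus. */
    public void register(Object processor) {
        for (Method m : processor.getClass().getMethods()) {
            HandleEvent ann = m.getAnnotation(HandleEvent.class);
            if (ann != null) {
                handlersByEventType.put(ann.value(), m);
            }
        }
    }

    public boolean hasHandler(String eventType) {
        return handlersByEventType.containsKey(eventType);
    }

    // Example processor with one annotated handler (illustrative only;
    // dispatch via Method.invoke is omitted for brevity).
    public static class OrderProcessor {
        @HandleEvent("order.created")
        public void onOrderCreated(Object event) { /* business logic */ }
    }
}
```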
The update expansion module 180 is configured to obtain and parse an event expansion instruction to obtain a scheme number to be expanded and a processing function to be expanded, register the topic on the event broker using the scheme number to be expanded, construct a message bus corresponding to that scheme number, and register the processing function to be expanded on the corresponding message bus.
In the concurrent scheduling apparatus 130, through the cooperation of the preprocessing module 140, the first distribution module 150 and the second distribution module 160, each service type is defined as a specific kind of event, with one message bus corresponding to the events of one service type, so that all events to be processed of the same service type are scheduled and distributed by the same message bus. This guarantees the ordering of event distribution and achieves high-concurrency scheduling without frequently acquiring and releasing locks or switching CPU resources, greatly improving the processing performance of the computer device.
For specific limitations of the concurrent scheduling apparatus 130, reference may be made to the limitations of the concurrent scheduling method above, which are not repeated here. The modules in the concurrent scheduling apparatus 130 may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device 120 is provided. The computer device 120 may be a server, and its internal structure may be as shown in fig. 8. The computer device 120 includes a processor, a memory, a communication interface and an input means connected by a system bus, where the processor of the computer device 120 is configured to provide computing and control capabilities. The memory of the computer device 120 includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device 120 is used to communicate with an external terminal in a wired or wireless manner, where the wireless manner may be implemented through Wi-Fi, an operator network, Near Field Communication (NFC) or other technologies. The computer program, when executed by the processor, implements the concurrent scheduling method provided by the above embodiments.
The architecture shown in fig. 8 is merely a block diagram of a portion of the architecture related to the present inventive arrangements and does not limit the computer device 120 to which the present inventive arrangements are applied; a particular computer device 120 may include more or fewer components than shown in fig. 8, combine certain components, or have a different arrangement of components.
In one embodiment, the concurrent scheduling apparatus 130 provided by the present invention may be implemented in the form of a computer program executable on a computer device 120 as shown in fig. 8. The memory of the computer device 120 may store the program modules constituting the concurrent scheduling apparatus 130, such as the preprocessing module 140, the first distribution module 150 and the second distribution module 160 shown in fig. 7. The computer program composed of these program modules causes the processor to execute the steps of the concurrent scheduling method described in this specification.
For example, the computer device 120 shown in fig. 8 may perform step S11 through the preprocessing module 140 in the concurrent scheduling apparatus 130 shown in fig. 7, perform step S13 through the first distribution module 150, and perform step S15 and step S17 through the second distribution module 160.
In one embodiment, a computer device 120 is provided that includes a memory storing machine-executable instructions and a processor that, when executing the machine-executable instructions, performs the following steps: obtaining any event to be processed and determining the event identifier of the event to be processed; queuing the event to be processed, according to the event identifier, into the event queue of the target message bus corresponding to the scheme number; when it is determined that the distribution condition is satisfied, taking the event to be processed queued foremost in the event queue as the target event; and encapsulating the target event into a target execution task through the processing function corresponding to the target message bus, and submitting the target execution task to a target thread in a thread pool for execution.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon which, when executed by a processor, performs the following steps: obtaining any event to be processed and determining the event identifier of the event to be processed; queuing the event to be processed, according to the event identifier, into the event queue of the target message bus corresponding to the scheme number; when it is determined that the distribution condition is satisfied, taking the event to be processed queued foremost in the event queue as the target event; and encapsulating the target event into a target execution task through the processing function corresponding to the target message bus, and submitting the target execution task to a target thread in a thread pool for execution.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative, for example, of the flowcharts and block diagrams in the figures that illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present invention may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A concurrent scheduling method, applied to a computer device, where a thread pool and a plurality of message buses are deployed on the computer device, where the thread pool includes a plurality of threads, and each message bus corresponds to a processing function and a scheme number of a service type, the method includes:
acquiring any event to be processed, and determining an event identifier of the event to be processed; the event identifier comprises a scheme number corresponding to a topic of the service type to which the event to be processed belongs;
queuing the event to be processed to an event queue of a target message bus corresponding to the scheme number according to the event identifier;
when it is determined that the distribution condition is satisfied, taking the event to be processed queued foremost in the event queue as a target event;
and packaging the target event into a target execution task through a processing function corresponding to the target message bus, and submitting the target execution task to a target thread in the thread pool for execution.
2. The concurrent scheduling method of claim 1, further comprising:
for each message bus, calculating a target thread index based on a preset thread index calculation rule according to a scheme number and the total number of threads corresponding to the message bus, and taking a thread corresponding to the target thread index as a dedicated thread of the message bus;
the step of submitting the target execution task to a target thread in the thread pool for execution includes:
and submitting the target execution task to a dedicated thread of the target message bus for execution.
3. The concurrent scheduling method of claim 2, wherein the thread index calculation rule comprises:
Index=executors[Math.abs(groupId.hashCode())%executors.length] (I)
wherein Index characterizes the dedicated thread selected from the thread array, groupId.hashCode() characterizes the hash value of the scheme number, and executors.length characterizes the total number of threads in the thread pool.
4. The concurrent scheduling method according to claim 1, wherein the step of determining that the distribution condition is satisfied comprises:
judging whether the event to be processed last distributed from the event queue has finished executing, and if so, determining that the distribution condition is satisfied.
5. A concurrent scheduling method according to any one of claims 1 to 3, wherein the computer device has an event broker deployed thereon;
before the step of acquiring any event to be processed and determining the event identification of the event to be processed, the method further comprises:
registering the topic on the event broker by using the scheme number corresponding to the topic of each service type, and constructing a message bus uniquely corresponding to each scheme number;
registering the processing function of each service type on the corresponding message bus in a function annotation manner, so as to configure a corresponding processor for each message bus.
6. The concurrent scheduling method of claim 5, further comprising:
acquiring and analyzing an event expansion instruction to obtain a scheme number to be expanded and a processing function to be expanded;
and performing topic registration on the event broker according to the scheme number to be expanded, constructing a message bus corresponding to the scheme number to be expanded, and registering the processing function to be expanded on the corresponding message bus.
7. A concurrent scheduling method according to any one of claims 1 to 3, wherein the step of encapsulating the target event as a target execution task comprises:
adopting a processing function corresponding to the target message bus to package the processing logic of the target event into a target execution task; wherein the target execution task includes a plurality of ordered execution operations.
8. The concurrent scheduling device is characterized by being applied to computer equipment, wherein a thread pool and a plurality of message buses are deployed on the computer equipment, the thread pool comprises a plurality of threads, each message bus corresponds to a processing function and a scheme number of a service type, and the concurrent scheduling device comprises a preprocessing module, a first distribution module and a second distribution module;
the preprocessing module is configured to acquire any event to be processed and determine an event identifier of the event to be processed; the event identifier comprises a scheme number corresponding to a topic of the service type to which the event to be processed belongs;
the first distributing module is configured to queue the event to be processed into an event queue of a target message bus corresponding to the scheme number according to the event identifier;
the second distributing module is configured to take the event to be processed queued foremost in the event queue as a target event when it is determined that the distribution condition is satisfied;
the second distributing module is further configured to encapsulate the target event into a target execution task through a processing function corresponding to the target message bus, and submit the target execution task to a target thread in the thread pool for execution.
9. A computer device comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the concurrent scheduling method of any of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the concurrent scheduling method according to any one of claims 1 to 7.
CN202310234006.4A 2023-03-09 2023-03-09 Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium Pending CN116302420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310234006.4A CN116302420A (en) 2023-03-09 2023-03-09 Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310234006.4A CN116302420A (en) 2023-03-09 2023-03-09 Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116302420A true CN116302420A (en) 2023-06-23

Family

ID=86818113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310234006.4A Pending CN116302420A (en) 2023-03-09 2023-03-09 Concurrent scheduling method, concurrent scheduling device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116302420A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117971439A (en) * 2024-03-29 2024-05-03 山东云海国创云计算装备产业创新中心有限公司 Task processing method, system, equipment and computer readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination