CN113485800B - Automatic dispatch method, system, equipment and storage medium based on central node - Google Patents


Info

Publication number
CN113485800B
CN113485800B (application CN202110697832.3A)
Authority
CN
China
Prior art keywords
task
template
queue
tasks
user
Prior art date
Legal status
Active
Application number
CN202110697832.3A
Other languages
Chinese (zh)
Other versions
CN113485800A (en)
Inventor
曹海涛
孙啸寅
何欣远
何先华
杜佳
付晟
刘海东
顾军
Current Assignee
Huatai Securities Co ltd
Original Assignee
Huatai Securities Co ltd
Priority date
Filing date
Publication date
Application filed by Huatai Securities Co ltd
Priority to CN202110697832.3A
Publication of CN113485800A
Application granted
Publication of CN113485800B
Status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Abstract

The invention discloses an automatic dispatch method, system, equipment and storage medium based on a central node, wherein the method comprises the following steps: selecting a master node from the cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic in the master node. The primary task scheduling logic acquires tasks, calculates each task's score under every task template, matches each task template with a secondary task-template cache queue, matches tasks to task templates based on the score results, and distributes the scored tasks into the secondary task-template cache queues. The secondary task scheduling logic acquires online users, judges whether each online user is idle, and allocates tasks to idle online users.

Description

Automatic dispatch method, system, equipment and storage medium based on central node
Technical Field
The invention relates to the technical field of automatic task allocation, in particular to a general automatic dispatch method, system, equipment and storage medium based on a central node.
Background
A common approach to task allocation is the following: the background maintains a task pool storing tasks; users actively poll for tasks, and the background computes in real time, according to certain rules, whether to assign a task to the polling user. This user-initiated polling mode is vividly called "order grabbing". Because every user continuously requests the background system in real time, and each request triggers a real-time computation over that user's task information, the load on the background system grows with the number of users, making this allocation mode unsuitable for large-scale task allocation.
As described above, the order-grabbing mode has users poll for tasks in real time, which consumes background system resources; users keep polling even when the system has no tasks, wasting considerable resources, a problem that is especially acute in large-scale allocation scenarios. Moreover, because order grabbing is initiated by each user individually, it hinders unified scheduling and allocation of tasks, so optimal allocation of resources cannot be achieved.
Dispatch mainly covers the following scenarios, illustrated here with securities-industry business auditing as an example:
1. The volume of business-application tasks (for example, account-opening business) is mismatched with the processing capacity of background auditors — typically, there are more tasks than the capacity can absorb.
2. Different businesses require different auditors: for example, growth-enterprise-board business must be audited by personnel with growth-enterprise-board auditing experience, so when tasks are allocated they should be routed to suitable personnel.
3. To maximize processing efficiency, when tasks are plentiful they should be allocated to idle auditors, so that tasks are handled promptly.
According to the search, Chinese patent publication No. CN111459666A, published on July 28, 2020, discloses a task dispatch method, device, task execution system and server. The method first obtains the to-be-dispatched tasks of a target tenant, where the target tenant corresponds to a first dispatch node; each dispatch node manages its corresponding target tenants, and the to-be-dispatched tasks are placed in a preset to-be-dispatched queue so that they are dispatched to task executors by the first dispatch node or by dispatch nodes other than the first. However, that approach only places each tenant's to-be-dispatched tasks into a queue and disperses the tasks of multiple tenants across dispatch nodes for separate management; it realizes only per-node management of to-be-dispatched tasks and does not consider the allocation relationship between the to-be-dispatched tasks and the task-processing side — that is, matching securities-industry business to auditors and reasonably dispatching the tasks to those auditors.
Chinese patent publication No. CN111785346A, published on October 16, 2020, discloses a prescription-order dispatch method, system, device and storage medium. The method, in response to receiving a prescription order from an end user, obtains busyness information for several pharmacist groups, the pharmacists in each group reviewing the prescription orders in the corresponding queue; determines, from the busyness information, the pharmacist group that will review the order; transmits the order to that group's queue; and sends the group's review result back to the terminal. By comprehensively considering the processing conditions of the pharmacists in a group, the processing capacity of an individual pharmacist does not excessively affect the group's overall capacity, so prescription orders are processed in time. However, that approach only matches the received order to a pharmacist group by busyness and then to a pharmacist within that group; it does not consider a task-allocation mechanism based on task templates that computes the matching degree between tasks and templates via task rules, reasonably partitions the to-be-dispatched tasks of the whole task pool, and allocates them to securities-industry business auditors for processing.
According to the search, huang Wei, pang Lin, cao Bin and Jiao Runhai published in 2014 in China of the power grid technology disclose a distributed parallel computing platform of a power distribution network based on data-level task decomposition, and the distributed parallel computing platform of the power distribution network based on the data-level task decomposition is constructed for realizing real-time analysis and computation of a large-scale power distribution network. And combining the operation structure and equipment configuration of the power distribution network, taking the feeder line of the power distribution network as an analysis unit, and decomposing the calculation task of the power distribution network by adopting a data-level parallel calculation mode. And 4 subsystems of the configuration management module, the instance, the execution end and the client are respectively used for realizing functions of task generation, task decomposition, task dispatch, subtask calculation and the like, so as to form a distributed parallel computing platform frame. The message middleware ZeroMQ technology is introduced, and the combination of different types of sockets is adopted to realize the N-N efficient communication inside the distributed system and the data interaction with an external system. In order to verify the practicability and parallel computing performance of the platform, the distributed parallel computing of the urban power distribution network global state estimation of certain city in Shandong province is realized on the platform, and when the power distribution network nodes reach a certain scale, the distributed parallel computing by adopting the platform has obvious speed advantages. 
However, the distribution network may contain feeder lines with larger node counts; in parallel computation, such feeders determine the overall computation time, and further blocking (partitioning) is required to reduce it — yet blocking inevitably increases task-decomposition time, and hence overall task-processing time. How to handle feeders of larger node scale thus remains the next problem to be solved for distribution-network parallel computation based on data-level task decomposition.
Therefore, reasonable allocation and efficient processing of tasks remain urgent problems in the art. The central-node-based automatic dispatch method disclosed by the invention manages tasks and users uniformly and allocates the best-matched tasks to idle users through a scheduling algorithm, effectively solving the above technical problems.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an automatic dispatch method, system, equipment and storage medium based on a central node. The central-node-based automatic dispatch method avoids the unbalanced task allocation and system-resource consumption caused by users' active polling: when the system has no tasks, no resources are spent on allocation; when tasks are plentiful, a scheduling algorithm allocates suitable tasks to idle users; and when the task volume exceeds the users' processing capacity, the excess tasks queue up to await system allocation.
To achieve the above object, in one aspect, the present application provides an automatic dispatch method based on a central node, including:
selecting a master node from the cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic in the master node; the primary task scheduling logic acquires tasks, calculates each task's score under every task template, matches each task template with a secondary task-template cache queue, matches tasks to task templates based on the score results, and distributes the scored tasks into the secondary task-template cache queues; the secondary task scheduling logic acquires online users, judges whether each online user is idle, and allocates tasks to idle online users.
Preferably, the primary task scheduling logic further comprises: cyclically acquiring tasks from the first-level task cache queue at fixed time intervals; calculating each task's score under every task template; storing tasks whose score under a template is non-zero into the secondary task-template cache queue matched to that template; and, when storing tasks into a secondary task-template cache queue, sorting them by their scores under the matched task template.
Further, tasks are cyclically acquired from the first-level task cache queue at a fixed time interval, which may be set to 1 s, 2 s, 3 s, and so on; specifically, the interval can be adjusted according to the total number of tasks in the first-level task cache queue.
Further, when tasks are cyclically acquired from the first-level task cache queue, the number acquired at a time does not exceed a set value, which may be 50, 100, 150, and so on; specifically, this batch size can be adjusted according to the total number of tasks in the first-level task cache queue.
Further, the acquired task is processed, and task details are acquired.
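The interval-and-batch fetch described above can be sketched as follows; the batch cap of 100 is one of the example values from the text, and every function and variable name here is hypothetical:

```python
import collections

def fetch_batch(primary_queue, batch_size=100):
    """Pop up to batch_size tasks from the front of the primary task cache queue."""
    batch = []
    while primary_queue and len(batch) < batch_size:
        batch.append(primary_queue.popleft())
    return batch

# One scheduling cycle: 250 queued tasks, fetch at most 100 of them.
primary_queue = collections.deque(f"task-{i}" for i in range(250))
batch = fetch_batch(primary_queue, batch_size=100)
# len(batch) == 100; the remaining 150 tasks stay queued for the next cycle
```

In a real scheduler this fetch would run inside a timer loop (e.g. every 1–3 s, per the text), with both the interval and the cap tuned to the queue depth.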
Further, regarding the calculation of a task's score under each task template: a task template contains several task rules, each task rule contains several rule items, and every task rule and every rule item carries an integer score and a weight.
Further, the task's score under a task template is computed by weighting: each rule's score is combined with its rule weight, and each rule item's score is combined with its rule-item weight (rule score × rule weight; rule-item score × rule-item weight).
Further, the higher a task's score under a task template, the higher the matching degree between the task and that template; a score of 0 under a template indicates that the task does not match it. Accordingly, tasks whose score under a template is non-zero are stored into the secondary task-template cache queue matched to that template.
Further, the ordering rule by score under the matched task template is: compare the tasks' scores first — the higher the score, the earlier the position; if scores are equal, sort by the tasks' receive times, with earlier receive times placed first.
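One hedged reading of the scoring and ordering rules above — a template score as rule-weighted sums of item-weighted rule-item scores, non-zero meaning a match, and ties broken by earlier receive time — can be sketched with entirely hypothetical structures:

```python
def task_score(task, template):
    """Score a task against a template: for each rule, sum
    item_score * item_weight over matching rule items, then
    weight that rule score by the rule's own weight."""
    total = 0
    for rule in template["rules"]:
        rule_score = sum(item["score"] * item["weight"]
                         for item in rule["items"] if item["match"](task))
        total += rule["weight"] * rule_score
    return total

def enqueue_sorted(queue, task, score):
    """Keep the queue ordered: highest score first, ties broken
    by earlier receive time (tasks carry a 'received' timestamp)."""
    queue.append((score, task))
    queue.sort(key=lambda st: (-st[0], st[1]["received"]))

# Hypothetical template: one rule (weight 2) with one rule item (score 3, weight 1).
template = {"rules": [
    {"weight": 2, "items": [
        {"score": 3, "weight": 1, "match": lambda t: t["type"] == "gem_board"},
    ]},
]}
q = []
a = {"type": "gem_board", "received": 10}
b = {"type": "gem_board", "received": 5}
enqueue_sorted(q, a, task_score(a, template))
enqueue_sorted(q, b, task_score(b, template))
# both score 2 * (3 * 1) = 6, so b leads because it was received earlier
```

A task scoring 0 would simply not be enqueued for that template, matching the non-zero rule in the text.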
Further, the tasks obtained from the first-level task cache queue each time are stored into a template queue of a default template.
Further, after the primary task scheduling logic completes, the tasks acquired from the primary task cache queue in that round are deleted from the primary scheduling queue.
Preferably, the information of the online users is stored in an online user queue, the online user queue sorts the online users according to the idle time of the online users, and the online users who enter the idle state first are ranked in front.
Further, the information of the online user includes, but is not limited to, any one or more of information such as online time of the user, scoring of experience of the user on processing of each type of service, average time of the user on processing of each type of service, evaluation of completion condition of the user on processing of each type of service, scoring of service feedback of the user, and age of the user.
Further, the idle time of the online user is determined according to the time when the online user completes all tasks in the task queue of the user.
Preferably, the online user queue determines whether the user is online based on a heartbeat mechanism, and the user which is not online is removed from the online user queue.
Further, whether the user is online is determined based on a heartbeat mechanism using one of a piezoelectric sensor, a piezoresistive sensor, or an optoelectronic sensor.
Further, when the user is judged to be offline, the unprocessed tasks of the user can be recovered and reassigned to other users for processing.
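A minimal sketch of how such a heartbeat-maintained online user queue might look — the 30-second timeout and all names here are assumptions, not taken from the patent:

```python
HEARTBEAT_TIMEOUT = 30.0  # seconds without a heartbeat => offline (hypothetical value)

class OnlineUserQueue:
    def __init__(self):
        self.last_beat = {}   # user -> timestamp of last heartbeat
        self.idle_since = {}  # user -> time the user entered the idle state

    def heartbeat(self, user, now):
        self.last_beat[user] = now
        self.idle_since.setdefault(user, now)

    def prune_offline(self, now):
        """Remove users whose heartbeat is stale; return them so their
        unfinished tasks can be reclaimed and reassigned to other users."""
        gone = [u for u, t in self.last_beat.items()
                if now - t > HEARTBEAT_TIMEOUT]
        for u in gone:
            del self.last_beat[u]
            del self.idle_since[u]
        return gone

    def idle_order(self):
        """Users sorted so the longest-idle user is dispatched to first."""
        return sorted(self.idle_since, key=self.idle_since.get)

oq = OnlineUserQueue()
oq.heartbeat("alice", now=0.0)
oq.heartbeat("bob", now=20.0)
offline = oq.prune_offline(now=45.0)
# alice (last beat 45 s ago) is pruned; bob (25 s ago) stays online
```

Pruned users' in-flight tasks would then be pushed back for reallocation, as the text describes.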
Preferably, the secondary task scheduling logic further comprises: cyclically acquiring users from the online user queue; checking whether a task being processed exists in the online user's user task queue — if no such task exists, judging the online user idle; searching whether a task template matched to the idle online user exists and, if so, writing a task from the secondary task-template cache queue matched to that template into the user task queue; and deleting that task from the secondary task-template cache queue matched to the template.
Further, online users are cyclically acquired from the online user queue, with the number acquired at a time not exceeding a set value, which may be 50, 100, 150, and so on; specifically, this number can be adjusted according to the number of tasks the primary task scheduling logic cyclically acquires from the primary task cache queue.
Further, checking whether the task being processed exists in a user task queue of the online user, if so, judging that the online user is busy, and deleting the online user from the secondary scheduling queue.
Further, search whether a task template matched to the idle online user exists; if not, write a task from the default template into the user task queue.
Further, the status of a task written into the user task queue is set to: to-be-processed.
Further, after task allocation is successful, deleting the task from the secondary task template cache queue corresponding to the task template, and deleting the task from the default template.
Further, for tasks assigned to users but left unprocessed for a long time, even beyond the system threshold, the system reclaims the tasks into the first-level task cache queue.
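The secondary-scheduling steps above — skip busy users, assign from the matched template queue, fall back to the default template — can be sketched as follows. This simplified version pops from a single queue rather than also deleting the task from the default template, and every name is hypothetical:

```python
def dispatch_round(online_users, user_tasks, user_template,
                   template_queues, default_queue):
    """One pass of the secondary scheduler: give each idle online user
    the head task of its matched template queue, falling back to the
    system default template when no matched template (or task) exists."""
    for user in online_users:
        if user_tasks.get(user):      # an in-progress task => user is busy
            continue
        queue = template_queues.get(user_template.get(user), default_queue)
        source = queue if queue else default_queue  # matched queue may be empty
        if source:
            task = source.pop(0)      # highest-priority task sits at the front
            user_tasks.setdefault(user, []).append(task)

user_tasks = {"alice": [], "bob": ["t-busy"]}
dispatch_round(
    online_users=["alice", "bob"],
    user_tasks=user_tasks,
    user_template={"alice": "gem_board"},
    template_queues={"gem_board": ["t1", "t2"]},
    default_queue=["t9"],
)
# alice (idle, matched template) receives "t1"; bob (busy) is skipped
```

A production version would also mark the written task as to-be-processed and remove it from the default template, per the surrounding paragraphs.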
Preferably, the relationship between users and task templates includes: each user is associated with exactly one task template, realizing one-way (one-to-one) matching from user to template; each task template may be associated with multiple users, realizing one-to-many matching from template to users; and all users are associated with a default template prefabricated by the system, ensuring that every user has a corresponding task template.
Preferably, the relationship between task templates and tasks includes: each task template is associated with multiple tasks, realizing one-to-many matching from template to tasks; each task may belong to multiple task templates, realizing many-to-many matching between tasks and templates; and all tasks are contained in the default template prefabricated by the system, ensuring that every task has a corresponding task template.
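The association rules above can be illustrated with a toy data model — one template per user, many users per template, every task present in the default template; all names are hypothetical:

```python
DEFAULT = "default"  # the system's prefabricated default template

# Each user maps to exactly one template; one template serves many users.
user_template = {"alice": "gem_board", "bob": "gem_board", "carol": DEFAULT}

# A task may match several templates, and every task is in the default template.
task_templates = {
    "t1": {"gem_board", DEFAULT},
    "t2": {DEFAULT},
}

def users_of(template):
    """All users whose (single) associated template is the given one."""
    return sorted(u for u, t in user_template.items() if t == template)

# The invariant from the text: no task is ever without a template.
assert all(DEFAULT in tpls for tpls in task_templates.values())
```

With this shape, a task with no specialized match still reaches users via the default template, which is exactly the fallback the secondary scheduler relies on.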
In another aspect, the application provides an automatic dispatch system based on a central node, which comprises a central node subsystem and a user subsystem, wherein the central node subsystem comprises a task scheduler, a cache queue and a task template pool.
The task scheduler comprises a primary task scheduler and a secondary task scheduler, and the primary task scheduler circularly acquires tasks from the primary task cache queue at fixed time intervals; calculating the score of the task in each task template; storing tasks with scores of not 0 in the task template into the secondary task template cache queue matched with the task template; when the tasks are stored in the secondary task template cache queue, sorting according to the scores in the task templates matched with the secondary task template cache queue.
The secondary task scheduler cyclically acquires users from the online user queue; checks whether a task being processed exists in the user's task queue — if no such task exists, the user is judged idle; searches whether a task template matched to the idle online user exists and, if so, writes a task from the secondary task-template cache queue matched to that template into the user task queue, then deletes the task from that secondary task-template cache queue. The cache queues comprise a first-level task cache queue and several second-level task-template cache queues: the first-level task cache queue uniformly receives the pending tasks that clients submit to the background, and the second-level task-template cache queues store the tasks from the first-level task cache queue in groups.
The task template pool is used for storing the task templates, and each task template is matched with the secondary task template cache queue.
The user subsystem comprises the online user queue and the user task queue, wherein the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
In yet another aspect, the present application provides an automatic dispatch device based on a central node, including a memory storing a computer program and a processor implementing the steps of the method described above when executing the computer program. The central-node-based automatic dispatch device is specifically a computer device, which may be a server, and comprises a processor, a memory, an interface and a storage medium connected through a bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a storage medium and an internal memory; the internal memory provides an environment for execution of the computer program. The interface of the computer device is used for connecting with external terminals.
In yet another aspect, the present application provides a central-node-based automatic dispatch storage medium storing computer instructions which, when run on a computer, cause the computer to perform the steps of the method described above. The storage medium is specifically a readable storage medium, which stores an operating system, computer instructions and a database; when the instructions are executed on a computer, they cause a processor to implement the task dispatch method described above. The database is used for storing the system data and business data of the main and standby systems.
Compared with the prior art, the invention has the beneficial effects that:
(1) Compared with the traditional order-grabbing mode, automatic dispatch allocates resources rationally and reduces the background resource waste caused by order grabbing. Automatic dispatch task scheduling is based on the central node, while the other nodes provide external interaction capability; when the central node fails, a new central node is automatically elected from the cluster to execute task scheduling, ensuring the overall robustness of the system.
(2) Hierarchical storage of tasks: the first-level task cache queue stores all original task information, and the second-level task-template queues store the information after task classification. A two-level task scheduling mechanism is established: primary task scheduling distributes tasks to the task-template queues, and secondary task scheduling distributes tasks to the user task queues. Tasks and users are thereby pre-matched, reducing the computation pressure of real-time matching and avoiding the system pressure caused by a large number of users grabbing orders simultaneously at peak traffic.
(3) Defining a task template for associating a user with a task; each task template defines a task template queue for storing task information; a task priority evaluation mechanism in the task template, wherein the higher the score is, the higher the representative priority is; tasks are ordered according to the priority, and tasks with higher priority are arranged in front, so that a user can process tasks with higher priority preferentially.
(4) Based on a task allocation mechanism of the task template, calculating the matching degree of the task and the template through a task rule, so that the whole task pool can be reasonably divided; user responsibilities are divided through the task template, the most suitable task is distributed to the user, and meanwhile, the most suitable user is ensured to process one task, so that the task processing efficiency is improved.
(5) An online user queue is established, and users' online state is maintained through a heartbeat mechanism: not receiving a heartbeat for a long time indicates the offline state, and such users are deleted from the online user queue, ensuring the accuracy of task allocation. Task allocation targets idle online users, and the queue is ordered by the time at which each user became idle; users who entered the idle state first — that is, users idle longer — are allocated tasks preferentially, guaranteeing fairness of task allocation.
(6) When tasks are few, every user is guaranteed to be allocated tasks; when tasks are plentiful, users with faster processing speed are allocated more tasks, ensuring the rationality of allocation. Tasks allocated to users but left unprocessed beyond the system threshold are reclaimed into the first-level task cache queue for reallocation, so that tasks are processed in time.
Drawings
FIG. 1 is a schematic diagram of logic execution according to an embodiment of the present invention;
FIG. 2 is a block diagram of the primary task scheduling logic and the secondary task scheduling logic according to an embodiment of the present invention;
FIG. 3 is a flow diagram of primary task scheduling logic according to an embodiment of the present invention;
FIG. 4 is a flow diagram of secondary task scheduling logic according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the relationship of an online user and a task template according to an embodiment of the invention;
FIG. 6 is a diagram of a task template and task relationship according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made in detail and with reference to the accompanying drawings, wherein it is apparent that the embodiments described are only some, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
In the description of the present invention, it should be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings are merely for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be either fixedly connected, detachably connected, or integrally connected, for example; can be mechanically connected, electrically connected or can be communicated with each other; can be directly connected or indirectly connected through an intermediate medium, and can be communicated with the inside of two elements or the interaction relationship of the two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In the present invention, unless expressly stated or limited otherwise, a first feature "above" or "below" a second feature may include both the first and second features being in direct contact, as well as the first and second features not being in direct contact but being in contact with each other through additional features therebetween. Moreover, a first feature being "above," "over" and "on" a second feature includes the first feature being directly above and obliquely above the second feature, or simply indicating that the first feature is higher in level than the second feature. The first feature being "under", "below" and "beneath" the second feature includes the first feature being directly under and obliquely below the second feature, or simply means that the first feature is less level than the second feature.
FIG. 1 is a schematic diagram illustrating logic execution according to an embodiment of the present invention. ZooKeeper is responsible for distributed coordination: it elects a master node from the cluster nodes, and the scheduling logic executes inside that master node. Task scheduling is based on this central node, while the other nodes provide external interaction capability; when the central node fails, a new central node is automatically elected from the cluster to execute task scheduling, ensuring the overall robustness of the system.
Based on the above, the embodiment of the invention provides an automatic dispatch method, system, equipment and storage medium based on a central node; the technology can be applied to task-allocation scenarios with a large service scale. To facilitate understanding of the embodiments of the present invention, the automatic dispatch method based on a central node disclosed in the embodiments is first described in detail. As shown in fig. 2, the method includes the following specific steps:
selecting a main node from cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic in the main node; the primary task scheduling logic acquires tasks, calculates each task's score in each task template, matches each task template with a secondary task template cache queue, matches tasks with task templates based on the score results, and distributes the scored tasks into the secondary task template cache queues; the secondary task scheduling logic acquires online users, judges whether each online user is idle, and assigns tasks to the idle online users.
As shown in fig. 3, the primary task scheduling logic further comprises:
In step S111, the primary task scheduling logic cyclically acquires tasks from the primary task cache queue at a fixed time interval.
Specifically, the first-level task scheduling logic is used for acquiring tasks in a first-level task cache queue, and the first-level task cache queue is used for uniformly receiving tasks to be processed submitted to the background by clients.
Specifically, the first-level task cache queue sorts the received tasks to be processed submitted to the background by the clients according to the received time sequence.
Specifically, the first-level task cache queue is used for storing all original task information and uniformly storing the tasks to be audited in the securities brokerage business.
Specifically, at a fixed time interval, tasks are cyclically acquired from the first-level task cache queue; the fixed time interval may be set to 1 s, 2 s, 3 s, and so on, and may be adjusted according to the total number of tasks in the first-level task cache queue.
Specifically, the duration of task acquisition from the primary task cache queue at regular time intervals may also be adjusted according to the total amount of tasks in the primary task cache queue.
Specifically, tasks are cyclically acquired from the first-level task cache queue, and the number of tasks acquired at one time does not exceed a set value; the set value may be 50, 100, 150, and so on, and may be adjusted according to the total number of tasks in the first-level task cache queue.
Specifically, the number of the loops of the tasks acquired from the first-level task cache queue is adjusted according to the total number of the tasks in the first-level task cache queue.
Specifically, the primary task scheduling logic places the acquired tasks into a primary scheduling queue and processes them to obtain task details.
In step S112, the primary task scheduling logic calculates the scores of the tasks acquired from the primary task cache queues in the respective task templates.
Specifically, the score of a task in each task template is calculated as follows: a task template comprises a plurality of task rules, each task rule comprises a plurality of rule items, and each task rule and each rule item has an integer score and a weight.
Specifically, the score of a task rule is the sum, over its rule items, of rule item score × rule item weight, and the score of the task in the task template is the sum, over the task rules, of rule score × rule weight.
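As an illustration of this two-level weighted sum, the following minimal Java sketch computes a task's score in one template. The class and record names and the example numbers are illustrative assumptions, not taken from the patent:

```java
import java.util.List;

// Minimal sketch of the two-level weighted score described above.
// Class, record, and field names are illustrative, not from the patent.
public class TemplateScorer {
    record RuleItem(int score, int weight) {}
    record TaskRule(int weight, List<RuleItem> items) {}

    // A rule's score is the weighted sum of its rule items;
    // the task's score in a template is the weighted sum of its rules.
    static int scoreInTemplate(List<TaskRule> rules) {
        int total = 0;
        for (TaskRule rule : rules) {
            int ruleScore = 0;
            for (RuleItem item : rule.items()) {
                ruleScore += item.score() * item.weight();
            }
            total += ruleScore * rule.weight();
        }
        return total; // 0 means the task does not match this template
    }

    public static void main(String[] args) {
        List<TaskRule> rules = List.of(
            new TaskRule(2, List.of(new RuleItem(3, 1), new RuleItem(1, 2))), // rule score 5, weighted 10
            new TaskRule(1, List.of(new RuleItem(4, 1))));                    // rule score 4, weighted 4
        System.out.println(scoreInTemplate(rules)); // prints 14
    }
}
```

A result of 0 would mark the task as unmatched for that template, consistent with the dispatch rule described in the embodiment.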
Specifically, each task template has a corresponding secondary task template cache queue, and each task template corresponds to a default task template queue.
In step S113, the primary task scheduling logic matches a task with each task template based on the result of the score of the task in the task template, and stores the task with the score of not 0 in the task template in the secondary task template buffer queue matched with the task template.
Specifically, the higher the score of a task in a task template, the higher the matching degree between the task and the task template; a score of 0 indicates that the task does not match the task template. Tasks whose score in a task template is not 0 are stored into the secondary task template cache queue matched with that task template.
Specifically, the same task to be processed can correspond to a plurality of different task templates, namely the same task to be processed is stored in different secondary task template cache queues.
In step S114, when the primary task scheduling logic stores a task in the secondary task template buffer queue, the task scheduling logic orders the tasks according to scores in the task templates matched in the secondary task template buffer queue.
Specifically, the score ordering rule in the task template matched with the secondary task template cache queue is as follows: tasks are first compared by their scores in the task template, with higher scores ranked earlier; if the scores are the same, tasks are ordered by receiving time, with earlier-received tasks ranked first.
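The ordering rule just described can be expressed as a single comparator. This Java sketch (names illustrative) ranks higher template scores first and breaks ties by earlier receiving time:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative ordering for a secondary task template cache queue:
// higher template score first; equal scores fall back to earlier receive time.
public class TemplateQueueOrder {
    record QueuedTask(String id, int score, long receivedAtMillis) {}

    static final Comparator<QueuedTask> ORDER =
        Comparator.comparingInt(QueuedTask::score).reversed()
                  .thenComparingLong(QueuedTask::receivedAtMillis);

    public static void main(String[] args) {
        List<QueuedTask> queue = new ArrayList<>(List.of(
            new QueuedTask("t1", 10, 2000),
            new QueuedTask("t2", 25, 3000),
            new QueuedTask("t3", 10, 1000)));
        queue.sort(ORDER);
        // t2 first (highest score), then t3 before t1 (same score, earlier receive time)
        queue.forEach(t -> System.out.println(t.id()));
    }
}
```

In the described Java implementation this ordering is what a Redis Zset provides natively when the Zset score encodes the template score with a time-based tiebreak.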
Specifically, the tasks obtained from the first-level task cache queue each time are stored into a template queue of a default template at the same time.
Specifically, after the primary task scheduling logic is completed, the tasks acquired from the primary task cache queue each time are deleted from the primary scheduling queue.
In one embodiment, the information of the online users is stored in an online user queue, and the online user queue sorts the online users according to the idle time of the online users, and the online users who enter the idle state first are ranked in front.
Specifically, the information of the online user includes, but is not limited to, any one or more of information such as online time of the user, scoring of experience of the user on processing various types of business, average time of the user on processing various types of business, evaluation of completion condition of the user on processing various types of business, scoring of service feedback of the user, and age of the user.
Specifically, the idle time of the online user is determined according to the time when the online user completes all tasks in the task queue of the user.
Specifically, the online user queue judges whether the user is online or not based on a heartbeat mechanism, and the user which is not online is removed from the online user queue.
Specifically, the online state of the user is calculated based on the heartbeat mechanism; the absence of a heartbeat within a preset time indicates that the user is offline, where the preset time may be set to 2 s, 3 s, and so on.
Specifically, the heartbeat mechanism is implemented in software: the client periodically reports a heartbeat, and the server records the latest heartbeat time of each user in order to judge whether the user is online.
Specifically, when the user is judged to be offline, the unprocessed tasks of the user are recovered and are reassigned to other users for processing.
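A minimal sketch of this heartbeat-timeout check and task reclamation follows. The 3-second timeout, the class name, and the queue shapes are illustrative; the embodiment leaves the timeout configurable:

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;

// Sketch of heartbeat-based liveness plus task reclamation: a user with no
// heartbeat within the timeout is treated as offline, and their unfinished
// tasks are returned for reassignment. Names and timeout are illustrative.
public class HeartbeatMonitor {
    static final long TIMEOUT_MILLIS = 3_000;

    static boolean isOnline(long lastHeartbeatMillis, long nowMillis) {
        return nowMillis - lastHeartbeatMillis <= TIMEOUT_MILLIS;
    }

    // Move every task of an offline user back into the reclaim queue.
    static void reclaimIfOffline(String user, long lastBeat, long now,
                                 Map<String, Queue<String>> userTasks,
                                 Queue<String> reclaimQueue) {
        if (!isOnline(lastBeat, now)) {
            Queue<String> tasks = userTasks.remove(user);
            if (tasks != null) reclaimQueue.addAll(tasks);
        }
    }

    public static void main(String[] args) {
        Map<String, Queue<String>> userTasks = new java.util.HashMap<>();
        userTasks.put("alice", new ArrayDeque<>(java.util.List.of("task-1", "task-2")));
        Queue<String> reclaim = new ArrayDeque<>();
        reclaimIfOffline("alice", 0, 10_000, userTasks, reclaim); // 10 s of silence -> offline
        System.out.println(reclaim.size()); // prints 2
    }
}
```

In the described system the last-heartbeat timestamps would live in the Redis Hash mentioned in the implementation section, rather than in a local map.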
As shown in fig. 4, the secondary task scheduling logic further includes:
in step S121, the secondary task scheduling logic loops to acquire a user from an online user queue.
Specifically, online users are cyclically acquired from the online user queue, and the number of online users acquired at one time does not exceed a set value; the set value may be 50, 100, 150, and so on, and may be adjusted according to the number of tasks that the primary task scheduling logic cyclically acquires from the primary task cache queue.
In step S122, the secondary task scheduling logic checks whether there is a task being processed in the user task queue of the online user, and if not, determines that the online user is idle.
Specifically, whether the task being processed exists in the user task queue of the online user is checked, if so, the online user is judged to be busy, and the online user is deleted from the secondary scheduling queue.
In step S123, the secondary task scheduling logic searches whether the task template matching with the idle online user exists, if yes, writes the task in the secondary task template buffer queue matching with the task template into the user task queue.
Specifically, each task corresponds to a plurality of task templates, namely, each task corresponds to a plurality of secondary task template cache queues, but each task can only write into the user task queue once.
Specifically, after a task in the secondary task template cache queue matched with a task template is written into the user task queue, the task is immediately deleted from the secondary task template cache queues matched with the other task templates.
Specifically, the number of tasks written into the user task queue from the secondary task template cache queue matched with the task template each time may be 1, 2, 3, and so on; the number written in a single pass may be adjusted according to the total number of tasks in the first-level task cache queue.
Specifically, the number of tasks written into the user task queue in the second-level task template buffer queue matched with the task template is not more than 5 each time, and the setting of the number of the user task queues written into one time can be adjusted according to the total number of the tasks in the first-level task buffer queue.
Specifically, whether the task template matched with the idle online user exists or not is searched, if not, the task in the default template queue matched with the default template is written into the user task queue.
Specifically, each task corresponds to the default template, i.e. each task corresponds to the default template queue, but each task can only write to the user task queue once.
Specifically, after a task in the default template queue is written into the user task queue, the task is immediately deleted from the secondary task template cache queues matched with the other task templates.
Specifically, after a task in the secondary task template cache queue or the default template queue is written into the user task queue, the state of the task is set to: being processed.
In step S124, the secondary task scheduling logic deletes the task from the secondary task template cache queue that matches the task template.
Specifically, after task allocation is successful, the task is deleted from the secondary task template cache queue corresponding to the task template, and meanwhile, the task is also deleted from the default template.
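The write-once and purge-everywhere behavior of steps S123 and S124 can be sketched as follows. The map-based queues and names are illustrative in-memory stand-ins for the Redis-backed queues of the described system:

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the write-once rule: a task may sit in several secondary template
// queues, but once written to a user's task queue it is removed from every
// template queue, including the default template queue. Names are illustrative.
public class TaskAssigner {
    // templateQueues maps template id -> insertion-ordered set of task ids
    static String assignOne(String templateId,
                            Map<String, Set<String>> templateQueues,
                            Set<String> userTaskQueue) {
        Set<String> queue = templateQueues.get(templateId);
        if (queue == null || queue.isEmpty()) return null;
        String taskId = queue.iterator().next();                 // head of the matched queue
        userTaskQueue.add(taskId);                               // write once to the user
        templateQueues.values().forEach(q -> q.remove(taskId));  // purge from every queue
        return taskId;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> queues = new java.util.HashMap<>();
        queues.put("templateA", new LinkedHashSet<>(Set.of("task-9")));
        queues.put("templateB", new LinkedHashSet<>(Set.of("task-9")));
        queues.put("default",   new LinkedHashSet<>(Set.of("task-9")));
        Set<String> userQueue = new LinkedHashSet<>();
        assignOne("templateA", queues, userQueue);
        System.out.println(queues.get("templateB").size()); // prints 0 — purged everywhere
        System.out.println(userQueue.contains("task-9"));   // prints true
    }
}
```

Purging from every queue at assignment time is what guarantees that a task stored in multiple secondary template queues is never dispatched twice.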
Specifically, after the user completes execution of a task in the user task queue, the task state in the user task queue is updated to: completed.
Specifically, for tasks that have been assigned to users but remain unprocessed for a long period, even exceeding the system threshold, the system reclaims the tasks into the first-level task cache queue.
In one embodiment, the relationship between users and task templates is as shown in fig. 5, and includes: each user is associated with one task template, realizing one-directional matching from user to task template; each task template is associated with a plurality of users, realizing one-to-many matching from task template to users; and all users are associated with a default template prefabricated by the system, so that every user has a corresponding task template.
In one embodiment, the relationship between task templates and tasks is as shown in fig. 6, and includes: each task template is associated with a plurality of tasks, realizing one-to-many matching from task template to tasks; each task may belong to a plurality of task templates, realizing many-to-many matching between tasks and task templates; and all tasks are contained in a default template prefabricated by the system, so that every task has a corresponding task template.
In one embodiment, an automatic dispatch system based on a central node is provided, the system comprising a central node subsystem and a user subsystem, the central node subsystem comprising a task scheduler, a cache queue and a task template pool.
The task scheduler comprises a primary task scheduler and a secondary task scheduler, and the primary task scheduler circularly acquires tasks from the primary task cache queue at fixed time intervals; calculating the score of the task in each task template; storing tasks with scores of not 0 in the task template into the secondary task template cache queue matched with the task template; when the tasks are stored in the secondary task template cache queue, sorting according to the scores in the task templates matched with the secondary task template cache queue.
The secondary task scheduler cyclically acquires users from the online user queue; checks whether a task being processed exists in the user task queue of the user, and if no task being processed exists, judges that the user is idle; searches whether a task template matched with the idle online user exists, and if so, writes a task in the secondary task template cache queue matched with the task template into the user task queue; and deletes the task from the secondary task template cache queue matched with the task template.
The buffer queues comprise a first-level task buffer queue and a plurality of second-level task buffer queues, the first-level task buffer queue is used for uniformly receiving tasks to be processed submitted to the background by clients, and the second-level task template buffer queue is used for storing the tasks in the first-level task buffer queue in groups.
The task template pool is used for storing the task templates, and each task template is matched with the secondary task template cache queue.
The user subsystem comprises the online user queue and the user task queue, wherein the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
In one embodiment, an automatic dispatch device based on a central node is provided, comprising a memory storing a computer program and a processor implementing the steps of the method as described above when executing the computer program. The automatic dispatch equipment based on the central node is specifically a computer equipment, which can be a server, and the internal structure diagram of the computer equipment can be shown in fig. 7. The device includes a processor, a memory, an interface, and a storage medium connected by a bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a storage medium and an internal memory. The internal memory provides an environment for the execution of the computer program. The interface of the computer device is used for connecting with an external terminal.
In one embodiment, a central node based automated dispatch storage medium is provided that stores computer instructions that, when executed on a computer, cause the computer to perform the steps of the methods described above. The storage medium is in particular a readable storage medium. The readable storage medium stores an operating system, computer instructions and a database, which when executed on a computer cause a processor to implement the task dispatch method described above. The database is used for storing system data and business data of the main system and the standby system.
In one embodiment, based on the automatic dispatch method based on a central node defined in this embodiment, an automatic dispatch system is developed in the Java language to solve the business transaction auditing problem in the securities industry. Specifically:
1. The List structure of Redis is used to implement the first-level task cache queue OneLevelTaskQueue.
2. The Zset (sorted set) structure of Redis is used to implement the secondary task template queues; each queue stores task IDs ordered by the tasks' scores in the task template, falling back to task receiving time when the scores are equal.
3. MySQL is used to implement the online user queue, maintaining user states such as online, offline, and idle.
4. MySQL is used to implement the user task queues, maintaining task states such as pending and completed.
5. The Hash structure of Redis is used to store users' heartbeat information.
6. The primary task scheduler OneLevelTaskDispatch is implemented with the scheduling thread pool inside the JDK.
7. The secondary task scheduler is likewise implemented with the scheduling thread pool inside the JDK.
8. The cluster nodes coordinate through ZooKeeper, and primary task scheduling and secondary task scheduling execute in the master node.
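Items 6 and 7 above rely on the JDK's scheduling thread pool. The following hedged sketch shows how the two scheduler loops might be driven; the poll bodies are placeholders for the Redis/MySQL-backed logic, and the 1-second interval is illustrative:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of driving the two schedulers with the JDK's scheduling thread pool,
// as in items 6-7 above. The loop bodies are placeholders; in the described
// system they would read from the Redis/MySQL-backed queues.
public class SchedulerLoop {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        CountDownLatch ticks = new CountDownLatch(2);
        // Primary scheduler: fetch tasks from the first-level cache queue each interval.
        pool.scheduleAtFixedRate(() -> { /* pollLevelOneQueue(); */ ticks.countDown(); },
                                 0, 1, TimeUnit.SECONDS);
        // Secondary scheduler: fetch idle online users each interval.
        pool.scheduleAtFixedRate(() -> { /* pollOnlineUsers(); */ ticks.countDown(); },
                                 0, 1, TimeUnit.SECONDS);
        ticks.await(5, TimeUnit.SECONDS); // wait until both loops have run once
        pool.shutdownNow();
        System.out.println("both scheduling loops ticked");
    }
}
```

`scheduleAtFixedRate` matches the "cyclically acquire at a fixed time interval" behavior of steps S111 and S121; adjusting the interval per the queue backlog, as the embodiment suggests, would require rescheduling with a new period.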
In the description of the present specification, reference to the terms "one embodiment," "certain embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced with equivalents; these modifications or substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.

Claims (9)

1. An automatic dispatch method based on a central node, comprising:
selecting a main node from cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic in the main node;
the first-level task scheduling logic is used for acquiring tasks, calculating the score of each task in each task template, matching each task template with a second-level task template cache queue, matching tasks with task templates based on the score results, and distributing the scored tasks into the second-level task template cache queues;
the secondary task scheduling logic is used for acquiring the online user, judging whether the online user is idle, and distributing tasks for the idle online user;
The primary task scheduling logic further comprises,
circularly acquiring tasks from the first-level task cache queue at fixed time intervals;
calculating the score of the task in each task template;
storing tasks with scores of not 0 in the task template into the secondary task template cache queue matched with the task template;
when the tasks are stored in the secondary task template cache queue, sorting according to scores in the task templates matched with the secondary task template cache queue;
the calculation method for calculating the scores of the tasks in the task templates comprises the steps that the task templates comprise a plurality of task rules, each task rule comprises a plurality of rule items, each task rule has an integer score and weight, and each rule item has an integer score and weight;
the calculation method for calculating the scores of the tasks in the task templates comprises: the score of a task rule is the sum, over its rule items, of rule item score × rule item weight, and the score of the task in the task template is the sum, over the task rules, of rule score × rule weight;
the higher the score of a task in the task template, the higher the matching degree between the representative task and the task template; if the score of the task in the task template is 0, indicating that the task is not matched with the task template, storing the task with the score of not 0 in the task template into the secondary task template cache queue matched with the task template;
The score ordering rules in the task templates matched with the secondary task template cache queue are as follows: tasks are first compared by their scores in the task template, with higher scores ranked earlier; if the scores are the same, tasks are ordered by receiving time, with earlier-received tasks ranked first.
2. The automatic dispatch method based on central node of claim 1, wherein the information of the online users is stored in an online user queue, the online user queue sorts the online users according to the idle time of the online users, and the online users who first enter the idle state are ranked in front.
3. The automatic dispatch method based on a central node of claim 2, wherein the online user queue determines whether a user is online based on a heartbeat mechanism, and removes users that are not online from the online user queue.
4. The automatic dispatch method based on a central node of claim 2, wherein the secondary task scheduling logic further comprises,
circularly acquiring online users from an online user queue;
checking whether a task being processed exists in a user task queue of the online user, and if no task being processed exists in the user task queue of the online user, judging that the online user is idle;
Searching whether the task template matched with the idle online user exists or not, if so, writing the task in the secondary task template cache queue matched with the task template into the user task queue;
deleting the task from the secondary task template cache queue matched with the task template.
5. The automatic dispatch method based on a central node of claim 1, wherein the relationship between a user and the task templates comprises: each user is associated with one of the task templates; each task template is associated with a plurality of users; and all users are associated with a default template prefabricated by the system.
6. The automatic dispatch method based on a central node of claim 1, wherein the relationship between the task templates and tasks comprises: each task template is associated with a plurality of tasks; each task may belong to a plurality of task templates; and all tasks are contained in a default template prefabricated by the system.
7. An automatic dispatch system based on a central node for implementing the method of any one of claims 1 to 6, characterized in that it comprises a central node subsystem and a user subsystem,
The central node subsystem comprises a task scheduler, a buffer queue and a task template pool,
the task scheduler includes a primary task scheduler and a secondary task scheduler,
the primary task scheduler circularly acquires tasks from the primary task cache queue at fixed time intervals; calculating the score of the task in each task template; storing tasks with scores of not 0 in the task template into the secondary task template cache queue matched with the task template; when the tasks are stored in the secondary task template cache queue, sorting according to scores in the task templates matched with the secondary task template cache queue;
the secondary task scheduler is used for cyclically acquiring online users from the online user queue; checking whether a task being processed exists in a user task queue of the online user, and if no task being processed exists in the user task queue of the online user, judging that the online user is idle; searching whether the task template matched with the idle online user exists, and if so, writing the task in the secondary task template cache queue matched with the task template into the user task queue; and deleting the task from the secondary task template cache queue matched with the task template;
The buffer queue comprises a first-level task buffer queue and a plurality of second-level task buffer queues, wherein the first-level task buffer queue is used for uniformly receiving tasks to be processed submitted to a background by clients, and the second-level task template buffer queue is used for storing the tasks in the first-level task buffer queue in groups;
the task template pool is used for storing the task templates, and each task template is matched with the secondary task template cache queue;
the user subsystem comprises the online user queue and the user task queue, wherein the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
8. Automatic dispatch device based on a central node, comprising a memory and a processor, said memory storing a computer program, characterized in that said processor implements the steps of the method according to any one of claims 1 to 6 when executing said computer program.
9. A central node based automated dispatch storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-6.
CN202110697832.3A 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node Active CN113485800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110697832.3A CN113485800B (en) 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node


Publications (2)

Publication Number Publication Date
CN113485800A CN113485800A (en) 2021-10-08
CN113485800B true CN113485800B (en) 2024-01-23

Family

ID=77935912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110697832.3A Active CN113485800B (en) 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node

Country Status (1)

Country Link
CN (1) CN113485800B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064005A (en) * 2018-07-27 2018-12-21 北京中关村科金技术有限公司 A kind of loan examination & approval task justice auto form delivering system of task based access control priority
CN110704186A (en) * 2019-09-25 2020-01-17 国家计算机网络与信息安全管理中心 Computing resource allocation method and device based on hybrid distribution architecture and storage medium
CN111324427A (en) * 2018-12-14 2020-06-23 深圳云天励飞技术有限公司 Task scheduling method and device based on DSP
CN111813513A (en) * 2020-06-24 2020-10-23 中国平安人寿保险股份有限公司 Real-time task scheduling method, device, equipment and medium based on distribution
CN112486648A (en) * 2020-11-30 2021-03-12 北京百度网讯科技有限公司 Task scheduling method, device, system, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9250953B2 (en) * 2013-11-12 2016-02-02 Oxide Interactive Llc Organizing tasks by a hierarchical task scheduler for execution in a multi-threaded processing system


Also Published As

Publication number Publication date
CN113485800A (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN112162865B (en) Scheduling method and device of server and server
CN107038069B (en) Dynamic label matching DLMS scheduling method under Hadoop platform
CN109471705B (en) Task scheduling method, device and system, and computer device
CN109120715A (en) Dynamic load balancing method under a kind of cloud environment
CN107832153B (en) Hadoop cluster resource self-adaptive allocation method
US8276146B2 (en) Grid non-deterministic job scheduling
CN105892996A (en) Assembly line work method and apparatus for batch data processing
CN104601664B (en) A kind of control system of cloud computing platform resource management and scheduling virtual machine
CN106330987A (en) Dynamic load balancing method
WO2011029253A1 (en) Web load balancing method, grid server and system thereof
CN106407244A (en) Multi-database-based data query method, system and apparatus
CN104298550A (en) Hadoop-oriented dynamic scheduling method
CN107515784A (en) A kind of method and apparatus of computing resource in a distributed system
CN113010576A (en) Method, device, equipment and storage medium for capacity evaluation of cloud computing system
CN109412838A (en) Server cluster host node selection method based on hash calculating and Performance Evaluation
CN108563495A (en) The cloud resource queue graded dispatching system and method for data center's total management system
Mahato et al. Balanced task allocation in the on‐demand computing‐based transaction processing system using social spider optimization
CN116302568A (en) Computing power resource scheduling method and system, scheduling center and data center
CN116467076A (en) Multi-cluster scheduling method and system based on cluster available resources
CN114327811A (en) Task scheduling method, device and equipment and readable storage medium
CN115237568A (en) Mixed weight task scheduling method and system for edge heterogeneous equipment
CN104156505A (en) Hadoop cluster job scheduling method and device on basis of user behavior analysis
CN105550025B (en) Distributed infrastructure services (IaaS) dispatching method and system
CN113485800B (en) Automatic dispatch method, system, equipment and storage medium based on central node
CN111782627A (en) Task and data cooperative scheduling method for wide-area high-performance computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant