CN113485800A - Automatic dispatching method, system, equipment and storage medium based on central node - Google Patents

Automatic dispatching method, system, equipment and storage medium based on central node

Info

Publication number
CN113485800A
Authority
CN
China
Prior art keywords
task
tasks
queue
template
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110697832.3A
Other languages
Chinese (zh)
Other versions
CN113485800B (en)
Inventor
曹海涛
孙啸寅
何欣远
何先华
杜佳
付晟
刘海东
顾军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huatai Securities Co ltd
Original Assignee
Huatai Securities Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huatai Securities Co ltd filed Critical Huatai Securities Co ltd
Priority to CN202110697832.3A priority Critical patent/CN113485800B/en
Publication of CN113485800A publication Critical patent/CN113485800A/en
Application granted granted Critical
Publication of CN113485800B publication Critical patent/CN113485800B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system

Abstract

The invention discloses an automatic dispatching method, system, device, and storage medium based on a central node, wherein the method comprises: electing a master node from the cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic on the master node. The primary task scheduling logic fetches tasks, calculates each task's score in every task template, matches each task template to a secondary task-template cache queue, matches tasks to task templates on the basis of the score results, and distributes the scored tasks into the corresponding secondary task-template cache queues. The secondary task scheduling logic fetches the online users, judges whether each online user is idle, and assigns tasks to the idle online users.

Description

Automatic dispatching method, system, equipment and storage medium based on central node
Technical Field
The invention relates to the technical field of automatic task allocation, and in particular to a universal automatic order-dispatching method, system, device, and storage medium based on a central node.
Background
The conventional approach to task allocation is as follows: the back end maintains a task pool that stores tasks; each user actively initiates task polling, and the back end computes in real time, according to certain rules, whether to assign a task to that user. This user-driven polling style of allocation is figuratively called "order grabbing". Every user keeps requesting the back-end system in real time, and the back end must recompute that user's task information on every request; as the number of users grows, the load on the back-end system becomes very heavy, so this style of task allocation is unsuitable for large-scale task-distribution scenarios.
As described above, in the order-grabbing style of allocation, users poll for tasks in real time, which heavily consumes back-end resources; users keep polling even when the back end holds no tasks, wasting resources considerably. Moreover, because order grabbing is initiated by each user individually, it hinders the unified scheduling and allocation of tasks, so an optimal allocation of resources cannot be achieved.
Order dispatching mainly covers the following scenarios, described here using business auditing in the brokerage industry as an example:
1. The volume of business applications, such as account-opening requests, does not match the processing capacity of the back-end auditors; this generally refers to a scenario with many tasks but insufficient capacity to process them.
2. Different businesses must be audited by different auditors; for example, growth-enterprise-board (ChiNext) business must be audited by someone with ChiNext auditing experience, so at allocation time a task must be routed to a suitable person.
3. To maximize processing efficiency, when there are many tasks they should be distributed to idle auditors so that they can be handled in time.
A search found Chinese patent publication CN111459666A, published on July 28, 2020, which discloses a task dispatching method, apparatus, task execution system, and server. The method first obtains the tasks to be dispatched for a target tenant, where the target tenant corresponds to a first dispatch node. By dispersing the dispatching load across the dispatch nodes, the pressure on each node remains small even when the task volume is large, which keeps the dispatching system running stably and improves dispatching efficiency. However, that method merely places each tenant's pending tasks into a queue and distributes the queues of multiple tenants across dispatch nodes for separate management; it realizes only the management of pending tasks at each node and does not address the allocation between pending tasks and the task-processing side, that is, matching brokerage-industry business to auditors and dispatching pending tasks to those auditors reasonably.
A search also found Chinese patent publication CN111785346A, published on October 16, 2020, which discloses a prescription-order dispatching method, system, apparatus, and storage medium. The method receives a prescription order from an end user, obtains busyness information for multiple pharmacist groups, determines from that information which pharmacist group will review the order, sends the order to the queue corresponding to that group, has the group's pharmacists review the orders in their queue, and sends the review results back to the terminal. Because the workloads of the pharmacists in a group are considered collectively, an individual pharmacist's capacity does not unduly affect the group's overall capacity, so prescription orders can be processed in time. However, that method only matches incoming orders to a pharmacist group by busyness and then to pharmacists within the group; it does not consider a template-based task-allocation mechanism that computes the degree of match between tasks and templates through task rules, and therefore cannot reasonably partition the whole task pool and distribute the pending tasks to brokerage-industry business auditors for processing.
A search further found a Chinese journal article published in Power Grid Technology in April 2014 by Huangwei, Ponlin, Caabin and Johnun sea, which discloses a distributed parallel computing platform for power distribution networks based on data-level task decomposition, built to enable real-time analysis and computation of large-scale distribution networks. Taking distribution-network feeders as the unit of analysis, and combining the network's operating structure and equipment configuration, the computing task is decomposed in a data-level parallel fashion. Four subsystems are configured (a management module, instances, execution ends, and clients), which respectively implement task generation, task decomposition, task distribution, and subtask computation, forming the framework of the distributed parallel computing platform. The ZeroMQ message-middleware technology is introduced, and combinations of different socket types provide efficient N-to-N communication within the distributed system and data interaction with external systems. To verify the platform's practicality and parallel-computing performance, distributed parallel computation of global state estimation for the urban distribution network of a city in Shandong Province was realized on the platform; once the network reaches a certain node scale, the platform's speed advantage in distributed parallel computation is obvious.
However, the distribution network may contain feeders with a large number of nodes; in parallel computation such a feeder determines the overall computing time and must be further partitioned into blocks to reduce it, and the blocking step inevitably increases task-decomposition time and therefore total processing time. How to handle feeders with larger node counts is thus the next problem to be solved for data-level-task-decomposition-based parallel computation of distribution networks.
Therefore, reasonable allocation and efficient processing of tasks remain pressing problems in this field. An automatic dispatching method based on a central node, which manages tasks and users in a unified way and assigns the best-matched task to an idle user through a scheduling algorithm, can effectively solve these technical problems.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides an automatic dispatching method, system, device, and storage medium based on a central node. The central-node-based automatic dispatching method avoids both unbalanced task allocation and the system-resource consumption caused by active user polling: when the system holds no tasks, no resources are spent on allocation; when there are many tasks, a scheduling algorithm assigns suitable tasks to idle users; and when the task volume exceeds the users' processing capacity, the excess tasks wait in a queue for the system to allocate them.
In order to achieve the above object, in one aspect, the present application provides an automatic order dispatching method based on a central node, including:
Electing a master node from the cluster nodes, and executing primary task scheduling logic and secondary task scheduling logic on the master node. The primary task scheduling logic fetches tasks, calculates each task's score in every task template, matches each task template to a secondary task-template cache queue, matches tasks to task templates on the basis of the score results, and distributes the scored tasks into the corresponding secondary task-template cache queues. The secondary task scheduling logic fetches the online users, judges whether each online user is idle, and assigns tasks to the idle online users.
Preferably, the primary task scheduling logic further includes: cyclically fetching tasks from the primary task cache queue at a fixed time interval; calculating each task's score in every task template; storing each task whose score in a task template is non-zero into the secondary task-template cache queue matched to that template; and, when storing tasks into a secondary task-template cache queue, sorting them by their scores in the template matched to that queue.
Further, tasks are fetched cyclically from the primary task cache queue at a fixed time interval. The interval may be set to 1 s, 2 s, 3 s, and so on; specifically, it can be tuned according to the total number of tasks in the primary task cache queue.
Further, when tasks are fetched cyclically from the primary task cache queue, the number fetched per cycle does not exceed a set value, which may be 50, 100, 150, and so on; specifically, the per-cycle fetch count can be tuned according to the total number of tasks in the primary task cache queue.
And further, processing the acquired task and acquiring the task details.
Further, for score calculation, each task template contains several task rules and each task rule contains several rule items; every task rule has an integer score and a weight, and every rule item likewise has an integer score and a weight.
Further, a task's score in a task template is calculated as: task score = Σ(rule score × rule weight), where each rule score = Σ(rule-item score × rule-item weight).
Further, the higher a task's score in a task template, the better the task matches that template; a score of 0 means the task does not match the template. Tasks whose score in a template is non-zero are stored into the secondary task-template cache queue matched to that template.
Further, the ordering rule within the template matched to a secondary task-template cache queue is: compare the tasks' scores in the template first, with higher scores ranked earlier; for equal scores, rank by the tasks' receive times, with earlier receive times ranked earlier.
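The scoring and ordering rules above can be sketched as follows. This is a minimal illustration only; all field names (`score`, `weight`, `items`, `matches`, `received`) are assumptions for the sketch, not terms from the patent.

```python
def rule_score(rule):
    """A rule's effective score: weighted sum of its rule-item scores."""
    return sum(item["score"] * item["weight"] for item in rule["items"])

def template_score(task, template):
    """A task's score in a template: weighted sum of the scores of the
    rules the task satisfies; 0 means the task does not match the template."""
    total = 0
    for rule in template["rules"]:
        if rule["matches"](task):
            total += rule_score(rule) * rule["weight"]
    return total

def sort_template_queue(entries):
    """Order a template queue: higher score first, ties broken by
    earlier receive time."""
    return sorted(entries, key=lambda e: (-e["score"], e["received"]))
```

For example, a rule with items scored 10 (weight 1) and 5 (weight 2) has an effective score of 20; if that rule itself has weight 3 and matches a task, the task scores 60 in the template.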
Further, the tasks fetched from the primary task cache queue in each cycle are simultaneously stored into the template queue of the default template.
Further, after the primary task scheduling logic completes, the tasks fetched in that cycle are deleted from the primary task cache queue.
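Putting the preceding steps together, one cycle of the primary scheduling logic might look like the following sketch. The data shapes and names are assumptions, and the per-template scoring function is passed in rather than reproduced from the patent.

```python
BATCH_SIZE = 100  # upper bound on tasks fetched per cycle; the text suggests 50/100/150

def primary_schedule_once(primary_queue, templates, template_queues, default_queue):
    """One cycle of the primary scheduling logic (illustrative).
    `templates` maps template name -> scoring function (task -> int);
    a score of 0 means the task does not match that template."""
    batch = primary_queue[:BATCH_SIZE]
    for task in batch:
        for name, score_fn in templates.items():
            score = score_fn(task)
            if score != 0:
                queue = template_queues.setdefault(name, [])
                queue.append({"task": task, "score": score})
                # keep the template queue ordered: higher score first,
                # ties broken by earlier receive time
                queue.sort(key=lambda e: (-e["score"], e["task"]["received"]))
        default_queue.append(task)   # every fetched task also enters the default template queue
    del primary_queue[:len(batch)]   # dispatched tasks leave the primary queue
    return len(batch)
```

In a real deployment this cycle would run on a timer at the fixed interval discussed above; here it is a single pass for clarity.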
Preferably, the information of the online users is stored in an online user queue, the online user queue sorts the online users according to the idle time of the online users, and the online users who first enter the idle state are arranged in front.
Further, the online-user information includes, but is not limited to, any one or more of: the user's online time, the user's experience scores for handling each type of business, the user's average handling time per business type, completion evaluations of the user's handling of each business type, the user's service-feedback score, and the user's years of work experience.
Further, determining the idle time of the online user according to the time of the online user for completing all tasks in the task queue of the user.
Preferably, the online user queue determines whether the user is online based on a heartbeat mechanism, and removes the user who is not online from the online user queue.
Further, the heartbeat mechanism can be implemented by having each user's client periodically send a heartbeat (keep-alive) request to the central node; if no heartbeat is received from a user within a set timeout, the user is judged to be offline.
Further, when the user is judged to be offline, the tasks which are not processed by the user can be recycled and redistributed to other users for processing.
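A minimal sketch of such a heartbeat-maintained online-user queue follows, assuming a timeout-based offline judgment; the class name, the 30-second timeout, and the method names are illustrative assumptions.

```python
HEARTBEAT_TIMEOUT = 30.0  # seconds without a heartbeat before a user is treated as offline (assumed)

class OnlineUserQueue:
    """Sketch of the heartbeat-maintained online-user queue described above."""
    def __init__(self):
        self._last_seen = {}   # user id -> timestamp of last heartbeat
        self._idle_since = {}  # user id -> timestamp the user became idle

    def heartbeat(self, user, now):
        """Record a keep-alive from the user's client."""
        self._last_seen[user] = now
        self._idle_since.setdefault(user, now)

    def evict_offline(self, now):
        """Remove users whose heartbeat has timed out; in the full method their
        unfinished tasks would be reclaimed and reassigned to other users."""
        gone = [u for u, t in self._last_seen.items() if now - t > HEARTBEAT_TIMEOUT]
        for u in gone:
            del self._last_seen[u]
            del self._idle_since[u]
        return gone

    def idle_order(self):
        """Online users ordered by idle time: earliest-idle (longest waiting) first."""
        return sorted(self._idle_since, key=self._idle_since.get)
```

This ordering is what lets the secondary scheduler give tasks first to the users who have been idle longest, as described later.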
Preferably, the secondary task scheduling logic further comprises: cyclically fetching users from the online-user queue; checking whether the online user's task queue contains a task that is being processed and, if not, judging the online user to be idle; looking up whether a task template matched to the idle online user exists and, if so, writing a task from the secondary task-template cache queue matched to that template into the user's task queue; and deleting that task from the secondary task-template cache queue matched to the template.
Further, when online users are fetched cyclically from the online-user queue, the number fetched per cycle does not exceed a set value, which may be 50, 100, 150, and so on; specifically, the per-cycle fetch count can be tuned according to the number of tasks the primary task scheduling logic fetches per cycle from the primary task cache queue.
Further, if checking the online user's task queue shows a task being processed, the online user is judged to be busy and is removed from the secondary scheduling queue.
Further, if no task template matched to the idle online user exists, a task from the default template is written into the user's task queue.
Further, the state of a task written into the user's task queue is set to: to be processed.
Further, after a task is successfully assigned, it is deleted from the secondary task-template cache queue corresponding to its task template, and also deleted from the default template.
Further, for a task that has been assigned to a user but remains unprocessed for a long time, beyond the system threshold, the system reclaims the task into the primary task cache queue.
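The secondary scheduling steps above can be sketched as follows. Names and data shapes are illustrative assumptions; the cross-queue deletion and timeout reclamation described in the text are only noted in a comment.

```python
def secondary_schedule_once(online_users, user_templates, template_queues,
                            default_queue, user_task_queues):
    """One pass of the secondary scheduling logic (illustrative).
    `user_templates` maps a user to the name of their task template, or is
    missing for users with no matched template."""
    for user in online_users:
        queue = user_task_queues.setdefault(user, [])
        if any(t["state"] == "processing" for t in queue):
            continue                       # busy user: no new task this cycle
        tpl = user_templates.get(user)
        source = template_queues.get(tpl)  # queue matched to the user's template
        if not source:
            source = default_queue         # no matching template: use the default template
        if source:
            task = source.pop(0)           # front of the queue = highest priority
            queue.append({"task": task, "state": "pending"})  # "to be processed"
            # a full implementation would also delete the task from the other
            # queues that hold it, and later reclaim it into the primary task
            # cache queue if it stays unprocessed beyond the system threshold
```

Busy users are simply skipped here; in the described method they would also be removed from the secondary scheduling queue for that cycle.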
Preferably, the relationship between users and task templates comprises: each user is associated with exactly one task template; each task template may be associated with multiple users, giving a one-to-many match from template to users; and the system's prefabricated default template is associated with all users, so that every user has a corresponding task template.
Preferably, the relationship between task templates and tasks comprises: each task template may be associated with multiple tasks, and each task may belong to multiple task templates, giving a many-to-many match between templates and tasks; and the system's prefabricated default template contains all tasks, so that every task has a corresponding task template.
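The cardinalities just described can be illustrated with a small, entirely hypothetical data model (none of these names appear in the patent):

```python
# each user -> exactly one task template; one template may serve many users
user_template = {
    "auditor_a": "chinext_audit",
    "auditor_b": "chinext_audit",
    "auditor_c": "default",
}

# each template -> many tasks; a task may belong to several templates
# (many-to-many), and the default template holds every task
template_tasks = {
    "chinext_audit": ["task_1", "task_2"],
    "default": ["task_1", "task_2", "task_3"],
}
```

Because the default template covers all users and all tasks, every user and every task always has at least one template through which scheduling can proceed.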
On the other hand, the application provides an automatic dispatching system based on a central node, the system comprises a central node subsystem and a user subsystem, and the central node subsystem comprises a task scheduler, a cache queue and a task template pool.
The task scheduler comprises a primary task scheduler and a secondary task scheduler. The primary task scheduler cyclically fetches tasks from the primary task cache queue at a fixed time interval; calculates each task's score in every task template; stores each task whose score in a template is non-zero into the secondary task-template cache queue matched to that template; and, when storing tasks into a secondary task-template cache queue, sorts them by their scores in the template matched to that queue.
The secondary task scheduler cyclically fetches users from the online-user queue; checks whether the user's task queue contains a task being processed and, if not, judges the user to be idle; looks up whether a task template matched to the idle online user exists and, if so, writes a task from the secondary task-template cache queue matched to that template into the user's task queue; and deletes that task from the secondary task-template cache queue matched to the template. The cache queues comprise the primary task cache queue and several secondary task-template cache queues: the primary task cache queue uniformly receives the pending tasks submitted to the back end by clients, and the secondary task-template cache queues store the tasks of the primary task cache queue in groups.
The task template pool is used for storing the task templates, and each task template is matched with the second-level task template cache queue.
The user subsystem comprises the online user queue and the user task queue, the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
In yet another aspect, the present application provides a central-node-based automatic dispatch device comprising a memory and a processor; the memory stores a computer program, and the processor implements the steps of the above method when executing the program. The device is specifically a computer device, which may be a server, comprising a processor, a memory, an interface, and a storage medium connected through a bus. The processor of the computer device provides computing and control capability. The memory includes a storage medium and an internal memory; the internal memory provides the environment in which the computer program runs. The interface of the computer device connects to external terminals.
In yet another aspect, the present application provides a central-node-based automatic dispatch storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the steps of the above method. The storage medium is specifically a readable storage medium that stores an operating system, computer instructions, and a database; when the instructions run on a computer, the processor implements the task-distribution method described above. The database stores the system data and service data of the primary and standby systems.
Compared with the prior art, the invention has the beneficial effects that:
(1) Compared with the traditional order-grabbing mode, automatic dispatching allocates resources reasonably and reduces the back-end resource waste caused by order grabbing. Task scheduling runs on the central node while the other nodes provide external interaction capability; when the central node fails, a new central node is automatically elected from the cluster to take over task scheduling, ensuring the overall robustness of the system.
(2) Tasks are stored hierarchically: the primary task cache queue stores all original task information, and the secondary task-template queues store the tasks after classification. A two-level scheduling mechanism is established: primary scheduling distributes tasks into the task-template queues, and secondary scheduling distributes tasks into the user task queues. This pre-matches tasks with users, reduces the computational pressure of real-time matching, and avoids the system load caused by large numbers of users grabbing orders simultaneously at traffic peaks.
(3) Task templates are defined to associate users with tasks, and each template defines a template queue storing task information. Under the priority-evaluation mechanism within a template, a higher score means a higher priority; tasks are sorted by priority with higher-priority tasks placed first, so users process higher-priority tasks first.
(4) The template-based allocation mechanism computes the degree of match between a task and a template through task rules, so the whole task pool can be partitioned reasonably. Templates divide user responsibilities, assign each user the most suitable tasks, and ensure each task is handled by the most suitable user, improving processing efficiency.
(5) An online-user queue is established and users' online state is maintained through a heartbeat mechanism: if no heartbeat is received for a long time, the user is deemed offline and removed from the online-user list, ensuring the accuracy of task allocation. Tasks are allocated only to idle online users, and the queue is ordered by the time at which each user became idle; users who became idle earlier, and have therefore waited longer, receive tasks first, ensuring fairness of allocation.
(6) When tasks are few, every user is guaranteed to receive tasks; when tasks are many, users who process tasks faster receive more of them, ensuring the reasonableness of allocation. A task assigned to a user but left unprocessed beyond the system threshold is reclaimed and placed back into the primary task cache queue for reallocation, ensuring tasks are processed in time.
Drawings
FIG. 1 is a schematic diagram of a logic implementation according to an embodiment of the present invention;
FIG. 2 is a block diagram of the structure of primary and secondary task scheduling logic according to an embodiment of the present invention;
FIG. 3 is a flow diagram of primary task scheduling logic according to an embodiment of the present invention;
FIG. 4 is a flow diagram of secondary task scheduling logic according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating the relationship between an online user and a task template, according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a relationship between task templates and tasks according to an embodiment of the invention;
FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without any inventive step, are within the scope of the present invention.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like indicate orientations and positional relationships based on those shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be considered as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, "above" or "below" a first feature means that the first and second features are in direct contact, or that the first and second features are not in direct contact but are in contact with each other via another feature therebetween. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
FIG. 1 is a diagram illustrating the logic of one embodiment. ZooKeeper is responsible for distributed coordination: a master node is elected from the cluster nodes via ZooKeeper, and the scheduling logic executes in that master node. Task scheduling is thus based on a central node, while the other nodes provide external interaction capability; when the central node fails, a new central node is automatically elected from the cluster to continue task scheduling, which ensures the overall robustness of the system.
Based on this, the embodiment of the invention provides an automatic dispatching method, system, device and storage medium based on a central node, and the technology can be applied to a task distribution scene with a large service scale. To facilitate understanding of the embodiment of the present invention, first, an automatic dispatch method based on a central node disclosed in the embodiment of the present invention is described in detail, as shown in fig. 2, the method includes the following specific steps:
selecting a main node from the cluster nodes, and executing a primary task scheduling logic and a secondary task scheduling logic in the main node; the primary task scheduling logic is used for acquiring tasks, calculating scores of the tasks in each task template, matching each task template with a secondary task template cache queue, matching the tasks with each task template based on score results, and distributing the tasks with the score results to the secondary task template cache queue; and the secondary task scheduling logic is used for acquiring the online users, judging whether the online users are idle or not, and distributing tasks for the idle online users.
As shown in fig. 3, the primary task scheduling logic further comprises:
In step S111, the primary task scheduling logic circularly obtains tasks from the primary task cache queue at fixed time intervals.
Specifically, the primary task scheduling logic is configured to obtain tasks in a primary task cache queue, where the primary task cache queue is configured to uniformly receive to-be-processed tasks submitted to a background by a client.
Specifically, the first-level task buffer queue sorts the received to-be-processed tasks submitted to the background by the client according to the receiving time sequence.
Specifically, the primary task cache queue is used for storing all original task information, and uniformly stores the tasks to be audited in the securities brokerage industry.
Specifically, the tasks are cyclically acquired from the primary task cache queue at fixed time intervals, where the fixed time interval may be set to 1s, 2s, 3s, etc., and may be adjusted according to the total number of tasks in the primary task cache queue.
Specifically, the duration of circularly obtaining the task from the primary task buffer queue at a fixed time interval may also be adjusted according to the total amount of tasks in the primary task buffer queue.
Specifically, the tasks are cyclically acquired from the primary task cache queue, and the number of tasks acquired each time does not exceed a set value, which may be set to 50, 100, 150, etc.; this number may be adjusted according to the total number of tasks in the primary task cache queue.
Specifically, the number of acquisition cycles from the primary task cache queue is adjusted according to the total number of tasks in the primary task cache queue.
Specifically, the primary task scheduling logic forms a primary scheduling queue to process the acquired tasks and obtains the details of each task.
In step S112, the primary task scheduling logic calculates scores of the tasks obtained from the primary task buffer queue in the respective task templates.
Specifically, the method for calculating the score of the task in each task template comprises the steps that the task template comprises a plurality of task rules, each task rule comprises a plurality of rule items, each task rule has an integer score and weight, and each rule item has an integer score and weight.
Specifically, the score of the task in each task template is calculated from the rule scores and rule weights together with the rule item scores and rule item weights.
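The scoring step above can be sketched in Java (the implementation language named later in this embodiment). The exact combination formula is an assumption — the description only states that rule scores, rule weights, rule-item scores, and rule-item weights are combined — and all class and method names here are illustrative, not taken from the patent:

```java
import java.util.List;

// Illustrative sketch: a template holds rules, each rule holds rule items;
// every rule and rule item carries an integer score and a weight. A task's
// score in a template is assumed here to be the weighted sum
//   score = Σ(ruleScore × ruleWeight) + Σ(itemScore × itemWeight)
// over the rules and rule items the task matches; 0 means "no match".
public class TaskScorer {

    record RuleItem(int score, int weight, boolean matches) {}
    record Rule(int score, int weight, boolean matches, List<RuleItem> items) {}

    static int scoreInTemplate(List<Rule> templateRules) {
        int total = 0;
        for (Rule r : templateRules) {
            if (!r.matches()) continue;           // unmatched rule contributes nothing
            total += r.score() * r.weight();
            for (RuleItem it : r.items()) {
                if (it.matches()) total += it.score() * it.weight();
            }
        }
        return total;                              // 0 => task does not fit this template
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule(10, 2, true, List.of(new RuleItem(5, 1, true))),
            new Rule(8, 3, false, List.of()));     // second rule not matched
        System.out.println(scoreInTemplate(rules)); // 10*2 + 5*1 = 25
    }
}
```

A task scoring 0 in every template would then only reach users through the default template queue, consistent with the fallback described below.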
Specifically, each task template has a corresponding secondary task template cache queue; in addition, the default template prefabricated by the system has a corresponding default template queue.
In step S113, the primary task scheduling logic matches tasks with the task templates based on the scoring results of the tasks in the task templates, and stores the tasks with the scores of not 0 in the task templates into the secondary task template cache queue matched with the task templates.
Specifically, the higher the score of a task in a task template, the higher the matching degree between the task and that template; if the score of the task in the task template is 0, the task does not match the template. Tasks whose score in a task template is not 0 are stored into the secondary task template cache queue matched with that template.
Specifically, the same task to be processed may correspond to a plurality of different task templates, that is, the same task to be processed is stored in different second-level task template cache queues.
In step S114, when the primary task scheduling logic stores the tasks into the secondary task template cache queue, the tasks are sorted according to the scores in the task templates matching the secondary task template cache queue.
Specifically, the score ordering rule in the task template matched with the secondary task template cache queue is as follows: the scores of the tasks in the task template are compared first, and a higher score ranks earlier; tasks with the same score are ordered by receiving time, and an earlier receiving time ranks earlier.
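The ordering rule above — score descending, then receiving time ascending — can be expressed as a single Java comparator (class and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the secondary task template cache queue ordering:
// higher template score first; ties broken by earlier receiving time.
public class QueueOrder {

    record QueuedTask(String id, int templateScore, long receiveTimeMillis) {}

    static final Comparator<QueuedTask> QUEUE_ORDER =
        Comparator.comparingInt(QueuedTask::templateScore).reversed()
                  .thenComparingLong(QueuedTask::receiveTimeMillis);

    public static void main(String[] args) {
        List<QueuedTask> q = new ArrayList<>(List.of(
            new QueuedTask("a", 50, 2000),
            new QueuedTask("b", 80, 3000),
            new QueuedTask("c", 80, 1000)));
        q.sort(QUEUE_ORDER);
        // c (score 80, earlier) before b (score 80, later) before a (score 50)
        System.out.println(q.get(0).id() + q.get(1).id() + q.get(2).id()); // cba
    }
}
```

Note that `reversed()` applies only to the score key; the time tie-break stays ascending.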
Specifically, the tasks acquired from the first-level task cache queue each time are simultaneously stored in the template queue of the default template.
Specifically, after the primary task scheduling logic completes, the tasks acquired from the primary task cache queue in that round are deleted from the primary scheduling queue.
In one embodiment, the information of the online users is stored in an online user queue, the online user queue sorts the online users according to the idle time of the online users, and the online users who first enter the idle state are arranged in front.
Specifically, the information of the online user includes, but is not limited to, any one or any multiple of information of online time of the user, scores of experience of the user for processing various types of services, average time of the user for processing various types of services, completion evaluation of the user for processing various types of services, scores of user service feedback, age of the user, and the like.
Specifically, the idle time of the online user is determined according to the time of the online user for completing all tasks in the task queue of the user.
Specifically, the online user queue judges whether the user is online based on a heartbeat mechanism, and removes the user who is not online from the online user queue.
Specifically, the online status of the user is determined based on a heartbeat mechanism: if no heartbeat is received within a preset time, the user is considered offline. The preset time may be set to 2s, 3s, etc.
Specifically, the heartbeat mechanism for determining whether a user is online is implemented by the client periodically reporting a heartbeat to the background.
Specifically, when a user is judged to be offline, the tasks not yet processed by that user are reclaimed and redistributed to other users for processing.
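A minimal in-memory sketch of the heartbeat-based offline detection described above (class names, the map-based storage, and the eviction API are illustrative; the embodiment later stores heartbeat information in a Redis Hash):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative sketch: each online user's last heartbeat timestamp is
// recorded; a user with no heartbeat within the preset window is treated
// as offline and removed from the online user queue, so that their
// unfinished tasks can be reclaimed and reassigned.
public class HeartbeatMonitor {

    private final Map<String, Long> lastHeartbeat = new HashMap<>();
    private final long timeoutMillis;

    HeartbeatMonitor(long timeoutMillis) { this.timeoutMillis = timeoutMillis; }

    void beat(String userId, long nowMillis) { lastHeartbeat.put(userId, nowMillis); }

    /** Removes timed-out users and returns how many were dropped. */
    int evictOffline(long nowMillis) {
        int dropped = 0;
        for (Iterator<Map.Entry<String, Long>> it = lastHeartbeat.entrySet().iterator(); it.hasNext();) {
            if (nowMillis - it.next().getValue() > timeoutMillis) { it.remove(); dropped++; }
        }
        return dropped;
    }

    boolean isOnline(String userId) { return lastHeartbeat.containsKey(userId); }

    public static void main(String[] args) {
        HeartbeatMonitor m = new HeartbeatMonitor(3000);  // 3s preset time
        m.beat("u1", 0);
        m.beat("u2", 2500);
        System.out.println(m.evictOffline(4000));          // u1 timed out
    }
}
```

The eviction pass would run periodically on the master node; timestamps are passed in explicitly here to keep the sketch deterministic.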
As shown in fig. 4, the secondary task scheduling logic further comprises:
in step S121, the secondary task scheduling logic circularly obtains users from the online user queue.
Specifically, the online users are cyclically acquired from the online user queue, and the number of online users acquired each time does not exceed a set value, which may be set to 50, 100, 150, etc.; this number may be adjusted according to the number of tasks that the primary task scheduling logic cyclically acquires from the primary task cache queue.
In step S122, the secondary task scheduling logic checks whether there is a task being processed in the user task queue of the online user, and if not, determines that the online user is idle.
Specifically, whether a task which is being processed exists in a user task queue of the online user is checked, if yes, the online user is judged to be busy, and the online user is deleted from the secondary scheduling queue.
In step S123, the secondary task scheduling logic searches whether the task template matching an idle online user exists, and if yes, writes a task in the secondary task template cache queue matching the task template into the user task queue.
Specifically, each task corresponds to a plurality of task templates, that is, each task corresponds to a plurality of second-level task template cache queues, but each task can only be written into the user task queue once.
Specifically, after a task in the secondary task template cache queue matched with the task template is written into the user task queue, the copies of that task in the secondary task template cache queues matched with other task templates are immediately deleted.
Specifically, the number of tasks from the secondary task template cache queue matched with the task template that are written into the user task queue each time may be 1, 2, 3, etc., and this number may be adjusted according to the total number of tasks in the primary task cache queue.
Specifically, no more than 5 tasks from the secondary task template cache queue matched with the task template are written into the user task queue each time; this setting may likewise be adjusted according to the total number of tasks in the primary task cache queue.
Specifically, whether the task template matched with the idle online user exists is searched, and if not, the tasks in the default template queue matched with the default template are written into the user task queue.
Specifically, each task corresponds to the default template, that is, each task corresponds to the default template queue, but each task can only be written into the user task queue once.
Specifically, after a task in the default template queue is written into the user task queue, the copies of that task in the secondary task template cache queues matched with the other task templates are immediately deleted.
Specifically, the state of a task in the secondary task template cache queue or the default template queue, once it has been written into the user task queue, is: to be processed.
In step S124, the secondary task scheduling logic deletes the task from the secondary task template cache queue matching the task template.
Specifically, after the task is successfully allocated, the task is deleted from the secondary task template cache queue corresponding to the task template, and also deleted from the default template queue.
Specifically, after the user finishes executing a task in the user task queue, the task state in the user task queue is modified to: completed.
Specifically, for a task that has been assigned to a user but remains unprocessed for a long time, exceeding the system threshold, the system reclaims the task into the primary task cache queue.
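The once-only assignment rule of steps S121–S124 — a task may sit in several secondary template queues but must reach a user task queue exactly once — can be sketched in memory as follows (class names and the deque-based queues are illustrative; the actual embodiment uses Redis ZSets and MySQL):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative sketch: an "assigned" set records tasks already handed out;
// polling a copy of an already-assigned task from another template queue
// simply discards it, which models the cross-queue deletion described above.
public class SecondaryDispatch {

    private final Set<String> assigned = new HashSet<>();

    /** Pops the first not-yet-assigned task from a template queue, or null. */
    String takeFor(Deque<String> templateQueue) {
        while (!templateQueue.isEmpty()) {
            String taskId = templateQueue.poll();
            if (assigned.add(taskId)) return taskId;  // first queue to claim it wins
            // already written to a user task queue via another template: skip
        }
        return null;
    }

    public static void main(String[] args) {
        SecondaryDispatch d = new SecondaryDispatch();
        Deque<String> qA = new ArrayDeque<>(List.of("t1", "t2"));
        Deque<String> qB = new ArrayDeque<>(List.of("t1", "t3")); // t1 in both
        System.out.println(d.takeFor(qA)); // t1
        System.out.println(d.takeFor(qB)); // t3 (t1 already assigned)
    }
}
```

Reclaiming a timed-out task would amount to removing its id from `assigned` and pushing it back to the primary queue, so a later round can redistribute it.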
In one embodiment, the relationship between the user and the task template is shown in FIG. 5, and includes: each user is associated with one task template so as to realize one-way matching between the user and the task template; each task template is associated with a plurality of users so as to realize multidirectional matching between the task template and the users; and the default template prefabricated by the system is associated with all users, so that each user has a corresponding task template.
In one embodiment, the relationship between the task template and the task is shown in fig. 6, and includes: each task template is associated with a plurality of tasks so as to realize multi-directional matching between the task template and the tasks; each task belongs to a plurality of task templates so as to realize multi-directional matching between the task and the task template; and the default template prefabricated by the system comprises all tasks, so that each task has a corresponding task template.
In one embodiment, an automatic dispatch system based on a central node is provided, and comprises a central node subsystem and a user subsystem, wherein the central node subsystem comprises a task scheduler, a cache queue and a task template pool.
The task scheduler comprises a primary task scheduler and a secondary task scheduler, wherein the primary task scheduler circularly acquires tasks from the primary task cache queue at fixed time intervals; calculating the scores of the tasks in the task templates; storing the tasks with scores different from 0 in the task template into the second-level task template cache queue matched with the task template; and when the tasks are stored in the second-level task template cache queue, sorting the tasks according to scores in the task templates matched with the second-level task template cache queue.
The secondary task scheduler circularly acquires users from the online user queue; checking whether a task which is being processed exists in a user task queue of the user, and if not, judging that the user is idle; searching whether the task template matched with an idle online user exists, and if yes, writing the task in the secondary task template cache queue matched with the task template into the user task queue; and deleting the task from the secondary task template cache queue matched with the task template.
The cache queue comprises a first-level task cache queue and a plurality of second-level task cache queues, the first-level task cache queue is used for uniformly receiving tasks to be processed submitted to a background by a client, and the second-level task template cache queues are used for storing the tasks in the first-level task cache queues in a grouping mode.
The task template pool is used for storing the task templates, and each task template is matched with the second-level task template cache queue.
The user subsystem comprises the online user queue and the user task queue, the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
In one embodiment, there is provided a central node-based automatic dispatch device comprising a memory storing a computer program and a processor implementing the steps of the method as described above when the processor executes the computer program. The automatic dispatching device based on the central node is specifically a computer device, and the computer device may be a server, and the internal structure diagram of the computer device may be as shown in fig. 7. The apparatus includes a processor, a memory, an interface, and a storage medium connected by a bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a storage medium and an internal memory. The internal memory provides an environment for the running of the computer program. The interface of the computer device is used for connecting with an external terminal.
In one embodiment, a central node based automated dispatch storage medium is provided, the storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the steps of a method as described above. The storage medium is embodied as a readable storage medium. The readable storage medium stores an operating system, computer instructions and a database, the computer instructions, when executed on a computer, cause the processor to implement the task distribution method described above. The database is used for storing system data and service data of the main system and the standby system.
In an embodiment, based on the central-node-based automatic dispatching method defined above, an automatic dispatching system is developed in the Java language to solve the auditing of business handling in the securities industry. Specifically:
1. and realizing the oneLevelTaskQueue of the primary task cache queue by using the List of Redis.
2. And realizing a secondary task template queue by using the Zset of Redis, storing the ID of the task in the queue, sequencing the task according to the score of the task in the task template, and sequencing according to the receiving time of the task under the condition of the same score.
3. And realizing an online user queue by using the MySql, and maintaining the online, offline, idle and other states of the user.
4. And realizing a user task queue by using the MySql, and maintaining the states of waiting processing, processing completion and the like of the task.
5. The Hash structure of Redis is used to store the heartbeat information of the user.
6. And a scheduling thread pool in the JDK is used for realizing the oneLevelTaskDispatch of the primary task scheduler.
7. And a primary task scheduler SecondLevelTaskDispatch is realized by using a scheduling thread pool in the JDK. .
8. The cluster node selects a master based on the ZooKeeper and executes primary task scheduling and secondary task scheduling in the master node.
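A Redis ZSet sorts ascending by a single double score, while step 2 needs "template score descending, then receiving time ascending". One common way to realize this — an assumption here, not stated in the patent — is to fold both keys into one number:

```java
// Illustrative sketch: encode (templateScore, receiveTime) into one ZSet
// score such that Redis's ascending order yields the required queue order.
// TIME_SCALE must exceed any epoch-millisecond timestamp; the value is an
// assumption chosen to stay within double precision (2^53 ≈ 9e15).
public class ZsetScore {

    static final double TIME_SCALE = 1e13;

    /** Smaller encoded value = earlier in the queue. */
    static double encode(int templateScore, long receiveTimeMillis) {
        return -templateScore * TIME_SCALE + receiveTimeMillis;
    }

    public static void main(String[] args) {
        double hiEarly = encode(80, 1000);
        double hiLate  = encode(80, 3000);
        double lo      = encode(50, 500);
        System.out.println(hiEarly < hiLate && hiLate < lo); // true
    }
}
```

With this encoding, ZADD with `encode(score, receiveTime)` and a plain ZRANGE reproduce the ordering rule of the secondary task template cache queue.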
In the description herein, references to the description of the terms "one embodiment," "certain embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present invention.

Claims (10)

1. Automatic dispatching method based on central node, characterized by comprising:
selecting a main node from the cluster nodes, and executing a primary task scheduling logic and a secondary task scheduling logic in the main node;
the primary task scheduling logic is used for acquiring tasks, calculating scores of the tasks in each task template, matching each task template with a secondary task template cache queue, matching the tasks with each task template based on score results, and distributing the tasks with the score results to the secondary task template cache queue;
and the secondary task scheduling logic is used for acquiring the online users, judging whether the online users are idle or not, and distributing tasks for the idle online users.
2. The hub node-based automated dispatching method of claim 1, wherein the primary task scheduling logic further comprises,
circularly acquiring tasks from the first-level task cache queue at fixed time intervals;
calculating the scores of the tasks in the task templates;
storing the tasks with scores different from 0 in the task template into the second-level task template cache queue matched with the task template;
and when the tasks are stored in the second-level task template cache queue, sorting the tasks according to scores in the task templates matched with the second-level task template cache queue.
3. The method of claim 1, wherein the information of the online users is stored in an online user queue, the online user queue sorts the online users according to idle time of the online users, and the online users who first enter an idle state are ranked in front.
4. The hub node-based automated order dispatching method of claim 3, wherein the online user queue determines whether a user is online based on a heartbeat mechanism, and removes users that are not online from the online user queue.
5. The hub node-based automated dispatching method of claim 3, wherein the secondary task scheduling logic further comprises,
circularly acquiring online users from an online user queue;
checking whether a task which is being processed exists in a user task queue of the online user, and if not, judging that the online user is idle;
searching whether the task template matched with an idle online user exists, and if yes, writing the task in the secondary task template cache queue matched with the task template into the user task queue;
and deleting the task from the secondary task template cache queue matched with the task template.
6. The center node-based automated dispatch method of claim 1, wherein a relationship between a user and the task template comprises: associating one of the task templates with each user; each task template is associated with a plurality of users; and, default templates pre-made by the system are associated with all users.
7. The method of claim 1, wherein the relationship between the task template and the task comprises: each task template is associated with a plurality of tasks; each task belongs to a plurality of the task templates; and the default template prefabricated by the system comprises all tasks.
8. Automatic central node-based dispatch system for carrying out the method of any one of claims 1 to 7, characterized in that it comprises a central node subsystem and a user subsystem,
the central node subsystem comprises a task scheduler, a buffer queue and a task template pool,
the task scheduler comprises a primary task scheduler and a secondary task scheduler,
the primary task scheduler circularly acquires tasks from the primary task cache queue at fixed time intervals; calculating the scores of the tasks in the task templates; storing the tasks with scores different from 0 in the task template into the second-level task template cache queue matched with the task template; when the tasks are stored in the second-level task template cache queue, the tasks are sorted according to scores in the task templates matched with the second-level task template cache queue;
the secondary task scheduler circularly acquires online users from the online user queue; checking whether a task which is being processed exists in a user task queue of the online user, and if not, judging that the online user is idle; searching whether the task template matched with an idle online user exists, and if yes, writing the task in the secondary task template cache queue matched with the task template into the user task queue; deleting the task from the second-level task template cache queue matched with the task template;
the cache queue comprises a first-level task cache queue and a plurality of second-level task cache queues, the first-level task cache queue is used for uniformly receiving tasks to be processed submitted to a background by a client, and the second-level task template cache queue is used for storing the tasks in the first-level task cache queue in a grouping manner;
the task template pool is used for storing the task templates, and each task template is matched with the second-level task template cache queue;
the user subsystem comprises the online user queue and the user task queue, the online user queue is used for storing online user information, and the user task queue is used for storing information of tasks allocated to users.
9. Automatic central node-based dispatching device comprising a memory and a processor, said memory storing a computer program, characterized in that said processor, when executing said computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A central node based automated dispatch storage medium, wherein the storage medium stores computer instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-7.
CN202110697832.3A 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node Active CN113485800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110697832.3A CN113485800B (en) 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node

Publications (2)

Publication Number Publication Date
CN113485800A true CN113485800A (en) 2021-10-08
CN113485800B CN113485800B (en) 2024-01-23

Family

ID=77935912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110697832.3A Active CN113485800B (en) 2021-06-23 2021-06-23 Automatic dispatch method, system, equipment and storage medium based on central node

Country Status (1)

Country Link
CN (1) CN113485800B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150135183A1 (en) * 2013-11-12 2015-05-14 Oxide Interactive, LLC Method and system of a hierarchical task scheduler for a multi-thread system
CN109064005A (en) * 2018-07-27 2018-12-21 北京中关村科金技术有限公司 A kind of loan examination & approval task justice auto form delivering system of task based access control priority
CN110704186A (en) * 2019-09-25 2020-01-17 国家计算机网络与信息安全管理中心 Computing resource allocation method and device based on hybrid distribution architecture and storage medium
CN111324427A (en) * 2018-12-14 2020-06-23 深圳云天励飞技术有限公司 Task scheduling method and device based on DSP
CN111813513A (en) * 2020-06-24 2020-10-23 中国平安人寿保险股份有限公司 Real-time task scheduling method, device, equipment and medium based on distribution
CN112486648A (en) * 2020-11-30 2021-03-12 北京百度网讯科技有限公司 Task scheduling method, device, system, electronic equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant