CN112764924A - Task scheduling method and device and electronic equipment - Google Patents
- Publication number: CN112764924A (application CN202110048212.7A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- G06F 9/5066 — Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
- G06F 9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F 9/5083 — Techniques for rebalancing the load in a distributed system
Abstract
The invention relates to a task scheduling method, a task scheduling device, and an electronic device. The scheme comprises the following scheduling steps: acquiring a plurality of tasks to be processed; dividing the tasks to be processed into a plurality of task sets based on a task distribution strategy; and sending the task sets to the task processors, where each task processor corresponds one-to-one with a task set.
Description
Technical Field
The invention relates to the technical field of task scheduling, and in particular to a task scheduling method, a task scheduling device, and an electronic device.
Background
Task scheduling means that a server reasonably distributes a plurality of received tasks to several task processors. In other words, the server divides the received tasks into a number of task sets based on the load status of each task processor and distributes each set to the corresponding task processor.
Current task scheduling approaches cannot guarantee balanced load across multiple task processors.
In summary, a task scheduling method, a task scheduling device, and an electronic device that can effectively guarantee balanced load among a plurality of task processors are needed.
Disclosure of Invention
The invention aims to provide a task scheduling method, a task scheduling device, and an electronic device that address the above problems in the prior art.
To achieve this aim, the invention adopts the following technical scheme. A task scheduling method comprises the following scheduling steps:
S1000: acquiring a plurality of tasks to be processed;
S2000: dividing the tasks to be processed into a plurality of task sets based on a task distribution strategy;
S3000: sending the task sets to a plurality of task processors, where each task processor corresponds one-to-one with a task set to be processed.
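A minimal sketch of steps S1000–S3000 in Python (this is an illustration, not the patented implementation; the round-robin split below is our placeholder for the task distribution strategy described later):

```python
from typing import Any, Dict, List

def schedule(tasks: List[Any], processors: List[str]) -> Dict[str, List[Any]]:
    """S1000-S3000 sketch: divide the tasks into one set per processor.

    The round-robin division is only a stand-in for the task
    distribution strategy elaborated in the disclosure.
    """
    # S2000: divide the tasks into len(processors) sets (placeholder strategy)
    sets: Dict[str, List[Any]] = {p: [] for p in processors}
    for i, task in enumerate(tasks):
        sets[processors[i % len(processors)]].append(task)
    # S3000: each processor corresponds one-to-one with its task set
    return sets

assignment = schedule(list(range(7)), ["A", "B", "C"])
```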
The working principle and beneficial effects are as follows:
1. In the prior art, the load state is mainly judged for a single processing device or a single processor, and tasks are then scheduled against a set threshold: tasks can only be requested once the set value is reached, every processor must be monitored in real time, the extra processing burden is large, and the scheduling capability is poor.
2. In this scheme, when the number of tasks to be processed is large, the task distribution strategy can be executed cyclically in batches and the tasks distributed evenly, so that each task processor is utilized to the greatest extent and resource waste is avoided. The prior art must judge continuously, so its processing efficiency is too low; once the number of tasks grows too large, the computational burden increases and blocking or downtime easily occurs.
3. The one-to-one correspondence between task sets and task processors makes it easy to classify the total set of tasks; in particular, tasks can be distributed according to the characteristics and task types of each task processor. The controllable space is large, so the load among the task processors can be better coordinated and better load balance achieved.
Further, the specific steps for obtaining a plurality of tasks to be processed are as follows:
S1100: sending an unprocessed-task query request carrying a target time interval to a task storage center;
S1200: receiving all unprocessed tasks sent back by the task storage center;
S1300: confirming that all of the unprocessed tasks are the plurality of tasks to be processed.
These steps allow tasks to be requested one batch (or one task) at a time: the server only needs to fetch tasks periodically according to the aggregate processing capacity of the task processors. This reduces overload on each processor and noticeably reduces processor overheating and downtime, while the impact on task-processing time is almost negligible, since new tasks must in any case wait for earlier tasks to complete.
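Steps S1100–S1300 can be sketched as a time-windowed query (a sketch under assumptions: the `Task` record and the in-memory "storage center" below are hypothetical stand-ins, since the disclosure does not specify the storage interface):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    task_id: int
    created_at: float  # creation time, seconds since epoch (assumed unit)

def query_unprocessed(store: List[Task], start: float, end: float) -> List[Task]:
    """S1100-S1300 sketch: return all unprocessed tasks whose creation
    time falls in the target time interval [start, end)."""
    return [t for t in store if start <= t.created_at < end]

store = [Task(1, 10.0), Task(2, 25.0), Task(3, 40.0)]
pending = query_unprocessed(store, 0.0, 30.0)
```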
Further, in step S2000, the task distribution policy includes one or more of a task processor load policy, a task processor task level policy, and a task processor task status policy. Distributing through a combination of several policies gives a better task distribution effect.
Further, the task processor load policy includes the following steps:
S2110: acquiring the load state of each task processor, where the load state represents the number of additional tasks that processor can currently handle;
S2120: comparing the load states of the task processors to determine the distribution proportion for the tasks to be distributed;
S2130: distributing the tasks according to the proportional relation of the task processors' load states to form task sets in one-to-one correspondence with the task processors.
In these steps, the tasks are distributed based on the load state of each task processor, which ensures load balance after distribution, noticeably improves the utilization of each task processor, and improves processing efficiency.
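The load-policy steps can be sketched as a proportional split (a sketch, not the patented implementation; the largest-remainder rounding is our assumption, since the disclosure does not say how fractional shares are rounded):

```python
from typing import Any, Dict, List

def split_by_load(tasks: List[Any], loads: Dict[str, int]) -> Dict[str, List[Any]]:
    """S2110-S2130 sketch: distribute tasks in proportion to each
    processor's load state (remaining capacity)."""
    total = sum(loads.values())
    quotas = {p: len(tasks) * c / total for p, c in loads.items()}
    counts = {p: int(q) for p, q in quotas.items()}
    leftover = len(tasks) - sum(counts.values())
    # Largest-remainder rounding keeps the total equal to len(tasks)
    # (our assumption; not specified in the disclosure).
    for p in sorted(quotas, key=lambda name: quotas[name] - counts[name], reverse=True)[:leftover]:
        counts[p] += 1
    sets, start = {}, 0
    for p, n in counts.items():
        sets[p] = tasks[start:start + n]
        start += n
    return sets

sets = split_by_load(list(range(112)), {"A": 50, "B": 45, "C": 45})
```

With load states 50:45:45 and 112 tasks this reproduces the 40/36/36 split worked out in Example 2 below.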
Further, the task processor task level policy includes the following steps:
S2210: acquiring the task level of each task processor, where the task level represents the importance of the tasks that processor handles;
S2220: determining the task level of each task to be distributed according to the task levels of the task processors;
S2230: distributing tasks of the same level according to the task level of each task processor, forming task sets in one-to-one correspondence with the task processors.
In these steps, the task processors are grouped by the level of task they handle; all acquired, not-yet-distributed tasks are then divided by task level and assigned to the task processors of the corresponding level. This applies well to task processors of different models: each model can handle tasks of certain levels especially efficiently, and while it can also handle tasks of other levels, its efficiency there is lower. The steps above therefore better solve the load-imbalance problem in this kind of scenario.
Further, the task processor task status policy includes the following steps:
S2310: grading the tasks to be processed by degree of urgency;
S2320: acquiring the task status of each task processor, where the task status represents the urgency of the tasks that processor handles;
S2330: determining the tasks to be distributed that have the same urgency level according to the task status of each task processor;
S2340: distributing tasks of the same level according to the urgency grade of each task processor's task status, forming task sets in one-to-one correspondence with the task processors.
That is, the task processors are classified by degree of urgency; the acquired, not-yet-distributed tasks are then classified by the same degrees of urgency, and each class is assigned to the task processor of the matching level.
Further, determining the load state of a task processor comprises the following steps:
S2111: acquiring at least the current CPU utilization, current memory occupancy, current network rate, and current number of task processes of each task processor;
S2112: determining a first score corresponding to the current CPU utilization according to a pre-stored mapping between CPU utilization and score;
S2113: determining a second score corresponding to the current memory occupancy according to a pre-stored mapping between memory occupancy and score;
S2114: determining a third score corresponding to the current network rate according to a pre-stored mapping between network rate and score;
S2115: determining a fourth score corresponding to the current number of task processes according to a pre-stored mapping between the number of task processes and score;
S2116: determining a target load score according to the first, second, third, and fourth scores and a pre-stored load score formula;
where the load score formula is:
P = A1×α1 + A2×α2 + A3×α3 + A4×α4,
in which P is the target load score; A1 is the score corresponding to the current CPU utilization and α1 its weight; A2 is the score corresponding to the current memory occupancy and α2 its weight; A3 is the score corresponding to the current network rate and α3 its weight; A4 is the score corresponding to the current number of task processes and α4 its weight; and α1 + α2 + α3 + α4 = 1;
S2117: determining the target number of tasks that can still be processed, corresponding to the target load score, according to a pre-stored mapping between load score and number of tasks that can still be processed.
In these steps, the current load score of each task processor is obtained by a weighted calculation over its hardware parameters; quantizing the load makes the load state of each task processor easier and more intuitive to read, which facilitates distributing the tasks.
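The weighted formula of S2116 can be written directly (the scores and weights below are illustrative values of our own, not from the disclosure):

```python
from typing import Sequence

def load_score(scores: Sequence[float], weights: Sequence[float]) -> float:
    """S2116: P = A1*α1 + A2*α2 + A3*α3 + A4*α4, with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(a * w for a, w in zip(scores, weights))

# Illustrative scores for CPU utilization, memory occupancy, network
# rate, and task process count, with assumed weights 0.4/0.3/0.2/0.1.
p = load_score([80, 60, 90, 70], [0.4, 0.3, 0.2, 0.1])
```

The resulting P would then be looked up in the pre-stored mapping of S2117 to get the number of tasks the processor can still handle.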
Further, the task level allocation comprises the following steps:
S2211: acquiring the creation time of each task to be processed;
S2212: sorting the tasks by creation time to obtain a task sequence;
S2213: grading the task sequence according to the number of task processors, where the number of levels equals the number of task processors;
S2214: dividing all the tasks into a plurality of task sets according to the task level of each task.
These steps mainly sort all tasks by time and then segment the sequence according to the number of task processors, i.e., grade the tasks: for example, the first segment is high level, the middle is medium level, and the last is low level, corresponding respectively to high-, medium-, and low-level task processors. For the tasks to be processed, only a first-come-first-served principle then needs to be considered. For example, when a later command needs the result of an earlier command before it can execute, the later command can predict that result and compute speculatively; if the result actually produced by the earlier command matches the prediction, the later command's result is used directly. Compared with strictly sequential execution, this noticeably improves processing speed, achieves better load balance, and avoids some processors sitting idle ("lazy").
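Steps S2211–S2214 can be sketched as sort-then-segment (a sketch; the `(task_id, created_at)` tuple shape and the ceiling-division segment size are our assumptions):

```python
from typing import List, Tuple

def level_by_creation_time(tasks: List[Tuple[int, float]], n_processors: int) -> List[List[Tuple[int, float]]]:
    """S2211-S2214 sketch: sort (task_id, created_at) pairs by creation
    time, then cut the sequence into n_processors segments of (near-)
    equal size -- one task level per segment."""
    ordered = sorted(tasks, key=lambda t: t[1])   # S2212: sort by creation time
    size = -(-len(ordered) // n_processors)       # ceiling division
    # S2213/S2214: one segment (level) per task processor
    return [ordered[i * size:(i + 1) * size] for i in range(n_processors)]

segments = level_by_creation_time([(1, 5.0), (2, 1.0), (3, 3.0), (4, 2.0)], 2)
```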
A task scheduling device comprises an acquisition module, a division module, a storage module, and a distribution module.
The acquisition module is used to obtain a plurality of tasks to be processed from the storage module or from a task storage center;
the storage module is used to store the task distribution strategy and the tasks to be processed;
the division module is used to divide the tasks into a plurality of task sets according to the task distribution strategy;
the distribution module is used to send the task sets to a plurality of task processors, where each task processor corresponds one-to-one with a task set.
The task scheduling device can be applied directly in various scenarios, for example installed on a server, where it serves to balance load.
A task scheduling electronic device includes a processor and a memory. The processor is used to execute the task scheduling method; the memory is used to store the task distribution strategy, the tasks to be processed, and machine-readable instructions executable by the processor.
With this electronic device, task scheduling can be performed for each processing unit in a network center or a distributed computing system, reducing resource waste and improving the computing efficiency of the whole network.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic diagram of an application example of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for ease of description and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and thus, the above terms should not be construed as limiting the present invention.
Example 1:
Referring to FIG. 1, the task scheduling method is applied to a server and includes the following steps:
s1000, the server obtains a plurality of tasks to be processed.
Obtaining a plurality of tasks to be processed comprises the following steps:
s1100, a server sends a to-be-processed task query request carrying a target time period to a task storage center, wherein the to-be-processed task query request is used for indicating the task storage center to feed back all to-be-processed tasks in the target time period;
s1200, the server receives all to-be-processed tasks in a target time period, which are sent by the task storage center;
s1300, the server determines all the tasks to be processed in the target time period to be a plurality of tasks to be processed.
S2000, the server divides the plurality of tasks to be processed into a plurality of task sets to be processed based on a task distribution strategy, and the task distribution strategy is used for reasonably distributing the plurality of tasks to be processed to a plurality of task processors.
The task distribution strategy is stored in the server in advance and includes one or more of a task processor load policy, a task processor task level policy, and a task processor task status policy. Distributing through a combination of several policies gives a better task distribution effect.
Preferably, the task distribution policy may further consider the case where a task processor with a high task level shares part of the tasks of a processor with a low task level; the case where one task processor handles the urgent tasks while several others handle the non-urgent tasks; and the case where several task processors first process the urgent tasks in parallel and then process the non-urgent tasks in parallel. In each of these cases, the tasks to be processed are reasonably distributed to the task processors by task classification.
A server is a type of computer that runs faster, bears higher loads, and costs more than an ordinary computer; it provides computing or application services to other clients in a network (terminals such as PCs, smartphones, and ATMs, and even large equipment such as train systems). A server has high-speed CPU computing capability, long-term reliable operation, strong I/O data throughput, and good extensibility. In general, according to the services it provides, a server has the ability to respond to service requests, support the service, and guarantee it. As an electronic device, a server's internal structure is complex but not very different from that of an ordinary computer: CPU, hard disk, memory, system bus, and so on.
S3000, the server sends the task sets to be processed to the task processors, and the task processors correspond to the task sets to be processed one by one.
The server may send the plurality of task sets to the plurality of task processors in either of two ways: sequentially, or in parallel.
By contrast, consider a server that first distributes to task processor A as many tasks as A can currently handle, then distributes to task processor B as many tasks as B can currently handle, and finally hands the remaining tasks (fewer than task processor C could currently handle) to task processor C: such a scheme cannot guarantee load balance across the task processors. In the embodiment of the invention, the server divides the received tasks into task sets based on the task distribution strategy and distributes each set to the corresponding task processor, thereby reasonably distributing the tasks to the task processors and guaranteeing their load balance.
If the number of tasks to be processed is excessive, the server either cyclically executes, in batches, the whole procedure of dividing the tasks into task sets based on the task distribution strategy and distributing each set to its task processor, or cyclically executes only the distribution of task sets to the corresponding task processors in batches, until the dispatch of all the tasks is complete.
Example 2:
the task processor load strategy comprises the following steps:
assume that the number of task processors includes a task processor a, a task processor B, and a task processor C.
S2110, the server obtains the load state of task processor A, the load state of task processor B, and the load state of task processor C.
The load state of a task processor is the number of additional tasks it can currently handle.
Specifically, determining the load state of a task processor includes the following four steps:
S2111, acquiring the task processor's current CPU utilization, current memory occupancy, current network rate, and current number of tasks;
S2112, determining a first score corresponding to the current CPU utilization according to the pre-stored mapping between CPU utilization and score; determining a second score corresponding to the current memory occupancy according to the pre-stored mapping between memory occupancy and score; determining a third score corresponding to the current network rate according to the pre-stored mapping between network rate and score; and determining a fourth score corresponding to the current number of tasks according to the pre-stored mapping between task count and score;
S2113, determining a target load score according to the first, second, third, and fourth scores and the pre-stored load score formula;
where the load score formula is:
P = A1×α1 + A2×α2 + A3×α3 + A4×α4,
in which P is the target load score; A1, A2, A3, and A4 are the scores corresponding to the current CPU utilization, memory occupancy, network rate, and task count respectively; α1, α2, α3, and α4 are the corresponding weights; and α1 + α2 + α3 + α4 = 1.
S2114, determining the target number of tasks that can still be processed, corresponding to the target load score, according to the pre-stored mapping between load score and number of tasks that can still be processed.
The server obtains the load states of task processors A, B, and C either sequentially or in parallel.
S2120, the server determines the distribution proportion of the tasks to be processed according to the load states of task processors A, B, and C.
For example, if the load state of task processor A is 50, that of task processor B is 45, and that of task processor C is 45, the server determines the distribution proportion to be 10:9:9.
S2130, the server divides the tasks into three task sets according to the distribution proportion.
If the distribution proportion is 10:9:9 and there are 112 tasks to be processed, the server divides them into task set A (40 tasks), task set B (36 tasks), and task set C (36 tasks).
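The worked numbers can be checked with plain integer arithmetic (a sketch; the 10:9:9 ratio happens to divide 112 evenly, so no rounding rule is needed here):

```python
# Split 112 tasks in the ratio 10:9:9 from Example 2.
total, ratio = 112, (10, 9, 9)
sizes = [total * r // sum(ratio) for r in ratio]  # integer shares per processor
```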
As can be seen, in this embodiment the server determines the distribution proportion from the task processors' load states, divides the tasks into task sets according to that proportion, and distributes each set to its task processor; the tasks are thus distributed to the task processors in proportion to their capacity, so load balance among them can be guaranteed.
Example 3:
the task processor task level policy comprises the steps of:
assume that the number of task processors includes a task processor a, a task processor B, and a task processor C.
S2210, the server obtains the task level of task processor A, the task level of task processor B, and the task level of task processor C; the three task levels differ from one another.
The task level of a task processor is the importance of the tasks it handles.
Task levels may include, but are not limited to, high, medium, and low.
The server obtains the three task levels either sequentially or in parallel.
S2220, the server determines the task level of each of the tasks to be processed according to the task levels of task processors A, B, and C.
Specifically, this may include the following steps:
S2211, the server obtains the creation time of each task to be processed;
S2212, the server sorts the tasks by creation time to obtain a task sequence;
S2213, the server sets the task level of the first third of the task sequence to low, the middle third to medium, and the last third to high.
S2230, the server divides the tasks to be processed into three task sets according to the task level of each task.
If there are 120 tasks to be processed, of which 40 are high level, 40 are medium level, and 40 are low level, the server divides them into task set A (40 tasks of one level), task set B (40 tasks of another level), and task set C (40 tasks of the remaining level).
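The sort-and-cut of Example 3 can be sketched as follows (a sketch under assumptions: the low/medium/high labels follow S2213 above, and the equal-thirds cut assumes the task count divides evenly, as with 120 tasks):

```python
from typing import Dict, List, Sequence, Tuple

def assign_levels(sorted_tasks: List[int], levels: Sequence[str] = ("low", "medium", "high")) -> Dict[str, List[int]]:
    """Example 3 sketch: cut a creation-time-ordered sequence into equal
    thirds and label each third with a task level (earliest = low,
    middle = medium, latest = high, per S2213)."""
    n, k = len(sorted_tasks), len(levels)
    return {levels[i]: sorted_tasks[i * n // k:(i + 1) * n // k] for i in range(k)}

by_level = assign_levels(list(range(120)))  # 120 tasks already ordered by creation time
```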
As can be seen, in this embodiment the server determines each task's level from the task levels of processors A, B, and C, divides the tasks into task sets by level, and distributes each set to the corresponding task processor; the tasks are thus distributed reasonably by task level, so load balance among the task processors can be guaranteed.
Example 4:
the task processor task status policy comprises the steps of:
assume that the number of task processors includes a task processor a, a task processor B, and a task processor C.
S2310, the server obtains the task status of task processor A, the task status of task processor B, and the task status of task processor C; the three task statuses differ from one another.
The task status of a task processor is the urgency of the tasks it handles.
For example, the task status of task processor A may be urgent, that of task processor B more urgent, and that of task processor C not urgent.
The server obtains the three task statuses either sequentially or in parallel.
S2320, the server obtains the task status of each of the tasks to be processed; a task's status may be preset.
S2330, from the tasks to be processed, the server selects the tasks whose status is urgent to form the task set for task processor A, the tasks whose status is more urgent to form the task set for task processor B, and the tasks whose status is not urgent to form the task set for task processor C.
It can be seen that, in this embodiment, the server selects from the multiple tasks to be processed at least one task whose status matches that of task processor A to form a first task set, at least one task whose status matches that of task processor B to form a second task set, and at least one task whose status matches that of task processor C to form a third task set, and distributes each set to its corresponding task processor. The tasks are thus reasonably allocated across the task processors according to task status, so load balancing among the task processors can be ensured.
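A minimal sketch of this status-matching division, using the status labels from the example above; the processor-to-status mapping and names are assumptions:

```python
from collections import defaultdict

# Sketch of the status-matching division. The status labels follow the
# example in this embodiment; the processor-to-status mapping is assumed.
PROCESSOR_STATUS = {"A": "urgent", "B": "more urgent", "C": "not urgent"}

def group_by_status(tasks):
    """Group (task_id, status) pairs into one task set per processor
    with a matching task status; statuses are preset per task (S2320)."""
    by_status = defaultdict(list)
    for task_id, status in tasks:
        by_status[status].append(task_id)
    return {proc: by_status[status] for proc, status in PROCESSOR_STATUS.items()}

tasks = [(1, "urgent"), (2, "not urgent"), (3, "more urgent"), (4, "urgent")]
sets_by_processor = group_by_status(tasks)
# {"A": [1, 4], "B": [3], "C": [2]}
```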
Example 5:
An application scenario of the task scheduling method comprises a server and a plurality of task processors, the server being communicatively connected to each task processor. The scheme operates as follows:
and the server is used for obtaining a plurality of tasks to be processed.
The server is also used for dividing the plurality of tasks to be processed into a plurality of task sets to be processed based on a task distribution strategy, and the task distribution strategy is used for reasonably distributing the plurality of tasks to be processed to the plurality of task processors.
And the server is also used for sending the plurality of task sets to be processed to the plurality of task processors, and the plurality of task processors correspond to the plurality of task sets to be processed one by one.
And the task processor is used for receiving the task set to be processed sent by the server.
And the task processor is also used for executing processing operation on the set of tasks to be processed.
The task processor may be a mobile terminal, and is not limited herein.
A mobile terminal, or mobile communication terminal, is a computing device that can be used while mobile, broadly including mobile phones, notebooks, tablet computers, POS machines, and even vehicle-mounted computers; most often the term refers to mobile phones, or to smartphones and tablets with rich application functions. As networks and technologies develop toward ever-greater bandwidth, the mobile communications industry is moving toward a true mobile information age. Meanwhile, with the rapid development of integrated-circuit technology, mobile terminals now possess strong processing capability and are changing from simple conversation tools into integrated information-processing platforms, which opens further development space for them.
Mobile terminals spent decades developing as simple communication devices alongside mobile communications. From 2007 onward, the shift toward intelligent terminals fundamentally changed the traditional positioning of the terminal as a mere endpoint of the mobile network. Almost overnight, the mobile intelligent terminal became a key entrance to and main innovation platform for Internet services; a new platform for media, electronic commerce, and information services; and the most important hub of Internet resources, mobile-network resources, and environment-interaction resources. Its operating system and processor chip have even become strategic high points for the entire ICT industry. The disruptive change brought by the mobile intelligent terminal ushered in a new era of mobile-Internet industry development and a new technology-industry cycle. As mobile intelligent terminals continue to develop, their influence now rivals that of radio, television, and the (PC) Internet, making them historically the fourth terminal product with wide penetration, rapid popularization, and great influence reaching every aspect of social life.
The above figure uses the first task processor, the second task processor, and the third task processor only as an example and should not be understood as limiting the number of task processors to three; in practical applications, there may be any number of task processors.
A task scheduling device comprises an acquisition module, a division module, a storage module and an allocation module;
the acquisition module is used for acquiring a plurality of tasks to be processed from the storage module or the task storage center;
the storage module is used for storing a task distribution strategy and a task to be processed;
the dividing module is used for dividing the tasks to be processed into a plurality of task sets to be processed according to the task distribution strategy;
the distribution module is used for sending the task sets to be processed to the plurality of task processors, the task processors corresponding one-to-one to the task sets to be processed.
The task scheduling device can be applied directly in various scenarios, for example by being installed on a server, where it performs load balancing.
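A minimal sketch of the four-module device described above, with an illustrative even-split policy standing in for the task distribution policy; all class, function, and variable names are assumptions, not the claimed implementation:

```python
# Sketch of the four-module device. StorageModule holds the distribution
# policy and pending tasks; TaskSchedulingDevice groups the acquisition,
# division, and allocation roles. Names and the policy are illustrative.
class StorageModule:
    def __init__(self, policy, tasks=None):
        self.policy = policy               # task distribution policy
        self.tasks = list(tasks or [])     # tasks to be processed

class TaskSchedulingDevice:
    def __init__(self, storage, processors):
        self.storage = storage
        self.processors = processors       # task-processor identifiers

    def acquire(self):
        # acquisition module: fetch pending tasks from the storage module
        return list(self.storage.tasks)

    def divide(self, tasks):
        # division module: split tasks into one set per processor
        return self.storage.policy(tasks, len(self.processors))

    def allocate(self):
        # allocation module: pair each set with its processor, one-to-one
        return dict(zip(self.processors, self.divide(self.acquire())))

# illustrative policy: round-robin even split across processors
even_split = lambda tasks, n: [tasks[i::n] for i in range(n)]

device = TaskSchedulingDevice(StorageModule(even_split, [1, 2, 3, 4, 5]), ["P1", "P2"])
# device.allocate() == {"P1": [1, 3, 5], "P2": [2, 4]}
```

Any of the policies described above (load, task level, task status) could be plugged in place of `even_split`.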
A task scheduling electronic device includes a processor and a memory; the processor is used for executing the task scheduling method; the memory is to store a task distribution policy, a pending task, and machine-readable instructions executable by the processor.
By adopting the task scheduling electronic device, task scheduling can be performed for each processing unit in a network center or a distributed computing system, thereby reducing resource waste and improving the computing efficiency of the entire network.
Details not elaborated in the present invention belong to the prior art known to those skilled in the art and are therefore not described in detail.
It should be understood that the terms "a" and "an" indicate that the number of an element may be one in one embodiment and plural in another; these terms should not be interpreted as limiting the number.
Although certain terms are used frequently herein, the possibility of using other terms is not excluded. These terms are used merely to describe and explain the essence of the present invention more conveniently, and construing them as imposing any additional limitation would be contrary to the spirit of the present invention.
The present invention is not limited to the above preferred embodiments. Any product in any form that anyone obtains in light of the present invention, whatever changes in shape or structure it involves, falls within the protection scope of the present invention if its technical solution is the same as or similar to that of the present application.
Claims (10)
1. The task scheduling method is characterized by comprising the following scheduling steps:
acquiring a plurality of tasks to be processed;
dividing the tasks to be processed into a plurality of task sets to be processed based on a task distribution strategy;
and sending the task sets to be processed to a plurality of task processors, wherein each task processor corresponds to each task set to be processed one by one.
2. The task scheduling method according to claim 1, wherein obtaining the plurality of tasks to be processed comprises:
sending an unprocessed task query request carrying a target time interval to a task storage center;
receiving all the unprocessed tasks sent by the task storage center;
and confirming that all unprocessed tasks are a plurality of to-be-processed tasks.
3. The method of claim 1, wherein the task distribution policy comprises one or more of a task processor load policy, a task processor task level policy, and a task processor task status policy.
4. A task scheduling method according to claim 3, wherein the task processor load policy comprises the steps of:
respectively acquiring the load state of each task processor, wherein the load state of a task processor represents the number of additional tasks that the task processor can still process;
comparing the load states of the task processors to determine a distribution proportion for the tasks to be allocated; and distributing the tasks to be allocated in proportion to the load states of all the task processors, forming task sets to be processed in one-to-one correspondence with the task processors.
5. A task scheduling method according to claim 3, wherein said task processor task level policy comprises the steps of:
acquiring the task level of each task processor, wherein the task level of each task processor represents the importance degree of a task to be processed by the task processor;
determining the task grade of each task to be allocated to be processed according to the task grade of each task processor;
and distributing the tasks to be distributed in the same level according to the task grade of each task processor to form a task set to be processed corresponding to the task processors one by one.
6. A task scheduling method according to claim 3, wherein said task processor task status policy comprises the steps of:
dividing the tasks to be processed into urgency levels;
acquiring the task status of each task processor, wherein the task status of a task processor represents the urgency level of the tasks to be processed by that processor;
determining the tasks to be allocated that have the same urgency level according to the task status of each task processor;
and distributing the tasks to be allocated at the same level according to the urgency level of the task status of each task processor, forming task sets to be processed in one-to-one correspondence with the task processors.
7. A task scheduling method according to claim 4, wherein determining the load status of the task processor comprises the steps of:
at least acquiring the current CPU utilization rate, the current memory occupancy rate, the current network rate and the current task process quantity of each task processor;
determining a first score corresponding to the current CPU utilization rate according to a pre-stored mapping relation between the CPU utilization rate and the score;
determining a second score corresponding to the current memory occupancy rate according to a mapping relation between the pre-stored memory occupancy rate and the score;
determining a third score corresponding to the current network rate according to a mapping relation between the pre-stored network rate and the score;
determining a fourth score corresponding to the number of the current task processes according to a mapping relation between the number of the pre-stored task processes and the score;
determining a target load score according to the first score, the second score, the third score, the fourth score and a pre-stored load score formula;
wherein, the load score formula is as follows:
P=A1×α1+A2×α2+A3×α3+A4×α4,
P is the target load score, A1 is the score corresponding to the current CPU utilization, α1 is the weight corresponding to CPU utilization, A2 is the score corresponding to the current memory occupancy, α2 is the weight corresponding to memory occupancy, A3 is the score corresponding to the current network rate, α3 is the weight corresponding to network rate, A4 is the score corresponding to the current number of task processes, α4 is the weight corresponding to the number of task processes, and α1 + α2 + α3 + α4 = 1;
and determining the target task number which can still be processed and corresponds to the target load score according to the mapping relation between the pre-stored load score and the task number which can still be processed.
8. The method according to claim 5, wherein the task level assignment comprises the following steps:
acquiring task creating time of each task to be processed;
sequencing the tasks to be processed according to the task creation time of each task to be processed in sequence to obtain a task sequence to be processed;
dividing the task sequence to be processed into grades according to the number of task processors, wherein the number of grades equals the number of task processors;
and dividing all the tasks to be processed into a plurality of task sets to be processed according to the task grade of each task to be processed.
9. A task scheduling device is characterized by comprising an acquisition module, a division module, a storage module and an allocation module;
the acquisition module is used for acquiring a plurality of tasks to be processed from the storage module or the task storage center;
the storage module is used for storing a task distribution strategy and a task to be processed;
the dividing module is used for dividing the tasks to be processed into a plurality of task sets to be processed according to a task distribution strategy;
the distribution module is used for sending the task sets to be processed to a plurality of task processors, wherein each task processor is in one-to-one correspondence with each task set to be processed.
10. A task scheduling electronic device comprising a processor and a memory; the processor is used for executing a task scheduling method according to any one of claims 1 to 8; the memory is to store a task distribution policy, a task to be processed, and machine-readable instructions executable by the processor.
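The load-score computation of claim 7 can be sketched as follows; the mapping table, the weight values, and the metric values below are illustrative assumptions, since the claim only requires that the mappings and weights be stored in advance:

```python
# Sketch of the claim-7 load score P = A1*α1 + A2*α2 + A3*α3 + A4*α4.
# The mapping table, weights, and metric values are assumed examples.
def metric_score(value, mapping):
    """Look up the score for a metric value in a pre-stored mapping,
    given as (upper_bound, score) pairs sorted by bound."""
    for bound, score in mapping:
        if value <= bound:
            return score
    return mapping[-1][1]

CPU_MAP = [(30, 100), (60, 60), (100, 20)]   # CPU utilisation % -> score

def load_score(a1, a2, a3, a4, weights=(0.4, 0.3, 0.2, 0.1)):
    """Weighted sum of the four metric scores; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(a * w for a, w in zip((a1, a2, a3, a4), weights))

p = load_score(metric_score(25, CPU_MAP), 60, 80, 50)
# p == 100*0.4 + 60*0.3 + 80*0.2 + 50*0.1 == 79.0
```

Per the claim, the resulting score would then index a second pre-stored mapping to obtain the number of tasks the processor can still accept.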
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110048212.7A CN112764924A (en) | 2021-01-14 | 2021-01-14 | Task scheduling method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112764924A true CN112764924A (en) | 2021-05-07 |
Family
ID=75700496
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110048212.7A Pending CN112764924A (en) | 2021-01-14 | 2021-01-14 | Task scheduling method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112764924A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130283097A1 (en) * | 2012-04-23 | 2013-10-24 | Yahoo! Inc. | Dynamic network task distribution |
CN109788061A (en) * | 2019-01-23 | 2019-05-21 | 中科驭数(北京)科技有限公司 | Calculating task dispositions method and device |
CN110209496A (en) * | 2019-05-20 | 2019-09-06 | 中国平安财产保险股份有限公司 | Task sharding method, device and sliced service device based on data processing |
CN110347602A (en) * | 2019-07-11 | 2019-10-18 | 中国工商银行股份有限公司 | Multitask script execution and device, electronic equipment and readable storage medium storing program for executing |
CN111309644A (en) * | 2020-02-14 | 2020-06-19 | 苏州浪潮智能科技有限公司 | Memory allocation method and device and computer readable storage medium |
CN112162839A (en) * | 2020-09-25 | 2021-01-01 | 太平金融科技服务(上海)有限公司 | Task scheduling method and device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
DAI Leyu; LI Wei; XU Jinfu; LI Junwei: "Task-level-oriented data allocation mechanism for multi-core cryptographic processors" (面向任务级的多核密码处理器数据分配机制), 计算机工程与设计 (Computer Engineering and Design), no. 01, pages 89 - 93 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||