CN118113451A - Task allocation method, device and storage medium - Google Patents

Task allocation method, device and storage medium

Info

Publication number
CN118113451A
CN118113451A
Authority
CN
China
Prior art keywords
task
thread
load information
local
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211520900.XA
Other languages
Chinese (zh)
Inventor
冯楚桓
仇斌
黄灏
胡小强
王煊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211520900.XA priority Critical patent/CN118113451A/en
Publication of CN118113451A publication Critical patent/CN118113451A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The embodiment of the application discloses a task allocation method, device, and storage medium that can allocate tasks to suitable working sub-threads, improving execution efficiency and execution performance. The method comprises the following steps: acquiring a first task and first load information corresponding to the first task, where the first load information indicates the system resources that executing the first task will consume; acquiring second load information of the local task queue of each working sub-thread, where the second load information indicates the total load of the second tasks in the corresponding local task queue, a second task being a task currently allocated to that queue; and allocating the first task to the local task queue of a target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread can execute the first task.

Description

Task allocation method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a task allocation method, a task allocation device and a storage medium.
Background
Many business scenarios in software systems require tasks to be processed on a schedule, and the processing is typically carried out by a process. When the number of tasks is small, a single main process can handle them. However, when the number of tasks is large and the reliability requirements are high, multithreaded processing is needed, which raises the problem of how tasks are allocated among multiple threads.
In related schemes, task allocation is typically performed according to the physics engine's built-in default scheduling policy. Specifically, each working sub-thread is assigned a local task queue; after a task is generated, it is placed into the local task queue of a designated working sub-thread according to the default scheduling policy, and that sub-thread then executes the tasks in its queue. However, tasks can differ in importance and in the system resources they consume. If tasks with different demands are still routed to a default local task queue by the default scheduling policy, they cannot be placed in the local task queue of the most suitable working sub-thread, so subsequent execution is inefficient and performance suffers.
Disclosure of Invention
The embodiment of the application provides a task allocation method, device, and storage medium that can allocate tasks to suitable working sub-threads, improving execution efficiency and execution performance.
In a first aspect, an embodiment of the present application provides a task allocation method. The method comprises the following steps: acquiring a first task and first load information corresponding to the first task, wherein the first load information indicates the system resources that executing the first task will consume; acquiring second load information of the local task queue of each working sub-thread, wherein the second load information indicates the total load of the second tasks in the corresponding local task queue, a second task being a task currently allocated to the corresponding local task queue; and allocating the first task to the local task queue of a target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread can execute the first task, wherein before the first task is allocated to it, the second load information of the target working sub-thread's local task queue is a target value among the second load information of all the local task queues.
In a second aspect, an embodiment of the present application provides a task allocation apparatus comprising an acquisition unit and a processing unit. The acquisition unit is configured to acquire a first task and first load information corresponding to the first task, wherein the first load information indicates the system resources that executing the first task will consume; and to acquire second load information of the local task queue of each working sub-thread, wherein the second load information indicates the total load of the second tasks in the corresponding local task queue, a second task being a task currently allocated to that queue. The processing unit is configured to allocate the first task to the local task queue of a target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread executes the first task, wherein before the first task is allocated to it, the second load information of the target working sub-thread's local task queue is a target value among the second load information of all the local task queues.
In some optional examples, the acquisition unit is further configured to obtain a service priority of the first task after the first task has been allocated to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, where the service priority of the first task indicates its priority level within the local task queue of the target working sub-thread. The processing unit is configured to, when the service priority of the first task is higher than that of a third task and lower than that of a fourth task, rank the first task before the third task and after the fourth task, where the third task and the fourth task are any two tasks already allocated in the local task queue of the target working sub-thread.
In other alternative examples, the acquisition unit is configured to: acquire the first task from a global task queue based on a first identifier, wherein each task in the global task queue is generated by a working sub-thread and/or the working main thread, and the first identifier identifies the first task; and acquire the first load information corresponding to the first task from the service field of the first task.
In other alternative examples, the acquisition unit is further configured to: before the first task is obtained from the global task queue based on the first identifier, obtain the tasks generated by the working main thread and the tasks generated by each working sub-thread, wherein a task generated by a working sub-thread is one it produced while executing a task generated by the working main thread. The processing unit is configured to store the tasks generated by the working main thread and by each working sub-thread into the global task queue.
In other alternative examples, the processing unit is further configured to: after the first task is distributed to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, the second load information of the local task queue of the target working sub-thread is updated after the target working sub-thread is determined to execute the first task.
In other alternative examples, the processing unit is configured to: and deleting the first task from the local task queue of the target working sub-thread to update the second load information of the local task queue of the target working sub-thread.
A third aspect of an embodiment of the present application provides a task allocation device, including: a processor, an input/output (I/O) interface, and a memory. The memory is used for storing program instructions. The processor is configured to execute the program instructions in the memory to perform the task allocation method corresponding to any implementation of the first aspect.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of the first aspect described above.
A fifth aspect of the embodiments of the present application provides a computer program product comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the method of the first aspect described above.
From the above technical solutions, the embodiment of the present application has the following advantages:
In the embodiment of the application, a first task and its corresponding first load information are acquired, together with the second load information of each working sub-thread's local task queue. The first load information indicates the system resources that executing the first task will consume; the second load information indicates the total load of the second tasks, that is, the tasks currently allocated, in the corresponding local task queue. Based on the first load information and each piece of second load information, the first task can then be allocated to the local task queue of a suitable target working sub-thread, which executes it. Before the first task is allocated, the second load information of the target working sub-thread's local task queue is a target value (for example, the smallest) among the second load information of all the local task queues. In this way, the second load information of every local task queue is fully considered when determining a suitable target working sub-thread, and the first load information of the current first task is also considered when placing it, so the execution efficiency of the target working sub-thread when executing the first task is improved, and so is the execution performance.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of task allocation provided in a prior art scheme;
FIG. 2 is a flow chart of a task allocation method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of load information and service priority provided by an embodiment of the present application;
FIG. 4 is a diagram comparing task response speed when performing tasks using the inventive solution versus the prior art solution;
FIG. 5 is a schematic structural diagram of a task allocation apparatus according to an embodiment of the present application;
FIG. 6 is a schematic hardware structural diagram of a task allocation device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a task allocation method, device, and storage medium that can allocate tasks to suitable working sub-threads, improving execution efficiency and execution performance.
It will be appreciated that in the specific embodiments of the present application, related data such as user information, personal data of a user, etc. are involved, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, use and processing of related data is required to comply with relevant laws and regulations and standards of relevant countries and regions.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In related schemes, task (job) allocation is typically performed according to the physics engine's built-in default scheduling policy. Illustratively, FIG. 1 shows a schematic diagram of task allocation provided in prior art schemes. As shown in FIG. 1, when a task A is generated, the default task scheduler checks the number of working sub-threads (CPU worker threads); if there are none, the working main thread executes task A directly. If the number of working sub-threads is greater than zero, for example working sub-thread 1 through working sub-thread 3 in FIG. 1, the default task scheduler traverses all working sub-threads and checks whether the thread that generated task A is the same thread as the one that will execute it. If so, task A is added to the local task queue (Local Job List) of the corresponding working sub-thread, for example local task queue 1 of working sub-thread 1. Otherwise, task A is added to the global task queue (Job List). Thus, if working sub-thread 1 is designated to execute task A under the default scheduling policy, it first tries to fetch task A from local task queue 1. If local task queue 1 is empty, it fetches a task from the global task queue. If the global task queue is also empty, working sub-thread 1 acquires tasks from the local task queues of the other working sub-threads and executes them.
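The default scheduling flow described above can be sketched as follows. This is an illustrative model in Python; the class and method names are hypothetical and not taken from any actual physics engine.

```python
from collections import deque

class DefaultScheduler:
    """Illustrative model of the default policy in FIG. 1 (hypothetical
    names): a task generated on the same thread that will execute it goes
    to that worker's local queue, otherwise to the global queue."""

    def __init__(self, num_workers):
        self.global_queue = deque()
        self.local_queues = [deque() for _ in range(num_workers)]

    def submit(self, task, producer_id, executor_id):
        if not self.local_queues:
            return "main-thread"                # no workers: main thread runs it
        if producer_id == executor_id:          # same thread: keep it local
            self.local_queues[executor_id].append(task)
            return "local"
        self.global_queue.append(task)          # otherwise use the global queue
        return "global"

    def fetch(self, worker_id):
        # A worker drains its own local queue first, then the global
        # queue, then steals from another worker's local queue.
        if self.local_queues[worker_id]:
            return self.local_queues[worker_id].popleft()
        if self.global_queue:
            return self.global_queue.popleft()
        for queue in self.local_queues:
            if queue:
                return queue.popleft()
        return None
```

Note how the same-thread check decides local versus global placement, which is exactly the step the patent argues rarely hits for loading tasks generated by the main thread.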
However, as the description of FIG. 1 shows, when working sub-thread 1 executes task A by default, the default task scheduler first compares whether the thread that generated the task and the working sub-thread that will execute it are the same. This scheduling logic yields a high hit rate when the workload consists mainly of physical simulation tasks, since a task generated by a working sub-thread keeps executing on that sub-thread, improving the continuity of simulation. However, when the workload consists mainly of loading tasks or other custom task types, such tasks are mostly generated by the working main thread, so the same-thread check of the default scheduling policy in FIG. 1 rarely hits, and the traversal search becomes pure overhead in practice. In addition, tasks can differ in importance and in the system resources they consume. If tasks are still routed to a default local task queue by the default scheduling policy, some working sub-threads end up always processing tasks that consume large amounts of system resources while others always process lightweight ones; tasks with different demands cannot be placed in the local task queue of the most suitable working sub-thread, so subsequent execution is inefficient and performance suffers.
It should be noted that, the working sub-threads 1 to 3 shown in fig. 1 are also only illustrative, and in practical application, the working sub-threads 4, 5, etc. may also be included, which is not limited in the embodiment of the present application.
Therefore, to solve the above problems, the embodiments of the present application provide a task allocation method. The method can be applied to multithreaded scenarios, to task loading and physical simulation scenarios, or to scenarios such as virtual games; the embodiments of the present application are not limited in this respect. With the method provided by the embodiment of the application, the second load information of each working sub-thread's local task queue is fully considered, so that a suitable target working sub-thread can be determined from among the working sub-threads. The first load information of the current first task is then also taken into account when allocating the first task to the target working sub-thread's local task queue, which improves both the execution efficiency and the execution performance of the target working sub-thread when it executes the first task. Moreover, the method does not need to check whether the thread that generated a task is the same as the thread that executes it, so it can be applied to allocating tasks of different types, widening the applicable range of task allocation.
It should be noted that the task allocation method provided in the embodiment of the present application may be applied to any task allocation device having data processing capability, for example a server or another task scheduling device. The server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), big data, and artificial intelligence platforms.
In order to facilitate understanding of the technical solution of the present application, a task allocation method provided by the embodiment of the present application is described below with reference to the accompanying drawings.
Fig. 2 shows a flowchart of a method for task allocation according to an embodiment of the present application. As shown in fig. 2, the method for task allocation may include the steps of:
201. Acquire a first task and first load information corresponding to the first task, where the first load information indicates the system resources that executing the first task will consume.
In this example, the working main thread generates corresponding tasks while executing its work, and a working sub-thread also generates corresponding tasks while executing tasks generated by the working main thread. After these tasks are generated, the task allocation device may store both the tasks generated by the working main thread and those generated by the working sub-threads in the global task queue. Moreover, the task allocation device may use different identifiers to distinguish different tasks. For example, if the tasks generated by the working main thread are tasks 1 to 3 and the tasks generated by the working sub-threads are tasks 4 to 8, the task allocation device may assign each task a corresponding identifier, for example identifiers 1 to 8 for tasks 1 to 8 respectively.
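A minimal sketch of the global task queue with per-task identifiers, as described above; the `GlobalTaskQueue` class and its methods are hypothetical names used for illustration only.

```python
class GlobalTaskQueue:
    """Sketch of the global task queue: tasks generated by the working
    main thread and the working sub-threads are stored together, each
    under its own identifier (class and method names are illustrative)."""

    def __init__(self):
        self._tasks = {}      # identifier -> task
        self._next_id = 1

    def put(self, task):
        task_id = self._next_id       # assign the next free identifier
        self._tasks[task_id] = task
        self._next_id += 1
        return task_id

    def get(self, task_id):
        # Remove and return the task identified by task_id.
        return self._tasks.pop(task_id)
```

With tasks 1 to 8 stored this way, the task allocation device can later retrieve, say, task 1 via identifier 1, matching the example in the description.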
By way of illustration, a task may be understood as a piece of code logic that a working sub-thread needs to execute. A task is typically a physical simulation task or a scene loading task, or it may be a model loading task for some physical body or another custom task. For example, in a loading scenario of a virtual game, different types of tasks generally need to be loaded, such as a task of loading a sub-copy of the virtual game, a task of loading a PVP copy, a task of loading a PVE copy, a task of loading a camp copy, a task of loading a virtual carrier, a task of loading a building element, a task of loading a virtual enemy object model, and the like; the embodiment of the present application is not limited in this respect.
When acquiring the first task, the task allocation device may fetch it from the global task queue based on the first identifier, which identifies the first task. For example, if the first task is task 1 mentioned above, the task allocation device may obtain task 1 from the global task queue based on identifier 1. It should be understood that, in practical applications, the first task may also be another task in the global task queue, for example task 2 or task 3; the embodiment of the present application is not limited in this respect.
In addition, when the working main thread or a working sub-thread generates a task, it calculates the load information for that task from information such as the size of the file to be loaded and the data structure of the file data, and maps the load information into the task's service field. The load information indicates the system resources that executing the task will consume. In this way, after acquiring the first task, the task allocation device may demap the first task and obtain the first load information from the corresponding service field; the first load information indicates the system resources that executing the first task will consume.
The load information for each task may be expressed as a quantified number of load points. For example, if task 1 loads a sub-copy of the virtual game, which is the largest map resource in the game, task 1 consumes a large amount of system resources when loaded, and its load information may be expressed as 100000 load points. Similarly, if task 5 loads a virtual carrier, the smallest loading unit in the virtual game, executing task 5 requires few system resources, so its load information may be expressed as a small number of load points, for example 65. Likewise, if task 2 loads a PVP copy, task 3 loads a PVE copy, task 4 loads a camp copy, task 6 loads a building element, and task 7 loads a virtual enemy object (e.g., monster) model, their load information may also be expressed as load points, as illustrated in FIG. 3.
It should be noted that the service priority of each type of task shown in FIG. 3 is discussed below and is not described here. In addition, the load points shown in FIG. 3 (100000, 4000, 6000, etc.) are only illustrative; other values may be used in practical applications, and the present application is not limited in this respect.
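The mapping of load information into a task's service field can be sketched as follows, using the load-point figures quoted above from FIG. 3. The dictionary layout and function names are illustrative assumptions, not the patent's actual data format.

```python
# Load points per task type, loosely following the figures quoted from
# FIG. 3 in the description; these values are examples only.
LOAD_POINTS = {
    "sub_copy": 100000,       # largest map resource in the virtual game
    "pvp_copy": 4000,
    "pve_copy": 6000,
    "camp_copy": 80000,
    "virtual_carrier": 65,    # smallest loading unit
    "building_element": 80,
}

def make_task(task_id, task_type):
    # The producer writes the load information into the task's
    # service field when the task is created ("mapping").
    return {"id": task_id,
            "type": task_type,
            "service_field": {"load": LOAD_POINTS[task_type]}}

def first_load_info(task):
    # "Demapping": read the load information back out of the service field.
    return task["service_field"]["load"]
```

This mirrors the flow in the description: the producer thread computes and maps the load points at creation time, and the task allocation device later demaps them before deciding where to place the task.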
202. Acquire second load information of the local task queue of each working sub-thread, where the second load information indicates the total load of the second tasks in the corresponding local task queue, a second task being a task currently allocated in that queue.
In this example, for the local task queue of each working sub-thread, the task allocation device may accumulate the load information of the tasks currently allocated in that queue to calculate the total load of the second tasks in the corresponding local task queue, that is, the second load information. Note that a second task is simply a task currently allocated in the local task queue.
For example, take working sub-threads 1 to 3 shown in FIG. 1, each with its own local task queue: working sub-thread 1 corresponds to local task queue 1, working sub-thread 2 to local task queue 2, and working sub-thread 3 to local task queue 3. Using the task types shown in FIG. 3, if local task queue 1 of working sub-thread 1 currently holds task 2 and task 3, the task allocation device may sum the load information of task 2 and task 3 to calculate the second load information of local task queue 1: summing the 4000 load points of task 2 and the 6000 load points of task 3 gives 10000 load points as the current second load information of local task queue 1. The second tasks of local task queue 1 are thus task 2 and task 3. Similarly, if local task queue 2 of working sub-thread 2 currently holds task 4 and task 6, the task allocation device may sum the 80000 load points of task 4 and the 80 load points of task 6 to obtain 80080 load points as the current second load information of local task queue 2; its second tasks are task 4 and task 6.
The second load information of local task queue 3 of working sub-thread 3 is calculated in the same way as that of local task queue 1 of working sub-thread 1, and is not described again here.
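The computation of the second load information in step 202 can be sketched as a simple summation; the dictionary representation of a task is an assumption for illustration.

```python
def second_load_info(local_queue):
    # The total load of a local task queue is the sum of the load
    # points of the tasks currently allocated in it.
    return sum(task["load"] for task in local_queue)

# Local task queue 1 holds task 2 (PVP copy, 4000 points) and
# task 3 (PVE copy, 6000 points); local task queue 2 holds task 4
# (camp copy, 80000 points) and task 6 (building element, 80 points).
local_queue_1 = [{"id": 2, "load": 4000}, {"id": 3, "load": 6000}]
local_queue_2 = [{"id": 4, "load": 80000}, {"id": 6, "load": 80}]
```

With the figures from the example above, this yields 10000 load points for local task queue 1 and 80080 for local task queue 2.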
203. Allocate the first task to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread can execute the first task.
In this example, after obtaining the first load information of the first task and the second load information of the second task in the local task queue of each working sub-thread, the task allocation device may allocate the first task to the local task queue of the target working sub-thread based on the first load information and each second load information. In this way, the target work sub-thread is able to execute the first task in the corresponding local task queue.
Here, the local task queue of the target working sub-thread is the one whose second load information, before the first task is allocated, is the target value among the second load information of all the local task queues. For example, continuing the example in step 202, if the second load information of local task queue 1 of working sub-thread 1 is 10000 load points and that of local task queue 2 of working sub-thread 2 is 80080 load points, the target working sub-thread may be determined by comparing the two, for example by taking the working sub-thread with the smaller second load information as the target, here working sub-thread 1. The task allocation device then allocates the first task to local task queue 1 of working sub-thread 1 based on the first load information and the second load information. In this way, the task allocation device comprehensively considers the load information of the new task and the load of each working sub-thread's local task queue, and evenly distributes new tasks to the queue with the smallest current load, so that the corresponding working sub-thread can execute the task with more system resources available, improving both execution efficiency and performance.
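Step 203, choosing the target working sub-thread as the one with the smallest second load information, can be sketched as follows; the function name and task representation are illustrative, not the patent's actual implementation.

```python
def allocate(first_task, local_queues):
    # Step 203 sketch: choose as target the local task queue whose
    # current total load (second load information) is smallest, and
    # append the new task to it.
    target = min(range(len(local_queues)),
                 key=lambda i: sum(t["load"] for t in local_queues[i]))
    local_queues[target].append(first_task)
    return target
```

Using the example figures, a new task offered to queues with 10000 and 80080 load points lands in the first queue, matching the choice of working sub-thread 1 above.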
In some examples, different types of tasks may have different execution requirements in different scenarios. For example, taking the various loading tasks in the virtual game shown in fig. 3 as an example, tasks such as loading PVP copies have a high real-time requirement and need to be loaded as soon as possible, so a high service priority, for example level 5, may be set for them. By contrast, loading tasks for building elements, virtual enemy object models and the like have a low real-time requirement, so a correspondingly low service priority, such as level 1, may be set for them. Similarly, for the other types of loading tasks shown in fig. 3, corresponding service priorities are set based on indicators such as each task's real-time requirement; for example, the service priorities of the task of loading virtual game sub-copies, the task of loading PVE copies, the task of loading camp copies, and the task of loading virtual carriers may be set to level 4, level 3, level 2, etc., respectively, which can be understood with reference to fig. 3 and will not be repeated here.
The higher the service priority level, the earlier the working sub-thread executes the corresponding task in the local task queue. For example, for level 5, the working sub-thread may be considered to need to execute the task of loading the PVP copy preferentially. For level 1, the working sub-thread may execute the task of loading the virtual enemy object model only after the higher-priority tasks have been executed. In addition, the service priorities of the various loading tasks shown in fig. 3 are only illustrative; level 7, level 8, etc. may also be set in practical applications, which is not limited in the embodiments of the present application.
Based on the above, the working main thread and the working sub-threads can map the service priority of a task into the service field of the corresponding task when generating the task. In this way, after obtaining the first task, the task allocation device may perform demapping processing on the first task to extract the service priority of the first task from the corresponding service field. The service priority of the first task can then indicate the priority level of the first task when it is loaded in the local task queue of the target working sub-thread.
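The mapping and demapping between a task's service priority and its service field could look like the following sketch. The bit layout (priority in the low 3 bits) is an assumption made purely for illustration; the patent does not specify an encoding.

```python
# Illustrative sketch of mapping a task's service priority into a
# "service field" at generation time and demapping it on receipt.
# The bit layout (priority in the low 3 bits) is an assumption.

PRIORITY_BITS = 3  # supports service priority levels 0-7

def map_service_field(task_id: int, priority: int) -> int:
    """Pack the service priority into the low bits of the service field."""
    return (task_id << PRIORITY_BITS) | priority

def demap_service_field(field: int) -> tuple[int, int]:
    """Extract (task_id, service priority) from the service field."""
    return field >> PRIORITY_BITS, field & ((1 << PRIORITY_BITS) - 1)

# e.g. a level-5 task of loading a PVP copy
field = map_service_field(task_id=42, priority=5)
task_id, priority = demap_service_field(field)
```

Any encoding that survives the round trip would serve; the point is that the priority travels inside the task itself, so the allocation device needs no side channel.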
In this way, after allocating the first task to the local task queue of the target working sub-thread, the task allocation device may insert the first task into the appropriate position in that queue based on the service priority of the first task. For example, the task allocation device may determine whether the service priority of the first task is higher than the service priority of the third task and lower than the service priority of the fourth task, where the third task and the fourth task are any two tasks already allocated in the local task queue of the target working sub-thread. When the task allocation device determines that the service priority of the first task is higher than that of the third task and lower than that of the fourth task, it inserts the first task between the third task and the fourth task, that is, the first task is arranged before the third task and after the fourth task. Thus, when the target working sub-thread executes the tasks in its local task queue, it executes the fourth task first, then the first task, and finally the third task. By this method, tasks with higher service priority can be executed preferentially, so that tasks with higher importance levels can be responded to quickly, which improves execution efficiency.
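The priority-ordered insertion described above can be sketched with a sorted list: higher service priorities sit nearer the head, and equal priorities keep first-come-first-served order. The tuple layout and names are illustrative assumptions, not the patented data structure.

```python
# Minimal sketch of priority-ordered insertion into a local task queue:
# tasks with higher service priority sit nearer the head, and equal
# priorities keep arrival order. The (negated priority, sequence, task)
# tuple layout is an illustrative assumption.

import bisect

def insert_by_priority(queue: list, task: str, priority: int, seq: int) -> None:
    """Insert so that higher service priority (e.g. level 5) sorts
    before lower (level 1); seq breaks ties in arrival order."""
    bisect.insort(queue, (-priority, seq, task))

q: list = []
insert_by_priority(q, "load building element model", 1, seq=0)
insert_by_priority(q, "load PVP copy", 5, seq=1)
insert_by_priority(q, "load PVE copy", 3, seq=2)  # the "first task"

# the working sub-thread drains the queue head-first
execution_order = [task for _, _, task in q]
```

Here the level-3 task lands between the level-5 and level-1 tasks, matching the "after the fourth task, before the third task" placement in the text.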
For example, fig. 4 shows a schematic comparison of task response speeds when executing tasks using the scheme of the application versus the existing scheme. As shown in fig. 4, assume the test case consists of a task of loading 150 virtual game copies, a task of loading 50 PVP copies, and a task of loading 20000 building element models, with the generation order of these 20200 tasks being random and not limited. As can be seen from the foregoing description of fig. 3, the service priority of the task of loading the PVP copy is level 5, so the task allocation device may count the average time from generation to completion for the tasks of loading PVP copies. As shown in fig. 4, taking 4, 8 and 16 working sub-threads as examples, under the original scheme no corresponding service priority is set for the tasks, so regardless of whether there are 4, 8 or 16 working sub-threads, the tasks in each working sub-thread's local task queue are still executed in the traditional first-come-first-served order; the average loading time is therefore relatively long, tasks with a high importance level cannot be executed preferentially, and the response speed is slow. In the scheme of the application, new tasks are reasonably distributed to the local task queues of suitable target working sub-threads through the load information and the like, and a corresponding service priority is set for each task, so that tasks with higher service priority can be executed preferentially and tasks with higher importance levels can be responded to quickly, which improves execution efficiency. For example, with 4 working sub-threads, the present scheme can execute the task of loading a PVP copy preferentially over the existing scheme, resulting in a 24.18% reduction in overall average execution time.
Taken together, these results show that the scheme of the application has a great advantage in the execution response time of tasks with higher service priority when task execution capacity is insufficient because there are fewer working sub-threads.
In other examples, after allocating the first task to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, the task allocation device may further update the second load information of the local task queue of the target working sub-thread after determining that the target working sub-thread has executed the first task. For example, the task allocation device may delete the first task from the local task queue of the target working sub-thread to update the second load information of that queue.
In this example, after the target working sub-thread has finished executing the first task, keeping the first task in the corresponding local task queue would require more system resources to support and thereby cause a greater load. Therefore, the task allocation device can delete the first task from the local task queue of the target working sub-thread after the target working sub-thread finishes executing it, and correspondingly subtract the first load information of the first task from the second load information of the queue, thereby updating the total load of the working sub-thread's local task queue in real time.
For example, if the first load information of the first task is 4000 load points and the second load information of the local task queue of the target working sub-thread is 10000 load points, then when the first task is allocated to the local task queue of the target working sub-thread, the corresponding second load information changes to 14000 load points. After the first task is executed, the task allocation device deletes the first task from the local task queue, and the second load information of the local task queue of the target working sub-thread changes back to 10000 load points.
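The bookkeeping in this example can be sketched as follows: load points are added to the queue's second load information on allocation and subtracted again when the finished task is deleted. The class and method names are illustrative assumptions.

```python
# Sketch of the load bookkeeping described in this example: load points
# are added when a task is allocated and subtracted when the finished
# task is deleted from the local task queue. Names are illustrative.

class LocalTaskQueue:
    def __init__(self) -> None:
        self.tasks: dict[str, int] = {}  # task id -> first load information
        self.load = 0                    # second load information (load points)

    def allocate(self, task_id: str, load_points: int) -> None:
        self.tasks[task_id] = load_points
        self.load += load_points

    def complete(self, task_id: str) -> None:
        """Delete the executed task and reduce the queue's total load."""
        self.load -= self.tasks.pop(task_id)

q = LocalTaskQueue()
q.allocate("existing-work", 10000)  # queue already carries 10000 load points
q.allocate("first-task", 4000)      # second load information becomes 14000
q.complete("first-task")            # changes back to 10000 after execution
```

Because allocation and deletion are the only two operations that touch the total, the second load information always equals the sum of the load points of the tasks still in the queue.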
It should be noted that the foregoing examples are merely illustrative, and other examples may be included in the practical application, and the embodiments of the present application are not limited to the foregoing examples.
In the embodiment of the application, a first task, first load information corresponding to the first task, and second load information of the local task queue of each working sub-thread are acquired. The first load information indicates the system resources that need to be consumed when executing the first task, and the second load information indicates the total load of the second tasks in the corresponding local task queue, where a second task is a task currently allocated to that local task queue. On this basis, the first task can be distributed to the local task queue of a suitable working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread can execute the first task. Before the first task is allocated to the local task queue of the target working sub-thread, the second load information of that queue is the target value among the second load information of all local task queues. By this method, the second load information of each working sub-thread's local task queue can be fully considered, a suitable target working sub-thread can be determined from the working sub-threads through the second load information, and the first load information of the current first task can be comprehensively considered when distributing the first task to the local task queue of the target working sub-thread, which improves both the execution efficiency and the performance of the target working sub-thread when it finally executes the first task.
The foregoing description of the solution provided by the embodiments of the present application has been mainly presented in terms of a method. It should be understood that, in order to implement the above-described functions, hardware structures and/or software modules corresponding to the respective functions are included. Those of skill in the art will readily appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device according to the method example, for example, each functional module can be divided corresponding to each function, and two or more functions can be integrated in one processing module. The integrated modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
The task allocation device in the embodiment of the present application is described in detail below, and fig. 5 is a schematic diagram of an embodiment of the task allocation device provided in the embodiment of the present application. As shown in fig. 5, the task allocation device may include an acquisition unit 501 and a processing unit 502.
The acquiring unit 501 is configured to acquire a first task and first load information corresponding to the first task, where the first load information is used to indicate a system resource situation consumed when the first task is executed; and acquiring second load information of a local task queue of each working sub-thread, wherein the second load information is used for indicating the total load condition of a second task in the corresponding local task queue, and the second task is a task which is currently allocated in the corresponding local task queue.
And the processing unit 502 is configured to allocate the first task to a local task queue of a target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread executes the first task, where before allocating the first task to the local task queue of the target working sub-thread, the second load information of the local task queue of the target working sub-thread is a target value in the second load information of each local task queue.
In some optional examples, the obtaining unit 501 is further configured to obtain, after the first task is allocated to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, the service priority of the first task, where the service priority of the first task is used to indicate the priority level when the first task is loaded in the local task queue of the target working sub-thread. The processing unit 502 is configured to, when the service priority of the first task is higher than the service priority of the third task and lower than the service priority of the fourth task, arrange the first task before the third task and after the fourth task, where the third task and the fourth task are any two tasks already allocated in the local task queue of the target working sub-thread.
In other alternative examples, the obtaining unit 501 is configured to: acquiring the first task from a global task queue based on a first identifier, wherein each task in the global task queue is generated by each working sub-thread and/or working main thread, and the first identifier is used for identifying the first task; and acquiring first load information corresponding to the first task from the service field of the first task.
In other alternative examples, the obtaining unit 501 is further configured to: before the first task is obtained from a global task queue based on a first identification, the task generated by the main working thread and the task generated by each working sub-thread are obtained, wherein the task generated by each working sub-thread is the task generated by the corresponding working sub-thread when the task generated by the main working thread is executed. The processing unit 502 is configured to store the task generated by the main working thread and the task generated by each working sub-thread into the global task queue.
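The shared global task queue fed by the working main thread and the working sub-threads could be sketched as below. All names are illustrative assumptions; the patent does not prescribe this data structure, only that tasks generated by the main thread and by each sub-thread are stored into one global task queue.

```python
# Hedged sketch of the global task queue described above: the working
# main thread and the working sub-threads each push the tasks they
# generate into one shared, thread-safe global queue. All names are
# illustrative assumptions.

import queue
import threading

global_task_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()

def main_thread_work() -> None:
    global_task_queue.put(("task-main-1", "load PVP copy"))

def sub_thread_work(i: int) -> None:
    # a task generated by a sub-thread while executing main-thread work
    global_task_queue.put((f"task-sub-{i}", "load building element model"))

threads = [threading.Thread(target=main_thread_work)]
threads += [threading.Thread(target=sub_thread_work, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# the global queue now holds the three generated tasks
generated = {global_task_queue.get_nowait()[0] for _ in range(3)}
```

Using a thread-safe queue lets any thread produce tasks without extra locking, and the allocation step can later fetch each task from it by its identifier.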
In other alternative examples, processing unit 502 is further configured to: after the first task is distributed to the local task queue of the target working sub-thread based on the first load information and each piece of second load information, the second load information of the local task queue of the target working sub-thread is updated after the target working sub-thread is determined to execute the first task.
In other alternative examples, processing unit 502 is configured to: and deleting the first task from the local task queue of the target working sub-thread to update the second load information of the local task queue of the target working sub-thread.
The task allocation device in the embodiment of the present application is described above from the point of view of the modularized functional entity, and the task allocation device in the embodiment of the present application is described below from the point of view of hardware processing. Fig. 6 is a schematic structural diagram of a task allocation device according to an embodiment of the present application. The task allocation device may vary considerably due to configuration or performance. The task allocation device may comprise at least one processor 601, communication lines 607, a memory 603 and at least one communication interface 604.
The processor 601 may be a general-purpose central processing unit (central processing unit, CPU), a microprocessor, an application-specific integrated circuit (application-specific integrated circuit, ASIC), or one or more integrated circuits for controlling the execution of the program of the present application.
Communication line 607 may include a path to communicate information between the above components.
Communication interface 604, using any transceiver-like device, is used to communicate with other devices or communication networks, such as ethernet, radio access network (radio access network, RAN), wireless local area network (wireless local area networks, WLAN), etc.
The memory 603 may be a read-only memory (ROM) or other type of static storage device that may store static information and instructions, a random access memory (random access memory, RAM) or other type of dynamic storage device that may store information and instructions, and the memory may be stand-alone and coupled to the processor via a communication line 607. The memory may also be integrated with the processor.
The memory 603 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 601 for execution. The processor 601 is configured to execute computer-executable instructions stored in the memory 603, thereby implementing the method provided by the above-described embodiment of the present application.
Alternatively, the computer-executable instructions in the embodiments of the present application may be referred to as application program codes, which are not particularly limited in the embodiments of the present application.
In a specific implementation, the task allocation device may comprise a plurality of processors, such as the processor 601 and the processor 602 in fig. 6, as an embodiment. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a specific implementation, as an embodiment, the task allocation device may further include an output device 605 and an input device 606. The output device 605 communicates with the processor 601 and may display information in a variety of ways. The input device 606 is in communication with the processor 601 and may receive input of a target object in a variety of ways. For example, the input device 606 may be a mouse, a touch screen device, a sensing device, or the like.
The task allocation device may be a general-purpose device or a special-purpose device. In a specific implementation, the task allocation device may be a server or the like, or a device having a structure similar to that in fig. 6. The embodiment of the application does not limit the type of the task allocation device.
It should be noted that, the processor 601 in fig. 6 may cause the task allocation device to execute the method in the method embodiment corresponding to fig. 2 by calling the computer-executable instructions stored in the memory 603.
In particular, the functions/implementations of the processing unit 502 in fig. 5 may be implemented by the processor 601 in fig. 6 invoking computer executable instructions stored in the memory 603. The functions/implementation of the acquisition unit 501 in fig. 5 may be implemented through the communication interface 604 in fig. 6.
The embodiment of the present application also provides a computer storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to execute some or all of the steps of any one of the task allocation methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of a method of task allocation as described in any of the method embodiments above.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of elements is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof, and when implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer-executable instructions are loaded and executed on a computer, the processes or functions in accordance with embodiments of the present application are fully or partially produced. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). Computer readable storage media can be any available media that can be stored by a computer or data storage devices such as servers, data centers, etc. that contain an integration of one or more available media. Usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media (e.g., DVD), or semiconductor media (e.g., SSD)), or the like.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method of task allocation, comprising:
acquiring a first task and first load information corresponding to the first task, wherein the first load information is used for indicating the condition of system resources required to be consumed when the first task is executed;
Acquiring second load information of a local task queue of each working sub-thread, wherein the second load information is used for indicating the total load condition of a second task in the corresponding local task queue, and the second task is a task which is currently allocated to the corresponding local task queue;
and distributing the first task to a local task queue of a target working sub-thread based on the first load information and each piece of second load information, so as to be used for the target working sub-thread to execute the first task, wherein before the local task queue of the target working sub-thread is distributed to obtain the first task, the second load information of the local task queue of the target working sub-thread is a target value in the second load information of each local task queue.
2. The method of claim 1, wherein after the assigning the first task to the local task queue of the target work sub-thread based on the first load information and each of the second load information, the method further comprises:
acquiring the service priority of the first task, wherein the service priority of the first task is used for indicating the priority degree when the first task is loaded in a local task queue of the target working sub-thread;
when the service priority of the first task is higher than that of a third task and lower than that of a fourth task, the first task is arranged before the third task and after the fourth task, and the third task and the fourth task are any two tasks which are already allocated in a local task queue of the target working sub-thread.
3. The method according to claim 1 or 2, wherein the acquiring the first task and the first load information corresponding to the first task includes:
Acquiring the first task from a global task queue based on a first identifier, wherein each task in the global task queue is generated by each working sub-thread and/or working main thread, and the first identifier is used for identifying the first task;
And acquiring first load information corresponding to the first task from the service field of the first task.
4. A method according to claim 3, wherein prior to the obtaining the first task from a global task queue based on the first identification, the method further comprises:
Acquiring a task generated by the main working thread and a task generated by each working sub-thread, wherein the task generated by each working sub-thread is a task generated by the corresponding working sub-thread when executing the task generated by the main working thread;
And storing the tasks generated by the working main thread and the tasks generated by each working sub thread into the global task queue.
5. The method of any of claims 1-2, wherein after the assigning the first task to the local task queue of the target work sub-thread based on the first load information and each of the second load information, the method further comprises:
and after the target working sub-thread is determined to execute the first task, updating second load information of a local task queue of the target working sub-thread.
6. The method of claim 5, wherein updating the second load information of the local task queue of the target work sub-thread after determining that the target work sub-thread performs the first task comprises:
And deleting the first task from the local task queue of the target working sub-thread to update the second load information of the local task queue of the target working sub-thread.
7. A task assigning apparatus, comprising:
the system comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a first task and first load information corresponding to the first task, and the first load information is used for indicating the condition of system resources consumed when the first task is executed;
The acquiring unit is configured to acquire second load information of a local task queue of each work sub-thread, where the second load information is used to indicate a total load condition of a second task in the corresponding local task queue, and the second task is a task that has been allocated currently in the corresponding local task queue;
The processing unit is configured to allocate the first task to a local task queue of a target working sub-thread based on the first load information and each piece of second load information, so that the target working sub-thread executes the first task, where before the local task queue of the target working sub-thread is allocated the first task, the second load information of the local task queue of the target working sub-thread is a target value in the second load information of each local task queue.
8. A task allocation device, characterized in that the task allocation device comprises: an input/output (I/O) interface, a processor, and a memory, the memory having program instructions stored therein;
the processor is configured to execute program instructions stored in a memory and to perform the method of any one of claims 1 to 6.
9. A computer readable storage medium comprising instructions which, when run on a computer device, cause the computer device to perform the method of any of claims 1 to 6.
10. A computer program product, characterized in that the computer program product comprises instructions which, when run on a computer device or a processor, cause the computer device or the processor to perform the method of any of claims 1 to 6.
CN202211520900.XA 2022-11-30 2022-11-30 Task allocation method, device and storage medium Pending CN118113451A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211520900.XA CN118113451A (en) 2022-11-30 2022-11-30 Task allocation method, device and storage medium


Publications (1)

Publication Number Publication Date
CN118113451A true CN118113451A (en) 2024-05-31

Family

ID=91207500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211520900.XA Pending CN118113451A (en) 2022-11-30 2022-11-30 Task allocation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN118113451A (en)

Similar Documents

Publication Publication Date Title
US10558498B2 (en) Method for scheduling data flow task and apparatus
US10664308B2 (en) Job distribution within a grid environment using mega-host groupings of execution hosts
CN108776934B (en) Distributed data calculation method and device, computer equipment and readable storage medium
US20150172204A1 (en) Dynamically Change Cloud Environment Configurations Based on Moving Workloads
US11966792B2 (en) Resource processing method of cloud platform, related device, and storage medium
CN110166507B (en) Multi-resource scheduling method and device
CN109960575B (en) Computing capacity sharing method, system and related equipment
CN109981702B (en) File storage method and system
US11936568B2 (en) Stream allocation using stream credits
WO2016074130A1 (en) Batch processing method and device for system invocation commands
US11418583B2 (en) Transaction process management by dynamic transaction aggregation
CN114896068A (en) Resource allocation method, resource allocation device, electronic device, and storage medium
CN115658311A (en) Resource scheduling method, device, equipment and medium
US11144359B1 (en) Managing sandbox reuse in an on-demand code execution system
CN114116173A (en) Method, device and system for dynamically adjusting task allocation
CN111291018A (en) Data management method, device, equipment and storage medium
US20220407817A1 (en) Resource allocation using distributed segment processing credits
CN115361349B (en) Resource using method and device
CN118113451A (en) Task allocation method, device and storage medium
CN114090234A (en) Request scheduling method and device, electronic equipment and storage medium
CN114237902A (en) Service deployment method and device, electronic equipment and computer readable medium
CN111796934B (en) Task issuing method and device, storage medium and electronic equipment
CN115878309A (en) Resource allocation method, device, processing core, equipment and computer readable medium
GB2504812A (en) Load balancing in a SAP (RTM) system for processors allocated to data intervals based on system load
CN111800446A (en) Scheduling processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication