CN112905326A - Task processing method and device - Google Patents
- Publication number
- Publication: CN112905326A; Application: CN202110187971.1A
- Authority
- CN
- China
- Prior art keywords
- task
- task processing
- target
- thread
- processing thread
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5011—Pool
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
The application provides a task processing method and a task processing device. The task processing method comprises the following steps: receiving at least two task processing requests, wherein each task processing request carries a target task; determining the task level of each target task based on a preset task-level rule, and sorting all target tasks by task level to determine a target task queue; acquiring the current load of each task processing thread in a task processing thread pool, and sequentially determining, based on the current load, the task processing thread corresponding to each target task in the target task queue; and processing each target task with its corresponding task processing thread.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a task processing method. The application also relates to a task processing system, a task processing device, a computing device and a computer readable storage medium.
Background
In audio and video editing, rendering operations usually need to be performed on the acquired audio and video metadata, which is time-consuming; if the rendering chain to be completed at one time is too long, the currently rendered data cannot be displayed in time. In most audio and video editing scenarios, the GPU (Graphics Processing Unit) is not used effectively: when batch rendering commands are processed synchronously through a single channel, the GPU often sits idle and waiting, so the channel transmission efficiency of the GPU during editing is low and rendering tasks take a long time to process.
Disclosure of Invention
In view of this, an embodiment of the present application provides a task processing method. The application also relates to a task processing system, a task processing device, a computing device and a computer readable storage medium, which are used for solving the problem that the processing process consumes a long time when a single rendering thread processes a load task in the prior art.
According to a first aspect of embodiments of the present application, there is provided a task processing method, including:
receiving at least two task processing requests, wherein each task processing request carries a target task;
determining the task level of each target task based on a preset task level rule, and sequencing all target tasks based on the task levels to determine a target task queue;
acquiring the current load of each task processing thread in a task processing thread pool, and sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load;
and processing each target task based on the task processing thread corresponding to each target task.
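The four steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the names `Worker` and `dispatch`, and the representation of a request as a `(task, level)` pair, are assumptions. Tasks are sorted into a queue by descending level, then each is handed to the currently least-loaded thread.

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    """Stand-in for one task processing thread in the pool."""
    name: str
    load: int = 0                      # current load, in levels
    tasks: list = field(default_factory=list)

def dispatch(requests, workers):
    # Steps 1-2: each request carries a (task name, task level) pair;
    # sort the target tasks into a queue by descending task level.
    queue = sorted(requests, key=lambda t: t[1], reverse=True)
    # Steps 3-4: assign each task to the least-loaded worker, then
    # account for the load the new task contributes.
    for name, level in queue:
        worker = min(workers, key=lambda w: w.load)
        worker.tasks.append(name)
        worker.load += level
    return workers

pool = [Worker("render-1"), Worker("render-2")]
dispatch([("background", 3), ("contour", 5), ("color", 4)], pool)
```

Here the level-5 contour task goes to the first idle worker, and the remaining tasks fill in wherever the running load is lowest.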
According to a second aspect of embodiments of the present application, there is provided a task processing system including: a management device and an execution device;
the management device is configured to receive a task processing request and distribute a target task carried in the task processing request to the execution device;
the execution device comprises a rendering module, a background module and a display module,
the rendering module is configured to render the target task, the background module is configured to provide the background for target task processing, and the display module is configured to present the processing result of the target task, wherein the rendering module executes the steps of the task processing method.
According to a third aspect of embodiments of the present application, there is provided a task processing apparatus including:
the task receiving module is configured to receive at least two task processing requests, wherein each task processing request carries a target task;
the first determining module is configured to determine a task level of each target task based on a preset task level rule, and sort all the target tasks based on the task levels to determine a target task queue;
the second determining module is configured to acquire a current load amount of each task processing thread in a task processing thread pool, and sequentially determine a task processing thread corresponding to each target task in the target task queue based on the current load amount;
and the processing module is configured to process each target task based on the task processing thread corresponding to each target task.
According to a fourth aspect of embodiments herein, there is provided a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, the processor implementing the steps of the task processing method when executing the instructions.
According to a fifth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the task processing method.
The task processing method provided by the application receives at least two task processing requests, wherein each task processing request carries a target task; determining the task level of each target task based on a preset task level rule, and sequencing all target tasks based on the task levels to determine a target task queue; acquiring the current load of each task processing thread in a task processing thread pool, and sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load; and processing each target task based on the task processing thread corresponding to each target task.
According to the task scheduling method and device of this application, multiple target tasks are sorted according to the task-level rule to determine the task execution order, so that complex tasks can be processed first and simple tasks processed later, or abandoned when their waiting time grows too long, which reduces overall time consumption. Meanwhile, the thread best able to take on a task can be determined from the current load of each task processing thread in the task processing thread pool. By determining both the task level of each task and its task processing thread, complex target tasks are distributed to threads with smaller loads, achieving reasonable task distribution, saving task waiting time, shortening the time consumed in processing multiple target tasks, and improving the processing efficiency of the target tasks.
Drawings
FIG. 1 is a diagram of a system architecture for a task processing system for performing rendering tasks according to an embodiment of the present application;
FIG. 2 is a flowchart of a task processing method according to an embodiment of the present application;
FIG. 3 is a schematic processing structure diagram of a task processing system applied to image rendering according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of thread management and allocation applied to rendering task processing by a task processing method according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a task processing device according to an embodiment of the present application;
fig. 6 is a block diagram of a computing device according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The terminology used in the one or more embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the present application. As used in one or more embodiments of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present application refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, etc. may be used herein in one or more embodiments of the present application to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first aspect may be termed a second aspect, and, similarly, a second aspect may be termed a first aspect, without departing from the scope of one or more embodiments of the present application. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
First, the noun terms to which one or more embodiments of the present application relate are explained.
- GPU (Graphics Processing Unit): also called the display core, visual processor, or display chip; a microprocessor dedicated to image operations on personal computers, workstations, game consoles, and some mobile devices (such as tablet computers and smartphones).
- GL Environment (OpenGL Environment): refers to the set of closures that currently provide the GPU communication channel and associated critical resources.
- Load (Burden): the processing saturation of the current thread, expressed in levels.
- Texture (Texture): graphical data used primarily to wrap different objects on a screen.
- Execution Thread (Execute Thread): the thread that executes the current task-slice queue.
In the present application, a task processing method is provided, and the present application relates to a task processing system, a task processing apparatus, a computing device, and a computer readable storage medium, which are described in detail in the following embodiments one by one.
The task processing method provided by the embodiments of the present specification can be applied to any scenario in which multiple tasks need to be allocated to multiple threads for processing, including but not limited to processing complex rendering tasks with multiple threads. For ease of understanding, the embodiments take multi-threaded processing of rendering tasks as the example application, without being limited thereto.
In the process of rendering video frames, if a single frame is processed, the problem of excessive rendering time can be mitigated with the traditional means of delayed playback. For continuous video frames, frame dropping is used instead: when the rendering time exceeds the frame interval, subsequent frames are selectively skipped, which extends the rendering window time by, in effect, reducing the refresh rate.
In the traditional processing scheme, when complex tasks are processed by a single rendering thread, processing takes too long and is hard to control; feeding GPU processing commands through a single channel reduces the effective utilization of the GPU; and a single rendering environment is not conducive to classified rendering and background rendering. Based on this, the task processing method provided in the embodiments of the present specification renders with multiple threads, increases the input rate of GPU processing commands, and enlarges the effective processing window between video frames through asynchronous processing and time-division multiplexing, thereby improving the effective GPU occupancy of the system.
Referring to fig. 1, fig. 1 is a diagram illustrating a system architecture applied by a task processing system for performing rendering tasks according to an embodiment of the present application.
The task processing method provided by this embodiment is applied to a system for executing rendering tasks. For example, part A in fig. 1 is the rendering-task execution environment device of the task processing system, and part B in fig. 1 is its rendering environment management device. The rendering-task execution device includes a rendering thread pool (containing N rendering threads), a main environment thread, a foreground display thread, and a background preprocessing thread; the rendering environment management device includes a rendering environment, a basic environment, and a display environment. It should be noted that part B in fig. 1 adopts GL environment management in this embodiment: the basic environment provides a shared texture space for the display environment and the rendering environment, the rendering environment depends on the basic environment to provide a background rendering environment for the rendering tasks of the rendering threads, and the display environment depends on the basic environment as the associated foreground display module.
As can be seen from the system architecture in fig. 1, the rendering environment and the display environment in the rendering environment management apparatus share the basic environment to achieve texture-space sharing during execution of rendering tasks; with the basic environment as the center and as a template, the rendering environment and the display environment are created in real time according to specific requirements. In practical applications, for complex rendering tasks, threads and rendering tasks can be dynamically allocated according to the intensity of the current rendering load. After rendering tasks are distributed to the rendering-task execution device in part A of fig. 1, the rendering thread pool executes them according to the selected strategy; with the support of the main environment thread, the foreground display thread, and the background preprocessing thread, execution of complex rendering tasks is automated and a higher GPU utilization is achieved.
Fig. 2 is a flowchart illustrating a task processing method according to an embodiment of the present application, which specifically includes the following steps:
step 202: receiving at least two task processing requests, wherein each task processing request carries a target task.
The target task may be understood as any task that needs to be processed by the task processing thread pool, such as a contour task, an angle task, or a color task for rendering a certain video frame.
Specifically, the server receives at least two task processing requests, each task processing request carries a target task, it needs to be noted that a task type of each target task may be specifically set according to an actual application, and this is not limited in this embodiment of the present specification.
In practical applications, taking a task processing request for rendering a certain frame of animation picture as an example, the server may receive at least two task processing requests for rendering the frame of animation picture, where the task processing requests may include a background task rendering request, an animal contour task rendering request, an animal color task rendering request, and the like, and then the target task may include a background task rendering, an animal contour task rendering, an animal color task rendering, and the like.
Step 204: and determining the task level of each target task based on a preset task level rule, and sequencing all target tasks based on the task levels to determine a target task queue.
For example, the set task-level rule may divide tasks into 10 levels; if both the complexity value of a task and the time value of its expected processing time reach the maximum upper limit, the task level of that task may be set to level 10.
The target task queue may be understood as a task queue in which a plurality of target tasks are ordered based on task level.
Specifically, for the received target tasks, the server determines a task complexity value and a processing time value of each target task, determines the task level of each target task according to the preset task-level rule, and sorts the target tasks in descending order of task level, thereby determining a task queue of target tasks ordered by level.
Following the above example of rendering a frame of an animation picture, the received target tasks may be a background rendering task, an animal-contour rendering task, and an animal-color rendering task. Based on the preset task-level rule, the background rendering task may be determined to be level 3, the animal-contour rendering task level 5, and the animal-color rendering task level 4; the target task queue is then: animal-contour rendering task (level 5), animal-color rendering task (level 4), background rendering task (level 3).
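As a hedged illustration of such a preset task-level rule: the patent only specifies that a task whose complexity value and time value both reach the upper limit gets the top level, so the scaling constants and the max-of-two-scores choice below are assumptions for illustration.

```python
def task_level(complexity, time_ms, max_level=10,
               max_complexity=100, max_time_ms=1000):
    # Scale the complexity value and the processing-time value onto
    # [0, 1], take the larger of the two, and map it onto 1..max_level.
    c = min(complexity / max_complexity, 1.0)
    t = min(time_ms / max_time_ms, 1.0)
    return max(1, round(max(c, t) * max_level))

# Hypothetical (complexity, time) values chosen so the levels match
# the example: contour -> 5, color -> 4, background -> 3.
tasks = {"contour": (50, 480), "color": (40, 350), "background": (25, 300)}
queue = sorted(tasks, key=lambda n: task_level(*tasks[n]), reverse=True)
```

A task that maxes out both scores lands at level 10, reproducing the upper-limit case described above.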
Step 206: and acquiring the current load of each task processing thread in the task processing thread pool, and sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load.
The task processing thread pool may be understood as a task processing module composed of a plurality of task processing threads, and the current load may be understood as a processing saturation of a current thread in the task processing threads.
Specifically, after at least two task processing requests have been received and the target task in each request has been placed into the target task queue according to the task-level rule, the load of the current processing task of each task processing thread in the task processing thread pool is obtained (this can also be understood as the saturation of thread processing). After the current load of each task processing thread is determined, the task processing thread corresponding to each target task in the target task queue is determined in turn according to that load.
In order to improve the processing efficiency of processing the target task, the occupancy rate of a GPU is effectively utilized, the current processing efficiency of each task processing thread is determined by acquiring the current load of each task processing thread in a task processing thread pool, and then a proper task processing thread is distributed to the target task in the target task queue; specifically, the task processing method provided in the embodiment of the present specification further includes:
determining the priority of each task processing thread, and determining the current load of each task processing thread based on the priority;
and adjusting each task processing thread in the task processing thread pool based on the current load capacity of each task processing thread.
The priority can be understood as the priority of each task processing thread, and in practical application, the priority of the task processing thread for processing the current task can be represented.
In practical application, resource adjustment is firstly carried out on task processing threads in a task processing thread pool, the priority of each task processing thread is determined, the current load of each task processing thread is determined according to the priority, and each task processing thread in the task processing thread pool is adjusted based on the current load.
It should be noted that, in the whole task processing process, the resources of the task processing threads in the task processing thread pool are constantly adjusted to adapt to the subsequent assignment of the target task to the appropriate task processing thread for execution.
In the task processing method provided in the embodiments of the present specification, the resource of the task processing thread in the task processing thread pool is adjusted, so that the target task is subsequently allocated to a proper task processing thread for processing, and the processing efficiency of the task processing thread is further improved.
In addition, the historical load capacity, the load coefficient and the running time of the historical processing thread of each task processing thread in the task processing thread pool are obtained; and calculating the current load capacity of each task processing thread based on the historical load capacity, the load coefficient and the historical processing thread running time.
The historical load amount can be understood as the accumulated load (in levels) of the total processing time consumed by the task processing thread in the past; the load coefficient can be understood as a historical weighting coefficient, used in the regression calculation over subsequently accumulated load; and the historical processing-thread running time can be understood as the recorded time (in ms) for which the current thread has run.
Specifically, after acquiring the historical load amount, the load coefficient and the historical processing thread running time of each task processing thread in the task thread pool, the server calculates the current load amount of each task processing thread based on the historical load amount, the load coefficient and the historical processing thread running time of each task processing thread.
In practical application, the current load of a task processing thread is calculated as historical processing-thread running time / load unit + historical load × load coefficient, where the load calculation unit is 100 ms per level. For example, if the obtained historical load of the task processing thread is 10 levels, the load factor is 0.1, and the running time of the historical processing thread is 100 ms, then according to the calculation method provided in this embodiment: 100 ms / (100 ms/level) + 10 levels × 0.1 = 2 levels.
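The load formula above can be written down directly. `LOAD_UNIT_MS` comes from the stated 100 ms/level unit; the function name and signature are illustrative, not from the patent.

```python
LOAD_UNIT_MS = 100  # load calculation unit: 100 ms of runtime per level

def current_load(history_runtime_ms, history_load, load_factor):
    # current load = historical runtime / load unit
    #              + historical load * load coefficient
    return history_runtime_ms / LOAD_UNIT_MS + history_load * load_factor
```

With the values from the example, `current_load(100, 10, 0.1)` gives 1 + 1 = 2 levels.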
In the task processing method provided in the embodiments of the present description, the historical load amount, the load coefficient, and the running time of the historical processing thread of each task processing thread are obtained, and the current load amount of the task processing thread is determined, so that a corresponding target task can be subsequently allocated to each task processing thread according to the current load amount, and the task processing efficiency is improved.
Further, the determining the priority of each task processing thread and the current load amount of each task processing thread based on the priority comprises:
s1: determining the initial priority of each task processing thread in the task processing thread pool, and initializing the initial priority;
s2: determining a first priority of each task processing thread in the task processing thread pool, and judging whether the first priority is greater than or equal to the highest priority,
if yes, determining a first to-be-detected time of each task processing thread, taking the first priority as the initial priority based on the first to-be-detected time, and continuing to execute step S1;
if not, judging whether the first priority is more than or equal to 1 and less than the highest priority;
if yes, determining second time to be detected of each task processing thread, taking the first priority as the initial priority based on the second time to be detected, and continuing to execute step S1;
if not, determining the current load capacity of each task processing thread under the condition that the first priority is less than 1.
The time to be detected can be understood as the time required for waiting for the next detection of the thread after the thread is detected, and it should be noted that the first time to be detected is different from the second time to be detected.
Specifically, an initial priority is determined for each task processing thread in the task processing thread pool, and the initial priority is first initialized, where initialization decrements the initial priority by one to yield the first priority of each task processing thread. Whether the first priority is greater than or equal to the highest priority is then judged. When the first priority of a task processing thread is greater than or equal to the highest priority, the first time to be detected of that thread is determined, the first priority is taken as the initial priority, and priority initialization continues. When the first priority is greater than or equal to 1 and smaller than the highest priority, the second time to be detected of the thread is determined, the first priority is again taken as the initial priority, and priority initialization continues. When the first priority of a task processing thread is less than 1, the current load of that thread is determined.
It should be noted that the initial priority of the task processing thread may be preset by the current load of the thread, and the highest priority of the task processing thread may also be preset.
In practical application, by judging the priority of the task processing thread, the processing degree of the current task of each task processing thread or the processing speed of the target task can be judged, and further the task processing thread can be adjusted.
For example, suppose the task processing thread pool contains thread 1, thread 2, and thread 3, with initial priorities of 1, 4, and 6 respectively, and the highest priority is 5. Initializing reduces each initial priority by one. For thread 1, the processed priority is 0; since this first priority is smaller than 1, the current load of thread 1 is determined. For thread 2, the processed priority is 3; since this is greater than or equal to 1 and less than the highest priority 5, the second time to be detected of thread 2 is determined, and priority detection continues with the processed priority 3 as the initial priority. For thread 3, the processed priority is 5; since this equals the highest priority 5, the first time to be detected of thread 3 is determined, and priority detection continues with the processed priority 5 as the initial priority, based on the first time to be detected.
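One possible shape of this priority-detection step, under stated assumptions: the wait-time constants, the dictionary layout, and the function name are all illustrative, since the patent does not fix concrete values for the two times to be detected.

```python
HIGHEST_PRIORITY = 5
FIRST_WAIT_MS = 200    # hypothetical first time-to-be-detected
SECOND_WAIT_MS = 100   # hypothetical second time-to-be-detected

def detect(thread):
    # Initialization: reduce the initial priority by one.
    thread["priority"] -= 1
    p = thread["priority"]
    if p >= HIGHEST_PRIORITY:          # re-detect after the first wait
        return ("recheck", FIRST_WAIT_MS)
    if p >= 1:                         # re-detect after the second wait
        return ("recheck", SECOND_WAIT_MS)
    return ("measure_load", 0)         # priority < 1: measure current load

# The three threads from the example, with initial priorities 1, 4, 6.
threads = [{"priority": 1}, {"priority": 4}, {"priority": 6}]
results = [detect(t) for t in threads]
```

Only thread 1 drops below priority 1 on this pass and proceeds to load measurement; the other two are scheduled for re-detection.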
In the task processing method provided in the embodiments of the present specification, the priority of each thread is determined and dynamically adjusted, so that the threads in the task processing thread pool are adjusted and thread task processing efficiency is improved.
Further, the adjusting each task processing thread in the task processing thread pool based on the current load amount of each task processing thread includes:
judging whether the current load amount is larger than or equal to the highest load amount or not based on the current load amount of each task processing thread,
if so, marking the priority of the task processing thread, and adjusting the priority of the task processing thread to be a second priority.
Specifically, the current load amount of each task processing thread may be determined, and whether the current load amount is greater than or equal to the highest load amount is judged; when the current load amount of a task processing thread is greater than or equal to the highest load amount, the priority of that task processing thread is marked and adjusted to the second priority.
In practical applications, marking the priority of a task processing thread raises and adjusts that priority; the adjusted priority can then be judged in subsequent detections, and the thread resources of the task processing thread pool adjusted accordingly.
In the task processing method provided in the embodiment of the present specification, the priority of the task processing thread is adjusted, so as to subsequently determine the current load of the task processing thread, and further implement resource allocation for the threads in the thread pool.
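A minimal sketch of this load check follows. The highest-load threshold and the boolean flag standing in for the marking step are illustrative assumptions:

```python
MAX_LOAD = 100  # assumed highest load amount for illustration

def adjust_priority_by_load(current_load, priority):
    """Mark the thread and raise its priority by one (the 'second priority')
    when its current load is at or over the highest load amount."""
    if current_load >= MAX_LOAD:
        return priority + 1, True   # adjusted to the second priority, marked
    return priority, False          # below the threshold: unchanged, unmarked
```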
Further, after adjusting each task processing thread in the task processing thread pool based on the current load amount of each task processing thread, the method further includes:
determining whether the current load amount is greater than a minimum load amount and less than the maximum load amount,
if so, determining the destroy thread state of the task processing thread, and destroying the task processing thread under the condition that the destroy thread state is determined to meet the preset destroy condition.
The destroy thread state may be understood as a counter recording how many consecutive detections a task processing thread has remained eligible for destruction.
In practical applications, when it is determined that the current load amount of a task processing thread is greater than the lowest load amount and less than the highest load amount, the destroy thread state of the task processing thread is increased by 1. It should be noted that the destroy thread state may initially be set to 0; after it is increased by 1, the thread is destroyed in a delayed manner, waiting for the next detection rather than being destroyed immediately. When the preset destroying condition is met, the task processing thread is destroyed. A thread with a lower priority must wait to be polled, while threads with higher priority are scheduled earlier in time when target tasks are assigned for processing.
The task processing method provided in the embodiments of the present description implements reasonable adjustment of threads in a task processing thread pool by determining a thread destroying state of a task processing thread.
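The delayed-destruction logic above can be sketched as follows. The destruction threshold is an assumption (the patent leaves the preset destroying condition unspecified):

```python
DESTROY_THRESHOLD = 3  # assumed preset destroying condition for illustration

def check_destruction(destroy_state, current_load, min_load, max_load):
    """Increment the destroy thread state when the load sits strictly between the
    bounds; destroy only once the counter reaches the threshold, not immediately."""
    if min_load < current_load < max_load:
        destroy_state += 1                       # delayed destruction: count, don't kill
    should_destroy = destroy_state >= DESTROY_THRESHOLD
    return destroy_state, should_destroy
```

A thread whose load stays in the mid-range across three consecutive detections would thus be destroyed on the third pass, while a lightly loaded thread is never counted.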
After the ordering of the plurality of target tasks in the target task queue is determined, the sorted target tasks are distributed in that order. Specifically, the sequentially determining a task processing thread corresponding to each target task in the target task queue based on the current load amount includes:
determining a task processing thread corresponding to the ith target task in the target task queue based on the current load amount of each task processing thread, wherein i belongs to [1, n ], and n is the number of target tasks in the target task queue;
it is determined whether i is greater than n,
and if not, increasing the i by 1, and continuously executing the task processing thread corresponding to the ith target task in the target task queue determined based on the current load.
Specifically, a task processing thread corresponding to the ith target task in the target task queue can be determined based on the current load of each task processing thread, wherein i is a positive integer and belongs to [1, n ], whether i is larger than n is judged, if not, i is increased by 1, and the task processing thread corresponding to the ith target task in the target task queue is determined based on the current load of the task processing thread.
In practical applications, after the current load amount of each task processing thread in the task processing thread pool is determined, a task processing thread for executing the task may be determined for the 1st target task in the target task queue, and it is judged whether the index exceeds n, where n represents the number of all target tasks in the target task queue; the n target tasks in the target task queue are thus sequentially allocated to task processing threads for processing.
For example, if the target task queue has 5 target tasks, the task processing thread corresponding to the 1st target task is determined based on the current load amount of each task processing thread; by judging after each allocation whether i is greater than 5, the target tasks in the target task queue are cyclically allocated, in sequence, to their corresponding task processing threads.
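The loop over i in [1, n] can be sketched as below. The per-thread load dictionary, the unit-cost accounting, and the least-loaded selection rule are illustrative assumptions layered onto the loop structure:

```python
def assign_tasks(target_queue, threads_load):
    """Assign the ith target task (i in [1, n]) in order; stop once i exceeds n."""
    assignment = {}
    n = len(target_queue)               # n: number of target tasks in the queue
    i = 1
    while i <= n:                       # "judge whether i is greater than n"
        task = target_queue[i - 1]
        # assumed policy: pick the thread with the smallest current load
        thread = min(threads_load, key=threads_load.get)
        assignment[task] = thread
        threads_load[thread] += 1       # hypothetical unit load per task
        i += 1                          # "increase i by 1 and continue"
    return assignment
```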
In the task processing method provided in the embodiments of the present description, the current load amount of each task processing thread is determined, and the target tasks in the target task queue are sequentially allocated to corresponding task processing threads, so that the target tasks are divided among different task processing threads for processing.
The sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load amount includes:
acquiring the execution time of a current task in each task processing thread, and determining the accumulated load of each task processing thread based on the execution time;
determining whether the accumulated load amount is less than a maximum load amount,
and if so, determining a task processing thread corresponding to each target task in the target task queue based on the accumulated load.
The accumulated load amount may be understood as a current task load amount in the task processing thread within a preset time period.
In practical application, the execution time of the current task in each task processing thread is obtained, the accumulated load of each task processing thread is determined based on the execution time, and the task processing thread corresponding to each target task in the target task queue is determined based on the accumulated load under the condition that the accumulated load is less than the maximum load.
The task processing method provided in the embodiments of the present description determines, through an accumulated load amount, a target task that can be executed by each task processing thread, and improves efficiency of target task allocation.
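The accumulated-load gate above can be sketched as a simple admission check. The highest-load value and the cost units are assumptions for illustration:

```python
MAX_LOAD = 100.0  # assumed highest load amount

def can_enqueue(current_execution_times, new_task_cost, max_load=MAX_LOAD):
    """Sum the execution-time costs of the thread's current tasks and accept the
    new target task only while the accumulated load stays below the highest load."""
    accumulated = sum(current_execution_times)   # accumulated load from execution time
    return accumulated + new_task_cost < max_load
```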
The task processing thread with the smallest current load amount in the task processing thread pool may be determined as the task processing thread corresponding to a target task. Specifically, after determining the accumulated load amount of each task processing thread based on the execution time, the method further includes:
and under the condition that the accumulated load is greater than or equal to the highest load, determining the task processing thread with the minimum current load of the task processing threads as the task processing thread corresponding to the target task.
In practical application, when the accumulated load of the task processing threads is greater than or equal to the maximum load, the task processing thread with the minimum current load is determined as the task processing thread corresponding to the target task in the task processing thread pool.
When a plurality of target tasks need to be allocated, for each allocation a task processing thread with the smallest current load amount is selected from the task processing thread pool as the task processing thread corresponding to that target task. For example, taking a task processing request for rendering a certain frame of an animation, the target tasks in the target task queue are a render-background task, a render-animal-outline task and a render-animal-color task; after sorting by task processing level, the target task queue is determined to be the render-animal-outline task, the render-animal-color task and the render-background task. The first target task in the queue (the render-animal-outline task) is determined as the target task to be executed, and thread 1, having the smallest current load amount, is determined as the task processing thread for the render-animal-outline task.
In another embodiment, when target tasks are allocated to processing threads one by one and it is determined that no thread's current load space can absorb the load consumption of the target task to be allocated, it is first judged whether the current number of threads in the task processing thread pool is smaller than the preset number of threads for the pool; if so, a new task processing thread is created, and the newly added task processing thread serves as the target thread for the target task to be allocated; if not, the thread with the smallest current load in the task processing thread pool is determined as the target processing thread corresponding to the target task.
In the embodiments of the present specification, allocating each target task to the thread with the smallest current load among the task processing threads not only prevents any one thread's execution burden from becoming too heavy, but also distributes the plurality of target tasks across the threads of the task processing thread pool according to this policy, achieving multi-threaded parallel processing of the target tasks with a synchronized processing time sequence, thereby optimizing the processing procedure and increasing the processing speed.
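The thread-selection rule just described can be sketched as follows, with loads represented as a simple list; the list representation and the growth cap are illustrative assumptions:

```python
def pick_thread(pool_loads, max_pool_size):
    """When no existing thread can absorb the task: grow the pool if it is below
    its preset size, otherwise fall back to the least-loaded existing thread."""
    if len(pool_loads) < max_pool_size:
        pool_loads.append(0)               # create a new task processing thread
        return len(pool_loads) - 1         # index of the newly added thread
    # pool is full: thread with the smallest current load becomes the target
    return min(range(len(pool_loads)), key=pool_loads.__getitem__)
```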
After the target tasks are allocated among the task processing threads, each target task to be executed is deleted once processed. Specifically, the processing each target task based on the task processing thread corresponding to each target task includes:
and processing the target task to be executed based on the target task processing thread, and deleting the target task to be executed after the processing is finished.
In practical applications, in order to further save thread processing resources, after each target task processing thread completes the processing of its target task to be executed, that target task is deleted, reducing the memory footprint of the task processing thread.
Taking the rendering of a certain animation frame as an example, thread 1 is allocated the render-animal-outline task; after the processing is completed, that task is deleted to free thread processing resources.
In the task processing method provided in the embodiments of the present specification, after the target task processing thread completes processing of the target task to be executed, the target task to be executed is deleted, so that not only can processing resources of the thread be saved, but also processing efficiency of the thread can be improved.
Step 208: and processing each target task based on the task processing thread corresponding to each target task.
Specifically, after each task processing thread is allocated its corresponding target tasks, the allocated target tasks are processed. It should be noted that each task processing thread may process a plurality of target tasks, which is not limited in the embodiments of the present disclosure; the target tasks executed in each task processing thread may be determined according to that thread's specific load condition.
For thread distribution, tasks are preferentially filled into low-load threads according to the priority of the target task processing threads, and when a task filling request would exceed the maximum load, the task processing thread pool creates new threads for distribution. Specifically, the sequentially determining a task processing thread corresponding to each target task in the target task queue based on the current load amount includes:
acquiring the number of all task processing threads in a task processing thread pool under the condition that the accumulated load is greater than or equal to the highest load;
under the condition that the number of all task processing threads is smaller than the preset thread number and unallocated target tasks exist in the target task queue, creating a new task processing thread in the task processing thread pool;
and determining the current load capacity of the new task processing thread, and determining the task processing thread corresponding to each target task in the target task queue based on the current load capacity.
Specifically, in the process of allocating the target tasks one by one, when it is determined that no thread's current load space can absorb the load consumption of the target task to be allocated, the server may obtain the number of all task processing threads in the task processing thread pool; upon determining that this number is smaller than the preset number of threads, and that unallocated target tasks remain in the target task queue, a new task processing thread may be created in the task processing thread pool, so that the not-yet-allocated target tasks in the target task queue can subsequently be allocated to the new task processing thread for processing.
In practical application, in order to allocate an unallocated target task in the target task queue to a proper task processing thread for task processing, a new task processing thread can be created in the task processing thread pool for processing the target task, so that the processing time of a plurality of target tasks is shortened, and the processing speed of the plurality of target tasks is increased.
In the embodiments of the present specification, when it is determined that the target task queue still contains unallocated target tasks, a new task processing thread is created in the task processing thread pool to process them, ensuring that no task processing thread becomes overburdened and further increasing the processing speed.
In summary, in the task processing method provided in the embodiments of the present specification, the plurality of target tasks are sorted according to the task level rule to determine the order in which tasks are executed, so that complex tasks are processed first and simple tasks afterwards, or a simple task may abandon processing when its waiting time grows too long, reducing overall time consumption. Meanwhile, the thread best able to process a task is determined according to the current load amount of each task processing thread in the task processing thread pool: by determining both the task level of each task and the loads of the task processing threads, complex target tasks are allocated to threads with smaller loads. This achieves reasonable task allocation, saves task waiting time, shortens the time consumed in processing the plurality of target tasks, and improves processing efficiency.
The following description will further describe the task processing method by taking an application of the task processing system provided in the present application in image rendering as an example, with reference to fig. 3. Fig. 3 shows a schematic structural diagram of a task processing system applied to image rendering according to an embodiment of the present application.
Part a in fig. 3 is a rendering thread pool, part B in fig. 3 is a main environment thread, and part C in fig. 3 is a foreground display thread, wherein the rendering thread pool can be divided into 4 rendering threads: rendering thread 1, rendering thread 2, rendering thread 3 and rendering thread 4; and data sharing is carried out between the main environment thread and the foreground display thread through data refreshing.
Further, the task processing system includes: a management device and an execution device;
the management device is configured to receive a task processing request and distribute a target task carried in the task processing request to the execution device;
the execution device comprises a rendering module, a background module and a presentation module,
the rendering module is configured to render the target task, the background module is configured to provide the background for target task processing, and the presentation module is configured to present the processing result of the target task, wherein the rendering module executes the steps of the task processing method described above.
Specifically, the management device may be understood as parts B and C in the figure, and the execution device as part A; the execution device renders the target tasks and further includes the background module for processing the background and the presentation module for presenting the processing results, while the management device may distribute the plurality of target tasks.
In practical applications, to allow subsequent processing policies to be upgraded, the corresponding judgment processes are packaged in the form of policy selection, reserving expansion room for introducing more effective logic later; the policy judgment may be handled by a GL thread pool, which cuts the plurality of target tasks of the target task queue each time and iterates whenever each foreground queue in the rendering engine is updated. Taking a task queue for image rendering at a certain time, E22_1->C22->E22->T2->E13_1->C13->T1->E1->(R), as an example, the task processing method provided in the present specification slices the plurality of target tasks, and each slice is allocated to a corresponding rendering thread through selection of the allocation policy, as shown in the rendering thread pool in part A of fig. 3: tasks E22_1->C22 are allocated to rendering thread 1, tasks E22->T2 to rendering thread 2, tasks E13_1->C13 to rendering thread 3, and tasks T1->E1 to rendering thread 4, while the last task (R) is allocated to the main environment thread for processing; the processed tasks are issued and transmitted, through data refreshing, to a display cache system to await display.
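The slicing of the example queue into per-thread batches can be written out directly; the list representation of the chain is an illustrative assumption, while the task names and the four-way split are taken from the text above:

```python
# The example image-rendering task queue at a certain time, as given in the text.
chain = ["E22_1", "C22", "E22", "T2", "E13_1", "C13", "T1", "E1", "(R)"]

# Slice the queue into the four batches allocated to rendering threads 1-4.
slices = [chain[0:2], chain[2:4], chain[4:6], chain[6:8]]

# The final Output task (R) stays on the main environment thread.
main_env_task = chain[8]
```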
It should be noted that when an entire image rendering job is executed in a single rendering thread, that thread's burden may become markedly excessive, preventing the rendering from completing in a short time and degrading the user experience. To solve this problem, the task processing method provided in the embodiments of the present description uses the allocation policy to distribute the plurality of tasks to suitable rendering threads: except for the final Output portion (i.e., Output), which remains on the main environment thread, the processing operations of the other nodes in the rendering link are dispersed across the rendering threads, with the execution order of the rendering threads unchanged. In practical applications, the target tasks in the target task queue are sliced; the target tasks themselves are not changed by this dispersion, and the time sequence of the whole processing procedure remains synchronous.
According to the task processing method provided in the embodiments of the present specification, the target tasks in the target task queue are sliced and sequentially distributed to different rendering threads according to the selected task allocation policy, so that the target tasks are allocated reasonably, a high GPU utilization rate is achieved, more complex rendering processes can be completed in the same amount of time, and the user experience is improved.
The following describes, with reference to fig. 4, a schematic structural diagram of thread management and allocation applied to rendering task processing by the task processing method according to an embodiment of the present application.
Part A of fig. 4 shows the system modules for processing rendering tasks, comprising five system support modules: a1, a post-stage pre-processing thread; a2, a foreground display thread; a3, rendering thread management; a4, a rendering thread pool; and a5, a rendering thread policy machine. Part B of fig. 4 shows the environment support modules for processing rendering tasks, comprising two support modules: b1, an environment manager; and b2, environment-providing socket middleware.
In practical applications, the rendering environment is managed across the multi-path rendering process, rendering threads are dynamically added and deleted, and as the number of threads supported by CPUs grows, the task processing method provided in the embodiments of the present specification becomes more advantageous. Most rendering engines handle one effective interface refresh by segmenting and multiplexing the rendering pipeline, with a thread task granularity of one rendering pass; most such implementations assume a 3D scene whose GPU is occupied by batch tasks, and raise processing speed by optimizing the processing procedure. In an editing scenario, however, the main problem is complex interaction such as real-time start and stop, and the whole rendering process need not consider timing in the way a game engine does; in this situation, the processing time of an effective task in one thread must be no greater than the sum of the refresh interval between two video frame images and the starting window time, i.e., Δt_operation ≤ Δt_interval + WindowSize. The task processing method provided in the embodiments of the present specification must therefore allocate the calls on this portion of resources reasonably and dynamically.
A5 in fig. 4, the rendering thread policy machine module, is the module that provides the allocation policy. Its concrete implementation may begin by adjusting system resources, as follows: step 1, receive the currently allocated task processing thread information and begin detecting the threads one by one in sequence; if no task processing thread information is currently detected, skip directly to complete the system resource adjustment; step 2, select the thread currently being detected and detect the priority of the currently detected task processing thread; step 3, determine the priority of the task processing thread and reduce it by one; step 4, if the priority of the current task processing thread is determined to be greater than or equal to the highest priority, set the thread's waiting time to the standard interval time, and continue receiving the next task processing thread's information; step 5, if the priority of the current task processing thread is determined to be greater than or equal to 1 and less than the highest priority, set the thread's waiting time to (highest priority − priority) × standard interval time, and continue receiving the next task processing thread's information; step 6, if the priority of the current task processing thread is less than 1, execute step 7; step 7, calculate and record the load of the current task processing thread, where load = historical processing thread running time / load unit + historical load coefficient; step 8, judge whether to promote the priority according to the current thread's load; step 9, if the current thread's load is greater than or equal to the highest load, add 1 to the task processing thread's priority; step 10, if the current thread's load is less than or equal to the minimum load, continue receiving the next task processing thread's information; step 11, if the current thread's load is greater than the lowest load and less than the highest load, add 1 to the current task processing thread's delayed-destruction state; step 12, if the delayed-destruction state is judged to be greater than or equal to the destruction threshold, destroy the current task processing thread, and continue receiving the next task processing thread's information; and step 13, complete the system resource adjustment.
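The thirteen steps above can be consolidated into a minimal Python sketch. The class, the threshold constants, and the pass-based driver are illustrative assumptions layered onto the steps, not the patent's own implementation:

```python
HIGHEST_PRIORITY = 5          # assumed thresholds for illustration
MAX_LOAD, MIN_LOAD = 100.0, 10.0
DESTROY_THRESHOLD = 3
LOAD_UNIT = 10.0

class ThreadInfo:
    def __init__(self, priority, run_time=0.0, load_coeff=0.0):
        self.priority = priority
        self.run_time = run_time        # historical processing thread running time
        self.load_coeff = load_coeff    # historical load coefficient
        self.destroy_state = 0          # delayed-destruction counter (initially 0)
        self.load = 0.0

def adjust_resources(threads, standard_interval):
    """Run one pass of the system resource adjustment over all known threads."""
    survivors = []
    for t in threads:                              # steps 1-2: detect one by one
        t.priority -= 1                            # step 3: reduce priority by one
        if t.priority >= HIGHEST_PRIORITY:         # step 4
            t.wait = standard_interval
        elif t.priority >= 1:                      # step 5
            t.wait = (HIGHEST_PRIORITY - t.priority) * standard_interval
        else:                                      # steps 6-7: compute the load
            t.load = t.run_time / LOAD_UNIT + t.load_coeff
            if t.load >= MAX_LOAD:                 # steps 8-9: promote priority
                t.priority += 1
            elif t.load > MIN_LOAD:                # step 11: between the bounds
                t.destroy_state += 1
                if t.destroy_state >= DESTROY_THRESHOLD:
                    continue                       # step 12: destroy (drop) the thread
            # step 10: load <= MIN_LOAD -> nothing further this pass
        survivors.append(t)
    return survivors                               # step 13: adjustment complete
```

Under these assumed thresholds, a mid-load thread is dropped after three consecutive passes, while an overloaded thread has its priority promoted.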
A4 in fig. 4: for the distribution of task processing threads, the task execution module in the rendering thread pool preferentially fills target tasks into low-load threads according to the task processing threads' priorities, and allocates new task processing threads in the rendering thread pool when the target tasks requested for allocation exceed the maximum load. The task execution policy machine in the rendering thread pool works as follows: step 1, start task allocation, record the accumulated load as 0, and empty the task processing queue; step 2, receive the current task list and begin detecting the tasks one by one, taking the next task in sequence; if there is none, task execution allocation is complete; step 3, traverse the task processing thread list and calculate the accumulated load according to the current task's execution time consumption, where accumulated load = task consumption + accumulated load; step 4, if the accumulated load is less than the highest load, put the current task into the task processing queue; step 5, if the accumulated load is greater than or equal to the highest load, skip to step 6; step 6, traverse the current thread pool and screen out the thread with the smallest historical load and lowest load level; if one is screened out, record the screened thread as the execution thread; step 7, judge whether the current number of threads exceeds the maximum size of the thread pool; if not, create a new thread and add it to the task processing thread pool, and if so, record the screened thread as the execution thread; step 8, after the current task processing queue is input into the execution thread, empty the task processing queue and reset the load record to 0; step 9, update the duration of the current task processing thread and continue traversing the thread list; and step 10, complete the task execution allocation.
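The ten steps of the execution policy machine can likewise be sketched in Python. The cost units, the thresholds, and the dictionary mapping thread ids to historical loads are assumptions for this sketch:

```python
def allocate_tasks(task_costs, thread_loads, max_load=100.0, max_pool=4):
    """One pass of the task execution policy machine: batch tasks into a queue
    until the accumulated load reaches the highest load, then flush the queue
    to an execution thread, growing the pool while it is below its cap."""
    batches = []
    queue, acc = [], 0.0                           # step 1: start, load record = 0
    def flush():
        if len(thread_loads) < max_pool:           # step 7: room to grow the pool
            tid = len(thread_loads)
            thread_loads[tid] = 0.0                # create and add a new thread
        else:                                      # step 6: least historical load wins
            tid = min(thread_loads, key=thread_loads.get)
        thread_loads[tid] += sum(queue)            # step 9: update thread duration
        batches.append((tid, list(queue)))
        queue.clear()                              # step 8: empty the queue
    for cost in task_costs:                        # step 2: take the next task
        acc += cost                                # step 3: accumulate task consumption
        if acc < max_load:                         # step 4: still under the cap
            queue.append(cost)
        else:                                      # step 5: cap reached, dispatch
            flush()
            queue.append(cost)                     # restart with the current task
            acc = cost                             # step 8: load record reset
    if queue:
        flush()                                    # dispatch the trailing batch
    return batches                                 # step 10: allocation complete
```

With costs [60, 60, 30], one existing thread, and a pool cap of 2, the first overflow creates thread 1 and the trailing batch falls back to the least-loaded thread 0.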
The task processing method provided in the embodiments of the present specification sorts the plurality of target tasks according to the task level rule to determine the order of execution, so that complex tasks are processed first and simple tasks afterwards, or a simple task may abandon processing when its waiting time grows too long, reducing overall time consumption. At the same time, the thread best able to process a task is determined according to the current load amount of each task processing thread in the task processing thread pool: by determining the task level of each task together with the threads' loads, complex target tasks are allocated to threads with smaller loads. This achieves reasonable task allocation, saves task waiting time, shortens the time consumed in processing the plurality of target tasks, and improves processing efficiency.
Corresponding to the above method embodiment, the present application further provides an embodiment of a task processing device, and fig. 5 shows a schematic structural diagram of a task processing device provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes:
a task receiving module 502 configured to receive at least two task processing requests, where each task processing request carries a target task;
a first determining module 504, configured to determine a task level of each target task based on a preset task level rule, and sort all target tasks based on the task level to determine a target task queue;
a second determining module 506, configured to obtain a current load amount of each task processing thread in a task processing thread pool, and sequentially determine, based on the current load amount, a task processing thread corresponding to each target task in the target task queue;
a processing module 508 configured to process each target task based on the task processing thread corresponding to the target task.
Optionally, the apparatus further comprises:
determining the priority of each task processing thread, and determining the current load of each task processing thread based on the priority;
and adjusting each task processing thread in the task processing thread pool based on the current load capacity of each task processing thread.
Optionally, the apparatus further comprises:
s1: determining the initial priority of each task processing thread in the task processing thread pool, and initializing the initial priority;
s2: determining a first priority of each task processing thread in the task processing thread pool, and judging whether the first priority is greater than or equal to the highest priority,
if yes, determining a first to-be-detected time of each task processing thread, taking the first priority as the initial priority based on the first to-be-detected time, and continuing to execute step S1;
if not, judging whether the first priority is more than or equal to 1 and less than the highest priority;
if yes, determining second time to be detected of each task processing thread, taking the first priority as the initial priority based on the second time to be detected, and continuing to execute step S1;
if not, determining the current load capacity of each task processing thread under the condition that the first priority is less than 1.
Optionally, the apparatus further comprises:
judging whether the current load amount is larger than or equal to the highest load amount or not based on the current load amount of each task processing thread,
if so, marking the priority of the task processing thread, and adjusting the priority of the task processing thread to be a second priority.
Optionally, the apparatus further comprises:
determining whether the current load amount is greater than a minimum load amount and less than the maximum load amount,
if so, determining the destroy thread state of the task processing thread, and destroying the task processing thread under the condition that the destroy thread state is determined to meet the preset destroy condition.
Optionally, the second determining module 506 is further configured to:
acquiring the execution time of the current task in each task processing thread, and determining the accumulated load amount of each task processing thread based on the execution time;
judging whether the accumulated load amount is less than the maximum load amount;
and if so, determining the task processing thread corresponding to each target task in the target task queue based on the accumulated load amount.
Optionally, the second determining module 506 is further configured to:
and under the condition that the accumulated load amount is greater than or equal to the maximum load amount, determining the task processing thread with the smallest current load amount among the task processing threads as the task processing thread corresponding to the target task.
Optionally, the processing module 508 is further configured to:
and processing the target task to be executed based on the target task processing thread, and deleting the target task to be executed after the processing is finished.
Optionally, the second determining module 506 is further configured to:
determining the task processing thread corresponding to the i-th target task in the target task queue based on the current load amount of each task processing thread, wherein i ∈ [1, n] and n is the maximum number of threads;
judging whether i is greater than n;
and if not, increasing i by 1, and continuing to execute the step of determining, based on the current load amount, the task processing thread corresponding to the i-th target task in the target task queue.
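Written out, this iteration assigns the i-th target task (1-based, i in [1, n]) to a thread chosen by current load, incrementing i until it exceeds n. Choosing the least-loaded thread and adding one unit of load per task are assumptions; the patent only requires the choice to be "based on the current load amount".

```python
def assign_all(task_queue, loads):
    """Assign every target task in order to a thread chosen by current load.
    task_queue: list of task names; loads: dict thread_name -> current load.
    Returns a dict mapping each task to its assigned thread."""
    assignment = {}
    i = 1
    n = len(task_queue)
    while i <= n:                              # judge whether i > n
        task = task_queue[i - 1]               # i-th target task (1-based)
        thread = min(loads, key=loads.get)     # assumed: least-loaded thread
        assignment[task] = thread
        loads[thread] += 1                     # assumed: one unit per task
        i += 1                                 # increase i by 1, continue
    return assignment
```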
Optionally, the second determining module 506 is further configured to:
acquiring the number of all task processing threads in the task processing thread pool under the condition that the accumulated load amount is greater than or equal to the maximum load amount;
creating a new task processing thread in the task processing thread pool under the condition that the number of all task processing threads is smaller than a preset thread count and unallocated target tasks exist in the target task queue;
and determining the current load amount of the new task processing thread, and determining the task processing thread corresponding to each target task in the target task queue based on the current load amount.
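The pool-expansion branch can be sketched as follows. The preset thread count and the worker-naming scheme are illustrative assumptions.

```python
PRESET_THREAD_COUNT = 8  # assumed value for the "preset thread number"

def maybe_expand(pool, unallocated):
    """Create a new task processing thread when the pool is below the
    preset count and unallocated target tasks remain.
    pool: list of thread names (mutated in place).
    Returns the new thread's name, or None if no thread was created."""
    if len(pool) < PRESET_THREAD_COUNT and unallocated:
        name = f"worker-{len(pool) + 1}"  # hypothetical naming scheme
        pool.append(name)                 # new thread starts with zero load
        return name
    return None
```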
The task processing device provided by the embodiments of this specification sorts a plurality of target tasks according to a task level rule and determines an execution order, so that complex tasks are processed first and simple tasks afterwards, or a simple task whose waiting time is too long is abandoned, thereby reducing time consumption. Meanwhile, the thread that can preferentially process a task is determined according to the current load amount of each task processing thread in the task processing thread pool: by matching the task level of a task to a task processing thread, a complex target task is allocated to a thread with a smaller load for processing. This not only achieves reasonable task allocation but also saves task waiting time, so the time consumed in processing the plurality of target tasks is shortened and the processing efficiency is improved.
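The overall scheme summarized above can be sketched end to end: sort target tasks by a task-level rule (higher level meaning more complex, processed first), then hand each to the currently least-loaded thread. The level values and the load-update rule below are assumptions for illustration.

```python
def schedule(tasks, thread_loads):
    """Plan assignments for a batch of target tasks.
    tasks: list of (name, level) pairs; higher level = more complex.
    thread_loads: dict thread_name -> current load (mutated in place).
    Returns a list of (task_name, thread_name) in execution order."""
    # Task level rule (assumed): complex tasks first.
    queue = sorted(tasks, key=lambda t: t[1], reverse=True)
    plan = []
    for name, level in queue:
        thread = min(thread_loads, key=thread_loads.get)  # least loaded
        plan.append((name, thread))
        thread_loads[thread] += level  # assumed: heavier tasks add more load
    return plan
```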
The above is an illustrative scheme of the task processing device of this embodiment. It should be noted that the technical solution of the task processing device and the technical solution of the task processing method belong to the same concept; for details not described in the technical solution of the task processing device, reference may be made to the description of the technical solution of the task processing method.
Fig. 6 illustrates a block diagram of a computing device 600 provided according to an embodiment of the present application. The components of the computing device 600 include, but are not limited to, a memory 610 and a processor 620. The processor 620 is coupled to the memory 610 via a bus 630, and a database 650 is used to store data.
Computing device 600 also includes an access device 640, which enables the computing device 600 to communicate via one or more networks 660. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 640 may include one or more of any type of network interface (e.g., a Network Interface Card (NIC)), whether wired or wireless, such as an IEEE 802.11 Wireless Local Area Network (WLAN) wireless interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present application, the above-described components of computing device 600, as well as other components not shown in FIG. 6, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device architecture shown in FIG. 6 is for purposes of example only and is not limiting as to the scope of the present application. Those skilled in the art may add or replace other components as desired.
Computing device 600 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smartphone), wearable computing device (e.g., smartwatch, smartglasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 600 may also be a mobile or stationary server.
The processor 620 implements the steps of the task processing method when executing the instructions.
The above is an illustrative scheme of a computing device of the present embodiment. It should be noted that the technical solution of the computing device and the technical solution of the task processing method belong to the same concept, and details that are not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the task processing method.
An embodiment of the present application further provides a computer readable storage medium, which stores computer instructions, and the instructions, when executed by a processor, implement the steps of the task processing method as described above.
The above is an illustrative scheme of a computer-readable storage medium of the present embodiment. It should be noted that the technical solution of the storage medium belongs to the same concept as the technical solution of the task processing method, and details that are not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the task processing method.
The foregoing description of specific embodiments of the present application has been presented. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The computer instructions comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combinations of acts; however, those skilled in the art should understand that the present application is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present application. Furthermore, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and modules involved are not necessarily required by this application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The preferred embodiments of the present application disclosed above are intended only to aid in the explanation of the application. Alternative embodiments are not exhaustive and do not limit the invention to the precise embodiments described. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and its practical applications, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and their full scope and equivalents.
Claims (14)
1. A task processing method, comprising:
receiving at least two task processing requests, wherein each task processing request carries a target task;
determining the task level of each target task based on a preset task level rule, and sequencing all target tasks based on the task levels to determine a target task queue;
acquiring the current load of each task processing thread in a task processing thread pool, and sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load;
and processing each target task based on the task processing thread corresponding to each target task.
2. The task processing method according to claim 1, further comprising:
determining the priority of each task processing thread, and determining the current load of each task processing thread based on the priority;
and adjusting each task processing thread in the task processing thread pool based on the current load capacity of each task processing thread.
3. The task processing method according to claim 2, wherein the determining a priority of each task processing thread and determining a current load amount of each task processing thread based on the priority comprises:
s1: determining the initial priority of each task processing thread in the task processing thread pool, and initializing the initial priority;
s2: determining a first priority of each task processing thread in the task processing thread pool, and judging whether the first priority is greater than or equal to the highest priority,
if yes, determining a first to-be-detected time of each task processing thread, taking the first priority as the initial priority based on the first to-be-detected time, and continuing to execute step S1;
if not, judging whether the first priority is greater than or equal to 1 and less than the highest priority;
if yes, determining second time to be detected of each task processing thread, taking the first priority as the initial priority based on the second time to be detected, and continuing to execute step S1;
if not, determining the current load capacity of each task processing thread under the condition that the first priority is less than 1.
4. The task processing method according to claim 3, wherein said adjusting each task processing thread in the task processing thread pool based on the current load amount of each task processing thread comprises:
judging whether the current load amount is larger than or equal to the highest load amount or not based on the current load amount of each task processing thread,
if so, marking the priority of the task processing thread, and adjusting the priority of the task processing thread to be a second priority.
5. The task processing method according to claim 3, wherein after adjusting each task processing thread in the task processing thread pool based on the current load amount of each task processing thread, the method further comprises:
determining whether the current load amount is greater than a minimum load amount and less than the maximum load amount,
if so, determining the destroy thread state of the task processing thread, and destroying the task processing thread under the condition that the destroy thread state is determined to meet the preset destroy condition.
6. The task processing method according to any one of claims 1 to 5, wherein the sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load amount includes:
acquiring the execution time of a current task in each task processing thread, and determining the accumulated load of each task processing thread based on the execution time;
determining whether the accumulated load amount is less than a maximum load amount,
and if so, determining a task processing thread corresponding to each target task in the target task queue based on the accumulated load.
7. The task processing method according to claim 6, wherein after determining the cumulative load amount of each task processing thread based on the execution time, the method further comprises:
and under the condition that the accumulated load amount is greater than or equal to the maximum load amount, determining the task processing thread with the smallest current load amount among the task processing threads as the task processing thread corresponding to the target task.
8. The task processing method according to claim 6, wherein the processing each target task based on the task processing thread corresponding to each target task includes:
and processing the target task to be executed based on the target task processing thread, and deleting the target task to be executed after the processing is finished.
9. The task processing method according to claim 1 or 2, wherein the sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load amount includes:
determining the task processing thread corresponding to the i-th target task in the target task queue based on the current load amount of each task processing thread, wherein i ∈ [1, n] and n is the maximum number of threads;
judging whether i is greater than n;
and if not, increasing i by 1, and continuing to execute the step of determining, based on the current load amount, the task processing thread corresponding to the i-th target task in the target task queue.
10. The task processing method according to claim 7, wherein the sequentially determining the task processing thread corresponding to each target task in the target task queue based on the current load amount comprises:
acquiring the number of all task processing threads in a task processing thread pool under the condition that the accumulated load is greater than or equal to the highest load;
under the condition that the number of all task processing threads is smaller than the preset thread number and unallocated target tasks exist in the target task queue, creating a new task processing thread in the task processing thread pool;
and determining the current load capacity of the new task processing thread, and determining the task processing thread corresponding to each target task in the target task queue based on the current load capacity.
11. A task processing system, characterized in that the task processing system comprises: a management device and an execution device;
the management device is configured to receive a task processing request and distribute a target task carried in the task processing request to the execution device;
the execution device comprises a rendering module, a background module and a presentation module,
the rendering module is configured to render the target task, the background module is configured to provide a background for processing the target task, and the presentation module is configured to present a processing result of the target task, wherein the rendering module performs the task processing method according to any one of claims 1 to 10.
12. A task processing apparatus, comprising:
the task receiving module is configured to receive at least two task processing requests, wherein each task processing request carries a target task;
the first determining module is configured to determine a task level of each target task based on a preset task level rule, and sort all the target tasks based on the task levels to determine a target task queue;
the second determining module is configured to acquire a current load amount of each task processing thread in a task processing thread pool, and sequentially determine a task processing thread corresponding to each target task in the target task queue based on the current load amount;
and the processing module is configured to process each target task based on the task processing thread corresponding to each target task.
13. A computing device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-10 when executing the computer instructions.
14. A computer-readable storage medium storing computer instructions, which when executed by a processor, perform the steps of the method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187971.1A CN112905326B (en) | 2021-02-18 | 2021-02-18 | Task processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187971.1A CN112905326B (en) | 2021-02-18 | 2021-02-18 | Task processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112905326A true CN112905326A (en) | 2021-06-04 |
CN112905326B CN112905326B (en) | 2023-04-11 |
Family
ID=76123789
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110187971.1A Active CN112905326B (en) | 2021-02-18 | 2021-02-18 | Task processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112905326B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113535361A (en) * | 2021-07-23 | 2021-10-22 | 百果园技术(新加坡)有限公司 | Task scheduling method, device, equipment and storage medium |
CN113595926A (en) * | 2021-07-28 | 2021-11-02 | 南方电网数字电网研究院有限公司 | API data transmission method, device, equipment and medium based on data middlebox |
CN114612287A (en) * | 2022-03-18 | 2022-06-10 | 北京小米移动软件有限公司 | Image processing method, device and storage medium |
CN115016919A (en) * | 2022-08-05 | 2022-09-06 | 阿里云计算有限公司 | Task scheduling method, electronic device and storage medium |
CN115118768A (en) * | 2022-06-27 | 2022-09-27 | 平安壹钱包电子商务有限公司 | Task distribution method and device, storage medium and electronic equipment |
CN116860436A (en) * | 2023-06-15 | 2023-10-10 | 重庆智铸达讯通信有限公司 | Thread data processing method, device, equipment and storage medium |
WO2024031931A1 (en) * | 2022-08-11 | 2024-02-15 | 苏州元脑智能科技有限公司 | Priority queuing processing method and device for issuing of batches of requests, server, and medium |
CN118467149A (en) * | 2023-12-29 | 2024-08-09 | 荣耀终端有限公司 | Task processing method and electronic equipment |
CN118714542A (en) * | 2024-08-29 | 2024-09-27 | 杭州东贝智算科技有限公司 | Cross-equipment linkage distributed control method and system |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103164258A (en) * | 2011-12-12 | 2013-06-19 | 中国科学院沈阳计算技术研究所有限公司 | Fault-tolerant real-time scheduling method suitable for numerical control system |
WO2014200552A1 (en) * | 2013-06-14 | 2014-12-18 | Microsoft Corporation | Assigning and scheduling threads for multiple prioritized queues |
CN104536827A (en) * | 2015-01-27 | 2015-04-22 | 浪潮(北京)电子信息产业有限公司 | Data dispatching method and device |
CN106020954A (en) * | 2016-05-13 | 2016-10-12 | 深圳市永兴元科技有限公司 | Thread management method and device |
CN106776008A (en) * | 2016-11-23 | 2017-05-31 | 福建六壬网安股份有限公司 | A kind of method and system that load balancing is realized based on zookeeper |
CN110489447A (en) * | 2019-07-16 | 2019-11-22 | 招联消费金融有限公司 | Data query method, apparatus, computer equipment and storage medium |
CN110990142A (en) * | 2019-12-13 | 2020-04-10 | 上海智臻智能网络科技股份有限公司 | Concurrent task processing method and device, computer equipment and storage medium |
CN111813521A (en) * | 2020-07-01 | 2020-10-23 | Oppo广东移动通信有限公司 | Thread scheduling method and device, storage medium and electronic equipment |
2021
- 2021-02-18: CN CN202110187971.1A patent CN112905326B — Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103164258A (en) * | 2011-12-12 | 2013-06-19 | 中国科学院沈阳计算技术研究所有限公司 | Fault-tolerant real-time scheduling method suitable for numerical control system |
WO2014200552A1 (en) * | 2013-06-14 | 2014-12-18 | Microsoft Corporation | Assigning and scheduling threads for multiple prioritized queues |
US20140373021A1 (en) * | 2013-06-14 | 2014-12-18 | Microsoft Corporation | Assigning and Scheduling Threads for Multiple Prioritized Queues |
CN104536827A (en) * | 2015-01-27 | 2015-04-22 | 浪潮(北京)电子信息产业有限公司 | Data dispatching method and device |
CN106020954A (en) * | 2016-05-13 | 2016-10-12 | 深圳市永兴元科技有限公司 | Thread management method and device |
CN106776008A (en) * | 2016-11-23 | 2017-05-31 | 福建六壬网安股份有限公司 | A kind of method and system that load balancing is realized based on zookeeper |
CN110489447A (en) * | 2019-07-16 | 2019-11-22 | 招联消费金融有限公司 | Data query method, apparatus, computer equipment and storage medium |
CN110990142A (en) * | 2019-12-13 | 2020-04-10 | 上海智臻智能网络科技股份有限公司 | Concurrent task processing method and device, computer equipment and storage medium |
CN111813521A (en) * | 2020-07-01 | 2020-10-23 | Oppo广东移动通信有限公司 | Thread scheduling method and device, storage medium and electronic equipment |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113535361A (en) * | 2021-07-23 | 2021-10-22 | 百果园技术(新加坡)有限公司 | Task scheduling method, device, equipment and storage medium |
CN113595926A (en) * | 2021-07-28 | 2021-11-02 | 南方电网数字电网研究院有限公司 | API data transmission method, device, equipment and medium based on data middlebox |
CN114612287A (en) * | 2022-03-18 | 2022-06-10 | 北京小米移动软件有限公司 | Image processing method, device and storage medium |
CN115118768A (en) * | 2022-06-27 | 2022-09-27 | 平安壹钱包电子商务有限公司 | Task distribution method and device, storage medium and electronic equipment |
CN115016919A (en) * | 2022-08-05 | 2022-09-06 | 阿里云计算有限公司 | Task scheduling method, electronic device and storage medium |
CN115016919B (en) * | 2022-08-05 | 2022-11-04 | 阿里云计算有限公司 | Task scheduling method, electronic device and storage medium |
WO2024031931A1 (en) * | 2022-08-11 | 2024-02-15 | 苏州元脑智能科技有限公司 | Priority queuing processing method and device for issuing of batches of requests, server, and medium |
CN116860436A (en) * | 2023-06-15 | 2023-10-10 | 重庆智铸达讯通信有限公司 | Thread data processing method, device, equipment and storage medium |
CN118467149A (en) * | 2023-12-29 | 2024-08-09 | 荣耀终端有限公司 | Task processing method and electronic equipment |
CN118714542A (en) * | 2024-08-29 | 2024-09-27 | 杭州东贝智算科技有限公司 | Cross-equipment linkage distributed control method and system |
Also Published As
Publication number | Publication date |
---|---|
CN112905326B (en) | 2023-04-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112905326B (en) | Task processing method and device | |
CN110489228B (en) | Resource scheduling method and electronic equipment | |
CN109409513B (en) | Task processing method based on neural network and related equipment | |
CN111767134A (en) | Multitask dynamic resource scheduling method | |
US9479358B2 (en) | Managing graphics load balancing strategies | |
US8144149B2 (en) | System and method for dynamically load balancing multiple shader stages in a shared pool of processing units | |
TWI747092B (en) | Method, equipment and system for resource scheduling and central server thereof | |
US20070091088A1 (en) | System and method for managing the computation of graphics shading operations | |
CN112559182B (en) | Resource allocation method, device, equipment and storage medium | |
CN110069341B (en) | Method for scheduling tasks with dependency relationship configured according to needs by combining functions in edge computing | |
US20180329742A1 (en) | Timer-assisted frame running time estimation | |
WO2011134942A1 (en) | Technique for gpu command scheduling | |
CN105117285B (en) | A kind of nonvolatile memory method for optimizing scheduling based on mobile virtual system | |
CN114968521A (en) | Distributed rendering method and device | |
CN109237999B (en) | Method and system for drawing batch three-dimensional situation target trail in real time | |
CN112181613B (en) | Heterogeneous resource distributed computing platform batch task scheduling method and storage medium | |
CN109684000B (en) | APP data display method, device, equipment and computer readable storage medium | |
CN109471872A (en) | Handle the method and device of high concurrent inquiry request | |
CN111142788A (en) | Data migration method and device and computer readable storage medium | |
CN110795238A (en) | Load calculation method and device, storage medium and electronic equipment | |
CN105320570A (en) | Resource management method and system | |
US20150145872A1 (en) | Scheduling, interpreting and rasterising tasks in a multi-threaded raster image processor | |
CN111338803A (en) | Thread processing method and device | |
US7999814B2 (en) | Information processing apparatus, graphics processor, control processor and information processing methods | |
CN109144664B (en) | Dynamic migration method of virtual machine based on user service quality demand difference |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |