CN116069518A - Dynamic allocation processing task method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN116069518A
Authority
CN
China
Prior art keywords
buffer queue
thread
tasks
target
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111293764.0A
Other languages
Chinese (zh)
Inventor
张文凌
李智年
徐金凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NetsUnion Clearing Corp
Original Assignee
NetsUnion Clearing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NetsUnion Clearing Corp filed Critical NetsUnion Clearing Corp
Priority to CN202111293764.0A priority Critical patent/CN116069518A/en
Publication of CN116069518A publication Critical patent/CN116069518A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/54: Interprogram communication
    • G06F 9/546: Message passing systems or structures, e.g. queues
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/54: Indexing scheme relating to G06F 9/54
    • G06F 2209/548: Queue
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present disclosure provides a method and apparatus for dynamically allocating processing tasks, together with an electronic device and a computer-readable storage medium. The method may be applied to a target device configured with a plurality of buffer queues and a target thread pool containing a plurality of threads. The method includes: when a first thread has finished processing the current batch of tasks and a preset condition is satisfied, determining the execution priority of each buffer queue in the target device; and determining, according to the execution priorities, a target buffer queue for the first thread among the buffer queues, so that the first thread processes tasks in the target buffer queue in batches. The current batch of tasks consists of tasks from a first buffer queue; the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads. By dynamically allocating threads in this way, the method both avoids the resource waste of threads sitting idle and avoids the overhead of frequently creating, managing, and destroying threads.

Description

Dynamic allocation processing task method and device, electronic equipment and readable storage medium
Technical Field
The disclosure relates to the technical field of computers and the Internet, and in particular to a method and device for dynamically allocating processing tasks, an electronic device, and a computer-readable storage medium.
Background
In the architecture of computer servers that process client requests, a single buffer queue paired with a single thread pool is a classic and widely used task-processing model.
In the related art, when the IO module of a server receives a task-processing request from a client, the task is typically inserted directly at the tail of the buffer queue. Threads in the task-processing thread pool then take pending tasks from the head of the buffer queue, and may communicate with peripheral systems while processing them.
In realistic business scenarios, this model has the following drawbacks:
1. Task priority: the task buffer queue is processed first-in first-out, which cannot satisfy business scenarios that require task prioritization.
2. System "avalanche": when a task is complex to process or requires lengthy communication with a peripheral system, its processing time grows and ties up a large share of thread resources. This not only delays the processing of other tasks but also degrades the overall processing performance of the server and, in severe cases, causes the task buffer queue to overflow.
Disclosure of Invention
Embodiments of the present disclosure provide a method and device for dynamically allocating processing tasks, an electronic device, and a computer-readable storage medium, which can dynamically allocate processing tasks according to actual demand.
Other features and advantages of the present disclosure will be apparent from the following detailed description, or may be learned in part by the practice of the disclosure.
The embodiment of the disclosure provides a method for dynamically allocating processing tasks, applied to a target device configured with a plurality of buffer queues and a target thread pool containing a plurality of threads. The method comprises the following steps: when a first thread has finished processing the current batch of tasks and a preset condition is satisfied, determining the execution priority of each buffer queue in the target device; and determining, according to the execution priorities, a target buffer queue for the first thread among the buffer queues, so that the first thread processes tasks in the target buffer queue in batches. The current batch of tasks consists of tasks from a first buffer queue; the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads.
In some embodiments, the first thread satisfying the preset condition includes at least one of the following: the first thread has consecutively completed a target number of batches of tasks for the first buffer queue; the time the first thread has spent processing tasks in the first buffer queue exceeds a preset time threshold; or no pending tasks remain in the first buffer queue.
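The three reassignment conditions can be sketched as a single predicate. The threshold names and values below are illustrative assumptions; the source does not fix concrete numbers.

```python
import time

# Hypothetical thresholds; the source does not specify concrete values.
TARGET_BATCH_COUNT = 10     # consecutive batches a thread may run on one queue
MAX_SERVICE_SECONDS = 0.5   # time budget a thread may spend on one queue

def should_reassign(batches_done, started_at, queue_len, now=None):
    """Return True if any of the three preset conditions holds, i.e. the
    thread should be reassigned to a (possibly different) buffer queue."""
    now = time.monotonic() if now is None else now
    return (
        batches_done >= TARGET_BATCH_COUNT            # target batches completed
        or (now - started_at) > MAX_SERVICE_SECONDS   # time threshold exceeded
        or queue_len == 0                             # no pending tasks left
    )
```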
In some embodiments, determining the execution priority of each buffer queue in the target device includes performing the following steps for each of the plurality of buffer queues (taking any one as the second buffer queue): obtaining the number of pending tasks in the second buffer queue; obtaining the number of threads currently processing tasks for the second buffer queue; obtaining the weight of the second buffer queue; and determining the execution priority of the second buffer queue from the number of pending tasks, the number of threads, and the weight.
In some embodiments, determining the execution priority of the second buffer queue from the number of pending tasks, the number of threads, and the weight means that the priority is positively correlated with the number of pending tasks and with the queue's weight, and negatively correlated with the number of threads already serving the queue.
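The source states only the correlations, not a formula. One scoring function consistent with them (an illustrative assumption, not the patented computation) is:

```python
def execution_priority(pending, active_threads, weight):
    """Score a buffer queue: the score rises with pending tasks and with
    the queue's weight, and falls as more threads already serve the
    queue. The +1 avoids division by zero for an unserved queue."""
    return weight * pending / (active_threads + 1)
```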
In some embodiments, the method further comprises: determining the task types of the tasks to be processed by the target device; creating one buffer queue for each task type, thereby generating the plurality of buffer queues in the target device so that different buffer queues handle tasks of different types; and assigning each buffer queue a weight according to the importance of its task type.
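A minimal sketch of per-type queue creation with importance weights; the task-type names and weight values are invented for illustration:

```python
from collections import deque

def build_queues(type_weights):
    """Create one buffer queue per task type; the weight reflects the
    importance of that type (higher weight = more important)."""
    return {
        task_type: {"queue": deque(), "weight": weight, "active_threads": 0}
        for task_type, weight in type_weights.items()
    }

# Illustrative task types and weights.
queues = build_queues({"payment": 3.0, "query": 1.0, "report": 0.5})
```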
In some embodiments, determining the target buffer queue for the first thread according to the execution priorities comprises: selecting, among the plurality of buffer queues, the buffer queue with the highest execution priority as the target buffer queue.
In some embodiments, the plurality of buffer queues further includes a third buffer queue and a fourth buffer queue, and determining the target buffer queue for the first thread according to the execution priorities comprises: when the third and fourth buffer queues have the same execution priority and that priority is the highest among the plurality of buffer queues, determining the target buffer queue for the first thread according to the numbers of threads corresponding to the third and fourth buffer queues.
In some embodiments, determining the target buffer queue for the first thread according to the numbers of threads corresponding to the third and fourth buffer queues comprises: if the number of threads processing tasks for the fourth buffer queue is greater than the number of threads processing tasks for the third buffer queue, taking the third buffer queue as the target buffer queue.
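Queue selection with this tie-breaking rule (highest priority first, fewer serving threads on a tie) can be sketched as follows; the tuple encoding is an implementation choice, not taken from the source:

```python
def pick_target_queue(stats):
    """stats maps queue name -> (execution_priority, active_threads).
    Pick the highest-priority queue; on a priority tie, prefer the
    queue currently served by fewer threads."""
    return max(stats, key=lambda name: (stats[name][0], -stats[name][1]))
```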
In some embodiments, the plurality of buffer queues includes a fifth buffer queue that contains no pending tasks, and the target thread pool includes a second thread that acts as a daemon thread for the fifth buffer queue. The method further comprises: detecting that a new task has been inserted into the fifth buffer queue; and allocating the second thread to the fifth buffer queue so that the second thread processes the new task.
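A sketch of the daemon-thread behaviour using Python's standard `queue` and `threading` modules: the worker blocks on an initially empty queue and processes a task as soon as one is inserted. The doubling "processing" step and the polling timeout are stand-ins for real task processing.

```python
import queue
import threading

def daemon_worker(buf, results, stop):
    """Daemon for an initially empty buffer queue: wait until a new task
    is inserted, then process it (here: double the task value)."""
    while not stop.is_set():
        try:
            task = buf.get(timeout=0.1)   # woken as soon as a task arrives
        except queue.Empty:
            continue                      # queue still empty; keep waiting
        results.append(task * 2)          # stand-in for real task processing
        buf.task_done()
```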
In some embodiments, before a new task is detected in the fifth buffer queue, the method includes: obtaining the task attributes of the new task; and, if the fifth buffer queue matches the task attributes of the new task, inserting the new task into the fifth buffer queue.
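Attribute-based insertion can be sketched as a routing step; the attribute name `task_type` and the queue names are illustrative assumptions:

```python
from collections import deque

# Illustrative buffer queues keyed by task type.
queues = {"payment": deque(), "query": deque()}

def route_task(task, queues):
    """Append the task to the buffer queue whose task type matches the
    task's attribute; raise if no queue matches."""
    task_type = task["task_type"]
    if task_type not in queues:
        raise KeyError(f"no buffer queue for task type {task_type!r}")
    queues[task_type].append(task)
```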
In some embodiments, the first thread processing tasks in the target buffer queue in batches includes: determining the number of tasks in the target buffer queue; if that number is greater than or equal to a target number threshold, the first thread takes the threshold number of tasks from the target buffer queue as one batch and processes it; if that number is smaller than the threshold, the first thread takes all tasks in the target buffer queue as one batch and processes it.
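The batch-extraction rule reduces to taking min(threshold, queue length) tasks from the head of the queue. A sketch, with an arbitrary threshold value:

```python
from collections import deque

BATCH_THRESHOLD = 4   # illustrative target-number threshold

def take_batch(buf, threshold=BATCH_THRESHOLD):
    """Pop up to `threshold` tasks from the head of the queue as one
    batch; if fewer tasks are available, take them all."""
    batch = []
    while buf and len(batch) < threshold:
        batch.append(buf.popleft())
    return batch
```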
The embodiment of the disclosure provides a device for dynamically allocating processing tasks, deployed in a target device configured with a plurality of buffer queues and a target thread pool containing a plurality of threads. The device comprises: an execution priority determining module, configured to determine the execution priority of each buffer queue in the target device when a first thread has finished processing the current batch of tasks and a preset condition is satisfied; and a target buffer queue determining module, configured to determine, according to the execution priorities, a target buffer queue for the first thread among the buffer queues, so that the first thread processes tasks in the target buffer queue in batches. The current batch of tasks consists of tasks from a first buffer queue; the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads.
The embodiment of the disclosure provides an electronic device, which comprises: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the dynamic allocation processing task method.
The disclosed embodiments provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements any of the methods for dynamically allocating processing tasks described above.
Embodiments of the present disclosure also propose a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the method for dynamically allocating processing tasks described above.
According to the method, device, electronic equipment, and computer-readable storage medium for dynamically allocating processing tasks provided by certain embodiments of the present disclosure: first, the arrangement of multiple buffer queues allows queues and threads to be dynamically matched many-to-many, preventing any one thread from being occupied for a long time and preventing tasks from piling up or blocking in a single buffer queue, thereby improving the overall processing performance of the server. Second, the preset condition determines when the first thread becomes eligible for reassignment, so the first thread is adjusted dynamically according to the actual task-processing situation rather than serving the first buffer queue indefinitely. Third, the target buffer queue for the reassigned first thread is chosen according to the execution priority of each buffer queue, so the thread is matched to the queue that most needs it, and tasks in high-priority queues are processed as soon as possible. Finally, by dynamically allocating the threads in the target thread pool, the method avoids the resource waste of idle threads as far as possible, avoids frequently creating, managing, and destroying threads, reduces system complexity, and improves the overall performance of the system.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. The drawings described below are merely examples of the present disclosure and other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture for a dynamically allocated processing task method or a dynamically allocated processing task device as applied to embodiments of the present disclosure.
FIG. 2 is a schematic diagram illustrating a computer system for deploying a device for dynamically allocating processing tasks, according to an illustrative embodiment.
FIG. 3 is a flowchart illustrating a method of dynamically allocating processing tasks, according to an example embodiment.
FIG. 4 is a block diagram illustrating dynamic allocation of processing tasks, according to an example embodiment.
FIG. 5 is a flowchart illustrating a method of dynamically allocating processing tasks, according to an example embodiment.
FIG. 6 is a flowchart illustrating a method of dynamically allocating processing tasks, according to an example embodiment.
Figs. 7-24 are schematic diagrams illustrating a process of dynamically allocating tasks, according to an exemplary embodiment.
FIG. 25 is a block diagram illustrating a dynamically allocated processing task device, according to an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments can be embodied in many forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
The described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the disclosure.
The drawings are merely schematic illustrations of the present disclosure, in which like reference numerals denote like or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and not necessarily all of the elements or steps are included or performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
In this specification, the terms "a," "an," "the," and "at least one" are used to indicate the presence of one or more elements/components/etc.; the terms "comprising," "including," and "having" are intended to be inclusive and mean that there may be additional elements/components/etc. beyond the listed ones; and the terms "first," "second," "third," etc. are used merely as labels and do not limit the number of their objects.
The following describes example embodiments of the present disclosure in detail with reference to the accompanying drawings.
FIG. 1 illustrates a schematic diagram of an exemplary system architecture for a dynamically allocated processing task method or a dynamically allocated processing task device that may be applied to embodiments of the present disclosure.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like.
For example, the user may send tasks to the server via terminal device 101, 102, or 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, wearable devices, virtual reality devices, smart homes, etc.
The server 105 may be a server providing various services, such as a background management server that supports the terminal devices 101, 102, 103 operated by users. The background management server can analyze and process received data, such as requests, and feed the processing results back to the terminal devices.
The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, big data, and artificial intelligence platforms; the disclosure is not limited in this regard.
The server 105 may determine, for example, an execution priority of each buffer queue in the target device when the first thread finishes processing the current batch of tasks and satisfies a preset condition; determining a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches; the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of a plurality of buffer queues, and the first thread is any one of a plurality of threads.
It should be understood that the numbers of terminal devices, networks, and servers in fig. 1 are merely illustrative; the server 105 may be a single physical server or be composed of multiple servers, and there may be any number of terminal devices, networks, and servers according to actual needs.
Referring now to FIG. 2, a schematic diagram of a computer system 200 suitable for use in implementing a terminal device or server of an embodiment of the present application is shown. The terminal device shown in fig. 2 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiments of the present application.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. In the RAM 203, various programs and data required for the operation of the computer system 200 are also stored. The CPU 201, ROM202, and RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output portion 207 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 208 including a hard disk or the like; and a communication section 209 including a network interface card such as a LAN card, a modem, and the like. The communication section 209 performs communication processing via a network such as the internet. The drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 210 as needed, so that a computer program read therefrom is installed into the storage section 208 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 209, and/or installed from the removable medium 211. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 201.
It should be noted that the computer readable storage medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules and/or units involved in the embodiments of the present application may be implemented in software, or may be implemented in hardware. The described modules and/or units may also be provided in a processor, e.g., may be described as: a processor includes a transmitting unit, an acquiring unit, a determining unit, and a first processing unit. Wherein the names of the modules and/or units do not in some cases constitute limitations on the modules and/or units themselves.
As another aspect, the present application also provides a computer-readable storage medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer-readable storage medium carries one or more programs which, when executed by a device, cause the device to perform functions including: determining the execution priority of each buffer queue in the target equipment under the condition that the first thread finishes processing the current batch of tasks and meets the preset condition; determining a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches; the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of a plurality of buffer queues, and the first thread is any one of a plurality of threads.
FIG. 3 is a flowchart illustrating a method of dynamically allocating processing tasks, according to an example embodiment.
In some embodiments, the above method for dynamically allocating processing tasks may be executed by any electronic device with computing capability (referred to as the target device), for example by the server or a terminal device of the embodiment of fig. 1 above, or by the server and the terminal device together. In the following embodiments the server is taken as the example, but the disclosure is not limited thereto.
In some embodiments, the target device performing the above method for dynamically allocating processing tasks may include a plurality of buffer queues (e.g., task buffer queue 1, task buffer queue 2, ..., task buffer queue M shown in fig. 4, where M is a positive integer greater than or equal to 2). A buffer queue may refer to a queue that buffers tasks in memory.
In some embodiments, the target device performing the above-described method of dynamically allocating processing tasks may further include a target thread pool, where the target thread pool may include one thread pool (see task processing thread pool) as shown in fig. 4, and may also include multiple thread pools, which is not limited in this disclosure. In this embodiment, the target thread pool may include at least one thread, which may be uniformly allocated by the thread resource allocation module. It should be noted that, if the target thread pool is composed of a plurality of thread pools, the thread resource allocation module may allocate threads in the plurality of thread pools uniformly, which is not limited in this disclosure.
A thread is the smallest unit of execution that an operating system can schedule. It is contained in a process and is the actual operating unit within the process. A thread pool is a virtual "pool" holding multiple threads.
In this embodiment, for clarity of illustration, it may be assumed that multiple buffer queues in a target device may include a first buffer queue, and a target thread pool may include a first thread that is processing tasks in the first buffer queue.
Referring to fig. 3, the method for dynamically allocating processing tasks provided in the embodiment of the present disclosure may include the following steps.
In step S302, when the first thread finishes processing the current batch of tasks and satisfies the preset condition, the execution priority of each buffer queue in the target device is determined.
In some embodiments, when the first thread performs task processing on the first buffer queue, it may acquire a single task from the first buffer queue at a time, or it may acquire a plurality of tasks at a time; for example, a target-number-threshold of tasks may be acquired from the first buffer queue at one time for processing, which is not limited in the disclosure. Acquiring tasks in batches reduces the number of interactions between threads and the buffer queue and improves the concurrency of the system.
When the first thread performs task processing on the first buffer queue, whether the first thread has completed the current batch of tasks (which may include a single task or a plurality of tasks) in the first buffer queue may be monitored in real time. If the first thread has completed the current batch of tasks, it may then be judged whether the first thread or the first buffer queue satisfies a preset condition.
The preset condition is considered satisfied when at least one of the following holds: the first thread has continuously completed a target number of batches of tasks for the first buffer queue (i.e., the number of task batches completed by the first thread for the first buffer queue exceeds a target count threshold); the time for which the first thread has been processing tasks in the first buffer queue exceeds a preset time threshold (e.g., the first thread has been processing tasks for the first buffer queue longer than a target time threshold); or there are no pending tasks in the first buffer queue.
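The three trigger conditions above collapse into a single check. The sketch below is a minimal Python illustration; the threshold values, the function name, and its parameters are assumptions for demonstration, not values taken from the disclosure.

```python
import time

# Hypothetical thresholds; the disclosure leaves concrete values to the implementer.
TARGET_BATCH_THRESHOLD = 3    # target number of consecutive batches per queue
TARGET_TIME_THRESHOLD = 5.0   # target time (seconds) a thread may serve one queue

def should_reassign(batches_done, serve_start, queue_pending, now=None):
    """True when the first thread meets any of the three preset conditions."""
    now = time.monotonic() if now is None else now
    return (batches_done >= TARGET_BATCH_THRESHOLD           # batch-count condition
            or (now - serve_start) > TARGET_TIME_THRESHOLD   # serve-time condition
            or queue_pending == 0)                           # empty-queue condition
```

When the check returns False, the thread simply fetches the next batch from the same queue, matching the behavior described in the following paragraphs.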
In some embodiments, if the first thread or the first buffer queue does not meet the preset condition, the first thread continues to acquire a batch of tasks from the first buffer queue for further processing.
In some embodiments, if the first thread or the first buffer queue satisfies any of the above conditions, the first thread may be considered for reassignment.
In some embodiments, when the first thread is reconfigured, it is first necessary to determine the execution priority of each buffer queue in the target device.
In some embodiments, the execution priority may be determined by the number of tasks to be processed in the buffer queue, the number of threads performing task processing for the buffer queue, and the weight of the buffer queue.
The number of tasks to be processed refers to the number of pending tasks in a buffer queue. The weight of a buffer queue refers to a weight set for the buffer queue in advance; it may be adjusted as needed, and adjusting the weights changes the execution priorities of the buffer queues so that higher-priority tasks can be processed as soon as possible.
In some embodiments, the determination of execution priority may be illustrated by taking as an example a second buffer queue of the plurality of buffer queues: acquiring the number of tasks to be processed in a second buffer queue; acquiring the number of threads which are performing task processing for the second buffer queue; acquiring the weight of a second buffer queue; and determining the execution priority of the second buffer queue according to the number of tasks to be processed, the number of threads and the weight of the second buffer queue.
The number of tasks to be processed is positively correlated with the execution priority of the second buffer queue, the weight of the second buffer queue is positively correlated with the execution priority of the second buffer queue, and the number of threads is negatively correlated with the execution priority of the second buffer queue.
For example, the execution priority of the second buffer queue may be determined by formula (1).
Execution priority = (number of pending tasks × queue weight) / number of serving threads (1)
It will be appreciated that the execution priority of any of the plurality of buffer queues in the target device may be determined by the above method.
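Formula (1) translates directly into code. The following sketch is illustrative only; dividing by at least 1 when no thread is serving the queue is an assumed convention, since the disclosure does not address the zero-thread case.

```python
def execution_priority(pending_tasks, weight, serving_threads):
    """Formula (1): pending tasks x queue weight / serving threads.

    max(serving_threads, 1) is an assumption so that a non-empty queue
    with no serving thread still gets a finite, high priority."""
    return pending_tasks * weight / max(serving_threads, 1)
```

With the numbers of the worked example later in this document, a queue holding 10 pending tasks with weight 3 and one serving thread gets priority 30, as in fig. 7.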
In some embodiments, the weight of each buffer queue may be determined as follows. When generating buffer queues, first determine the task types of the tasks to be processed by the target device (e.g., video tasks, text tasks, voice tasks, etc.). Then create one buffer queue for each task type, generating a plurality of buffer queues in the target device (e.g., one buffer queue for video tasks, one for audio tasks, and one for text tasks), so that different buffer queues handle tasks of different types. Finally, determine a weight for each buffer queue according to the importance of its task type; those skilled in the art may adjust the weight of each buffer queue according to actual requirements, which is not limited in the disclosure.
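The per-task-type queue creation described above might look like the following sketch; the task types and weight values are illustrative assumptions, not values from the disclosure.

```python
from collections import deque

# Illustrative weights; in practice they reflect the importance of each task type.
TYPE_WEIGHTS = {"video": 3.0, "audio": 1.5, "text": 1.0}

def create_buffer_queues(task_types):
    """Create one weighted buffer queue per task type."""
    return {t: {"queue": deque(), "weight": TYPE_WEIGHTS.get(t, 1.0)}
            for t in task_types}

queues = create_buffer_queues(["video", "audio", "text"])
```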
Step S304, determining a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches; the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of a plurality of buffer queues, and the first thread is any one of a plurality of threads.
In some embodiments, after the execution priority of each buffer queue is obtained, the buffer queue with the largest execution priority may be taken as a target buffer queue, and the first thread may be allocated to the target buffer queue for task processing.
For example, assuming that the plurality of buffer queues in the target device includes a first buffer queue and a second buffer queue, and the second buffer queue has an execution priority greater than that of all other buffer queues, the second buffer queue may be regarded as the target buffer queue, and the first thread may be allocated to the second buffer queue for task processing.
The first thread processes the tasks in the target buffer queue in batches as follows: determine the number of tasks in the target buffer queue; if the number of tasks in the target buffer queue is greater than or equal to the target number threshold, the first thread acquires a target-number-threshold of tasks from the target buffer queue as one batch and processes that batch; if the number of tasks in the target buffer queue is less than the target number threshold, the first thread acquires all tasks in the target buffer queue as one batch and processes that batch.
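Both branches above reduce to "pop at most the target number threshold of tasks", which a deque expresses compactly. A minimal sketch (names are illustrative):

```python
from collections import deque

def fetch_batch(queue, target_count):
    """Pop up to target_count tasks from the head of the buffer queue;
    if fewer remain, the whole queue is drained as one smaller batch."""
    batch = []
    while queue and len(batch) < target_count:
        batch.append(queue.popleft())
    return batch

q = deque(range(5))
first = fetch_batch(q, 3)   # queue holds >= 3 tasks: a full batch
second = fetch_batch(q, 3)  # only 2 tasks remain: a short final batch
```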
In some embodiments, if two buffer queues in the target device have equal execution priorities and that priority is the maximum, the buffer queue with fewer executing threads may be taken as the target buffer queue.
If the plurality of buffer queues of the target device further includes a third buffer queue and a fourth buffer queue, and the first thread or the first buffer queue satisfies the preset condition, and the execution priorities of the third buffer queue and the fourth buffer queue are the same and are the maximum execution priorities among the execution priorities of the plurality of buffer queues, then the target buffer queue corresponding to the first thread may be determined according to the thread numbers corresponding to the third buffer queue and the fourth buffer queue.
The target buffer queue corresponding to the first thread is determined according to the thread counts of the third and fourth buffer queues as follows: if the number of threads performing task processing for the fourth buffer queue is larger than the number of threads performing task processing for the third buffer queue, the third buffer queue is taken as the target buffer queue.
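The selection rule, highest execution priority first with fewer serving threads as the tie-breaker, can be expressed as a single key function. This is a sketch under the assumption that each queue exposes its current priority and thread count:

```python
def pick_target_queue(queues):
    """Pick the queue with the highest execution priority; on a tie,
    prefer the queue with fewer threads currently serving it."""
    return max(queues, key=lambda q: (q["priority"], -q["threads"]))

third = {"name": "q3", "priority": 10, "threads": 1}
fourth = {"name": "q4", "priority": 10, "threads": 2}
target = pick_target_queue([third, fourth])  # tie on priority: q3 wins
```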
According to the technical scheme provided by this embodiment, setting multiple buffer queues allows queues and threads to be dynamically allocated many-to-many, which prevents any single thread from being occupied for a long time and prevents tasks from piling up and blocking in one buffer queue, improving the overall processing performance of the server. Second, the method determines when to dynamically reassign the first thread via the target count threshold, the target time threshold, and the presence of pending tasks, so that the first thread is adjusted according to the actual task-processing situation instead of permanently serving the first buffer queue. In addition, the method determines the target buffer queue for the reassigned first thread according to the execution priority of each buffer queue, so that the first thread is matched to a queue by priority and tasks in queues with high execution priority are processed as soon as possible. Finally, by dynamically allocating threads within the target thread pool, the method avoids the resource waste of idle threads as far as possible, avoids frequently creating, managing, and destroying threads, reduces system complexity, and improves overall system performance.
FIG. 5 is a flowchart illustrating a method of dynamically deploying processing tasks, according to an example embodiment.
In some embodiments, the plurality of buffer queues of the target device may further include a fifth buffer queue in which no tasks are currently pending, and the target thread pool may further include a second thread that is the daemon thread of the fifth buffer queue. A daemon thread only executes tasks in its corresponding buffer queue and does not process tasks of other queues; this design allows a new task inserted into an otherwise empty queue to be handled as soon as possible.
Referring to fig. 5, the above-described method of dynamically allocating processing tasks may include the following steps.
Step S502, obtaining task attributes of the new task.
Task attributes may refer to features that enable abstract descriptions of tasks, such as the functional type of task, the priority of task, the peripheral system to which the task corresponds, and the like, which is not limiting in this disclosure.
In step S504, if the fifth buffer queue matches the task attribute of the new task, the new task is inserted into the fifth buffer queue.
In some embodiments, the buffer queue in the target device may also have its own queue properties. The queue attribute may refer to a feature capable of abstracting a buffer queue, such as a queue weight of the queue, a queue function, a peripheral system corresponding to the queue (e.g., a banking system, a merchant system, etc.), and the disclosure is not limited thereto.
In some embodiments, a new task may be randomly assigned to any one of the buffer queues when it is obtained, or may be assigned to a buffer queue for which matching is successful by task attribute matching.
In some embodiments, the task attribute of the new task may be matched with the queue attribute of each buffer queue in the target device by a certain matching rule. For example, a new task with a matching priority may be matched to the buffer queue by the matching rule, and a new task with the same function type may be matched to the buffer queue by the matching rule, which is not limited by the present disclosure.
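As one possible matching rule (hypothetical; the disclosure leaves the concrete rule open), a queue can be said to match a task when every queue attribute that the task also carries agrees in value:

```python
def match_queue(task_attrs, queues):
    """Return the first queue whose attributes agree with the task's
    attributes on every shared key, or None if nothing matches."""
    for q in queues:
        if all(task_attrs.get(k) == v
               for k, v in q["attrs"].items() if k in task_attrs):
            return q
    return None

# Illustrative queues keyed by peripheral system, as in the examples above.
queues = [{"name": "bank", "attrs": {"system": "bank"}},
          {"name": "merchant", "attrs": {"system": "merchant"}}]
hit = match_queue({"system": "merchant", "priority": 2}, queues)
```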
In this embodiment, assuming that the fifth buffer queue matches the task attribute of the new task, the new task may be inserted into the fifth buffer queue.
In step S506, it is detected that a new task is inserted into the fifth buffer queue.
In step S508, the second thread is allocated to the fifth buffer queue so that the second thread processes a new task in the fifth buffer queue.
When a new task is detected in the fifth buffer queue, the task can be processed by the daemon thread of the fifth buffer queue, i.e., the second thread.
By setting daemon threads for the buffer queues, the embodiment can ensure that corresponding threads can process new tasks in any buffer queue immediately when the new tasks are inserted.
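One way to sketch this daemon-thread design in Python (illustrative only; the disclosure does not prescribe an implementation) is to pair each buffer queue with a dedicated thread that sleeps on a condition variable and wakes the moment a task is inserted:

```python
import threading
from collections import deque

class GuardedQueue:
    """Buffer queue with a dedicated daemon thread serving only this queue."""

    def __init__(self, handler):
        self.tasks = deque()
        self.handler = handler
        self.cond = threading.Condition()
        self.closed = False
        self.guard = threading.Thread(target=self._serve, daemon=True)
        self.guard.start()

    def insert(self, task):
        with self.cond:
            self.tasks.append(task)
            self.cond.notify()          # wake the daemon as soon as a task arrives

    def close(self):
        with self.cond:
            self.closed = True
            self.cond.notify()
        self.guard.join()               # daemon drains remaining tasks, then exits

    def _serve(self):
        while True:
            with self.cond:
                while not self.tasks and not self.closed:
                    self.cond.wait()    # no CPU used while the queue is empty
                if not self.tasks and self.closed:
                    return
                task = self.tasks.popleft()
            self.handler(task)          # process outside the lock

done = []
gq = GuardedQueue(done.append)
gq.insert("new-task")
gq.close()
```

Because the daemon thread blocks in `cond.wait()` while its queue is empty, it consumes no CPU, yet a newly inserted task is picked up immediately, which is exactly the property this embodiment aims for.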
FIG. 6 is a flowchart illustrating a method of dynamically deploying processing tasks, according to an example embodiment. Referring to fig. 6, the above-described method of dynamically allocating processing tasks may include the following steps.
S1: an initial state. I.e. the current thread is in an initial state.
S2: enter an idle state. I.e., the current thread is idle.
S3: query the priority of each queue from the thread resource allocation module, i.e., the execution priority of at least one buffer queue in the target device.
S4: determine whether the priorities are all 0.
S5: the thread is assigned to the queue with the highest priority.
If a buffer queue whose execution priority is not 0 exists in the target device, the buffer queue with the largest execution priority is taken as the target buffer queue, and the current thread is allocated to it.
S6: a batch of tasks is obtained from the assigned queue.
The current thread obtains a batch of tasks from the target buffer queue, which may include at least one task.
S7: a batch of tasks is processed.
S8: whether the maximum number of acquired batches is exceeded.
After the current thread processes a batch of tasks, it is determined whether the number of batches the current thread has acquired from the target buffer queue exceeds the maximum batch count (the target count threshold).
If the number of batches the current thread has acquired from the target buffer queue does not exceed the target count threshold, the current thread continues to acquire a batch of tasks from the target buffer queue for processing; if it does exceed the threshold, the current thread enters the idle state and is reassigned according to the execution priority of each buffer queue.
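Steps S1 to S8 can be condensed into a simulation of one thread cycle. This sketch is an illustration under assumed data structures, not the literal flow of fig. 6; in particular, S8 is modeled as a batch-count limit:

```python
def worker_cycle(queues, max_batches, batch_size):
    """One pass of the S1-S8 flow for a single thread: query priorities,
    attach to the best queue, process up to max_batches batches, go idle.
    queues maps name -> {'tasks': list, 'weight': float, 'threads': int}."""
    def priority(q):
        return len(q["tasks"]) * q["weight"] / max(q["threads"], 1)

    processed = []
    if all(priority(q) == 0 for q in queues.values()):
        return processed                      # S4: all priorities 0, stay idle
    name = max(queues, key=lambda n: priority(queues[n]))  # S5: best queue
    q = queues[name]
    q["threads"] += 1
    for _ in range(max_batches):              # S8: batch-count limit
        batch, q["tasks"] = q["tasks"][:batch_size], q["tasks"][batch_size:]
        if not batch:
            break
        processed.extend(batch)               # S6/S7: fetch and process a batch
    q["threads"] -= 1                         # back to idle, ready for reassignment
    return processed

qs = {"a": {"tasks": [1, 2, 3, 4], "weight": 2.0, "threads": 0},
      "b": {"tasks": [5], "weight": 1.0, "threads": 0}}
out = worker_cycle(qs, max_batches=3, batch_size=2)
```

After the cycle the thread is idle again, and the next call to `worker_cycle` re-queries the priorities, mirroring the reassignment step.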
To more clearly illustrate this dynamic deployment process task approach, the present disclosure will be described in terms of specific examples.
First, a virtual traffic scene is preset (refer to fig. 7):
1. the task processing thread pool of the server side has 10 threads, namely thread 1 to thread 10.
2. The server has 4 task buffer queues, wherein the weight value of the queue 1 is 3, the weight value of the queue 2 is 1.5, and the weight values of the queues 3 and 4 are 1.
3. The task processing time in the queues 1 to 3 is 1 second, and the task processing time in the queue 4 is 3 seconds.
4. At the beginning of the simulation flow, there are 10 tasks to be processed in each queue.
5. At 4 seconds, 10 more tasks to be processed will be inserted in queue 1 at the end of the queue.
6. The thread pulls one task from the queue at a time for processing.
7. The thread maximum acquisition batch number is 3 times.
The entire model processing is described below.
1. In the initial state of the system, the thread 1-thread 4 are daemon threads of the queue 1-queue 4 respectively. Thread 5 through thread 10 are idle threads.
2. And distributing the processing threads according to the thread resource allocation module.
(1) Distribution thread 5 (the process of FIGS. 7-8)
As shown in fig. 7, the priority of the queue 1 is 30, and the thread 5 is allocated to process the task in the queue 1, and the result shown in fig. 8 is obtained.
(2) Distributing threads 6,7 (the process of FIGS. 8-9)
As shown in fig. 8, the priorities of the queues 1 and 2 are 15, the task in the processing queue 2 is allocated to the thread 6, and the task in the processing queue 1 is allocated to the thread 7, and the result shown in fig. 9 is obtained.
(3) Distributing threads 8,9, 10 (the process of FIGS. 9-10)
As shown in fig. 9, the priorities of the queues 1, 3 and 4 are 10, the task in the processing queue 1 is allocated to the thread 8, the task in the processing queue 3 is allocated to the thread 9, and the task in the processing queue 4 is allocated to the thread 10, and the result shown in fig. 10 is obtained.
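The allocation of threads 5 to 10 in figs. 7 to 10 can be reproduced with formula (1) and a greedy loop. The sketch below is an illustration; with the fewer-threads tie-breaker, the order in which individual ties are resolved may differ slightly from the figures, but the resulting per-queue thread counts match the initial state described in the later analysis (4/2/2/2):

```python
def execution_priority(q):
    return q["pending"] * q["weight"] / max(q["threads"], 1)

# Initial state of the example: 10 pending tasks per queue, one daemon
# thread already attached to each, weights 3 / 1.5 / 1 / 1.
queues = {i: {"pending": 10, "weight": w, "threads": 1}
          for i, w in zip((1, 2, 3, 4), (3.0, 1.5, 1.0, 1.0))}

assignments = []
for thread_id in range(5, 11):           # idle threads 5..10
    best = max(queues, key=lambda n: (execution_priority(queues[n]),
                                      -queues[n]["threads"]))
    queues[best]["threads"] += 1
    assignments.append((thread_id, best))

thread_counts = {n: q["threads"] for n, q in queues.items()}
```

Queue 1 starts at priority 10 × 3 / 1 = 30, so thread 5 goes there first, exactly as in fig. 7.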
3. After the model was executed for 1 second (see fig. 11).
Queue 1 processes 4 tasks, queue 2 processes 2 tasks, queue 3 processes 2 tasks, and 2 tasks are in process in queue 4 (1 second is performed for all 2 tasks).
4. After the model was executed for 2 seconds (see fig. 12).
Queue 1 processes 8 tasks, queue 2 processes 4 tasks, queue 3 processes 4 tasks, and 2 tasks are in process in queue 4 (2 tasks are all executed for 2 seconds).
5. Dynamically adjusting thread policies (the process of fig. 12-13).
Referring to fig. 12, since queue 1 has only 2 pending tasks but 4 processing threads, thread 7 is reassigned to process tasks in queue 2 and thread 8 is reassigned to process tasks in queue 4, giving the result shown in fig. 13.
6. After 3 seconds of model execution (see fig. 14).
The queue 1 processes all 10 tasks, the queue 2 processes 7 tasks, the queue 3 processes 6 tasks, and 1 task in the queue 4 is in process (1 second is executed) and 2 tasks are processed.
7. Dynamically adjusting thread policy (process of FIGS. 14-15)
Since queue 1 has no tasks waiting for processing, thread 5 is reassigned to process tasks in queue 4, giving the result shown in FIG. 15.
8. After the model was executed for 4 seconds (see fig. 16).
The queue 1 is newly added with 10 tasks, the queue 2 processes all 10 tasks, the queue 3 processes 8 tasks, 4 tasks in the queue 4 are in process (1 task is executed for 2 seconds, 3 tasks are executed for 1 second), and 2 tasks are processed.
9. Dynamically adjusting thread policy (process of FIGS. 16-17)
Since queue 2 has no tasks waiting for processing and queue 1 has a priority of 30, threads 6 and 7 are reassigned to process tasks in queue 1, giving the result shown in FIG. 17.
10. After 5 seconds of model execution (see fig. 18).
The queue 1 processes 13 tasks, the queue 2 processes all 10 tasks, the queue 3 processes 10 tasks, 3 tasks in the queue 4 are in process (3 tasks are executed for 2 seconds), and 3 tasks are processed.
11. Dynamically adjusting thread policies (fig. 18-19 process).
Since queue 3 has no tasks waiting for processing and queue 1 has a priority of 7, thread 9 is reassigned to process tasks in queue 1, giving the result shown in FIG. 19.
12. After the model was executed for 6 seconds (see fig. 20).
The queue 1 processes 17 tasks, the queue 2 processes all 10 tasks, the queue 3 processes 10 tasks, 1 task in the queue 4 is in process (the task is executed for 1 second), and 6 tasks are processed.
13. Dynamically adjusting thread policies (the process of fig. 20-21).
Thread 9 adjusts back to the idle thread region to achieve the result shown in figure 21.
14. After 7 seconds of execution of the model (see fig. 22).
The queue 1 processes all 20 tasks, the queue 2 processes all 10 tasks, the queue 3 processes all 10 tasks, and 4 tasks in the queue 4 are in process (1 task has executed for 2 seconds, 3 tasks for 1 second), with 6 tasks processed.
15. Dynamically adjusting thread policies (the process of fig. 22-23).
Threads 6 and 7 adjust back to the idle thread region.
16. After the model was executed for 9 seconds (see fig. 24).
Queue 1 processes all 20 tasks, queue 2 processes all 10 tasks, queue 3 processes all 10 tasks, and queue 4 processes all 10 tasks.
The change in the number of processing threads per queue was counted to obtain the results shown in table 1.
TABLE 1

            Initial  1 s  2 s  3 s  4 s  5 s  6 s  7 s  8 s  9 s
Queue 1        4      4    2    1    3    4    3    1    0    0
Queue 2        2      2    3    3    1    1    1    1    0    0
Queue 3        2      2    2    2    2    1    1    1    0    0
Queue 4        2      2    3    4    4    4    4    4    3    0
And counting the change condition of the number of the processing tasks of each queue to obtain the result shown in the table 2.
TABLE 2

(Table 2 appears only as an image in the original publication; it records the number of tasks processed by each queue at each second of the simulation.)
From the above analysis of the data structure and case handling process, it can be found that:
1. In the initial state of the system, threads are allocated according to the execution priority of the queues, and queues with the same execution priority receive an even allocation. For example: queue 1 (weight 3) is allocated 4 threads, while queues 2 (weight 1.5), 3 (weight 1), and 4 (weight 1) are allocated 2 threads each.
2. From the perspective of thread resource allocation, thread resources are preferentially allocated to queues with high execution priority; after most queues have finished their tasks, threads are shifted to the queues whose tasks take longer to process, achieving dynamic adjustment of thread resources.
3. From the final result of task processing, the task execution efficiency in the queue with high execution priority is relatively high. For example: the 20 tasks in queue 1 (weight 3) are processed for 7 seconds, the 10 tasks in queue 2 (weight 1.5) are processed for 4 seconds, and the 10 tasks in queue 3 (weight 1) are processed for 5 seconds.
In summary, the scheme provided by the embodiment of the disclosure has the following technical effects:
1. the data structure of the single thread pool corresponding to the multiple queues is adopted, so that the thread resources are dynamically allocated in the execution process of different types of tasks.
2. The design scheme that multiple queues are adopted and weight values are set for each queue ensures that tasks with high levels can be processed preferentially.
3. The multi-queue design effectively avoids the problem of tasks with excessively long execution times occupying too many threads and thereby degrading the server's response time for other types of tasks.
4. The design mode of dynamically adjusting the thread resources by adopting the single thread pool avoids the operation of frequently creating, managing and destroying the thread pool, reduces the system overhead and simultaneously maximally utilizes the thread resources.
5. A daemon thread is set for each queue that can be processed as soon as an empty queue inserts a new task.
6. When the thread acquires the tasks from the buffer queue, a mode of acquiring a batch of tasks at a time is adopted, so that the processing efficiency is effectively improved.
FIG. 25 is a block diagram illustrating a dynamically allocated processing task device, according to an example embodiment. Referring to fig. 25, a dynamically allocated processing task device 2500 provided in an embodiment of the present disclosure may be applied to a target device, where the target device includes a plurality of buffer queues and a target thread pool, the plurality of buffer queues includes a first buffer queue, the target thread pool includes a first thread, and the first thread processes tasks in the first buffer queue. The dynamically allocated processing task device may include: an execution priority determining module 2501 and a target buffer queue determining module 2502.
The execution priority determining module 2501 may be configured to determine an execution priority of each buffer queue in the target device when the first thread finishes processing the current batch of tasks and meets a preset condition; the target buffer queue determining module 2502 may be configured to determine a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches; the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads.
In some embodiments, the first thread meeting the preset condition may include at least one of: the first thread continuously completes target batch tasks aiming at the first buffer queue; the time of the first thread processing the task in the first buffer queue exceeds a preset time threshold; and no task to be processed exists in the first buffer queue.
In some embodiments, the execution priority determination module 2501 may comprise: and a priority determining unit.
The priority determining unit may be configured to repeatedly execute the following steps with any one buffer queue of the plurality of buffer queues being a second buffer queue: acquiring the number of tasks to be processed in the second buffer queue; acquiring the number of threads which are performing task processing for the second buffer queue; acquiring the weight of the second buffer queue; and determining the execution priority of the second buffer queue according to the number of tasks to be processed, the number of threads and the weight of the second buffer queue.
In some embodiments, the number of tasks to be processed is positively correlated with the execution priority of the second buffer queue, the weight of the second buffer queue is positively correlated with the execution priority of the second buffer queue, and the number of threads is negatively correlated with the execution priority of the second buffer queue.
In some embodiments, the dynamically adapting processing task device 2500 may further include: the device comprises a task type determining unit, a buffer queue creating unit and a weight determining unit.
Wherein the task type determining unit may be configured to determine the task types of the tasks to be processed by the target device; the buffer queue creating unit may be configured to create a buffer queue for each task type, so as to generate a plurality of buffer queues in the target device, so that different buffer queues process tasks of different task types; the weight determining unit may be configured to determine weights for the corresponding buffer queues according to the importance degree of each task type.
In some embodiments, the target buffer queue determination module 2502 may include: and a highest execution priority determining unit.
The highest execution priority determining unit may be configured to determine a target buffer queue according to execution priorities of the plurality of buffer queues; wherein, the execution priority of the target buffer queue is highest.
In some embodiments, the plurality of buffer queues further comprises a third buffer queue and a fourth buffer queue; wherein the target buffer queue determining module 2502 may comprise: and an execution priority same judging unit.
The execution priority level same judging unit may be configured to determine, when the execution priority levels of the third buffer queue and the fourth buffer queue are the same and are the maximum execution priority levels among the execution priority levels of the plurality of buffer queues, a target buffer queue corresponding to the first thread according to the thread numbers corresponding to the third buffer queue and the fourth buffer queue.
In some embodiments, the execution priority level same judging unit may include: a judgment subunit and a target buffer queue determination subunit.
The judging subunit may be configured to judge whether the number of threads performing task processing for the fourth buffer queue is greater than the number of threads performing task processing for the third buffer queue; if so, the target buffer queue determination subunit takes the third buffer queue as the target buffer queue.
In some embodiments, the plurality of buffer queues includes a fifth buffer queue, no tasks to be processed are in the fifth buffer queue, the target thread pool includes a second thread, and the second thread is a daemon thread of the fifth buffer queue; wherein, the dynamic allocation processing task device 2500 may further include: a new task detection unit and a second thread allocation unit.
The new task detection unit may be configured to detect that a new task is inserted into the fifth buffer queue; the second thread allocation unit may be configured to allocate the second thread to the fifth buffer queue so that the second thread processes the new task in the fifth buffer queue.
In some embodiments, the dynamic provisioning processing task device 2500 may further include: a task attribute determination unit and a task insertion unit.
The task attribute determining unit may be configured to obtain the task attribute of a new task before the new task is detected in the fifth buffer queue; the task inserting unit may be configured to insert the new task into the fifth buffer queue if the fifth buffer queue matches the task attribute of the new task.
In some embodiments, the target buffer queue determination module 2502 may include: the system comprises a task number determining unit, a number threshold first judging unit and a number threshold second judging unit.
Wherein the task number determining unit may be configured to determine the number of tasks in the target buffer queue; the first number threshold judging unit may be configured to, if the number of tasks in the target buffer queue is greater than or equal to a target number threshold, obtain, by the first thread, the target number threshold of tasks from the target buffer queue as a batch of tasks, so that the first thread processes the batch of tasks; the number threshold second judging unit may be configured to, if the number of tasks in the target buffer queue is smaller than the target number threshold, obtain, by the first thread, all tasks in the target buffer queue as tasks of one batch, so that the first thread processes the tasks of the one batch.
Since each functional module of the dynamic allocation processing task device 2500 in the exemplary embodiment of the present disclosure corresponds to the steps of the exemplary embodiment of the dynamic allocation processing task method described above, the description thereof will not be repeated here.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, aspects of embodiments of the present disclosure may be embodied in a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a U-disk, a mobile hard disk, etc.), comprising instructions for causing a computing device (which may be a personal computer, a server, a mobile terminal, or a smart device, etc.) to perform a method in accordance with embodiments of the present disclosure, such as one or more of the steps shown in fig. 3.
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the disclosure is not limited to the exact construction and arrangements that have been described above and illustrated in the drawings, and that various modifications and equivalent arrangements may be made without departing from the spirit and scope of the appended claims.

Claims (14)

1. A dynamic allocation processing task method, characterized in that the method is applied to a target device, wherein a plurality of buffer queues and a target thread pool containing a plurality of threads are configured in the target device; wherein the method comprises the following steps:
under the condition that the first thread finishes processing the current batch of tasks and meets the preset condition, determining the execution priority of each buffer queue in the target equipment;
determining a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches;
the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads.
2. The method of claim 1, wherein the preset condition comprises at least one of:
the first thread has consecutively completed a target number of batches of tasks for the first buffer queue;
the time for which the first thread has processed tasks in the first buffer queue exceeds a preset time threshold;
and no task to be processed exists in the first buffer queue.
3. The method of claim 1, wherein determining execution priority of each buffer queue in the target device comprises:
taking any one buffer queue of the plurality of buffer queues as a second buffer queue, and repeatedly executing the following steps:
acquiring the number of tasks to be processed in the second buffer queue;
acquiring the number of threads which are performing task processing for the second buffer queue;
acquiring the weight of the second buffer queue;
and determining the execution priority of the second buffer queue according to the number of tasks to be processed, the number of threads and the weight of the second buffer queue.
4. A method according to claim 3, wherein the number of tasks to be processed is positively correlated with the execution priority of the second buffer queue, the weight of the second buffer queue is positively correlated with the execution priority of the second buffer queue, and the number of threads is negatively correlated with the execution priority of the second buffer queue.
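One scoring rule consistent with claims 3 and 4 can be sketched as follows. The exact formula is an assumption by the editor — the claims only fix the direction of each correlation (pending tasks and weight raise the priority, already-assigned threads lower it):

```python
def execution_priority(pending: int, active_threads: int, weight: float) -> float:
    """Priority grows with the number of pending tasks and the queue weight,
    and shrinks as more threads already serve the queue (illustrative formula)."""
    return weight * pending / (active_threads + 1)
```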
5. A method according to claim 3, wherein the method further comprises:
determining the task types of tasks to be processed by the target device;
creating a buffer queue for each task type respectively to generate a plurality of buffer queues in the target equipment so that different buffer queues process tasks of different task types;
and determining weights for the corresponding buffer queues according to the importance degrees of the task types.
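The queue-per-task-type setup of claim 5 can be sketched as a small constructor. Using the importance value directly as the queue weight, and the dict-of-deques representation, are illustrative assumptions:

```python
from collections import deque

def build_queues(importance_by_type: dict) -> dict:
    """Create one buffer queue per task type, with each queue's weight
    derived from the importance of its task type (sketch)."""
    return {
        task_type: {"queue": deque(), "weight": importance}
        for task_type, importance in importance_by_type.items()
    }
```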
6. The method of claim 1, wherein determining a target buffer queue corresponding to the first thread in each buffer queue based on the execution priority comprises:
determining, among the execution priorities of the plurality of buffer queues, the buffer queue with the highest execution priority as the target buffer queue.
7. The method of claim 1, wherein the plurality of buffer queues further comprises a third buffer queue and a fourth buffer queue; wherein determining a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority comprises:
and determining a target buffer queue corresponding to the first thread according to the number of threads corresponding to the third buffer queue and the fourth buffer queue, in a case where the execution priorities of the third buffer queue and the fourth buffer queue are the same and are the highest among the execution priorities of the plurality of buffer queues.
8. The method of claim 7, wherein determining a target buffer queue corresponding to the first thread based on the number of threads corresponding to a third buffer queue and a fourth buffer queue, comprises:
if the number of threads performing task processing for the fourth buffer queue is greater than the number of threads performing task processing for the third buffer queue, taking the third buffer queue as the target buffer queue.
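The tie-breaking rule of claims 7 and 8 — among queues tied at the highest execution priority, pick the one currently served by the fewest threads — can be sketched as follows. The `(name, priority, active_threads)` tuple layout is an illustrative assumption:

```python
def pick_target_queue(candidates: list) -> str:
    """Among queues tied at the highest execution priority, choose the one
    currently served by the fewest threads (sketch of claims 7-8).
    `candidates` is a list of (queue_name, priority, active_threads) tuples."""
    top = max(priority for _, priority, _ in candidates)
    tied = [c for c in candidates if c[1] == top]
    # Fewer active threads wins the tie, per claim 8.
    return min(tied, key=lambda c: c[2])[0]
```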
9. The method of claim 1, wherein the plurality of buffer queues includes a fifth buffer queue, wherein no tasks are pending in the fifth buffer queue, wherein the target thread pool includes a second thread, and wherein the second thread is a daemon thread of the fifth buffer queue; wherein the method further comprises:
detecting that a new task is inserted into the fifth buffer queue;
the second thread is allocated to the fifth buffer queue so that the second thread processes the new task in the fifth buffer queue.
10. The method of claim 9, further comprising, before detecting that the new task is inserted into the fifth buffer queue:
acquiring task attributes of the new task;
and if the fifth buffer queue is matched with the task attribute of the new task, inserting the new task into the fifth buffer queue.
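The attribute-matching insertion of claim 10 can be sketched as a lookup from task attribute to buffer queue. The dict-of-deques representation and the `"attr"` key are illustrative assumptions:

```python
from collections import deque

def insert_new_task(task: dict, queues_by_attr: dict) -> bool:
    """Insert the task into the buffer queue whose configured attribute
    matches the task's attribute; return False if no queue matches (sketch)."""
    target = queues_by_attr.get(task.get("attr"))
    if target is None:
        return False
    target.append(task)
    return True
```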
11. The method of claim 1, wherein the first thread processes tasks in the target buffer queue in batches, comprising:
determining the number of tasks in the target buffer queue;
if the number of tasks in the target buffer queue is greater than or equal to a target number threshold, the first thread acquires the target number threshold number of tasks from the target buffer queue as a batch of tasks, so that the first thread processes the batch of tasks;
if the number of tasks in the target buffer queue is smaller than the target number threshold, the first thread acquires all tasks in the target buffer queue as a batch of tasks, so that the first thread processes the batch of tasks.
12. A dynamic allocation processing task device, characterized by being deployed in a target device, wherein a plurality of buffer queues and a target thread pool containing a plurality of threads are configured in the target device; wherein the device comprises:
the execution priority determining module is used for determining the execution priority of each buffer queue in the target equipment under the condition that the first thread finishes processing the current batch of tasks and the preset condition is met;
a target buffer queue determining module, configured to determine a target buffer queue corresponding to the first thread in each buffer queue according to the execution priority, so that the first thread processes tasks in the target buffer queue in batches;
the current batch of tasks is tasks in a first buffer queue, the first buffer queue is any one of the plurality of buffer queues, and the first thread is any one of the plurality of threads.
13. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-11.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-11.
CN202111293764.0A 2021-11-03 2021-11-03 Dynamic allocation processing task method and device, electronic equipment and readable storage medium Pending CN116069518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111293764.0A CN116069518A (en) 2021-11-03 2021-11-03 Dynamic allocation processing task method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111293764.0A CN116069518A (en) 2021-11-03 2021-11-03 Dynamic allocation processing task method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116069518A true CN116069518A (en) 2023-05-05

Family

ID=86177478

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111293764.0A Pending CN116069518A (en) 2021-11-03 2021-11-03 Dynamic allocation processing task method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116069518A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117149582A (en) * 2023-10-26 2023-12-01 井芯微电子技术(天津)有限公司 Pseudo-thread scheduling monitoring alarm method and device, electronic equipment and storage medium
CN117149582B (en) * 2023-10-26 2024-01-23 井芯微电子技术(天津)有限公司 Pseudo-thread scheduling monitoring alarm method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
WO2019205371A1 (en) Server, message allocation method, and storage medium
CN107341050B (en) Service processing method and device based on dynamic thread pool
CN107832143B (en) Method and device for processing physical machine resources
US20200348977A1 (en) Resource scheduling methods, device and system, and central server
CN113641457A (en) Container creation method, device, apparatus, medium, and program product
US10778807B2 (en) Scheduling cluster resources to a job based on its type, particular scheduling algorithm,and resource availability in a particular resource stability sub-levels
US10437645B2 (en) Scheduling of micro-service instances
US10733022B2 (en) Method of managing dedicated processing resources, server system and computer program product
CN114155026A (en) Resource allocation method, device, server and storage medium
CN113238861A (en) Task execution method and device
CN116069518A (en) Dynamic allocation processing task method and device, electronic equipment and readable storage medium
Patel et al. A survey on load balancing in cloud computing
CN114116173A (en) Method, device and system for dynamically adjusting task allocation
CN111062572A (en) Task allocation method and device
CN110113176B (en) Information synchronization method and device for configuration server
CN108965364B (en) Resource allocation method, device and system
CN111813541B (en) Task scheduling method, device, medium and equipment
CN113449994A (en) Assignment method, assignment device, electronic device, medium, and program product for job ticket
CN111290842A (en) Task execution method and device
CN107634978B (en) Resource scheduling method and device
CN107045452B (en) Virtual machine scheduling method and device
CN116188240B (en) GPU virtualization method and device for container and electronic equipment
CN109842665B (en) Task processing method and device for task allocation server
Bae et al. EIMOS: Enhancing interactivity in mobile operating systems
US11093281B2 (en) Information processing apparatus, control method, and program to control allocation of computer resources for different types of tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40089280

Country of ref document: HK