CN116166435A - Task processing method and device based on threads, electronic equipment and storage medium - Google Patents

Task processing method and device based on threads, electronic equipment and storage medium Download PDF

Info

Publication number
CN116166435A
Authority
CN
China
Prior art keywords
subtask
processing
threads
thread
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310180869.8A
Other languages
Chinese (zh)
Inventor
卿力
蒋宁
吴海英
罗仕杰
赵飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Consumer Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Consumer Finance Co Ltd filed Critical Mashang Consumer Finance Co Ltd
Priority to CN202310180869.8A priority Critical patent/CN116166435A/en
Publication of CN116166435A publication Critical patent/CN116166435A/en
Pending legal-status Critical Current

Classifications

    • G06F9/5027 Allocation of resources (e.g. of the central processing unit [CPU]) to service a request, the resource being a machine (e.g. CPUs, servers, terminals)
    • G06F9/5038 Allocation of resources to service a request, considering the execution order of a plurality of tasks (e.g. taking priority or time dependency constraints into consideration)
    • G06F9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G06F2209/5011 Pool
    • G06F2209/5018 Thread allocation
    • G06F2209/5021 Priority
    • Y02D10/00 Energy efficient computing (e.g. low power processors, power management or thermal management)

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

The application provides a thread-based task processing method and apparatus, an electronic device, and a storage medium, and belongs to the field of computer technology. The method includes the following steps: dividing a task to be processed into a plurality of subtasks according to service scenarios; determining time allocation information for each subtask according to the service scenario to which it belongs; acquiring information of processing resources, where the information is used to reflect the types and quantities of the processing resources; determining thread configuration information for each subtask according to the information of the processing resources and the attribute information of each subtask; and configuring threads for each subtask according to its time allocation information and thread configuration information, and executing the corresponding subtask through the configured threads. According to the embodiments of the application, the number of threads can be set reasonably and the resource utilization rate can be improved.

Description

Task processing method and device based on threads, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a task processing method and apparatus based on threads, an electronic device, and a storage medium.
Background
Data flow and resource allocation (e.g., list flow) have become indispensable operations in enterprise operation. In the related art, such tasks are generally processed with either a fixed thread pool technique or a dynamic thread pool technique: a fixed number of threads is set when tasks are processed based on the fixed thread pool technique, while the number of threads is generally adjusted manually when tasks are processed based on the dynamic thread pool technique. The number of threads set in either manner may not be reasonable, which easily results in low resource utilization.
Disclosure of Invention
The application provides a thread-based task processing method and apparatus, an electronic device, and a storage medium, which can set the number of threads reasonably and improve the resource utilization rate.
In a first aspect, the present application provides a thread-based task processing method, including: dividing a task to be processed into a plurality of subtasks according to service scenarios; determining time allocation information of each subtask according to the service scenario to which the subtask belongs, wherein the time allocation information is used to represent the processing time allocated to the subtask in different processing states; acquiring information of processing resources, wherein the information of the processing resources is used to reflect the types and quantities of the processing resources; determining thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask, wherein the thread configuration information is used to represent the thread types and thread numbers configured for the subtask in different processing states; and configuring threads for each subtask according to the time allocation information and the thread configuration information of the subtask, and executing the corresponding subtask through the configured threads.
In a second aspect, the present application provides a thread-based task processing device, including: a division module, configured to divide a task to be processed into a plurality of subtasks according to service scenarios; an allocation module, configured to determine time allocation information of each subtask according to the service scenario to which the subtask belongs, wherein the time allocation information is used to represent the processing time allocated to the subtask in different processing states; an acquisition module, configured to acquire information of processing resources, wherein the information of the processing resources is used to reflect the types and quantities of the processing resources; a determining module, configured to determine thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask, wherein the thread configuration information is used to represent the thread types and thread numbers configured for the subtask in different processing states; and a configuration module, configured to configure threads for each subtask according to the time allocation information and the thread configuration information of the subtask, and to execute the corresponding subtask through the configured threads.
In a third aspect, the present application provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores one or more computer programs executable by the at least one processor, the one or more computer programs being executed by the at least one processor to enable the at least one processor to perform the thread-based task processing method described above.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the thread-based task processing method described above.
In the embodiments provided by the application, firstly, upon receiving a task request, the task to be processed corresponding to the task request is divided into a plurality of subtasks according to service scenarios, which facilitates the subsequent allocation of time slices to the corresponding subtasks for different service scenarios. Secondly, the time allocation information of each subtask is determined according to the service scenario to which it belongs, where the time allocation information represents the processing time allocated to the subtask in different processing states; this makes clear which subtask is to be executed preferentially in each time period, allows processing resources to be tilted appropriately toward that subtask, and thereby optimizes the allocation of processing resources. Information of the processing resources is acquired, which reflects the types and quantities of the processing resources. In addition, the thread configuration information of each subtask is determined according to the information of the processing resources and the attribute information of each subtask, where the thread configuration information represents the thread types and thread numbers configured for the subtask in each processing state; in this way, matching thread configuration information is generated for each subtask according to its resource requirements in different processing states, so that reasonable threads can be configured for the subtasks in different processing states according to the thread configuration information. Finally, threads are configured for each subtask according to its time allocation information and thread configuration information, and the corresponding subtask is executed through the configured threads. Since both the time dimension and the processing resources are considered in this process, subtasks in their target time slices can obtain relatively more processing resources while subtasks in non-target time slices maintain a certain processing capability with fewer processing resources, which optimizes the configuration of processing resources and improves the resource utilization rate.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the embodiments, serve to explain the application without limiting it. The above and other features and advantages will become more apparent to those skilled in the art from the following detailed description of exemplary embodiments with reference to the attached drawings, in which:
FIG. 1 is a flowchart of a task processing method based on threads according to an embodiment of the present application;
FIG. 2 is a schematic thread configuration diagram of a server according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a task processing method based on threads according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a task processing method based on threads according to an embodiment of the present application;
FIG. 5 is a block diagram of a task processing device based on threads according to an embodiment of the present application;
FIG. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For a better understanding of the technical solutions of the present application, the following description of exemplary embodiments of the present application is made with reference to the accompanying drawings, in which various details of embodiments of the present application are included to facilitate understanding, and they should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the absence of conflict, embodiments and features of embodiments herein may be combined with one another.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this application and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
List processing is one of the important applications of data flow and resource allocation and can be applied to scenarios such as telemarketing. In telemarketing, the golden hours for marketing are usually 9:00 to 11:00 a.m. and 1:00 to 3:00 p.m., during which improving the processing efficiency of the list has an important impact on the marketing effect.
In the related art, either a fixed number of threads is set for list processing, or the number of threads is adjusted manually and list processing is performed by the adjusted threads. A fixed number of threads cannot be adjusted flexibly to the workloads of different processing tasks, so processing resources become either strained or underutilized; when the number of threads is adjusted manually, the adjusted number may likewise be unreasonable, again leaving processing resources strained or underutilized.
In view of this, embodiments of the present application provide a method and apparatus for processing tasks based on threads, an electronic device, and a storage medium.
In the embodiments of the application, firstly, upon receiving a task request, the task to be processed corresponding to the task request is divided into a plurality of subtasks according to service scenarios, which facilitates the subsequent allocation of time slices to the corresponding subtasks for different service scenarios. Secondly, the time allocation information of each subtask is determined according to the service scenario to which it belongs, where the time allocation information represents the processing time allocated to the subtask in different processing states; this makes clear which subtask is to be executed preferentially in each time period, allows processing resources to be tilted appropriately toward that subtask, and thereby optimizes the allocation of processing resources. Information of the processing resources is acquired, which reflects the types and quantities of the processing resources. In addition, the thread configuration information of each subtask is determined according to the information of the processing resources and the attribute information of each subtask, where the thread configuration information represents the thread types and thread numbers configured for the subtask in each processing state; in this way, matching thread configuration information is generated for each subtask according to its resource requirements in different processing states, so that reasonable threads can be configured for the subtasks in different processing states according to the thread configuration information. Finally, threads are configured for each subtask according to its time allocation information and thread configuration information, and the corresponding subtask is executed through the configured threads. Since both the time dimension and the processing resources are considered in this process, subtasks in their target time slices can obtain relatively more processing resources while subtasks in non-target time slices maintain a certain processing capability with fewer processing resources, which optimizes the configuration of processing resources and improves the resource utilization rate.
The thread-based task processing method according to the embodiments of the present application may be performed by an electronic device such as a terminal device or a server. The terminal device may be a vehicle-mounted device, user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a wearable device, or the like, and the method may be implemented by a processor invoking computer-readable program instructions stored in a memory. Alternatively, the method may be performed by a server.
In a first aspect, an embodiment of the present application provides a method for processing a task based on a thread.
Fig. 1 is a flowchart of a task processing method based on threads according to an embodiment of the present application. Referring to fig. 1, the thread-based task processing method includes:
in step S11, the task to be processed is divided into a plurality of subtasks according to the service scenario.
In some alternative implementations, a business scenario describes the operating logic of a certain class of business or a certain set of processes in a particular environment. Based on this, a task to be processed can be divided into subtasks corresponding to the respective service scenarios as the service scenario changes.
The task to be processed includes a list processing task; the service scenario includes at least one of a list receiving scenario, a list cleaning scenario, a list layering scenario, a list allocation scenario, a list dialing scenario, and a list decision scenario; and the subtasks include at least one of a list receiving subtask, a list cleaning subtask, a list layering subtask, a list allocation subtask, a list dialing subtask, and a list decision subtask corresponding to those service scenarios.
It should be noted that the service scenarios are associated with one another. For example, the list cleaning scenario is typically entered only after the list receiving scenario is completed, and the list dialing scenario is entered after the list allocation scenario is completed.
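The scenario ordering described above can be sketched as a simple pipeline. The subtask names and the helper function below are illustrative assumptions based on the example in this description, not an API defined by the patent:

```python
from typing import Optional

# Hypothetical ordering of the list-processing scenarios described in the text;
# each scenario is entered only after the previous one completes.
LIST_PIPELINE = [
    "list_receiving",
    "list_cleaning",
    "list_layering",
    "list_allocation",
    "list_dialing",
]

def next_scenario(completed: str) -> Optional[str]:
    """Return the scenario entered after `completed` finishes, or None at the end."""
    i = LIST_PIPELINE.index(completed)
    return LIST_PIPELINE[i + 1] if i + 1 < len(LIST_PIPELINE) else None
```

For instance, completing the list allocation scenario leads to the list dialing scenario, matching the association described above.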
In step S12, time allocation information of each sub-task is determined according to the service scenario to which each sub-task belongs.
In some alternative implementations, the time allocation information is used to characterize the processing time allocated to the subtasks in different processing states. In other words, in the embodiments of the present application, different processing states are set for each subtask, and corresponding processing times are allocated to the subtasks in the different states. Because the quantity and/or types of processing resources required by a subtask may differ between processing states, associating processing states with processing times ensures that reasonable processing resources are allocated to the corresponding subtasks within the respective time periods, so that each subtask can be executed in the matching processing state. Compared with the related art, in which the subtasks are simply executed in sequence, the present application makes the allocation of processing resources and processing time scientific and reasonable, so that the resource utilization rate can be improved.
In some alternative implementations, the processing states of the subtasks may include an active state and a connected state. When a subtask is in the active state, processing resources are tilted appropriately toward it so that it can be executed with higher efficiency (i.e., the subtask is executed with a higher resource configuration); when a subtask is in the connected state, only a small amount of processing resources is allocated to it (i.e., the subtask is executed with a lower resource configuration), just enough to guarantee its processing capability (the remaining processing resources can be allocated to the subtasks currently in the active state).
In some alternative implementations, the time allocation information includes a target time slice and a non-target time slice for each subtask; a subtask is in the active state within its target time slice and in the connected state within its non-target time slice.
For example, for a list processing task, the service scenarios include a list receiving scenario, a list cleaning scenario, a list layering scenario, a list allocation scenario, and a list dialing scenario. Considering the throughput of the list processing task, the processing time of the list processing task is determined to be 02:00 to 18:00. Combining the association between the service scenarios to which the subtasks belong with the optimal time for processing each subtask (for example, the optimal time for list dialing is 09:00 to 18:00), the target time slice of the list receiving subtask is set to 02:00 to 04:00 and its non-target time slice to 04:00 to 18:00, so that the list receiving subtask is in the active state during 02:00 to 04:00 and in the connected state during 04:00 to 18:00.
Similarly, the target time slice of the list cleaning subtask is set to 04:00 to 07:00, with the remaining time as its non-target time slice, so that the list cleaning subtask is in the active state during 04:00 to 07:00 and in the connected state during the remaining time. Likewise, the target time slice of the list layering subtask is set to 07:00 to 08:00, that of the list allocation subtask to 08:00 to 09:00, and that of the list dialing subtask to 09:00 to 18:00, with the remaining time in each case being the non-target time slice; each of these subtasks is in the active state during its target time slice and in the connected state during the remaining time.
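The schedule in this example can be summarized as a lookup from subtask and current hour to processing state. The table below encodes the time slices from the example; the function name and representation are an illustrative sketch, not the patent's implementation:

```python
# Target time slices from the example above (whole-hour ranges, end-exclusive).
TARGET_SLICES = {
    "list_receiving":  (2, 4),    # 02:00-04:00
    "list_cleaning":   (4, 7),    # 04:00-07:00
    "list_layering":   (7, 8),    # 07:00-08:00
    "list_allocation": (8, 9),    # 08:00-09:00
    "list_dialing":    (9, 18),   # 09:00-18:00
}

def processing_state(subtask: str, hour: int) -> str:
    """Active inside the subtask's target time slice, connected otherwise."""
    start, end = TARGET_SLICES[subtask]
    return "active" if start <= hour < end else "connected"
```

At 05:00, for example, the list cleaning subtask is active while the list dialing subtask is merely connected, matching the schedule above.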
In some alternative implementations, the processing states of the subtasks may also include a guaranteed state, in which the quantity of processing resources required to process a subtask is smaller than that required in the active state but larger than that required in the connected state.
In an exemplary embodiment, when a subtask is being executed in the active state within its target time slice, if it is detected that a preset condition is met, the subtask is adjusted from the active state to the guaranteed state and the corresponding processing resources are adjusted adaptively.
For example, during 08:00 to 09:00, a corresponding first processing resource is allocated to the list allocation subtask in the active state (assume the quantity of the first processing resource is x1), and the list allocation subtask is executed based on the allocated first processing resource. When it is detected that the preset condition is met, a corresponding second processing resource is allocated to the list allocation subtask in the manner of the guaranteed state (assume the quantity of the second processing resource is x2, with x2 smaller than x1), and the list allocation subtask continues to be executed based on the allocated second processing resource.
In summary, for any subtask, the processing states may include three types: the active state, the connected state, and the guaranteed state. The active state and the guaranteed state correspond to the subtask's target time slice, the connected state corresponds to its non-target time slice, and the quantity of processing resources required decreases in the order active state, guaranteed state, connected state.
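The ordering of resource quantities across the three states can be made concrete with a small sketch. The fractional shares below are placeholders chosen only to respect the ordering described in the text (active > guaranteed > connected); the patent does not specify actual values:

```python
# Placeholder resource shares per processing state; only the ordering
# active > guaranteed > connected is taken from the text.
RESOURCE_SHARE = {"active": 1.0, "guaranteed": 0.5, "connected": 0.1}

def resources_for(state: str, total_units: int) -> int:
    """Number of processing-resource units granted in a given state (at least 1,
    so that a connected subtask keeps a minimal processing capability)."""
    return max(1, int(total_units * RESOURCE_SHARE[state]))
```

The `max(1, ...)` floor reflects the requirement that a subtask in the connected state still maintains a certain processing capability rather than being starved entirely.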
In step S13, information of the processing resource is acquired.
In some alternative implementations, the information of the processing resources is used to reflect the type of processing resources and the number of processing resources.
Illustratively, a processing resource is a collective term for resources that provide task processing capability, and may include, for example, computing-type processing resources (i.e., computing resources), storage-type processing resources (i.e., storage resources), and network-type processing resources (i.e., network resources). The computing resources are related to the number of central processing units (CPUs), the number of cores, and so on; the storage resources may be further subdivided into caches, memory, external storage (e.g., hard disks), etc., with the quantity of storage resources related to the size of the storage space; and the quantity of network resources is related to the network type (e.g., wireless communication, wired communication), the network bandwidth, and so on.
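A minimal record capturing "type of processing resource plus quantity" might look as follows; the field names, units, and sample values are assumptions for illustration only:

```python
from dataclasses import dataclass

# Sketch of the "information of processing resources": a resource type
# together with its quantity. Units are illustrative assumptions.
@dataclass
class ProcessingResource:
    kind: str      # "computing" | "storage" | "network"
    quantity: int  # e.g. CPU cores, storage in MB, bandwidth in Mbps

resources = [
    ProcessingResource("computing", 8),    # 8 CPU cores
    ProcessingResource("storage", 16384),  # 16 GB of memory
    ProcessingResource("network", 1000),   # 1 Gbps of bandwidth
]
```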
It should be noted that the foregoing is merely an example of processing resources, and the embodiments of the present application are not limited thereto.
In step S14, thread configuration information of each subtask is determined according to the information of the processing resource and the attribute information of each subtask.
In some alternative implementations, thread configuration information is used to characterize the thread type and number of threads that the subtask configures in different processing states.
Illustratively, the thread types include core threads and available threads, where the number of available threads can be regarded as the maximum number of threads that can be used, which should be greater than or equal to the number of core threads.
For example, let the number of core threads be coreSize, the maximum number of threads (which can be regarded as the available threads) be maxSize, and the capacity of the waiting queue be queueSize, where coreSize is less than maxSize. During task processing, if the number of tasks is less than the number of core threads, the core threads are used directly to process the tasks; if the number of tasks is between coreSize and maxSize, new threads are created until the sum of the newly created threads and the core threads equals the maximum number of threads; and if the number of tasks is greater than the maximum number of threads, the surplus tasks are placed in the waiting queue, whose length is at most queueSize.
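The dispatch rule just described can be sketched as a pure function from the task count and the three pool parameters to the resulting distribution of work. This is a sketch of the rule as stated in this description, in which threads grow before tasks are queued; note that this differs from the default semantics of, e.g., Java's ThreadPoolExecutor, which queues tasks before creating threads beyond the core size:

```python
def dispatch(task_count: int, core_size: int, max_size: int, queue_size: int):
    """Sketch of the described dispatch rule. Returns a tuple
    (threads_in_use, tasks_queued, tasks_rejected): tasks beyond
    max_size + queue_size cannot be accommodated."""
    if task_count <= core_size:
        return task_count, 0, 0             # core threads suffice
    threads = min(task_count, max_size)     # grow new threads up to maxSize
    queued = min(task_count - threads, queue_size)
    rejected = task_count - threads - queued
    return threads, queued, rejected
```

For instance, with coreSize = 4, maxSize = 8, and queueSize = 10, twelve tasks occupy eight threads and queue four, while twenty-five tasks leave seven unaccommodated.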
In some alternative implementations, the processing state of the subtask includes an active state, and the thread configuration information includes at least a first core thread number, which is the core thread number configured for the subtask in the active state.
Accordingly, in some optional implementations, determining, according to the information of the processing resource and the attribute information of each subtask, the number of first core threads included in the thread configuration information of each subtask includes:
according to the information of the processing resources, determining a preset index for carrying out pressure measurement on the processing resources and an adjustment proportion of a pressure measurement extremum for adjusting the preset index; performing pressure measurement on the processing resources, and determining a pressure measurement extremum of a preset index; determining the transaction quantity of single data of each subtask according to the attribute information of each subtask; determining the single data processing time length of each subtask according to the transaction number of the single data of each subtask and the pressure measurement extremum of a preset index; and determining the number of the first core threads according to the single data processing time length of each subtask, the pressure measurement extremum of the preset index and the adjustment proportion.
In other words, on the one hand, the pressure measurement extremum of each preset index is obtained by performing pressure measurement on the processing resources; on the other hand, the transaction number of a single piece of data of each subtask is determined, and the single data processing duration of each subtask is determined based on that transaction number; finally, the first core thread number is determined according to the single data processing duration of each subtask, the pressure measurement extremum of the preset index, and the adjustment proportion. Because the first core thread number is obtained through pressure measurement, processing a subtask with that many core threads incurs a high processing-resource overhead, so the first core thread number is used as the core thread number for the active state.
It should be noted that a single piece of data is a unit, determined according to experience, statistics, and the content of the task to be processed, that makes it convenient to quantify the processing capability of each subtask; it may be a data record or a data table, which is not limited in the embodiments of the present application. For example, in a list processing task, a single piece of data may be one list record.
In the above-described processing procedure, in order to facilitate determining the time consumption required for each subtask to process a single piece of data (i.e., the single data processing duration), the workload of each subtask for processing a single piece of data is quantified as a number of transactions. In general, a transaction, which may include multiple operations, can be regarded as a complete logical unit of work. Moreover, for ease of comparison, when transactions are determined or divided in the embodiments of the application, the number and complexity of operations in each transaction are balanced as much as possible, so that the time consumed by each transaction is relatively close.
For example, for a list receiving subtask, processing one list record may involve a receiving transaction (which may itself include multiple operations) and a storing transaction, so the transaction number of a single piece of data for the list receiving subtask may be determined to be 2. The transaction numbers of other subtasks may be determined in a similar manner and are not described further herein.
In some optional implementations, determining the number of first core threads according to the single data processing duration of each subtask, the pressure measurement extremum of the preset index, and the adjustment ratio includes:
determining an allowable threshold of the preset index according to the pressure measurement extremum of the preset index and the adjustment proportion; initializing a number of test core threads; and, when the number of test core threads has been adjusted so that a preset threshold permission condition is met, determining the first core thread number according to the current number of test core threads, where the threshold permission condition is that the actual value of each preset index is less than or equal to the allowable threshold and the processing quantity of data per unit time reaches its maximum.
For example, after the number of test core threads is initialized, it may be gradually increased; the subtask is executed by the test core threads at the single data processing duration, and when the actual value of each preset index is less than or equal to the allowable threshold, the processing quantity of data per unit time reaches its maximum; the first core thread number is then determined according to the number of test core threads corresponding to that maximum.
Illustratively, let the pressure measurement extremum of the preset index be y_max and the adjustment proportion be r (0 < r < 1); the allowable threshold of the preset index is then determined as thr = r × y_max. First, the number of test core threads is initialized to k1 (k1 ≥ 1), and the subtask is executed by the k1 test core threads at the single data processing duration, obtaining the actual value y1 of the preset index and the processing quantity q1 of data per unit time.
If the actual value y1 is less than the allowable threshold thr, the number of test core threads is increased from k1 to k2, and the subtask is executed by the k2 test core threads at the single data processing duration, obtaining the actual value y2 of the preset index and the processing quantity q2 of data per unit time.
If the actual value y2 is less than the allowable threshold thr, the number of test core threads is increased from k2 to k3, and so on, until the number of test core threads reaches kt: the subtask is executed by the kt test core threads at the single data processing duration, obtaining the actual value yt of the preset index and the processing quantity qt of data per unit time, where yt is less than or equal to thr. When the number of test core threads is then increased to k(t+1), the actual value y(t+1) of the preset index and the processing quantity q(t+1) of data per unit time are obtained; if y(t+1) is greater than thr, it can be determined that, with kt test core threads, the actual value of the preset index is less than or equal to the allowable threshold and the processing quantity of data per unit time reaches its maximum, so kt is determined as the first core thread number.
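The search just described can be sketched as a linear scan, assuming a hypothetical measure function that runs the subtask with k test core threads and reports the actual metric value y and the per-unit-time processing quantity q:

```java
import java.util.function.IntFunction;

public class CoreThreadSearch {
    // Hypothetical probe: measure.apply(k) returns {y, q} for k test core
    // threads. In practice this would be a real pressure measurement run.
    // Assumes the initial count k1 itself satisfies the threshold.
    static int firstCoreThreads(IntFunction<double[]> measure, double yMax,
                                double r, int k1) {
        double thr = r * yMax;        // allowable threshold thr = r * yMax
        int best = k1;
        double bestQ = Double.NEGATIVE_INFINITY;
        for (int k = k1; ; k++) {
            double[] yq = measure.apply(k);
            if (yq[0] > thr) break;   // metric exceeds threshold: stop at kt
            if (yq[1] > bestQ) { bestQ = yq[1]; best = k; }
        }
        return best;                  // thread count with maximum throughput
    }

    public static void main(String[] args) {
        // Synthetic probe: metric and throughput both grow linearly with k,
        // so the scan stops at thr = 0.5 * 20 = 10.
        int kt = firstCoreThreads(k -> new double[]{k, k}, 20.0, 0.5, 1);
        System.out.println(kt); // 10
    }
}
```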
It should be noted that, the thread configuration information may include, in addition to the first number of core threads, the first maximum number of threads and the first waiting queue number. The first maximum number of threads is greater than or equal to the first number of core threads, and the first waiting queue number may be set according to experience, statistical data, processing efficiency, or the like. The core Thread, the maximum Thread, the waiting queue, etc. may be implemented by a Thread Pool (Thread Pool) technique. The thread pool can schedule and reuse one or more threads in a unified mode, so that overhead in use caused by excessive threads is avoided, resource consumption is reduced, response speed is improved, and manageability of the threads is improved.
For example, the thread configuration information of a subtask includes: in each instance, the first core thread number = 12, the first maximum thread number = 12, and the first waiting queue number = 6, where an instance may be considered a physical machine, or a virtual processor configured in a physical machine with relatively independent processing capability, which is not limited in the embodiments of the present application. Further, if the single data processing duration of the subtask is determined to be 100 ms (milliseconds), the processing quantity of data per unit time (e.g., 1 second (s), 1 s = 1000 ms) is: first core thread number × (1000 ms / 100 ms) × instance number.
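The per-unit-time formula at the end of the example can be sketched directly (parameter names hypothetical):

```java
public class Throughput {
    // Records processed per second, per the formula in the text:
    // firstCoreThreads * (1000 ms / singleDataMs) * instances.
    static long perSecond(int firstCoreThreads, int singleDataMs, int instances) {
        return (long) firstCoreThreads * (1000 / singleDataMs) * instances;
    }

    public static void main(String[] args) {
        // 12 core threads, 100 ms per record, 1 instance -> 120 records/s.
        System.out.println(perSecond(12, 100, 1)); // 120
    }
}
```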
In some alternative implementations, the processing state of the subtask further includes a connection state, and the thread configuration information further includes a second core thread number, which is the core thread number configured for the subtask in the connection state.
Accordingly, in some optional implementations, determining, according to the information of the processing resource and the attribute information of each subtask, the number of second core threads included in the thread configuration information of each subtask includes:
determining the maximum number of threads according to the information of the processing resources, where the maximum number of threads is the maximum number of threads that the processing resources support establishing; determining the number of remaining core threads according to the maximum number of threads and the first core thread number of each subtask; and determining the second core thread number of each subtask according to the number of remaining core threads.
In other words, after the maximum number of threads and the first core thread number allocated to a subtask in its target time slice are determined, the difference between them is the number of remaining core threads, so the second core thread number that can be allocated to each subtask in its non-target time slices can be determined from the number of remaining core threads.
Illustratively, if the task to be processed includes subtask 1, subtask 2, and subtask 3, and the maximum number of threads is p_max (p_max ≥ 1), the first core thread number of subtask 1 in its target time slice is p1, that of subtask 2 is p2, and that of subtask 3 is p3. For subtask 1, the number of remaining core threads is p_re1 = p_max − p1, so the second core thread number that can be allocated to each of subtask 2 and subtask 3 is ⌊p_re1/2⌋, where ⌊·⌋ denotes a rounding-down (floor) operation. In other words, in the target time slice corresponding to subtask 1, subtask 1 occupies p1 core threads, and subtask 2 and subtask 3 each occupy ⌊p_re1/2⌋ core threads.
Similarly, for subtask 2, the number of remaining core threads is p_re2 = p_max − p2, so the second core thread number that can be allocated to each of subtask 1 and subtask 3 is ⌊p_re2/2⌋. In other words, in the target time slice corresponding to subtask 2, subtask 2 occupies p2 core threads, and subtask 1 and subtask 3 each occupy ⌊p_re2/2⌋ core threads.
Further, for subtask 3, the number of remaining core threads is p_re3 = p_max − p3, so the second core thread number that can be allocated to each of subtask 1 and subtask 2 is ⌊p_re3/2⌋. In other words, in the target time slice corresponding to subtask 3, subtask 3 occupies p3 core threads, and subtask 1 and subtask 2 each occupy ⌊p_re3/2⌋ core threads.
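A minimal sketch of this remaining-core-thread split for n subtasks (the example above has n = 3; names hypothetical):

```java
public class RemainingSplit {
    // While the active subtask holds pActive first core threads out of pMax,
    // each of the other (n - 1) subtasks gets floor((pMax - pActive) / (n - 1))
    // second core threads.
    static int secondCoreThreads(int pMax, int pActive, int n) {
        return (pMax - pActive) / (n - 1); // integer division acts as floor
    }

    public static void main(String[] args) {
        // Hypothetical: pMax = 16, subtask 1 active with p1 = 10, n = 3.
        System.out.println(secondCoreThreads(16, 10, 3)); // 3
    }
}
```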
In some alternative implementations, the second number of core threads may also be set based on experience, statistics, simulation data, and the like. For example, the second core thread number is empirically set to 1.
It should be noted that, the thread configuration information may include, in addition to the second number of core threads, the second maximum number of threads and the second waiting queue number. The second maximum number of threads is greater than or equal to the second number of core threads, and the second waiting queue number may be set according to experience, statistical data, processing efficiency, or the like.
For example, the thread configuration information for a subtask includes: in each instance, the second number of core threads is 1, the second maximum number of threads is 1, and the second waiting queue number is 0.
The foregoing discloses how the thread configuration information of the subtasks in the active state and the connection state is determined. In some optional implementations, the processing state of a subtask further includes a guarantee state, and correspondingly the thread configuration information further includes a third core thread number, which is the core thread number configured for the subtask in the guarantee state; the attribute information of each subtask includes the expected processing amount and the expected processing duration of the subtask. The resource overhead required by the guarantee state is between that of the active state and that of the connection state.
In some optional implementations, determining, according to the information of the processing resource and the attribute information of each subtask, the number of third core threads included in the thread configuration information of each subtask includes:
and determining the maximum number of core threads meeting the expected processing amount and the expected processing duration of each subtask according to the single data processing duration of each subtask and the information of the processing resources, and determining the maximum number as the third core thread number.
Illustratively, the information of the processing resource further includes the number of instances. For a subtask, let the expected processing amount be w, the expected processing duration be h (in seconds), the number of instances corresponding to the processing resource be m, the single data processing duration be a (in ms), and the third core thread number in each instance be n; then (1000/a) × m × n × h ≥ w, and therefore n ≥ w / ((1000/a) × m × h). Here, (1000/a) represents the number of pieces of data one thread can process per second, and (1000/a) × m × n × h represents the number of pieces of data that n core threads in each of m instances can process within h; when this number is greater than or equal to the expected processing amount, the processing requirement of the subtask is met, and the third core thread number can thus be solved.
For example, for a list receiving subtask whose expected processing amount is 1,000,000 (100 ten thousand) records and whose expected processing duration is 2 hours (i.e., 7200 s), the single data processing duration is known to be 100 ms and the number of instances is 4. Based on this, the formula for the third core thread number may be constructed as: (1000/100) × 4 × n × 7200 ≥ 1000000; solving gives n ≥ approximately 3.47, and rounding up yields n = 4, so the third core thread number of the list receiving subtask is determined to be 4. Further, the third maximum thread number may also be empirically set to 4, as may the third waiting queue number.
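The worked example can be checked with a small helper implementing n ≥ w / ((1000/a) × m × h) and rounding up (names hypothetical):

```java
public class ThirdCoreThreads {
    // Smallest per-instance core thread count n that meets expected amount w
    // within expected duration hSeconds, given single data processing
    // duration aMs (ms) and m instances.
    static int solve(long w, int aMs, int m, long hSeconds) {
        double perThreadPerSecond = 1000.0 / aMs; // records one thread does per second
        return (int) Math.ceil(w / (perThreadPerSecond * m * hSeconds));
    }

    public static void main(String[] args) {
        // List receiving subtask: w = 1,000,000, a = 100 ms, m = 4, h = 7200 s.
        System.out.println(solve(1_000_000L, 100, 4, 7200L)); // 4
    }
}
```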
It should be noted that, in some alternative implementations, after determining the maximum number as the third core thread number, the method further includes:
comparing the third core thread number with a thread number threshold to obtain a comparison result, where the thread number threshold is determined based on the maximum number of threads that the processing resource supports establishing; and determining, according to the comparison result, whether the resource amount of the processing resource needs to be increased.
In other words, after the third number of core threads n is determined, it is necessary to ensure that the processing resource has the capability of establishing n core threads, and therefore, whether the resource amount of the processing resource needs to be increased is determined by comparing the third number of core threads with the thread number threshold. For example, in the case that the number of the third core threads is smaller than or equal to the thread number threshold as a result of the comparison, it is determined that the processing resources do not need to be increased, and in the case that the number of the third core threads is larger than the thread number threshold as a result of the comparison, it is determined that the current processing resources cannot meet the requirement of establishing n core threads, and therefore, the resource amount of the processing resources needs to be increased. The amount of the increased processing resource may be determined according to a difference between the number of the third core threads and the thread number threshold, which is not limited in the embodiment of the present application.
In step S15, threads are configured for each subtask according to the time allocation information and the thread configuration information of each subtask, and corresponding subtasks are executed through the configured threads.
In some optional implementations, the thread configuration information can characterize the thread types and thread numbers configured for the subtasks in different processing states; and because the processing state of a subtask depends on whether the current time falls within its corresponding target time slice, the corresponding thread type and thread number can be determined for each subtask both in its target time slice and in its non-target time slices according to the thread configuration information, so that thread configuration processing is performed and each subtask performs task processing through the configured threads.
In some alternative implementations, the time allocation information includes a target time slice and non-target time slices for each subtask, and the subtask is in the active state in its target time slice and in the connection state in the non-target time slices; correspondingly, configuring threads for each subtask according to the time allocation information and the thread configuration information of each subtask, and executing the corresponding subtask through the configured threads, includes:
For each subtask, when it is determined according to the time allocation information of the subtask that the current time belongs to the target time slice of the subtask, core threads are configured for the subtask according to the first core thread number, and the subtask is executed through those core threads; when it is determined according to the time allocation information that the current time belongs to a non-target time slice of the subtask, core threads are configured for the subtask according to the second core thread number, and the subtask is executed through those core threads.
For example, if the processing time of the task to be processed is T, where T consists of three consecutive periods t1, t2, and t3, and the task to be processed includes subtask 1, subtask 2, and subtask 3, then the target time slice of subtask 1 is t1 (its non-target time slices include t2 and t3), the target time slice of subtask 2 is t2 (its non-target time slices include t1 and t3), and the target time slice of subtask 3 is t3 (its non-target time slices include t1 and t2).
Further, subtask 1 has a first core thread number p11 for its target time slice t1 and a second core thread number p12 for its non-target time slices (t2 and t3); subtask 2 has a first core thread number p21 for its target time slice t2 and a second core thread number p22 for its non-target time slices (t1 and t3); subtask 3 has a first core thread number p31 for its target time slice t3 and a second core thread number p32 for its non-target time slices (t1 and t2). Here p12 may be set empirically and is the same in t2 and t3; p22 and p32 are similar.
In this way, p11 first core threads are configured for the subtask 1 in the period of t1, and the subtask 1 is executed through the p11 first core threads; meanwhile, p22 second core threads are configured for the subtask 2, the subtask 2 is executed through the p22 second core threads, p32 second core threads are configured for the subtask 3, and the subtask 3 is executed through the p32 second core threads.
Similarly, in the period of t2, p21 first core threads are configured for the subtask 2, and the subtask 2 is executed through the p21 first core threads; meanwhile, p12 second core threads are configured for the subtask 1, the subtask 1 is executed through the p12 second core threads, p32 second core threads are configured for the subtask 3, and the subtask 3 is executed through the p32 second core threads.
In the t3 time period, configuring p31 first core threads for the subtask 3, and executing the subtask 3 through the p31 first core threads; meanwhile, p12 second core threads are configured for the subtask 1, the subtask 1 is executed through the p12 second core threads, p22 second core threads are configured for the subtask 2, and the subtask 2 is executed through the p22 second core threads.
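The per-period selection in the walkthrough above reduces to one rule per subtask; a minimal sketch (slice indices and counts hypothetical):

```java
public class SliceScheduler {
    // Core thread count for a subtask in a given time slice: its first core
    // thread number in its own target slice, its second number elsewhere.
    static int coreThreads(int currentSlice, int targetSlice,
                           int firstCount, int secondCount) {
        return currentSlice == targetSlice ? firstCount : secondCount;
    }

    public static void main(String[] args) {
        // Subtask 1 with target slice t1: p11 = 12 in t1, p12 = 1 in t2/t3.
        System.out.println(coreThreads(1, 1, 12, 1)); // 12
        System.out.println(coreThreads(2, 1, 12, 1)); // 1
    }
}
```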
For subtask 1 through subtask 3, in some alternative implementations, it may also be determined that the first number of core threads for subtask 1 at its target time slice t1 is p11, the second number of core threads for non-target time slice t2 is p122, and the second number of core threads for non-target time slice t3 is p123; subtask 2 has p21 as the first core thread number of its target time slice t2, p221 as the second core thread number of the non-target time slice t1, and p223 as the second core thread number of the non-target time slice t 3; subtask 3 has a first number of core threads p31 for its target time slice t3, a second number of core threads p321 for the non-target time slice t1, and a second number of core threads p322 for the non-target time slice t 2.
Correspondingly, in the t1 time period, configuring p11 first core threads for the subtask 1, and executing the subtask 1 through the p11 first core threads; meanwhile, p221 second core threads are configured for the subtask 2, the subtask 2 is executed through the p221 second core threads, p321 second core threads are configured for the subtask 3, and the subtask 3 is executed through the p321 second core threads.
In the t2 time period, configuring p21 first core threads for the subtask 2, and executing the subtask 2 through the p21 first core threads; meanwhile, p122 second core threads are configured for the subtask 1, the subtask 1 is executed through the p122 second core threads, p322 second core threads are configured for the subtask 3, and the subtask 3 is executed through the p322 second core threads.
In the t3 time period, configuring p31 first core threads for the subtask 3, and executing the subtask 3 through the p31 first core threads; meanwhile, p123 second core threads are configured for the subtask 1, the subtask 1 is executed through the p123 second core threads, p223 second core threads are configured for the subtask 2, and the subtask 2 is executed through the p223 second core threads.
In some optional implementations, after core threads are configured for a subtask according to the first core thread number and the subtask is executed through those core threads (in the case that it is determined, according to the time allocation information of the subtask, that the current time belongs to the target time slice of the subtask), the method further includes:
Under the condition that the preset condition is detected to be met, the number of the core threads of the subtasks is adjusted according to the number of the third core threads, and the preset condition comprises abnormal processing resources; and executing the subtasks through the adjusted core threads.
In other words, in the event of an exception in the processing resources, it may not be possible to provide resource support for the first core thread number of the active subtask, and therefore the core thread number of the subtask is reduced from the first core thread number to the third core thread number. Because the third core thread number is determined based on the expected processing amount and expected processing duration of the subtask, configuring and executing the subtask based on the third core thread number ensures that the subtask can still be completed within the expected processing duration, thereby guaranteeing the smooth execution of the task. It should be noted that when the failure rate of the processing resource is so high that even the third core thread number cannot be supported, a new processing resource may be applied for, or the core thread number may be further reduced.
For the task to be processed, suppose the third core thread number of subtask 1 is determined to be p13, that of subtask 2 is p23, and that of subtask 3 is p33. Based on this, in the t1 period, if the preset condition is detected to be met, the p11 first core threads configured for subtask 1 are adjusted to p13 third core threads, and subtask 1 is executed through the p13 third core threads. It should be noted that "first" and "third" only identify which core thread number is meant and do not limit the number itself. In other words, if p11 = 6 and p13 = 4, the above procedure is: in the t1 period (i.e., the target time slice of subtask 1), if the preset condition is detected to be met, the 6 core threads configured for subtask 1 are reduced to 4, and subtask 1 is executed through those 4 core threads.
Similarly, in the period of t2 (i.e. the target time slicing of the subtask 2), if it is detected that the preset condition is met, p21 first core threads configured for the subtask 2 are adjusted to p23 third core threads, and the subtask 2 is executed through the p23 third core threads.
And in the time period t3 (namely, target time slicing of the subtask 3), if the preset condition is detected to be met, p31 first core threads configured for the subtask 3 are adjusted to p33 third core threads, and the subtask 3 is executed through the p33 third core threads.
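Shrinking a running pool from the first to the third core thread count, as in the p11 = 6 to p13 = 4 example, can be sketched with ThreadPoolExecutor's live resizing (values hypothetical):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Degrade {
    static ThreadPoolExecutor pool(int core, int max) {
        return new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
    }

    // On a detected processing-resource anomaly, drop the core thread count
    // to the third core thread number; excess idle core threads are retired.
    static void degrade(ThreadPoolExecutor p, int thirdCoreThreads) {
        p.setCorePoolSize(thirdCoreThreads);
    }

    public static void main(String[] args) {
        ThreadPoolExecutor p = pool(6, 6); // p11 = 6 first core threads
        degrade(p, 4);                     // p13 = 4 third core threads
        System.out.println(p.getCorePoolSize()); // 4
        p.shutdown();
    }
}
```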
In the embodiment of the application, firstly, when a task request is received, the task to be processed corresponding to the task request is divided into a plurality of subtasks according to service scenarios, which facilitates the subsequent allocation of time slices to the subtasks of different service scenarios. Secondly, time allocation information of each subtask is determined according to the service scenario to which it belongs; the time allocation information characterizes the processing time allocated to the subtask in different processing states, which defines the subtask to be executed preferentially in each period and allows processing resources to be tilted appropriately toward it, optimizing the allocation of processing resources. Information of the processing resources, reflecting their types and quantities, is then acquired. In addition, the thread configuration information of each subtask is determined according to the information of the processing resources and the attribute information of each subtask; the thread configuration information characterizes the thread types and thread numbers configured for each subtask in each processing state, so that matched thread configuration information is generated for each subtask according to its resource requirements in different processing states, and reasonable threads can be configured for the subtasks in different processing states accordingly. Finally, threads are configured for each subtask according to its time allocation information and thread configuration information, and the corresponding subtask is executed through the configured threads. Both the time dimension and the processing resources are considered in this process, so that subtasks in their target time slices obtain relatively more processing resources while subtasks in non-target time slices maintain a certain processing capability with fewer processing resources, which optimizes the configuration of processing resources and improves resource utilization.
The thread-based task processing method according to the embodiments of the present application is described below with reference to Fig. 2 to Fig. 4.
Fig. 2 is a schematic thread configuration diagram of a server according to an embodiment of the present application. Referring to Fig. 2, d thread pools (d ≥ 1) are configured in the server, including thread pool 1, thread pool 2, … …, and thread pool d. Thread pool 1 contains z1 threads (thread 11, thread 12, … …, thread 1z1), thread pool 2 contains z2 threads (thread 21, thread 22, … …, thread 2z2), … …, and thread pool d contains zd threads (thread d1, thread d2, … …, thread dzd), where z1 to zd are integers greater than or equal to 1.
In some alternative implementations, different thread pools correspond to different service scenarios (which can also be understood as different thread pools corresponding to different subtasks). For example, if a task to be processed includes d subtasks, each corresponding to a different service scenario, then the 1st subtask may be processed by the 1st thread pool, the 2nd subtask by the 2nd thread pool, … …, and the d-th subtask by the d-th thread pool. When the i-th subtask (1 ≤ i ≤ d) performs data processing through the i-th thread pool, the type and number of threads in the i-th thread pool may be determined according to the thread-based task processing method of any of the embodiments of the present application, which is not described again herein.
Fig. 3 is a schematic diagram of a thread-based task processing method according to an embodiment of the present application. Referring to Fig. 3, the task to be processed may be a list processing task or another processing task, which is not limited in the embodiments of the present application.
The processing resources are the various resources provided for executing the task to be processed, and may include instances, a database, various middleware, and the like. One or more cores are configured in an instance, which can provide at least computing resources; the database can provide functions such as data storage and data analysis; and the middleware can manage computing resources and network communication to realize interoperation. These processing resources provide a resource basis for executing the task to be processed.
In some optional implementations, in order to manage and configure the processing resources efficiently, information of the processing resources can be obtained through processing resource detection, and a processing resource decision can be made based on that information to obtain a decision result, so that the processing resource configuration for the task to be processed is dynamically adjusted according to the decision result.
As shown in fig. 3, the task to be processed is divided into s subtasks according to the service scenario, namely subtask 1, subtask 2, … , and subtask s. The processing time of the task to be processed is determined as the time period from t1 to t2, and this period is divided into s time slices (time slice 1, time slice 2, … , time slice s) according to the service scenario to which each subtask belongs, where time slice 1 is the target time slice of subtask 1, time slice 2 is the target time slice of subtask 2, … , and time slice s is the target time slice of subtask s. Each subtask is normally in an active state within its target time slice and in a connection state within its non-target time slices.
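The partition of [t1, t2] into s target time slices can be sketched as follows; equal-width slices are an assumption of this sketch, since the embodiment divides the period by service scenario and does not require equal widths:

```python
def split_time_slices(t1, t2, s):
    """Divide [t1, t2) into s contiguous time slices.

    Slice i (0-based) is the target time slice of subtask i+1. Equal width
    is assumed here for brevity; per-scenario widths would work the same way.
    """
    width = (t2 - t1) / s
    return [(t1 + i * width, t1 + (i + 1) * width) for i in range(s)]
```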
Correspondingly, for time slice 1, the first core thread number p11 of subtask 1 and the second core thread numbers p22, … , ps2 of subtask 2 through subtask s can be determined; through processing resource configuration, within time slice 1, p11 core threads are configured for subtask 1 to execute subtask 1, p22 core threads are configured for subtask 2 to execute subtask 2, … , and ps2 core threads are configured for subtask s to execute subtask s.

For time slice 2, the second core thread number p12 of subtask 1 (p12 is generally smaller than p11), the first core thread number p21 of subtask 2 (p21 is generally larger than p22), … , and the second core thread number ps2 of subtask s can be determined; through processing resource configuration, within time slice 2, p12 core threads are configured for subtask 1 to execute subtask 1, p21 core threads are configured for subtask 2 to execute subtask 2, … , and ps2 core threads are configured for subtask s to execute subtask s.

Similarly, for time slice s, the second core thread number p12 of subtask 1, the second core thread number p22 of subtask 2, … , and the first core thread number ps1 of subtask s (ps1 is generally larger than its second core thread number ps2) can be determined; through processing resource configuration, within time slice s, p12 core threads are configured for subtask 1 to execute subtask 1, p22 core threads are configured for subtask 2 to execute subtask 2, … , and ps1 core threads are configured for subtask s to execute subtask s.
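The three paragraphs above describe a per-slice configuration matrix: in time slice j, subtask j runs at its first (active-state) core-thread count and every other subtask runs at its second (connection-state) count. A minimal sketch, with the counts as assumed inputs:

```python
def thread_plan(first, second):
    """Build the per-slice core-thread configuration.

    first[i]  -- first core-thread count of subtask i (active state)
    second[i] -- second core-thread count of subtask i (connection state)
    Returns plan[j][i] = core threads configured for subtask i during
    time slice j, where subtask j is the active one in slice j.
    """
    s = len(first)
    return [
        [first[i] if i == j else second[i] for i in range(s)]
        for j in range(s)
    ]
```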
It should be noted that, during execution of the task to be processed, if detection of the processing resources determines that a preset condition is met, the processing resources are re-decided and reconfigured to match their current situation.
The processing states of the subtasks include a guarantee state in addition to the active state and the connection state; when a subtask is in the guarantee state, the number of core threads configured for it is the third core thread number. The third core thread number may be determined according to the information of the processing resources (for example, the number of instances, the number of cores per instance, etc.) and the attribute information of the subtask (for example, the expected processing amount and the expected processing duration of the subtask, etc.); the determination process may refer to the relevant content of the embodiments of the present application and is not described herein.
For example, the third core thread number p13 of subtask 1, the third core thread number p23 of subtask 2, … , and the third core thread number ps3 of subtask s are determined.
Within time slice 1, if it is detected that the preset condition is met, p13 core threads are configured for subtask 1 to execute subtask 1 through the processing resource configuration (equivalent to reducing the number of core threads executing subtask 1 from p11 to p13).

Within time slice 2, if it is detected that the preset condition is met, p23 core threads are configured for subtask 2 to execute subtask 2 through the processing resource configuration (equivalent to reducing the number of core threads executing subtask 2 from p21 to p23).

Similarly, within time slice s, if it is detected that the preset condition is met, ps3 core threads are configured for subtask s to execute subtask s through the processing resource configuration (equivalent to reducing the number of core threads executing subtask s from ps1 to ps3).
Fig. 4 is a flow chart of a task processing method based on threads according to an embodiment of the present application.
Step S401, a task to be processed is acquired.
Step S402, dividing the task to be processed into a plurality of subtasks according to the service scene.
Step S403, determining target time slices and non-target time slices of each subtask according to the service scene to which each subtask belongs.
Step S404, obtaining information of the processing resource, and determining a preset index for performing pressure measurement on the processing resource and an adjustment ratio of a pressure measurement extremum for adjusting the preset index.
Step S405, performing pressure measurement on the processing resource, and determining a pressure measurement extremum of a preset index.
For example, for the database, the preset indexes are determined to be queries per second (QPS) and transactions per second (TPS); through pressure measurement, the values of QPS and TPS at the collapse threshold of the database are read and taken as the pressure measurement extrema of QPS and TPS. The pressure measurement process for other processing resources is similar and will not be described again.
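One way to locate such an extremum is to ramp the offered load until the resource stops keeping up and take the last sustainable rate. This is a sketch under that assumption; the `probe` callback (returning whether the resource still meets its SLA at a given rate) is hypothetical:

```python
def find_pressure_extremum(probe, start=100, step=100, limit=10_000):
    """Ramp the offered load and report the last sustainable rate as the
    pressure-measurement extremum of one preset index (e.g. QPS or TPS).

    probe(rate) -- assumed caller-supplied function returning True while
                   the resource still responds acceptably at that rate.
    """
    extremum = 0
    rate = start
    while rate <= limit and probe(rate):
        extremum = rate        # last rate the resource survived
        rate += step
    return extremum
```

A binary search over the same interval would find the extremum with fewer probes; the linear ramp is shown only because it mirrors how pressure tests are usually driven.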
Step S406, determining the number of transactions of single data of each subtask according to the attribute information of each subtask.
Step S407, determining the single data processing time length of each subtask according to the transaction number of the single data of each subtask and the pressure measurement extremum of the preset index.
Step S408, determining the number of the first core threads according to the single data processing time length of each subtask, the pressure measurement extremum of the preset index and the adjustment proportion.
Step S409, determining the maximum thread number according to the information of the processing resources.
In step S410, the number of remaining core threads is determined according to the maximum number of threads and the first number of core threads of each subtask.
In step S411, the second number of core threads of each subtask is determined according to the remaining number of core threads.
Step S412, determining the maximum number of core threads satisfying the expected processing amount and the expected processing time length of each subtask according to the single data processing time length of each subtask and the information of the processing resources, and determining the maximum number as the third core thread number.
Step S412 may be performed after the single data processing duration of each subtask is obtained in step S407.
Step S413, determining the subtask corresponding to the current moment according to the target time slice of each subtask, configuring core threads for that subtask according to its first core thread number and executing it through the configured core threads, and configuring core threads for the remaining subtasks according to their second core thread numbers and executing them through the configured core threads.

Step S414, if it is detected that the preset condition is met, adjusting the number of core threads of the subtask in its target time slice according to the third core thread number, and executing the subtask through the adjusted core threads.
In some alternative implementations, during the initialization stage of executing the task to be processed, 1 core thread (corresponding to a second core thread number of 1) may be allocated to each subtask, so that each subtask performs task processing in the connection state. Then, the target time slice corresponding to the current moment and the subtask g(i) corresponding to that target time slice are determined, and the number of core threads of subtask g(i) is increased to its first core thread number p(i), switching the processing state of subtask g(i) to the active state (the other subtasks still perform task processing in the connection state). During execution, whether the target time slice of the next subtask g(i+1) has been entered can be determined through detection; once it is entered, the number of core threads of subtask g(i) is reduced from p(i) to 1, switching its processing state to the connection state, and the number of core threads of subtask g(i+1) is increased to its first core thread number p(i+1), switching its processing state to the active state. This continues until all subtasks have been executed and the final task processing result is obtained.
In the above processing, if it is detected that the preset condition is met, the number of core threads of the subtask performing task processing in the active state at the current moment is reduced from its first core thread number to its third core thread number, so that occupation of processing resources is reduced while the task can still be executed. When it is detected that the preset condition is no longer met, the number of core threads of the subtask can be readjusted from the third core thread number back to the first core thread number, so that task processing is accelerated.
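The activation handover described above can be sketched as a replay of slice transitions; the slice boundary detection is abstracted into an ordered event list, which is an assumption of this sketch:

```python
def run_schedule(first, events):
    """Replay the handover: every subtask starts in the connection state
    with 1 core thread; entering subtask i's target time slice raises it
    to first[i] threads and drops the previously active one back to 1.

    first  -- first core-thread count of each subtask
    events -- ordered indices of subtasks whose target slices are entered
    Returns the core-thread counts after each transition.
    """
    counts = [1] * len(first)      # initialization: all in connection state
    history = []
    active = None
    for i in events:
        if active is not None:
            counts[active] = 1     # previous subtask back to connection state
        counts[i] = first[i]       # subtask i switches to the active state
        active = i
        history.append(list(counts))
    return history
```

In a real pool (e.g. a Java `ThreadPoolExecutor`) the two count updates would be `setCorePoolSize` calls; here only the bookkeeping is shown.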
It will be appreciated that the above-mentioned method embodiments of the present application may be combined with each other to form combined embodiments without departing from their principle logic; limited by space, these combinations are not repeated herein. It will be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their function and possible inherent logic.
In a second aspect, embodiments of the present application provide a thread-based task processing device.
Fig. 5 is a block diagram of a task processing device based on a thread according to an embodiment of the present application.
Referring to fig. 5, an embodiment of the present application provides a thread-based task processing device 500, including:
the dividing module 501 is configured to divide a task to be processed into a plurality of subtasks according to a service scenario;
the allocation module 502 is configured to determine time allocation information of each subtask according to a service scenario to which each subtask belongs, where the time allocation information is used to characterize processing time allocated by the subtask in different processing states;
an obtaining module 503, configured to obtain information of processing resources, where the information of the processing resources is used to reflect a type of the processing resources and a number of the processing resources;
a determining module 504, configured to determine thread configuration information of each subtask according to information of a processing resource and attribute information of each subtask, where the thread configuration information is used to characterize thread types and thread numbers configured by the subtask in different processing states (i.e., the thread configuration information may characterize thread types and thread numbers configured by each subtask in each processing state);
the configuration module 505 is configured to configure threads for each subtask according to the time allocation information and the thread configuration information of each subtask, and execute corresponding subtasks through the configured threads.
In some alternative implementations, the processing state of the subtask includes an active state, and the thread configuration information includes at least a first core thread number, which is the number of core threads configured for the subtask when it is in the active state;
the determining module 504 performs the following steps when determining, according to the information of the processing resource and the attribute information of each subtask, the number of first core threads included in the thread configuration information of each subtask:
according to the information of the processing resources, determining a preset index for carrying out pressure measurement on the processing resources and an adjustment proportion of a pressure measurement extremum for adjusting the preset index;
performing pressure measurement on the processing resources, and determining a pressure measurement extremum of a preset index;
determining the transaction quantity of single data of each subtask according to the attribute information of each subtask;
determining the single data processing time length of each subtask according to the transaction number of the single data of each subtask and the pressure measurement extremum of a preset index;
and determining the number of the first core threads according to the single data processing time length of each subtask, the pressure measurement extremum of the preset index and the adjustment proportion.
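The arithmetic in the steps above can be read as: cap the pool so that its aggregate transaction rate stays under the pressure extremum scaled by the adjustment ratio. The closed-form formula below is one plausible reading, an assumption of this sketch rather than the embodiment's exact computation:

```python
import math

def first_core_threads(tx_per_datum, single_duration, extremum, ratio):
    """Size the active-state pool against the adjusted pressure extremum.

    tx_per_datum    -- transactions issued while processing one datum
    single_duration -- seconds one thread spends on one datum
    extremum        -- pressure-measurement extremum (e.g. TPS at collapse)
    ratio           -- adjustment ratio, e.g. 0.8 keeps 20% headroom
    """
    allowed_tps = extremum * ratio                   # allowance threshold
    per_thread_tps = tx_per_datum / single_duration  # load one thread adds
    return max(1, math.floor(allowed_tps / per_thread_tps))
```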
In some optional implementations, determining the number of first core threads according to the single data processing duration of each subtask, the pressure measurement extremum of the preset index, and the adjustment ratio includes:
Determining an allowable threshold of the preset index according to the pressure measurement extremum and the adjustment proportion of the preset index;
initializing the number of test core threads;
under the condition that the number of the test core threads is adjusted to meet a preset threshold value permission condition, determining the number of the first core threads according to the current number of the test core threads; the threshold value permission condition is that the actual value of each preset index is smaller than or equal to the permission threshold value, and the processing quantity of the data in unit time reaches the maximum value.
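The threshold-permission probing just described can be sketched as a loop that grows the test core-thread count while every preset index stays within its allowance threshold and unit-time throughput still improves. The `measure` callback is an assumed stand-in for running the workload at a given thread count:

```python
def probe_first_core_threads(measure, allowed, max_threads=64):
    """Grow the test core-thread count until an allowance threshold is
    exceeded or throughput stops improving; return the last good count.

    measure(n) -- assumed callback returning (indexes, throughput) when
                  running with n test core threads.
    allowed    -- allowance threshold per preset index, e.g. {"qps": 900}.
    """
    best_n, best_tp = 1, 0.0
    for n in range(1, max_threads + 1):
        indexes, throughput = measure(n)
        if any(indexes[k] > allowed[k] for k in allowed):
            break                  # a preset index exceeded its allowance
        if throughput <= best_tp:
            break                  # unit-time processing amount has peaked
        best_n, best_tp = n, throughput
    return best_n
```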
In some optional implementations, the processing state of the subtasks further includes a connection state, and the thread configuration information further includes a second core thread number, which is the number of core threads configured for the subtask when it is in the connection state;
the determining module 504 performs the following steps when determining, according to the information of the processing resource and the attribute information of each subtask, the number of second core threads included in the thread configuration information of each subtask:
determining the maximum number of threads according to the information of the processing resources, wherein the maximum number of threads is the maximum number of threads which are supported and established by the processing resources;
determining the number of the remaining core threads according to the maximum number of threads and the number of the first core threads of each subtask;
And determining the second core thread number of each subtask according to the remaining core thread number.
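One simple reading of these steps: subtract the first core-thread counts from the maximum thread number the processing resources support, then share the remainder among the subtasks. The even split with a minimum of one thread is an assumption of this sketch:

```python
def second_core_threads(max_threads, first_counts):
    """Split the core threads left after the first (active-state) counts
    evenly across the subtasks, at least one thread each, so every subtask
    keeps some processing capacity in the connection state.
    """
    remaining = max_threads - sum(first_counts)
    s = len(first_counts)
    return [max(1, remaining // s)] * s
```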
In some optional implementations, the processing state of the subtasks further includes a guarantee state, the thread configuration information further includes a third core thread number, which is the number of core threads configured for the subtask when it is in the guarantee state, and the attribute information of each subtask includes the expected processing amount and expected processing duration of the subtask;
the determining module 504 performs the following steps when determining, according to the information of the processing resource and the attribute information of each subtask, the number of third core threads included in the thread configuration information of each subtask:
and determining the maximum number of core threads meeting the expected processing amount and the expected processing duration of each subtask according to the single data processing duration of each subtask and the information of the processing resources, and determining the maximum number as the third core thread number.
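The guarantee-state sizing can be sketched as: the smallest pool that still finishes the expected processing amount within the expected duration, capped by the cores the processing resources offer. Treating that value as the count determined here is one reading of the step, an assumption of this sketch:

```python
import math

def third_core_threads(expected_amount, expected_duration,
                       single_duration, total_cores):
    """Guarantee-state core-thread count.

    expected_amount   -- data items the subtask must still process
    expected_duration -- seconds available to process them
    single_duration   -- seconds one thread spends on one datum
    total_cores       -- cores provided by the processing resources (cap)
    """
    needed = math.ceil(expected_amount * single_duration / expected_duration)
    return min(needed, total_cores)
```

Comparing the returned value against the cap also supports the later decision of whether the resource amount needs to be increased: if `needed` exceeds `total_cores`, the guarantee cannot be met with the current resources.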
In some alternative implementations, the determination module 504, after determining the maximum number as the third core thread number, further performs the steps of:
comparing the number of the third core threads with a thread number threshold value to obtain a comparison result; wherein the thread number threshold is determined based on a maximum number of threads established by the processing resource support;
And determining whether the resource amount of the processing resource needs to be increased according to the comparison result.
In some alternative implementations, the time allocation information includes a target time slice and non-target time slices for each subtask, and each subtask is in the active state in its target time slice and in the connection state in its non-target time slices;
the configuration module 505 configures threads for each subtask according to the time allocation information and the thread configuration information of each subtask, and executes the following steps when executing the corresponding subtask through the configured thread:
for each subtask, when it is determined according to the time allocation information of the subtask that the current moment belongs to the target time slice of the subtask, core threads are configured for the subtask according to the first core thread number, and the subtask is executed through these core threads; when it is determined according to the time allocation information that the current moment belongs to a non-target time slice of the subtask, core threads are configured for the subtask according to the second core thread number, and the subtask is executed through these core threads.
In some optional implementations, after the configuration module 505 determines, according to the time allocation information of the subtask, that the current moment belongs to the target time slice of the subtask, configures core threads for the subtask according to the first core thread number, and executes the subtask through the core threads, the following steps are further executed:
Under the condition that the preset condition is detected to be met, the number of the core threads of the subtasks is adjusted according to the number of the third core threads, and the preset condition comprises abnormal processing resources;
and executing the subtasks through the adjusted core threads.
In some optional implementations, the tasks to be processed include a list processing task, and the business scenario includes at least one of a list receiving scenario, a list cleaning scenario, a list layering scenario, a list allocation scenario, a list dialing scenario, and a list decision scenario.
According to the embodiments provided by the application, firstly, when a task request is received, the dividing module divides the task to be processed corresponding to the task request into a plurality of subtasks according to the service scenario, so that time slices can be allocated to the corresponding subtasks according to the different service scenarios. Secondly, the allocation module determines the time allocation information of each subtask according to the service scenario to which it belongs; the time allocation information characterizes the processing time allocated to the subtask in different processing states, which defines the subtask to be preferentially executed in each time period and tilts processing resources appropriately toward it, thereby optimizing the allocation of processing resources. The obtaining module obtains information of the processing resources, which reflects the types and quantities of the processing resources. In addition, the determining module determines the thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask; the thread configuration information characterizes the thread types and thread numbers configured for each subtask in each processing state, so that matched thread configuration information is generated for each subtask according to its resource requirements in different processing states, and reasonable threads can be configured for the subtasks in those states accordingly. Finally, the configuration module configures threads for each subtask according to the time allocation information and thread configuration information of each subtask, and the corresponding subtasks are executed through the configured threads. Because both the time dimension and the processing resources are considered, the subtask in its target time slice obtains relatively more processing resources while the subtasks in non-target time slices maintain a certain processing capacity with fewer processing resources, which optimizes the configuration of the processing resources and improves resource utilization.
In addition, the application further provides an electronic device and a computer-readable storage medium, both of which can be used to implement any of the thread-based task processing methods provided by the application; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method parts, which are not repeated herein.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Referring to fig. 6, an embodiment of the present application provides an electronic device, including: at least one processor 601; at least one memory 602, and one or more I/O interfaces 603, connected between the processor 601 and the memory 602; the memory 602 stores one or more computer programs executable by the at least one processor 601, and the one or more computer programs are executed by the at least one processor 601 to enable the at least one processor 601 to perform the thread-based task processing method described above.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program realizes the task processing method based on threads when being executed by a processor. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
Embodiments of the present application also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, performs the above-described thread-based task processing method.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer-readable storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable program instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, random Access Memory (RAM), read Only Memory (ROM), erasable Programmable Read Only Memory (EPROM), static Random Access Memory (SRAM), flash memory or other memory technology, portable compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable program instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and may include any information delivery media.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present application may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, C++, and the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, and the electronic circuitry may execute the computer readable program instructions.
The computer program product described herein may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will therefore be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the scope of the present application as set forth in the following claims.

Claims (10)

1. A method for thread-based task processing, comprising:
dividing a task to be processed into a plurality of subtasks according to service scenarios;
determining time allocation information of each subtask according to the service scenario to which the subtask belongs, wherein the time allocation information is used for representing the processing time allocated to the subtask in different processing states;
acquiring information of processing resources, wherein the information of the processing resources is used for reflecting the type and quantity of the processing resources;
determining thread configuration information of each subtask according to the information of the processing resources and attribute information of each subtask, wherein the thread configuration information is used for representing the thread types and thread quantities configured for the subtask in different processing states; and
configuring threads for each subtask according to the time allocation information and the thread configuration information of the subtask, and executing the corresponding subtask through the configured threads.
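The flow of claim 1 can be sketched in a few lines of Python. Everything here (the function names, the scenario keys, the doubling work function) is illustrative, not taken from the patent:

```python
from concurrent.futures import ThreadPoolExecutor

def process_task(records, scenario_of, thread_config):
    # Step 1: divide the pending task into subtasks keyed by business scenario.
    subtasks = {}
    for rec in records:
        subtasks.setdefault(scenario_of(rec), []).append(rec)
    # Step 2: run each subtask on a pool sized by its thread configuration.
    results = {}
    for scenario, batch in subtasks.items():
        workers = thread_config.get(scenario, 1)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            results[scenario] = list(pool.map(lambda r: r * 2, batch))
    return results
```

The per-scenario pool sizing is the hook where the thread configuration information of claims 2-7 would plug in.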
2. The method of claim 1, wherein the processing states of the subtask include an active state, and the thread configuration information includes at least a first core thread number, the first core thread number being the number of core threads configured for the subtask in the active state;
determining the first core thread number included in the thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask comprises:
determining, according to the information of the processing resources, a preset index for stress testing the processing resources and an adjustment ratio for adjusting a stress-test extreme value of the preset index;
stress testing the processing resources and determining the stress-test extreme value of the preset index;
determining the number of transactions per data item of each subtask according to the attribute information of the subtask;
determining a single-item processing duration of each subtask according to the number of transactions per data item of the subtask and the stress-test extreme value of the preset index; and
determining the first core thread number according to the single-item processing duration of each subtask, the stress-test extreme value of the preset index, and the adjustment ratio.
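The arithmetic of claim 2 can be sketched as follows, assuming the preset index is transactions per second (TPS); the formulas are an interpretation, since the claim does not give explicit equations:

```python
import math

def single_item_duration(txns_per_item, tps_extremum):
    # Time one data item occupies the resource when the index (TPS)
    # is at its stress-test extreme value.
    return txns_per_item / tps_extremum

def first_core_threads(item_duration_s, tps_extremum, adjust_ratio):
    # Scale the extreme value back by the adjustment ratio to a safe
    # allowed throughput, then size the pool to sustain it.
    allowed_tps = tps_extremum * adjust_ratio
    return max(1, math.ceil(allowed_tps * item_duration_s))
```

For example, 5 transactions per item at a 1000 TPS extremum gives a 5 ms item duration; with an adjustment ratio of 0.8 the pool is sized to sustain 800 TPS.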
3. The method according to claim 2, wherein determining the first core thread number according to the single-item processing duration of each subtask, the stress-test extreme value of the preset index, and the adjustment ratio comprises:
determining an allowed threshold of the preset index according to the stress-test extreme value of the preset index and the adjustment ratio;
initializing a test core thread number; and
determining the first core thread number according to the current test core thread number when the test core thread number has been adjusted to satisfy a preset threshold permission condition, wherein the threshold permission condition is that the actual value of each preset index is less than or equal to the allowed threshold and the amount of data processed per unit time reaches a maximum.
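Claim 3's search can be sketched as a loop that grows a trial thread count while every monitored index stays within its allowed threshold; `measure` is a hypothetical stand-in for a real load test:

```python
def find_first_core_threads(measure, thresholds, max_probe=64):
    # Grow a trial core-thread count while every monitored index stays
    # within its allowed threshold; keep the count that maximised the
    # per-unit-time processing volume.
    best_n, best_tput = 1, 0.0
    for n in range(1, max_probe + 1):
        metrics, throughput = measure(n)  # (index values, items/sec)
        if any(metrics[k] > thresholds[k] for k in thresholds):
            break  # an index exceeded its allowed threshold
        if throughput > best_tput:
            best_n, best_tput = n, throughput
    return best_n
```

With a synthetic workload whose throughput plateaus at six threads and whose CPU index breaches the threshold at eight, the search settles on six.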
4. The method of claim 2, wherein the processing states of the subtask further include a connection state, and the thread configuration information further includes a second core thread number, the second core thread number being the number of core threads configured for the subtask in the connection state;
determining the second core thread number included in the thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask comprises:
determining a maximum thread number according to the information of the processing resources, wherein the maximum thread number is the maximum number of threads that the processing resources support establishing;
determining a remaining core thread number according to the maximum thread number and the first core thread number of each subtask; and
determining the second core thread number of each subtask according to the remaining core thread number.
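A minimal sketch of claim 4; the even split of leftover capacity across subtasks is an assumption, since the claim only says the second count is determined "according to the remaining number":

```python
def second_core_threads(max_threads, first_counts):
    # Threads left over after the active-state (first) allocations.
    remaining = max_threads - sum(first_counts.values())
    share = remaining // len(first_counts)
    # Every subtask in the connection state keeps at least one thread.
    return {name: max(1, share) for name in first_counts}
```
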
5. The method of claim 4, wherein the processing states of the subtask further include a guard state, the thread configuration information further includes a third core thread number, the third core thread number being the number of core threads configured for the subtask in the guard state, and the attribute information of each subtask includes an expected processing amount and an expected processing duration of the subtask;
determining the third core thread number included in the thread configuration information of each subtask according to the information of the processing resources and the attribute information of each subtask comprises:
determining, according to the single-item processing duration of each subtask and the information of the processing resources, the maximum number of core threads that satisfies the expected processing amount and the expected processing duration of the subtask, and determining the maximum number as the third core thread number.
6. The method of claim 5, wherein after determining the maximum number as the third core thread number, the method further comprises:
comparing the third core thread number with a thread number threshold to obtain a comparison result, wherein the thread number threshold is determined based on the maximum number of threads that the processing resources support establishing; and
determining, according to the comparison result, whether the resource amount of the processing resources needs to be increased.
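Claims 5 and 6 together amount to a capacity check; the sizing formula below is an interpretation, as the claims state only the inputs and the comparison:

```python
import math

def third_core_threads(expected_amount, expected_duration_s, item_duration_s):
    # Each core thread can process expected_duration / item_duration items
    # serially, so size the pool to finish the expected amount in time.
    per_thread = expected_duration_s / item_duration_s
    return math.ceil(expected_amount / per_thread)

def needs_more_resource(third_count, thread_cap):
    # Claim 6: compare against a threshold derived from the maximum
    # number of threads the processing resources support establishing.
    return third_count > thread_cap
```

For instance, 10,000 items expected within 60 s at 0.5 s per item needs 84 core threads; if the resource's thread cap is lower, the comparison flags that more processing resource is required.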
7. The method of claim 5, wherein the time allocation information includes a target time slice and a non-target time slice for each subtask, and the subtask is in the active state in its target time slice and in the connection state in its non-target time slice;
the configuring threads for each subtask according to the time allocation information and the thread configuration information of each subtask, and executing the corresponding subtask through the configured threads, comprises:
for each subtask, when it is determined according to the time allocation information of the subtask that the current moment falls in the target time slice of the subtask, configuring core threads for the subtask according to the first core thread number and executing the subtask through the core threads; and when it is determined according to the time allocation information of the subtask that the current moment falls in the non-target time slice of the subtask, configuring core threads for the subtask according to the second core thread number and executing the subtask through the core threads.
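The time-slice switch of claim 7 reduces to a small selector; hour-granularity slices are an illustrative simplification, not something the claim specifies:

```python
def core_threads_now(hour, target_slices, first_n, second_n):
    # Inside a target time slice the subtask runs with the first core
    # thread number (active state); outside it, with the second
    # (claim 4's connection state).
    in_target = any(lo <= hour < hi for lo, hi in target_slices)
    return first_n if in_target else second_n
```
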
8. A thread-based task processing device, comprising:
a division module, configured to divide a task to be processed into a plurality of subtasks according to service scenarios;
an allocation module, configured to determine time allocation information of each subtask according to the service scenario to which the subtask belongs, wherein the time allocation information is used for representing the processing time allocated to the subtask in different processing states;
an acquisition module, configured to acquire information of processing resources, wherein the information of the processing resources is used for reflecting the type and quantity of the processing resources;
a determining module, configured to determine thread configuration information of each subtask according to the information of the processing resources and attribute information of each subtask, wherein the thread configuration information is used for representing the thread types and thread quantities configured for the subtask in different processing states; and
a configuration module, configured to configure threads for each subtask according to the time allocation information and the thread configuration information of the subtask, and to execute the corresponding subtask through the configured threads.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores one or more computer programs executable by the at least one processor to enable the at least one processor to perform the thread-based task processing method of any one of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored which, when executed by a processor, implements a thread-based task processing method as claimed in any one of claims 1-7.
CN202310180869.8A 2023-02-28 2023-02-28 Task processing method and device based on threads, electronic equipment and storage medium Pending CN116166435A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310180869.8A CN116166435A (en) 2023-02-28 2023-02-28 Task processing method and device based on threads, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116166435A true CN116166435A (en) 2023-05-26

Family

ID=86413105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310180869.8A Pending CN116166435A (en) 2023-02-28 2023-02-28 Task processing method and device based on threads, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116166435A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745254A (en) * 2023-12-06 2024-03-22 镁佳(北京)科技有限公司 Course generation method, course generation device, computer equipment and readable storage medium


Similar Documents

Publication Publication Date Title
US10970122B2 (en) Optimizing allocation of multi-tasking servers
US11861405B2 (en) Multi-cluster container orchestration
US10880228B2 (en) Proactive channel agent
US20190253316A1 (en) Managing servers with quality of service assurances
US20120290348A1 (en) Routing service requests based on lowest actual cost within a federated virtual service cloud
US20200404051A1 (en) Application placing and scaling
US11144500B2 (en) Assignment of data within file systems
JP7217580B2 (en) Workload Management with Data Access Awareness in Compute Clusters
US11714638B2 (en) Availability level-based service management
US20130227113A1 (en) Managing virtualized networks based on node relationships
US20210133008A1 (en) Throttling using message partitioning and buffering
CN116166435A (en) Task processing method and device based on threads, electronic equipment and storage medium
JP2023545970A (en) Query engine autoscaling for enterprise-level big data workloads
US10908969B2 (en) Model driven dynamic management of enterprise workloads through adaptive tiering
US11303712B1 (en) Service management in distributed system
US20240031305A1 (en) Proactive auto-scaling
US11675631B2 (en) Balancing mainframe and distributed workloads based on performance and costs
KR20220036987A (en) Domain Adaptive Loop Filter for Video Coding
US10904348B2 (en) Scanning shared file systems
US10956037B2 (en) Provisioning storage allocation using prioritized storage system capabilities
US11729081B2 (en) Enhancing software application hosting in a cloud environment
US12107746B2 (en) Enhancing software application hosting in a cloud environment
US12074760B2 (en) Path management
US11847097B2 (en) Optimizing file recall for multiple users
US20170093720A1 (en) Flexibly maximize hardware capabilities in highly virtualized dynamic systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination