CN117492945A - Task processing method and device - Google Patents

Task processing method and device

Info

Publication number
CN117492945A
CN117492945A (application CN202310779467.XA)
Authority
CN
China
Prior art keywords
task
processing
processed
target
list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310779467.XA
Other languages
Chinese (zh)
Inventor
李小毅
卿力
赵飞
罗仕杰
吴海英
蒋宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN202310779467.XA
Publication of CN117492945A

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/48 Indexing scheme relating to G06F 9/48
    • G06F 2209/484 Precedence
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

An embodiment of this specification provides a task processing method and device. The task processing method includes: querying a first thread pool, according to a task processing instruction, for a target thread whose thread state is idle, and invoking the target thread to perform the following operations: determining a target task among the tasks to be processed based on the processing priority and task state of each task to be processed in a task library; updating the task state of the target task in the task library; and, after the update, invoking a second thread pool to allocate, in parallel, the lists to be allocated carried by the target task. Adopting this embodiment of the application can improve task processing efficiency.

Description

Task processing method and device
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a task processing method and device.
Background
With the rapid spread of the internet and the continued growth of social media, a scheduling user on the service side distributes lists of the users participating in a service in order to provide those users with more comprehensive and effective service feedback and service recommendations. Lists are distributed by creating tasks and then processing those tasks. As more and more users participate in online services, more and more lists need to be distributed, so how to distribute lists efficiently has become a growing concern for both users and service providers.
Disclosure of Invention
In a first aspect, an embodiment of the present application provides a task processing method, including:
querying a first thread pool, according to a task processing instruction, for a target thread whose thread state is idle, and invoking the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priority and task state of each task to be processed in a task library;
updating the task state of the target task in the task library; and
after the update, invoking a second thread pool to allocate, in parallel, the lists to be allocated carried by the target task.
It can be seen that, in this embodiment of the application, during task processing a target thread whose thread state is idle is queried in the first thread pool, and the target thread is invoked to determine a target task among the tasks to be processed based on the processing priority and task state of each task to be processed in the task library. Because the target task is determined from both the processing priority and the task state, tasks to be processed with a high processing priority are processed first. After the target task is determined, its task state in the task library is updated by the target thread, which prevents target threads in the first thread pools of other instances from executing the same target task again and thus guarantees concurrency safety. After the update, the second thread pool is invoked to allocate, in parallel, the lists to be allocated carried by the target task, which improves the allocation efficiency of those lists. Determining the target task and updating its task state in the first thread pool, while allocating lists in the second thread pool, improves the overall processing efficiency of the tasks to be processed.
In a second aspect, an embodiment of the present application provides a task processing device, including:
a querying module configured to query a first thread pool, according to a task processing instruction, for a target thread whose thread state is idle, and an invoking module configured to invoke the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priority and task state of each task to be processed in a task library;
updating the task state of the target task in the task library; and
after the update, invoking a second thread pool to allocate, in parallel, the lists to be allocated carried by the target task.
In a third aspect, an embodiment of the present application provides a task processing device, including: a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to perform the task processing method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the task processing method according to the first aspect.
Drawings
For a clearer description of the embodiments of the present application or of prior-art solutions, the drawings required in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some of the embodiments described in this specification; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment of a task processing method according to an embodiment of the present application;
FIG. 2 is a process flow chart of a task processing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the creation process of a task to be processed according to an embodiment of the present application;
FIG. 4 is a schematic diagram of the update process of a processing priority according to an embodiment of the present application;
FIG. 5 is a processing schematic diagram of the task processing method applied to a task processing scenario;
FIG. 6 is a schematic diagram of a task processing device according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a task processing device according to an embodiment of the present application.
Detailed Description
In order to better understand the technical solutions in the embodiments of the present application, those solutions are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without inventive effort shall fall within the scope of the present application.
The task processing method provided in one or more embodiments of this disclosure may be applied to a task processing implementation environment. As shown in FIG. 1, the implementation environment includes at least a server cluster 101 composed of a plurality of servers. An application is deployed on each server in the cluster, and one server on which the application is deployed serves as one application instance; each application instance may define its own thread pools.
Each application instance belongs to a different Kafka (a high-throughput distributed publish-subscribe messaging system) consumer group. The implementation environment also includes Kafka itself, in which the task identifiers created by the application instances are stored.
In addition, the implementation environment may include a terminal device 102. The terminal device 102 may be a mobile phone, a personal computer, a tablet computer, an e-book reader, a virtual-reality (VR) device for information interaction, an in-vehicle terminal, an IoT device, a wearable smart device, a laptop, a desktop computer, and so on. The terminal device 102 may run a client of the application; the specific form of the client may be an application program, a sub-program or service module within an application program, or a web page. Through the client, a user can create tasks to be processed and update their processing priorities. The terminal device 102 may be a single terminal device or a cluster of terminal devices.
In this implementation environment, during task processing each application instance first queries its first thread pool, according to a task processing instruction, for a target thread whose thread state is idle. When such a target thread is found, it is invoked to determine a target task among the tasks to be processed based on the processing priority and task state of each task to be processed in the task library, and, after the task state of the target task in the task library has been updated, to invoke the second thread pool to allocate in parallel the lists to be allocated carried by the target task.
This specification provides an embodiment of a task processing method:
in the task processing method provided by the embodiment, in the task processing process, a target thread with an idle thread state is inquired in a first thread pool, and the target thread is called to process tasks to be processed in a task library; in the process of task processing of the task to be processed in the task library by the target thread, the target task is firstly determined based on the processing priority and the task state of the task to be processed in the task library, so that the task to be processed with high processing priority is processed preferentially, after the target task is determined, the task state of the target task in the task library is updated, so that repeated execution of the target task by the target threads in other first thread pools is avoided, concurrency safety is ensured, after the task state of the target task is updated successfully, the second thread pool is called to perform parallel distribution processing on a list to be distributed carried by the target task, distribution efficiency of the list to be distributed is improved, and processing efficiency of the target task is further improved.
Referring to fig. 2, the task processing method provided in the present embodiment specifically includes steps S202 to S204.
Step S202, querying a first thread pool, according to a task processing instruction, for a target thread whose thread state is idle.
The task processing method provided by this embodiment may be applied to each application instance in an application instance pool. An application deployed on one server can be regarded as one instance; in practice, one application may be deployed on several servers, i.e., there may be multiple application instances. Each instance may define thread pools. In this embodiment, when defining its thread pools, an application instance may define a first thread pool for data processing and a second thread pool for allocating the lists to be allocated; the cooperation of the two pools improves the efficiency of both data processing and allocation.
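As a hypothetical sketch of the two pools a single application instance might define (the pool sizes and name prefixes are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# First thread pool: claims and drives the tasks to be processed.
first_thread_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="task-proc")
# Second thread pool: allocates the lists carried by a task in parallel.
second_thread_pool = ThreadPoolExecutor(max_workers=8, thread_name_prefix="list-alloc")

# A task submitted to the first pool would, in turn, fan work out to the second.
future = first_thread_pool.submit(lambda: "target-task-claimed")
print(future.result())  # → target-task-claimed

first_thread_pool.shutdown()
second_thread_pool.shutdown()
```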
In addition, the task processing method provided in this embodiment may be applied to each first thread pool in a group of first thread pools; in that case, the references to the application instance below can be replaced by references to the first thread pool.
The task processing instruction in this embodiment is an instruction to start task processing on the tasks to be processed in the task library. Optionally, the task processing instruction is obtained after a task identifier is consumed from Kafka. Note that what is consumed from Kafka is the task identifier of a task to be allocated.
In conventional practice, during task processing an application instance consumes task identifiers from Kafka; after consuming an identifier it reads the corresponding task to be processed, processes it, and only after the lists to be allocated carried by that task have been fully allocated does it consume the next identifier. In other words, tasks are processed strictly in the order in which their identifiers are consumed. When a large number of task identifiers pile up in Kafka, the corresponding tasks can only be executed one after another in consumption order, and that order cannot be adjusted.
In view of this, in this embodiment the consumed task identifier is not used to read the corresponding task to be allocated; instead, it serves as a task processing instruction that triggers task processing on the tasks to be processed in the task library.
In actual application, if the application instances in the application instance pool were all in the same Kafka consumer group, each instance could only consume some of the task identifiers in Kafka. For example, with 10 task identifiers in Kafka and 5 application instances in the same consumer group, the 5 instances would split the 10 identifiers evenly: application instance 1 consumes task identifiers 1 and 2, instance 2 consumes identifiers 3 and 4, instance 3 consumes identifiers 5 and 6, instance 4 consumes identifiers 7 and 8, and instance 5 consumes identifiers 9 and 10.
In this embodiment, each application instance is therefore placed in a different Kafka consumer group, so that every instance in the application instance pool can consume all task identifiers in Kafka; in other words, the target thread can determine the target task among all tasks to be processed.
In an optional implementation of this embodiment, each application instance in the application instance pool performs the following operations:
reading the consumer group identifier in the application configuration information;
generating a random value, and deriving the consumer group identifier of the instance from the configured identifier and the random value; and
updating the instance's consumer group identifier into the consumer group configuration of Kafka.
Specifically, each application instance obtains its own consumer group identifier by appending its generated random value to the consumer group identifier in the application configuration information, and writes that identifier into the consumer group configuration of Kafka.
For example, suppose the consumer group identifier in the application configuration information is consumption-group-ID, and application instances 1 through 5 generate the random values 1 through 5 respectively. Combining the configured identifier with each random value yields consumption-group-ID1 for instance 1, consumption-group-ID2 for instance 2, consumption-group-ID3 for instance 3, consumption-group-ID4 for instance 4, and consumption-group-ID5 for instance 5; each instance's consumer group identifier is then updated into the consumer group configuration of Kafka.
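The derivation of per-instance consumer group identifiers can be sketched as follows. The function name and the random-value range are assumptions, and the deterministic values 1 through 5 simply mirror the example above:

```python
import random

def consumer_group_id(base_id, rand=None):
    """Derive a per-instance Kafka consumer group id by appending a random
    value to the shared base id from the application configuration, so that
    each instance lands in its own consumer group and sees every task id."""
    if rand is None:
        rand = random.randint(1, 1_000_000)  # assumed range
    return f"{base_id}{rand}"

# Mirrors the example in the text: instances 1..5 get consumption-group-ID1..ID5.
group_ids = [consumer_group_id("consumption-group-ID", i) for i in range(1, 6)]
print(group_ids)  # → ['consumption-group-ID1', ..., 'consumption-group-ID5']
```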
In this embodiment, a processing priority is configured for each task to be processed, so that during task processing the target task is determined preferentially based on the processing priorities of the tasks to be processed in the task library. The task library is a database storing the tasks to be processed and their associated data. A task to be processed is a task that has been created and awaits processing, for example a task that requires its lists to be allocated.
In an optional implementation manner provided in this embodiment, the task to be processed is created by adopting the following manner:
acquiring a task creation request submitted by a client; optionally, the task creation request is configured to request to create a task to be processed including at least one list to be allocated under the task category;
creating a task to be processed containing the at least one list to be allocated, and building an association relationship between the created task to be processed and a processing priority carried in the task creation request;
and storing the created task to be processed and associated data of the created task to be processed into the task library.
Optionally, if the processing priority carried in the task creation request is null, reading a default priority of the task category; and establishing an association relation between the default priority and the created task to be processed.
Specifically, in response to a task creation request submitted by a client, a task to be processed containing the at least one list to be allocated carried by that request is created. If the request carries a processing priority, an association is established between that priority and the created task; if the carried priority is null, the default priority of the task category carried in the request is read and associated with the created task. The created task and its associated data are then stored in the task library. Optionally, the task creation request carries a task category and at least one list to be allocated.
The associated data is data associated with the task to be processed, for example its processing priority. In addition, after the task is created, the task processing policy of its task category may be read and associated with the task, in which case the policy also forms part of the task's associated data. Optionally, the associated data includes a processing priority and/or a task processing policy. Associations between the created task and other related data may also be established as actually required, with that data likewise serving as associated data; this embodiment is not limited in this respect.
Alternatively, the creation of the task to be processed may proceed as follows: obtain a task creation request submitted by the client, the request optionally carrying a task category and at least one list to be allocated; create a task to be processed containing the at least one list; read the default priority of the task category and associate it with the created task; and store the created task and its associated data in the task library.
In a concrete execution, besides storing the task to be processed and its associated data in the task library, the task identifier of the task is also sent to Kafka, so that an application instance can consume it.
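A minimal sketch of the creation path just described, with plain Python lists standing in for both the task library and Kafka. The default priority values come from the example elsewhere in the text; the function and field names are invented:

```python
# Assumed default priorities per task category (values taken from the text's example).
DEFAULT_PRIORITY = {"preview": 30, "prediction": 50}

def create_task(category, lists, priority=None, task_library=None, message_queue=None):
    """Create a pending task holding the lists to allocate; fall back to the
    category's default priority when the request carries none, then store the
    task and publish its identifier to the message system."""
    if priority is None:
        priority = DEFAULT_PRIORITY[category]
    task = {
        "id": len(task_library) + 1,
        "category": category,
        "lists": lists,
        "priority": priority,
        "state": "pending",  # initialized to the to-be-processed state
    }
    task_library.append(task)
    message_queue.append(task["id"])  # send task identifier to the message system
    return task

library, queue = [], []
t = create_task("preview", ["list-1"], task_library=library, message_queue=queue)
print(t["priority"], queue)  # → 30 [1]
```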
As shown in fig. 3, the creation process of the task to be processed specifically includes steps S302 to S308.
Step S302, a task creation request submitted by a client is obtained.
Optionally, the task creation request carries a task category and at least one list to be allocated. In this embodiment, the scenario to which the lists to be allocated belong determines the task category: if the lists belong to a preview scenario, the task category is the preview category; if they belong to a predictive scenario, the task category is the predictive category; and if they belong to an intelligent scenario, the task category is the intelligent category. The scenarios and task categories here are merely exemplary; in practice they may be configured as actually required, and this embodiment is not limited in this respect.
A list in this embodiment is an outbound-call list, which records a user's information and/or the information of the services the user participates in; other information may also be recorded as the business requires, which this embodiment does not limit. During outbound calling, an outbound list must be assigned to an agent for handling; such a list awaiting assignment to an agent is the list to be allocated in this embodiment. Note that the lists to be allocated in one task creation request all belong to the same scenario.
In a concrete execution, a user accesses, through the client, the set of lists to be allocated under a task category and selects at least one list from that set; according to the user's task creation instruction for the selected lists, the client submits to the application instance a task creation request for the at least one list under that task category.
Step S304, creating a task to be processed containing the at least one list to be allocated.
Alternatively, step S304 may be replaced by creating a task to be processed containing the at least one list to be allocated carried by the task creation request, forming a new implementation together with steps S302, S306 and S308.
Step S306, reading the default priority of the task category, and establishing the association between the task to be processed and the default priority.
In a concrete execution, different task categories are configured with different default priorities. For example, from a business perspective tasks in the preview category have a higher processing priority than tasks in the predictive category, so the default priority of the preview category is configured as 30 and that of the predictive category as 50, the smaller value here denoting the higher priority. In this way, the default priorities determine the order in which tasks of different categories are processed.
Alternatively, step S306 may be replaced by reading the processing priority in the task creation request and associating it with the task to be processed; correspondingly, step S308 then stores the task and its associated data in the task library and sends the task identifier to the message system. These replacements form a new implementation together with steps S302 and S304 of this embodiment.
Step S308, storing the task to be processed and the related data thereof into a task library, and sending the task identification of the task to be processed to a message system.
The message system here is Kafka. The task identifier is generated during the creation of the task to be processed.
After the created task to be processed is stored in the task library, the task state of the task to be processed in the task library is the state to be processed.
In a specific execution process, the process of storing the task to be processed and the related data thereof into the task library can be realized in the following manner:
the task state of the task to be processed is configured to be the state to be processed;
and storing the task to be processed in the state to be processed and the processing priority of the task to be processed into a task library.
Before the task to be processed and its associated data are stored in the task library, the task state may be configured, i.e., the task is initialized, and the initialized task and its associated data are then stored in the task library. Besides the task state, initialization may also cover other dimensions, which this embodiment does not limit.
The above specifically describes the creation of a task to be processed associated with a default or requested processing priority. In this embodiment, when a large number of tasks to be processed pile up in the task library, the user may additionally update the processing priority of any task still in the to-be-processed state, thereby raising the priority of tasks that need to be handled first and flexibly configuring the processing order of the tasks to be processed.
In an optional implementation of this embodiment, the processing priority is updated as follows:
reading the processing priorities of the tasks to be processed in the task library according to the priority update instruction submitted by the client;
sending the processing priority of each task to be processed to the client;
querying, according to the priority update instruction submitted by the client, the task state of the target task to be processed to which the instruction corresponds, the instruction carrying the update priority; and
if the task state of the target task to be processed is the to-be-processed state, updating its processing priority based on the update priority.
Specifically, the user may query all tasks to be processed, and may perform a priority update on any task whose task state is the to-be-processed state. That is, the processing priority of a target task to be processed is updated according to a priority update instruction submitted by the client for that task, provided its state in the task library is the to-be-processed state.
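The guarded update can be sketched as follows. The state names and return convention are assumptions; in the embodiment itself the check-and-update would be a single conditional update against the task library database:

```python
def update_priority(task_library, task_id, new_priority):
    """Update a task's processing priority only while its state is still
    'pending' (the to-be-processed state); otherwise leave it untouched and
    report the current state so the client can refresh its display."""
    task = next((t for t in task_library if t["id"] == task_id), None)
    if task is None:
        return False, "not-found"
    if task["state"] == "pending":
        task["priority"] = new_priority
        return True, task["state"]
    return False, task["state"]

# Mirrors the example in the text: the default priority 30 is replaced by 15
# while the task is still pending; a task already being processed is rejected.
library = [
    {"id": 1, "priority": 30, "state": "pending"},
    {"id": 2, "priority": 50, "state": "processing"},
]
print(update_priority(library, 1, 15))  # → (True, 'pending')
print(update_priority(library, 2, 15))  # → (False, 'processing')
```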
As shown in fig. 4, the update processing procedure of the processing priority specifically includes steps S402 to S408.
Step S402, reading the tasks to be processed in the task library and their processing priorities according to a priority update instruction submitted by a client.
Step S404, sending the tasks to be processed and their processing priorities to the client.
Optionally, in the process of displaying the task to be processed and the processing priority of each task to be processed in the task library, the display state of the task to be processed in which the task state is the processing priority of the task to be processed in the task to be processed is a triggerable state; and the display state of the processing priority of the task to be processed, of which the task state is the processing state other than the processing state, in the task to be processed is a trigger prohibition state.
Specifically, to avoid wasting priority-update resources on tasks already in the processed state, and to avoid the impact on concurrency safety of updating the priority of a task in the in-process state, in this embodiment the client configures only the processing priority of tasks in the to-be-processed state as triggerable when displaying tasks and their processing priorities, so that the user can update only the processing priorities that are in the triggerable state.
Step S406, inquiring the task state of the target task to be processed according to the priority updating instruction submitted by the client to the target task to be processed.
To further ensure concurrency safety, after a priority update instruction submitted by the client for a target task to be processed is acquired, the task state of the target task is queried based on the priority update instruction.
Step S408, if the task state of the target task to be processed is the task state to be processed, updating the processing priority of the target task to be processed based on the priority carried in the priority updating instruction.
For example, if the default priority assigned to the target task at creation is 30 and the priority carried in the priority update instruction is 15, the processing priority associated with the target task in the task library is updated from 30 to 15.
In addition, if the task state of the target task to be processed is not the task state to be processed, the task state of the target task to be processed is read, and the read task state is sent to the client, so that the client updates the display state of the target task to be processed.
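Steps S406 to S408 can be sketched as a small state check. This is an illustrative Python sketch, not the patented implementation; the field names (`state`, `priority`) and the `"pending"` value are assumptions introduced here.

```python
def update_priority(task, new_priority):
    """Sketch of steps S406-S408: apply the update priority only while
    the task is still in the to-be-processed ('pending') state; otherwise
    return the current task state so the client can refresh its display.
    Field names are illustrative, not from the original disclosure."""
    if task["state"] != "pending":
        return ("state", task["state"])   # client updates its display state
    task["priority"] = new_priority       # e.g. default 30 -> updated 15
    return ("updated", new_priority)
```

With the numbers from the example above, a pending task created at default priority 30 is updated to 15, while a task already in process keeps its priority and reports its state back.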
The above describes creating a task to be processed and updating its processing priority. In this embodiment the processing priority of a task may be set by the user; in addition, by configuring default priorities for different task categories, a task can be processed according to its default priority when the user does not configure one. Further, by configuring processing priorities, when a large number of tasks are in the to-be-processed state the user can manually designate the priority of an urgent, not-yet-processed task so that it is processed first.
In this embodiment, in a task processing process for a task to be processed in a task library, an application instance consumes a task identifier from kafka as a task processing instruction according to a task processing request submitted by a client, and queries a target thread with an idle state in a first thread pool according to the task processing instruction.
The configuration of the first thread pool in this embodiment may be: core thread count 1, maximum thread count 1, waiting queue capacity 0, rejection policy discard. Setting both the core and maximum thread counts to 1 means only one task is processed at a time, improving the task processing efficiency of the task being processed. Setting the waiting queue capacity to 0 with a discard rejection policy means a high-priority task is never parked in a waiting queue: if no idle target thread exists in the first thread pool, the task processing instruction that was read is discarded, so that the first thread pool of another application instance can process the high-priority task, thereby guaranteeing that tasks are processed according to their processing priority.
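The behavior of this configuration (one worker, zero-capacity queue, discard policy) can be modeled with a few lines. This is a minimal sketch for illustration only; in a Java deployment it would correspond to a `ThreadPoolExecutor` with a zero-capacity queue and a discard policy, but the class and member names below are invented here.

```python
class FirstThreadPoolSketch:
    """Minimal model of the first thread pool's configuration: core
    threads 1, max threads 1, waiting queue capacity 0, rejection
    policy discard. Illustrative sketch, not the patented code."""

    def __init__(self):
        self.busy = False        # state of the single worker thread
        self.discarded = 0       # instructions dropped by the policy

    def submit(self, instruction, run):
        if self.busy:            # no idle target thread, queue size 0
            self.discarded += 1  # rejection policy: discard silently
            return False
        self.busy = True
        try:
            run(instruction)
        finally:
            self.busy = False
        return True
```

A second instruction submitted while the single worker is occupied is simply dropped, which is exactly what lets another application instance's first thread pool pick the task up instead.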
In a specific execution process, after the first thread pool acquires a task processing instruction, it queries for a target thread whose thread state is the idle state; if the query result is empty, no processing is performed; if the query result is not empty, step S204 is executed.
Alternatively, if the query for a target thread whose thread state is the idle state returns an empty result, the following operations may further be executed:
reading the waiting queue capacity of the first thread pool;
if the waiting queue capacity is 0, executing the rejection policy of the first thread pool;
and if the waiting queue capacity is not 0, executing the rejection policy when the number of waiting tasks in the waiting queue equals the queue capacity, and adding the task processing instruction to the waiting queue when the number of waiting tasks is smaller than the queue capacity.
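The three branches above can be written out directly. A hedged sketch, with parameter names (`waiting_queue`, `capacity`, `reject`, `enqueue`) invented for illustration:

```python
def handle_when_no_idle_thread(instruction, waiting_queue, capacity,
                               reject, enqueue):
    """Sketch of the three-branch rule: with a zero-capacity waiting
    queue the rejection policy fires immediately; otherwise the
    instruction waits only while the queue has room."""
    if capacity == 0:
        reject(instruction)                  # waiting is not allowed
    elif len(waiting_queue) >= capacity:
        reject(instruction)                  # queue already full
    else:
        enqueue(instruction)                 # room left: let it wait
```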
For example, the application instance 1 consumes the task identifier 1, sends the task identifier 1 as a task processing instruction to the first thread pool of the application instance 1, and if the first thread pool queries that the target thread in the idle state exists, the following step S204 is executed.
While the target thread executes the following step S204, if application instance 1 consumes task identifier 2, application instance 1 sends task identifier 2 as a task processing instruction to its first thread pool. The first thread pool finds no target thread in the idle state, reads a waiting queue capacity of 0 (meaning waiting is not allowed), and executes the rejection policy, which is discard, so the task processing instruction is discarded. Task identifiers subsequently consumed by application instance 1 are handled in the same way.
In addition, if this embodiment is executed by an application instance, the step of querying the first thread pool for a target thread whose thread state is the idle state according to the task processing instruction may instead be: acquiring the task processing instruction consumed from kafka and sending it to the first thread pool, so that the first thread pool queries for a target thread whose thread state is the idle state according to that instruction. If this embodiment is executed by the first thread pool, the step may instead be: acquiring the task processing instruction consumed from kafka and querying the first thread pool for a target thread whose thread state is the idle state according to that instruction.
Step S204, calling the target thread to execute the following operations: determining a target task in the task to be processed based on the processing priority and the task state of the task to be processed in a task library; updating the task state of the target task in the task library; and after the updating processing, a second thread pool is called to carry out parallel allocation processing on the list to be allocated carried by the target task.
In the previous step, a target thread whose thread state is the idle state is queried in the first thread pool according to the task processing instruction; in this step, the target thread is invoked to process the tasks to be processed in the task library.
The task processing process of the target thread on the task to be processed in the task library is realized by executing the following processes.
(1) And determining a target task in the tasks to be processed based on the processing priority and the task state of the tasks to be processed in the task library.
The target task is the task to be processed in the task library whose processing priority is highest and whose task state is the to-be-processed state.
To process tasks according to their processing priorities, a target task is determined among the tasks to be processed based on the processing priority and the task state of the tasks in the task library. In an optional implementation provided in this embodiment, this determination is realized in the following manner:
reading a task to be processed, the task state of which is the state to be processed, in the task library as a candidate task;
according to the processing priority of the candidate tasks, the candidate tasks are arranged in a descending order to obtain a candidate task queue;
and taking the first candidate task in the candidate task queue as the target task.
Specifically, candidate tasks whose task state is the to-be-processed state are screened out of the task library, and the candidate task whose processing priority is higher than that of all other candidates is taken as the target task, thereby determining the to-be-processed task with the highest processing priority.
If the number of first-ranked candidate tasks in the candidate task queue is greater than 1, those candidates are arranged in ascending order of task creation time to obtain an arrangement queue, and the first candidate task in the arrangement queue is taken as the target task.
For example, in determining a target task from the task library, the target thread reads the tasks whose task state is the to-be-processed state as candidate tasks and ranks them in descending order of processing priority to obtain a candidate task queue. It then reads the first-ranked candidate task. If exactly one candidate ranks first, that candidate is taken as the target task. If more than one candidate ranks first, those candidates are ranked in ascending order of their task creation times to obtain an arrangement queue, and the first candidate in that queue is taken as the target task. That is, the target task is determined primarily by processing priority; when several tasks tie on priority, the one whose creation time is earliest is processed first.
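The selection rule can be condensed into a single sort key. An illustrative sketch, assuming a larger priority number means a higher priority and using invented field names (`state`, `priority`, `created_at`):

```python
def pick_target_task(task_library):
    """Sketch of target-task selection: filter candidates in the
    to-be-processed state, rank by processing priority (descending,
    assuming larger number = higher priority), and break ties by
    earliest task creation time."""
    candidates = [t for t in task_library if t["state"] == "pending"]
    if not candidates:
        return None
    return min(candidates, key=lambda t: (-t["priority"], t["created_at"]))
```

Using `min` with a composite key performs both the descending-priority ranking and the ascending creation-time tie-break in one pass.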
(2) And updating the task state of the target task in the task library.
After the target task is determined, its task state in the task library is updated to guarantee concurrency safety: if the to-be-allocated list carried by the target task were allocated directly without updating the task state, multiple application instances could allocate the same list simultaneously, wasting allocation resources.
To further ensure concurrency safety, the task state of the target task in the task library is updated based on the instance identifier of the application instance or the thread pool identifier of the first thread pool. Once one application instance or first thread pool has successfully updated the task state of the target task, no other application instance or first thread pool can update it.
For example, if application instance 1, application instance 2, application instance 3, application instance 4 and application instance 5 all determine that task 5 is the target task among the tasks to be processed, the target threads in their thread pools each attempt to update the task state of task 5 in the task library based on their respective instance identifiers. The target thread of application instance 1 succeeds, updating the task state of task 5 from the to-be-processed state to the in-process state and recording instance identifier 1. The target thread of application instance 1 then calls the second thread pool of application instance 1 to allocate the to-be-allocated list carried by task 5, while the target threads of application instances 2, 3, 4 and 5 each re-determine a target task among the tasks to be processed, again based on the processing priorities and task states in the task library.
Based on this, in an optional implementation provided in this embodiment, the task state of the target task in the task library is updated; if the update succeeds, the second thread pool is invoked to allocate the to-be-allocated list carried by the target task; if the update fails, execution returns to determining a target task among the tasks to be processed based on their processing priorities and task states in the task library and updating the task state of that target task, until an update succeeds.
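The claim-before-process step can be sketched as follows. This in-memory version only illustrates the control flow; in a real system the check-and-set would be made atomic, for example via a conditional database update guarded by the task's current state (an optimistic lock, as the later embodiment of fig. 5 names it). The function and field names are invented here.

```python
def claim_task(task, instance_id):
    """Sketch of the optimistic state update: only the first claimer
    moves the task from 'pending' to 'processing'. A real deployment
    would make this atomic (e.g. UPDATE ... WHERE state = 'pending');
    this sketch shows only the success/failure branches."""
    if task["state"] != "pending":
        return False             # another instance won; re-select a task
    task["state"] = "processing"
    task["owner"] = instance_id  # record which instance claimed it
    return True
```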
(3) And after the updating processing, a second thread pool is called to carry out parallel allocation processing on the list to be allocated carried by the target task.
And after the task state of the target task in the task library is successfully updated, calling a second thread pool to carry out parallel allocation processing on the list to be allocated carried by the target task.
The configuration of the second thread pool may be: core thread count 20, maximum thread count 40, waiting queue capacity 1000, rejection policy execution by the main thread. In this embodiment the to-be-allocated lists are grouped so that different threads in the second thread pool perform allocation processing on the lists of different groups in parallel, improving the allocation efficiency of the to-be-allocated lists.
In an optional implementation manner provided in this embodiment, in a process of calling a second thread pool to perform parallel allocation processing on a list to be allocated carried by the target task, the following operations are performed:
grouping the list to be allocated to obtain at least one list group;
and calling the second thread pool to carry out parallel allocation processing on the lists to be allocated in each list group.
In order to further improve the allocation efficiency of the to-be-allocated list, in an optional implementation manner provided in this embodiment, a process of performing packet processing on the to-be-allocated list to obtain at least one list packet is implemented in the following manner:
reading a first list in the lists to be allocated according to a first preset number;
and carrying out grouping processing on the first list to obtain a second preset number of list groups.
Optionally, the first preset number is a preset number of lists read each time; the second preset number is determined based on the core thread count of the second thread pool. Specifically, the second preset number may be equal to the core thread count of the second thread pool, or to a multiple of it. Grouping lists according to the core thread count thus lets the core threads of the second thread pool process the list groups in parallel during allocation, improving allocation efficiency. In addition, the second preset number may also be a pre-configured or pre-created random number, which is not limited herein.
In a specific execution process, according to a first preset number, reading a first list in the list to be allocated, carrying out grouping processing on the first list to obtain a second preset number of list groups, sending the second preset number of list groups to a second thread pool, and carrying out parallel allocation processing on the second preset number of list groups by the second preset number of core threads in the second thread pool.
For example, the target thread of the application example 1 performs allocation processing on the to-be-allocated list carried by the task 5, the task 5 carries 3000 to-be-allocated lists, 1000 to-be-allocated lists are read from 3000 to-be-allocated lists, the number of core threads of the second thread pool of the application example 1 is 20, the 1000 to-be-allocated lists are divided into 20 groups, 50 to-be-allocated lists of each group are subjected to task generation, 20 subtasks are obtained, the 20 subtasks are sent to the second thread pool, and the efficiency of task processing is improved through asynchronous processing of the subtasks by 20 core threads in the second thread pool.
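The batching in this example (read 1000 of 3000 lists, split into 20 groups of 50 for the 20 core threads) is straightforward to sketch. The function and parameter names are illustrative:

```python
def group_batch(to_allocate, batch_size=1000, core_threads=20):
    """Sketch of the grouping step: read the first `batch_size` lists
    and split them into `core_threads` groups, so each core thread of
    the second thread pool receives one group. The 1000/20 defaults
    mirror the worked example in the text."""
    batch = to_allocate[:batch_size]
    if not batch:
        return []
    group_size = -(-len(batch) // core_threads)   # ceiling division
    return [batch[i:i + group_size]
            for i in range(0, len(batch), group_size)]
```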
It should be noted that, in the foregoing process of allocating a first preset number of the to-be-allocated lists carried by the target task, when the number of lists carried by the target task is less than or equal to the first preset number, all the carried lists are read and grouped into a second preset number of list groups. If the number of lists carried by the target task is larger than the first preset number, then after the first list carried by the target task has been read, unread to-be-allocated lists still remain in the target task.
In order to perform complete task processing on the target task, in an alternative implementation manner provided in this embodiment, if the number of lists to be allocated carried by the target task is greater than the first preset number, after the above process is performed, the following operations are further performed:
grouping the second list to obtain a second preset number of list groups;
and calling the second thread pool to carry out parallel distribution processing on the list group obtained after the grouping processing of the second list.
Optionally, the second list is a first preset number of to-be-allocated lists in to-be-allocated lists other than the first list in the target task; for example, the target task carries 3000 lists to be allocated, 1000 lists to be allocated read in the first round are the first lists, and 1000 lists to be allocated read in the second round in the remaining 2000 lists to be allocated are the second lists.
Specifically, a second list is read from the to-be-allocated lists according to the first preset number and grouped into a second preset number of list groups; the second thread pool is then called so that its threads perform parallel allocation processing on the list groups obtained from the second list. The to-be-allocated lists are read in this loop until all lists in the target task have been read.
Continuing with the example of application instance 1's target thread allocating the to-be-allocated list carried by task 5: task 5 carries 3000 to-be-allocated lists, of which 1000 are read. Since the core thread count of the second thread pool of application instance 1 is 20, the 1000 lists are divided into 20 groups, and task generation is performed on the 50 lists of each group to obtain 20 subtasks, which are sent to the second thread pool and processed asynchronously to improve task processing efficiency. Then another 1000 lists are read from the remaining 2000 to-be-allocated lists of task 5 and processed in the same manner, looping until all to-be-allocated lists in task 5 have been read.
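The full reading loop of this example can be sketched end to end. `dispatch` stands in for sending a subtask to the second thread pool; all names are illustrative:

```python
def allocate_all(to_allocate, dispatch, batch_size=1000, core_threads=20):
    """Sketch of the cyclic reading: repeatedly take `batch_size` lists,
    group them for the second thread pool's core threads, dispatch each
    group as a subtask, and continue until every to-be-allocated list
    has been read."""
    offset = 0
    while offset < len(to_allocate):
        batch = to_allocate[offset:offset + batch_size]
        group_size = -(-len(batch) // core_threads)  # ceiling division
        for i in range(0, len(batch), group_size):
            dispatch(batch[i:i + group_size])        # one subtask per group
        offset += batch_size
```

For the 3000-list example this dispatches three rounds of 20 subtasks each, covering every list exactly once.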
In this embodiment, different task categories use different task processing strategies to allocate the to-be-allocated list. For example, under the preview category the list is allocated directly to human agents, so the preview-category strategy includes at least one human agent. Under the prediction category the list is allocated to the internet call center for unified dialing and, once a call is connected, to human agents, so the prediction-category strategy includes at least one human agent and the internet call center. Under the intelligent category the list is allocated to the intelligent robot customer service for dialing, so the intelligent-category strategy includes at least one intelligent outbound-call dialog template.
In order to improve the effectiveness of allocation processing of the second thread pool, the parallel allocation processing process of calling the second thread pool to the list to be allocated carried by the target task can be realized in the following manner:
determining a task processing strategy according to the task category of the target task, and carrying out grouping processing on the list to be allocated to obtain at least one list group;
and calling each thread in the second thread pool to allocate and process the to-be-allocated lists in the corresponding list groups based on the task processing strategy.
Specifically, after at least one list group is obtained, a subtask containing the list group and the task processing strategy is generated for each list group, yielding at least one subtask; the call to the second thread pool is realized by sending the at least one subtask to it.
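Pairing each list group with its category's strategy can be sketched as follows. The strategy values are illustrative placeholders standing in for the preview/prediction/intelligent behaviors described above, not names from the original disclosure:

```python
def build_subtasks(task_category, list_groups):
    """Sketch: generate one subtask per list group, each carrying the
    task processing strategy determined by the task category."""
    strategies = {
        "preview": "assign_directly_to_human_agents",
        "prediction": "call_center_dials_then_assigns_to_agents",
        "intelligent": "robot_customer_service_dials",
    }
    strategy = strategies[task_category]
    return [{"strategy": strategy, "lists": group} for group in list_groups]
```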
Further, in the course of the loop execution, the following operations may be performed:
reading a second list from the list to be allocated according to the first preset quantity;
grouping the second list to obtain a second preset number of list groups;
and calling the second thread pool, and carrying out distribution processing on the lists to be distributed in each list group based on the task processing strategy.
In the specific execution process, after the allocation of the to-be-allocated list of the target task is completed, the task state of the target task in the task library is updated, so that the effectiveness of the task state of the to-be-processed task in the task library is ensured. In an optional implementation manner provided in this embodiment, if it is detected that allocation of the list to be allocated carried by the target task is completed, based on the thread pool identifier of the first thread pool, the task state of the target task in the task library is updated from the in-process state to the processed state.
Alternatively, the task state of the target task in the task library may be updated from the in-process state to the processed state based on the instance identifier. For example, after the target thread of application instance 1 detects that the second thread pool of application instance 1 has completed allocation of the to-be-allocated list carried by task 5, it updates the task state of task 5 in the task library from the in-process state to the processed state according to the instance identifier of application instance 1.
In a specific execution process, after updating the task state of a target task from the in-process state to the processed state, in order to continue task processing on the tasks in the task library that remain in the to-be-processed state, the target thread loops over reading and processing target tasks until no task in the to-be-processed state remains in the task library:
Reading a task to be processed, the task state of which is the state to be processed, in the task library as a candidate task;
if the reading result is not null, the candidate tasks are arranged in a descending order according to the processing priority of the candidate tasks to obtain a candidate task queue, and the candidate task of the first order in the candidate task queue is used as the next target task;
if the reading result is empty, the processing is not needed.
Specifically, after the task processing of the target task is completed, the target thread reads the tasks in the task library whose task state is the to-be-processed state as candidate tasks; if the read result is not empty, it determines the next target task among them and processes it, looping this procedure until the read of to-be-processed tasks from the task library returns empty.
In summary, in the task processing method provided by this embodiment, a task identifier consumed from kafka is not used to read a corresponding task from the task library directly; after the identifier is consumed from kafka, it serves only as a task processing instruction. A target thread whose thread state is the idle state is queried in the first thread pool according to that instruction, and the invoked target thread determines the target task from the processing priorities and task states of the tasks to be processed in the task library. After updating the task state of the target task in the task library, it calls the second thread pool to perform parallel allocation processing on the to-be-allocated list carried by the target task. Because the target task is determined by processing priority and task state rather than by the consumed identifier, configuring processing priorities adjusts the order in which tasks are processed, improving the flexibility of task processing. And because multiple threads in the second thread pool allocate the list in parallel, allocation efficiency, and thus task processing efficiency, is improved.
The following further describes a task processing method applied to a task allocation processing scenario provided in this embodiment, and referring to fig. 5, the task processing method applied to a task allocation processing scenario specifically includes the following steps.
Step S502, a task processing instruction consumed from kafka is acquired.
Step S504, according to the task processing instruction, inquiring the target thread with the thread state of idle state in the first thread pool.
Step S506, the target thread reads the to-be-assigned tasks in the task library whose task state is the to-be-processed state;
if the reading result is not null, executing step S508 to step S510;
if the reading result is empty, the processing is not needed.
Step S508, the target thread determines the target to-be-assigned task whose processing priority is higher than that of the other to-be-assigned tasks.
Step S510, the target thread updates the task state of the target allocation task in the task library from the state to be processed to the state in process based on the optimistic lock;
if the update is successful, go to step S512 to step S516;
if the update fails, the process returns to step S506.
In step S512, the target thread reads the task allocation policy of the target allocation task.
Step S514, the target thread performs subtask division based on the outbound list carried by the target allocation task to obtain at least one allocation subtask.
Step S516, the target thread calls a second thread pool, performs parallel allocation processing on the outbound list in at least one allocation subtask according to the task allocation policy, and returns to execute step S506.
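Steps S506 to S516 together form one worker loop, which can be sketched end to end. This is an illustrative single-threaded sketch: the state flip stands in for the optimistic-lock update of step S510 (which in a real deployment would be atomic), `process` stands in for steps S512 to S516, and the field names are invented here.

```python
def run_target_thread(task_library, process):
    """End-to-end sketch of the S506-S516 loop for one target thread:
    read pending tasks, pick the highest-priority one (earliest
    creation time breaks ties), claim it by flipping its state,
    process its outbound list, mark it done, and repeat until no
    pending task remains (S506 returns empty)."""
    while True:
        pending = [t for t in task_library if t["state"] == "pending"]
        if not pending:                       # S506: read result empty
            return
        target = min(pending,
                     key=lambda t: (-t["priority"], t["created_at"]))
        target["state"] = "processing"        # S510 (sketch, not atomic)
        process(target)                       # S512-S516
        target["state"] = "done"
```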
An embodiment of a task processing device provided in the present specification is as follows:
Corresponding to the task processing method provided in the above embodiments, a task processing device is provided, and it is described below with reference to the accompanying drawings.
Referring to fig. 6, a schematic diagram of a task processing device provided in this embodiment is shown.
Since the apparatus embodiments correspond to the method embodiments, the description is relatively simple, and the relevant portions should be referred to the corresponding descriptions of the method embodiments provided above. The device embodiments described below are merely illustrative.
The present embodiment provides a task processing device, including:
a query module 602, configured to query, in the first thread pool, a target thread whose thread state is an idle state according to a task processing instruction, and a call module 604, configured to call the target thread to perform the following operations:
Determining a target task in the task to be processed based on the processing priority and the task state of the task to be processed in a task library;
updating the task state of the target task in the task library;
and after the updating processing, a second thread pool is called to carry out parallel allocation processing on the list to be allocated carried by the target task.
The various modules in the task processing device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
An embodiment of a task processing device provided in the present specification is as follows:
corresponding to the task processing method described above, based on the same technical concept, the embodiment of the present application further provides a task processing device, where the task processing device is configured to execute the task processing method provided above, and fig. 7 is a schematic structural diagram of the task processing device provided in the embodiment of the present application.
The task processing device provided in this embodiment includes:
As shown in fig. 7, the task processing device may vary considerably in configuration or performance, and may include one or more processors 701 and a memory 702, where the memory 702 may store one or more application programs or data. The memory 702 may be transient storage or persistent storage. An application program stored in the memory 702 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the task processing device. Still further, the processor 701 may be arranged to communicate with the memory 702 and execute, on the task processing device, the series of computer-executable instructions in the memory 702. The task processing device may also include one or more power supplies 703, one or more wired or wireless network interfaces 704, one or more input/output interfaces 705, one or more keyboards 706, and the like.
In a particular embodiment, a task processing device includes a memory and one or more programs stored in the memory. The one or more programs may include one or more modules, each of which may include a series of computer-executable instructions for the task processing device; the one or more programs are configured to be executed by one or more processors and comprise computer-executable instructions for:
querying, according to a task processing instruction, a first thread pool for a target thread whose thread state is idle, and invoking the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priorities and task states of the tasks to be processed in a task library;
updating the task state of the target task in the task library;
and, after the update, invoking a second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task.
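As an illustration only, the two-thread-pool flow described above can be sketched roughly as follows; every name here (Task, allocate, the state strings) is a hypothetical stand-in, not taken from the patent's actual implementation:

```python
# Hypothetical sketch: a worker from a first pool picks the highest-priority
# pending task from a task library, marks it in-process, then fans its
# to-be-allocated list out to a second pool for parallel allocation.
from concurrent.futures import ThreadPoolExecutor, wait

PENDING, IN_PROCESS, PROCESSED = "pending", "in_process", "processed"

class Task:
    def __init__(self, task_id, priority, items):
        self.task_id = task_id
        self.priority = priority   # higher value = processed first (assumed)
        self.state = PENDING
        self.items = items         # the "list to be allocated"

def allocate(item):
    # Placeholder for allocating one list entry (e.g. to a handler).
    return f"allocated:{item}"

def process_next(task_library, second_pool):
    # Determine the target task by priority and state, as in the claims.
    pending = [t for t in task_library if t.state == PENDING]
    if not pending:
        return None
    target = max(pending, key=lambda t: t.priority)
    target.state = IN_PROCESS                      # update task state first
    futures = [second_pool.submit(allocate, i) for i in target.items]
    wait(futures)                                  # parallel allocation
    target.state = PROCESSED
    return target

first_pool = ThreadPoolExecutor(max_workers=2)     # "first thread pool"
second_pool = ThreadPoolExecutor(max_workers=4)    # "second thread pool"
library = [Task("t1", 1, ["a", "b"]), Task("t2", 5, ["c"])]
done = first_pool.submit(process_next, library, second_pool).result()
```

Here the idle-thread query of the first pool is approximated by `submit`, which hands the work to whichever pool worker is free; the patent's actual thread-state query mechanism is not specified at this level of detail.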
An embodiment of a computer-readable storage medium provided in the present specification is as follows:
Corresponding to the task processing method described above and based on the same technical concept, an embodiment of the present application further provides a computer-readable storage medium.
This embodiment provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the following flow:
querying, according to a task processing instruction, a first thread pool for a target thread whose thread state is idle, and invoking the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priorities and task states of the tasks to be processed in a task library;
updating the task state of the target task in the task library;
and, after the update, invoking a second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task.
It should be noted that the embodiment of a computer-readable storage medium in this specification and the embodiment of the task processing method in this specification are based on the same inventive concept; for specific implementation details of this embodiment, reference may therefore be made to the implementation of the corresponding method described above, and repeated description is omitted.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the embodiments may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, magnetic disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed via the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, causing a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random-access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Embodiments of the application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. Embodiments of the application may also be practiced in distributed computing environments where tasks are performed by remote processing devices linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
In this specification, the embodiments are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant details, see the description of the method embodiments.
The foregoing description provides examples only and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modifications, equivalent substitutions, improvements, and the like that fall within the spirit and principles of this document are intended to be included within the scope of its claims.

Claims (12)

1. A method of task processing, the method comprising:
querying, according to a task processing instruction, a first thread pool for a target thread whose thread state is idle, and invoking the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priorities and task states of the tasks to be processed in a task library;
updating the task state of the target task in the task library;
and, after the update, invoking a second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task.
2. The method of claim 1, wherein invoking the second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task comprises:
grouping the to-be-allocated list to obtain at least one list group;
and invoking the second thread pool to perform parallel allocation of the to-be-allocated lists in each list group.
3. The method of claim 2, wherein grouping the to-be-allocated list to obtain at least one list group comprises:
reading a first list from the to-be-allocated list according to a first preset number;
and grouping the first list to obtain a second preset number of list groups.
4. The method of claim 3, further comprising, after grouping the first list to obtain the second preset number of list groups:
reading a second list from the to-be-allocated list according to the first preset number;
grouping the second list to obtain a second preset number of list groups;
and invoking the second thread pool to perform parallel allocation of the list groups obtained by grouping the second list.
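For illustration, the batched reading and grouping recited in claims 3 and 4 might look like the following sketch; the round-robin split and all names are assumptions, since the claims do not fix a particular grouping strategy:

```python
# Hypothetical sketch: read the to-be-allocated list in batches of a "first
# preset number" of entries, and split each batch into a "second preset
# number" of list groups for the second thread pool to allocate in parallel.
def read_batches(items, batch_size):
    # Yield successive batches of at most batch_size entries.
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def split_into_groups(batch, group_count):
    # Round-robin split of one batch into at most group_count non-empty groups.
    groups = [[] for _ in range(group_count)]
    for i, item in enumerate(batch):
        groups[i % group_count].append(item)
    return [g for g in groups if g]

items = list(range(10))                 # ten list entries to allocate
first_preset, second_preset = 6, 3      # example preset numbers
all_groups = [split_into_groups(batch, second_preset)
              for batch in read_batches(items, first_preset)]
```

The first batch of six entries yields three groups, and the remaining four entries form the second batch, grouped the same way, matching the claim's repeated read-then-group cycle.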
5. The method of claim 1, further comprising:
acquiring a task creation request submitted by a client, the task creation request requesting creation of a task to be processed, wherein the task to be processed comprises at least one to-be-allocated list under a task category;
creating a task to be processed containing the at least one to-be-allocated list, and establishing an association between the created task to be processed and a processing priority carried in the task creation request;
and storing the created task to be processed and its associated data in the task library.
6. The method of claim 5, further comprising, after creating the task to be processed containing the at least one to-be-allocated list:
if the processing priority carried in the task creation request is null, reading a default priority of the task category;
and establishing an association between the default priority and the created task to be processed.
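A minimal sketch of the creation flow in claims 5 and 6, under the assumption that tasks and requests are plain dictionaries and that each task category carries a default priority; all field names and category names are illustrative, not from the patent:

```python
# Hypothetical sketch: create a to-be-processed task from a client request;
# if the request carries no processing priority (null), fall back to the
# default priority associated with the task's category.
DEFAULT_PRIORITY = {"collection": 3, "review": 1}   # assumed category defaults

def create_task(request, task_library):
    priority = request.get("priority")
    if priority is None:                 # priority field is null or absent
        priority = DEFAULT_PRIORITY[request["category"]]
    task = {"category": request["category"],
            "items": request["items"],   # the to-be-allocated list
            "priority": priority,
            "state": "pending"}
    task_library.append(task)            # store task and associated data
    return task

lib = []
t1 = create_task({"category": "collection", "items": ["x"], "priority": 9}, lib)
t2 = create_task({"category": "review", "items": ["y"], "priority": None}, lib)
```

The explicit priority on the first request is kept as-is, while the second request's null priority is replaced by the category default before the association is stored.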
7. The method of claim 1, further comprising:
reading the processing priority of each task to be processed in the task library according to a priority update instruction submitted by a client;
sending the processing priority of each task to be processed to the client;
querying, according to the priority update instruction submitted by the client, the task state of a target task to be processed corresponding to the priority update instruction, wherein the priority update instruction carries an update priority;
and, if the task state of the target task to be processed is the to-be-processed state, updating the processing priority of the target task to be processed based on the update priority.
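The state guard in claim 7 — a priority is only updated while the target task is still in the to-be-processed state — can be illustrated as follows, with dictionary fields assumed purely for illustration:

```python
# Hypothetical sketch: apply a client-submitted update priority only when the
# target task has not yet started processing; otherwise leave it unchanged.
def update_priority(task, new_priority):
    if task["state"] == "pending":       # still to be processed
        task["priority"] = new_priority
        return True
    return False                         # reject: task already in process

a = {"state": "pending", "priority": 2}
b = {"state": "in_process", "priority": 2}
ok_a = update_priority(a, 7)             # accepted
ok_b = update_priority(b, 7)             # rejected, priority untouched
```

Rejecting updates to in-process tasks avoids reordering work that a thread has already claimed, which is presumably the motivation for the guard.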
8. The method of claim 1, further comprising, after invoking the second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task:
if it is detected that allocation of the to-be-allocated list carried by the target task is complete, updating the task state of the target task in the task library from the in-process state to the processed state based on a thread pool identifier of the first thread pool.
9. The method of claim 8, further comprising, after updating the task state of the target task in the task library from the in-process state to the processed state:
reading, from the task library, the tasks to be processed whose task state is the to-be-processed state as candidate tasks;
and, if the reading result is not null, sorting the candidate tasks in descending order of processing priority to obtain a candidate task queue, and taking the first-ranked candidate task in the candidate task queue as the next target task.
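Claims 8 and 9 together describe a hand-off: mark the finished target processed, then choose the next target as the highest-priority pending task. A hedged sketch with assumed field names:

```python
# Hypothetical sketch: on completion, move the target task to the processed
# state, then build a descending-priority queue of remaining pending tasks
# and take its head as the next target (None if nothing is pending).
def finish_and_pick_next(target, task_library):
    target["state"] = "processed"
    candidates = [t for t in task_library if t["state"] == "pending"]
    if not candidates:
        return None                      # reading result is null
    queue = sorted(candidates, key=lambda t: t["priority"], reverse=True)
    return queue[0]                      # first-ranked candidate task

lib = [{"id": 1, "state": "in_process", "priority": 5},
       {"id": 2, "state": "pending", "priority": 2},
       {"id": 3, "state": "pending", "priority": 8}]
nxt = finish_and_pick_next(lib[0], lib)
```

A production system would more likely keep candidates in a priority queue than re-sort on every completion, but the sort mirrors the claim's "descending order, take the first" wording directly.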
10. A task processing device, the device comprising:
a query module configured to query, according to a task processing instruction, a first thread pool for a target thread whose thread state is idle, and an invoking module configured to invoke the target thread to perform the following operations:
determining a target task among the tasks to be processed based on the processing priorities and task states of the tasks to be processed in a task library;
updating the task state of the target task in the task library;
and, after the update, invoking a second thread pool to perform parallel allocation of the to-be-allocated list carried by the target task.
11. A task processing device, the device comprising:
a processor; and a memory configured to store computer-executable instructions that, when executed, cause the processor to perform the task processing method of any one of claims 1 to 9.
12. A computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the task processing method of any one of claims 1 to 9.
CN202310779467.XA 2023-06-28 2023-06-28 Task processing method and device Pending CN117492945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310779467.XA CN117492945A (en) 2023-06-28 2023-06-28 Task processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310779467.XA CN117492945A (en) 2023-06-28 2023-06-28 Task processing method and device

Publications (1)

Publication Number Publication Date
CN117492945A true CN117492945A (en) 2024-02-02

Family

ID=89677062

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310779467.XA Pending CN117492945A (en) 2023-06-28 2023-06-28 Task processing method and device

Country Status (1)

Country Link
CN (1) CN117492945A (en)

Similar Documents

Publication Publication Date Title
US9454385B2 (en) System and method for fully configurable real time processing
US11334391B2 (en) Self-programmable and self-tunable resource scheduler for jobs in cloud computing
CN108021400B (en) Data processing method and device, computer storage medium and equipment
CN112230616A (en) Linkage control method and device and linkage middleware
CN103873587B (en) A kind of method and device that scheduling is realized based on cloud platform
CN110166507B (en) Multi-resource scheduling method and device
CN113448728B (en) Cloud resource scheduling method, device, equipment and storage medium
CN114610474A (en) Multi-strategy job scheduling method and system in heterogeneous supercomputing environment
US20230275976A1 (en) Data processing method and apparatus, and computer-readable storage medium
CN116166395A (en) Task scheduling method, device, medium and electronic equipment
Sharma et al. A Dynamic optimization algorithm for task scheduling in cloud computing with resource utilization
CN112905338B (en) Automatic computing resource allocation method and device
CN111913792B (en) Service processing method and device
CN113326025A (en) Single cluster remote continuous release method and device
CN117492945A (en) Task processing method and device
CN114327818B (en) Algorithm scheduling method, device, equipment and readable storage medium
CN114710350B (en) Method and device for distributing callable resources, electronic equipment and storage medium
CN116257333A (en) Distributed task scheduling method, device and system
CN114896637A (en) Data processing method and device, electronic equipment and storage medium
CN114169733A (en) Resource allocation method and device
CN114489978A (en) Resource scheduling method, device, equipment and storage medium
CN112328598A (en) ID generation method, device, electronic equipment and storage medium
CN110737533A (en) task scheduling method and device, electronic equipment and storage medium
CN114584625B (en) Message processing method and device, electronic equipment and storage medium
CN114969119A (en) Data query method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination