CN106802826B - Service processing method and device based on thread pool - Google Patents


Info

Publication number
CN106802826B
Authority
CN
China
Prior art keywords
task
thread pool
service
tasks
processed
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201611209613.1A
Other languages
Chinese (zh)
Other versions
CN106802826A (en)
Inventor
吴文昊
吕伊蒙
冯哲
Current Assignee (the listed assignee may be inaccurate)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by China Unionpay Co Ltd
Priority claimed from CN201611209613.1A
Publication of CN106802826A
Application granted
Publication of CN106802826B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; database structures therefor; file system structures therefor
    • G06F 16/20 Information retrieval of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2453 Query optimisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038 Allocation of resources considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Multi Processors (AREA)

Abstract

The invention discloses a thread-pool-based service processing method and device that give a thread-pool processing system a priority function when processing services. The method comprises the following steps: receiving a service to be processed and parsing it into N tasks, where N is greater than or equal to 1; obtaining the configuration file of the service according to its service type, the configuration file containing the service's task-queue information and the task priority of each task parsed from the service; placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue itself having a priority; and placing the N tasks into a thread pool in order according to the priority of the task queue. Because the task queues have priorities, services with higher priority can be processed first; and because the tasks within a queue also have priorities, important tasks can be processed first.

Description

Service processing method and device based on thread pool
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a service processing method and apparatus based on a thread pool.
Background
With the rapid growth of the financial industry, the data pressure on financial systems, whether real-time or batch processing systems, keeps increasing. As data volumes grow explosively, most financial systems already store data in a distributed manner, and the supporting technology is increasingly mature. Service processing is often accompanied by complex database operations; to manage distributed databases and improve processing efficiency, the thread pool is a highly effective service-processing mechanism.
At present, thread-pool-based service processing systems fall mainly into synchronous single-database systems and asynchronous multi-database systems. The asynchronous multi-database thread pool is increasingly widely used because of its efficient, non-blocking, asynchronous processing. However, it has no weighting function and cannot support priority-based thread processing, so it cannot give urgent or important tasks preferential treatment.
To sum up, current thread-pool processing systems lack a priority function when processing services.
Disclosure of Invention
The invention provides a thread-pool-based service processing method and device that give a thread-pool processing system a priority function when processing services.
An embodiment of the invention provides a thread-pool-based service processing method comprising the following steps:
receiving a service to be processed and parsing it into N tasks, where N is greater than or equal to 1;
obtaining the configuration file of the service according to its service type, the configuration file containing the service's task-queue information and the task priority of each task parsed from the service;
placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority; and
placing the N tasks into a thread pool in order according to the priority of the task queue.
Optionally, the configuration file also contains thread-pool information of the service to be processed, and before the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises:
determining, according to the thread-pool information, the M threads in the thread pool that correspond to the service to be processed; and
associating the tasks in the service's task queue with the M threads.
Optionally, the configuration file also contains database information, and before the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises:
connecting the M threads, according to the database information, to the databases recorded in that information.
Optionally, the task queue is divided, according to priority from high to low, into a first-in-first-out (FIFO) queue, a weight queue, a last-in-first-out (LIFO) queue and an external queue;
the external queue stores key tasks, the key tasks being some or all of the N tasks;
after the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises:
receiving, for each of the N tasks, the task's callback information;
judging from the callback information whether the task executed successfully; and
if the task failed, determining the key task corresponding to it in the external queue and going back to execute that key task.
Optionally, the thread pool is any one of the thread pools in a thread-pool system, and any thread pool in the system can acquire the database resources corresponding to the other thread pools;
after the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises:
judging the execution efficiency of the thread pool from the execution status of the N tasks;
if the efficiency is below a preset threshold, obtaining a substitute thread pool from the system, the substitute being a pool whose processing speed is higher than that of the current pool; and
transferring all or some of the unprocessed tasks among the N tasks to the task queue corresponding to the substitute pool.
An embodiment of the invention provides a thread-pool-based service processing device comprising:
a parsing module, configured to receive a service to be processed and parse it into N tasks, where N is greater than or equal to 1;
an acquisition module, configured to obtain the configuration file of the service according to its service type, the configuration file containing the service's task-queue information and the task priority of each task parsed from the service;
a configuration module, configured to place the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority; and
a scheduling module, configured to place the N tasks into the thread pool in order according to the priority of the task queue.
Optionally, the configuration file also contains thread-pool information of the service to be processed, and the configuration module is further configured to:
determine, according to the thread-pool information, the M threads in the thread pool that correspond to the service to be processed; and
associate the tasks in the service's task queue with the M threads.
Optionally, the configuration file also contains database information, and the configuration module is further configured to connect the M threads, according to the database information, to the databases recorded in that information.
Optionally, the task queue is divided, according to priority from high to low, into a first-in-first-out (FIFO) queue, a weight queue, a last-in-first-out (LIFO) queue and an external queue;
the external queue stores key tasks, the key tasks being some or all of the N tasks;
the scheduling module is further configured to:
receive, for each of the N tasks, the task's callback information;
judge from the callback information whether the task executed successfully; and
if the task failed, determine the key task corresponding to it in the external queue and go back to execute that key task.
Optionally, the thread pool is any one of the thread pools in a thread-pool system, and any thread pool in the system can acquire the database resources corresponding to the other thread pools;
the scheduling module is further configured to:
judge the execution efficiency of the thread pool from the execution status of the N tasks;
if the efficiency is below a preset threshold, obtain a substitute thread pool from the system, the substitute being a pool whose processing speed is higher than that of the current pool; and
transfer all or some of the unprocessed tasks among the N tasks to the task queue corresponding to the substitute pool.
In summary, embodiments of the invention provide a thread-pool-based service processing method and device comprising: receiving a service to be processed and parsing it into N tasks, where N is greater than or equal to 1; obtaining the configuration file of the service according to its service type, the configuration file containing the service's task-queue information and the task priority of each parsed task; placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority; and placing the N tasks into a thread pool in order according to the priority of the task queue. Because the N tasks parsed from a service are placed, by service type, into a task queue that itself has a priority, tasks parsed from higher-priority services are processed first during scheduling; and because the tasks within a queue are arranged by priority, important tasks within a service are also processed first.
Drawings
To illustrate the technical solutions of the embodiments more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and a person skilled in the art could derive other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a service processing method based on a thread pool according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a relationship between a service type, a weight value, and a task queue according to an embodiment of the present invention;
fig. 3 is a schematic diagram of dynamic scheduling according to an embodiment of the present invention;
fig. 4 is a distributed thread pool system architecture according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a service processing apparatus based on a thread pool according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the protection scope of the invention.
Fig. 1 is a schematic flow chart of a service processing method based on a thread pool according to an embodiment of the present invention, as shown in fig. 1, including the following steps:
S101: receiving a service to be processed and parsing it into N tasks, where N is greater than or equal to 1;
S102: obtaining the configuration file of the service according to its service type, the configuration file containing the service's task-queue information and the task priority of each parsed task;
S103: placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority;
S104: placing the N tasks into a thread pool in order according to the priority of the task queue.
In a specific implementation of step S101, the service to be processed is generally received through an Application Programming Interface (API). The service is a combination of multiple tasks and can be parsed into those tasks; processing the service means processing the parsed tasks.
In a specific implementation of step S102, each service type corresponds to a preset configuration file, and the configuration file for the service to be processed is obtained according to its service type. For example, if the query service corresponds to configuration file 1 and the transaction service to configuration file 2, both preset, then configuration file 1 is obtained when the service is a query and configuration file 2 when it is a transaction. Optionally, the thread pool has multiple task queues, and the configuration file contains the service's task-queue information and the task priority of each task parsed from the service. For example, if the configuration file of the query service is configuration file 1, then configuration file 1 records the task queue A corresponding to the query service and the task priority of each task parsed from the query service. Optionally, task priorities are distinguished by weight values: an ordinary task has a positive weight, with a larger weight meaning higher priority, while an important task has a negative weight, again with a larger weight meaning higher priority. Splitting the parsed tasks into two classes by sign facilitates subsequent priority processing.
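The per-service-type configuration lookup and the sign-based weight convention described above can be sketched as follows. The file names, structure and weight values are illustrative assumptions, not the patent's actual format:

```python
# Illustrative per-service-type configuration store (step S102).
# Each service type maps to a preset configuration holding its task-queue
# name and the weight (priority) of every task the service parses into.
CONFIG_FILES = {
    "query":       {"task_queue": "A", "task_weights": {"lookup": 3, "format": 1}},
    "transaction": {"task_queue": "B", "task_weights": {"debit": -3, "credit": -2, "log": 5}},
}

def get_config(service_type):
    """Fetch the preset configuration file for a service type."""
    return CONFIG_FILES[service_type]

def is_key_task(weight):
    # Per the scheme above: negative weights mark important ("key") tasks,
    # positive weights mark ordinary tasks.
    return weight < 0
```

For instance, `get_config("transaction")` would yield configuration file 2's content, with the negatively weighted `debit` and `credit` entries marked as key tasks.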
In a specific implementation of step S103, the configuration file for the service contains the priorities of the N tasks parsed from it, and the N tasks must be placed into the task queue according to that priority information. The receiving task queues are themselves differentiated by priority: the thread pool has multiple task queues with different priorities among them, and the configuration file records which task queue corresponds to the service, i.e. the service is assigned a priority according to its service type. For example, configuration file 1 records task queue A for the query service, configuration file 2 records task queue B for the transaction service, and queue B has higher priority than queue A, so the tasks in queue B are processed before those in queue A. That is, tasks parsed from the transaction service are processed first, and tasks parsed from the query service are processed after them. Likewise, when the N tasks parsed from transaction service B are placed into task queue B, they must be placed in priority order so that high-priority tasks can be processed first.
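The priority-ordered enqueueing of step S103 might be sketched as follows. The ranking rule is a simplifying assumption: key (negative-weight) tasks outrank ordinary ones, and within each class a larger magnitude means higher priority:

```python
import heapq

def priority_rank(weight):
    """Map a task weight to a sortable rank (smaller rank = scheduled first).

    Assumption for this sketch: key tasks (negative weight) outrank ordinary
    ones, and larger magnitude means higher priority within each class.
    """
    if weight < 0:
        return (0, -abs(weight))   # key tasks first, strongest weight first
    return (1, -weight)            # then ordinary tasks, largest weight first

def enqueue_tasks(tasks):
    """Place parsed (name, weight) tasks into a task queue in priority order."""
    heap = []
    for seq, (name, weight) in enumerate(tasks):
        # seq breaks ties so equal-priority tasks keep their arrival order
        heapq.heappush(heap, (priority_rank(weight), seq, name))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```

With the hypothetical transaction-service weights above, `enqueue_tasks([("log", 5), ("debit", -3), ("credit", -2)])` orders the key tasks `debit` and `credit` ahead of the ordinary `log` task.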
Optionally, the configuration file further contains thread-pool information of the service, and before the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises: determining, according to the thread-pool information, the M threads in the thread pool that correspond to the service; the M threads are used to process the N tasks. Before the N tasks are formally processed, a thread pool for processing them must also be configured. The thread pool may be an independent pool or one belonging to a distributed thread-pool system. Optionally, for a distributed system with multiple pools, the running state of each pool can first be traversed and the pool with the best overall conditions (running speed, resource status, and so on) selected to process the N tasks. The thread-pool information records the number M of threads needed to process the service; once the pool is determined, M threads are set aside from it according to that information and dedicated to the tasks parsed from the service. For example, if the pool holds 1000 threads in total and the thread-pool information in configuration file 2 for the transaction service specifies 100 threads, then 100 threads are set aside from the pool to process the transaction service, and those 100 threads can satisfy the processing needs of the tasks parsed from it.
Configuring threads for the service according to the thread-pool information in the configuration file ensures that the configured threads meet the service's needs without wasting pool resources; this raises thread utilization in the pool, optimizes the pool's memory planning, and improves its processing efficiency. Optionally, a default thread count is set according to practical requirements or experience, and when the configuration file contains no thread-pool information, threads are configured for the service according to that default.
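The thread allocation with a default fallback described above might be sketched as follows, with a hypothetical `threads` configuration key and Python's `ThreadPoolExecutor` standing in for the per-service thread pool (the patent does not name an implementation):

```python
from concurrent.futures import ThreadPoolExecutor

DEFAULT_THREADS = 10  # fallback when the configuration omits thread-pool info

def make_service_executor(config):
    """Set aside the M threads a service needs, per its configuration.

    The 'threads' key is a hypothetical configuration field used only for
    illustration; the patent does not define a file format.
    """
    m = config.get("threads", DEFAULT_THREADS)
    return ThreadPoolExecutor(max_workers=m)

# e.g. the transaction service's configuration might specify 100 threads
executor = make_service_executor({"threads": 100})
```

A service whose configuration file omits the thread count simply gets the default-sized executor.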
Optionally, the configuration file further contains database information, and before the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises: connecting the M threads, according to the database information, to the databases recorded in that information. Processing tasks in the thread pool requires scheduling data resources from databases, so the configuration file also records the databases needed to process the service. For example, if processing the transaction service requires the resources of database 1 and database 2, the database information in configuration file 2 is database 1 and database 2. Connecting the M threads to the recorded databases means the database information is stored in the thread pool; when processing a task, the M threads schedule resources from the designated databases according to that information. If the database information of the transaction service is database 1 and database 2, the thread pool schedules data resources from databases 1 and 2 when processing the tasks parsed from the transaction service. Adding database information to the configuration file allows multiple distributed databases to be operated simultaneously, and when a new database must be added to support service processing, only the corresponding database information needs to be added to the configuration file.
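A hedged sketch of wiring up the databases named in the configuration, using in-memory SQLite purely as a stand-in (the patent specifies no particular database or driver, and the `databases` key is a hypothetical configuration field):

```python
import sqlite3

def connect_databases(config):
    """Open a connection to each database recorded in the configuration.

    In-memory SQLite is used only for illustration; in the scheme above the
    connections would be held by the thread pool and shared by its M threads.
    """
    return {name: sqlite3.connect(":memory:") for name in config.get("databases", [])}

# e.g. the transaction service's configuration names database 1 and database 2
connections = connect_databases({"databases": ["db1", "db2"]})
```

Adding a new database to support the service then only requires appending its name to the configuration's database list.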
In a specific implementation of step S104, each task queue has a different priority, and optionally the tasks in the higher-priority task queue are put into the thread pool first.
Optionally, the task queue is divided, according to priority from high to low, into a first-in-first-out (FIFO) queue, a weight queue, a last-in-first-out (LIFO) queue and an external queue. The external queue stores key tasks, which are some or all of the N tasks parsed from the service; generally, the tasks in the external queue are not executed. After the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises: receiving, for each of the N tasks, the task's callback information; judging from the callback information whether the task executed successfully; and, if the task failed, determining the key task corresponding to it in the external queue and going back to execute that key task. Optionally, when the service is extremely important and parses into few tasks, all of the parsed tasks are stored in the external queue; when the service is of ordinary importance or parses into too many tasks, only some of them are stored in the external queue as key tasks. Optionally, the key tasks may be selected according to the weights that express task priority, for example by storing the negatively weighted tasks in the external queue as key tasks. Each task is scheduled into the thread pool for processing, and callback information is obtained to indicate the task's completion status.
Fig. 2 shows the relationship among service types, weight values and task queues according to an embodiment of the invention. As shown in Fig. 2, the priority of each service is expressed by a weight: an ordinary service has a positive weight, with a larger weight meaning higher priority, while an important service has a negative weight, again with a larger weight meaning higher priority. Query-type services generally have a low weight, so their parsed tasks are placed in the FIFO queue with the lowest priority. Financial services are more important than query services, so their tasks are placed in the internal queue, and when the internal queue runs short of memory the tasks are also placed in the external queue. For flash-sale ("seckill") operations, the tasks have the highest priority and are placed in the LIFO queue.
In a specific implementation, the callback information of each of the N tasks is received; whether the task executed successfully is judged from the callback information; and if the task failed, the key task corresponding to it in the external queue is determined and executed again. Task failure has two cases. In the first, the failed task is itself a key task; the stored task is simply extracted from the external queue and re-executed. In the second, the failed task is an ordinary task not stored in the external queue; the executed key task closest to it is extracted from the external queue and reprocessed. Optionally, when the other queues run short of memory, the external queue also takes part in task scheduling.
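The two failure cases above can be sketched as follows. Representing the external queue as an ordered list of key-task names, and taking "closest" to mean the most recently executed key task, are simplifying assumptions for illustration:

```python
def handle_callback(task_name, success, external_queue, execute):
    """React to a task's callback using the external queue's key tasks.

    `external_queue` is modeled as a list of key-task names in execution
    order; "closest" is taken to be the most recent one. Both are
    simplifying assumptions, not the patent's definitions.
    """
    if success:
        return None           # task succeeded; nothing to recover
    if task_name in external_queue:
        redo = task_name      # case 1: the failed task is itself a key task
    else:
        # case 2: an ordinary task failed; fall back to the nearest key task
        redo = external_queue[-1] if external_queue else None
    if redo is not None:
        execute(redo)         # return to (re-)execute the key task
    return redo
```

A failed key task is thus replayed directly, while a failed ordinary task triggers a replay from the nearest stored key task, giving the snapshot-style recovery described below.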
Storing the key tasks in the external queue gives the system a snapshot capability: when task processing goes wrong, processing can roll back and re-run through key-task recovery, which strengthens the system's availability. Moreover, rollback only requires extracting the key task from the external queue, with no need to return through different API interfaces according to error type and processing result, so this embodiment also provides a very friendly and simple API.
Optionally, the thread pool is any one of the thread pools in a thread-pool system, and any pool in the system can acquire the database resources corresponding to the other pools. After the N tasks are placed into the thread pool in order of task-queue priority, the method further comprises: judging the execution efficiency of the thread pool from the execution status of the N tasks; if the efficiency is below a preset threshold, obtaining a substitute thread pool from the system, the substitute being a pool whose processing speed is higher than that of the current pool; and transferring all or some of the unprocessed tasks to the task queue corresponding to the substitute pool. A distributed thread-pool system has multiple pools whose task-processing efficiency and resource pressure differ; when the efficiency of the pool processing a service falls below a preset threshold, the tasks parsed from that service can be dispersed to other pools. Each pool of the distributed system contains threads that can call the database resources of the other pools, so every pool is capable of processing the other pools' services.
Fig. 3 is a schematic diagram of dynamic scheduling provided in an embodiment of the invention. As shown in Fig. 3, service APP1 and service APP2 each parse into 300 tasks, processed by thread pool a and thread pool b respectively. Because the database resources of pool a are ample, its task processing is efficient; after a period of time, 30 tasks remain for APP1 while 200 remain for APP2, and the processing speed of pool b is below the preset threshold. At this point, some or all of APP2's remaining 200 tasks are scheduled into the task queue corresponding to APP1, and pool a assists pool b in processing APP2's remaining tasks. This dynamic scheduling method speeds up service processing and makes full use of thread-pool resources. The method can also be scaled out horizontally: for complex services, the parsed tasks can be spread across multiple pools, reducing the pressure on any single pool and improving service processing.
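A toy sketch of the dynamic-scheduling check described above. The pool representation, the efficiency metric and the threshold are illustrative assumptions, not the patent's definitions:

```python
def rebalance(pools, threshold):
    """Migrate a slow pool's unprocessed tasks to a substitute pool.

    `pools` maps pool name -> {"efficiency": float, "queue": list}, a
    hypothetical shape chosen for this sketch. A pool whose efficiency
    falls below `threshold` hands its queued tasks to the most efficient
    other pool (the "substitute" pool).
    """
    for name, pool in pools.items():
        if pool["efficiency"] < threshold and pool["queue"]:
            substitute = max(
                (p for n, p in pools.items() if n != name),
                key=lambda p: p["efficiency"],
            )
            substitute["queue"].extend(pool["queue"])  # transfer unprocessed tasks
            pool["queue"].clear()
    return pools
```

In the Fig. 3 scenario, pool b (efficiency below threshold, 200 tasks queued) would hand its remaining tasks to pool a's queue.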
Fig. 4 shows a distributed thread pool system architecture according to an embodiment of the present invention. As shown in Fig. 4, APP1, APP2, …, APPN represent the N types of service to be processed by the distributed thread pool system, and thread pool 1, …, thread pool N are the N thread pools the system has. Each thread pool has four task queues, shown by way of example in Fig. 4 as a LIFO queue, an internal queue, a FIFO queue and an external queue. The manager is responsible for parsing a service to be processed into a plurality of tasks according to the execution statement of the service, determining and parsing the configuration file corresponding to the service, and configuring the task queues, the thread pool and the database according to the information in the configuration file. The scheduler schedules the tasks into the thread pool sequentially according to queue priority. Node 1, node 2, …, node N are the N database nodes of the distributed thread pool system and provide data resources for the system.
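The scheduler's queue-priority behaviour can be sketched minimally as below. The priority order follows the claims (FIFO, then weight, then LIFO, then external, from high to low); the `QueueGroup` class and its field names are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

class QueueGroup:
    """Four task queues of one thread pool, drained in fixed priority order."""
    def __init__(self):
        self.fifo = deque()        # highest priority per the claims
        self.weight = []           # (weight, task) pairs, heaviest first
        self.lifo = []             # list used as a stack
        self.external = deque()    # key tasks kept for possible re-execution

    def pop_next(self):
        """Return the next task, honouring queue priority from high to low."""
        if self.fifo:
            return self.fifo.popleft()
        if self.weight:
            self.weight.sort(key=lambda t: t[0], reverse=True)
            return self.weight.pop(0)[1]       # highest weight first
        if self.lifo:
            return self.lifo.pop()             # last in, first out
        if self.external:
            return self.external.popleft()
        return None

q = QueueGroup()
q.lifo.extend(["t1", "t2"])
q.fifo.extend(["t3", "t4"])
order = [q.pop_next() for _ in range(4)]
print(order)  # FIFO drained first, then LIFO in reverse insertion order
```

Note that Fig. 4 labels the second queue "internal" while the claims call it a weight queue; the sketch uses the claims' naming.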
In summary, an embodiment of the present invention provides a service processing method based on a thread pool, including: receiving a service to be processed and parsing it into N tasks, N being greater than or equal to 1; acquiring a configuration file of the service to be processed according to its service type, the configuration file containing task queue information of the service and the task priority of each task parsed from it; placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority; and sequentially placing the N tasks into a thread pool according to the task queue priority. Because the N tasks parsed from a service are placed, according to its service type, into the task queue corresponding to that service, and the task queue has a priority, the tasks parsed from a higher-priority service can be processed first during scheduling; in addition, the tasks within a task queue can be ordered by priority, so that the important tasks of a service are processed first.
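The summarized flow can be illustrated with a hedged sketch: a service is parsed into tasks, each task's priority is looked up in the configuration for the service type, and the tasks are drained in priority order. `CONFIG`, `parse_service` and the priority numbers are invented stand-ins for the configuration file and parsing step described in the text.

```python
import heapq

# Hypothetical configuration file: per service type, the priority of each
# task that the service parses into (lower number = higher priority).
CONFIG = {
    "payment": {"task_priority": {"debit": 0, "notify": 2, "log": 3}},
}

def parse_service(service_type, payload):
    # In the patent the service is parsed according to its execution
    # statement; here we simply split the payload into named tasks.
    return payload.split(",")

def enqueue(service_type, payload):
    """Place the parsed tasks into a priority queue and drain them in order."""
    prio = CONFIG[service_type]["task_priority"]
    heap = []
    for task in parse_service(service_type, payload):
        heapq.heappush(heap, (prio.get(task, 99), task))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

ordered = enqueue("payment", "log,notify,debit")
print(ordered)  # the highest-priority task ("debit") is scheduled first
```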
Based on the same technical concept, an embodiment of the present invention provides a service processing apparatus based on a thread pool, which can implement the method described above. Fig. 5 is a schematic structural diagram of a service processing apparatus based on a thread pool according to an embodiment of the present invention. As shown in Fig. 5, a processing apparatus 500 includes a parsing module 501, an obtaining module 502, a configuration module 503 and a scheduling module 504, wherein:
the parsing module 501 is configured to receive a service to be processed and parse it into N tasks; N is greater than or equal to 1;
an obtaining module 502, configured to obtain a configuration file of the service to be processed according to the service type of the service to be processed; the configuration file comprises task queue information of the service to be processed and the task priority of each task parsed from the service to be processed;
a configuration module 503, configured to place the N tasks into task queues corresponding to the to-be-processed services according to task priorities of the tasks; the task queue has a priority;
and the scheduling module 504 is configured to sequentially place the N tasks into the thread pool according to the task queue priority.
Optionally, the configuration file also contains thread pool information of the service to be processed,
the configuration module 503 is further configured to:
determining, according to the thread pool information of the to-be-processed service, M threads in the thread pool corresponding to the to-be-processed service;
and associating the tasks in the task queue corresponding to the service to be processed with the M threads.
Optionally, the configuration file also contains database information,
a configuration module 503, further configured to:
connecting, according to the database information, the M threads to the database recorded in the database information.
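The thread and database configuration step might be sketched as follows, using Python's `ThreadPoolExecutor` and `sqlite3` purely as stand-ins for the patent's thread pool and the "database recorded in the database information"; the `config` keys (`threads`, `database`) are assumptions made for the example.

```python
import sqlite3
from concurrent.futures import ThreadPoolExecutor

# Hypothetical parsed configuration file: M threads for the service,
# plus the database the threads should connect to.
config = {"threads": 3, "database": ":memory:"}

def run_task(task_id):
    # Each worker opens its own connection to the configured database,
    # runs a trivial query, and reports success.
    conn = sqlite3.connect(config["database"])
    try:
        (one,) = conn.execute("SELECT 1").fetchone()
        return task_id, one
    finally:
        conn.close()

# M threads determined from the thread pool information of the service;
# the tasks of the service's queue are associated with exactly these workers.
with ThreadPoolExecutor(max_workers=config["threads"]) as pool:
    results = list(pool.map(run_task, range(5)))
print(results)  # every task reports a successful database round trip
```

Opening one connection per worker mirrors the text's idea that the M threads, not arbitrary threads, hold the service's database connections.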
Optionally, the task queue is divided into a first-in first-out (FIFO) queue, a weight queue, a last-in first-out (LIFO) queue and an external queue according to the priority from high to low;
the external queue is used for storing key tasks, and the key tasks are part or all of the N tasks;
the scheduling module 504 is further configured to:
receiving, for each task among the N tasks, callback information of the task;
judging whether the task is successfully executed or not according to the callback information;
and if the task fails to execute, determining the key task corresponding to the task in the external queue and re-executing the key task.
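The callback handling can be illustrated with a minimal sketch; the callback signature and the `external_queue` mapping from a task to its key task are assumptions made for the example, not the patent's data structures.

```python
# Hypothetical external queue: maps each task to the key task that should
# be re-executed if the task's callback reports failure.
external_queue = {"t1": "t1-key", "t2": "t2-key"}
executed = []

def execute(task):
    executed.append(task)

def on_callback(task, success):
    """Judge from the callback whether the task succeeded; on failure,
    re-execute the corresponding key task from the external queue."""
    if not success:
        key = external_queue.get(task, task)  # fall back to the task itself
        execute(key)
        return key
    return None

assert on_callback("t1", success=True) is None  # success: nothing to redo
retried = on_callback("t2", success=False)
print(retried, executed)  # the key task for t2 is re-executed
```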
Optionally, the thread pool is any one of thread pools in the thread pool system; any thread pool in the thread pool system can acquire database resources corresponding to other thread pools;
the scheduling module 504 is further configured to:
judging the execution efficiency of the thread pool according to the execution conditions of the N tasks;
if the execution efficiency of the thread pool is lower than a preset threshold value, acquiring a substitute thread pool from the thread pool system; the substitute thread pool is a thread pool with the processing speed higher than that of the thread pool in the thread pool system;
and transferring all or part of the unprocessed tasks among the N tasks to the task queue corresponding to the substitute thread pool.
In summary, embodiments of the present invention provide a service processing method and apparatus based on a thread pool, including: receiving a service to be processed and parsing it into N tasks, N being greater than or equal to 1; acquiring a configuration file of the service to be processed according to its service type, the configuration file containing task queue information of the service and the task priority of each task parsed from it; placing the N tasks into the task queue corresponding to the service according to the task priority of each task, the task queue having a priority; and sequentially placing the N tasks into a thread pool according to the task queue priority. Because the N tasks parsed from a service are placed, according to its service type, into the task queue corresponding to that service, and the task queue has a priority, the tasks parsed from a higher-priority service can be processed first during scheduling; in addition, the tasks within a task queue can be ordered by priority, so that the important tasks of a service are processed first.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A service processing method based on a thread pool is characterized by comprising the following steps:
receiving a service to be processed and parsing the service to be processed into N tasks; N is greater than or equal to 1;
acquiring a configuration file of the service to be processed according to the service type of the service to be processed; the configuration file comprises task queue information of the service to be processed and a task priority of each task parsed from the service to be processed;
according to the task priority of each task, putting the N tasks into a task queue corresponding to the service to be processed; the task queue has priority, and the task queue is divided into a first-in first-out (FIFO) queue, a weight queue, a last-in first-out (LIFO) queue and an external queue according to the priority from high to low;
sequentially putting the N tasks into a thread pool according to the task queue priority;
the external queue is used for storing key tasks, and the key tasks are part or all of the N tasks;
after the N tasks are sequentially placed in a thread pool according to the task queue priority, the method further includes:
for each task in the N tasks, receiving callback information of the task;
judging whether the task is successfully executed or not according to the callback information;
if the task fails to execute, determining a key task corresponding to the task in the external queue, and re-executing the key task, wherein the key task corresponding to the task in the external queue is the task itself or the executed key task closest to the task;
the configuration file also contains the thread pool information of the service to be processed,
according to the task queue priority, before the N tasks are sequentially placed in a thread pool, the method further includes:
determining M threads corresponding to the service to be processed in the thread pool according to the thread pool information of the service to be processed; the M threads are used for processing the N tasks.
2. The method of claim 1, wherein the configuration file further includes database information,
according to the task queue priority, before the N tasks are sequentially placed in a thread pool, the method further includes:
and connecting the M threads with a database recorded in the database information according to the database information.
3. The method of claim 1 or 2,
the thread pool is any one of thread pools in the thread pool system; any thread pool in the thread pool system can acquire database resources corresponding to other thread pools;
after the N tasks are sequentially placed in a thread pool according to the task queue priority, the method further includes:
judging the execution efficiency of the thread pool according to the execution conditions of the N tasks;
if the execution efficiency of the thread pool is lower than a preset threshold value, acquiring a substitute thread pool from the thread pool system; the alternative thread pool is a thread pool with a processing speed higher than that of the thread pool in the thread pool system;
and transferring all or part of the unprocessed tasks among the N tasks to the task queue corresponding to the substitute thread pool.
4. A service processing apparatus based on a thread pool, comprising:
the parsing module is used for receiving the service to be processed and parsing the service to be processed into N tasks; N is greater than or equal to 1;
the acquisition module is used for acquiring a configuration file of the service to be processed according to the service type of the service to be processed; the configuration file comprises task queue information of the service to be processed and a task priority of each task parsed from the service to be processed;
the configuration module is used for placing the N tasks into the task queue corresponding to the service to be processed according to the task priority of each task; the task queue has priority, and the task queue is divided into a first-in first-out (FIFO) queue, a weight queue, a last-in first-out (LIFO) queue and an external queue according to the priority from high to low;
the scheduling module is used for sequentially placing the N tasks into a thread pool according to the task queue priority;
the external queue is used for storing key tasks, and the key tasks are part or all of the N tasks;
the scheduling module is further configured to:
for each task in the N tasks, receiving callback information of the task;
judging whether the task is successfully executed or not according to the callback information;
if the task fails to execute, determining a key task corresponding to the task in the external queue, and re-executing the key task, wherein the key task corresponding to the task in the external queue is the task itself or the executed key task closest to the task;
the configuration file also contains the thread pool information of the service to be processed,
the configuration module is further to:
determining M threads corresponding to the service to be processed in the thread pool according to the thread pool information of the service to be processed;
and associating the tasks in the task queue corresponding to the service to be processed with the M threads.
5. The apparatus of claim 4, wherein the configuration file further comprises database information,
the configuration module is further configured to:
and connecting the M threads with a database recorded in the database information according to the database information.
6. The apparatus of claim 4 or 5,
the thread pool is any one of thread pools in the thread pool system; any thread pool in the thread pool system can acquire database resources corresponding to other thread pools;
the scheduling module is further configured to:
judging the execution efficiency of the thread pool according to the execution conditions of the N tasks;
if the execution efficiency of the thread pool is lower than a preset threshold value, acquiring a substitute thread pool from the thread pool system; the alternative thread pool is a thread pool with a processing speed higher than that of the thread pool in the thread pool system;
and transferring all or part of the unprocessed tasks among the N tasks to the task queue corresponding to the substitute thread pool.
CN201611209613.1A 2016-12-23 2016-12-23 Service processing method and device based on thread pool Active CN106802826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611209613.1A CN106802826B (en) 2016-12-23 2016-12-23 Service processing method and device based on thread pool


Publications (2)

Publication Number Publication Date
CN106802826A CN106802826A (en) 2017-06-06
CN106802826B true CN106802826B (en) 2021-06-18

Family

ID=58985750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611209613.1A Active CN106802826B (en) 2016-12-23 2016-12-23 Service processing method and device based on thread pool

Country Status (1)

Country Link
CN (1) CN106802826B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107491350B (en) * 2017-09-05 2018-08-10 武汉斗鱼网络科技有限公司 Interface task call method and device
CN108304473B (en) * 2017-12-28 2020-09-04 石化盈科信息技术有限责任公司 Data transmission method and system between data sources
CN110096344A (en) * 2018-01-29 2019-08-06 北京京东尚科信息技术有限公司 Task management method, system, server cluster and computer-readable medium
CN110321202A (en) * 2018-03-29 2019-10-11 优酷网络技术(北京)有限公司 Task processing method and device
CN108958933B (en) * 2018-06-27 2021-12-21 创新先进技术有限公司 Configuration parameter updating method, device and equipment of task executor
CN109189564A (en) * 2018-08-01 2019-01-11 北京奇虎科技有限公司 A kind of task processing method and device
CN110837401A (en) * 2018-08-16 2020-02-25 苏宁易购集团股份有限公司 Hierarchical processing method and device for java thread pool
CN109471731A (en) * 2018-11-21 2019-03-15 阿里巴巴集团控股有限公司 A kind of data processing, EMS memory management process, device, equipment and medium
CN109634653B (en) * 2018-11-30 2023-08-01 苏州朗润创新知识产权运营有限公司 Resource allocation method and device based on componentized architecture
CN109558255A (en) * 2018-12-13 2019-04-02 广东浪潮大数据研究有限公司 A kind of method and Task Processing Unit of task processing
CN109814994B (en) * 2019-01-03 2021-10-08 福建天泉教育科技有限公司 Method and terminal for dynamically scheduling thread pool
CN109857535B (en) * 2019-02-18 2021-06-11 国家计算机网络与信息安全管理中心 Spark JDBC-oriented task priority control implementation method and device
CN112288198A (en) * 2019-07-22 2021-01-29 北京车和家信息技术有限公司 Task processing system and method
CN110457124A (en) * 2019-08-06 2019-11-15 中国工商银行股份有限公司 For the processing method and its device of business thread, electronic equipment and medium
CN110457126A (en) * 2019-08-13 2019-11-15 杭州有赞科技有限公司 A kind of asynchronous invoking method and system
CN110532082A (en) * 2019-09-04 2019-12-03 厦门商集网络科技有限责任公司 A kind of task application device and method of task based access control predistribution
CN110647389B (en) * 2019-09-16 2023-04-07 北京镁伽机器人科技有限公司 Task processing method for automatic beverage machine, automatic beverage machine and storage medium
CN110851245A (en) * 2019-09-24 2020-02-28 厦门网宿有限公司 Distributed asynchronous task scheduling method and electronic equipment
CN111210288A (en) * 2019-12-26 2020-05-29 大象慧云信息技术有限公司 Tax control server-based invoicing batch invoicing job optimized scheduling method and system
CN111290846B (en) * 2020-02-26 2023-08-18 杭州涂鸦信息技术有限公司 Distributed task scheduling method and system
CN111552546B (en) * 2020-04-16 2021-07-16 贝壳找房(北京)科技有限公司 Task implementation method and device based on multithreading and storage medium
CN111737026A (en) * 2020-05-28 2020-10-02 苏州浪潮智能科技有限公司 Multithreading message processing method based on lookup operation
CN111813554A (en) * 2020-07-17 2020-10-23 济南浪潮数据技术有限公司 Task scheduling processing method and device, electronic equipment and storage medium
CN112659119A (en) * 2020-12-02 2021-04-16 广东博智林机器人有限公司 Control method and device of mechanical arm, electronic equipment and storage medium
CN112463331B (en) * 2020-12-02 2022-04-15 天津光电通信技术有限公司 Task scheduling optimization implementation method based on JAVA single thread pool
CN112685156A (en) * 2020-12-28 2021-04-20 北京五八信息技术有限公司 Task execution method and device, electronic equipment and computer readable medium
CN113434307A (en) * 2021-06-22 2021-09-24 北京沃东天骏信息技术有限公司 Task sending processing method, task processing method, device, system and equipment
CN113641517B (en) * 2021-08-10 2023-08-29 平安科技(深圳)有限公司 Service data transmitting method, device, computer equipment and storage medium
CN116560809A (en) * 2022-01-28 2023-08-08 腾讯科技(深圳)有限公司 Data processing method and device, equipment and medium
CN115037702B (en) * 2022-05-23 2024-04-12 北京梧桐车联科技有限责任公司 Message distribution and data transmission methods and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599027A (en) * 2009-06-30 2009-12-09 中兴通讯股份有限公司 A kind of thread pool management method and system thereof
CN102253860A (en) * 2011-07-13 2011-11-23 深圳市万兴软件有限公司 Asynchronous operation method and asynchronous operation management device
CN103870348A (en) * 2012-12-14 2014-06-18 中国电信股份有限公司 Test method and system for concurrent user access
CN104063279A (en) * 2013-03-20 2014-09-24 腾讯科技(深圳)有限公司 Task scheduling method and device and terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7080379B2 (en) * 2002-06-20 2006-07-18 International Business Machines Corporation Multiprocessor load balancing system for prioritizing threads and assigning threads into one of a plurality of run queues based on a priority band and a current load of the run queue
CN103559082A (en) * 2013-11-04 2014-02-05 北京华胜天成科技股份有限公司 Distributed task scheduling method, device and system based on queues
CN104407847B (en) * 2014-10-29 2019-05-07 中国建设银行股份有限公司 A kind of method and device of batch processing
CN104462370A (en) * 2014-12-09 2015-03-25 北京百度网讯科技有限公司 Distributed task scheduling system and method


Also Published As

Publication number Publication date
CN106802826A (en) 2017-06-06

Similar Documents

Publication Publication Date Title
CN106802826B (en) Service processing method and device based on thread pool
WO2020211579A1 (en) Processing method, device and system for distributed bulk processing system
CN113535367B (en) Task scheduling method and related device
CN111399989B (en) Container cloud-oriented task preemption and scheduling method and system
CN111125444A (en) Big data task scheduling management method, device, equipment and storage medium
CN111338791A (en) Method, device and equipment for scheduling cluster queue resources and storage medium
CN107589990B (en) Data communication method and system based on thread pool
CN107818012B (en) Data processing method and device and electronic equipment
CN110221914B (en) File processing method and device
CN113626173B (en) Scheduling method, scheduling device and storage medium
CN113051049B (en) Task scheduling system, method, electronic device and readable storage medium
CN105786917B (en) Method and device for concurrent warehousing of time series data
CN113485814A (en) Batch task scheduling method and device
CN111767125B (en) Task execution method, device, electronic equipment and storage medium
CN112860401A (en) Task scheduling method and device, electronic equipment and storage medium
CN105446812A (en) Multitask scheduling configuration method
CN114896295B (en) Data desensitization method, desensitization device and desensitization system in big data scene
CN109829005A (en) A kind of big data processing method and processing device
CN115981808A (en) Scheduling method, scheduling device, computer equipment and storage medium
CN115344370A (en) Task scheduling method, device, equipment and storage medium
CN115220887A (en) Processing method of scheduling information, task processing system, processor and electronic equipment
CN110990139B (en) SMP scheduling method and system based on RTOS
CN113806055A (en) Lightweight task scheduling method, system, device and storage medium
CN113254143A (en) Virtual network function network element arranging and scheduling method, device and system
CN111258728A (en) Task execution method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant