CN116719626A - Multithreading parallel processing method and processing system for splitting mass data - Google Patents


Info

Publication number
CN116719626A
CN116719626A (application CN202310962197.6A)
Authority
CN
China
Prior art keywords
list
subtask
data
total task
task list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310962197.6A
Other languages
Chinese (zh)
Other versions
CN116719626B (en)
Inventor
葛辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Cinsoft Technology Co ltd
Original Assignee
Chengdu Cinsoft Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Cinsoft Technology Co ltd filed Critical Chengdu Cinsoft Technology Co ltd
Priority to CN202310962197.6A priority Critical patent/CN116719626B/en
Publication of CN116719626A publication Critical patent/CN116719626A/en
Application granted granted Critical
Publication of CN116719626B publication Critical patent/CN116719626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a multithreaded parallel processing method and processing system for splitting mass data. A total task list is constructed, and all data for executing the total task is imported into it; a plurality of subtask lists are generated based on the total task list, and the data is split and divided among the subtask lists according to list attributes. A thread pool is constructed, and idle threads are allocated to execute the total task and the subtasks respectively. The initial value of a signal gun (a countdown latch) comprising a countdown counter is set based on the number of allocated idle threads, and a countDown call is added at the end of each subtask list so that the counter value changes once all data in that list has been processed. Whether all tasks are complete is then judged from the counter value. This optimizes the response time of operations on mass data within a program, makes full use of system resources, shortens the program's response time, and improves the user's experience of the software.

Description

Multithreading parallel processing method and processing system for splitting mass data
Technical Field
The application relates to the field of mass data processing, in particular to a multithreading parallel processing method for splitting mass data and a processing system thereof.
Background
In Java services, the volume of data determines the choice of processing method. A small amount of data, say 10 records, can be handled manually, one record at a time, and even hundreds of records could still be handled this way. Once the data reaches tens of millions of records, however, manual handling is no longer feasible, and processing such mass data with a tool or program is correspondingly more complex. A dedicated programmatic approach is therefore needed to deal effectively and reasonably with the complexity and long processing times involved.
At present, when mass data is imported, queried, deleted, and so on by only a single thread, the response time of a network request is long and the user experience is poor. This is because a single thread executes the program path strictly in sequence: the earlier part must finish before the later part can run.
In summary, in the prior art the processing response time for mass data is long and the utilization of system resources is low.
Disclosure of Invention
In view of the above, the present application provides a multithreading parallel processing method for splitting mass data and a processing system thereof, which aims to solve all or part of the above technical problems.
In order to solve the above technical problems, the technical solution of the application is to provide a multithreading parallel processing method for splitting mass data, comprising the following steps:
constructing a total task list, and importing all data for executing the total task into the total task list;
generating a plurality of subtask lists based on the total task list, splitting all data, and dividing the data into the subtask lists;
constructing a thread pool, and distributing idle threads in the thread pool to respectively execute a total task in a total task list and subtasks in each subtask list, wherein the idle threads are in one-to-one correspondence with the task lists;
setting an initial value of a signal gun (a countdown latch, such as Java's CountDownLatch) based on the number of allocated idle threads, wherein the signal gun comprises a countdown counter;
adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in that subtask list is completed;
judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
Optionally, the generating a plurality of subtask lists based on the total task list, splitting the whole data, and dividing the whole data into subtask lists includes:
determining a total task list attribute based on the total task list, wherein the total task list attribute at least comprises one of a business logic attribute and a server performance;
determining the subtask list attribute according to the total task list attribute, and generating subtask lists with the corresponding number of the subtask list attribute;
and dividing all the data into corresponding subtask lists according to the subtask list attributes.
Optionally, the subtask list attributes may be the same as or different from one another. Specifically,
when the business logic attribute is adopted as the total task list attribute, the subtask list attributes differ from one another;
when server performance is adopted as the total task list attribute, the subtask list attributes are the same.
Optionally, the step of adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed, includes:
adding a countDown call at the end of each subtask list;
processing the task data in each subtask list simultaneously by the idle threads corresponding to the subtask lists;
and, when an idle thread finishes processing all the data in its subtask list, the countDown call triggering the value of the countdown counter to decrease by 1.
Optionally, the method for judging whether the value of the countdown counter in the signal gun is 0 includes:
calling the await method from the idle thread corresponding to the total task list to judge whether the value of the countdown counter in the signal gun is 0.
Optionally, after the judging whether the value of the countdown counter in the signal gun is 0, the method further includes:
if the value of the countdown counter is not 0, blocking the idle thread corresponding to the total task list until the idle threads corresponding to the subtask lists have all completed.
Optionally, an idle thread that has completed all data processing for its subtask list returns to the thread pool, and task allocation by the thread pool is performed again.
Correspondingly, the application also provides a multithreaded parallel processing system for splitting mass data, comprising:
the total task list module is used for constructing a total task list and importing all data for executing the total task into the total task list;
the data splitting module is used for generating a plurality of subtask lists based on the total task list, splitting all the data and dividing the data into the subtask lists;
the thread pool construction module is used for constructing a thread pool and distributing idle threads in the thread pool to respectively execute the total tasks in the total task list and the subtasks in each subtask list, wherein the idle threads are in one-to-one correspondence with the task lists;
the signal gun setting module is used for setting an initial value of a signal gun (a countdown latch) based on the number of allocated idle threads, the signal gun comprising a countdown counter;
the countdown adding module is used for adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed;
and the value judgment module is used for judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
Correspondingly, the application also provides a storage medium storing a computer program which, when executed, implements the above multithreaded parallel processing method for splitting mass data.
Correspondingly, the application also provides a computer device comprising a central processing unit and a memory, the memory storing a computer program which, when executed by the central processing unit, implements the above multithreaded parallel processing method for splitting mass data.
The application has the following advantages: a total task list is constructed and all data for executing the total task is imported into it; a plurality of subtask lists are generated based on the total task list, and the data is split and divided among them according to list attributes; a thread pool is then constructed and idle threads are allocated to execute the total task and the subtasks respectively; the initial value of a signal gun comprising a countdown counter is set based on the number of allocated idle threads; a countDown call is added at the end of each subtask list, so that the counter value changes once all data processing is complete; and whether all tasks are complete is judged from the counter value. This optimizes the response time of operations on mass data within a program, makes full use of system resources, shortens the program's response time, and improves the user's experience of the software.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of steps of a multi-threaded parallel processing method for splitting mass data according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a multi-threaded parallel processing system for splitting mass data according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the embodiments of the present application, a technical solution in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, a schematic diagram of steps of a multi-threaded parallel processing method for splitting mass data according to an embodiment of the present application is provided,
S11, constructing a total task list, and importing all data for executing the total task into the total task list.
First, a total task list is constructed, and operations on the mass data, such as importing and querying, are treated as one integral total task and imported into the total task list.
S12, generating a plurality of subtask lists based on the total task list, and splitting all data into the subtask lists.
determining a total task list attribute based on the total task list, wherein the total task list attribute at least comprises one of a business logic attribute and a server performance; determining the subtask list attribute according to the total task list attribute, and generating subtask lists with the corresponding number of the subtask list attribute; and dividing all the data into corresponding subtask lists according to the subtask list attributes.
Further, when the business logic attribute is adopted as the total task list attribute, the subtask list attribute is a different task list attribute; when server performance is employed as the overall task list attribute, the subtask list attribute is the same task list attribute.
Further, for example, the total task of calculating the shipping cost of a product order can be divided into the product's shipping fee, the shipping insurance, and the remote-area surcharge. Here "product order shipping cost" is the total task list attribute, and the subtask list attributes differ from one another, being the shipping fee, the shipping insurance, and the remote-area surcharge respectively.
Further, for example, for a total task of batch-loading big data into a database, the maximum throughput of a single server thread can be estimated from the server's memory size. If one thread can process at most 10000 rows at a time, the total task can be split using 10000 rows as the standard, finally yielding n subtasks of 10000 rows each. Here "batch data loading" is the total task list attribute and "10000 rows" is the subtask list attribute.
Further, to ensure data integrity, the total amount of data after splitting must equal the total amount of data in the overall task. Taking batch loading of big data as the total task again, the maximum throughput of one server thread is estimated from the server's memory size; if one thread processes at most 10000 rows at a time, the total task is split with 10000 rows as the standard. If the total row count is evenly divisible by 10000, the task is divided into (total row count / 10000) subtasks; if not, it is divided into (the integer part of total row count / 10000) + 1 subtasks, so that the remaining fewer-than-10000 rows form their own subtask and the integrity of the data is preserved.
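As a concrete illustration of this splitting rule, the following Java sketch (class and method names are my own, not from the patent) computes the subtask count by ceiling division and partitions a data list into chunks of at most a given size:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper illustrating the splitting rule described above.
public class TaskSplitter {

    // (total / size) subtasks if it divides evenly, otherwise
    // integer-part(total / size) + 1 -- i.e. ceiling division.
    static int subtaskCount(int totalRows, int chunkSize) {
        return (totalRows + chunkSize - 1) / chunkSize;
    }

    // Split the full data list into sublists of at most chunkSize rows.
    // The sublist sizes sum to the original size, so no data is lost.
    static <T> List<List<T>> split(List<T> all, int chunkSize) {
        List<List<T>> subtaskLists = new ArrayList<>();
        for (int i = 0; i < all.size(); i += chunkSize) {
            subtaskLists.add(all.subList(i, Math.min(i + chunkSize, all.size())));
        }
        return subtaskLists;
    }
}
```

For 25000 rows and a 10000-row standard, this yields three subtask lists of 10000, 10000, and 5000 rows, matching the "remainder forms its own subtask" rule above.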
S13, constructing a thread pool, and distributing idle threads in the thread pool to respectively execute the total tasks in the total task list and the subtasks in each subtask list, wherein the idle threads correspond to the task lists one by one;
in Java programs, a thread has a life cycle. Under a user request, a Java process allocates a thread to the request to do so until the thread has completed executing the request, and then dies. If the task is in the thread pool, an idle thread is taken from the pool to be executed, and after the execution is completed, the thread is put back into the pool to wait for the next request.
That is, one main thread executes the total task in the total task list, while the subtasks in each subtask list generated from it are executed by other idle threads in the thread pool; single-threaded processing thus becomes multithreaded processing. Each idle thread performs all the data processing for one corresponding subtask list: within a single subtask list the corresponding idle thread works single-threaded, while from the perspective of the total task list it handles only one of the subtasks and is one thread among many.
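A minimal sketch of this arrangement, assuming a fixed pool of four threads and a trivial per-subtask workload (both illustrative choices, not specified by the patent):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: the calling thread plays the role of the main
// thread owning the total task; each subtask list is handed to an idle
// pool thread, which returns to the pool once its subtask finishes.
public class PoolSketch {
    static int processAll(List<List<Integer>> subtaskLists) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> partials = new ArrayList<>();
        for (List<Integer> subtask : subtaskLists) {
            // one idle thread per subtask list (step S13)
            partials.add(pool.submit(() -> subtask.size()));
        }
        int total = 0;
        for (Future<Integer> f : partials) {
            total += f.get(); // main thread gathers each subtask result
        }
        pool.shutdown();
        return total;
    }
}
```

Here the per-subtask work is simply counting rows; in the patent's setting it would be the import, query, or other mass-data operation for that chunk.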
S14, setting an initial value of a signal gun based on the number of allocated idle threads, wherein the signal gun comprises a countdown counter;
the total number of subtask lists is obtained based on the number of the allocated space threads, and the total number of threads is set into the countdown by taking the countdown as a signal gun, so that a signal gun containing the total number is obtained.
S15, adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed;
after obtaining the signal gun containing the total number, the thread pool distributes the idle thread to execute the subtasks in the corresponding subtask list, and a countDown method is added after the tail byte of each subtask, and the counter in the signal gun is correspondingly decremented by 1 no matter whether the subtask execution is successful or failed. The countDown is used to decrease the counter in the signal gun by 1.
If a subtask fails to execute, it can be compensated for by first collecting the failed subtasks and then manually re-running them.
S16, judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
Because the subtasks are completed by idle threads in the thread pool, and the main thread can only continue once all subtasks have finished, the await method is called in the main thread. The await method judges whether the counter value in the CountDownLatch signal gun is 0; if not, the main thread is blocked and waits for all subtask threads to finish. When the last subtask completes, await finds the CountDownLatch value to be 0, meaning all subtasks have been executed, and the main thread continues executing.
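Putting steps S13–S16 together, a compact end-to-end sketch with java.util.concurrent.CountDownLatch looks roughly as follows (the pool size, the summing workload, and all names are illustrative assumptions):

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchFlow {
    static int run(List<List<Integer>> subtaskLists) throws InterruptedException {
        // S14: latch initial value = number of subtask threads
        CountDownLatch latch = new CountDownLatch(subtaskLists.size());
        ExecutorService pool = Executors.newFixedThreadPool(subtaskLists.size());
        ConcurrentLinkedQueue<Integer> partials = new ConcurrentLinkedQueue<>();
        for (List<Integer> subtask : subtaskLists) {
            pool.submit(() -> {
                try {
                    // the subtask: process (here, sum) this list's rows
                    partials.add(subtask.stream().mapToInt(Integer::intValue).sum());
                } finally {
                    latch.countDown(); // S15: decrement at the end of the subtask
                }
            });
        }
        latch.await(); // S16: main thread blocks until the counter reaches 0
        pool.shutdown();
        // summarize the subtask results into the total task result
        return partials.stream().mapToInt(Integer::intValue).sum();
    }
}
```

While the counter is nonzero, await() blocks the calling thread exactly as described above; only when the last subtask calls countDown() does the main thread resume and summarize the results.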
This embodiment provides a multithreaded parallel processing method for splitting mass data: a total task list is constructed and all data for executing the total task is imported into it; a plurality of subtask lists are generated based on the total task list, and the data is split and divided among them according to list attributes; a thread pool is constructed, and idle threads in the pool are allocated to execute the total task in the total task list and the subtasks in each subtask list, the idle threads corresponding one-to-one with the task lists; the initial value of a signal gun comprising a countdown counter is set based on the number of allocated idle threads; a countDown call is added at the end of each subtask list, so that the counter value changes after all data processing in that list is complete; and whether the countdown counter value in the signal gun is 0 is judged, a value of 0 meaning all tasks in each subtask list are complete, whereupon the subtask completion results are summarized into the total task list to obtain the total task result. This optimizes the response time of operations on mass data within a program, makes full use of system resources, shortens the program's response time, and improves the user's experience of the software.
Correspondingly, the application also provides a multithreaded parallel processing system for splitting mass data, whose structure is shown schematically in fig. 2, comprising:
the total task list module is used for constructing a total task list and importing all data for executing the total task into the total task list;
the data splitting module is used for generating a plurality of subtask lists based on the total task list, splitting all data and dividing the data into the subtask lists;
the thread pool construction module is used for constructing a thread pool and distributing idle threads in the thread pool to respectively execute the total tasks in the total task list and the subtasks in each subtask list, wherein the idle threads correspond to the task lists one by one;
the signal gun setting module is used for setting an initial value of a signal gun based on the number of allocated idle threads, the signal gun comprising a countdown counter;
the countdown adding module is used for adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed;
and the value judgment module is used for judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
Correspondingly, the application also provides a storage medium storing a computer program which, when executed, implements the multithreaded parallel processing method for splitting mass data of any of the above embodiments.
Correspondingly, the application also provides a computer device, as shown in fig. 3, comprising a central processing unit S1001 and a memory S1002, the memory storing a computer program which, when executed by the central processing unit, implements the multithreaded parallel processing method for splitting mass data of any of the above embodiments.
The multithreaded parallel processing method and processing system for splitting mass data provided by the embodiments of the application have been described above. In this description, each embodiment is described progressively, with emphasis on its differences from the other embodiments; for the parts the embodiments have in common, they may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is relatively brief, and the relevant points can be found in the description of the method. It should be noted that various modifications and adaptations can be made by those skilled in the art without departing from the principles of the application, and such modifications and adaptations are intended to fall within the scope of the appended claims.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Claims (10)

1. The multithreading parallel processing method for splitting mass data is characterized by comprising the following steps of:
constructing a total task list, and importing all data for executing the total task into the total task list;
generating a plurality of subtask lists based on the total task list, splitting all data, and dividing the data into the subtask lists;
constructing a thread pool, and distributing idle threads in the thread pool to respectively execute a total task in a total task list and subtasks in each subtask list, wherein the idle threads are in one-to-one correspondence with the task lists;
setting an initial value of a signal gun based on the number of allocated idle threads, wherein the signal gun comprises a countdown counter;
adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in that subtask list is completed;
judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
2. The method for multi-threaded parallel processing of mass data splitting as recited in claim 1, wherein generating a plurality of subtask lists based on the total task list and splitting the entire data into subtask lists comprises:
determining a total task list attribute based on the total task list, wherein the total task list attribute at least comprises one of a business logic attribute and a server performance;
determining the subtask list attribute according to the total task list attribute, and generating subtask lists with the corresponding number of the subtask list attribute;
and dividing all the data into corresponding subtask lists according to the subtask list attributes.
3. The method of claim 2, wherein the subtask list attributes may be the same as or different from one another; specifically,
when the business logic attribute is adopted as the total task list attribute, the subtask list attributes differ from one another;
when server performance is adopted as the total task list attribute, the subtask list attributes are the same.
4. The method for multithreaded parallel processing of mass data splitting as recited in claim 1, wherein said adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed, comprises:
adding a countDown call at the end of each subtask list;
processing the task data in each subtask list simultaneously by the idle threads corresponding to the subtask lists;
and, when an idle thread finishes processing all the data in its subtask list, the countDown call triggering the value of the countdown counter to decrease by 1.
5. The method for multithreaded parallel processing of mass data splitting of claim 1, wherein the method for judging whether the countdown counter value in the signal gun is 0 comprises:
calling the await method from the idle thread corresponding to the total task list to judge whether the value of the countdown counter in the signal gun is 0.
6. The method for multithreaded parallel processing of mass data splitting as recited in claim 1, further comprising, after said judging whether the countdown counter value in the signal gun is 0:
if the value of the countdown counter is not 0, blocking the idle thread corresponding to the total task list until the idle threads corresponding to the subtask lists have all completed.
7. The multithreaded parallel processing method of mass data splitting according to claim 1, wherein an idle thread that has completed all data processing for its subtask list returns to the thread pool, and task allocation by the thread pool is performed again.
8. The multithreaded parallel processing system for splitting mass data is characterized by comprising the following components:
the total task list module is used for constructing a total task list and importing all data for executing the total task into the total task list;
the data splitting module is used for generating a plurality of subtask lists based on the total task list, splitting all the data and dividing the data into the subtask lists;
the thread pool construction module is used for constructing a thread pool and distributing idle threads in the thread pool to respectively execute the total tasks in the total task list and the subtasks in each subtask list, wherein the idle threads are in one-to-one correspondence with the task lists;
the signal gun setting module, used for setting an initial value of a signal gun based on the number of allocated idle threads, the signal gun comprising a countdown counter;
the countdown adding module, used for adding a countDown call at the end of each subtask list, so that the value of the countdown counter changes after all data processing in each subtask list is completed;
and the value judgment module, used for judging whether the countdown counter value in the signal gun is 0; if it is 0, all tasks in each subtask list are complete, and the subtask completion results in each subtask list are summarized into the total task list to obtain the total task result.
9. A storage medium storing a computer program which, when executed, implements the multithreaded parallel processing method for splitting mass data according to any one of claims 1-7.
10. A computer device comprising a central processing unit and a memory, wherein the memory stores a computer program which, when executed by the central processing unit, implements the multithreaded parallel processing method for splitting mass data according to any one of claims 1-7.
CN202310962197.6A 2023-08-02 2023-08-02 Multithreading parallel processing method and processing system for splitting mass data Active CN116719626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310962197.6A CN116719626B (en) 2023-08-02 2023-08-02 Multithreading parallel processing method and processing system for splitting mass data

Publications (2)

Publication Number Publication Date
CN116719626A true CN116719626A (en) 2023-09-08
CN116719626B CN116719626B (en) 2023-11-03

Family

ID=87869981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310962197.6A Active CN116719626B (en) 2023-08-02 2023-08-02 Multithreading parallel processing method and processing system for splitting mass data

Country Status (1)

Country Link
CN (1) CN116719626B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140173611A1 (en) * 2012-12-13 2014-06-19 Nvidia Corporation System and method for launching data parallel and task parallel application threads and graphics processing unit incorporating the same
CN105224289A (en) * 2014-07-03 2016-01-06 阿里巴巴集团控股有限公司 A kind of action message matching process and equipment
CN109582455A (en) * 2018-12-03 2019-04-05 恒生电子股份有限公司 Multithreading task processing method, device and storage medium
CN110018892A (en) * 2019-03-12 2019-07-16 平安普惠企业管理有限公司 Task processing method and relevant apparatus based on thread resources
CN116069461A (en) * 2022-12-06 2023-05-05 兴业银行股份有限公司 Adaptive task scheduling method and system for dynamic slicing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHE MA 等: "Hierarchical task scheduler for interleaving subtasks on heterogeneous multiprocessor platforms", 《PROCEEDINGS OF THE ASP-DAC 2005. ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, 2005》, pages 952 - 955 *
WANG Jian et al.: "An Improved Implementation Method of Grid Job Management", Microelectronics & Computer, pages 1 - 3 *

Also Published As

Publication number Publication date
CN116719626B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
US10545789B2 (en) Task scheduling for highly concurrent analytical and transaction workloads
WO2020211579A1 (en) Processing method, device and system for distributed bulk processing system
CN109582455B (en) Multithreading task processing method and device and storage medium
CN106802826B (en) Service processing method and device based on thread pool
KR101600129B1 (en) Application efficiency engine
US8010972B2 (en) Application connector parallelism in enterprise application integration systems
US7818743B2 (en) Logging lock data
CN109814994B (en) Method and terminal for dynamically scheduling thread pool
CN113467933B (en) Distributed file system thread pool optimization method, system, terminal and storage medium
CN109471711B (en) Task processing method and device
CN111625331A (en) Task scheduling method, device, platform, server and storage medium
Zhong et al. Speeding up Paulson’s procedure for large-scale problems using parallel computing
US20030028640A1 (en) Peer-to-peer distributed mechanism
CN116010064A (en) DAG job scheduling and cluster management method, system and device
CN113157411A (en) Reliable configurable task system and device based on Celery
CN116719626B (en) Multithreading parallel processing method and processing system for splitting mass data
CN112711470A (en) Method for cluster parallel processing of multiple tasks
CN114168594A (en) Secondary index creating method, device, equipment and storage medium of horizontal partition table
US20110191775A1 (en) Array-based thread countdown
CN112380024B (en) Thread scheduling method based on distributed counting
CN115934272A (en) Online batch task processing method and device
CN113419836B (en) Task processing method and device, electronic equipment and computer readable storage medium
CN106815061B (en) Service processing method and device
CN114385227A (en) Service processing method, device, equipment and storage medium
CN116755868B (en) Task processing system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant