CN109739629B - System multithreading scheduling method and device - Google Patents


Info

Publication number
CN109739629B
CN201811640317A (application) · CN109739629B (publication)
Authority
CN
China
Prior art keywords
processing
file
processing file
processed
files
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811640317.6A
Other languages
Chinese (zh)
Other versions
CN109739629A (en
Inventor
黄自力
杨阳
陈舟
熊璐
胡景秀
Current Assignee
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN201811640317.6A
Publication of CN109739629A
Application granted
Publication of CN109739629B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a system multithreading scheduling method and device, wherein the method comprises the following steps: traversing an ordered processing file group from largest to smallest, and comparing the processing files in the ordered processing file group with a preset system memory in turn; if a processing file is larger than the preset system memory, processing the processing file in a single thread; and if a processing file is smaller than the preset system memory, processing the processing file while determining, from the unprocessed processing files, further processing files smaller than the remaining preset system memory. By sorting the files to be processed by size, the method and device keep multithreaded processing of the files within the sustainable range of the preset system memory, thereby improving the efficiency with which the system processes files.

Description

System multithreading scheduling method and device
Technical Field
The present disclosure relates to the field of multithreading system application technologies, and in particular, to a system multithreading scheduling method and device.
Background
In multithreaded system applications, a series of tasks needs to be processed. For example, a bank generates a large volume of transaction records daily, which form clearing files; when the correctness of the clearing files is verified by comparison, the ordered file group must be processed concurrently by multiple threads. To improve processing efficiency, a highly configured server can raise efficiency on the hardware side, but beyond hardware, good algorithms and models are also needed to make the operation more efficient. In the prior art, many problems arise when a system processes a series of files. For example, if some files exceed the preset system memory and more files keep being read in, a system-level interruption can occur, and even abnormal conditions such as system deadlock or system crash. For massive numbers of files, large files take far longer to process than small ones; if files are processed in random order, system resources may be left idle near the end of processing, reducing overall efficiency.
Disclosure of Invention
The embodiments of the application provide a system multithreading scheduling method and device, which solve the prior-art problems of low efficiency when a system processes files to be processed, and of abnormal conditions, such as system deadlock or system crash, arising during processing.
In a first aspect, a method for scheduling multiple threads in a system is provided, including:
traversing an ordered processing file group from largest to smallest, and comparing the processing files in the ordered processing file group with a preset system memory in turn;
if a processing file is larger than the preset system memory, processing the processing file in a single thread;
and if a processing file is smaller than the preset system memory, processing the processing file while determining, from the unprocessed processing files, further processing files smaller than the remaining preset system memory.
The system multithreading scheduling method provided by the application processes multiple tasks efficiently, ensures that the tasks are handled by multiple threads within the sustainable range of the preset system memory, and maximizes system utilization. When systems with the same memory process the same files, the method keeps processing time short and processing efficiency high while the system continues to run normally.
Optionally, the method further comprises:
comparing each file to be processed with a preset threshold of the largest file the system can process, and determining a processing file group;
and sorting the processing files in the processing file group by size, and determining the ordered processing file group.
Before any file is processed, the method judges whether it is within the range the system can bear. In this way, normal operation of the system is ensured, and abnormal conditions such as system deadlock or system crash caused by processing oversized files are prevented.
Optionally, the determining of processing files smaller than the remaining preset system memory comprises:
comparing all unprocessed processing files with the remaining preset system memory in turn;
if a first processing file among the unprocessed processing files is larger than the remaining preset system memory, skipping the first processing file without processing it;
and if the first processing file among the unprocessed processing files is smaller than the remaining preset system memory, processing the first processing file.
Optionally, the method further comprises:
when the processing files are not yet fully processed and a new processing file is generated, acquiring preset category information of the new processing file;
and splicing, based on the preset category information, the new processing file with the not-yet-processed file of the same category, to determine a spliced processing file.
Optionally, splicing the new processing file with the same-category file among the not-yet-processed files, based on the preset category information, to determine a spliced processing file comprises:
judging whether the preset category of the new processing file is the same as the category of a processing file currently being processed;
if the preset category of the new processing file is the same as the category of a processing file currently being processed, splicing the new processing file with the processing file being processed;
and if the preset category of the new processing file differs from the categories of the processing files being processed, splicing the new processing file with the unprocessed processing file of the same category.
By splicing processing files in this way, newly generated processing files are guaranteed to be processed, and the overall efficiency with which the system processes files is improved.
In a second aspect, a system multithreading scheduling device is provided, the device comprising:
a comparison module, configured to traverse an ordered processing file group from largest to smallest and compare the processing files in the ordered processing file group with a preset system memory in turn;
a first processing module, configured to process a processing file in a single thread if the processing file is larger than the preset system memory;
and a second processing module, configured to, if a processing file is smaller than the preset system memory, process the processing file while determining, from the unprocessed processing files, further processing files smaller than the remaining preset system memory.
Optionally, the apparatus further includes:
a sorting module, configured to compare each file to be processed with a preset threshold of the largest file the system can process to determine a processing file group, and to sort the processing files in the processing file group by size to determine the ordered processing file group.
Optionally, the second processing module includes:
a comparison unit, configured to compare all unprocessed processing files with the remaining preset system memory in turn;
a first processing unit, configured to skip a first processing file among the unprocessed processing files, without processing it, if the first processing file is larger than the remaining preset system memory;
and a second processing unit, configured to process the first processing file if it is smaller than the remaining preset system memory.
Optionally, the apparatus further includes:
a splicing module, configured to acquire preset category information of a new processing file when the processing files are not yet fully processed and the new processing file is generated, and to splice, based on the preset category information, the new processing file with the not-yet-processed file of the same category to determine a spliced processing file.
Optionally, the splicing module is further configured to:
judging whether the preset category of the new processing file is the same as the category of a processing file currently being processed;
if the preset category of the new processing file is the same as the category of a processing file currently being processed, splicing the new processing file with the processing file being processed;
and if the preset category of the new processing file differs from the categories of the processing files being processed, splicing the new processing file with the unprocessed processing file of the same category.
In a third aspect, embodiments of the application further provide a computer storage medium, wherein:
the computer readable storage medium comprises a computer program which, when run on a computer, causes the computer to perform the method of the first aspect described above.
In a fourth aspect, embodiments of the application further provide a computer program product comprising instructions, wherein:
the instructions, when executed on a computer, cause the computer to perform the method of the first aspect described above.
Drawings
FIG. 1 is a flowchart of a system multithreading scheduling method according to an embodiment of the application;
FIG. 2 is a flowchart of the file processing steps according to an embodiment of the application;
FIG. 3 is a schematic diagram of the file processing times in the first embodiment of the application;
FIG. 4 is a schematic diagram of the file processing times in the second embodiment of the application;
FIG. 5 is a schematic diagram of a system multithreading scheduling apparatus according to an embodiment of the application.
Detailed Description
In the prior art, a system processes files to be processed inefficiently, and abnormal conditions such as system deadlock or system crash can arise during processing. The embodiments of the application provide the following solution.
To solve these problems, the general idea of the embodiments of the invention is as follows:
When the system has many files to be processed, all of the files are first compared with a preset threshold of the largest file the system can process, and the processing files that can be handled while the system runs normally are screened out. These processing files are then sorted by size to determine an ordered processing file group, where the size of a processing file refers to the size of the content it contains. After the ordered processing file group is determined, it is traversed from largest to smallest, and the processing files in it are compared with a preset system memory in turn. If a processing file is larger than the preset system memory, it is processed in a single thread. If it is smaller, it is processed while further processing files smaller than the remaining preset system memory are selected from the unprocessed files, until the preset system memory is fully occupied or no processing file smaller than the remaining preset system memory exists, at which point the system stops comparing and waits.
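As a rough illustration of the general idea above, the following Python sketch packs an ordered file group into memory-limited processing waves. The names `greedy_schedule`, `sizes`, and `memory` are illustrative assumptions, not from the patent, and the patent's concurrent start-as-memory-frees behavior is simplified into discrete waves:

```python
def greedy_schedule(sizes, memory):
    """Greedy packing sketch: traverse files from largest to smallest,
    run an oversized file alone (the single-thread case), and otherwise
    fill the leftover memory with the largest files that still fit."""
    pending = sorted(sizes, reverse=True)   # the ordered processing file group
    waves = []
    while pending:
        first = pending.pop(0)
        if first > memory:
            waves.append([first])           # larger than preset memory: single thread
            continue
        wave, free = [first], memory - first
        i = 0
        while i < len(pending):             # scan remaining files for ones that fit
            if pending[i] <= free:
                free -= pending[i]
                wave.append(pending.pop(i))
            else:
                i += 1                      # too big for remaining memory: skip it
        waves.append(wave)
    return waves
```

With the file sizes of embodiment one (6, 5, 2, 3) and memory 8, this sketch yields the waves [[6, 2], [5, 3]], matching the grouping A with C, then B with D, described in the embodiments.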
The system multithreading scheduling method provided by the application processes multiple tasks efficiently, ensures that the tasks are handled by multiple threads within the sustainable range of the preset system memory, and maximizes system utilization. When systems with the same memory process the same files, the method keeps processing time short and processing efficiency high while the system continues to run normally.
As shown in fig. 1, the specific implementation steps of the system multithreading scheduling method provided in the embodiment of the application are as follows:
Before the ordered processing file group is traversed, all processing files are first sorted by size to determine the ordered processing file group, as follows:
a: comparing each file to be processed with a preset threshold of the largest file the system can process, and determining a processing file group. The files the system can process are screened in, while files larger than the threshold of the largest processable file are set aside unprocessed, avoiding the system abnormality that processing an oversized file would cause and ensuring normal operation of the system.
b: sorting the processing files in the processing file group by size, and determining the ordered processing file group. The sorting method is not limited in the embodiments of the application; for example, a dynamic sorting algorithm based on a red-black tree may be selected to sort the processing files and determine the ordered processing file group.
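Python's standard library has no red-black tree, so the following sketch stands in for the dynamic ordered structure with a `bisect`-maintained sorted list; the class name `OrderedFileGroup` and its methods are illustrative assumptions only:

```python
import bisect

class OrderedFileGroup:
    """Stand-in for a balanced-tree ordered structure: keeps file sizes
    sorted so new files can be inserted while preserving the ordering."""

    def __init__(self, sizes=()):
        self._sizes = sorted(sizes)         # ascending by size

    def insert(self, size):
        bisect.insort(self._sizes, size)    # insert at the sorted position

    def descending(self):
        return list(reversed(self._sizes))  # traversal from large to small
```

A real red-black tree would make `insert` O(log n) rather than O(n), which is the point of the dynamic sorting mentioned above; the list version keeps the example short.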
After the ordered processing file group is determined, the following step 101 is performed.
Step 101: traversing the ordered processing file group from largest to smallest, and comparing the processing files in the ordered processing file group with a preset system memory in turn.
The processing files in the ordered processing file group are compared, from largest to smallest, with the preset system memory; in the embodiments of the application, the preset system memory includes the system's virtual memory. If a processing file is larger than the preset system memory, the following step 102 is performed.
Step 102: if the processing file is larger than the preset system memory, processing the processing file in a single thread.
When a processing file is larger than the preset system memory, the file is processed first in a single thread, which prevents other threads from being blocked. After it completes, the next-largest processing file is compared with the preset system memory; if it is still larger, it too is processed in a single thread. These steps repeat until a processing file smaller than the preset system memory appears.
If the processing file is smaller than the preset system memory, the following step 103 is performed.
Step 103: if the processing file is smaller than the preset system memory, processing the processing file while determining, from the unprocessed processing files, further processing files smaller than the remaining preset system memory.
Specifically, processing a processing file while determining the processing files smaller than the remaining preset system memory proceeds as follows:
All unprocessed processing files are compared with the remaining preset system memory in turn. If a first processing file among them is larger than the remaining preset system memory, it is skipped and not processed; if it is smaller, it is processed. While the system is processing a file, that file occupies part of the memory; at the same time, the remaining unprocessed files are compared with the remaining memory. If a first unprocessed file is smaller than the remaining preset system memory, it is processed and occupies part of that memory, and the then-remaining preset system memory continues to be compared with the then-remaining files, so that processable files keep being selected until the preset system memory is fully occupied or no unprocessed file fits.
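The skip-or-process scan described above can be sketched as follows. This is an illustrative helper, not from the patent, and it assumes the unprocessed files are kept sorted from largest to smallest:

```python
def next_fitting_file(unprocessed, free_memory):
    """Return the index of the first (i.e. largest) unprocessed file that
    fits in the remaining preset system memory, or None if all must wait."""
    for i, size in enumerate(unprocessed):
        if size <= free_memory:
            return i
        # file is larger than the remaining memory: skip it, leave it queued
    return None
```

In embodiment one this scan, run with remaining memory 2 against the queue [5, 3, 2], skips B (5) and D (3) and selects C (2), exactly as steps 204 to 206 describe.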
While the files in the ordered processing file group are being processed, if a new processing file is generated, its preset category information is first acquired. Based on the acquired category information, the new processing file is spliced with the same-category file among the not-yet-processed files, to determine a spliced processing file. Specifically, when determining the spliced file, it is judged whether the preset category of the new processing file matches the category of a file currently being processed. If it does, the new processing file is spliced with the file being processed; otherwise, the new processing file is spliced with the unprocessed file of the same category.
Further, a new processing file spliced with a file being processed is processed directly, as a whole, together with it. A new processing file spliced with an unprocessed file of the same category is inserted among the unprocessed files, determining a new ordered processing file group, and processing of the new group continues according to the method shown in fig. 1. When a new file of the same category as an unprocessed file is generated, splicing allows several files to be processed at once, reducing the number of traversals of the file group, saving processing time, and improving processing efficiency.
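A minimal sketch of the category-based splicing rule, modelling each group of files as a dict from category to size; the function name and the dict model are illustrative assumptions, not the patent's data structures:

```python
def splice_new_file(new_size, category, in_progress, pending):
    """Merge a newly generated file with the file of the same category:
    prefer the file currently being processed, then the pending file of
    that category; otherwise queue the new file on its own."""
    if category in in_progress:
        in_progress[category] += new_size   # spliced with the in-progress file
    elif category in pending:
        pending[category] += new_size       # spliced with the unprocessed file
    else:
        pending[category] = new_size        # no same-category file: queue as new
```

With embodiment two's state (C being processed; B and D pending), splicing the new D' of size 1 grows D from 3 to 4, which is the file called Dnew in the text.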
To describe the solution of the application more clearly, specific embodiments are given below. The processing files in these embodiments are the clearing files mentioned in the background section.
Embodiment one:
Assume that for a certain organization, the current task is to compute MD5 feature values of 4 processing files: processing file A (size 6), processing file B (size 5), processing file C (size 2), and processing file D (size 3). The preset system memory is 8 (including virtual memory), and the number of preset threads is 2. In particular, assume that the calculation time for each processing file is linearly related to its size, so a processing file of size 1 takes calculation time 1. The specific steps for calculating the MD5 feature values of these 4 files are shown in fig. 2:
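The MD5 feature value of a file can be computed with Python's standard `hashlib` module; reading in chunks (the chunk size here is an arbitrary choice) keeps memory use bounded, in the spirit of the memory budget above:

```python
import hashlib

def md5_of_file(path, chunk_size=64 * 1024):
    """Compute a file's MD5 hex digest, reading fixed-size chunks so only
    a small, bounded amount of the file is in memory at any time."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```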
step 201: sorting the processing files; sorting the processing files based on the sizes of the processing files, arranging the processing files A, B, C, D in descending order according to the sizes, wherein the ordered processing file groups determined after the arrangement are (A, B, D, C); step 202 is performed after determining the ordered set of processing files.
Step 202: traversing the ordered processing file group from largest to smallest and comparing the processing files with the preset system memory in turn. Processing file A is traversed first and compared with the preset system memory; its size 6 is smaller than the preset system memory 8, so step 203 is performed.
Step 203: processing file A is processed and its MD5 value calculated; the remaining preset system memory is now 2.
Step 204: processing file B is traversed; its size 5 is larger than the remaining preset system memory 2, so processing file B is skipped and not processed.
Step 205: traversal of the ordered processing file group continues to processing file D; its size 3 is larger than the remaining system memory 2, so processing file D is skipped and not processed.
Step 206: traversal continues to processing file C; its size 2 equals the remaining free system memory 2, so processing file C is processed and its MD5 value calculated; the remaining preset system memory is now 0.
step 207: waiting for the recovery of a preset system memory; because the processing file C is smaller, the processing is finished firstly, the memory is released, and the rest idle memory of the system is restored to 2; however, since both skipped processing file B and processing file D are larger than the recovered remaining preset system memory 2, the processing file is still waiting until MD5 calculation of processing file a is completed, the memory is released, and the remaining preset system memory is recovered to 8; after the remaining preset system memory is restored to 8, step 208 is performed.
Step 208: processing file B is smaller than the preset system memory, so it is processed and its MD5 calculated; the remaining preset system memory is now 3.
Step 209: processing file D is traversed and compared with the remaining preset system memory; its size equals the remaining memory, so processing file D is processed and its MD5 calculated, leaving remaining preset system memory 0. In particular, in theory the processing of files B and D may start at the same time: when processing file B starts, the remaining preset system memory is 3, which equals the size of processing file D, so processing file D is processed immediately. When the processing of file B completes, all files to be processed have been processed.
In embodiment one, a file of size 1 takes calculation time 1, so the total time to process all files is 11. As shown in fig. 3, the first stage, processing files A and C, takes 6, and the second stage, processing files D and B, takes 5, giving a total of 11.
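The timing arithmetic of embodiment one can be checked directly; this is only a sanity check under the stated assumption that processing time equals file size:

```python
# Two-slot, memory-8 schedule of embodiment one: A(6)+C(2) run first,
# then D(3)+B(5); each stage lasts as long as its largest file.
stage1 = max(6, 2)                       # files A and C in parallel
stage2 = max(3, 5)                       # files D and B in parallel
multithreaded_total = stage1 + stage2
single_threaded_total = 6 + 5 + 2 + 3    # prior-art scheme 1: one file at a time
```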
Compare scheme 1 of the prior art, which processes files in a single thread: the total time there is t = 6 + 5 + 2 + 3 = 16, greater than the total time of 11 in embodiment one of the application.
Compare scheme 2 of the prior art, which processes files with default multithreading: the default multithreading strategy has no judgment mechanism for processing-file size, so in a system with a preset memory of only 8, processing files A and B would be read into memory for processing together. This can cause threads to compete for memory and apply for excessive cache, making CPU processing very slow and possibly even causing abnormal conditions such as system crashes and errors.
Compared with these two prior-art schemes, the system multithreading scheduling method provided by the application processes multiple tasks efficiently, ensures that the tasks are handled by multiple threads within the bearable range of the preset system memory, and maximizes the utilization of system resources. When systems with the same memory process the same files, the method keeps processing time short and processing efficiency high while the system runs normally.
Embodiment two applies the method to a dynamic order-checking scenario for a merchant. The merchant has several online channels, each of which collects its orders to a server; a delivery system (which may contain several internal centers) then ships according to the order information in the server. To ensure order correctness, the delivery system must check the orders. Moreover, new orders are generated while checking proceeds, and as many orders as possible must be checked. The preset system memory is 8 (including virtual memory), the preset threshold of the largest processable file is 20, and the number of preset threads is 2. Assume the calculation time for each order file is linearly related to its size, so a file of size 1 takes calculation time 1.
Assume the merchant has 5 categories of merchandise, forming 5 order files, each containing multiple order records, with sizes A = 6, B = 4, C = 1, D = 3 and E = 25. The specific processing is as follows:
First, the order files A to E are compared with the preset threshold 20 of the largest processable file to determine the order file group. Order file E exceeds the threshold 20, so order file E is not processed, and the determined order file group is ABCD.
The order file group is sorted by size, giving the ordered group ABDC, which is then traversed. Order file A is traversed first and compared with the preset system memory 8; A is smaller than 8, so order file A is processed, leaving the system free memory 2. Order file B is traversed next; its size 4 is larger than the remaining preset system memory 2, so order file B is skipped. Then order file D is traversed; its size 3 is larger than the system's remaining free memory 2, so order file D is skipped. Traversal continues to order file C; its size 1 is smaller than the remaining preset system memory 2, so order file C is processed, leaving remaining preset system memory 1.
While order file C is still being processed, new order files C' = 1 and D' = 1 are generated and inserted into the ordered order file queue; thanks to the dynamic ordering, the newly added files barely disturb the existing ordered files. New order file C' has the same category as the order file C being processed, and preset system memory 1 remains, so order file C' can be processed.
Since the new order file D' differs in category from order file C, which is being processed, but is of the same category as order file D, order files D' and D are spliced into a new order file Dnew. The order files are re-sorted, giving the new ordered group BDnew (B=4, Dnew=4; the order may equally be DnewB), and order files B and Dnew form the pending order file group.
After order files C and C' finish, their memory is released and the system's free memory rises to 2; however, files B and Dnew in the pending queue are both larger than 2, so the system continues to wait until order file A completes, its memory is released, and the free memory returns to 8. Order files B and Dnew are then processed simultaneously, leaving 0 units of free memory, until both complete and the whole task is finished.
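The category-based splicing used in this part of the example can be sketched as below (the function name `splice_new_file` and the dict-based queue are illustrative assumptions for this sketch, not the claimed implementation):

```python
def splice_new_file(new_name, new_size, new_category, in_process, pending):
    """Decide where a newly generated file goes.

    in_process: {file name: category} of files currently being processed.
    pending:    {file name: (size, category)} of skipped files.
    """
    # Same category as a file currently being processed: in the example,
    # C' is handled alongside the in-flight file C.
    if new_category in in_process.values():
        return "process alongside in-flight file of same category"
    # Otherwise splice with a same-category unprocessed file, if any.
    for name, (size, category) in list(pending.items()):
        if category == new_category:
            del pending[name]
            pending[name + "new"] = (size + new_size, category)
            return "spliced into " + name + "new"
    # No category match: insert as a new entry of its own.
    pending[new_name] = (new_size, new_category)
    return "inserted as new entry"

pending = {"B": (4, "B"), "D": (3, "D")}
in_process = {"A": "A", "C": "C"}
splice_new_file("C'", 1, "C", in_process, pending)  # C' runs next to C
splice_new_file("D'", 1, "D", in_process, pending)  # D' merges into Dnew
print(pending)  # → {'B': (4, 'B'), 'Dnew': (4, 'D')}
```

This reproduces the example: C' (same category as the in-flight C) is processed directly, while D' is spliced with the pending D into Dnew of size 4.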
As shown in fig. 4, all order processing is completed in 10 time units.
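Assuming, purely for illustration, that processing one unit of file size takes one unit of time (this assumption is ours; fig. 4 only shows the total of 10), the timeline of the example can be reconstructed and checked as follows:

```python
# Hypothetical timeline reconstruction of the example as (start, end)
# times; the one-time-unit-per-size-unit assumption is not the patent's.
sizes = {"A": 6, "C": 1, "C'": 1, "B": 4, "Dnew": 4}
intervals = {
    "A":    (0, 6),   # starts immediately
    "C":    (0, 1),   # fits alongside A
    "C'":   (1, 2),   # same category as C, runs in the freed slot
    "B":    (6, 10),  # waits until A releases its memory
    "Dnew": (6, 10),  # D spliced with D', runs together with B
}

total = max(end for _, end in intervals.values())
print(total)  # → 10

# The preset system memory of 8 is never exceeded at any time step.
for t in range(total):
    used = sum(sizes[f] for f, (s, e) in intervals.items() if s <= t < e)
    assert used <= 8
```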
As shown in fig. 5, based on the above method, an embodiment of the present application further provides a system multithreading scheduling device, including:
comparison module 501: configured to traverse the ordered processing file group from large to small and compare the processing files in the ordered processing file group with the preset system memory in sequence;
first processing module 502: configured to process the processing file in a single thread if the processing file is larger than the preset system memory;
second processing module 503: configured to, if the processing file is smaller than the preset system memory, process the processing file and simultaneously determine, from the unprocessed processing files, processing files smaller than the remaining preset system memory.
Optionally, the apparatus further includes:
sorting module 504: configured to compare each file to be processed with the preset threshold for the largest file the system can process, to determine the processing file group; and to sort the processing files in the processing file group by size, determining the ordered processing file group. This module is not essential to the present solution and is indicated by a dashed box in the figure.
Optionally, the second processing module 503 includes:
comparison unit 5031: configured to compare all unprocessed processing files with the system's remaining memory in sequence;
first processing unit 5032: configured to skip a first processing file among the unprocessed processing files, without processing it, if it is larger than the remaining preset system memory;
second processing unit 5033: configured to process the first processing file if it is smaller than the remaining preset system memory.
Optionally, the apparatus further includes:
splicing module 505: configured to, when a processing file is still being processed and a new processing file is generated, acquire preset category information of the new processing file; splice the new processing file with an unprocessed processing file of the same category, based on the preset category information, to determine a spliced processing file; and insert the spliced processing file into the unprocessed processing files to determine a new ordered processing file group. This module is not essential to the present solution and is indicated by a dashed box in the figure.
Optionally, the splicing module 505 is further configured to:
judging whether the preset category of the new processing file is the same as the category of a processing file that is still being processed;
if the preset category of the new processing file is the same as the category of a processing file that is still being processed, splicing the new processing file with that processing file;
and if the preset category of the new processing file differs from the categories of all processing files still being processed, splicing the new processing file with an unprocessed processing file of the same category.
The embodiment of the application also provides a computer storage medium, which comprises:
the computer readable storage medium comprises a computer program which, when run on a computer, causes the computer to perform the method described in fig. 1.
Embodiments of the present application also provide a computer program product containing instructions, comprising:
the instructions, when executed on a computer, cause the computer to perform the method described in fig. 1.

It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (9)

1. A method for multi-threaded scheduling of a system, the method comprising:
traversing the ordered processing file group from large to small, and comparing the processing files in the ordered processing file group with a preset system memory in sequence;
if the processing file is larger than the preset system memory, the processing file is processed by a single thread;
if the processing file is smaller than the preset system memory, processing the processing file, and determining, from the unprocessed processing files, a processing file smaller than the remaining preset system memory to process;
when the processing file is not yet fully processed and a new processing file is generated, acquiring preset category information of the new processing file; and splicing the new processing file with an unprocessed processing file of the same category, based on the preset category information, to determine a spliced processing file.
2. The method of claim 1, wherein the method further comprises:
comparing each file to be processed with a preset threshold for the largest file the system can process, and determining a processing file group;
and sequencing the processing files in the processing file group based on the sizes of the processing files in the processing file group, and determining the ordered processing file group.
3. The method of claim 1, wherein determining the processing files that are less than the remaining predetermined system memory comprises:
comparing all unprocessed processing files with the remaining system memory in sequence;
if a first processing file among the unprocessed processing files is larger than the remaining preset system memory, skipping the first processing file without processing it;
and if the first processing file among the unprocessed processing files is smaller than the remaining preset system memory, processing the first processing file.
4. The method of claim 1, wherein splicing the new processing file with an unprocessed processing file of the same category, based on the preset category information, to determine a spliced processing file comprises:
judging whether the preset category of the new processing file is the same as the category of a processing file that is still being processed;
if the preset category of the new processing file is the same as the category of a processing file that is still being processed, splicing the new processing file with that processing file;
and if the preset category of the new processing file differs from the categories of all processing files still being processed, splicing the new processing file with an unprocessed processing file of the same category.
5. A system multithreaded scheduling apparatus, the apparatus comprising:
a comparison module: configured to traverse the ordered processing file group from large to small and compare the processing files in the ordered processing file group with a preset system memory in sequence;
a first processing module: configured to process the processing file in a single thread if the processing file is larger than the preset system memory;
a second processing module: configured to, if the processing file is smaller than the preset system memory, process the processing file and simultaneously determine, from the unprocessed processing files, a processing file smaller than the remaining preset system memory;
a splicing module: configured to, when the processing file is not yet fully processed and a new processing file is generated, acquire preset category information of the new processing file; and splice the new processing file with an unprocessed processing file of the same category, based on the preset category information, to determine a spliced processing file.
6. The apparatus of claim 5, wherein the apparatus further comprises:
a sorting module: configured to compare each file to be processed with the preset threshold for the largest file the system can process, to determine a processing file group; and to sort the processing files in the processing file group by size, determining the ordered processing file group.
7. The apparatus of claim 5, wherein the second processing module comprises:
a comparison unit: configured to compare all unprocessed processing files with the remaining system memory in sequence;
a first processing unit: configured to skip a first processing file among the unprocessed processing files, without processing it, if it is larger than the remaining preset system memory;
a second processing unit: configured to process the first processing file if it is smaller than the remaining preset system memory.
8. The apparatus of claim 5, wherein the stitching module is further configured to:
judging whether the preset category of the new processing file is the same as the category of a processing file that is still being processed;
if the preset category of the new processing file is the same as the category of a processing file that is still being processed, splicing the new processing file with that processing file;
and if the preset category of the new processing file differs from the categories of all processing files still being processed, splicing the new processing file with an unprocessed processing file of the same category.
9. A computer readable storage medium, characterized in that the computer readable storage medium comprises a computer program which, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 4.
CN201811640317.6A 2018-12-29 2018-12-29 System multithreading scheduling method and device Active CN109739629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811640317.6A CN109739629B (en) 2018-12-29 2018-12-29 System multithreading scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811640317.6A CN109739629B (en) 2018-12-29 2018-12-29 System multithreading scheduling method and device

Publications (2)

Publication Number Publication Date
CN109739629A CN109739629A (en) 2019-05-10
CN109739629B true CN109739629B (en) 2023-04-25

Family

ID=66362572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811640317.6A Active CN109739629B (en) 2018-12-29 2018-12-29 System multithreading scheduling method and device

Country Status (1)

Country Link
CN (1) CN109739629B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102053859A (en) * 2009-11-09 2011-05-11 中国移动通信集团甘肃有限公司 Method and device for processing bulk data
CN107153618A (en) * 2016-03-02 2017-09-12 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Memory Allocation
CN107222554A (en) * 2017-06-27 2017-09-29 山东中创软件商用中间件股份有限公司 A kind of document transmission method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9058338B2 (en) * 2011-10-26 2015-06-16 International Business Machines Corporation Storing a small file with a reduced storage and memory footprint


Also Published As

Publication number Publication date
CN109739629A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
US9846637B2 (en) Machine learning based software program repair
CN113238838A (en) Task scheduling method and device and computer readable storage medium
JP5803972B2 (en) Multi-core processor
CN109656782A (en) Visual scheduling monitoring method, device and server
CN106257411A (en) Single instrction multithread calculating system and method thereof
CN109710624B (en) Data processing method, device, medium and electronic equipment
CN104035818A (en) Multiple-task scheduling method and device
CN101620527A (en) Managing active thread dependencies in graphics processing
CN104317958A (en) Method and system for processing data in real time
EP2420930A2 (en) Apparatus and method for thread scheduling and lock acquisition order control based on deterministic progress index
CN110851246A (en) Batch task processing method, device and system and storage medium
CN113168364A (en) Chip verification method and device
CN111400010A (en) Task scheduling method and device
CN109739629B (en) System multithreading scheduling method and device
CN102855122A (en) Processing pipeline control
CN111241594B (en) Method, device, computer equipment and storage medium for signing transaction information
CN111078510A (en) Method and device for recording task processing progress
CN105931003B (en) Order processing method, system and device
CN103530742B (en) Improve the method and device of scheduling arithmetic speed
CN102141938B (en) Method and device for adjusting software load in multithreaded system
CN105892596A (en) Information processing method and electronic device
JP2007122527A (en) Flow control method
CN109783242A (en) Abroad holding valuation flow control method, device, computer equipment and storage medium
CN110969565A (en) Image processing method and device
CN112965798A (en) Big data processing method and system based on distributed multithreading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant