CN114416325B - Batch task computing system based on intelligent analysis - Google Patents
Info
- Publication number
- CN114416325B CN114416325B CN202210340398.8A CN202210340398A CN114416325B CN 114416325 B CN114416325 B CN 114416325B CN 202210340398 A CN202210340398 A CN 202210340398A CN 114416325 B CN114416325 B CN 114416325B
- Authority
- CN
- China
- Prior art keywords
- task
- module
- running
- batch
- temporary
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5044—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/48—Indexing scheme relating to G06F9/48
- G06F2209/484—Precedence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5018—Thread allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a batch task computing system based on intelligent analysis, which aims to solve the technical problems in the prior art that an appropriate number of threads cannot be created according to memory occupancy, the running speed of a program cannot be maximized, no intelligent analysis function is provided, and simultaneously running tasks cannot be properly allocated for processing according to the system's running condition, so that the system is prone to congestion and the normal running of batch tasks is affected. The system comprises a task acquisition module, a storage module, a task sequencing module, a task allocation module, a task calculation module and an intelligent analysis module; the task acquisition module acquires the configuration information of the batch tasks to obtain a batch task list. Using the task computing module, the system can set the correct number of threads to maximize the running speed of the program, and can rearrange the batch tasks according to their running time, so that the system has an intelligent analysis function and ensures that the batch tasks run normally in the optimal and fastest manner.
Description
Technical Field
The invention belongs to the field of computing systems, and particularly relates to a batch task computing system based on intelligent analysis.
Background
With the continuous development of computer technology, batch-processing tasks can be found throughout numerous internet projects, and they bring great convenience and efficiency improvements to daily work.
At present, the invention patent with patent number CN 201910598614.7 discloses a batch task arrangement method, which includes: acquiring resource usage information for processing batch tasks; generating the optimal concurrency of the batch tasks according to the resource usage information and a preset concurrency evaluation model; and arranging the batch tasks according to the optimal concurrency, the historical processing information of the batch tasks, and a preset arrangement evaluation model. The method uses scene analysis to achieve scientific job arrangement, but its computing system cannot create an appropriate number of threads according to memory occupancy, cannot maximize the running speed of a program, has no intelligent analysis function, and cannot properly allocate simultaneously running tasks for processing according to the system's running condition; it therefore easily causes system congestion and affects the normal running of batch tasks.
Therefore, in order to solve the above problems that the running speed of the program cannot be maximized and that no intelligent analysis function is available, the usage scenario of the computing system needs to be improved.
Disclosure of Invention
(1) Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a batch task computing system based on intelligent analysis, to solve the technical problems in the prior art that an appropriate number of threads cannot be created according to memory occupancy, the running speed of a program cannot be maximized, no intelligent analysis function is available, and simultaneously running tasks cannot be properly allocated according to the system's running condition, so that the system is prone to congestion and the normal running of batch tasks is affected.
(2) Technical scheme
In order to solve the above technical problems, the invention provides a batch task computing system based on intelligent analysis, which comprises a task acquisition module, a storage module, a task sequencing module, a task allocation module, a task computing module and an intelligent analysis module; wherein,
the task obtaining module obtains configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task ordering module orders the tasks of the task acquisition module and generates a task ordering table; a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input the samples (x_i, y_i), i = 1, 2, …, N, where x_i is a sample and y_i is its ordering label, together with the number of trees T and the loss function L; initialize the model F_0 with a standard value; compute the response r_i = −[∂L(y_i, F(x_i)) / ∂F(x_i)], evaluated at F = F_{t−1}, for i = 1, 2, …, N; then learn the t-th tree h_t = arg min_h Σ_i (r_i − h(x_i))², where θ_t denotes the model parameters of the tree and h_t(x_i) is a predicted value; find the step length ρ_t = arg min_ρ Σ_i L(y_i, F_{t−1}(x_i) + ρ·h_t(x_i)); after the step length satisfies the condition formula, update the model: F_t = F_{t−1} + ρ_t·h_t; output the classification learner F_T;
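The boosting loop above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: it assumes squared-error loss (so the response r_i is simply the residual y_i − F(x_i)) and uses one-dimensional depth-1 "stumps" found by brute force in place of full CART trees, with the step length folded into a fixed learning rate `lr`.

```python
# Minimal GBDT sketch: squared-error loss, brute-force stumps instead of CARTs.
def fit_gbdt(xs, ys, n_trees=10, lr=0.5):
    f0 = sum(ys) / len(ys)            # F_0: initialise with a constant value
    preds = [f0] * len(ys)
    stumps = []
    for _ in range(n_trees):
        # response r_i = -dL/dF = y_i - F(x_i) for squared error
        resid = [y - p for y, p in zip(ys, preds)]
        # learn the t-th "tree": the single threshold split that best fits resid
        best = None
        for s in sorted(set(xs)):
            left = [r for x, r in zip(xs, resid) if x <= s]
            right = [r for x, r in zip(xs, resid) if x > s]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = sum((r - (lv if x <= s else rv)) ** 2
                      for x, r in zip(xs, resid))
            if best is None or err < best[0]:
                best = (err, s, lv, rv)
        _, s, lv, rv = best
        stumps.append((s, lr * lv, lr * rv))   # step length folded into lr
        # model update F_t = F_{t-1} + lr * h_t
        preds = [p + (lr * lv if x <= s else lr * rv)
                 for x, p in zip(xs, preds)]
    return f0, stumps

def predict(model, x):
    f0, stumps = model
    return f0 + sum(lv if x <= s else rv for s, lv, rv in stumps)
```

In a production ranking system the stumps would be replaced by full regression trees and the step length chosen by line search, as the patent's formulas describe.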
The task allocation module is internally pre-loaded with a scheduling algorithm; the scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in order; it calculates the running time t1 of the first task in sub-table 1 and adds the first task to the first position in running table 1, then calculates the running time t2 of the second task in sub-table 1 and compares t2 with t1: if t2 < t1, the second task is inserted in front of the first task; if t2 ≥ t1, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way to generate running table 1, and running table 2, …, running table Z are generated likewise; the tasks in running table 1, running table 2, …, running table Z are then concatenated in order into one list to generate a new batch task list;
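The scheduling step above amounts to an insertion sort by estimated running time within each fixed-size sub-table. A minimal sketch, assuming a caller-supplied `runtime` estimator (the patent does not specify how running times are obtained):

```python
# Split the ordered task list into sub-tables, insertion-sort each sub-table
# by estimated running time, then concatenate the running tables in order.
def build_batch_list(tasks, runtime, subtable_size=100):
    new_list = []
    for start in range(0, len(tasks), subtable_size):
        sub = tasks[start:start + subtable_size]      # sub-table 1, 2, ..., Z
        running = []
        for task in sub:
            t = runtime(task)
            # walk from the front: insert before the first slower task,
            # otherwise append at the rear (the t2 >= t1 case)
            for i, placed in enumerate(running):
                if t < runtime(placed):
                    running.insert(i, task)
                    break
            else:
                running.append(task)
        new_list.extend(running)                       # concatenate in order
    return new_list
```

The patent fixes `subtable_size` at 100 tasks; it is left as a parameter here for clarity.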
the task computing module creates N threads, where N is the optimal number of threads, and maximizes the running speed of the program by setting the correct number of threads;
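The patent leaves the formula for N to its condition formula, which did not survive extraction. As a stand-in, this sketch uses the common heuristic N = CPU cores + 1 for CPU-bound batch work (an assumption, not the patent's formula) and runs the tasks on a fixed-size pool:

```python
# Run batch tasks on a thread pool sized by a simple heuristic.
import os
from concurrent.futures import ThreadPoolExecutor

def run_batch(tasks, worker, n_threads=None):
    if n_threads is None:
        # assumed heuristic: cores + 1; the patent's own formula is elided
        n_threads = (os.cpu_count() or 1) + 1
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        # map preserves the order of the input task list
        return list(pool.map(worker, tasks))
```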
an analysis rule is pre-installed in the intelligent analysis module; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy rates of the first task and the second task are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains in the temporary task running queue, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the second task is retained, and the temporary task running queue contains the first task and the second task; then the third task is added to the temporary task running queue, and the CPU occupancy rates of the first, second and third tasks are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the third task is kicked out of the temporary task running queue, the temporary task running queue contains the first task and the second task, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the third task is retained, and the temporary task running queue contains the first, second and third tasks; and so on, until the final task running queue is generated.
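The admission rule above can be sketched as a trial-add loop. This is an illustration under assumptions: `cpu_cost` is a hypothetical per-task CPU estimate (a fraction of one CPU) supplied by the caller, whereas a real system would sample live CPU occupancy after admitting each task.

```python
# Trial-add tasks to a temporary queue; kick the last one back out and stop
# as soon as the combined CPU occupancy of the queue would reach the threshold.
def build_running_queue(batch_list, cpu_cost, threshold=0.5):
    queue = []
    for task in batch_list:
        queue.append(task)                         # trial-add the next task
        if sum(cpu_cost(t) for t in queue) >= threshold:
            queue.pop()                            # kick it back out
            break                                  # the queue is now final
    return queue
```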
When the system of this technical scheme is used, the task acquisition module acquires the configuration information of the batch tasks to obtain a batch task list, and the task sorting module sorts the tasks of the task acquisition module and generates a task sorting table according to the GBDT sorting model: input the samples (x_i, y_i), i = 1, 2, …, N, the number of trees T, and the loss function L; initialize F_0; compute the response r_i = −[∂L(y_i, F(x_i)) / ∂F(x_i)], evaluated at F = F_{t−1}, for i = 1, 2, …, N; learn the t-th tree h_t = arg min_h Σ_i (r_i − h(x_i))²; find the step length ρ_t = arg min_ρ Σ_i L(y_i, F_{t−1}(x_i) + ρ·h_t(x_i)); after the condition formula is satisfied, update the model F_t = F_{t−1} + ρ_t·h_t and output F_T. The scheduling algorithm divides the task sorting table into sub-table 1, sub-table 2, …, sub-table Z in order, calculates the running time t1 of the first task in sub-table 1, adds the first task to the first position in running table 1, calculates the running time t2 of the second task in sub-table 1, and compares t2 with t1: if t2 < t1, the second task is inserted in front of the first task; if t2 ≥ t1, the second task is placed behind the first task; all task positions in sub-table 1 are adjusted in the same way to generate running table 1, running table 2, …, running table Z are generated likewise, and the tasks in running table 1 through running table Z are concatenated in order to generate a new batch task list. The task computing module creates N threads, where N is the optimal number of threads, and maximizes the running speed of the program by setting the correct number of threads. An analysis rule is pre-installed in the intelligent analysis module; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy rates of the first task and the second task are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the second task is retained, and the temporary task running queue contains the first task and the second task; then the third task is added and the CPU occupancy rates of the first, second and third tasks are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the third task is kicked out, the temporary task running queue contains the first task and the second task, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the third task is retained, and the temporary task running queue contains the first, second and third tasks; and so on, until the final task running queue is generated. Meanwhile, the storage module temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears the cache data in the cache unit through the timing unit, so as to release the cache space in the cache unit.
Preferably, the configuration information of the task obtaining module includes a task identifier, a task type, and a task parameter.
Preferably, r_i represents the negative gradient of L with respect to F, F is the initial value of the model, L is the mean square error, h_t = arg min_h Σ_i (r_i − h(x_i))² represents the learning process of the t-th tree, the t-th tree fits the given negative gradient, and T is the number of CARTs.
Preferably, the number of tasks in sub-table 1, sub-table 2, … and sub-table Z in the task allocation module is 100.
Preferably, the specific process of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
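The timing unit's sweep can be sketched as follows. This is an illustration under assumptions: `is_finished` is a hypothetical predicate over cached entries, and `threading.Timer` stands in for the timing unit's G-minute clock.

```python
# Periodically screen finished tasks out of the cache and clear their data.
import threading

def sweep_once(cache, is_finished):
    # return a new cache with the entries of completed tasks cleared
    return {k: v for k, v in cache.items() if not is_finished(v)}

def start_timing_unit(cache, is_finished, g_minutes):
    def sweep():
        for task_id in [k for k, v in cache.items() if is_finished(v)]:
            del cache[task_id]                    # clear finished entries
        timer = threading.Timer(g_minutes * 60, sweep)
        timer.daemon = True
        timer.start()                             # reschedule the next sweep
    sweep()
```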
(3) Advantageous effects
Compared with the prior art, the invention has the following beneficial effects: the system can set the correct number of threads to maximize the running speed of the program by using the task computing module; it can sequence the batch tasks by using the GBDT sequencing model pre-installed in the task sequencing module; it can rearrange the batch tasks according to their running time by using the task allocation module; and it has an intelligent analysis function by using the intelligent analysis module, so that the number of tasks running each time can be conveniently allocated according to the system's running condition. This maximizes the utilization rate of the system without causing congestion, and ensures that the batch tasks run normally in the optimal and fastest manner.
Drawings
FIG. 1 is a schematic diagram of an overall frame structure of one embodiment of the system of the present invention;
FIG. 2 is a flowchart of the operation of a scheduling algorithm in one embodiment of the system of the present invention;
FIG. 3 is a flowchart of the GBDT ranking model in an embodiment of the system of the present invention.
Detailed Description
Example 1
The specific embodiment is a batch task computing system based on intelligent analysis, the schematic diagram of the overall framework structure of the system is shown in fig. 1, the work flow diagram of a scheduling algorithm is shown in fig. 2, the work flow diagram of a GBDT sorting model is shown in fig. 3, and the system comprises a task acquisition module, a storage module, a task sorting module, a task allocation module, a task computing module and an intelligent analysis module;
the task obtaining module obtains configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then regularly clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task ordering module orders the tasks of the task acquisition module and generates a task ordering table; a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input the samples (x_i, y_i), i = 1, 2, …, N, where x_i is a sample and y_i is its ordering label, together with the number of trees T and the loss function L; initialize the model F_0 with a standard value; compute the response r_i = −[∂L(y_i, F(x_i)) / ∂F(x_i)], evaluated at F = F_{t−1}, for i = 1, 2, …, N; then learn the t-th tree h_t = arg min_h Σ_i (r_i − h(x_i))², where θ_t denotes the model parameters of the tree and h_t(x_i) is a predicted value; find the step length ρ_t = arg min_ρ Σ_i L(y_i, F_{t−1}(x_i) + ρ·h_t(x_i)); after the step length satisfies the condition formula, update the model: F_t = F_{t−1} + ρ_t·h_t; output the classification learner F_T;
The task allocation module is internally pre-loaded with a scheduling algorithm; the scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in order; it calculates the running time t1 of the first task in sub-table 1 and adds the first task to the first position in running table 1, then calculates the running time t2 of the second task in sub-table 1 and compares t2 with t1: if t2 < t1, the second task is inserted in front of the first task; if t2 ≥ t1, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way to generate running table 1, and running table 2, …, running table Z are generated likewise; the tasks in running table 1, running table 2, …, running table Z are then concatenated in order into one list to generate a new batch task list;
the task computing module creates N threads, where N is the optimal number of threads, and maximizes the running speed of the program by setting the correct number of threads;
an analysis rule is pre-installed in the intelligent analysis module; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy rates of the first task and the second task are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains in the temporary task running queue, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the second task is retained, and the temporary task running queue contains the first task and the second task; then the third task is added to the temporary task running queue, and the CPU occupancy rates of the first, second and third tasks are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the third task is kicked out of the temporary task running queue, the temporary task running queue contains the first task and the second task, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the third task is retained, and the temporary task running queue contains the first, second and third tasks; and so on, until the final task running queue is generated.
Wherein, the configuration information of the task acquisition module comprises task identification, task type and task parameters; r_i represents the negative gradient of L with respect to F, F is the initial value of the model, L is the mean square error, h_t = arg min_h Σ_i (r_i − h(x_i))² represents the learning process of the t-th tree, the t-th tree fits the given negative gradient, and T is the number of CARTs.
Meanwhile, the step length in the task sequencing module must satisfy the condition formula, and N in the task computing module is the optimal number of threads.
In addition, the number of tasks in sub-table 1, sub-table 2, … and sub-table Z in the task allocation module is 100, and the specific flow of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
When the system of this technical scheme is used, the task acquisition module acquires the configuration information of the batch tasks, including task identification, task type and task parameters, to obtain a batch task list; the task sorting module sorts the tasks of the task acquisition module and generates a task sorting table using the pre-installed GBDT sorting model, whose algorithm steps are: input the samples (x_i, y_i), i = 1, 2, …, N, where x_i is a sample and y_i is its sorting label, together with the number of trees T and the loss function L; initialize the model F_0 with a standard value; compute the response r_i = −[∂L(y_i, F(x_i)) / ∂F(x_i)], evaluated at F = F_{t−1}, for i = 1, 2, …, N; learn the t-th tree h_t = arg min_h Σ_i (r_i − h(x_i))²; find the step length ρ_t = arg min_ρ Σ_i L(y_i, F_{t−1}(x_i) + ρ·h_t(x_i)); after the step length satisfies the condition formula, update the model F_t = F_{t−1} + ρ_t·h_t and output the classification learner F_T; here r_i represents the negative gradient of L with respect to F, F is the initial value of the model, L is the mean square error, the t-th tree fits the given negative gradient, and T is the number of CARTs. The scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in order, each containing 100 tasks; it calculates the running time t1 of the first task in sub-table 1, adds the first task to the first position in running table 1, calculates the running time t2 of the second task in sub-table 1, and compares t2 with t1: if t2 < t1, the second task is inserted in front of the first task; if t2 ≥ t1, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way to generate running table 1, and running table 2, …, running table Z are generated likewise; the tasks in running table 1 through running table Z are then concatenated in order to generate a new batch task list. The task computing module creates N threads, where N is the optimal number of threads, and maximizes the running speed of the program by setting the correct number of threads. An analysis rule is pre-installed in the intelligent analysis module; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy rates of the first task and the second task are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the second task is retained, and the temporary task running queue contains the first task and the second task; then the third task is added and the CPU occupancy rates of the first, second and third tasks are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the third task is kicked out, the temporary task running queue contains the first task and the second task, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the third task is retained, and the temporary task running queue contains the first, second and third tasks; and so on, until the final task running queue is generated. Meanwhile, the storage module temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears the cache data in the cache unit through the timing unit so as to release the cache space; the specific flow of the timing unit is: set the working period of the timing unit to G minutes; after G minutes have elapsed, screen the completed tasks in the cache unit, and then clear the cache data corresponding to the completed tasks.
Claims (5)
1. A batch task computing system based on intelligent analysis, comprising a task acquisition module, a storage module, a task sequencing module, a task allocation module, a task calculation module and an intelligent analysis module; characterized in that,
the task obtaining module obtains configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task ordering module orders the tasks of the task acquisition module and generates a task ordering table; a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input the samples (x_i, y_i), i = 1, 2, …, N, where x_i is a sample and y_i is its ordering label, together with the number of trees T and the loss function L; initialize the model F_0 with a standard value; compute the response r_i = −[∂L(y_i, F(x_i)) / ∂F(x_i)], evaluated at the current regression prediction F = F_{t−1}, for i = 1, 2, …, N; then learn the t-th tree h_t = arg min_h Σ_i (r_i − h(x_i))², where θ_t denotes the model parameters of the tree and h_t(x_i) is a predicted value; find the step length ρ_t = arg min_ρ Σ_i L(y_i, F_{t−1}(x_i) + ρ·h_t(x_i)); after the step length satisfies the condition formula, update the model: F_t = F_{t−1} + ρ_t·h_t; output the classification learner F_T;
The task allocation module is internally pre-loaded with a scheduling algorithm; the scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in order; it calculates the running time t1 of the first task in sub-table 1 and adds the first task to the first position in running table 1, then calculates the running time t2 of the second task in sub-table 1 and compares t2 with t1: if t2 < t1, the second task is inserted in front of the first task; if t2 ≥ t1, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way to generate running table 1, and running table 2, …, running table Z are generated likewise; the tasks in running table 1, running table 2, …, running table Z are concatenated in order into one list to generate a new batch task list;
the task computing module creates N threads, where N is the optimal number of threads, and maximizes the running speed of the program by setting the correct number of threads;
the intelligent analysis module is internally pre-loaded with an analysis rule; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is first added to the temporary task running queue, then the second task is added, and the CPU occupancy rates of the first task and the second task are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains in the temporary task running queue, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the second task is retained, and the temporary task running queue contains the first task and the second task; then the third task is added to the temporary task running queue, and the CPU occupancy rates of the first, second and third tasks are analyzed; if the CPU occupancy rate is greater than or equal to 50%, the third task is kicked out of the temporary task running queue, the temporary task running queue contains the first task and the second task, and the task running queue is generated; if the CPU occupancy rate is less than 50%, the third task is retained, and the temporary task running queue contains the first, second and third tasks; and so on, until the final task running queue is generated;
the above-mentionedRepresents L to FThe negative bias, F is the initial value of the model, L is the mean square error,=argrepresenting the learning process of the t-th tree to which is fittedGiven a negative bias, T is the number of CARTs;
2. The batch task computing system based on intelligent analysis of claim 1, wherein the configuration information of the task obtaining module includes task identification, task type, and task parameters.
4. The batch task computing system based on intelligent analysis of claim 1, wherein the number of tasks in sub-table 1, sub-table 2, …, sub-table Z in the task allocation module is 100.
5. The batch task computing system based on intelligent analysis according to claim 1, wherein the specific flow of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
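The timing unit's per-period work reduces to a cache purge that can be sketched as a pure function and invoked once every G minutes by any timer; `purge_finished` and the `is_finished` callback are hypothetical names for illustration.

```python
def purge_finished(cache, is_finished):
    """Screen finished tasks in the cache and clear their cached data,
    as the timing unit does once per G-minute working period.
    `cache` maps task id -> cached data; returns the purged ids."""
    finished = [tid for tid in list(cache) if is_finished(tid)]
    for tid in finished:
        del cache[tid]
    return finished
```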
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210340398.8A CN114416325B (en) | 2022-04-02 | 2022-04-02 | Batch task computing system based on intelligent analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210340398.8A CN114416325B (en) | 2022-04-02 | 2022-04-02 | Batch task computing system based on intelligent analysis |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114416325A CN114416325A (en) | 2022-04-29 |
CN114416325B true CN114416325B (en) | 2022-08-26 |
Family
ID=81264052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210340398.8A Active CN114416325B (en) | 2022-04-02 | 2022-04-02 | Batch task computing system based on intelligent analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114416325B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150058858A1 (en) * | 2013-08-21 | 2015-02-26 | Hasso-Plattner-Institut für Softwaresystemtechnik GmbH | Dynamic task prioritization for in-memory databases
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7610111B2 (en) * | 2007-02-13 | 2009-10-27 | Tech Semiconductor Singapore Pte Ltd | Method and system for wafer lot order |
US9141430B2 (en) * | 2012-04-30 | 2015-09-22 | Hewlett-Packard Development Company, L.P. | Scheduling mapreduce job sets |
US10810043B2 (en) * | 2017-05-04 | 2020-10-20 | Salesforce.Com, Inc. | Systems, methods, and apparatuses for implementing a scheduler and workload manager with cyclical service level target (SLT) optimization |
CN110119307B (en) * | 2018-02-05 | 2022-09-13 | 上海交通大学 | Data processing request processing method and device, storage medium and electronic device |
CN111078396B (en) * | 2019-11-22 | 2023-12-19 | 厦门安胜网络科技有限公司 | Distributed data access method and system based on multitasking examples |
CN113238861A (en) * | 2021-05-08 | 2021-08-10 | 北京天空卫士网络安全技术有限公司 | Task execution method and device |
- 2022-04-02 CN CN202210340398.8A patent/CN114416325B/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150058858A1 (en) * | 2013-08-21 | 2015-02-26 | Hasso-Plattner-Institut für Softwaresystemtechnik GmbH | Dynamic task prioritization for in-memory databases
Also Published As
Publication number | Publication date |
---|---|
CN114416325A (en) | 2022-04-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Tong et al. | A scheduling scheme in the cloud computing environment using deep Q-learning | |
CN105956021B (en) | A kind of automation task suitable for distributed machines study parallel method and its system | |
CN104317658B (en) | A kind of loaded self-adaptive method for scheduling task based on MapReduce | |
CN109933306B (en) | Self-adaptive hybrid cloud computing framework generation method based on operation type recognition | |
CN113157413B (en) | Deep learning task resource optimization configuration method and system based on service quality requirement | |
CN109547546B (en) | Scheduling method of request task and scheduling center server | |
CN106202431A (en) | A kind of Hadoop parameter automated tuning method and system based on machine learning | |
CN107908536B (en) | Performance evaluation method and system for GPU application in CPU-GPU heterogeneous environment | |
CN106874112B (en) | Workflow backfilling method combined with load balancing | |
CN109445386B (en) | Cloud manufacturing task shortest production time scheduling method based on ONBA | |
CN108170531B (en) | Cloud data center request flow scheduling method based on deep belief network | |
WO2023093375A1 (en) | Computing resource acquisition method and apparatus, electronic device, and storage medium | |
CN110705716A (en) | Multi-model parallel training method | |
CN110990121A (en) | Kubernetes scheduling strategy based on application portrait | |
CN108132840B (en) | Resource scheduling method and device in distributed system | |
CN114416325B (en) | Batch task computing system based on intelligent analysis | |
CN108519908A (en) | A kind of task dynamic management approach and device | |
CN113010296B (en) | Formalized model based task analysis and resource allocation method and system | |
CN109086976B (en) | Task allocation method for crowd sensing | |
KR20110037184A (en) | Pipelining computer system combining neuro-fuzzy system and parallel processor, method and apparatus for recognizing objects using the computer system in images | |
CN113032367A (en) | Dynamic load scene-oriented cross-layer configuration parameter collaborative tuning method and system for big data system | |
CN111190704A (en) | Task classification processing method based on big data processing framework | |
CN111309821B (en) | Task scheduling method and device based on graph database and electronic equipment | |
CN112070162A (en) | Multi-class processing task training sample construction method, device and medium | |
Du et al. | OctopusKing: A TCT-aware task scheduling on spark platform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||