CN114416325B - Batch task computing system based on intelligent analysis - Google Patents

Batch task computing system based on intelligent analysis

Info

Publication number
CN114416325B
CN114416325B (grant of application CN202210340398.8A)
Authority
CN
China
Prior art keywords
task
module
running
batch
temporary
Prior art date
Legal status
Active
Application number
CN202210340398.8A
Other languages
Chinese (zh)
Other versions
CN114416325A (en)
Inventor
魏俊杰
蓝岸
何翼
熊黄
庄辉
余翔达
许泽鹏
陈晓玩
陈飞
冷佳琪
廖瑞杰
黄冬泉
Current Assignee
Shenzhen News Network Media Co ltd
Original Assignee
Shenzhen News Network Media Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen News Network Media Co., Ltd.
Priority: CN202210340398.8A
Publication of CN114416325A
Application granted
Publication of CN114416325B


Classifications

    • G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/5038 — Allocation of resources to service a request, considering the execution order of a plurality of tasks, e.g. priority or time-dependency constraints
    • G06F9/5044 — Allocation of resources to service a request, considering hardware capabilities
    • G06F2209/484 — Precedence
    • G06F2209/5018 — Thread allocation
    • G06F2209/5021 — Priority
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a batch task computing system based on intelligent analysis. It aims to solve the technical problems in the prior art that a suitable number of threads cannot be created according to memory occupancy, the running speed of a program cannot be maximized, no intelligent analysis function is available, and tasks running simultaneously cannot be properly allocated for processing according to the running condition of the system, so that the system easily becomes congested and the normal running of batch tasks is affected. The system comprises a task acquisition module, a storage module, a task sorting module, a task allocation module, a task computing module and an intelligent analysis module; the task acquisition module obtains the configuration information of the batch tasks to obtain a batch task list. The system can set the correct number of threads through the task computing module to maximize the running speed of the program, and can rearrange the batch tasks according to running time, so that the system has an intelligent analysis function and ensures that batch tasks run normally in the optimal and fastest way.

Description

Batch task computing system based on intelligent analysis
Technical Field
The invention belongs to the field of computing systems, and particularly relates to a batch task computing system based on intelligent analysis.
Background
With the continuous development of computer technology, batch processing tasks are found everywhere in numerous internet projects, and they bring great convenience and efficiency improvements to daily work.
At present, the invention patent CN 201910598614.7 discloses a batch task arrangement method, which includes: acquiring resource usage information for processing batch tasks; generating the optimal concurrency of the batch tasks according to the resource usage information and a preset concurrency evaluation model; and arranging the batch tasks according to the optimal concurrency, the historical processing information of the batch tasks, and a preset arrangement evaluation model. The method adopts scene analysis to achieve scientific job-task arrangement, but the computing system cannot establish a suitable number of threads according to memory occupancy, cannot maximize the running speed of a program, has no intelligent analysis function, and cannot properly allocate tasks running simultaneously according to the running condition of the system, which easily causes system congestion and affects the normal running of batch tasks.
Therefore, in order to solve the above problems that the running speed of the program cannot be maximized and no intelligent analysis function is available, it is necessary to improve the usage scenario of the computing system.
Disclosure of Invention
(1) Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a batch task computing system based on intelligent analysis, to solve the technical problems in the prior art that a suitable number of threads cannot be created according to memory occupancy, the running speed of a program cannot be maximized, no intelligent analysis function is available, and tasks running simultaneously cannot be properly allocated according to the running condition of the system, so that the system easily becomes congested and the normal running of batch tasks is affected.
(2) Technical scheme
In order to solve the above technical problems, the invention provides a batch task computing system based on intelligent analysis, which comprises a task acquisition module, a storage module, a task sorting module, a task allocation module, a task computing module and an intelligent analysis module; wherein:
the task obtaining module obtains configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task sorting module sorts the tasks of the task acquisition module and generates a task sorting table; a GBDT ranking model is pre-installed in the task sorting module, and the algorithm steps of the GBDT ranking model are as follows: input $(x_i, y_i)$, $T$, $L$, where $x_i$ is a sample and $y_i$ is its ranking label; initialize $F_0(x)$, where $F_0(x)$ is a standard value; calculate the response $\tilde{y}_i = -\left[\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}\right]_{F = F_{t-1}}$, $i = 1, 2, \ldots, N$, where $L$ is the loss function; then learn the $t$-th tree, $w_t = \arg\min_{w} \sum_{i=1}^{N} \left(\tilde{y}_i - h(x_i; w)\right)^2$, and find the step length $\rho_t = \arg\min_{\rho} \sum_{i=1}^{N} L\left(y_i, F_{t-1}(x_i) + \rho\, h(x_i; w_t)\right)$, where $w_t$ is the fitted estimate of the model parameters, $w$ is the model parameters, $F_{t-1}(x_i)$ is the predicted value, and $\rho_t$ is the step length; after the step length satisfies the condition formula, the model is updated: $F_t(x) = F_{t-1}(x) + \rho_t\, h(x; w_t)$, where $h(x; w)$ is the output value of the base learner; output $F_T(x)$;
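The boosting loop above can be sketched in plain Python. This is a minimal illustrative sketch under squared-error loss, where the response reduces to the residual $y_i - F_{t-1}(x_i)$ and the base learner is a depth-1 regression stump; the function names and the stump learner are hypothetical, not the patented implementation.

```python
# Minimal GBDT sketch with squared-error loss: the response
# ỹ_i = -∂L/∂F reduces to the residual y_i - F(x_i), and each
# round fits a regression stump h(x; w) to those residuals.
def fit_stump(x, r):
    """Fit a depth-1 tree (one threshold, two leaf means) to residuals r."""
    best = None
    for t in sorted(set(x)):
        left = [ri for xi, ri in zip(x, r) if xi <= t]
        right = [ri for xi, ri in zip(x, r) if xi > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - ml) ** 2 for xi, ri in zip(x, r) if xi <= t) + \
              sum((ri - mr) ** 2 for xi, ri in zip(x, r) if xi > t)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

def gbdt_fit(x, y, n_trees=20, lr=0.5):
    f0 = sum(y) / len(y)                # F_0: constant initial model
    trees, pred = [], [f0] * len(x)
    for _ in range(n_trees):
        resid = [yi - pi for yi, pi in zip(y, pred)]         # response ỹ_i
        h = fit_stump(x, resid)                              # learn the t-th tree
        trees.append(h)
        pred = [pi + lr * h(xi) for pi, xi in zip(pred, x)]  # model update F_t
    return lambda xi: f0 + lr * sum(h(xi) for h in trees)    # output F_T
```

The fitted model's scores could then be used to order the batch tasks, higher score meaning higher rank.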
The task allocation module is pre-loaded with a scheduling algorithm; the scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in sequence, and calculates the running time $t_1$ of the first task in sub-table 1; the first task is added to the first position in running table 1, and the running time $t_2$ of the second task in sub-table 1 is calculated; $t_2$ is compared with $t_1$: if $t_2 < t_1$, the second task is inserted in front of the first task; if $t_2 \geq t_1$, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way and running table 1 is generated; running table 2, …, running table Z are generated in the same way, and the tasks in running table 1, running table 2, …, running table Z are added in sequence into one list, generating a new batch task list;
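The scheduling steps above can be sketched as follows; a sketch under the assumption (inferred from the comparison step, whose operators are lost in the source images) that shorter-running tasks are moved to the front of each sub-table. The function names and the `runtime` mapping are hypothetical.

```python
# Sketch of the scheduling algorithm: split the sorted task table into
# fixed-size sub-tables, order each sub-table by running time (shorter
# tasks inserted in front), then concatenate into a new batch task list.
def schedule(tasks, runtime, sub_size=100):
    """tasks: list of task ids; runtime: dict mapping task id -> running time."""
    sub_tables = [tasks[i:i + sub_size] for i in range(0, len(tasks), sub_size)]
    new_list = []
    for sub in sub_tables:
        running_table = []
        for task in sub:
            # insert in front of the first already-placed task that runs longer
            pos = len(running_table)
            for j, placed in enumerate(running_table):
                if runtime[task] < runtime[placed]:
                    pos = j
                    break
            running_table.insert(pos, task)
        new_list.extend(running_table)
    return new_list
```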
the task computing module creates $n$ threads, where $n$ is determined by a preset thread-count formula; the running speed of the program is maximized by setting the correct number of threads;
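A minimal sketch of the thread-creation step; the patent's thread-count formula is not reproduced in this text, so the block assumes the common heuristic of one thread per CPU core plus one for compute-bound batch work. `make_pool` is a hypothetical name.

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_pool():
    # Hypothetical thread-count formula: N_cpu + 1 is a common heuristic
    # for compute-bound work, standing in for the patent's own formula.
    n = (os.cpu_count() or 1) + 1
    return ThreadPoolExecutor(max_workers=n), n

pool, n = make_pool()
results = list(pool.map(lambda t: t * t, [1, 2, 3]))  # run batch tasks
pool.shutdown()
```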
An analysis rule is pre-installed in the intelligent analysis module, and a temporary task running queue is established from the new batch task list generated by the task allocation module. The first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy of the first and second tasks is analyzed: if the CPU occupancy is 50% or more, the second task is kicked out of the temporary task running queue, only the first task remains in the queue, and the task running queue is generated; if the CPU occupancy is below 50%, the second task is retained, and the temporary task running queue contains the first and second tasks. The third task is then added to the temporary task running queue and the CPU occupancy of the first, second and third tasks is analyzed: if the occupancy is 50% or more, the third task is kicked out, the temporary queue contains the first and second tasks, and the task running queue is generated; if the occupancy is below 50%, the third task is retained and the temporary queue contains the first, second and third tasks; and so on, until the final task running queue is generated.
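The admission rule above amounts to a greedy loop that stops at the first task pushing combined CPU occupancy to the 50% threshold; a sketch under the assumption that each task's CPU occupancy is known in advance via the hypothetical `cpu_usage` mapping.

```python
# Sketch of the intelligent analysis rule: admit tasks into a temporary
# running queue one at a time; when the combined CPU occupancy of the
# queue reaches 50%, kick the newly added task back out and finalize.
def build_run_queue(task_list, cpu_usage, threshold=0.50):
    queue = [task_list[0]]                      # the first task is always admitted
    for task in task_list[1:]:
        queue.append(task)
        if sum(cpu_usage[t] for t in queue) >= threshold:
            queue.pop()                         # kick the new task out
            break                               # the task running queue is generated
    return queue
```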
When the system of this technical scheme is used, the task acquisition module obtains the configuration information of the batch tasks and obtains a batch task list; the task sorting module sorts the tasks with the pre-installed GBDT ranking model, following the algorithm steps above, and generates the task sorting table; the scheduling algorithm in the task allocation module divides the sorting table into sub-table 1, sub-table 2, …, sub-table Z, reorders each sub-table by task running time, and concatenates the resulting running tables into a new batch task list; the task computing module creates the correct number of threads so that the running speed of the program is maximized; the intelligent analysis module builds a temporary task running queue from the new batch task list, admitting tasks one at a time and kicking out the newly added task whenever the combined CPU occupancy reaches 50%, until the final task running queue is generated; meanwhile, the storage module temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and the timing unit periodically clears the cache data in the cache unit to release cache space.
Preferably, the configuration information of the task obtaining module includes a task identifier, a task type, and a task parameter.
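The configuration information listed above could take the following shape; the class and field names are hypothetical illustrations, not the patent's data format.

```python
from dataclasses import dataclass, field

# Hypothetical shape of the per-task configuration information:
# a task identifier, a task type, and task parameters.
@dataclass
class TaskConfig:
    task_id: str
    task_type: str
    params: dict = field(default_factory=dict)

batch_task_list = [
    TaskConfig("t-001", "report", {"date": "2022-03-31"}),
    TaskConfig("t-002", "cleanup"),
]
```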
Preferably, $\tilde{y}_i = -\left[\partial L(y_i, F(x_i)) / \partial F(x_i)\right]_{F = F_{t-1}}$ represents the negative gradient (bias) of $L$ with respect to $F$ at $F_{t-1}$, where $F$ is the initial value of the model and $L$ is the mean square error; $w_t = \arg\min_{w} \sum_{i=1}^{N} (\tilde{y}_i - h(x_i; w))^2$ represents the learning process of the $t$-th tree, which fits the given negative gradient $\tilde{y}_i$; $T$ is the number of CART trees.
Preferably, the condition formula in the task sorting module is the step-length condition given in the model update above.
Preferably, $n$ in the task computing module is the optimal number of threads.
Preferably, the number of tasks in sub-table 1, sub-table 2, … and sub-table Z in the task allocation module is 100.
Preferably, the specific process of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
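The cache unit and timing unit flow above can be sketched as a small class; a manual `timer_tick` stands in for the real G-minute timer, and all names are hypothetical.

```python
# Sketch of the storage module: the cache unit holds configuration data,
# and the timing unit, once per G-minute period, screens completed tasks
# and clears their cache data to release cache space.
class CacheUnit:
    def __init__(self):
        self.cache = {}          # task_id -> configuration information
        self.completed = set()

    def store(self, task_id, config):
        self.cache[task_id] = config

    def mark_completed(self, task_id):
        self.completed.add(task_id)

    def timer_tick(self):
        """Called once per G-minute working period by the timing unit."""
        for task_id in list(self.completed):
            self.cache.pop(task_id, None)   # clear data of the finished task
        self.completed.clear()
```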
(3) Advantageous effects
Compared with the prior art, the invention has the following beneficial effects. The system can set the correct number of threads through the task computing module to maximize the running speed of the program; it can sort the batch tasks with the GBDT ranking model pre-installed in the task sorting module; it can rearrange the batch tasks according to running time through the task allocation module; and, through the intelligent analysis module, it has an intelligent analysis function, so that the number of tasks running at a time is allocated according to the running condition of the system. The utilization of the system is thereby maximized without causing system congestion, and the batch tasks are guaranteed to run normally in the optimal and fastest way.
Drawings
FIG. 1 is a schematic diagram of an overall frame structure of one embodiment of the system of the present invention;
FIG. 2 is a flowchart of the operation of a scheduling algorithm in one embodiment of the system of the present invention;
FIG. 3 is a flowchart of the GBDT ranking model in an embodiment of the system of the present invention.
Detailed Description
Example 1
The specific embodiment is a batch task computing system based on intelligent analysis, the schematic diagram of the overall framework structure of the system is shown in fig. 1, the work flow diagram of a scheduling algorithm is shown in fig. 2, the work flow diagram of a GBDT sorting model is shown in fig. 3, and the system comprises a task acquisition module, a storage module, a task sorting module, a task allocation module, a task computing module and an intelligent analysis module;
the task obtaining module obtains configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then regularly clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task ordering module orders the tasks of the task acquisition module and generates a task ordering table, a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input (a)
Figure 36361DEST_PATH_IMAGE001
Figure 756056DEST_PATH_IMAGE002
) T, L, wherein
Figure 737918DEST_PATH_IMAGE001
In order to be a sample of the sample,
Figure 730014DEST_PATH_IMAGE002
for sorting labels, initialization
Figure 517841DEST_PATH_IMAGE003
Figure 777528DEST_PATH_IMAGE003
Calculating the response as a standard value
Figure 992608DEST_PATH_IMAGE004
,i=1,2,…,N,
Figure 222733DEST_PATH_IMAGE005
For the loss function, then the t-th tree is learned,
Figure 1202DEST_PATH_IMAGE006
=arg
Figure 695488DEST_PATH_IMAGE007
finding a step length
Figure 769886DEST_PATH_IMAGE008
= arg
Figure 284044DEST_PATH_IMAGE009
Figure 616936DEST_PATH_IMAGE006
Is the complex conjugate of the model parameters,
Figure 352680DEST_PATH_IMAGE010
in order to be the parameters of the model,
Figure 112825DEST_PATH_IMAGE011
in order to predict the value of the target,
Figure 862082DEST_PATH_IMAGE012
after the step length meets the condition formula, the model is updated:
Figure 998665DEST_PATH_IMAGE013
Figure 588916DEST_PATH_IMAGE014
to classify the output value of the learner, output
Figure 316700DEST_PATH_IMAGE015
The task allocation module is pre-loaded with a scheduling algorithm; the scheduling algorithm divides the task sorting table of the task sorting module into sub-table 1, sub-table 2, …, sub-table Z in sequence, and calculates the running time $t_1$ of the first task in sub-table 1; the first task is added to the first position in running table 1, and the running time $t_2$ of the second task in sub-table 1 is calculated; $t_2$ is compared with $t_1$: if $t_2 < t_1$, the second task is inserted in front of the first task; if $t_2 \geq t_1$, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way and running table 1 is generated; running table 2, …, running table Z are generated in the same way, and the tasks in running table 1, running table 2, …, running table Z are added in sequence into one list, generating a new batch task list;
the task computing module creates $n$ threads, where $n$ is determined by the preset thread-count formula; the running speed of the program is maximized by setting the correct number of threads;
An analysis rule is pre-installed in the intelligent analysis module, and a temporary task running queue is established from the new batch task list generated by the task allocation module. The first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy of the first and second tasks is analyzed: if the CPU occupancy is 50% or more, the second task is kicked out of the temporary task running queue, only the first task remains in the queue, and the task running queue is generated; if the CPU occupancy is below 50%, the second task is retained, and the temporary task running queue contains the first and second tasks. The third task is then added to the temporary task running queue and the CPU occupancy of the first, second and third tasks is analyzed: if the occupancy is 50% or more, the third task is kicked out, the temporary queue contains the first and second tasks, and the task running queue is generated; if the occupancy is below 50%, the third task is retained and the temporary queue contains the first, second and third tasks; and so on, until the final task running queue is generated.
Wherein the configuration information of the task acquisition module comprises a task identifier, a task type and task parameters; $\tilde{y}_i = -\left[\partial L(y_i, F(x_i)) / \partial F(x_i)\right]_{F = F_{t-1}}$ represents the negative gradient (bias) of $L$ with respect to $F$ at $F_{t-1}$, where $F$ is the initial value of the model and $L$ is the mean square error; $w_t = \arg\min_{w} \sum_{i=1}^{N} (\tilde{y}_i - h(x_i; w))^2$ represents the learning process of the $t$-th tree, which fits the given negative gradient; $T$ is the number of CART trees.
Meanwhile, the condition formula in the task sorting module is the step-length condition given in the model update above, and $n$ in the task computing module is the optimal number of threads.
In addition, the number of tasks in sub-table 1, sub-table 2, … and sub-table Z in the task allocation module is 100, and the specific flow of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
When the system of the technical scheme is used, the task obtaining module obtains configuration information of batch tasks, the configuration information comprises task identifiers, task types and task parameters, a batch task list is obtained, the task sorting module sorts the tasks of the task obtaining module and generates a task sorting table, and the GBDT sorting model comprises the following algorithm steps: the task ordering module orders the tasks of the task acquisition module and generates a task ordering table, a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input (a)
Figure 764089DEST_PATH_IMAGE001
Figure 439921DEST_PATH_IMAGE002
) T, L, wherein
Figure 491054DEST_PATH_IMAGE001
In order to be a sample of the sample,
Figure 337656DEST_PATH_IMAGE002
for sorting labels, initialization
Figure 296385DEST_PATH_IMAGE003
Figure 43368DEST_PATH_IMAGE003
Calculating the response as a standard value
Figure 796560DEST_PATH_IMAGE004
,i=1,2,…,N,
Figure 881191DEST_PATH_IMAGE005
For the loss function, then the t-th tree is learned,
Figure 892878DEST_PATH_IMAGE006
=arg
Figure 746564DEST_PATH_IMAGE007
finding the step length
Figure 936237DEST_PATH_IMAGE008
= arg
Figure 258896DEST_PATH_IMAGE009
Figure 497111DEST_PATH_IMAGE006
Is the complex conjugate of the model parameters,
Figure 782468DEST_PATH_IMAGE010
in order to be the parameters of the model,
Figure 80725DEST_PATH_IMAGE011
in order to predict the value of the target,
Figure 936685DEST_PATH_IMAGE012
after the step length meets the condition formula, the model is updated:
Figure 726393DEST_PATH_IMAGE013
Figure 351410DEST_PATH_IMAGE014
to classify the output value of the learner, output
Figure 882885DEST_PATH_IMAGE015
Figure 413093DEST_PATH_IMAGE026
Represents L to F is
Figure 55427DEST_PATH_IMAGE022
The negative bias, F is the initial value of the model, L is the mean square error,
Figure 488944DEST_PATH_IMAGE023
=arg
Figure 191321DEST_PATH_IMAGE024
representing the learning process of the t-th tree to which is fitted
Figure 959557DEST_PATH_IMAGE004
Given a negative bias, T is the number of CARTs; the scheduling algorithm divides the task sorting table according to the task sorting table of the task sorting module, generates sub-table 1, sub-table 2, … and sub-table Z respectively according to the sequence, the number of tasks in sub-table 1, sub-table 2, … and sub-table Z in the task allocation module is 100, and calculates the running time of the first task in sub-table 1
Figure 654849DEST_PATH_IMAGE020
Adding the first task to the first position in the running table 1, and calculating the running time of the first task in the sub-table 1
Figure 192141DEST_PATH_IMAGE017
By using
Figure 799840DEST_PATH_IMAGE017
And
Figure 865491DEST_PATH_IMAGE016
make a comparison if
Figure 52890DEST_PATH_IMAGE017
Figure 507005DEST_PATH_IMAGE016
Inserting the second task into the front end of the first task, if so
Figure DEST_PATH_IMAGE027
Figure 472556DEST_PATH_IMAGE020
If the second task is placed at the rear end of the first task, the adjustment of all task positions in the sub-table 1 is completed in the same way, the operation table 1 is generated, the operation tables 2, … and the operation table Z are generated in the same way, and the tasks in the operation table 1, the operation tables 2, … and the operation table Z are sequentially added to be concentrated to one position and a new batch task list is generated; task computing module creation
Figure 12122DEST_PATH_IMAGE018
The number of the threads is one,
Figure 488365DEST_PATH_IMAGE019
the running speed of the program is maximized by setting the correct number of threads,
Figure 62566DEST_PATH_IMAGE018
the optimal number of threads; an analysis rule is pre-installed in the intelligent analysis module. A temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the combined CPU occupancy of the first and second tasks is analyzed. If the CPU occupancy is greater than or equal to 50%, the second task is kicked out of the temporary task running queue; at this point only the first task remains in the temporary queue, and the task running queue is generated. If the CPU occupancy is less than 50%, the second task is retained and the temporary queue contains the first and second tasks. The third task is then added and the CPU occupancy of the first, second and third tasks is analyzed: if it is greater than or equal to 50%, the third task is kicked out, the temporary queue contains the first and second tasks, and the task running queue is generated; if it is less than 50%, the third task is retained and the temporary queue contains the first, second and third tasks. Continuing in this way yields the final task running queue. Meanwhile, the storage module temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and the timing unit periodically clears the cached data in the cache unit to release cache space. The specific flow of the timing unit is as follows: the working period of the timing unit is set to G minutes; after G minutes have elapsed, the completed tasks in the cache unit are screened out, and the cached data corresponding to the completed tasks is cleared.
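The queue-building rule described above can be sketched as follows. This is a minimal illustration, not the patented implementation: `cpu_of` is a hypothetical function returning a task's estimated CPU share, and the queue is finalized as soon as admitting one more task would push the combined occupancy to 50% or more:

```python
def build_run_queue(tasks, cpu_of, threshold=0.5):
    """Build the task running queue by tentatively admitting tasks one at
    a time; when the combined CPU occupancy of the temporary queue reaches
    the threshold, kick the newest task back out and finalize the queue."""
    queue = []
    for task in tasks:
        queue.append(task)  # tentatively admit the task
        if sum(cpu_of(t) for t in queue) >= threshold:
            queue.pop()     # kick the newest task out of the temporary queue
            break           # the task running queue is generated here
    return queue
```

For example, with per-task CPU shares of 0.2, 0.2 and 0.3, the first two tasks are retained (combined 0.4 < 0.5) and the third is kicked out (combined 0.7 ≥ 0.5), so the generated queue holds the first two tasks.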

Claims (5)

1. A batch task computing system based on intelligent analysis, the system comprising a task acquisition module, a storage module, a task ordering module, a task allocation module, a task computing module and an intelligent analysis module, characterized in that:
the task acquisition module acquires configuration information of batch tasks to obtain a batch task list;
the storage module comprises a cache unit and a timing unit, temporarily stores the configuration information acquired by the task acquisition module in the cache unit, and then periodically clears cache data in the cache unit through the timing unit so as to release a cache space in the cache unit;
the task ordering module orders the tasks of the task acquisition module and generates a task ordering table, a GBDT ordering model is pre-installed in the task ordering module, and the algorithm steps of the GBDT ordering model are as follows: input (a)
Figure DEST_PATH_IMAGE001
Figure 163667DEST_PATH_IMAGE002
) T, L, wherein
Figure 46303DEST_PATH_IMAGE001
In order to be a sample of the sample,
Figure 634280DEST_PATH_IMAGE002
for sorting labels, initialization
Figure DEST_PATH_IMAGE003
Figure 729406DEST_PATH_IMAGE003
Calculating the response as a standard value
Figure 820858DEST_PATH_IMAGE004
,i=1,2,…,N,
Figure DEST_PATH_IMAGE005
For the loss function, then the t-th tree is learned,
Figure 671134DEST_PATH_IMAGE006
=arg
Figure DEST_PATH_IMAGE007
finding the step length
Figure 887352DEST_PATH_IMAGE008
= arg
Figure DEST_PATH_IMAGE009
Figure 868993DEST_PATH_IMAGE010
In order to model the predicted value of regression,
Figure 814953DEST_PATH_IMAGE006
is the complex conjugate of the model parameters,
Figure DEST_PATH_IMAGE011
in order to be the parameters of the model,
Figure 570550DEST_PATH_IMAGE012
is a predicted value, and the method is used,
Figure 805222DEST_PATH_IMAGE008
in order to be the step size,
Figure DEST_PATH_IMAGE013
updating the model for the current regression prediction value after the condition formula is satisfied:
Figure 304468DEST_PATH_IMAGE014
Figure DEST_PATH_IMAGE015
to classify the output value of the learner, output
Figure 42617DEST_PATH_IMAGE016
the task allocation module is pre-installed with a scheduling algorithm; the scheduling algorithm divides the task ordering table of the task ordering module into sub-table 1, sub-table 2, …, sub-table Z in sequence; it calculates the running time $t_1$ of the first task in sub-table 1 and adds the first task to the first position in running table 1, then calculates the running time $t_2$ of the second task in sub-table 1 and compares $t_2$ with $t_1$: if $t_2 < t_1$, the second task is inserted in front of the first task; if $t_2 \geq t_1$, the second task is placed behind the first task; the positions of all tasks in sub-table 1 are adjusted in the same way to generate running table 1, and running table 2, …, running table Z are generated likewise; the tasks in running table 1, running table 2, …, running table Z are then concatenated in sequence into a single list to generate a new batch task list;
the task computing module creation
Figure DEST_PATH_IMAGE019
The number of the threads is increased by the number of the threads,
Figure 693468DEST_PATH_IMAGE020
the running speed of the program is maximized by setting the correct number of threads;
the intelligent analysis module is pre-installed with an analysis rule; a temporary task running queue is established from the new batch task list generated by the task allocation module: the first task in the new batch task list is added to the temporary task running queue, then the second task is added, and the CPU occupancy of the first and second tasks is analyzed; if the CPU occupancy is greater than or equal to 50%, the second task is kicked out of the temporary task running queue, only the first task remains in the temporary queue, and the task running queue is generated; if the CPU occupancy is less than 50%, the second task is retained and the temporary queue contains the first and second tasks; the third task is then added and the CPU occupancy of the first, second and third tasks is analyzed; if it is greater than or equal to 50%, the third task is kicked out, the temporary queue contains the first and second tasks, and the task running queue is generated; if it is less than 50%, the third task is retained and the temporary queue contains the first, second and third tasks; and so on, until the final task running queue is generated;
the above-mentioned
Figure 430611DEST_PATH_IMAGE004
Represents L to F
Figure DEST_PATH_IMAGE021
The negative bias, F is the initial value of the model, L is the mean square error,
Figure 392750DEST_PATH_IMAGE006
=arg
Figure 427702DEST_PATH_IMAGE007
representing the learning process of the t-th tree to which is fitted
Figure 339158DEST_PATH_IMAGE004
Given a negative bias, T is the number of CARTs;
the condition formula in the task sequencing module is
Figure 168573DEST_PATH_IMAGE022
Wherein
Figure DEST_PATH_IMAGE023
Is the complex conjugate of the step size.
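The boosting loop of claim 1 can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes a squared-error loss (so the negative gradient $\tilde{y}_i$ is simply the residual $y_i - F_{t-1}(x_i)$), uses decision stumps as the CART base learners, and takes a fixed step $\rho$ in place of the line search:

```python
import numpy as np

def fit_stump(x, r):
    """Least-squares decision stump: the simplest CART base learner."""
    best = None
    for s in x:
        left, right = r[x <= s], r[x > s]
        if len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, left.mean(), right.mean())
    _, s, lv, rv = best
    return lambda z: np.where(z <= s, lv, rv)

def gbdt_fit(x, y, T=20, rho=0.5):
    """T boosting rounds: fit each stump to the negative gradient of the
    squared loss (the residual y - F), then step by a fixed rho."""
    F = np.full(len(y), y.mean())         # F_0: constant baseline model
    for _ in range(T):
        r = y - F                         # response (negative gradient)
        F = F + rho * fit_stump(x, r)(x)  # F_t = F_{t-1} + rho * h(x; w_t)
    return F
```

With each round, the residual shrinks geometrically on separable data, which is what the step-size/update formulas in the claim formalize.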
2. The batch task computing system based on intelligent analysis of claim 1, wherein the configuration information acquired by the task acquisition module includes a task identifier, a task type and task parameters.
3. The batch task computing system based on intelligent analysis of claim 1, wherein the number of threads $X$ created by the task computing module is the optimal number of threads.
4. The batch task computing system based on intelligent analysis of claim 1, wherein the number of tasks in each of sub-table 1, sub-table 2, …, sub-table Z in the task allocation module is 100.
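The sub-table mechanism of claims 1 and 4 can be sketched as follows. This is a minimal sketch, not the patented implementation: `runtime` is a hypothetical function returning a task's estimated running time, and the 100-task sub-table size of claim 4 is the default chunk size:

```python
def build_batch_list(tasks, runtime, chunk=100):
    """Split the ordered task list into sub-tables of `chunk` tasks, order
    each sub-table by running time using the front/back insertion rule
    from claim 1, then concatenate the running tables in sequence."""
    run_tables = []
    for i in range(0, len(tasks), chunk):
        table = []
        for task in tasks[i:i + chunk]:
            # insert in front of the first queued task that runs longer,
            # otherwise place at the back (shortest-running-time first)
            pos = next((j for j, t in enumerate(table)
                        if runtime(task) < runtime(t)), len(table))
            table.insert(pos, task)
        run_tables.append(table)
    return [t for table in run_tables for t in table]
```

The repeated front/back insertion amounts to an insertion sort of each sub-table by running time, so each running table executes its shortest tasks first.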
5. The batch task computing system based on intelligent analysis according to claim 1, wherein the specific flow of the timing unit is as follows: setting the working period of the timing unit to be G minutes, screening the finished tasks in the cache unit after the G minutes are reached, and then clearing cache data corresponding to the finished tasks.
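The timing unit's screening step in claim 5 can be sketched as follows. This is a minimal sketch: the cache unit is modeled as a dict from task identifier to configuration data, `is_completed` is a hypothetical predicate, and the G-minute period itself is assumed to be driven by an external timer that calls this function once per cycle:

```python
def clear_completed(cache, is_completed):
    """One screening pass of the timing unit: find the completed tasks in
    the cache unit and clear their cached data, freeing cache space."""
    done = [task for task in cache if is_completed(task)]
    for task in done:
        del cache[task]
    return done
```

Returning the cleared identifiers makes each G-minute pass easy to log or audit; only the entries for still-running tasks survive in the cache.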
CN202210340398.8A 2022-04-02 2022-04-02 Batch task computing system based on intelligent analysis Active CN114416325B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210340398.8A CN114416325B (en) 2022-04-02 2022-04-02 Batch task computing system based on intelligent analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210340398.8A CN114416325B (en) 2022-04-02 2022-04-02 Batch task computing system based on intelligent analysis

Publications (2)

Publication Number Publication Date
CN114416325A CN114416325A (en) 2022-04-29
CN114416325B true CN114416325B (en) 2022-08-26

Family

ID=81264052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210340398.8A Active CN114416325B (en) 2022-04-02 2022-04-02 Batch task computing system based on intelligent analysis

Country Status (1)

Country Link
CN (1) CN114416325B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150058858A1 (en) * 2013-08-21 2015-02-26 Hasso-Platt ner-Institut fur Softwaresystemtechnik GmbH Dynamic task prioritization for in-memory databases

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7610111B2 (en) * 2007-02-13 2009-10-27 Tech Semiconductor Singapore Pte Ltd Method and system for wafer lot order
US9141430B2 (en) * 2012-04-30 2015-09-22 Hewlett-Packard Development Company, L.P. Scheduling mapreduce job sets
US10810043B2 (en) * 2017-05-04 2020-10-20 Salesforce.Com, Inc. Systems, methods, and apparatuses for implementing a scheduler and workload manager with cyclical service level target (SLT) optimization
CN110119307B (en) * 2018-02-05 2022-09-13 上海交通大学 Data processing request processing method and device, storage medium and electronic device
CN111078396B (en) * 2019-11-22 2023-12-19 厦门安胜网络科技有限公司 Distributed data access method and system based on multitasking examples
CN113238861A (en) * 2021-05-08 2021-08-10 北京天空卫士网络安全技术有限公司 Task execution method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150058858A1 (en) * 2013-08-21 2015-02-26 Hasso-Platt ner-Institut fur Softwaresystemtechnik GmbH Dynamic task prioritization for in-memory databases

Also Published As

Publication number Publication date
CN114416325A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
Tong et al. A scheduling scheme in the cloud computing environment using deep Q-learning
CN105956021B (en) A kind of automation task suitable for distributed machines study parallel method and its system
CN104317658B (en) A kind of loaded self-adaptive method for scheduling task based on MapReduce
CN109933306B (en) Self-adaptive hybrid cloud computing framework generation method based on operation type recognition
CN113157413B (en) Deep learning task resource optimization configuration method and system based on service quality requirement
CN109547546B (en) Scheduling method of request task and scheduling center server
CN106202431A (en) A kind of Hadoop parameter automated tuning method and system based on machine learning
CN107908536B (en) Performance evaluation method and system for GPU application in CPU-GPU heterogeneous environment
CN106874112B (en) Workflow backfilling method combined with load balancing
CN109445386B (en) Cloud manufacturing task shortest production time scheduling method based on ONBA
CN108170531B (en) Cloud data center request flow scheduling method based on deep belief network
WO2023093375A1 (en) Computing resource acquisition method and apparatus, electronic device, and storage medium
CN110705716A (en) Multi-model parallel training method
CN110990121A (en) Kubernetes scheduling strategy based on application portrait
CN108132840B (en) Resource scheduling method and device in distributed system
CN114416325B (en) Batch task computing system based on intelligent analysis
CN108519908A (en) A kind of task dynamic management approach and device
CN113010296B (en) Formalized model based task analysis and resource allocation method and system
CN109086976B (en) Task allocation method for crowd sensing
KR20110037184A (en) Pipelining computer system combining neuro-fuzzy system and parallel processor, method and apparatus for recognizing objects using the computer system in images
CN113032367A (en) Dynamic load scene-oriented cross-layer configuration parameter collaborative tuning method and system for big data system
CN111190704A (en) Task classification processing method based on big data processing framework
CN111309821B (en) Task scheduling method and device based on graph database and electronic equipment
CN112070162A (en) Multi-class processing task training sample construction method, device and medium
Du et al. OctopusKing: A TCT-aware task scheduling on spark platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant