CN108446174B - Multi-core job scheduling method based on resource pre-allocation and public boot agent - Google Patents

Multi-core job scheduling method based on resource pre-allocation and public boot agent

Info

Publication number
CN108446174B
CN108446174B (application CN201810182628.6A)
Authority
CN
China
Prior art keywords
job
resource
jobs
information
scheduling
Prior art date
Legal status
Active
Application number
CN201810182628.6A
Other languages
Chinese (zh)
Other versions
CN108446174A (en)
Inventor
李康
孙涌
Current Assignee
Suzhou University
Original Assignee
Suzhou University
Priority date
Filing date
Publication date
Application filed by Suzhou University
Priority to CN201810182628.6A
Publication of CN108446174A
Application granted
Publication of CN108446174B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources, the resource being the memory
    • G06F 9/5027: Allocation of resources, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/5038: Allocation of resources, considering the execution order of a plurality of tasks, e.g. taking priority or time-dependency constraints into consideration
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/48: Indexing scheme relating to G06F9/48
    • G06F 2209/484: Precedence
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5011: Pool
    • G06F 2209/5021: Priority

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a multi-core job scheduling method based on resource pre-allocation and a public boot agent. The method standardizes the basic information, resource requirements, and state transitions of different types of jobs in a uniform manner; acquires the site resource configuration information and classifies the site resource information by level; obtains the demand type of user jobs according to the site resource usage provided by the current resource management system; submits boot agent jobs to the job scheduling queue of the user-specified site, taking the site's job scheduling queue information, the number of boot agent jobs to submit, the size of the boot agent, the user authentication information, and the shared directory for job running as parameters, and occupies computing resources of the same size as the boot agent; and creates a scheduling process to execute the user job according to the identification information of the job. The method allocates computing resources in a simple and time-efficient way, uses a public boot agent to schedule multi-core jobs, and greatly reduces memory resource consumption.

Description

Multi-core job scheduling method based on resource pre-allocation and public boot agent
Technical Field
The invention relates to the field of high-energy physics experiments, and in particular to a multi-core job scheduling method based on resource pre-allocation and a public boot agent.
Background
Distributed computing is a natural outcome of the development of computer technology, driven by the need to process and analyze the simulation data generated by high-energy physics experiments and the reconstructed data produced by data processing, and to provide physicists with a good environment for analysis and computation. Distributed computing integrates heterogeneous computing resources at different locations over a network to form a virtual supercomputer that provides strong computing power for large-scale computing jobs. Representative technologies include middleware, peer-to-peer transport, web services, grid computing, and cloud computing.
Research on multi-core job scheduling design and resource allocation for distributed computing systems started earlier in the high-energy physics community abroad, with extensive work on the structure of experimental jobs, job scheduling modes, and the allocation of multi-core resources; typical examples are the job processing systems of the CMS (Compact Muon Solenoid) and ATLAS (A Toroidal LHC ApparatuS) experiments at the Large Hadron Collider. In comparison, research on multi-core parallelism in the job processing system of JUNO (Jiangmen Underground Neutrino Observatory) is relatively limited; the most representative work is the JUNO parallel simulation framework of the Institute of High Energy Physics of the Chinese Academy of Sciences.
The job processing workflow of a high-energy physics experiment mainly consists of job submission, resource allocation, and job processing; job processing covers job scheduling, job execution, and result output, and is the core of a high-energy physics distributed computing system. As the volume of experimental data and the complexity of events keep growing, the processing time of each job lengthens and its memory consumption rises sharply, so the experiment's conventional single-core job processing mode can hardly satisfy the memory requirement of each single-core job. One objective of the present invention is to provide a resource allocation strategy that is simple and consumes little time; another is to optimize user access to distributed computing resources and to realize the scheduling and parallel execution of multi-core jobs.
Disclosure of Invention
The invention aims to overcome the above problems in the prior art and provides a multi-core job scheduling method based on resource pre-allocation and a public boot agent, which avoids the shortage of memory resources in the single-core processing mode of distributed computing and improves the utilization rate of distributed computing resources and the job processing efficiency.
In order to achieve the above technical purpose and technical effect, the invention is realized through the following technical solution:
a multi-core job scheduling method based on resource pre-allocation and public boot agent comprises the following steps:
Step 1) User job classification
Standardizing the basic information, resource requirements, and state transitions of different types of jobs in a uniform manner, and classifying user jobs with the same characteristics into the same job queue by analyzing the demand characteristics of the user jobs, so as to form standardized jobs;
Step 2) Resource state acquisition
Acquiring the site resource configuration information and classifying the site resource information by level; acquiring the resource states of the single-core and multi-core queues; obtaining the demand type of user jobs supported by each queue according to the site resource usage provided by the current resource management system; matching the resource information against the job queue requirements and recording the resource characteristics of the queues that meet the requirements;
Step 3) Distributed resource allocation
Detecting the job processing environment of the site specified by the user job; according to the resource requirements of the current job waiting queue and the running state of the current boot agents provided by the resource management system, taking the job scheduling queue information of the site, the number of boot agent jobs to submit, the size of the boot agent, the user authentication information, and the shared directory for job running as parameters, submitting boot agent jobs to the job scheduling queue of the user-specified site through the public resource access interface, and occupying computing resources of the same size as the boot agent, so that user jobs can be pulled;
Step 4) Job scheduling
Detecting the job waiting queues that meet the resource requirements; according to the detection result, randomly matching user jobs that meet the resource requirements in the job waiting queue, taking the currently available computing resources as the main basis; adding successfully matched jobs to the execution queue and providing them with the basic information required for execution; monitoring the running status and resource state of the jobs, and updating the job states and the number of resources available to the boot agent in real time;
Step 5) Parallel execution of jobs
Initializing the resource sharing pool of the boot agent job; obtaining the job's input files, output log, and file configuration information from the resource management system according to the identification information of the job; obtaining the number of currently available resources; allocating computing resources to the current user job according to the resource type of the local site scheduling queue; creating a scheduling process in the resource pool to execute the user job; and monitoring the running status of the job in real time.
Step 6) Acquiring the job output result files, log files, and error information.
Further, the normalized job in step 1) includes the following three parts:
A. basic information
Describing the basic attributes of the job, including the job number, job type, owning user, job group, job priority, and associated file information;
B. demand information
Describing the storage, memory, and CPU resource information required for job scheduling and execution, including the execution environment, designated site, required CPU resources, storage space, memory requirement, and CPU running time;
C. status information
Describing the state of the user job over its life cycle and its actual resource usage, including the basic state of the job, creation time, execution start time, completion time, node information, actual memory consumption, and actual CPU running time.
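The three-part normalization above maps naturally onto a single job descriptor. The following Python sketch is purely illustrative: the class and field names (NormalizedJob, required_cores, and so on) are assumptions chosen for readability rather than identifiers defined by the invention, and the JobState values mirror the basic job states listed later in this description.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class JobState(Enum):
    """Basic job states named in the description: wait, match, run, end, fail."""
    WAIT = "wait"
    MATCH = "match"
    RUN = "run"
    END = "end"
    FAIL = "fail"


@dataclass
class NormalizedJob:
    # A. basic information
    job_id: int
    job_type: str                       # e.g. "simulation" or "reconstruction"
    owner: str
    job_group: str
    priority: int
    associated_files: List[str] = field(default_factory=list)

    # B. demand information
    execution_environment: str = ""
    designated_site: str = ""
    required_cores: int = 1             # single-core or multi-core requirement
    storage_space_mb: int = 0
    memory_mb: int = 0
    cpu_time_limit_s: int = 0

    # C. status information
    state: JobState = JobState.WAIT
    created_at: Optional[float] = None
    started_at: Optional[float] = None
    finished_at: Optional[float] = None
    node_info: str = ""
    memory_used_mb: int = 0
    cpu_time_used_s: float = 0.0
```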
Further, in step 3), a resource pre-allocation strategy is adopted: the boot agent job is sent to the designated site of the distributed computing platform as a resource reservation container; the size of the boot agent is set to the minimum of the maximum job core count supported by the scheduling queue and the core count required by the current largest job; and the number of boot agent jobs is determined by the resource state and the job queue information, calculated as:
pilotsToSubmit = max(0, min(totalSlots, totalTQJobs - totalWaitingPilots)),
where pilotsToSubmit is the number of boot agent jobs submitted in one cycle of the site agent, totalSlots is the number of resources of the site, totalTQJobs is the number of jobs waiting in the current queue, and totalWaitingPilots is the number of boot agents waiting to obtain resources.
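The allocation rule above translates directly into code. This sketch simply restates the pilotsToSubmit formula and the boot agent sizing rule; the SiteQueueState container and the function names are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass


@dataclass
class SiteQueueState:
    total_slots: int            # totalSlots: number of resources of the site
    total_tq_jobs: int          # totalTQJobs: jobs waiting in the current queue
    total_waiting_pilots: int   # totalWaitingPilots: boot agents still waiting for resources


def pilots_to_submit(q: SiteQueueState) -> int:
    """pilotsToSubmit = max(0, min(totalSlots, totalTQJobs - totalWaitingPilots))."""
    return max(0, min(q.total_slots, q.total_tq_jobs - q.total_waiting_pilots))


def boot_agent_size(queue_max_cores: int, largest_waiting_job_cores: int) -> int:
    """Boot agent size: the smaller of the queue's core limit and the largest waiting job's demand."""
    return min(queue_max_cores, largest_waiting_job_cores)


# 64 slots, 100 waiting jobs, 30 boot agents already waiting -> submit 64 boot agent jobs.
print(pilots_to_submit(SiteQueueState(64, 100, 30)))   # 64
# More boot agents waiting than jobs -> submit nothing this cycle.
print(pilots_to_submit(SiteQueueState(64, 10, 30)))    # 0
```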
Further, steps 4) to 5) complete the scheduling and execution of multi-core jobs in the public boot agent scheduling mode, in which job scheduling is moved from the computing site into the boot agent. Assume that M mixed jobs are waiting to be scheduled and that n_i, i ∈ [1, M], is the number of cores of the i-th job. If the site has an N-core boot agent, then when n_1 + ... + n_m ≤ N and 1 ≤ m ≤ M, the jobs job_1, ..., job_m can be scheduled by the boot agent and executed at the same time; if n_1 + ... + n_m = N, the boot agent's resources are fully occupied, otherwise resource fragments are generated.
Under the scheduling and execution of different types of jobs from multiple users, the job completion status and the resource utilization rate are used as the evaluation indexes of system performance. With N available resources at the site, n completed jobs, c_i the number of cores of the i-th job, and t_i the running time of the i-th job, the job resource utilization rate can be expressed as the ratio of the core-time actually consumed by the completed jobs, Σ_{i=1}^{n} c_i·t_i, to the total core-time provided by the N resources of the site over the scheduling period.
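To make the condition n_1 + ... + n_m ≤ N concrete, the sketch below fills one N-core boot agent from a mixed waiting list in queue order and computes the core-time utilization ratio described above. The greedy in-order packing, the helper names, and the explicit elapsed-time parameter are illustrative assumptions; the patent itself only requires that matching take the currently available resources as the main basis.

```python
from typing import List, Tuple


def fill_boot_agent(core_counts: List[int], agent_cores: int) -> Tuple[List[int], int]:
    """Pick a prefix of waiting jobs whose total core demand fits one N-core boot agent.

    Returns the indices of the selected jobs and the number of leftover cores,
    i.e. the resource fragment left when the selected cores sum to less than N.
    """
    selected, used = [], 0
    for i, cores in enumerate(core_counts):
        if used + cores > agent_cores:
            break
        selected.append(i)
        used += cores
    return selected, agent_cores - used


def resource_utilization(cores: List[int], run_times: List[float],
                         site_cores: int, elapsed_time: float) -> float:
    """Core-time consumed by the completed jobs divided by the core-time the site offered."""
    consumed = sum(c * t for c, t in zip(cores, run_times))
    return consumed / (site_cores * elapsed_time)


# An 8-core boot agent and mixed 4-, 2-, 2- and 1-core jobs: the first three saturate it.
picked, fragment = fill_boot_agent([4, 2, 2, 1], agent_cores=8)
print(picked, fragment)   # [0, 1, 2] 0
print(resource_utilization([4, 2, 2], [100.0, 80.0, 60.0],
                           site_cores=8, elapsed_time=100.0))   # 0.85
```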
Further, the basic states of the job include wait, match, run, end, and fail.
The invention has the beneficial effects that:
the invention can avoid the phenomenon of insufficient memory resources in a distributed computing single-core processing mode, improves the utilization rate of distributed computing resources and the operation processing efficiency, has simple computing resource distribution mode and short consumed time, realizes the scheduling of multi-core operation by using a public guide agent, and greatly reduces the consumption problem of memory resources.
Drawings
FIG. 1 illustrates a resource pre-allocation strategy based on a boot agent according to the present invention;
FIG. 2 is a diagram of a job scheduling model for a common boot agent of the present invention;
FIG. 3 is a specific flowchart of the multi-core job scheduling method according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
A multi-core job scheduling method based on resource pre-allocation and public boot agent comprises the following steps:
Step 1) User job classification
Standardizing the basic information, resource requirements, and state transitions of different types of jobs in a uniform manner, and classifying user jobs with the same characteristics into the same job queue by analyzing the demand characteristics of the user jobs, so as to form standardized jobs;
Step 2) Resource state acquisition
Acquiring the site resource configuration information and classifying the site resource information by level; acquiring the resource states of the single-core and multi-core queues; obtaining the demand type of user jobs supported by each queue according to the site resource usage provided by the current resource management system; matching the resource information against the job queue requirements and recording the resource characteristics of the queues that meet the requirements;
Step 3) Distributed resource allocation
Detecting the job processing environment of the site specified by the user job; according to the resource requirements of the current job waiting queue and the running state of the current boot agents provided by the resource management system, taking the job scheduling queue information of the site, the number of boot agent jobs to submit, the size of the boot agent, the user authentication information, and the shared directory for job running as parameters, submitting boot agent jobs to the job scheduling queue of the user-specified site through the public resource access interface, and occupying computing resources of the same size as the boot agent, so that user jobs can be pulled;
Step 4) Job scheduling
Detecting the job waiting queues that meet the resource requirements; according to the detection result, randomly matching user jobs that meet the resource requirements in the job waiting queue, taking the currently available computing resources as the main basis; adding successfully matched jobs to the execution queue and providing them with the basic information required for execution; monitoring the running status and resource state of the jobs, and updating the job states and the number of resources available to the boot agent in real time;
Step 5) Parallel execution of jobs
Initializing the resource sharing pool of the boot agent job; obtaining the job's input files, output log, and file configuration information from the resource management system according to the identification information of the job; obtaining the number of currently available resources; allocating computing resources to the current user job according to the resource type (single-core or multi-core) of the local site scheduling queue; creating a scheduling process in the resource pool to execute the user job; and monitoring the running status of the job in real time (a minimal sketch of this in-agent execution is given after step 6 below).
Step 6) Acquiring the job output result files, log files, and error information.
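As a concrete reading of step 5, the sketch below shows how a boot agent's shared resource pool might hand cores to matched jobs and run each in its own scheduling process. It is a minimal illustration built on Python's multiprocessing module; the class name, method names, and bookkeeping are assumptions, not the patent's own implementation.

```python
import multiprocessing as mp
from typing import Callable, Dict, List, Tuple


class BootAgentPool:
    """Shared resource pool of one boot agent: hands out cores, runs jobs, reclaims cores."""

    def __init__(self, total_cores: int):
        self.total_cores = total_cores
        self.free_cores = total_cores
        self.running: Dict[int, Tuple[mp.Process, int]] = {}   # job_id -> (process, cores)

    def try_start(self, job_id: int, cores: int, target: Callable, args: tuple = ()) -> bool:
        """Start a scheduling process for the job if enough cores are free."""
        if cores > self.free_cores:
            return False                       # not enough cores: the job stays in the waiting queue
        proc = mp.Process(target=target, args=args, name=f"job-{job_id}")
        proc.start()                           # scheduling process created inside the resource pool
        self.running[job_id] = (proc, cores)
        self.free_cores -= cores
        return True

    def reap(self) -> List[int]:
        """Collect finished jobs and return their cores to the pool."""
        done = [jid for jid, (p, _) in self.running.items() if not p.is_alive()]
        for jid in done:
            proc, cores = self.running.pop(jid)
            proc.join()
            self.free_cores += cores
        return done
```

In this reading, free_cores is the per-agent available-resource count that steps 4) and 5) update in real time as jobs are matched and complete.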
The normalized job in step 1) comprises the following three parts:
A. basic information
Describing the basic attributes of the job, including the job number, job type, owning user, job group, job priority, and associated file information;
B. demand information
Describing the storage, memory, and CPU resource information required for job scheduling and execution, including the execution environment, designated site, required CPU resources, storage space, memory requirement, and CPU running time;
C. status information
Describing the state of the user job over its life cycle and its actual resource usage, including the basic state of the job, creation time, execution start time, completion time, node information, actual memory consumption, and actual CPU running time.
In step 3), a resource pre-allocation strategy is adopted; the specific design is shown in FIG. 1. The boot agent job is sent to the designated site of the distributed computing platform as a resource reservation container, and the size and number of the boot agent jobs are the key factors affecting the utilization rate of the distributed computing resources. The size of the boot agent is set to the minimum of the maximum job core count supported by the scheduling queue and the core count required by the current largest job; the number of boot agent jobs is determined by the resource state and the job queue information, calculated as:
pilotsToSubmit = max(0, min(totalSlots, totalTQJobs - totalWaitingPilots)),
where pilotsToSubmit is the number of boot agent jobs submitted in one cycle of the site agent, totalSlots is the number of resources of the site, totalTQJobs is the number of jobs waiting in the current queue, and totalWaitingPilots is the number of boot agents waiting to obtain resources.
Steps 4) to 5) complete the scheduling and execution of multi-core jobs in the public boot agent scheduling mode, in which job scheduling is moved from the computing site into the boot agent; a diagram of this mode is shown in FIG. 2. Assume that M mixed jobs are waiting to be scheduled and that n_i, i ∈ [1, M], is the number of cores of the i-th job. If the site has an N-core boot agent, then when n_1 + ... + n_m ≤ N and 1 ≤ m ≤ M, the jobs job_1, ..., job_m can be scheduled by the boot agent and executed at the same time; if n_1 + ... + n_m = N, the boot agent's resources are fully occupied, otherwise resource fragments are generated.
Under the scheduling and execution of different types of jobs from multiple users, the job completion status and the resource utilization rate are used as the evaluation indexes of system performance. With N available resources at the site, n completed jobs, c_i the number of cores of the i-th job, and t_i the running time of the i-th job, the job resource utilization rate can be expressed as the ratio of the core-time actually consumed by the completed jobs, Σ_{i=1}^{n} c_i·t_i, to the total core-time provided by the N resources of the site over the scheduling period.
The basic states of the job include wait, match, run, end, and fail.
In this embodiment, the invention is explained in detail with reference to FIG. 3:
1) Initialize the job queue according to the job set Job = {job_1, job_2, ..., job_i, ..., job_n}, classify the jobs, and add the resource requirements and user priorities of the jobs in the queue;
2) Traverse the site set S at intervals of the cycle time, and detect and count the resource state of S;
3) According to the resource requirements of the jobs in the job queue, submit the boot agent job set P = {pt_1, pt_2, ..., pt_i, ..., pt_n} to the working nodes WN_j of the specified site set S using the resource pre-allocation strategy of the invention;
4) The working node WN_j allocates computing resources to the boot agent pt_i; pt_i starts the job agent and initializes the remaining resources;
5) Through the matching service, the waiting jobs in the job queue are dynamically matched according to the number of available resources;
6) After job job_k is matched successfully, the resource pool actively acquires the parameter information of job_k, executes job_k, and updates the remaining resources;
7) After job job_k is executed successfully, the remaining resources are updated and the job information is fed back; if an error occurs in the job, pt_i ends and releases the computing resources;
8) When pt_i reaches the end of its life cycle, pt_i ends, actively releases the computing resources of the working node, and feeds the boot agent information back to the monitoring system.
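Read end to end, steps 1) to 8) amount to one scheduling cycle per site. The toy simulation below compresses that cycle into a single function so the bookkeeping is visible; it assumes jobs are matched greedily in queue order and that every submitted boot agent is filled within the same cycle, neither of which the embodiment requires, and every name in it is a hypothetical placeholder.

```python
def simulate_cycle(total_slots: int, waiting_jobs: list, waiting_pilots: int) -> dict:
    """Toy, single-site, single-cycle walk-through of embodiment steps 2) to 8).

    waiting_jobs holds the core counts of the jobs currently in the waiting queue.
    """
    # Step 3: the resource pre-allocation formula decides how many boot agents to submit.
    pilots = max(0, min(total_slots, len(waiting_jobs) - waiting_pilots))
    # Step 3 (sizing): each boot agent reserves min(queue core limit, largest waiting job).
    agent_cores = min(total_slots, max(waiting_jobs, default=1))

    executed, fragment_cores = [], 0
    for _ in range(pilots):
        free = agent_cores                      # step 4: worker node hands cores to the boot agent
        while waiting_jobs and waiting_jobs[0] <= free:
            cores = waiting_jobs.pop(0)         # steps 5-6: match the job, pull its parameters, run it
            free -= cores
            executed.append(cores)
        fragment_cores += free                  # step 7: whatever is left over becomes a fragment
    # Step 8: boot agents end, release worker-node cores and report back to monitoring.
    return {"pilots_submitted": pilots, "jobs_executed": len(executed),
            "fragment_cores": fragment_cores}


# 16 slots, six waiting jobs, three boot agents already waiting for resources.
print(simulate_cycle(total_slots=16, waiting_jobs=[4, 2, 2, 8, 1, 1], waiting_pilots=3))
# {'pilots_submitted': 3, 'jobs_executed': 6, 'fragment_cores': 6}
```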
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A multi-core job scheduling method based on resource pre-allocation and a public boot agent, characterized by comprising the following steps:
Step 1) User job classification
Standardizing the basic information, resource requirements, and state transitions of different types of jobs in a uniform manner, and classifying user jobs with the same characteristics into the same job queue by analyzing the demand characteristics of the user jobs, so as to form standardized jobs;
Step 2) Resource state acquisition
Acquiring the site resource configuration information and classifying the site resource information by level; acquiring the resource states of the single-core and multi-core queues; obtaining the demand type of user jobs supported by each queue according to the site resource usage provided by the current resource management system; matching the resource information against the job queue requirements and recording the resource characteristics of the queues that meet the requirements;
Step 3) Distributed resource allocation
Detecting the job processing environment of the site specified by the user job; according to the resource requirements of the current job waiting queue and the running state of the current boot agents provided by the resource management system, taking the job scheduling queue information of the site, the number of boot agent jobs to submit, the size of the boot agent, the user authentication information, and the shared directory for job running as parameters, submitting boot agent jobs to the job scheduling queue of the user-specified site through the public resource access interface, and occupying computing resources of the same size as the boot agent, so that user jobs can be pulled;
Step 4) Job scheduling
Detecting the job waiting queues that meet the resource requirements; according to the detection result, randomly matching user jobs that meet the resource requirements in the job waiting queue, taking the currently available computing resources as the main basis; adding successfully matched jobs to the execution queue and providing them with the basic information required for execution; monitoring the running status and resource state of the jobs, and updating the job states and the number of resources available to the boot agent in real time;
Step 5) Parallel execution of jobs
Initializing the resource sharing pool of the boot agent job; obtaining the job's input files, output log, and file configuration information from the resource management system according to the identification information of the job; obtaining the number of currently available resources; allocating computing resources to the current user job according to the resource type of the local site scheduling queue; creating a scheduling process in the resource pool to execute the user job; and monitoring the running status of the job in real time;
and step 6) acquiring the job output result files, log files, and error information.
2. The multi-core job scheduling method based on resource pre-allocation and a public boot agent according to claim 1, wherein the normalized job in step 1) comprises the following three parts:
A. basic information
Describing the basic attributes of the job, including the job number, job type, owning user, job group, job priority, and associated file information;
B. demand information
Describing the storage, memory, and CPU resource information required for job scheduling and execution, wherein the information comprises the execution environment, designated site, required CPU resources, storage space, and CPU running time;
C. status information
Describing the state of the user job over its life cycle and its actual resource usage, including the basic state of the job, creation time, execution start time, completion time, node information, actual memory consumption, and actual CPU running time.
3. The multi-core job scheduling method based on resource pre-allocation and a public boot agent according to claim 1, wherein a resource pre-allocation strategy is adopted in step 3): the boot agent job is sent to the designated site of the distributed computing platform as a resource reservation container; the size of the boot agent is set to the minimum of the maximum job core count supported by the scheduling queue and the core count required by the current largest job; and the number of boot agent jobs is determined by the resource state and the job queue information, calculated as:
pilotsToSubmit = max(0, min(totalSlots, totalTQJobs - totalWaitingPilots)),
where pilotsToSubmit is the number of boot agent jobs submitted in one cycle of the site agent, totalSlots is the number of resources of the site, totalTQJobs is the number of jobs waiting in the current queue, and totalWaitingPilots is the number of boot agents waiting to obtain resources.
4. The multi-core job scheduling method based on resource pre-allocation and a public boot agent according to claim 1, wherein steps 4) to 5) complete the scheduling and execution of multi-core jobs in the public boot agent scheduling mode, and job scheduling is moved from the computing site into the boot agent; assuming that M mixed jobs are waiting to be scheduled and that n_i, i ∈ [1, M], is the number of cores of the i-th job, if the site has an N-core boot agent, then when n_1 + ... + n_m ≤ N and 1 ≤ m ≤ M, the jobs job_1, ..., job_m can be scheduled by the boot agent and executed at the same time; if n_1 + ... + n_m = N, the boot agent's resources are fully occupied, otherwise resource fragments are generated;
under the scheduling and execution of different types of jobs from multiple users, the job completion status and the resource utilization rate are used as the evaluation indexes of system performance; with N available resources at the site, n completed jobs, c_i the number of cores of the i-th job, and t_i the running time of the i-th job, the job resource utilization rate is expressed as the ratio of the core-time actually consumed by the completed jobs, Σ_{i=1}^{n} c_i·t_i, to the total core-time provided by the N resources of the site over the scheduling period.
5. The method of claim 2, wherein the basic states of the job include wait, match, run, end, and fail.
CN201810182628.6A 2018-03-06 2018-03-06 Multi-core job scheduling method based on resource pre-allocation and public boot agent Active CN108446174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810182628.6A CN108446174B (en) 2018-03-06 2018-03-06 Multi-core job scheduling method based on resource pre-allocation and public boot agent

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810182628.6A CN108446174B (en) 2018-03-06 2018-03-06 Multi-core job scheduling method based on resource pre-allocation and public boot agent

Publications (2)

Publication Number Publication Date
CN108446174A CN108446174A (en) 2018-08-24
CN108446174B true CN108446174B (en) 2022-03-11

Family

ID=63193712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810182628.6A Active CN108446174B (en) 2018-03-06 2018-03-06 Multi-core job scheduling method based on resource pre-allocation and public boot agent

Country Status (1)

Country Link
CN (1) CN108446174B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767199B (en) * 2020-06-24 2023-09-19 中国工商银行股份有限公司 Resource management method, device, equipment and system based on batch job
CN112905562A (en) * 2021-02-04 2021-06-04 中国工商银行股份有限公司 Host job submitting method and device
CN114168314B (en) * 2021-10-27 2022-09-20 厦门国际银行股份有限公司 Multithreading concurrent data index batch processing method and device and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073908B2 (en) * 2003-12-10 2011-12-06 Zerotouchdigital, Inc. Method and apparatus for utility computing in ad-hoc and configured peer-to-peer networks
CN102549984A (en) * 2009-05-05 2012-07-04 思杰系统有限公司 Systems and methods for packet steering in a multi-core architecture
CN103995735A (en) * 2013-02-14 2014-08-20 韩国电子通信研究院 Device and method for scheduling working flow
CN104102548A (en) * 2014-08-04 2014-10-15 北京京东尚科信息技术有限公司 Task resource scheduling processing method and task resource scheduling processing system
CN105739949A (en) * 2014-12-26 2016-07-06 英特尔公司 Techniques for cooperative execution between asymmetric processor cores
CN105786612A (en) * 2014-12-23 2016-07-20 杭州华为数字技术有限公司 Resource management method and apparatus
CN106797382A (en) * 2014-08-01 2017-05-31 格林伊登美国控股有限责任公司 For the system and method for the route based on event of call center
CN106874084A (en) * 2017-01-04 2017-06-20 北京百度网讯科技有限公司 A kind of method and apparatus of distributed work flow scheduling

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156505B (en) * 2014-07-22 2017-12-15 中国科学院信息工程研究所 A kind of Hadoop cluster job scheduling method and devices based on user behavior analysis
US10379899B2 (en) * 2015-11-18 2019-08-13 Nxp Usa, Inc. Systems and methods for frame presentation and modification in a networking environment
CN107450983A (en) * 2017-07-14 2017-12-08 中国石油大学(华东) It is a kind of based on the hierarchical network resource regulating method virtually clustered and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073908B2 (en) * 2003-12-10 2011-12-06 Zerotouchdigital, Inc. Method and apparatus for utility computing in ad-hoc and configured peer-to-peer networks
CN102549984A (en) * 2009-05-05 2012-07-04 思杰系统有限公司 Systems and methods for packet steering in a multi-core architecture
CN103995735A (en) * 2013-02-14 2014-08-20 韩国电子通信研究院 Device and method for scheduling working flow
CN106797382A (en) * 2014-08-01 2017-05-31 格林伊登美国控股有限责任公司 For the system and method for the route based on event of call center
CN104102548A (en) * 2014-08-04 2014-10-15 北京京东尚科信息技术有限公司 Task resource scheduling processing method and task resource scheduling processing system
CN105786612A (en) * 2014-12-23 2016-07-20 杭州华为数字技术有限公司 Resource management method and apparatus
CN105739949A (en) * 2014-12-26 2016-07-06 英特尔公司 Techniques for cooperative execution between asymmetric processor cores
CN106874084A (en) * 2017-01-04 2017-06-20 北京百度网讯科技有限公司 A kind of method and apparatus of distributed work flow scheduling

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"A self-tuning PSO for job-shop scheduling problems";Pisut Pongchairerks;《International Journal of Operational Research》;20140131;第19卷(第1期);第96-113页 *
"Linux内核进程调度schedule深入理解";Farmwang;《https://blog.csdn.net/farmwang/article/details/70160131》;20170413;第1-7页 *
"支持对称多核处理器的嵌入式实时操作系统研究与实现";许璐璐;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170315(第03期);第I137-178页 *

Also Published As

Publication number Publication date
CN108446174A (en) 2018-08-24

Similar Documents

Publication Publication Date Title
Abd Elaziz et al. Advanced optimization technique for scheduling IoT tasks in cloud-fog computing environments
Gu et al. Liquid: Intelligent resource estimation and network-efficient scheduling for deep learning jobs on distributed GPU clusters
US20200396311A1 (en) Provisioning using pre-fetched data in serverless computing environments
US11681547B2 (en) File operation task optimization
CN110389820B (en) Private cloud task scheduling method for resource prediction based on v-TGRU model
Xu et al. Adaptive task scheduling strategy based on dynamic workload adjustment for heterogeneous Hadoop clusters
CN108446174B (en) Multi-core job scheduling method based on resource pre-allocation and public boot agent
EP3857401B1 (en) Methods for automatic selection of degrees of parallelism for efficient execution of queries in a database system
US20090031312A1 (en) Method and Apparatus for Scheduling Grid Jobs Using a Dynamic Grid Scheduling Policy
He et al. Parallel implementation of classification algorithms based on MapReduce
CN103500123B (en) Parallel computation dispatching method in isomerous environment
Tang et al. Energy efficient and deadline satisfied task scheduling in mobile cloud computing
US20120315966A1 (en) Scheduling method and system, computing grid, and corresponding computer-program product
CN102812439A (en) Power management in a multi-processor computer system
GB2507038A (en) Scheduling jobs weighted according to the memory usage using a knapsack problem.
CN109840142A (en) Thread control method, device, electronic equipment and storage medium based on cloud monitoring
Gao et al. Reduct algorithm based execution times prediction in knowledge discovery cloud computing environment.
CN112596904A (en) Quantum service resource calling optimization method based on quantum cloud platform
CN108536528A (en) Using the extensive network job scheduling method of perception
CN115408152A (en) Adaptive resource matching obtaining method and system
CN104793993A (en) Cloud computing task scheduling method of artificial bee colony particle swarm algorithm based on Levy flight
Mohamed et al. Hadoop-MapReduce job scheduling algorithms survey
Maroulis et al. Express: Energy efficient scheduling of mixed stream and batch processing workloads
CN113608858A (en) MapReduce architecture-based block task execution system for data synchronization
Mana A feature based comparison study of big data scheduling algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant