CN113238853A - Serverless computing scheduling system and method based on function intermediate representation

Serverless computing scheduling system and method based on function intermediate representation

Info

Publication number
CN113238853A
CN113238853A (application CN202110659654.5A)
Authority
CN
China
Prior art keywords
function
cpu core
functions
cpu
core
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110659654.5A
Other languages
Chinese (zh)
Other versions
CN113238853B (en)
Inventor
李超
张路
冯伟琪
于峥
王鑫凯
过敏意
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202110659654.5A
Publication of CN113238853A
Application granted
Publication of CN113238853B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

A serverless computing function scheduling system and method based on function intermediate representation comprises a function feature acquisition module, a function aggregation module, a mapping module and an adjustment module, wherein: the function feature acquisition module collects the programming-language features, runtime-stage features and micro-architectural features of each function; the function aggregation module aggregates functions into different sets according to these features; the mapping module classifies CPU cores into initialization-stage CPU cores, runtime-stage CPU cores and hybrid CPU cores according to the function sets; the adjustment module deploys each function to a designated core according to its features and the core classification, and periodically migrates functions and adjusts the frequency of each core as the features change, so that serverless computing functions run at their optimal operating points. The system can improve the energy efficiency of a serverless computing system.

Description

Serverless computing scheduling system and method based on function intermediate representation
Technical Field
The invention relates to a technology in the field of cloud computing, in particular to a serverless computing scheduling system and method based on function intermediate representation.
Background
Serverless computing is an emerging cloud computing paradigm. It decomposes traditional monolithic applications at fine granularity, frees users from tedious server management, greatly improves the flexibility of application deployment, and raises user productivity. Unlike traditional cloud computing workloads, serverless computing functions are heterogeneous, diverse and dynamic, which makes it challenging to design a power management mechanism that matches these complex characteristics. Existing scheduling strategies for cloud computing applications cannot achieve optimal scheduling of serverless functions because they are coarse-grained and lack a whole-system perspective. Existing schedulers operate at two levels: 1) macro-level request scheduling, which allocates the resources an application needs at the granularity of requests; and 2) micro-level dynamic hardware scheduling, which allocates the resources threads need at the granularity of instructions. Neither level can observe the following internal features of a serverless function: 1) its runtime environment, since serverless platforms let developers write functions in many programming languages; 2) its container life cycle, since a serverless function passes through several distinct stages while running; and 3) its resource usage, since functions perform different tasks and therefore need different kinds of resources.
If the multidimensional features of serverless functions are not taken into account, the system cannot configure a common optimal operating point for the several serverless functions running on the same processor core, so the core cannot run in its most energy-efficient state.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a serverless computing scheduling system and method based on function intermediate representation. The system can improve the energy efficiency of a serverless computing system.
The invention is realized by the following technical scheme:
the invention relates to a serverless computing scheduling system based on function intermediate expression, which comprises: the device comprises a function characteristic acquisition module, a function aggregation module, a mapping module and an adjustment module, wherein: the function characteristic acquisition module acquires programming language characteristics of the function, runtime stage characteristics of the function and micro-architecture characteristics of the function; the function aggregation module aggregates the functions into different function sets according to different characteristics; the mapping module divides the CPU core into an initialization stage CPU core, a runtime stage CPU core and a hybrid CPU core according to different function sets; the adjusting module deploys the function to the specified core according to the feature of the function and the classification of the CPU core, and periodically migrates the function according to the change of the feature of the function and adjusts the frequency of each core.
Technical effects
Compared with the prior art, the invention bridges the gap between upper-layer request-level scheduling and lower-layer instruction-level dynamic scheduling. The intermediate representation of a function describes its multidimensional features, and these features enable better scheduling and placement of serverless functions. The invention collects and aggregates the multidimensional features of functions and maps and adjusts CPU cores accordingly, ensuring as far as possible that each serverless function runs at its own optimal operating frequency, thereby improving the energy efficiency of the serverless computing system.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a functional feature acquisition module;
FIG. 3 is a schematic diagram of a function aggregation module;
FIG. 4 is a schematic diagram of a mapping module;
FIG. 5 is a schematic diagram of the adjustment module;
FIG. 6 is a diagram illustrating an overall implementation of the embodiment;
fig. 7-8 are schematic diagrams illustrating the effects of the embodiment.
Detailed Description
As shown in fig. 6, the present embodiment relates to a serverless computing scheduling system based on function intermediate representation, comprising a function feature acquisition module, a function aggregation module, a mapping module and an adjustment module, wherein: the function feature acquisition module collects the programming-language features, runtime-stage features and micro-architectural features of each function; the function aggregation module aggregates functions into different sets according to these features; the mapping module classifies CPU cores into initialization-stage CPU cores, runtime-stage CPU cores and hybrid CPU cores according to the function sets; and the adjustment module deploys each function to a designated core according to its features and the core classification, periodically migrating functions and adjusting the frequency of each core as the features change.
As shown in figs. 2 to 5, function feature acquisition means: first obtain the identifier of the function, then respectively obtain the micro-architectural features F_arch of the function from the hardware level of the serverless function, collect its runtime-stage features F_stage from the operating-system level, and collect its programming-language features F_lang from the application-software level; the collected multidimensional features are summarized into a tuple ⟨id, F_lang, F_stage, F_arch⟩ keyed by the function identifier.
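The per-function feature tuple described above can be sketched as a small data structure. The class and field names below (`FunctionFeatures`, `func_id`, etc.) are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FunctionFeatures:
    """Per-function feature tuple keyed by the container identifier."""
    func_id: str  # identifier of the function's container
    lang: str     # programming-language feature (software level)
    stage: str    # "init" or "runtime" (operating-system level)
    arch: str     # micro-architectural class, e.g. "cpu", "mem", "io"

# Example: a Python function still in its initialization stage.
f = FunctionFeatures(func_id="c01", lang="python", stage="init", arch="cpu")
```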
Function aggregation means: aggregating functions into different sets according to their features, specifically: first aggregate functions by their runtime-stage feature F_stage, classifying them into initialization-stage functions and runtime-stage functions; then further aggregate the initialization-stage functions by programming language F_lang to obtain the function sets S_lang, and aggregate the runtime-stage functions by micro-architectural feature F_arch to obtain the function sets S_arch.
An initialization-stage CPU core means: according to the functions in the sets S_lang, every CPU core onto which functions of the same programming language can be deployed is marked as an initialization-stage CPU core; every function to be run on such a core is in the initialization stage and they all share the same programming language.
A runtime-stage CPU core means: according to the functions in the sets S_arch, every CPU core onto which functions with the same micro-architectural features can be deployed is marked as a runtime-stage CPU core; every function to be run on such a core is in the runtime stage and they all share the same micro-architectural features.
A hybrid CPU core means: because the number of functions that can be deployed on each CPU core is limited, not all functions with the same features can be placed on the same core, so functions that cannot be deployed strictly by feature are deployed on hybrid CPU cores. Hybrid CPU cores comprise initialization hybrid CPU cores, runtime hybrid CPU cores, and initialization/runtime hybrid CPU cores, wherein: an initialization hybrid CPU core arises when some functions are in the initialization stage but cannot be co-located with functions of the same programming language; the remaining cores onto which such functions can be deployed are marked as initialization hybrid CPU cores, and every function on such a core is in the initialization stage but the programming languages differ. A runtime hybrid CPU core arises when runtime-stage functions cannot be co-located strictly by identical micro-architectural features; the remaining cores onto which such functions can be deployed are marked as runtime hybrid CPU cores, and every function on such a core is in the runtime stage but the micro-architectural features differ. An initialization/runtime hybrid CPU core arises when the remaining functions cannot be grouped onto the same core even by stage; the leftover initialization-stage and runtime-stage functions are then deployed together, and these cores are marked as initialization/runtime hybrid CPU cores.
Adjustment means: first deploy each function to its designated core according to the function's features and the core classification, and set each core to the optimal operating frequency of the functions it hosts; then periodically migrate functions as their features change so that the CPU-core chaos factor is minimized, and periodically adjust the frequency of each core so that it matches the optimal operating frequency of its functions.
Periodically migrating functions means: as functions run, the features of some of them change, and as newly arriving functions are admitted, it can no longer be guaranteed that the functions on a CPU core share the same features, so the number of hybrid CPU cores grows. The invention therefore periodically migrates the functions running on the CPU cores so that the CPU-core chaos factor of the serverless computing system is minimized.
The CPU-core chaos factor is the proportion of hybrid CPU cores among all active CPU cores in the serverless computing system, i.e. chaos = N_hybrid / N_active. Functions deployed on a hybrid core can rarely run at their optimal operating frequency, so reducing the chaos factor lets more functions run at their optimal frequency and improves the energy efficiency of the system.
As shown in fig. 1, when a user sends a function trigger request to a serverless computing system, a specific scheduling method of the serverless computing scheduling system includes:
Step 1, the function feature extraction module obtains the multidimensional features of a function, specifically: when the function runs in the serverless computing system, the module obtains the identifier of the container corresponding to the function and collects the function's features from three different levels: the micro-architectural features F_arch from the hardware level, the runtime-stage features F_stage from the operating-system level, and the programming-language features F_lang from the function-software level. The collected multidimensional features are summarized into a tuple ⟨id, F_lang, F_stage, F_arch⟩ keyed by the function identifier.
1) When the function is admitted to the serverless computing system, the function feature extraction module first obtains the programming language of the function from the software level by parsing the output of the `docker ps` command, which reveals the language the function is implemented in, and stores it as F_lang. This operation only needs to be performed once, when the function first enters the system.
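The patent only states that the language is recovered from `docker ps` output; one plausible way is to read the image-name column, as in this sketch. The image-naming convention and the sample row are assumptions:

```python
def language_from_docker_ps(line: str) -> str:
    """Guess a function's implementation language from one `docker ps` row,
    assuming images are named like 'openwhisk/action-python-v3.9'."""
    image = line.split()[1]  # second column of `docker ps` is the image name
    for lang in ("python", "java", "go", "ruby", "swift", "php", "rust"):
        if lang in image:
            return lang
    return "unknown"

# Hypothetical row as printed by `docker ps` (columns abbreviated).
row = "3f2a1b  openwhisk/action-python-v3.9  ...  Up 2 seconds  wsk0_42"
```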
2) Periodically (every 100 ms), the function feature extraction module obtains the micro-architectural features of the function from the hardware level by using the `perf` command to count specific events on a specific core: task-clock expresses CPU utilization, cache-references expresses memory bandwidth usage, and block_rq_insert expresses the IO bandwidth of the application. The hardware information is stored as F_arch = ⟨CPU utilization, memory bandwidth, IO bandwidth⟩.
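The periodic `perf` sampling and the mapping from the counter triple to a micro-architectural class can be sketched as follows. The event list follows the patent; the classification thresholds are illustrative assumptions, and the command is composed but not executed here:

```python
def perf_command(core: int, interval_ms: int = 100) -> list[str]:
    """Build the `perf stat` invocation the extractor would run on one core.
    Event names follow the patent; exact availability depends on the kernel."""
    events = "task-clock,cache-references,block:block_rq_insert"
    return ["perf", "stat", "-C", str(core), "-e", events,
            "--", "sleep", str(interval_ms / 1000)]

def classify(util: float, mem_bw: float, io_bw: float) -> str:
    """Map the counter triple <utilization, memory bandwidth, IO bandwidth>
    to a micro-architectural class. Thresholds are illustrative assumptions."""
    if io_bw > 100.0:    # MB/s: IO-intensive
        return "io"
    if mem_bw > 1000.0:  # MB/s: memory-intensive
        return "mem"
    return "cpu" if util > 0.5 else "idle"
```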
3) Periodically (every 100 ms), the function feature extraction module obtains the runtime-stage feature of the function from the operating-system level by parsing the log output of `docker logs` to determine whether the function is currently in the initialization stage or the runtime stage, and stores it as F_stage = ⟨initialization stage | runtime stage⟩.
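Log-based stage detection can be sketched as a scan for runtime markers. The marker strings below are illustrative assumptions; a real function runtime would emit its own start/ready messages:

```python
def stage_from_logs(log_text: str) -> str:
    """Classify a function's current stage from `docker logs` output.
    Scans from the newest line backwards; marker strings are assumptions."""
    for line in reversed(log_text.splitlines()):
        if "invocation started" in line:
            return "runtime"
        if "loading runtime" in line:
            return "init"
    return "init"  # no evidence yet: treat as still initializing

log = "loading runtime\ninvocation started\n"
```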
The function feature extraction module summarizes the three kinds of collected feature information and stores them as the tuple ⟨id, F_lang, F_stage, F_arch⟩ keyed by the function identifier.
Step 2, the function aggregation module aggregates functions into different sets according to their features, specifically: first, according to the runtime-stage feature, functions are divided into two major categories: all functions in the initialization stage are aggregated into an initialization-stage function set, and all functions in the runtime stage into a runtime-stage function set.
1) The function aggregation module further aggregates the initialization-stage function set by programming language, collecting the functions with the same programming language (python, java, go, ruby, swift, php, rust, etc.) into a new function set S_lang, i.e. a set of function identifiers that all share the same programming language.
2) The function aggregation module further aggregates the runtime-stage function set by micro-architectural feature, collecting the functions with the same micro-architectural features (CPU-intensive, memory-intensive, IO-intensive, etc.) into a new function set S_arch, i.e. a set of function identifiers that all share the same micro-architectural features.
Through the function aggregation module, the functions in the serverless computing system are divided into sets of functions with identical features, and they are then deployed into the system according to the features of their sets.
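The two-level aggregation of step 2 can be sketched directly: split by stage first, then group by language or by micro-architectural class. Set names follow the description; the sample feature tuples are made up:

```python
from collections import defaultdict

def aggregate(features):
    """Split feature tuples (id, lang, stage, arch) into the per-language
    initialization sets S_lang and per-architecture runtime sets S_arch."""
    s_lang, s_arch = defaultdict(set), defaultdict(set)
    for fid, lang, stage, arch in features:
        if stage == "init":
            s_lang[lang].add(fid)   # initialization stage: group by language
        else:
            s_arch[arch].add(fid)   # runtime stage: group by micro-arch class
    return s_lang, s_arch

feats = [("f1", "python", "init", "cpu"),
         ("f2", "python", "init", "mem"),
         ("f3", "go", "runtime", "io"),
         ("f4", "java", "runtime", "io")]
s_lang, s_arch = aggregate(feats)
```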
Step 3, the function mapping module classifies the CPU cores of the serverless computing system so that serverless functions map well onto cores. According to the features and sizes of the function sets produced by the aggregation module, the mapping module divides the cores of the serverless platform into: 1) initialization-stage CPU cores; 2) runtime-stage CPU cores; and 3) hybrid CPU cores. Each CPU core on the platform can host at most 4 functions simultaneously.
1) Initialization-stage CPU cores: for the sets S_lang, count the number N_lang of functions for each programming language, and mark ⌊N_lang/4⌋ CPU cores (the number of cores that can be completely filled with same-language functions, given at most four functions per core) as initialization CPU cores of the corresponding language. All serverless functions deployed on an initialization CPU core have the same programming language.
2) Runtime-stage CPU cores: for the sets S_arch, count the number N_arch of functions for each micro-architectural feature, and mark ⌊N_arch/4⌋ CPU cores as runtime CPU cores of the corresponding feature. All serverless functions deployed on a runtime CPU core have the same micro-architectural features.
3) Hybrid CPU cores: because the number of functions that can be deployed on each CPU core is limited, not all functions with the same features can be placed on the same core; functions that cannot be deployed strictly by feature are deployed on hybrid CPU cores. Hybrid CPU cores are further divided into three categories: 1) initialization hybrid CPU cores; 2) runtime hybrid CPU cores; and 3) initialization/runtime hybrid CPU cores.
The hybrid CPU cores are further divided as follows:
1) Initialization hybrid CPU cores: some cores have been marked as initialization CPU cores by programming language, but a number of functions remain that are all in the initialization stage yet cannot be co-located with functions of the same language (for each language, the N_lang mod 4 functions left over after packing). The cores onto which these leftover initialization-stage functions are deployed are marked as initialization hybrid CPU cores; every function on such a core is in the initialization stage but the programming languages differ.
2) Runtime hybrid CPU cores: some cores have been marked as runtime CPU cores by micro-architectural feature, but a number of runtime-stage functions remain that cannot be co-located with functions of exactly the same micro-architectural features. The cores onto which these leftover functions are deployed are marked as runtime hybrid CPU cores.
3) Initialization/runtime hybrid CPU cores: the remaining functions cannot be grouped onto the same core even by stage. The leftover initialization-stage and runtime-stage functions are then deployed together on shared cores, and these cores are marked as initialization/runtime hybrid CPU cores.
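The per-feature packing of step 3 can be condensed into a sketch. The floor-division split (full dedicated cores of four functions, leftovers pooled onto hybrid cores) is one reading of the scheme under the stated four-functions-per-core limit; the function names are made up:

```python
CAPACITY = 4  # maximum functions per CPU core (from the embodiment)

def partition(sets: dict[str, list[str]]):
    """Pack each feature's functions onto dedicated cores of CAPACITY
    functions; whatever does not fill a dedicated core is pooled onto
    hybrid cores. Returns (dedicated_cores, hybrid_cores)."""
    dedicated, leftovers = [], []
    for feature, funcs in sets.items():
        full = len(funcs) // CAPACITY
        for i in range(full):  # completely filled same-feature cores
            dedicated.append((feature, funcs[i * CAPACITY:(i + 1) * CAPACITY]))
        leftovers.extend(funcs[full * CAPACITY:])  # N mod 4 leftovers
    # Pool leftover functions onto hybrid cores, mixing features.
    hybrid = [leftovers[i:i + CAPACITY]
              for i in range(0, len(leftovers), CAPACITY)]
    return dedicated, hybrid

ded, hyb = partition({"python": ["a", "b", "c", "d", "e"], "go": ["f", "g"]})
```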
Step 4, the adjustment module adjusts the operating environment of the functions, specifically: it first deploys each function to its designated core according to the function's features and the core classification, and sets the frequency of each core to the optimal operating frequency of its functions according to a function-frequency table. It then periodically migrates functions as their features change so that the CPU-core chaos factor is minimized, and periodically adjusts each core's frequency to match the optimal operating frequency of its functions.
The function-frequency table is obtained by profiling the functions run on the serverless platform to measure, for each function feature, the energy efficiency at each operating frequency. Specifically: functions with different features are bound to a CPU core, the core frequency is set in turn to 0.8 GHz, 0.9 GHz, ..., 2.2 GHz, and the energy efficiency of the function is recorded at each frequency setting. The results are stored in the function-frequency table.
The function-frequency table thus contains, for each function feature, the energy efficiency at every 0.1 GHz step from 0.8 GHz to 2.2 GHz.
The method for setting the frequency of the CPU core as the optimal operation frequency of the function comprises the following specific steps:
1) The CPU cores have been marked with different classes by the function mapping module. The adjustment module uses the cgroup mechanism to bind each function to a CPU core matching its feature tag.
2) For initialization-stage and runtime-stage CPU cores, the adjustment module queries the function-frequency table, selects the optimal operating frequency for the core's function feature, and then adjusts the frequency of the corresponding core using DVFS.
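The binding and DVFS actions above can be sketched as command composition. The patent only says "the cgroup mechanism" and "DVFS" are used; `docker update --cpuset-cpus` and the cpufreq sysfs file are one concrete choice, stated here as assumptions (the commands are composed, not executed):

```python
def pin_command(container: str, core: int) -> list[str]:
    """One concrete way to bind a function's container to a core via
    cgroups: `docker update --cpuset-cpus` (an assumed choice)."""
    return ["docker", "update", "--cpuset-cpus", str(core), container]

def dvfs_path(core: int) -> str:
    """cpufreq sysfs file used to pin a core's frequency under the
    userspace governor; the adjuster would write a kHz value here."""
    return f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_setspeed"
```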
3) For a hybrid CPU core, the core frequency is set using an energy-efficiency-maximization algorithm: for each function on the hybrid core, obtain its energy efficiency at each frequency setting from the function-frequency table; at each frequency, sum the energy efficiencies of all functions on the core; and set the core to the frequency that maximizes this total.
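The energy-efficiency-maximization step follows directly from the description: sum the tabulated efficiencies of the co-resident functions at every candidate frequency and take the argmax. The table values below are made-up illustrations, not measured data:

```python
FREQS = [round(0.8 + 0.1 * i, 1) for i in range(15)]  # 0.8 ... 2.2 GHz

def best_hybrid_freq(table, funcs_features):
    """table[(feature, freq)] -> energy efficiency of that feature at freq.
    Return the frequency maximizing total efficiency on the hybrid core."""
    def total(freq):
        return sum(table[(f, freq)] for f in funcs_features)
    return max(FREQS, key=total)

# Illustrative table: a CPU-bound feature benefits from high frequency,
# an IO-bound one is indifferent, so the summed efficiency peaks high.
table = {("cpu", f): f for f in FREQS} | {("io", f): 1.0 for f in FREQS}
freq = best_hybrid_freq(table, ["cpu", "io"])
```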
Periodically migrating functions means: as functions run, the features of some of them change, and as newly arriving functions are admitted, it can no longer be guaranteed that the functions on a CPU core share the same features, so the number of hybrid CPU cores grows. The invention therefore periodically migrates the functions running on the CPU cores so that the CPU-core chaos factor of the serverless computing system is minimized. The CPU-core chaos factor is the proportion of hybrid CPU cores among all active CPU cores in the system, i.e. chaos = N_hybrid / N_active. Functions deployed on a hybrid core can rarely run at their optimal operating frequency, so reducing the chaos factor lets more functions run at their optimal frequency and improves the energy efficiency of the system.
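The chaos factor is a simple ratio and can be written down directly from the definition:

```python
def chaos_factor(n_hybrid: int, n_active: int) -> float:
    """CPU-core chaos factor: share of hybrid cores among active cores."""
    return 0.0 if n_active == 0 else n_hybrid / n_active

# e.g. 2 hybrid cores out of 8 active cores gives a chaos factor of 0.25
```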
To perform migration, the CPU cores are re-marked every 100 ms according to the features of the functions currently in the system; functions are then migrated according to the new marks, so that functions with the same features land on the corresponding cores, and newly admitted functions are deployed to the corresponding cores according to their features.
The migration specifically comprises the following steps:
(1) detecting all CPU cores and judging whether the functions on each core share the same characteristic: when the characteristics are the same, the core needs no adjustment; when the characteristics differ, the core must be adjusted; when no function runs on the core, the core is put into the idle-core queue Q_free; the following adjustments apply only to the cores that need adjustment;

(2) performing characteristic analysis on each newly accessed function and storing it in the queue Q_new;

(3) counting all functions on the cores that need adjustment together with the newly accessed functions, and counting the number of functions corresponding to each characteristic {N_f1, N_f2, ...}; from these counts, determining how many CPU cores each characteristic requires and how many hybrid CPU cores must be allocated for the functions that cannot be deployed on an initialization-stage or runtime-stage CPU core;

(4) finding the dominant characteristic of each core: counting the functions on the core per characteristic and taking the characteristic with the highest count as the core's dominant characteristic, then putting the core into the queue Q_fn corresponding to its dominant characteristic; for every function on the core that does not match the dominant characteristic, the adjustment module removes its CPU-core affinity with cgroup and inserts it into the to-be-deployed queue Q_motion;

(5) determining the classification of the CPU cores: using the per-characteristic core counts and hybrid-core count obtained in step (3), reallocating the dominant-characteristic cores and the idle cores obtained in steps (1) and (4): when the number of cores a characteristic requires is greater than or equal to the number of cores it dominates, i.e. N_fn ≥ len(Q_fn), idle cores are taken from Q_free to deploy that characteristic's functions; when the number of cores a characteristic requires is less than the number of cores it dominates, i.e. N_fn < len(Q_fn), the characteristic dominates too many cores and the functions on some of them must be migrated: the system sorts the cores in Q_fn in descending order of the number of dominant-characteristic functions they hold, removes all functions on the cores ranked after position N_fn into Q_motion, and adds those cores to Q_free;

(6) performing function deployment: given the classified cores, the functions in Q_motion and Q_new are deployed onto cores of the corresponding characteristic; a function in neither Q_motion nor Q_new needs no adjustment; for each function in Q_motion and Q_new, its CPU-core affinity is bound with cgroup until the corresponding cores are fully deployed; when all functions can be deployed this way, every function is bound to a core matching its characteristic; otherwise, for the functions that cannot be deployed on a core of similar characteristic, cores from Q_free are configured as hybrid CPU cores and the leftover functions are deployed on them in mixed fashion.
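The six migration steps above can be sketched in Python. This is a minimal illustration under stated assumptions: the names `migrate`, `feature_of`, `cores_needed` and `CORE_CAPACITY` are hypothetical, functions are plain string ids, and the real system's cgroup affinity binding is reduced to list updates.

```python
from collections import Counter, defaultdict

CORE_CAPACITY = 4  # at most 2 or 4 functions per core in the experiments

def migrate(cores, new_funcs, feature_of, cores_needed):
    """One 100 ms adjustment round over the six steps.

    cores:        core id -> list of function ids currently bound to it
    new_funcs:    ids of newly accessed functions (queue Q_new)
    feature_of:   function id -> characteristic label
    cores_needed: characteristic -> number of cores it requires (N_fn)
    """
    q_free, q_motion = [], []          # idle cores; functions to redeploy
    q_fn = defaultdict(list)           # dominant characteristic -> its cores

    # Steps (1) and (4): find each core's dominant characteristic and
    # evict the minority functions into Q_motion; empty cores go to Q_free.
    for core, funcs in cores.items():
        if not funcs:
            q_free.append(core)
            continue
        counts = Counter(feature_of[f] for f in funcs)
        dominant = counts.most_common(1)[0][0]
        q_fn[dominant].append(core)
        q_motion += [f for f in funcs if feature_of[f] != dominant]
        cores[core] = [f for f in funcs if feature_of[f] == dominant]

    # Step (5): rebalance the cores of each characteristic against N_fn.
    for feat in sorted(set(cores_needed) | set(q_fn)):
        owned, need = q_fn[feat], cores_needed.get(feat, 0)
        if need >= len(owned):
            while len(owned) < need and q_free:
                owned.append(q_free.pop())       # take idle cores
        else:
            owned.sort(key=lambda c: len(cores[c]), reverse=True)
            for core in owned[need:]:            # free surplus cores
                q_motion += cores[core]
                cores[core] = []
                q_free.append(core)
            del owned[need:]

    # Steps (2) and (6): deploy Q_motion and the new functions onto cores
    # of the matching characteristic (the real system binds with cgroup).
    leftovers = []
    for f in q_motion + list(new_funcs):
        for core in q_fn[feature_of[f]]:
            if len(cores[core]) < CORE_CAPACITY:
                cores[core].append(f)
                break
        else:
            leftovers.append(f)

    # Leftover functions share hybrid cores drawn from the idle queue.
    hybrid = None
    for f in leftovers:
        if hybrid is None or len(cores[hybrid]) >= CORE_CAPACITY:
            if not q_free:
                break
            hybrid = q_free.pop()
        cores[hybrid].append(f)
    return cores
```

With three cores and one new function, a core holding mixed characteristics is emptied and re-assigned so that each remaining active core ends up homogeneous.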
In a practical experiment, the CPU is an Intel(R) Xeon(R) Silver 4114, the serverless computing functions are selected from ALUpy, ALUJS, ALUGo, ALUswift, ALUphp, ALUruby and FileIO, the adjustment period is 100 ms, and at most 2 or 4 functions can be deployed on each CPU core. The experimental data obtained show that the invention reduces the disorder of function deployment across CPU cores in a serverless computing system and improves the overall energy efficiency of the system by 14-53%.
Compared with the prior art, the method significantly narrows the gap between upper-layer request-level scheduling and bottom-layer instruction-level dynamic scheduling. The intermediate representation of a function describes the multi-dimensional characteristics of a serverless computing function, and these characteristics allow the functions to be better scheduled and allocated. Meta-scheduling collects and aggregates the multi-dimensional characteristics of the functions and maps and adjusts the CPU cores, so that each serverless computing function can run at its optimal operating frequency. The invention enables better deployment and management of a serverless computing system and optimizes the system's overall operating energy efficiency.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (10)

1. A serverless computing scheduling system based on functional intermediate representation, comprising: the device comprises a function characteristic acquisition module, a function aggregation module, a mapping module and an adjustment module, wherein: the function characteristic acquisition module acquires programming language characteristics of the function, runtime stage characteristics of the function and micro-architecture characteristics of the function; the function aggregation module aggregates the functions into different function sets according to different characteristics; the mapping module divides the CPU core into an initialization stage CPU core, a runtime stage CPU core and a hybrid CPU core according to different function sets; the adjusting module deploys the function to a specified core according to the characteristics of the function and the classification of the CPU core, and periodically transfers the function and adjusts the frequency of each core according to the change of the characteristics of the function;
the function characteristic acquisition refers to: firstly acquiring the identifier of a function, and then respectively: obtaining the micro-architectural features of the function from the hardware level of the serverless computing function, collecting the runtime-stage features of the application from the operating-system level, and collecting the programming-language features of the application from the application-software level; the collected multi-dimensional features are then generalized to form a triple together with the function identifier;
the function aggregation refers to: aggregating functions into different function sets according to their different characteristics, specifically: first aggregating the functions by their runtime-stage characteristic, dividing them into initialization-stage functions and runtime-stage functions; then further aggregating the initialization-stage functions by programming language to obtain function sets, and aggregating the runtime-stage functions by micro-architectural characteristic to obtain function sets;
the initialization-stage CPU core refers to: according to the functions in the corresponding set, marking all CPU cores on which functions of the same programming language can be deployed as initialization-stage CPU cores; the functions to be run on each of these CPU cores are all in the initialization phase and share the same programming language.
2. The system according to claim 1, wherein the runtime-stage CPU core refers to: according to the functions in the corresponding set, marking all CPU cores on which functions with the same micro-architectural features can be deployed as runtime-stage CPU cores; the functions to be run on each of these cores are all in the runtime phase and share the same micro-architectural features;

the hybrid CPU core refers to: because the number of functions that can be deployed on each CPU core is limited, not all functions with the same characteristic can be deployed on the same core, and functions that cannot be deployed strictly by characteristic are deployed on hybrid CPU cores; the hybrid CPU cores include: initialization hybrid CPU cores, runtime hybrid CPU cores, and initialization/runtime hybrid CPU cores, wherein: an initialization hybrid CPU core means: some functions are in the initialization phase but cannot be deployed on the same core as functions with the same programming language; the remaining cores on which these initialization-phase functions can be deployed are marked as initialization hybrid CPU cores, and the functions run on each such core are all in the initialization phase but have different programming languages; a runtime hybrid CPU core means: some runtime-phase functions still cannot be deployed on the same core according to identical micro-architectural features; the remaining cores on which these runtime-phase functions can be deployed are marked as runtime hybrid CPU cores, and the functions run on each such core are all in the runtime phase but have different micro-architectural features; an initialization/runtime hybrid CPU core means: the remaining functions cannot be grouped onto cores by initialization phase or runtime phase alone, so the remaining initialization-phase and runtime-phase functions are deployed on the same cores in mixed fashion, and these cores are marked as initialization/runtime hybrid CPU cores.
3. The system according to claim 1, wherein the adjusting refers to: firstly deploying each function to a specified core according to the characteristics of the function and the classification of the CPU cores, and setting the frequency of the core to the optimal operating frequency of the function characteristics it runs; the functions are periodically migrated according to changes in their characteristics so that the chaos factor of the CPU cores is minimized, and the frequency of each CPU core is periodically adjusted so that it meets the optimal operating frequency of its functions.
4. The system according to claim 1, wherein periodically migrating the functions refers to: periodically performing migration adjustment on the functions running on the CPU cores so that the chaos factor of the CPU cores in the serverless computing system is minimized, the chaos factor being the proportion of hybrid CPU cores among all activated CPU cores, i.e. chaos factor = (number of hybrid CPU cores) / (number of activated CPU cores); when this ratio reaches its minimum, the fewest functions deployed on the CPU cores fail to meet their optimal operating frequency, so reducing the chaos factor of the CPU cores improves the energy efficiency of the system and lets more functions run at their optimal operating frequency.
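The chaos factor defined in claim 4 is a simple ratio; a one-function sketch (the name `chaos_factor` is an illustrative assumption):

```python
def chaos_factor(hybrid_cores, active_cores):
    """Proportion of hybrid CPU cores among all activated CPU cores."""
    if active_cores == 0:
        return 0.0  # no activated cores: nothing is mixed
    return hybrid_cores / active_cores
```

For example, 2 hybrid cores out of 8 activated cores give a chaos factor of 0.25; the migration in claims 4 and 10 tries to drive this value toward its minimum.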
5. A serverless computing scheduling method based on the system according to any one of claims 1 to 4, comprising:
step 1, the function feature extraction module acquires the multi-dimensional features of a function, specifically: when the function runs in the serverless computing system, the function feature extraction module obtains the identifier of the container corresponding to the function, and collects the features of the serverless computing function from three different levels: the micro-architectural features of the function from the hardware level, the runtime-stage features of the function from the operating-system level, and the programming-language features of the function from the application-software level; the function feature extraction module generalizes the collected feature information from the three aspects and stores it as a triple together with the function identifier;
step 2, the function aggregation module aggregates the functions into different function sets according to their characteristics, specifically: first, according to the runtime-stage characteristic of each function, all functions in the initialization stage are gathered into an initialization-stage function set and all functions in the runtime stage into a runtime-stage function set;
step 3, the function mapping module classifies the CPU cores of the serverless computing system so that the serverless computing functions map well onto the CPU cores; according to the characteristics and sizes of the different function sets obtained by the function aggregation module, the function mapping module divides the CPU cores of the serverless computing platform into: initialization-stage CPU cores, runtime-stage CPU cores and hybrid CPU cores;
step 4, the adjustment module adjusts the operating environment of the functions, specifically: firstly deploying each function to a specified core according to its characteristics and the classification of the CPU cores, and setting the frequency of each CPU core to the optimal operating frequency of its functions according to the function-frequency table; the functions are then periodically migrated according to changes in their characteristics so that the chaos factor of the CPU cores is minimized, and the frequency of each CPU core is periodically adjusted so that it meets the optimal operating frequency of its functions.
6. The method according to claim 5, wherein step 1 specifically comprises:

1) when a function accesses the serverless computing system, the function feature extraction module first acquires the programming language of the function from the function software level; the implementation language of the function is obtained by analyzing the output of the docker ps command and stored in a tuple; the operation of acquiring the programming language of a function only needs to be performed once, at the stage when the function accesses the system;

2) periodically (every 100 ms), the function feature extraction module acquires the micro-architectural features of the function from the hardware level; it uses the perf command to count specific events on a specific core, including: task-clock to express CPU utilization, cache-references to express memory bandwidth usage, and block_rq_insert to express the IO bandwidth of the application; the hardware information is stored into the triple;

3) periodically (every 100 ms), the function feature extraction module acquires the runtime-stage features of the function from the operating-system level, determining whether the function is currently in the initialization stage or the runtime stage by analyzing the log information output by docker logs; the runtime information is stored in the tuple.
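The two text-parsing steps of claim 6 can be sketched as follows. The parsers and sample line formats are illustrative assumptions: real `docker ps` and `perf stat` output varies by version, and the image-name heuristic for the language is hypothetical.

```python
import re

def parse_language(docker_ps_line):
    """Infer the implementation language from a `docker ps` data line.

    Assumes the second whitespace-separated column is the IMAGE name and
    that the image name embeds the language (e.g. "alu-python:3.9").
    """
    image = docker_ps_line.split()[1]
    for lang in ("python", "java", "go", "ruby", "swift", "php", "rust"):
        if lang in image:
            return lang
    return "unknown"

def parse_perf_counts(perf_output):
    """Extract the event counts claim 6 names from `perf stat`-style text."""
    counts = {}
    pattern = r"\s*([\d,\.]+)\s+(task-clock|cache-references|block_rq_insert)"
    for line in perf_output.splitlines():
        m = re.match(pattern, line)
        if m:
            counts[m.group(2)] = float(m.group(1).replace(",", ""))
    return counts
```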
7. The method according to claim 5, wherein step 2 specifically comprises:

1) the function aggregation module further aggregates the initialization-stage function set by programming language, gathering functions with the same programming language (python, java, go, ruby, swift, php, rust, etc.) in the initialization-stage function set into a new function set, i.e. a set of identifiers of functions that all have the same programming language;

2) the function aggregation module further aggregates the runtime-stage function set by micro-architectural characteristic, gathering functions with the same micro-architectural characteristic (CPU-intensive, memory-intensive, IO-intensive, etc.) in the runtime-stage function set into a new function set, i.e. a set of identifiers of functions that all have the same micro-architectural features.
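The two-level aggregation of claim 7 can be sketched in a few lines; the function tuples and the name `aggregate` are illustrative assumptions:

```python
from collections import defaultdict

def aggregate(functions):
    """Group functions into sets as in claim 7.

    functions: iterable of (func_id, language, stage, microarch) tuples.
    Returns (by_language, by_microarch): initialization-stage functions
    grouped by programming language, runtime-stage functions grouped by
    micro-architectural class.
    """
    by_language = defaultdict(set)   # initialization-stage sets
    by_microarch = defaultdict(set)  # runtime-stage sets
    for func_id, language, stage, microarch in functions:
        if stage == "init":
            by_language[language].add(func_id)
        else:
            by_microarch[microarch].add(func_id)
    return by_language, by_microarch
```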
8. The method according to claim 5, wherein the initialization-stage CPU core refers to: counting the size of the function set corresponding to each programming language in the aggregated sets, and marking the corresponding number of CPU cores as initialization CPU cores under the corresponding programming language; the serverless computing functions deployed on an initialization CPU core all have the same programming language;

the runtime-stage CPU core refers to: counting the size of the function set corresponding to each micro-architectural feature in the aggregated sets, and marking the corresponding number of CPU cores as runtime CPU cores under the corresponding micro-architectural feature; the serverless computing functions deployed on a runtime CPU core all have the same micro-architectural features;

the hybrid CPU core refers to: because the number of functions that can be deployed on each CPU core is limited, not all functions with the same characteristic can be deployed on the same core, and functions that cannot be deployed strictly by characteristic are deployed on hybrid CPU cores; the hybrid CPU cores are further divided into three categories: 1) initialization hybrid CPU cores; 2) runtime hybrid CPU cores; and 3) initialization/runtime hybrid CPU cores;

the hybrid CPU cores are further divided as follows:

the initialization hybrid CPU core refers to: part of the CPU cores have been marked as initialization CPU cores by programming language, but some functions, although all in the initialization phase, cannot be deployed on the same CPU core as functions with the same programming language; the CPU cores on which these remaining functions are deployed are marked as initialization hybrid CPU cores;

the runtime hybrid CPU core refers to: part of the CPU cores have been marked as runtime CPU cores by micro-architectural feature, but some functions still cannot be deployed on the same CPU core strictly by identical micro-architectural features; the CPU cores on which these remaining functions are deployed are marked as runtime hybrid CPU cores;

the initialization/runtime hybrid CPU core refers to: the remaining functions cannot be deployed onto cores by initialization phase or runtime phase alone; the remaining initialization-phase and runtime-phase functions are then deployed on the same CPU cores in mixed fashion, and these cores are marked as initialization/runtime hybrid CPU cores.
9. The method according to claim 5, wherein the function-frequency table refers to: profiling the functions run on the serverless computing platform to obtain the energy efficiency of functions with each characteristic when run at different frequencies; specifically: binding functions with different characteristics to a CPU core, setting the frequency of the CPU core to 0.8 GHz, 0.9 GHz, ..., 2.2 GHz in turn, and recording the energy efficiency of the function under each frequency setting; the obtained results are stored in the function-frequency table;

the function-frequency table includes: the function characteristic, the system energy efficiency corresponding to 0.8 GHz, the energy efficiency corresponding to 0.9 GHz, ..., and the energy efficiency corresponding to 2.2 GHz.
10. The method according to claim 5, wherein setting the frequency of the CPU cores to the optimal operating frequency of their functions comprises:

1) the CPU cores are marked into different categories by the function mapping module; the adjustment module binds each function to a CPU core with the corresponding characteristic mark using the cgroup command;

2) for initialization-stage CPU cores and runtime-stage CPU cores, the adjustment module queries the function-frequency table, selects the optimal operating frequency corresponding to the function characteristic, and then adjusts the frequency of the corresponding CPU core using DVFS;

3) for a hybrid CPU core, the frequency is set with an energy-efficiency-maximization algorithm: for each function on the hybrid CPU core, the energy efficiency of the function under each frequency setting is obtained from the function-frequency table; under each frequency setting, the energy efficiencies of all functions on the core are added up, the overall energy efficiency under each frequency is compared, and the frequency that maximizes the overall energy efficiency of the functions is selected as the frequency of the hybrid CPU core;
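The energy-efficiency-maximization step for a hybrid core reduces to an argmax over a summed column of the function-frequency table. A minimal sketch; the name `hybrid_core_frequency` and the table layout (characteristic -> {frequency: efficiency}) are assumptions for illustration:

```python
def hybrid_core_frequency(table, features_on_core):
    """Pick the frequency maximizing summed energy efficiency (claim 10, step 3).

    table:            characteristic -> {frequency: energy efficiency}
    features_on_core: characteristics of the functions bound to the hybrid core
    """
    freqs = next(iter(table.values())).keys()  # assume all rows share the grid
    def total(freq):
        # Overall efficiency at this frequency: sum over the core's functions.
        return sum(table[feat][freq] for feat in features_on_core)
    return max(freqs, key=total)
```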
periodically migrating the functions refers to: as the functions run, the characteristics of some functions change, and as newly accessed functions arrive, it can no longer be guaranteed that the functions on a CPU core share the same characteristics, so the hybrid CPU cores become more and more numerous; the functions running on the CPU cores are therefore periodically migrated and adjusted so that the chaos factor of the CPU cores in the serverless computing system is minimized;

to perform function migration, the CPU cores are re-marked every 100 ms according to the characteristics of the functions currently in the system, the functions are migrated according to these marks so that functions with the same characteristic are migrated to the corresponding CPU core, and newly accessed functions are deployed to the corresponding CPU cores according to their characteristics;
the migration specifically comprises the following steps:
① detecting all CPU cores and judging whether the functions on each core share the same characteristic: when the characteristics are the same, the core needs no adjustment; when the characteristics differ, the core must be adjusted; when no function runs on the core, the core is put into the idle-core queue Q_free; the following adjustments apply only to the cores that need adjustment;

② performing characteristic analysis on each newly accessed function and storing it in the queue Q_new;

③ counting all functions on the cores that need adjustment together with the newly accessed functions, and counting the number of functions corresponding to each characteristic {N_f1, N_f2, ...}; determining how many CPU cores each characteristic requires and how many hybrid CPU cores must be allocated for the functions that cannot be deployed on an initialization-stage or runtime-stage CPU core;

④ finding the dominant characteristic of each core: counting the functions on the core per characteristic, taking the characteristic with the highest count as the dominant characteristic of the core, and putting the core into the queue Q_fn corresponding to its dominant characteristic; for every function on the core that does not match the dominant characteristic, the adjustment module removes its CPU-core affinity with cgroup and inserts it into the to-be-deployed queue Q_motion;

⑤ determining the classification of the CPU cores: with the number of cores per characteristic and the number of hybrid cores obtained in step ③, reallocating the dominant-characteristic cores and the idle cores obtained in step ④: when the number of cores a characteristic requires is greater than or equal to the number of cores it dominates, i.e. N_fn ≥ len(Q_fn), idle cores are taken from Q_free to deploy that characteristic's functions; when the number of cores a characteristic requires is less than the number of cores it dominates, i.e. N_fn < len(Q_fn), the characteristic dominates too many cores and the functions on some of them must be migrated: the system sorts the cores in Q_fn in descending order of the number of dominant-characteristic functions they hold, removes all functions on the cores ranked after position N_fn into Q_motion, and adds those cores to Q_free;

⑥ performing function deployment: given the classified cores, the functions in Q_motion and Q_new are deployed onto cores of the corresponding characteristic; a function in neither Q_motion nor Q_new needs no adjustment; for each function in Q_motion and Q_new, its CPU-core affinity is bound with cgroup until the corresponding cores are fully deployed; when all functions can be deployed this way, every function is bound to a core matching its characteristic; otherwise, for the functions that cannot be deployed on a core of similar characteristic, cores from Q_free are configured as hybrid CPU cores and the leftover functions are deployed on them in mixed fashion.
CN202110659654.5A 2021-06-15 2021-06-15 Server-free computing scheduling system and method based on function intermediate expression Active CN113238853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110659654.5A CN113238853B (en) 2021-06-15 2021-06-15 Server-free computing scheduling system and method based on function intermediate expression


Publications (2)

Publication Number Publication Date
CN113238853A true CN113238853A (en) 2021-08-10
CN113238853B CN113238853B (en) 2021-11-12

Family

ID=77139897

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110659654.5A Active CN113238853B (en) 2021-06-15 2021-06-15 Server-free computing scheduling system and method based on function intermediate expression

Country Status (1)

Country Link
CN (1) CN113238853B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024114484A1 (en) * 2022-12-02 2024-06-06 中国科学院深圳先进技术研究院 Serverless computing adaptive resource scheduling method and system and computer device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160378661A1 (en) * 2015-06-26 2016-12-29 Microsoft Technology Licensing, Llc Instruction block allocation
CN108763042A (en) * 2018-05-24 2018-11-06 广东睿江云计算股份有限公司 A kind of Cloud Server performance data acquisition method and device based on python
CN108958725A (en) * 2018-07-06 2018-12-07 广州慧通编程教育科技有限公司 Graphical mode programming platform generation method, device and computer equipment
CN109388486A (en) * 2018-10-09 2019-02-26 北京航空航天大学 A kind of data placement and moving method for isomery memory with polymorphic type application mixed deployment scene
CN110121698A (en) * 2016-12-31 2019-08-13 英特尔公司 System, method and apparatus for Heterogeneous Computing
US20190340013A1 (en) * 2018-05-06 2019-11-07 Strong Force TX Portfolio 2018, LLC Transaction-enabled systems and methods for providing provable access to executable algorithmic logic in a distributed ledger
WO2020096282A1 (en) * 2018-11-05 2020-05-14 Samsung Electronics Co., Ltd. Service-aware serverless cloud computing system
CN111970338A (en) * 2020-07-30 2020-11-20 腾讯科技(深圳)有限公司 Request processing method and device based on cloud function and computer readable medium
CN112214293A (en) * 2017-11-08 2021-01-12 华为技术有限公司 Method for service deployment under server-free architecture and function management platform
CN112272198A (en) * 2020-09-03 2021-01-26 中国空间技术研究院 Satellite network-oriented collaborative computing task migration method and device
CN112346845A (en) * 2021-01-08 2021-02-09 腾讯科技(深圳)有限公司 Method, device and equipment for scheduling coding tasks and storage medium
CN112753019A (en) * 2018-09-27 2021-05-04 亚马逊技术有限公司 Efficient state maintenance of execution environments in on-demand code execution systems


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JONATAN ENES: "Real-time resource scaling platform for Big Data workloads on serverless environments", FUTURE GENERATION COMPUTER SYSTEMS *
TESTERMA: "Knowledge system of high-concurrency programming", HTTPS://WWW.CNBLOGS.COM/TESTMA/P/10641278.HTML *
ZHAO Yuhui: "Fundamentals of Concurrent Program Design", 31 December 2008 *
MA Zehua: "A survey of resource scheduling on serverless platforms", Computer Science *


Also Published As

Publication number Publication date
CN113238853B (en) 2021-11-12

Similar Documents

Publication Publication Date Title
Jin et al. Bar: An efficient data locality driven task scheduling algorithm for cloud computing
CN104915407B (en) A kind of resource regulating method based under Hadoop multi-job environment
Wei-guo et al. Research on kubernetes' resource scheduling scheme
CN104298550B (en) A kind of dynamic dispatching method towards Hadoop
CN103475538B (en) A kind of adaptive cloud service method of testing based on multiplex roles
CN109388486B (en) Data placement and migration method for heterogeneous memory and multi-type application mixed deployment scene
Mao et al. A multi-resource task scheduling algorithm for energy-performance trade-offs in green clouds
Liu et al. Preemptive hadoop jobs scheduling under a deadline
CN110231986A (en) Dynamic based on more FPGA reconfigurable multi-task scheduling and laying method
CN113672391B (en) Parallel computing task scheduling method and system based on Kubernetes
CN102004664A (en) Scheduling method of embedded real-time operating system of space vehicle
CN113238853B (en) Server-free computing scheduling system and method based on function intermediate expression
CN110084507B (en) Scientific workflow scheduling optimization method based on hierarchical perception in cloud computing environment
Chen et al. Energy-and locality-efficient multi-job scheduling based on MapReduce for heterogeneous datacenter
Shu-Jun et al. Optimization and research of hadoop platform based on fifo scheduler
Neuwirth et al. Using balanced data placement to address i/o contention in production environments
JP6287261B2 (en) System control apparatus, control method, and program
CN114860449B (en) Data processing method, device, equipment and storage medium
Xue et al. DSM: a dynamic scheduling method for concurrent workflows in cloud environment
Sibai Simulation and performance analysis of multi-core thread scheduling and migration algorithms
Abawajy Dynamic parallel job scheduling in multi-cluster computing systems
Li et al. Labeling Scheduler: A Flexible Labeling-based Jointly Scheduling Approach for Big Data Analysis
CN104699520A (en) Energy saving method based on migrating scheduling of virtual machine
Jiang et al. Research on unified resource management and scheduling system in cloud environment
Li et al. Energy-efficient resource allocation strategy based on task classification in data center

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant