CN112083932A - Function preheating system and method on virtual network equipment - Google Patents

Function preheating system and method on virtual network equipment

Info

Publication number
CN112083932A
CN112083932A (Application CN202010829332.6A)
Authority
CN
China
Prior art keywords
function
preheating
resource
network
server
Prior art date
Legal status
Granted
Application number
CN202010829332.6A
Other languages
Chinese (zh)
Other versions
CN112083932B (en)
Inventor
李超
张路
冯伟琪
路煜
侯小凤
过敏意
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202010829332.6A
Publication of CN112083932A
Application granted
Publication of CN112083932B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/60 Software deployment
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G06F2009/45595 Network integration; Enabling network access in virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H04L43/0888 Throughput

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A function warm-up system and method on a virtualized network device, comprising: a resource miner that performs interrupt aggregation to collect idle resources according to an idle-resource table, the real-time network-function type, and the network packet rate in the network function virtualization server; and a resource controller that dynamically maintains a pool of warmed-up functions, sends start notifications to the resource miner according to warm-up demand, performs function warm-up using the collected idle resources, and guarantees priority execution of run threads through thread control. The invention mines and collects idle computing resources of servers in a network function virtualization environment, manages warm-up threads and execution threads in parallel, and warms up and efficiently runs function containers in serverless-computing scenarios.

Description

Function preheating system and method on virtual network equipment
Technical Field
The invention relates to a technology in the field of Internet information processing, and in particular to a function warm-up system and method (NEMO) that combines Network Function Virtualization (NFV) and Serverless Computing.
Background
Serverless computing represents a novel Internet application deployment model: it scales well, facilitates multi-tenant sharing under limited resources, improves cloud-resource utilization, and is well suited to delivering future edge intelligence services. "Serverless" does not mean there is no server; rather, the user need not attend to server deployment and configuration and focuses only on the required service, while the operator takes responsibility for the underlying operating system and hardware and runs the user's service in a software container environment.
A conventional network function virtualization server has low day-to-day utilization and wastes substantial resources, which could instead support the containers on which serverless computing tasks (also called functions) depend. However, containers in the serverless scenario suffer from the cold-start problem, which greatly inflates the end-to-end latency of function execution. Moreover, naively deploying containers on a network function virtualization server has two negative effects: 1) it cannot fully utilize all of the server's available computing resources; 2) it causes interference between the tasks of different computing users, seriously harming both function performance and network-transmission quality.
Disclosure of Invention
To address these shortcomings of the prior art, the invention provides a function warm-up system and method on a virtualized network device that mines and collects idle computing resources of servers in a network function virtualization environment, manages warm-up threads and execution threads in parallel, and warms up and efficiently runs function containers in serverless-computing scenarios.
The invention is realized by the following technical scheme:
the invention relates to a function preheating system on a virtualization network device, which comprises: a resource miner and a resource controller, wherein: the resource excavator conducts interruption aggregation to collect idle resources according to an idle resource table, real-time network function types and network packet rates in the network function virtualization server, the resource controller dynamically maintains a preheating function pool, sends a starting notice to the resource excavator according to preheating requirements, conducts function preheating operation by using the collected idle resources, and meanwhile guarantees the preferential execution of operation threads through thread control.
The idle-resource table is obtained by offline analysis of a server on which network function virtualization has been deployed in advance, quantifying the parameters of redundant-resource mining. It records the maximum amount of resources that different network-virtualization applications leave idle at different network packet rates: column labels are application types, row labels are packet rates, and each value is the interrupt-aggregation count of the maximum resources minable for that application at that packet rate. The table thus gives, for any combination of network-function type and packet rate, the maximum amount of idle resources the system can supply without affecting the network function, and is stored in memory to guide the resource-mining module.
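The table lookup described above can be sketched as follows. Everything here is illustrative: the function names, packet rates, and aggregation counts are invented placeholders, and the conservative fallback policy is an assumption rather than something the patent specifies.

```python
# Hypothetical idle-resource table: column labels are application
# (network-function) types, row labels are network packet rates, and
# each cell is the largest interrupt-aggregation count minable without
# hurting that function. All numbers are invented, not measured values.
IDLE_RESOURCE_TABLE = {
    # packet rate (kpps): {network-function type: max aggregation}
    100:  {"firewall": 80, "nat": 120, "load_balancer": 60},
    500:  {"firewall": 40, "nat": 60,  "load_balancer": 30},
    1000: {"firewall": 10, "nat": 20,  "load_balancer": 5},
}

def max_minable_aggregation(nf_type, packet_rate_kpps):
    """Look up the safe interrupt-aggregation level for the current
    network function and packet rate; fall back to the nearest higher
    profiled rate (more conservative), and to 1 (no mining) when the
    function is unknown. The fallback policy is an assumption."""
    for rate in sorted(IDLE_RESOURCE_TABLE):
        if packet_rate_kpps <= rate:
            return IDLE_RESOURCE_TABLE[rate].get(nf_type, 1)
    return 1  # faster than any profiled point: do not mine
```

Keeping the table as a small in-memory mapping matches the text's requirement that the lookup guide the mining module at run time without measurable overhead.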
The resource controller cooperates with the resource miner to mine idle resources on the network-function server and to complete the warm-up thread and the run thread that execute in parallel in serverless computing: the warm-up thread initializes not-yet-triggered functions in advance, while the run thread serves incoming computation requests.
The idle-resource table preferably resides in memory while the system is in the running state.
The invention further relates to a function warm-up method on a virtualized network device based on the above system: the idle resources of the network function virtualization server are quantitatively analyzed using the idle-resource table; the idle resources are used to warm up, i.e. initialize in advance, the containers on which serverless computing tasks (also called functions) depend; and thread conflicts are reduced by optimally scheduling the warm-up thread and the serverless computing thread.
The optimized scheduling comprises the following steps:
Step 1) Warm-up of serverless computing tasks: a pool of containers holding warmed-up functions, i.e. a warm-up pool, is maintained to reduce initialization time. While in the running state, the system keeps a background warm-up thread that manages the warm-up of serverless computing tasks and periodically checks whether the pool is full: when the pool has a free slot, the warm-up process selects a new function according to reference historical data of system operation and places it into the pool. That is, the resource manager directs the warm-up process to select a serverless computing task and marks its state as <warming up>; once warm-up finishes, the warm-up process changes the state to <warmed up> and adds the task to the pool.
The warm-up pool contains functions in two states: a function being warmed up is marked <warming up>, a function whose warm-up has completed is marked <warmed up>, and <warmed up> functions are kept in the system's memory.
The reference historical data comprise 1) a measure of a function's warm-up benefit, 2) the frequency with which the function is called, and 3) the execution time of the function's most recent call.
Preferably, the historical data are maintained and stored as a historical-data table while the system executes computing tasks.
Preferably, the resource miner remains in the non-mining state during warm-up so that the network functions keep running at maximum performance.
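Step 1 can be sketched as one periodic tick of the background warm-up thread. The scoring weights, function names, and history values below are illustrative assumptions, and the <warming up> to <warmed up> transition is collapsed because this sketch manages no real containers:

```python
POOL_SIZE = 2  # the embodiment keeps two containers alive simultaneously

# Hypothetical historical-data table: warm-up benefit (seconds of
# initialization saved), call frequency (calls/hour), and the execution
# time of the most recent call. All numbers are invented placeholders.
history = {
    "ocr-img":       {"benefit": 3.0, "freq": 20, "last_exec": 1.2},
    "markdown2html": {"benefit": 0.5, "freq": 50, "last_exec": 0.1},
    "img-resize":    {"benefit": 2.0, "freq": 5,  "last_exec": 0.8},
}

def warmup_score(name):
    """Illustrative ranking: prefer functions whose warm-up saves the
    most total time (benefit x call frequency); this weighting is an
    assumption, not the patent's actual policy."""
    return history[name]["benefit"] * history[name]["freq"]

def refill_pool(pool):
    """One periodic tick of the background warm-up thread: while the
    pool has a free slot, warm the highest-scoring unpooled function.
    With no real containers, <warming up> becomes <warmed up> at once."""
    while len(pool) < POOL_SIZE:
        candidates = [f for f in history if f not in pool]
        if not candidates:
            break
        best = max(candidates, key=warmup_score)
        pool[best] = "warming up"  # container initialization starts
        pool[best] = "warmed up"   # initialization finished
    return pool
```

In a real deployment the tick would run in a daemon thread and the two states would bracket an actual container start, but the selection logic is the same.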
Step 2) When the system runs a serverless computing task, it proceeds according to whether the function is contained in the warm-up pool, specifically:
2.1) When the function required by the incoming computation request is not in the warm-up pool:
i) the system notifies the resource miner to start mining and to provide maximum resources.
ii) If a warm-up process is running at the same time, it is suspended immediately so that the serverless computation request is guaranteed priority.
iii) When the computing task completes, the system notifies the resource miner to switch to the non-mining mode first; the suspended warm-up process is then reactivated.
iv) The least recently used function in the warm-up pool is evicted, and the just-executed function is stored in the pool and marked <warmed up>.
v) The function historical-data table is updated and maintained.
2.2) When the function required by the incoming computation request is in the warm-up pool:
i) the system notifies the resource miner to start mining and to provide maximum resources.
ii) If the function's label is <warmed up> and a warm-up process is running at the same time, the warm-up process is suspended immediately.
iii) After the computation completes, the warm-up process is reactivated.
iv) If the function's label is <warming up>, the serverless computation request is suspended until the warm-up process completes.
v) Once warm-up completes, the function's label is updated to <warmed up> and the serverless computing request is activated.
vi) The resource miner switches to the non-mining mode and the function historical-data table is updated.
Preferably, when a serverless computing task arrives, the resource manager directs the warm-up process to monitor the warm-up pool periodically.
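Steps 2.1 and 2.2 above can be condensed into one dispatch routine. This is a simulation sketch: the miner and warm-up process are replaced by an action log, state transitions are instantaneous, the LRU warm-up pool is modeled with an OrderedDict, and the history-table update is omitted; none of these structures are prescribed by the patent.

```python
from collections import OrderedDict

POOL_SIZE = 2  # pool capacity from the embodiment

def handle_request(func, pool, log):
    """Dispatch one incoming serverless request (steps 2.1/2.2).
    `pool` is an OrderedDict mapping function name -> "warming up" /
    "warmed up", kept in least-recently-used order; `log` stands in
    for the real miner and warm-up-process control calls."""
    log.append("start_mining")         # i) mine maximum idle resources
    if pool.get(func) == "warming up":
        log.append("wait_for_warmup")  # 2.2 iv) wait for initialization
        pool[func] = "warmed up"
    else:
        log.append("suspend_warmup")   # ii) request runs with priority
    log.append(f"run:{func}")          # execute the serverless task
    log.append("resume_warmup")        # iii) reactivate warm-up work
    if func not in pool:
        if len(pool) >= POOL_SIZE:     # 2.1 iv) evict the LRU function
            pool.popitem(last=False)
        pool[func] = "warmed up"
    pool.move_to_end(func)             # mark as most recently used
    log.append("stop_mining")          # switch back to non-mining mode
    return pool, log
```

The ordering of the log entries mirrors the numbered sub-steps: mining is enabled for the whole computation and disabled only after the pool and history bookkeeping finish.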
Technical effects
The invention as a whole overcomes the inability of the prior art to quantitatively analyze and exploit the idle resources on a network function virtualization server.
Compared with the prior art, the invention quantitatively analyzes the amount of idle resources for different network-function types and network packet rates and uses the result to guide the miner in mining idle resources. The collected idle resources of the network function virtualization server are then used to warm up functions in serverless computing.
The invention fully mines the redundant resources on a server with network function virtualization capability and runs the functions required by serverless computation on the mined resources. It intelligently manages the warm-up and computation processes of serverless computing, provides serverless computing capability while guaranteeing network-function performance, and optimizes the performance of user functions; compared with a design that naively combines existing network functions and serverless computing, it is expected to reduce serverless task delay by 96% and improve serverless task performance by 25 times.
Drawings
FIG. 1 is a block diagram of the system of the present invention;
in the figure: EF is a serverless edge-computing function and NF is a network function;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a schematic view of resource mining according to the present invention;
FIG. 4 is a graph comparing the performances of examples.
Detailed Description
As shown in fig. 1, the present embodiment relates to a function warm-up system on a virtualized network device, comprising a resource miner and a resource controller, wherein: the resource miner performs interrupt aggregation to collect idle resources according to the idle-resource table, the real-time network-function type, and the network packet rate of the network function virtualization server; the resource controller dynamically maintains a pool of warmed-up functions, sends start notifications to the resource miner according to warm-up demand, performs function warm-up using the collected idle resources, and guarantees priority execution of run threads through thread control.
The resource miner comprises an offline-analysis module, which analyzes the maximum resources minable from each network function at different network packet rates, and a resource-mining module, which controls the interrupt aggregation of the network card and the burst processing of network packets. The offline analysis determines the maximum amount of idle resources that can be provided for each combination of network-function type and packet rate without affecting the network function; the result is stored in memory to guide the resource-mining module. The resource-mining module then collects idle resources through interrupt aggregation according to the current network-function type and packet rate of the function warm-up system on the virtualized network device.
For each deployed network function, the offline-analysis module sends network packets of 64, 128, 256, 512, 1024, and 1500 bytes to the network function virtualization server and measures the throughput of each network function at each packet size as its performance index. At each packet size it runs an interrupt-aggregation experiment on the network card via the ethtool command, sweeping the aggregation value from 0 in steps of 10 while recording the corresponding CPU utilization and throughput of the network function. By analyzing how each network function processes packets of each size, the idle-resource table is maintained with the maximum interrupt-aggregation count that does not affect network performance.
The resource-mining module looks up the required interrupt-aggregation count in the idle-resource table according to the network function and packet size currently running, and applies it to the network card via the ethtool command. When the system no longer needs to mine resources, the resource miner uses ethtool to set the network card's interrupt aggregation back to 1, restoring default operation.
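A minimal sketch of this profiling-and-apply loop, assuming a hypothetical interface name and a caller-supplied throughput probe. The ethtool invocation itself is real (`ethtool -C <dev> rx-usecs <n>` sets receive interrupt coalescing), but the sweep step, limit, and tolerance are invented defaults, and the command is left commented out so the sketch runs without a NIC or root access:

```python
import subprocess  # used only if the real command below is enabled

NIC = "eth0"  # assumed interface name

def set_coalescing(usecs):
    """Build (and on a real server run) the ethtool command that sets
    the NIC's receive interrupt-coalescing interval; returning the
    argv keeps the sketch testable without root privileges."""
    cmd = ["ethtool", "-C", NIC, "rx-usecs", str(usecs)]
    # subprocess.run(cmd, check=True)  # uncomment on a real NFV server
    return cmd

def profile_max_coalescing(measure_tput, baseline, step=10, limit=200,
                           tolerance=0.99):
    """Offline sweep: raise coalescing from 0 in steps of 10 and keep
    the largest value whose measured throughput stays within
    `tolerance` of the uncoalesced baseline. `measure_tput` is a
    caller-supplied probe (e.g. a packet-generator run); step, limit,
    and tolerance are invented defaults, not values from the patent."""
    best = 0
    for usecs in range(0, limit + 1, step):
        set_coalescing(usecs)
        if measure_tput(usecs) >= tolerance * baseline:
            best = usecs
        else:
            break
    set_coalescing(1)  # restore near-default behavior, as in the text
    return best
```

At run time the mining module would skip the sweep and simply apply the value already recorded in the idle-resource table.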
The resource controller cooperates with the resource miner to complete the warm-up thread and the run thread that execute in parallel in serverless computing: the warm-up thread initializes not-yet-triggered functions in advance, while the run thread serves incoming computation requests.
The warm-up process is a periodic background process: it defines a warm-up pool, periodically checks whether the pool has free slots, and, when it does, selects a new function to warm up according to the historical data of system operation.
The warm-up pool is sized at 2, i.e. two containers can stay alive in the system simultaneously. The system also maintains a historical-data table containing: a measure of each function's warm-up benefit, the frequency with which the function is called, and its execution time on its most recent call.
Preferably, a function being warmed up is marked <warming up>; after warm-up completes, its label is changed to <warmed up> and it is stored in the warm-up pool.
Further preferably, the resource miner remains in the non-mining state during warm-up so that the network functions keep running at maximum performance.
The run process adjusts the states of the resource miner and the warm-up process, and updates the function historical-data table, according to whether the function required by the incoming computation request is in the warm-up pool defined by the warm-up process, specifically:
When the function required by the incoming computation request is not in the warm-up pool: the resource miner is notified to start mining and provide maximum resources; if a warm-up process is running at the same time, it is suspended immediately so that the serverless computing request is guaranteed priority; when the computing task completes, the resource miner is notified to switch to the non-mining mode and the suspended warm-up process is reactivated; the least recently used function in the warm-up pool is then evicted while the just-executed function is stored in the pool and marked <warmed up>, and the function historical-data table is updated and maintained.
When the function required by the incoming computation request is in the warm-up pool: the resource miner is notified to start mining and provide maximum resources; if the function's label is <warmed up> and a warm-up process is running at the same time, that process is suspended immediately so the serverless computing request is not delayed, and it is reactivated when the computation finishes; if the function's label is <warming up>, the computation request is suspended, the label is updated to <warmed up> once warm-up completes, and the serverless computing request is then activated; finally the resource miner switches to the non-mining mode and the function historical-data table is updated.
The method intelligently manages the warm-up and computation processes of serverless computing, provides serverless computing capability while guaranteeing network-function performance, and optimizes the performance of user functions; compared with a design that naively combines existing network functions and serverless computing, it is expected to reduce serverless task delay by 96% and improve serverless task performance by 25 times.
In practical experiments on a server with an Intel(R) Xeon(R) Silver 4114 CPU and a 1 Gb network card, using the Click-based network functions Firewall (500 rules), NAT, and LoadBalancer and the serverless computing tasks markdown2html, img-resize, sentiment-analysis, and ocr-img, with 30 serverless computations launched at random, the obtained experimental data are as follows: the total delay of a design that naively combines network functions and serverless computation is 10271950 ms, while the total delay of the system proposed in this embodiment is 410818 ms. Without affecting network-function throughput, this embodiment therefore reduces serverless task delay by 96% and improves serverless task performance by up to 25 times.
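The two headline figures follow directly from the reported delay totals, as a quick check shows:

```python
# Consistency check of the figures reported in the experiment above.
baseline_ms = 10_271_950  # naive co-deployment: total system delay
nemo_ms = 410_818         # proposed system: total system delay

reduction = 1 - nemo_ms / baseline_ms  # fraction of delay eliminated
speedup = baseline_ms / nemo_ms        # performance improvement factor
print(f"delay reduction: {reduction:.0%}, speedup: {speedup:.0f}x")
# prints: delay reduction: 96%, speedup: 25x
```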
Compared with the prior art, when network functions and serverless computing tasks are co-deployed on the same server, the method reduces the start-up delay of serverless functions and improves the performance of serverless computing tasks without affecting network-function throughput.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims; all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims (7)

1. A function warm-up system on a virtualized network device, comprising: a resource miner and a resource controller, wherein: the resource miner performs interrupt aggregation to collect idle resources according to an idle-resource table, the real-time network-function type, and the network packet rate of the network function virtualization server; the resource controller dynamically maintains a pool of warmed-up functions, sends start notifications to the resource miner according to warm-up demand, performs function warm-up using the collected idle resources, and guarantees priority execution of run threads through thread control;
the idle-resource table is obtained by offline analysis of a server on which network function virtualization has been deployed in advance, quantifying the parameters of redundant-resource mining; it records the maximum amount of resources that different network-virtualization applications leave idle at different network packet rates, wherein the column labels are application types, the row labels are network packet rates, and each value is the interrupt-aggregation count of the maximum resources minable for that application at that packet rate, so that the table gives, for any combination of network-function type and packet rate, the maximum amount of idle resources the system can provide without affecting the network function, and is stored in memory to guide a resource-mining module.
2. The system of claim 1, wherein the resource controller cooperates with the resource miner to mine idle resources on the network-function server and to complete a warm-up thread and a run thread that execute in parallel in serverless computing, wherein the warm-up thread initializes not-yet-triggered functions in advance and the run thread serves incoming computing requests.
3. The system of claim 1, wherein the idle-resource table resides in memory while the system is in the running state.
4. A function warm-up method on a virtualized network device based on the system of any one of the preceding claims, characterized in that the idle resources of the network function virtualization server are quantitatively analyzed using the idle-resource table; the idle resources are used to warm up, i.e. initialize in advance, the containers on which serverless computing tasks depend; and thread conflicts are reduced by optimally scheduling the warm-up thread and the serverless computing thread.
5. The function warm-up method of claim 4, wherein said optimized scheduling comprises the following steps:
step 1) warm-up of serverless computing tasks: a pool of containers holding warmed-up functions, i.e. a warm-up pool, is maintained to reduce initialization time; while in the running state, the system keeps a background warm-up thread that manages the warm-up of serverless computing tasks and periodically checks whether the pool is full; when the pool has a free slot, the warm-up process selects a new function according to reference historical data of system operation and places its container into the warm-up pool;
step 2) when the system runs a serverless computing task, it proceeds according to whether the function is contained in the warm-up pool, specifically:
2.1) when the function required by the incoming computation request is not in the warm-up pool:
i) the system notifies the resource miner to start mining and to provide maximum resources;
ii) if a warm-up process is running at the same time, it is suspended immediately so that the serverless computation request is guaranteed priority;
iii) when the computing task completes, the system notifies the resource miner to switch to the non-mining mode, and the suspended warm-up process is reactivated;
iv) the least recently used function in the warm-up pool is evicted, and the just-executed function is stored in the pool and marked <warmed up>;
v) the function historical-data table is updated and maintained;
2.2) when the function required by the incoming computation request is in the warm-up pool:
i) the system notifies the resource miner to start mining and to provide maximum resources;
ii) if the function's label is <warmed up> and a warm-up process is running at the same time, the warm-up process is suspended immediately;
iii) after the computation completes, the warm-up process is reactivated;
iv) if the function's label is <warming up>, the serverless computation request is suspended until the warm-up process completes;
v) once warm-up completes, the function's label is updated to <warmed up> and the serverless computing request is activated;
vi) the resource miner switches to the non-mining mode and the function historical-data table is updated.
6. The function warm-up method of claim 5, wherein the warm-up pool contains functions in two states: a function being warmed up is marked <warming up>, a function whose warm-up has completed is marked <warmed up>, and <warmed up> functions are kept in the system's memory.
7. The method of claim 5, wherein the reference historical data comprise 1) a measure of a function's warm-up benefit, 2) the frequency with which the function is called, and 3) the execution time of the function's most recent call.
CN202010829332.6A (priority date 2020-08-18, filing date 2020-08-18) Function preheating system and method on virtual network equipment. Status: Active. Granted as CN112083932B.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010829332.6A CN112083932B (en) 2020-08-18 2020-08-18 Function preheating system and method on virtual network equipment


Publications (2)

Publication Number | Publication Date
CN112083932A (application publication) | 2020-12-15
CN112083932B (granted publication) | 2022-02-25

Family

ID=73728365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010829332.6A Active CN112083932B (en) 2020-08-18 2020-08-18 Function preheating system and method on virtual network equipment

Country Status (1)

Country Link
CN (1) CN112083932B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6584489B1 (en) * 1995-12-07 2003-06-24 Microsoft Corporation Method and system for scheduling the use of a computer system resource using a resource planner and a resource provider
CN108293041A * 2015-12-28 2018-07-17 华为技术有限公司 Resource allocation method, device and system
CN111143024A (en) * 2019-11-22 2020-05-12 中国船舶工业系统工程研究院 Real-time virtual computing-oriented resource management method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu, Jiyang: "Research on Application-Aware Resource Management Mechanisms in Software-Defined Networking", China Master's Theses Full-text Database *
Li, Chao: "Research and Model Improvement of the Xen VMX Virtual Network Card", CNKI Master's Theses Full-text Database *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023093204A1 (en) * 2021-11-27 2023-06-01 华为技术有限公司 Method for dynamically configuring function resource under serverless architecture, and function management platform
CN115373804A (en) * 2022-10-27 2022-11-22 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Virtual machine scheduling method facing network test in cloud infrastructure
CN115373804B (en) * 2022-10-27 2023-03-07 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Virtual machine scheduling method facing network test in cloud infrastructure
CN116887357A (en) * 2023-09-08 2023-10-13 山东海博科技信息系统股份有限公司 Computing platform management system based on artificial intelligence
CN116887357B (en) * 2023-09-08 2023-12-19 山东海博科技信息系统股份有限公司 Computing platform management system based on artificial intelligence

Also Published As

Publication number Publication date
CN112083932B (en) 2022-02-25

Similar Documents

Publication Publication Date Title
CN112083932B (en) Function preheating system and method on virtual network equipment
US10719343B2 (en) Optimizing virtual machines placement in cloud computing environments
US7882092B2 (en) Method and system for hoarding content on mobile clients
CN113900767B (en) Monitoring clusters and container-as-a-service controller implementing auto-scaling policies
US8799895B2 (en) Virtualization-based resource management apparatus and method and computing system for virtualization-based resource management
TWI549001B (en) Power and load management based on contextual information
CN106557369B (en) Multithreading management method and system
CN110297711A (en) Batch data processing method, device, computer equipment and storage medium
CN112559182B (en) Resource allocation method, device, equipment and storage medium
CN109478147B (en) Adaptive resource management in distributed computing systems
CN108989238A Method and related device for allocating service bandwidth
CN106293868A Virtual machine scale-out and scale-in method and elastic scaling system in a cloud computing environment
US20170126833A1 (en) Mitigating service disruptions using mobile prefetching based on predicted dead spots
JP2007524951A (en) Method, apparatus, and computer program for dynamic query optimization
MXPA06006854A (en) Apparatus, system, and method for on-demand control of grid system resources.
US20230359512A1 (en) Predicting expansion failures and defragmenting cluster resources
CN103179048A Method and system for changing host quality of service (QoS) policies in a cloud data center
CN106227582A Elastic scaling method and system
CN101227416A (en) Method for distributing link bandwidth in communication network
EP3007407B1 (en) Configuration method, equipment, system and computer readable medium for determining a new configuration of calculation resources
CN109936471B (en) Multi-cluster resource allocation method and device
US9607275B2 (en) Method and system for integration of systems management with project and portfolio management
US9515905B1 (en) Management of multiple scale out workloads
CN113614694B (en) Boxing virtual machine load using predicted capacity usage
CN110716763B (en) Automatic optimization method and device for web container, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant