CN113347016B - Virtualization network function migration method based on resource occupation and time delay sensitivity - Google Patents
- Publication number
- CN113347016B (application CN202110259716.3A)
- Authority
- CN
- China
- Prior art keywords
- resource
- migration
- time delay
- node
- mec
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/34—Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/90—Buffering arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1031—Controlling of the operation of servers by a load balancer, e.g. adding or removing servers that serve requests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention relates to a virtualized network function migration method based on resource occupation and time delay sensitivity, which comprises the following steps: first, acquiring the resource occupation through a service monitoring model; second, calculating user weights with a delay-aware model to obtain a user partition; and finally, improving the migration success rate with a queuing-alternative mechanism. The invention can effectively improve the migration success rate and the average request coverage rate.
Description
Technical Field
The invention belongs to the technical field of mobile communication, and particularly relates to a virtualized network function migration method based on resource occupation and time delay sensitivity.
Background
Mobile edge computing (MEC) reduces resource requirements and user-perceived latency by moving computing, network, and storage capabilities from the core cloud to mobile edge network nodes. MEC can also leverage a network function virtualization (NFV) platform to help remove the limitations of traditional network applications. With NFV technology, virtual network functions (VNFs) can be created and placed in real time to meet different application needs and to optimize the management of network, computing, and storage resources.
In the edge network, the resources of an MEC server are relatively limited, and user terminal devices differ in their delay sensitivity. When an MEC server receives a large number of requests within a certain time period, some application services are interrupted by excessive delay because applications with different delay requirements are not given differentiated service. Therefore, given user requirements and resource limitations, how to effectively serve subsequent incoming user requests when MEC server resources are insufficient becomes an urgent problem to be solved.
Disclosure of Invention
In view of this, the present invention provides a method for migrating a virtualized network function based on resource occupation and delay sensitivity, which effectively improves the success rate of migration and the average coverage rate of requests.
In order to achieve the purpose, the invention adopts the following technical scheme:
a virtualization network function migration method based on resource occupation and time delay sensitivity comprises the following steps:
s1, calculating the current residual resource condition of a server of each edge node in an MEC, and adding the MEC server with insufficient resource residue into a preset waiting sequencing queue;
s2, calculating the request delay size and the user resource demand occupation condition of each user associated with each MEC server in the queue waiting for sorting;
s3, sequencing each user request associated with the MEC server according to an intelligent matching priority rule;
s4, selecting an MEC server meeting preset time delay constraint and resource constraint conditions based on an improved particle swarm intelligent algorithm, sequentially calculating the weighted sum of node migration time delay and resource conditions as a fitness function, selecting a migration path with the shortest time delay, and migrating a corresponding user request to a target MEC server;
and S5, determining the final user request distribution condition according to the obtained optimal solution.
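As a purely illustrative sketch of steps S1-S5 (the dictionary layout, the overload threshold, and the greedy choice of target server are assumptions of this example; the patent itself selects the target with the improved particle swarm search of step S4), the flow can be outlined as:

```python
# Illustrative sketch of steps S1-S5. Data layout and target selection are
# assumptions, not the patent's exact mechanism.

def find_overloaded_servers(servers, threshold):
    """S1: collect MEC servers whose remaining resources fall below threshold."""
    return [s for s in servers if s["capacity"] - s["used"] < threshold]

def sort_requests(requests):
    """S2-S3: order requests by delay x resource occupation, highest first."""
    return sorted(requests, key=lambda r: r["delay"] * r["resources"], reverse=True)

def migrate(overloaded, all_servers, threshold):
    """S4-S5: move the highest-priority requests to a server with enough room."""
    plan = {}
    for server in overloaded:
        for req in sort_requests(server["requests"]):
            target = next((t for t in all_servers
                           if t is not server
                           and t["capacity"] - t["used"] >= req["resources"]), None)
            if target is None:
                continue  # no feasible target; leave the request in place
            target["used"] += req["resources"]
            server["used"] -= req["resources"]
            plan[req["id"]] = target["id"]
            if server["capacity"] - server["used"] >= threshold:
                break  # server is no longer overloaded
    return plan

servers = [
    {"id": 1, "capacity": 50, "used": 50,
     "requests": [{"id": "a", "delay": 5, "resources": 10},
                  {"id": "b", "delay": 1, "resources": 5}]},
    {"id": 2, "capacity": 50, "used": 20, "requests": []},
]
overloaded = find_overloaded_servers(servers, threshold=10)
plan = migrate(overloaded, servers, threshold=10)  # request "a" moves to MEC 2
```

Running this toy topology moves request "a" from the saturated MEC 1 to MEC 2, mirroring the scenario of fig. 1.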
Further, the resource constraint is that the resource demand of a virtual network function (VNF) instance does not exceed the resource capacity of the underlying edge network node to which it is mapped, and a given VNF instance can only be deployed on a single edge network node:

Σ_k A_ik · mem_k ≤ MEM_i,  Σ_k A_ik · cpu_k ≤ CPU_i,  Σ_i A_ik = 1

wherein A_ik indicates whether the k-th VNF is placed on the i-th MEC, mem_k and cpu_k respectively represent the memory resources and CPU resources required by the k-th VNF, and MEM_i and CPU_i respectively represent the memory resource capacity and CPU resource capacity of the i-th MEC.
Further, the delay constraint is specifically that the waiting delay of a migrated user request must be less than the sum of the link delay introduced by the migration and the delay caused by the distance between the device and the physical edge node:

T_n^wait < Σ_(i,j) y_ij · d_ij + d_n,  for every migrated device n (z_n = 1)

wherein d_ij represents the delay of link (i,j), z_n indicates whether the n-th terminal device is migrated, y_ij indicates whether link (i,j) is enabled, and d_n represents the delay between the user request and the MEC.
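A minimal sketch of these two feasibility checks, assuming a placement matrix A[i][k] and simple capacity vectors (the data layout and function names are illustrative, not part of the patent):

```python
def resource_feasible(A, mem_req, cpu_req, mem_cap, cpu_cap):
    """Resource constraint: the VNFs placed on each MEC must fit its memory
    and CPU capacity, and every VNF must sit on exactly one node."""
    n_mec, n_vnf = len(A), len(A[0])
    for i in range(n_mec):
        if sum(A[i][k] * mem_req[k] for k in range(n_vnf)) > mem_cap[i]:
            return False
        if sum(A[i][k] * cpu_req[k] for k in range(n_vnf)) > cpu_cap[i]:
            return False
    return all(sum(A[i][k] for i in range(n_mec)) == 1 for k in range(n_vnf))

def delay_feasible(wait_delay, enabled_link_delays, access_delay):
    """Delay constraint: a migrated request's waiting delay must stay below
    the migration link delay plus the device-to-edge-node delay."""
    return wait_delay < sum(enabled_link_delays) + access_delay

# VNF 0 on MEC 0 and VNF 1 on MEC 1, both within capacity:
ok = resource_feasible([[1, 0], [0, 1]], mem_req=[2, 3], cpu_req=[1, 1],
                       mem_cap=[5, 5], cpu_cap=[2, 2])
```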
Further, the intelligent matching priority rule specifically includes:
(1) When the residual resource capacity of the edge node is insufficient, partial user requests need to be disassociated;
(2) Sequencing the user requests according to the set priority formula;
(3) Associating the user request with the appropriate edge node;
(4) And outputting the migration condition requested by the user.
Further, the priority formula represents the product of the delay required by the user equipment request and the resource occupation resulting from the request:

P_n = T_n × R_n

wherein T_n is the request delay of user n and R_n is the resource occupation of the request.
further, step S4 specifically includes:
step S41: selecting an MEC server meeting constraint conditions such as time delay and resources, sequentially calculating the weighted sum of the node migration time delay and the resource condition as a fitness function, and taking the fitness function as a global optimal particle;
step S42: selecting the nearest node of the server nodes which are distant from the iteration round as a new particle;
step S43: calculating the weighted sum of the node migration delay and the resource condition, and comparing whether the fitness function value of the node migration delay and the resource condition is higher than that of the global optimal particle, if so, updating the optimal particle, otherwise, not modifying;
step S44: and repeating the actions of S41-S42 until the globally optimal particle converges, namely, continuously updating the globally optimal particle for more than 8 times without changing the globally optimal particle, namely, the node is the final migration target node.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention reduces the load on local nodes, relieves bandwidth transmission pressure, and effectively reduces operator costs;
2. the invention effectively handles user service interruptions caused by a lack of local resources while reducing delay overhead;
3. the invention effectively improves the migration success rate and the average request coverage rate.
Drawings
FIG. 1 is a schematic illustration of migration in one embodiment of the present invention;
FIG. 2 is a schematic diagram of a user request before and after migration in one embodiment of the invention;
FIG. 3 is a flow chart of the method of the present invention;
fig. 4 is a flow chart of an improved intelligent algorithm for particle swarm in accordance with an embodiment of the invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, a migration diagram specifically includes:
an MEC server: in a mobile edge network there are many base stations, which constitute the network nodes. Each node provides a network function virtualization infrastructure to support VNF placement, and each server has certain physical resources to keep its VNFs running normally.
The terminal equipment: the user side, which may include smart phones, tablets, notebook computers, and the like, each with its own service requests, maximum delay requirement, and service-satisfaction degree.
Fig. 1 shows a request migration diagram under the MEC. When an MEC server receives too many requests and its resources are insufficient, some of the requests need to be redirected to other MEC servers; as shown in the figure, request a on MEC server 1 will be distributed to MEC server 2.
In this embodiment, referring to fig. 2, the total resource of each MEC in the figure is 50, where MEC1 is processing 4 requests from user terminal devices (e.g. camera, mobile computer), and occupying 50 resources, then the available resources are 0. When another user terminal device (such as a camera) requests the MEC1 server to serve the user terminal device, the service is not available due to insufficient MEC1 resources, and at this time, a part of the user terminal device request which is not sensitive to the delay needs to be migrated to the adjacent MEC server for processing.
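The saturation check in this example is simple capacity arithmetic; the following toy snippet reproduces the fig. 2 numbers (the per-request resource split is an assumption, as the description only gives the total of 50):

```python
# Toy reproduction of the Fig. 2 scenario: total capacity 50, four requests
# saturating MEC1. The individual request sizes are an assumed illustration.
capacity = 50
in_service = [12, 13, 12, 13]              # four requests occupying all 50 units
available = capacity - sum(in_service)     # 0 -> MEC1 is saturated

def fits_locally(available, demand):
    """True if the new request can be served locally; otherwise migrate it."""
    return available >= demand

new_request_demand = 8
must_migrate = not fits_locally(available, new_request_demand)
```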
Referring to fig. 3, the present embodiment provides a flowchart of a method for migration and placement of a virtualized network function based on mobile edge computing, where the method first obtains resource occupation conditions through a service monitoring model on the premise of satisfying user request constraints and underlying network resource constraints; secondly, calculating user weight by using a time delay perception model to obtain user division; and finally, improving the migration success rate by using a queuing alternative mechanism.
The method comprises the following steps:
step S1: initializing underlying network resources, user requests and associated states;
preferably, in this embodiment, in mobile edge computing (MEC), each MEC node is assumed to have a certain resource capacity C_i, each link (i,j) has a certain time delay d_ij, the delay between a user request and the MEC is expressed as d_n, and each VNF occupies a portion of the resources of the MEC server;
for each edge node server in M = {m_1, m_2, …, m_i}, calculating the current remaining resources of the server; adding any MEC server with insufficient remaining resources to a preset waiting queue;
step S2: randomly increasing the number of user requests of a certain MEC node;
for each MEC server in the waiting-to-sort queue, calculate each user U = { U } associated with it 1 ,u 2 ,…,u n Size of request delayAnd user resource demand occupancy
And step S3: sequencing each user request associated with the MEC server according to an intelligent matching priority rule;
in this embodiment, preferably, the intelligent matching priority rule method is as follows:
step S31: when the residual resource capacity of the edge node is insufficient, partial user requests need to be disassociated;
step S32: sequencing the user requests according to the set priority formula;
the priority formula represents the product of the delay required by a user equipment request and the resource occupation resulting from the request: P_n = T_n × R_n.
the above equation indicates that when the product of the two is larger, the higher the priority of the request, the more likely it is to be selected for migration.
Step S33: associating the user request with the appropriate edge node;
step S34: outputting the migration condition requested by the user;
s4, selecting an MEC server meeting constraint conditions such as time delay and resources based on an improved particle swarm intelligent algorithm, sequentially calculating the weighted sum of node migration time delay and resource conditions as a fitness function, selecting a migration path with the shortest time delay, and migrating the corresponding user request to a target MEC server;
in this embodiment, preferably, the preset resource constraint requires that the resource demand of a virtual network function (VNF) instance does not exceed the resource capacity of the underlying edge network node to which it is mapped, and a given VNF instance can only be deployed on a single edge network node:

Σ_k A_ik · mem_k ≤ MEM_i,  Σ_k A_ik · cpu_k ≤ CPU_i,  Σ_i A_ik = 1

in the above formula, A_ik indicates whether the k-th VNF is placed on the i-th MEC, mem_k and cpu_k respectively represent the memory resources and CPU resources required by the k-th VNF, and MEM_i and CPU_i respectively represent the memory resource capacity and CPU resource capacity of the i-th MEC;
in this embodiment, preferably, the delay constraint requires that the waiting delay of a migrated user request be less than the sum of the link delay introduced by the migration and the delay caused by the distance between the device and the physical edge node:

T_n^wait < Σ_(i,j) y_ij · d_ij + d_n,  for every migrated device n (z_n = 1)

in the above formula, d_ij represents the delay of link (i,j), z_n indicates whether the n-th terminal device is migrated, y_ij indicates whether link (i,j) is enabled, and d_n represents the delay between the user request and the MEC;
s5, judging whether the migration queue is empty or not, if so, indicating that no user request needing to be redirected exists, and if not, migrating the user request to a target MEC server;
and S6, judging whether the current migration operation has failed; if so, returning to re-evaluate the resource condition of the server, and if not, adding the alternative request to the migration; the final user request distribution is then determined according to the obtained optimal solution;
fig. 4 is a schematic flow chart of the improved particle swarm intelligence algorithm of the present invention; the steps of the improved particle swarm intelligent algorithm for selecting the migration server node are as follows:
step S41: selecting an MEC server that satisfies the time delay and resource constraint conditions, sequentially calculating the weighted sum of the node migration time delay and the resource condition as a fitness function, and taking it as the global optimal particle;
step S42: selecting the candidate server node nearest to the node of the current iteration as a new particle;
step S43: calculating the weighted sum of the node migration delay and the resource condition, and comparing whether its fitness function value is higher than that of the global optimal particle; if so, updating the optimal particle, otherwise leaving it unchanged;
step S44: repeating steps S42-S43 until the global optimal particle converges, that is, until it remains unchanged for more than 8 consecutive iterations; this node is the final migration target node;
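A minimal single-particle sketch of steps S41-S44 (the scoring function, the one-dimensional node positions, and the "higher fitness is better" convention are assumptions for illustration; only the stall limit of 8 comes from step S44):

```python
def fitness(node, w_delay=0.5, w_res=0.5):
    """Weighted score combining low migration delay and ample free resources
    (the weighting and sign convention are assumptions)."""
    return w_res * node["free_resources"] - w_delay * node["migration_delay"]

def nearest_neighbour(current, candidates):
    """S42: pick the unvisited node closest to the current particle."""
    return min(candidates, key=lambda n: abs(n["pos"] - current["pos"]))

def select_target(candidates, stall_limit=8):
    """S41-S44: track the best-scoring feasible node; stop once the global
    best has stayed unchanged for stall_limit consecutive iterations."""
    best = max(candidates, key=fitness)            # S41: initial global best
    current, stall = best, 0
    remaining = [n for n in candidates if n is not best]
    while stall < stall_limit and remaining:
        current = nearest_neighbour(current, remaining)  # S42: new particle
        remaining.remove(current)
        if fitness(current) > fitness(best):             # S43: compare fitness
            best, stall = current, 0
        else:
            stall += 1
    return best                                          # S44: migration target

nodes = [
    {"pos": 0, "migration_delay": 4, "free_resources": 2},
    {"pos": 1, "migration_delay": 1, "free_resources": 8},
    {"pos": 2, "migration_delay": 2, "free_resources": 5},
]
target = select_target(nodes)  # the low-delay, resource-rich node wins
```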
as will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. Any simple modification, equivalent change or refinement of the above embodiments in accordance with the technical essence of the present invention remains within the protection scope of the technical solution of the present invention.
Claims (5)
1. A virtualization network function migration method based on resource occupation and time delay sensitivity is characterized by comprising the following steps:
s1, calculating the current residual resource condition of a server of each edge node in an MEC, and adding the MEC server with insufficient resource residue into a preset waiting sequencing queue;
s2, calculating the request delay of each user and the occupation condition of user resource requirements associated with each MEC server in the queue waiting for sorting;
s3, sequencing each user request associated with the MEC server according to an intelligent matching priority rule;
s4, selecting an MEC server meeting preset time delay constraint and resource constraint conditions based on an improved particle swarm intelligent algorithm, sequentially calculating the weighted sum of node migration time delay and resource conditions as a fitness function, selecting a migration path with the shortest time delay, and migrating the corresponding user request to a target MEC server;
the step S4 specifically includes:
step S41: selecting an MEC server meeting the time delay and resource constraint conditions, sequentially calculating the weighted sum of the node migration time delay and the resource condition as a fitness function, and taking it as the global optimal particle;
step S42: selecting the candidate server node nearest to the node of the current iteration as a new particle;
step S43: calculating the weighted sum of the node migration delay and the resource condition, and comparing whether its fitness function value is higher than that of the global optimal particle; if so, updating the optimal particle, otherwise leaving it unchanged;
step S44: repeating steps S42-S43 until the global optimal particle converges, namely, the global optimal particle remains unchanged for more than 8 consecutive iterations; this node is the final migration target node;
and S5, determining the final user request distribution condition according to the obtained optimal solution.
2. The method of claim 1, wherein the resource constraint is that the resource demand of a virtual network function instance does not exceed the resource capacity of the underlying edge network node to which it is mapped, and a given VNF instance can only be deployed on a single edge network node: Σ_k A_ik · mem_k ≤ MEM_i, Σ_k A_ik · cpu_k ≤ CPU_i, Σ_i A_ik = 1, wherein A_ik indicates whether the k-th VNF is placed on the i-th MEC, mem_k and cpu_k are the memory and CPU resources required by the k-th VNF, and MEM_i and CPU_i are the memory and CPU capacity of the i-th MEC.
3. The method according to claim 1, wherein the latency constraint is that the waiting delay of a migrated user equipment request must be less than the sum of the link delay caused by migration and the delay caused by the distance between the device and the physical edge node: T_n^wait < Σ_(i,j) y_ij · d_ij + d_n, wherein d_ij is the delay of link (i,j), y_ij indicates whether link (i,j) is enabled, and d_n is the delay between the user request and the MEC.
4. The method for migrating the virtualized network function based on resource occupation and delay sensitivity according to claim 1, wherein the intelligent matching priority rule specifically comprises:
(1) When the residual resource capacity of the edge node is insufficient, partial user requests need to be disassociated;
(2) Sequencing the user requests according to the set priority formula;
(3) Associating the user request with the appropriate edge node;
(4) And outputting the migration condition requested by the user.
5. The method of claim 1, wherein the priority formula represents the product of the delay required by a user equipment request and the resource occupation generated by the request: P_n = T_n × R_n.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110259716.3A CN113347016B (en) | 2021-03-10 | 2021-03-10 | Virtualization network function migration method based on resource occupation and time delay sensitivity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110259716.3A CN113347016B (en) | 2021-03-10 | 2021-03-10 | Virtualization network function migration method based on resource occupation and time delay sensitivity |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113347016A CN113347016A (en) | 2021-09-03 |
CN113347016B true CN113347016B (en) | 2022-10-04 |
Family
ID=77467749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110259716.3A Active CN113347016B (en) | 2021-03-10 | 2021-03-10 | Virtualization network function migration method based on resource occupation and time delay sensitivity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113347016B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109644199A (en) * | 2016-10-18 | 2019-04-16 | Huawei Technologies Co., Ltd. | Virtual network state management in mobile edge computing
- CN111130904A (en) * | 2019-12-30 | 2020-05-08 | Chongqing University of Posts and Telecommunications | Virtual network function migration optimization algorithm based on deep deterministic policy gradient
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- DE112018007463T5 (en) * | 2018-04-11 | 2020-12-24 | Intel IP Corporation | Flexible multi-access edge computing (MEC) service consumption through host zoning
- JP7125601B2 (en) * | 2018-07-23 | 2022-08-25 | Fujitsu Limited | Live migration control program and live migration control method
- CN110275758B (en) * | 2019-05-09 | 2022-09-30 | Chongqing University of Posts and Telecommunications | Intelligent migration method for virtual network function
- 2021-03-10: application CN202110259716.3A granted as patent CN113347016B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN109644199A (en) * | 2016-10-18 | 2019-04-16 | Huawei Technologies Co., Ltd. | Virtual network state management in mobile edge computing
- CN111130904A (en) * | 2019-12-30 | 2020-05-08 | Chongqing University of Posts and Telecommunications | Virtual network function migration optimization algorithm based on deep deterministic policy gradient
Non-Patent Citations (4)
Title |
---|
Clustered Virtualized Network Functions Resource Allocation based on Context-Aware Grouping in 5G Edge Networks; S. Song et al.; IEEE Transactions on Mobile Computing; 2020-05-01; vol. 19, no. 5, pp. 1072-1083 *
Research on collaborative optimization of task scheduling and resource allocation in mobile edge computing networks; Li Yuqing; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2020-06-15; no. 6, pp. I136-56 *
Delay-aware resource scheduling optimization method in network function virtualization; Xu Ran et al.; Journal of Computer Research and Development; 2018-04-15; pp. 68-77 *
Resource optimization algorithm for converged NFV and SDN oriented to multi-service requirements; Zhu Xiaorong et al.; Journal on Communications; 2018-11-25; pp. 58-66 *
Also Published As
Publication number | Publication date |
---|---|
CN113347016A (en) | 2021-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210149737A1 (en) | Method for fast scheduling for balanced resource allocation in distributed and collaborative container platform environment | |
US10924535B2 (en) | Resource load balancing control method and cluster scheduler | |
CN107273185B (en) | Load balancing control method based on virtual machine | |
CN110134495B (en) | Container cross-host online migration method, storage medium and terminal equipment | |
Ge et al. | GA-based task scheduler for the cloud computing systems | |
CN114138486B (en) | Method, system and medium for arranging containerized micro-services for cloud edge heterogeneous environment | |
US20170142177A1 (en) | Method and system for network dispatching | |
CN109788046B (en) | Multi-strategy edge computing resource scheduling method based on improved bee colony algorithm | |
WO2018000991A1 (en) | Data balancing method and device | |
US11966792B2 (en) | Resource processing method of cloud platform, related device, and storage medium | |
CN108667657B (en) | SDN-oriented virtual network mapping method based on local feature information | |
CN110933139A (en) | System and method for solving high concurrency of Web server | |
CN104092756A (en) | Cloud storage system resource dynamic allocation method based on DHT mechanism | |
CN105491150A (en) | Load balance processing method based on time sequence and system | |
WO2020134133A1 (en) | Resource allocation method, substation, and computer-readable storage medium | |
CN110995470A (en) | Block chain-based network function distribution method and device | |
CN102480502B (en) | I/O load equilibrium method and I/O server | |
CN104539744A (en) | Two-stage media edge cloud scheduling method and two-stage media edge cloud scheduling device | |
CN115134371A (en) | Scheduling method, system, equipment and medium containing edge network computing resources | |
CN113448714B (en) | Computing resource control system based on cloud platform | |
CN114691372A (en) | Group intelligent control method of multimedia end edge cloud system | |
CN112130927B (en) | Reliability-enhanced mobile edge computing task unloading method | |
CN113329432A (en) | Edge service arrangement method and system based on multi-objective optimization | |
CN107948330A (en) | Load balancing based on dynamic priority under a kind of cloud environment | |
Malazi et al. | Distributed service placement and workload orchestration in a multi-access edge computing environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||