CN110058934B - Method for making optimal task unloading decision in large-scale cloud computing environment - Google Patents

Method for making optimal task unloading decision in large-scale cloud computing environment Download PDF

Info

Publication number
CN110058934B
CN110058934B (application CN201910336787.1A)
Authority
CN
China
Prior art keywords
task
node
energy consumption
cloud
time delay
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910336787.1A
Other languages
Chinese (zh)
Other versions
CN110058934A (en)
Inventor
徐九韵
郝壮远
张超
李苗
孙姗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN201910336787.1A priority Critical patent/CN110058934B/en
Publication of CN110058934A publication Critical patent/CN110058934A/en
Application granted granted Critical
Publication of CN110058934B publication Critical patent/CN110058934B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/4893Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues taking into account power or heat criteria
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention aims to provide a method for obtaining an optimal task offloading decision in a cloud and fog computing environment, taking both time delay and energy consumption into account so as to minimize the system overhead. The method mainly comprises the following three steps: A. mathematical modeling of the system; B. formulating the problem as a mixed integer programming problem; C. solving the mixed integer programming problem with a branch-and-bound algorithm. The invention considers the scenario in which cloud nodes and fog nodes exist simultaneously, addresses the case in which multiple tasks need offloading decisions, and takes energy consumption and time delay into account together so as to minimize the overhead of the mobile terminal.

Description

Method for making optimal task unloading decision in large-scale cloud computing environment
Background
With the development of mobile communication technology and the popularization of intelligent terminals, new network services and applications are emerging constantly, which places new demands on service quality and communication delay. Although the processors of mobile devices are increasingly powerful, they still cannot handle computation-intensive tasks in a short time. In addition, local processing of these applications faces another problem: rapid drain on battery power. This is especially true for applications requiring real-time responses, such as online multiplayer VR games. In conventional cloud computing architectures, cloud servers are typically concentrated geographically in one or a few data centers, which can create large transmission delays. These problems seriously affect application efficiency and user experience. To solve the above problems, fog computing (also referred to as edge computing) has been proposed in industry. In short, mobile edge computing offloads computing tasks over a wireless network to local fog devices (for example, routers and base stations with some computing power) rather than sending them to a remote cloud server. The purpose of edge computing is not to completely replace cloud computing but to remedy some of its shortcomings; edge computing merely extends cloud computing to the network edge. In summary, "fog" is a cloud service that is closer to the end user. Compared with the traditional approach, edge computing can significantly improve the user experience. When a task cannot be processed by the mobile device, or its processing cost (time delay and energy consumption) is excessive, the computing task can be offloaded to a fog server or a cloud server, and the result is returned to the mobile terminal after the computation is completed. Today, with 5G technology about to enter commercial use, mobile edge computing will be able to exert its power to the greatest extent: it will not only enable new mobile applications but also greatly improve the quality of existing ones.
In this context, how to make the offloading decision becomes critical. That is, which tasks does the mobile terminal need to offload to the "cloud"? Which should be offloaded to the "fog"? Which are cheapest to process locally? An improper offloading decision not only fails to reduce overhead, but instead increases power consumption and latency. The present invention establishes a mathematical model of the cloud and fog computing architecture, converts it into a mixed integer programming problem, and designs an algorithm to solve it. Existing similar techniques consider only scenarios with a cloud server, whereas the present invention solves the case in which cloud servers and fog servers exist simultaneously.
Disclosure of Invention
The invention aims to provide a method for obtaining an optimal task offloading decision in a cloud and fog computing environment, taking both time delay and energy consumption into account so as to minimize the system overhead. The method for obtaining the optimal task offloading decision mainly comprises the following three steps:
A. Mathematical modeling of the system: assume that n tasks in a cell need offloading decisions. In the cloud and fog computing system architecture, each computing task has three execution modes: local execution, offloading to a fog node for execution, and offloading to a cloud node for execution. For the nth task, we define its energy consumption in each of the three cases, its time consumption in each of the three cases, and the weighted sum of time consumption and energy consumption in each of the three cases; the weighted sum expresses the total overhead of the chosen offloading decision. Throughout this patent, the subscript n uniformly denotes the nth computing task and is not explained again below. The energy consumption and time delay in the three cases are modeled first.
(1) Overhead of local execution
(2) Overhead of offloading to the fog node for computation
When the terminal device chooses to offload the task to the fog node, the time delay consists of two parts: the task transmission delay and the fog node processing delay (the delay for downloading the computation result is negligible). The energy consumption likewise consists of two parts: the task transmission energy consumption and the idle-state energy consumption of the terminal device.
(3) Overhead of offloading to the cloud node for execution
Similar to the mathematical model for offloading to the fog node, both the energy consumption and the time delay consist of two components; however, an additional transmission delay t is introduced when offloading to the cloud node for computation.
D_n denotes the number of CPU cycles required to complete the computing task. B_n denotes the size of the computing task that needs to be transferred. P_d, P_t, and P_i denote the power of the terminal device when executing a computing task, when uploading a computing task, and when idle, respectively. f_e and f_c denote the CPU frequencies of the fog node and the cloud node, respectively; the CPU frequency of the terminal device and the upload bandwidth allocated to the terminal device are the remaining model parameters.
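For concreteness, the overhead model described above can be written out as follows; the symbol names f_d (terminal CPU frequency), r_n (allocated upload rate), alpha and beta (delay and energy weights), and the superscripts l, f, c (local, fog, cloud) are notation assumed for this sketch rather than the patent's original symbols.

```latex
% Per-task overhead model (sketch; the notation is assumed, see the note above)

% (1) Local execution: D_n cycles on the terminal device at frequency f_d.
T_n^{l} = \frac{D_n}{f_d}, \qquad E_n^{l} = P_d \,\frac{D_n}{f_d}

% (2) Offloading to the fog node: transmission of B_n at rate r_n, then
%     processing at frequency f_e; the terminal uploads at power P_t and
%     idles at power P_i while the fog node computes.
T_n^{f} = \frac{B_n}{r_n} + \frac{D_n}{f_e}, \qquad
E_n^{f} = P_t\,\frac{B_n}{r_n} + P_i\,\frac{D_n}{f_e}

% (3) Offloading to the cloud node: as (2), plus the extra transmission delay t
%     (here the terminal is assumed to idle during t as well).
T_n^{c} = \frac{B_n}{r_n} + t + \frac{D_n}{f_c}, \qquad
E_n^{c} = P_t\,\frac{B_n}{r_n} + P_i\left(t + \frac{D_n}{f_c}\right)

% Weighted overhead of task n under mode x in {l, f, c}; alpha weights delay,
% beta weights energy, matching the tuning example given later in the text.
K_n^{x} = \alpha\, T_n^{x} + \beta\, E_n^{x}
```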
B. Formulating the problem as a mixed integer programming problem: the objective is to minimize the sum of the weighted overheads of the n tasks, subject to:
C1: the total upload bandwidth allocated across the n devices is at most C;
C2: the upload bandwidth allocated to each device cannot be negative;
C3: each terminal device selects exactly one of the three strategies (local execution, offloading to the fog node, or offloading to the cloud node), denoted by 0, 1, and 2, respectively.
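Using the notation assumed in the sketch above, plus x_n for the decision variable of task n, r_n for its allocated upload bandwidth, N for the number of tasks, and C for the total uplink bandwidth (all of which are assumptions), the resulting mixed integer program could be stated compactly as:

```latex
% Mixed integer program (sketch; notation assumed as described above)
\begin{aligned}
\min_{\{x_n\},\,\{r_n\}} \quad & \sum_{n=1}^{N} K_n^{(x_n)}
  && \text{where } K_n^{(0)} = K_n^{l},\; K_n^{(1)} = K_n^{f},\; K_n^{(2)} = K_n^{c} \\
\text{s.t. C1: } & \sum_{n=1}^{N} r_n \le C          && \text{total uplink bandwidth} \\
\text{C2: }      & r_n \ge 0, \quad n = 1,\dots,N    && \text{non-negative allocation} \\
\text{C3: }      & x_n \in \{0,1,2\}, \quad n = 1,\dots,N && \text{exactly one strategy per task}
\end{aligned}
```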
C. Solving with a branch-and-bound algorithm: a branch-and-bound algorithm function is designed and implemented in a programming language such as Python or MATLAB; the mixed integer programming model obtained by modeling, together with the sizes of the n tasks, is passed into this function to obtain the minimum overhead and the corresponding offloading decision.
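As an illustration of this step, below is a minimal Python sketch: the per-task weighted overheads are computed first, and a branch-and-bound search then picks one of the three strategies for every task. The cost expressions, all symbol names, and the use of a simple cap max_offloaded in place of the shared-bandwidth constraint C1 are assumptions made for this sketch; it is not the patent's actual implementation.

```python
import math

# Execution modes: 0 = local, 1 = offload to the fog node, 2 = offload to the cloud node.
LOCAL, FOG, CLOUD = 0, 1, 2

def task_costs(D_n, B_n, *, P_d, P_t, P_i, f_d, f_e, f_c, r_n, t, alpha, beta):
    """Weighted overhead of one task under each of the three modes
    (a sketch of the model in the description; all symbol names are assumed)."""
    # Local execution: compute D_n cycles on the terminal device.
    T_l = D_n / f_d
    E_l = P_d * T_l
    # Fog: upload B_n at rate r_n, then the fog node computes; the terminal idles meanwhile.
    T_f = B_n / r_n + D_n / f_e
    E_f = P_t * (B_n / r_n) + P_i * (D_n / f_e)
    # Cloud: as fog, plus an extra transmission delay t to reach the cloud node.
    T_c = B_n / r_n + t + D_n / f_c
    E_c = P_t * (B_n / r_n) + P_i * (t + D_n / f_c)
    return [alpha * T + beta * E for T, E in ((T_l, E_l), (T_f, E_f), (T_c, E_c))]

def branch_and_bound(costs, max_offloaded):
    """Minimise the total overhead over decisions x_n in {0, 1, 2}, allowing at
    most `max_offloaded` tasks to offload (a simplified stand-in for the
    shared-bandwidth constraint C1)."""
    n = len(costs)
    best_cost, best_plan = math.inf, None
    # Lower bound for the tasks not yet decided: their unconstrained per-task minima.
    suffix_min = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_min[i] = suffix_min[i + 1] + min(costs[i])

    def search(i, plan, cost_so_far, offloaded):
        nonlocal best_cost, best_plan
        if cost_so_far + suffix_min[i] >= best_cost:
            return                              # prune: bound cannot beat the incumbent
        if i == n:
            best_cost, best_plan = cost_so_far, plan[:]
            return
        for mode in (LOCAL, FOG, CLOUD):
            if mode != LOCAL and offloaded >= max_offloaded:
                continue                        # offloading here would exceed the budget
            plan.append(mode)
            search(i + 1, plan, cost_so_far + costs[i][mode],
                   offloaded + (mode != LOCAL))
            plan.pop()

    search(0, [], 0.0, 0)
    return best_cost, best_plan
```

A faithful implementation would instead solve a relaxation that also allocates the continuous upload bandwidths at every node of the search tree, or call an off-the-shelf routine such as the MATLAB BNB20_new() function mentioned in the detailed description.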
Compared with the prior art, the invention has the following remarkable advantages:
1. It considers a scenario in which cloud nodes and fog nodes coexist.
2. Branch-and-bound is used for solving, which is faster than the traditional exhaustive search.
3. Parameter adjustment is flexible: different environments can be simulated by setting different parameter values.
For example, in an application scenario that is not sensitive to delay but has strict energy-consumption requirements, the energy weight β can be increased and the delay weight α decreased.
Drawings
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a comparison of the solution time of the proposed method with that of the traditional exhaustive method.
Fig. 3 is the ratio of the system overhead corresponding to the true optimal decision to the system overhead corresponding to the optimal offloading decision derived by the algorithm.
Fig. 4 shows the system overhead obtained by the method of this patent for computing tasks in different size ranges.
Detailed Description
Embodiments of the present invention will be described below with reference to the accompanying drawings. Fig. 1 is the general flow chart of the invention; the specific implementation is as follows:
A. Perform mathematical modeling of the cloud and fog computing architecture: obtain the constant parameters required by the mathematical model using existing techniques and substitute them into the mathematical model of the system built above.
B. Formulate the mathematical model obtained from the modeling as a mixed integer programming problem, according to the problem to be solved.
C. Implement the branch-and-bound algorithm in a programming language: the algorithm is implemented in a high-level programming language such as Java or Python, or by means of the third-party open-source MATLAB library function BNB20_new().
D. Solve the mixed integer programming problem for the given computing-task vector: according to the function interface, express the mixed integer programming model and the computing-task vector in the form required by the chosen high-level language, and then pass them to the algorithm for solution.
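Continuing the Python sketch given under step C of the disclosure above, steps C and D of this embodiment might be wired together as follows; every numeric value and the (D_n, B_n) task vector are illustrative assumptions, not the patent's actual interface.

```python
# Step D sketch: build the computing-task vector, evaluate the assumed cost
# model from step C, and pass both to the branch-and-bound solver defined there.
tasks = [(4e8, 3e6), (8e8, 6e6), (2e8, 1e6)]      # (D_n CPU cycles, B_n bits), illustrative
params = dict(P_d=0.9, P_t=1.3, P_i=0.1,          # terminal power: computing / uploading / idle (W)
              f_d=1e9, f_e=5e9, f_c=10e9,         # CPU frequency: terminal / fog / cloud (Hz)
              r_n=2e6, t=0.05,                    # upload rate (bit/s) and extra cloud delay (s)
              alpha=0.5, beta=0.5)                # delay weight / energy weight
costs = [task_costs(d, b, **params) for d, b in tasks]
min_cost, plan = branch_and_bound(costs, max_offloaded=2)
print(min_cost, plan)                             # plan[k]: 0 = local, 1 = fog, 2 = cloud
```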

Claims (1)

1. A method for obtaining an optimal task offloading decision in a cloud computing environment, comprising the following three steps:
A. Mathematical modeling of the system: assuming that n tasks in a cell need offloading decisions, in the cloud and fog computing system architecture each computing task has three execution modes: local execution, offloading to a fog node for execution, and offloading to a cloud node for execution; for the nth task, the energy consumption in the three cases, the time consumption in the three cases, and the weighted sums of time consumption and energy consumption in the three cases are defined respectively, the weighted sum expressing the total overhead of the chosen offloading decision; the subscript n uniformly denotes the nth computing task; the energy consumption and the time delay in the three cases are modeled first;
(1) Overhead of local execution
(2) Overhead of offloading to the fog node for computation
When the terminal device chooses to offload the task to the fog node, the time delay consists of two parts, namely the task transmission delay and the fog node processing delay, while the delay for downloading the computation result is negligible; the energy consumption also consists of two parts, namely the task transmission energy consumption and the idle-state energy consumption of the terminal device;
(3) Overhead of offloading to the cloud node for execution
Similar to the mathematical model for offloading to the fog node, both the energy consumption and the time delay consist of two parts; however, an additional transmission delay t is introduced when offloading to the cloud node for computation;
D_n denotes the CPU cycles required to complete the computing task; B_n denotes the size of the computing task to be transferred; P_d, P_t, and P_i denote the power of the terminal device when executing a computing task, uploading a computing task, and idle, respectively; f_e and f_c denote the CPU frequencies of the fog node and the cloud node, respectively, the CPU frequency of the terminal device and the upload bandwidth allocated to the terminal device being the remaining parameters;
B. Formulating the problem as a mixed integer programming problem: the objective is to minimize the sum of the weighted overheads of the n tasks, subject to:
C1: the total upload bandwidth allocated across the n devices is at most C;
C2: the upload bandwidth allocated to each device cannot be negative;
C3: each terminal device selects exactly one of the three strategies, namely local execution, offloading to the fog node, or offloading to the cloud node, denoted by 0, 1, and 2, respectively;
C. Solving with a branch-and-bound algorithm: a branch-and-bound algorithm function is designed and implemented in a programming language such as Python or MATLAB, and the mixed integer programming model obtained by modeling, together with the sizes of the n tasks, is passed into this function to obtain the minimum overhead and the corresponding offloading decision.
CN201910336787.1A 2019-04-25 2019-04-25 Method for making optimal task unloading decision in large-scale cloud computing environment Active CN110058934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910336787.1A CN110058934B (en) 2019-04-25 2019-04-25 Method for making optimal task unloading decision in large-scale cloud computing environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910336787.1A CN110058934B (en) 2019-04-25 2019-04-25 Method for making optimal task unloading decision in large-scale cloud computing environment

Publications (2)

Publication Number Publication Date
CN110058934A CN110058934A (en) 2019-07-26
CN110058934B (en) 2024-07-09

Family

ID=67320684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910336787.1A Active CN110058934B (en) 2019-04-25 2019-04-25 Method for making optimal task unloading decision in large-scale cloud computing environment

Country Status (1)

Country Link
CN (1) CN110058934B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124531B (en) * 2019-11-25 2023-07-28 哈尔滨工业大学 Method for dynamically unloading calculation tasks based on energy consumption and delay balance in vehicle fog calculation
WO2021217401A1 (en) * 2020-04-28 2021-11-04 重庆邮电大学 Traffic management method and management apparatus
CN112004239B (en) * 2020-08-11 2023-11-21 中国科学院计算机网络信息中心 Cloud edge collaboration-based computing and unloading method and system
CN112416603B (en) * 2020-12-09 2023-04-07 北方工业大学 Combined optimization system and method based on fog calculation
CN112787920B (en) * 2021-03-03 2021-11-19 厦门大学 Underwater acoustic communication edge calculation time delay and energy consumption optimization method for ocean Internet of things

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107493334A (en) * 2017-08-18 2017-12-19 西安电子科技大学 A kind of cloud and mist calculating network framework and the method for strengthening cloud and mist network architecture reliability
CN108540406A (en) * 2018-07-13 2018-09-14 大连理工大学 A kind of network discharging method based on mixing cloud computing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104869151A (en) * 2015-04-07 2015-08-26 北京邮电大学 Business unloading method and system
EP3221789A1 (en) * 2015-10-21 2017-09-27 Deutsche Telekom AG Method and system for code offloading in mobile computing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107493334A (en) * 2017-08-18 2017-12-19 西安电子科技大学 A kind of cloud and mist calculating network framework and the method for strengthening cloud and mist network architecture reliability
CN108540406A (en) * 2018-07-13 2018-09-14 大连理工大学 A kind of network discharging method based on mixing cloud computing

Also Published As

Publication number Publication date
CN110058934A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110058934B (en) Method for making optimal task unloading decision in large-scale cloud computing environment
CN109814951B (en) Joint optimization method for task unloading and resource allocation in mobile edge computing network
CN108880893B (en) Mobile edge computing server combined energy collection and task unloading method
CN107708135B (en) Resource allocation method suitable for mobile edge computing scene
CN111240701B (en) Task unloading optimization method for end-side-cloud collaborative computing
CN110096362B (en) Multitask unloading method based on edge server cooperation
CN110418353B (en) Edge computing server placement method based on particle swarm algorithm
CN111130911B (en) Calculation unloading method based on mobile edge calculation
CN110928654A (en) Distributed online task unloading scheduling method in edge computing system
WO2021244354A1 (en) Training method for neural network model, and related product
CN110489176B (en) Multi-access edge computing task unloading method based on boxing problem
CN111556516B (en) Distributed wireless network task cooperative distribution method facing delay and energy efficiency sensitive service
CN111935677B (en) Internet of vehicles V2I mode task unloading method and system
CN110795235B (en) Method and system for deep learning and cooperation of mobile web
WO2020189844A1 (en) Method for processing artificial neural network, and electronic device therefor
CN112181655A (en) Hybrid genetic algorithm-based calculation unloading method in mobile edge calculation
CN113220356A (en) User computing task unloading method in mobile edge computing
CN112988347B (en) Edge computing unloading method and system for reducing energy consumption and cost sum of system
CN111711962A (en) Cooperative scheduling method for subtasks of mobile edge computing system
CN115473896B (en) Electric power Internet of things unloading strategy and resource allocation optimization method based on DQN algorithm
CN111556576B (en) Time delay optimization method based on D2D _ MEC system
CN113507712B (en) Resource allocation and calculation task unloading method based on alternate direction multiplier
CN110190982B (en) Non-orthogonal multiple access edge computation time and energy consumption optimization based on fair time
CN111680791A (en) Communication method, device and system suitable for heterogeneous environment
CN114745386B (en) Neural network segmentation and unloading method in multi-user edge intelligent scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant