CN112162837A - Software definition-based edge computing scheduling method and system - Google Patents


Info

Publication number: CN112162837A (application CN202010982716.1A); granted as CN112162837B
Authority: CN (China)
Prior art keywords: computing, local, task, denotes, unit
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh); other versions: CN112162837B (en)
Inventors: 罗万明, 周旭, 任勇毛, 覃毅芳, 范鹏飞
Current and original assignee: Computer Network Information Center of CAS (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Computer Network Information Center of CAS

Classifications

    • G06F9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F9/44594 — Unloading (program loading or initiating)
    • G06F9/5083 — Techniques for rebalancing the load in a distributed system
    • G06F9/544 — Buffers; Shared memory; Pipes (interprogram communication)
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • G06F2209/486 — Scheduler internals (indexing scheme relating to G06F9/48)
    • Y02D30/70 — Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multi Processors (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a software-defined edge computing scheduling method and system. The method comprises the following steps: 1) a local computing unit collects a local computing task i and uploads it to a distributed controller; 2) the distributed controller determines, from the value of the tuple (v_L[t], v_C[t]), whether task i performs local computation or is offloaded to an edge computing service node, where v_L[t] = 1 denotes that in the t-th time slot task i is executed locally and v_C[t] = 1 denotes that in the t-th time slot task i is offloaded to an edge computing service node for execution. A set of probability parameters {f_k(i, m, n)}, k = 1, 2, 3, 4, is introduced to determine the value of the tuple (v_L[t], v_C[t]), where Σ_{k=1}^{4} f_k(i, m, n) = 1 over the state space S = {0,1,…,Q} × {0,1,…,M} × {0,1,…,N-1}; Q denotes the maximum capacity of the task queue buffer, M denotes the number of data packets contained in task i, and N denotes the number of time slots required to compute task i locally.

Description

Software definition-based edge computing scheduling method and system
Technical Field
The invention belongs to the field of edge computing of computer networks, and particularly relates to an edge computing scheduling method and system based on software definition.
Background
Since the beginning of the 21st century, the continuous development and evolution of information network technology have brought new opportunities to fields such as basic communication, finance, and traditional manufacturing, and have greatly promoted cross-industry integration. With the rapid development of the Internet of Things and the popularization of wireless networks, the era of the interconnection of everything has arrived, and the number of network edge devices has grown rapidly, so that the data generated by these devices has reached the zettabyte (ZB) level. In the era of centralized big-data processing with the cloud computing model at its core, the key technologies of that era cannot efficiently process the data generated by edge devices, mainly for the following reasons: 1) linearly increasing centralized cloud computing power cannot match the explosively growing mass of edge data; 2) transmitting mass data from network edge devices to the cloud center sharply increases the load on network transmission bandwidth and causes long network delays; 3) network edge data involves personal privacy, so privacy and security problems become more prominent; 4) network edge devices with limited electric energy consume large amounts of power transmitting data to the cloud center. In addition, the protocol system supported by existing network equipment is huge and highly complex, which limits the technical development of IP networks and cannot meet current application trends such as cloud computing, big data, and server virtualization; meanwhile user demand for traffic keeps expanding, new kinds of services keep appearing, and network operation and maintenance costs rise.
The emergence of Software-Defined Networking (SDN) and Edge Computing (EC) provides the basis for an effective solution. SDN maintains a global view of the network topology and manages data-flow forwarding through a controller, enabling centralized management of devices and data flows; it simplifies data-plane transmission, and its unified southbound and northbound interfaces make the network programmable while improving architectural extensibility. Edge computing, as an emerging computing model, deploys computing resources close to the data source in the form of distributed computing nodes; it can provide stable and timely services that cloud computing cannot, and is mainly applied in scenarios with strict latency and data-security requirements.
Disclosure of Invention
In order to overcome the limited raw-data storage and computing capacity of traditional networks, the invention provides a software-defined edge computing scheduling method and system. The system combines layered cloud, edge computing, and content-aware caching technologies and, under the SDN framework, designs an edge computation offloading scheme that takes different scenarios and service requirements into account. The invention makes full use of SDN's centralized control of the network and performs a globally optimal selection among different scheduling and offloading schemes to obtain the best performance.
The technical scheme of the invention is as follows:
a software-defined-based edge computing scheduling method comprises the following steps:
1) local computing unit collects local computing tasks ithGenerating a calculation task request and uploading the calculation task request to the distributed controller;
2) using a doublet (v)L[t],vC[t]) Representing a computation task scheduling decision of the t-th time slot; wherein v isL[t],vC[t]∈{0,1},vL[t]1 denotes the time slot to be calculated ithIs executed in the local computing unit, vC[t]1 denotes the time slot to be calculated ithUnloading to an edge computing service node for execution; i.e. the optional decision scheme v { (v)L[t],vC[t]) L (0,1), (1,0), (1,1), (0,0) }; introducing a set of probability parameters
Figure BDA0002688132540000021
For determining doublets (v)L[t],vC[t]) Taking the value of (A); wherein
Figure BDA0002688132540000022
In the probability parameter
Figure BDA0002688132540000023
Four possible scheduling decisions are represented: (0,1), (1,0), (1,1), (0, 0); state space S {0,1, ·, Q } × {0,1, ·, M } × {0,1,. and N-1}, where "x" denotes a cartesian product, Q denotes a maximum capacity of a task queue buffer, and M denotes a compute task ithThe number of data packets contained, N represents the calculation task ithCalculating the number of time slots required by a local calculation unit;
3) distributed controller based on binary group (v)L[t],vC[t]) To determine the computing task ithExecuting in the local computing unit or unloading to the edge computing service node.
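As a rough illustration of step 2), the sketch below enumerates a small state space S and attaches a probability distribution over the four decision tuples to every state. The sizes Q, M, N, the table values, and all identifiers are assumptions made for the example, not values from the patent.

```python
import itertools
import random

# Illustrative sizes (assumed): buffer capacity Q, packets per task M, local slots N.
Q, M, N = 4, 3, 2
DECISIONS = [(0, 1), (1, 0), (1, 1), (0, 0)]  # k = 1..4: offload, local, both, hold

# State space S = {0..Q} x {0..M} x {0..N-1} (Cartesian product).
S = list(itertools.product(range(Q + 1), range(M + 1), range(N)))

# f[s] is a distribution over the four decisions for state s; here uniform,
# except that an empty buffer (i == 0) forces the "hold" decision (0, 0).
f = {s: ([0.0, 0.0, 0.0, 1.0] if s[0] == 0 else [0.25] * 4) for s in S}

def schedule(state):
    """Sample one decision tuple (v_L[t], v_C[t]) for this slot according to f."""
    return random.choices(DECISIONS, weights=f[state], k=1)[0]

# Every f[s] is a proper probability distribution.
assert all(abs(sum(w) - 1.0) < 1e-9 for w in f.values())
```

Under this layout a scheduler only has to look up the distribution for the current (queue length, packets, slots) state and sample once per time slot.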
Further, the value of the tuple (v_L[t], v_C[t]) is determined as follows: when both the local computing unit and the offload transmission unit are idle, two computing tasks are selected, one executed in the local computing unit and the other offloaded to the edge computing service node for execution; in this case Σ_{k=1}^{4} f_k(i, m, n) = 1, where i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
Further, the value of the tuple (v_L[t], v_C[t]) is determined as follows: when the local CPU is idle and the offload transmission unit is occupied, f_1(i, m, n) = f_3(i, m, n) = 0 and f_2(i, m, n) + f_4(i, m, n) = 1.
Further, the value of the tuple (v_L[t], v_C[t]) is determined as follows: when the local CPU is occupied and the offload transmission unit is idle, f_2(i, m, n) = f_3(i, m, n) = 0 and f_1(i, m, n) + f_4(i, m, n) = 1.
Further, the value of the tuple (v_L[t], v_C[t]) is determined as follows: when both the local CPU and the offload transmission unit are occupied, f_k(i, m, n) = 0 for k = 1, 2, 3 and f_4(i, m, n) = 1.
the edge computing scheduling system based on software definition is characterized by comprising a plurality of local computing units, a plurality of edge computing service nodes and a distributed controller; each local computing unit is connected with each edge computing service node and the distributed controller through a domain controller, and each edge computing service node is connected with the distributed controller through a network;
the local computing unit is used for collecting local computing tasks ithGenerating a calculation task request and uploading the calculation task request to the distributed controller;
the distributed controller uses a binary group (v)L[t],vC[t]) Representing a computation task scheduling decision for the t-th time slot, based on the tuple (v)L[t],vC[t]) To determine the computing task ithExecuting at the local computing unit or unloading to the edge computing service node for execution; wherein v isL[t],vC[t]∈{0,1},vL[t]1 denotes the time slot to be calculated ithIs executed in the local computing unit, vC[t]1 denotes the time slot to be calculated ithUnloading to an edge computing service node for execution; i.e. the optional decision scheme v { (v)L[t],vC[t]) L (0,1), (1,0), (1,1), (0,0) }; introducing a set of probability parameters
Figure BDA0002688132540000031
For determining doublets (v)L[t],vC[t]) Taking the value of (A); wherein
Figure BDA0002688132540000032
In the probability parameter
Figure BDA0002688132540000033
Four possible scheduling decisions are represented: (0,1), (1,0), (1,1), (0, 0); state space S {0,1, ·, Q } × {0,1, ·, M } × {0,1,. and N-1}, where "x" denotes a cartesian product, Q denotes a maximum capacity of a task queue buffer, and M denotes a compute task ithThe number of data packets contained, N represents the calculation task ithThe number of time slots required is calculated at the local calculation unit.
The architecture is shown in fig. 1. The scheme forms a working mode of local acquisition, edge computing, and cloud enhancement; by providing a multi-level, multi-region indexing service mode, it can significantly relieve the traffic and computing load in the core network and accelerate the computing process. For an initiated computing task, the decision whether to hand it to a local computing unit or offload it to an edge server is made jointly by the local domain controller and the distributed SDN controller. The system architecture of the invention is divided into five layers: a basic device layer, a data transmission layer, an edge computing service platform, an SDN central control layer, and a cloud service master control platform, which from bottom to top respectively provide the data sources, the transmission media, and the computing and control services. The general computation offloading flow is as follows: the various data collected by the basic device layer enter the internal SDN through the access points (APs) of the various access media; the domain controller and the distributed controller negotiate the task decision; and the domain controller issues the decision instruction to a switch to complete the forwarding of the computing task. Throughout this process, the system architecture guarantees the QoS of the various services.
Compared with the prior art, the invention has the following positive effects:
the random scheduling scheme greatly reduces the calculation delay, optimizes the use and arrangement of local and edge resources and improves the QoS of multi-task calculation.
Drawings
Figure 1 is a diagram of an SDN architecture edge computing offload system.
FIG. 2 is a flow diagram of system architecture computing offload.
Detailed Description
The present invention is described in further detail below with reference to the attached drawings.
The edge computing offload process for local tasks is shown in fig. 2.
1) Request Collection phase
The lowest layer of the system architecture, the basic device layer, is responsible for collecting various kinds of data with various types of sensors; the data enter the internal data transmission layer through their respective access points (APs). In the data transmission layer, the collected data undergo some screening and aggregation processing, and the specific execution logic is issued by the regional SDN controller. For tasks that need to be processed with the help of edge computing services, the regional SDN controller is responsible for collecting their requests and uniformly uploading the requests, containing the task parameters, to the upper-layer distributed controller to await the edge computation offloading decision.
2) Edge computation offload decision
The distributed controller in the framework is mainly responsible for monitoring the server nodes in the edge computing service cluster; it periodically acquires the states of the edge computing nodes in the network it manages, including whether each node is available, its idle-resource information, its task queue length, and so on.
Using a computation offloading algorithm, the distributed controller evaluates the resources required by the requests collected in the first stage together with the states of the edge computing nodes, and determines whether the current request needs computation offloading. If offloading is needed, the distributed controller selects a suitable offloading scheme according to the distribution of the current computing tasks, provides the routing information of the edge computing node that will finally execute the task, and returns this information to the local controller node of the first stage, which executes it in combination with the cyber-physical system (CPS); otherwise, the computing task is computed locally.
3) Task computation and update
The local controller receives the computation offloading decision returned by the distributed controller. If offloading is needed, the local controller parses the information of the chosen edge computing node from the returned result and then transmits the task-related data to the computation offloading server designated by the distributed controller; otherwise, the task is delivered to the local computing unit for computation. After the computation finishes, the local controller receives the final result and delivers it to the cyber-physical system for the corresponding operation; at the same time, the result is written to a log for persistent storage, so that subsequent offline analysis tasks that need the data can use it.
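A minimal sketch of this third stage follows. The decision fields, class names, and execution units are hypothetical identifiers invented for the example; the patent does not specify them.

```python
# Sketch of the local controller acting on the offloading decision:
# offload to the designated edge node or compute locally, then log the result.
class ComputeUnit:
    def __init__(self, name):
        self.name = name

    def execute(self, task):
        return f"{task['id']} computed on {self.name}"

def handle_decision(decision, task, local_unit, log):
    if decision["offload"]:
        unit = decision["edge_node"]   # edge node routed by the distributed controller
    else:
        unit = local_unit              # fall back to local computation
    result = unit.execute(task)
    log.append({"task": task["id"], "result": result})  # persist for offline analysis
    return result

log = []
edge = ComputeUnit("edge-node-1")
local = ComputeUnit("local-unit")
print(handle_decision({"offload": True, "edge_node": edge}, {"id": "t1"}, local, log))
# prints "t1 computed on edge-node-1"
```

The log append stands in for the persistent storage step that feeds later offline analysis.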
The offloading scheme that solves the technical problems of the invention is as follows:
Because computing tasks are usually discrete and the input data stream is independent and random, the scheme models the input data as a Poisson arrival stream and makes offloading decisions for the edge computing tasks of the discrete network through various computation offloading algorithms. The offloading delay considered by the scheme can be divided into five parts: local computation delay (D_Local), data transmission delay (T_TranData), edge computation delay (D_Remote), queue waiting delay (D_Queue), and result return delay (D_Result). A locally collected computing task request is represented by a triple (D_i, C_i, T_i), the quantization result of the locally collected computing task i, where D_i is the data amount of the current computing task i, in KBytes; C_i is the number of CPU clock cycles required by the current computing task i, in cycles; and T_i is the overall delay constraint of the current computing task i, in seconds.
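The triple and the delay components named above can be modeled as below. The field names, numeric values, and the simple additive total are assumptions made for illustration; they are not the patent's delay model.

```python
from dataclasses import dataclass

@dataclass
class TaskRequest:
    data_kbytes: float   # D_i: data amount of the task, in KBytes
    cpu_cycles: int      # C_i: CPU clock cycles required, in cycles
    deadline_s: float    # T_i: overall delay constraint, in seconds

def offload_delay(t_tran, d_remote, d_queue, d_result):
    """Total delay on the offloading path (the local path incurs D_Local instead)."""
    return t_tran + d_remote + d_queue + d_result

task = TaskRequest(data_kbytes=120.0, cpu_cycles=3_000_000, deadline_s=0.5)
total = offload_delay(t_tran=0.05, d_remote=0.10, d_queue=0.02, d_result=0.01)
print(total <= task.deadline_s)  # the T_i constraint is met here: True
```

Comparing such an offloading total against D_Local is the kind of trade-off the scheduling decision has to make per task.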
Random scheduling scheme
To minimize the average delay of each computing task, the scheme provides a random arbitration algorithm that schedules tasks in every time slot. To describe the scheduling strategy, a tuple (v_L[t], v_C[t]) represents the computing-task scheduling decision of the t-th time slot, where v_L[t], v_C[t] ∈ {0, 1} indicate, respectively, that the t-th time slot computes the task locally or offloads it to the edge. There are therefore four possible decision schemes, V = {(v_L[t], v_C[t]) | (0,1), (1,0), (1,1), (0,0)}. A set of probability parameters {f_k(i, m, n)}, k = 1, 2, 3, 4, is introduced, where f_k(i, m, n) ≥ 0 and Σ_{k=1}^{4} f_k(i, m, n) = 1. The state space S = {0,1,…,Q} × {0,1,…,M} × {0,1,…,N-1} is introduced, where "×" denotes the Cartesian product, Q denotes the maximum capacity of the task queue buffer, M denotes the number of data packets contained in a task, and N denotes the number of time slots required by the task in local computation. The index k of the probability parameters ranges over the four possible scheduling decisions: (0,1), (1,0), (1,1), (0,0).
When the local CPU or the transmission unit is idle, a task can be scheduled for local computation or offloaded to the edge. When the task queue buffer is empty there is no task to schedule, so f_k(0, m, n) = 0 for k = 1, 2, 3 and f_4(0, m, n) = 1. Tasks are scheduled in the following four cases, in all of which the task queue buffer is not empty.
Example 1: when both the local CPU and the offload transfer unit are idle, at most two computing tasks can be handled, i.e., one local computation and one computation offload. The task scheduling decision at this time is
Figure BDA0002688132540000056
Wherein, i is 0,1,. Q; m is 0,1,. said, M; n-1, 0, 1. The same goes on.
Figure BDA0002688132540000057
The probability that a scheduling scheme is k when the number of tasks in a task buffer area is i, the number of data packets contained in the tasks is m, and the time slot required by the tasks in local calculation is n is provided, which is an optimization decision problem of a Markov chain.
Example 2: when the local CPU is idle and the offload transfer unit is occupied, the local compute task may be started or held.
The task scheduling decision at this time is
Figure BDA0002688132540000058
Example 3: when the local CPU is occupied and the offload transfer unit is idle, an offload task may be transferred or held. The task scheduling decision at this time is
Figure BDA0002688132540000059
Example 4: when both the local CPU and the offload transfer unit are occupied,
Figure BDA00026881325400000510
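The four examples can be collected into one feasibility rule per resource state. In the sketch below the probabilities within each feasible set are taken uniform purely for illustration; the patent only constrains which decisions may have nonzero probability in each case, and all function names are assumptions.

```python
import random

DECISIONS = {1: (0, 1), 2: (1, 0), 3: (1, 1), 4: (0, 0)}  # offload, local, both, hold

def feasible(cpu_busy, tx_busy, buffer_len):
    """Which scheduling schemes k may have nonzero probability in this state."""
    if buffer_len == 0:
        return [4]              # empty buffer: nothing to schedule
    if not cpu_busy and not tx_busy:
        return [1, 2, 3, 4]     # Example 1: any decision
    if not cpu_busy:
        return [2, 4]           # Example 2: start local or hold
    if not tx_busy:
        return [1, 4]           # Example 3: start offload or hold
    return [4]                  # Example 4: hold only

def decide(cpu_busy, tx_busy, buffer_len):
    ks = feasible(cpu_busy, tx_busy, buffer_len)
    return DECISIONS[random.choice(ks)]  # uniform over the feasible set (assumed)

print(decide(cpu_busy=True, tx_busy=True, buffer_len=3))  # always (0, 0)
```

Replacing the uniform choice with optimized probabilities per state is exactly the Markov-chain decision problem the description mentions.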
the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (10)

1. A software-defined edge computing scheduling method, comprising the following steps:
1) a local computing unit collects a local computing task i, generates a computing task request, and uploads the computing task request to a distributed controller;
2) a tuple (v_L[t], v_C[t]) represents the computing-task scheduling decision of the t-th time slot, wherein v_L[t], v_C[t] ∈ {0,1}; v_L[t] = 1 denotes that the task to be computed is executed in the local computing unit in the t-th time slot, and v_C[t] = 1 denotes that the task is offloaded to an edge computing service node for execution; i.e., the optional decision schemes are V = {(v_L[t], v_C[t]) | (0,1), (1,0), (1,1), (0,0)}; a set of probability parameters {f_k(i, m, n)}, k = 1, 2, 3, 4, is introduced to determine the value of the tuple (v_L[t], v_C[t]), wherein f_k(i, m, n) ≥ 0 and Σ_{k=1}^{4} f_k(i, m, n) = 1, the index k ranging over the four possible scheduling decisions (0,1), (1,0), (1,1), (0,0); the state space S = {0,1,…,Q} × {0,1,…,M} × {0,1,…,N-1}, wherein "×" denotes the Cartesian product, Q denotes the maximum capacity of the task queue buffer, M denotes the number of data packets contained in computing task i, and N denotes the number of time slots required to compute task i in the local computing unit;
3) the distributed controller determines, from the value of the tuple (v_L[t], v_C[t]), whether computing task i is executed in the local computing unit or offloaded to an edge computing service node.
2. The method of claim 1, wherein the value of the tuple (v_L[t], v_C[t]) is determined as follows: when the local computing unit and the offload transmission unit are idle, two computing tasks are selected, one executed in the local computing unit and the other offloaded to the edge computing service node for execution; in this case
Σ_{k=1}^{4} f_k(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1; and f_k(i, m, n) is the probability that the scheduling scheme is k when the number of tasks in the task buffer is i, the number of data packets contained in the task is m, and the number of time slots required by the task in local computation is n.
3. The method of claim 1, wherein the value of the tuple (v_L[t], v_C[t]) is determined as follows: when the local CPU is idle and the offload transmission unit is occupied,
f_1(i, m, n) = f_3(i, m, n) = 0 and f_2(i, m, n) + f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
4. The method of claim 1, wherein the value of the tuple (v_L[t], v_C[t]) is determined as follows: when the local CPU is occupied and the offload transmission unit is idle,
f_2(i, m, n) = f_3(i, m, n) = 0 and f_1(i, m, n) + f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
5. The method of claim 1, wherein the value of the tuple (v_L[t], v_C[t]) is determined as follows: when both the local CPU and the offload transmission unit are occupied,
f_k(i, m, n) = 0 for k = 1, 2, 3 and f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
6. A software-defined edge computing scheduling system, characterized in that it comprises a plurality of local computing units, a plurality of edge computing service nodes, and a distributed controller; each local computing unit is connected to the edge computing service nodes and the distributed controller through a domain controller, and each edge computing service node is connected to the distributed controller through the network;
the local computing unit is used to collect a local computing task i, generate a computing task request, and upload the computing task request to the distributed controller;
the distributed controller uses a tuple (v_L[t], v_C[t]) to represent the computing-task scheduling decision of the t-th time slot and determines, from the value of the tuple (v_L[t], v_C[t]), whether computing task i is executed in the local computing unit or offloaded to an edge computing service node; wherein v_L[t], v_C[t] ∈ {0,1}; v_L[t] = 1 denotes that the task to be computed is executed in the local computing unit in the t-th time slot, and v_C[t] = 1 denotes that the task is offloaded to an edge computing service node for execution; i.e., the optional decision schemes are V = {(v_L[t], v_C[t]) | (0,1), (1,0), (1,1), (0,0)}; a set of probability parameters {f_k(i, m, n)}, k = 1, 2, 3, 4, is introduced to determine the value of the tuple, wherein f_k(i, m, n) ≥ 0 and Σ_{k=1}^{4} f_k(i, m, n) = 1, with k ranging over the four possible scheduling decisions (0,1), (1,0), (1,1), (0,0); the state space S = {0,1,…,Q} × {0,1,…,M} × {0,1,…,N-1}, wherein "×" denotes the Cartesian product, Q denotes the maximum capacity of the task queue buffer, M denotes the number of data packets contained in computing task i, and N denotes the number of time slots required to compute task i in the local computing unit.
7. The system of claim 6, wherein the distributed controller determines the value of the tuple (v_L[t], v_C[t]) as follows: when the local computing unit and the offload transmission unit are idle, two computing tasks are selected, one executed in the local computing unit and the other offloaded to the edge computing service node for execution; in this case
f_k(i, m, n) ≥ 0 and Σ_{k=1}^{4} f_k(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1; and f_k(i, m, n) is the probability that the scheduling scheme is k when the number of tasks in the task buffer is i, the number of data packets contained in the task is m, and the number of time slots required by the task in local computation is n.
8. The system of claim 6, wherein the distributed controller determines the value of the tuple (v_L[t], v_C[t]) as follows: when the local CPU is idle and the offload transmission unit is occupied,
f_1(i, m, n) = f_3(i, m, n) = 0 and f_2(i, m, n) + f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
9. The system of claim 6, wherein the distributed controller determines the value of the tuple (v_L[t], v_C[t]) as follows: when the local CPU is occupied and the offload transmission unit is idle,
f_2(i, m, n) = f_3(i, m, n) = 0 and f_1(i, m, n) + f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
10. The system of claim 6, wherein the distributed controller determines the value of the tuple (v_L[t], v_C[t]) as follows: when both the local CPU and the offload transmission unit are occupied,
f_k(i, m, n) = 0 for k = 1, 2, 3 and f_4(i, m, n) = 1,
wherein i = 0,1,…,Q; m = 0,1,…,M; n = 0,1,…,N-1.
CN202010982716.1A 2020-09-17 2020-09-17 Edge calculation scheduling method and system based on software definition Active CN112162837B (en)

Priority Applications (1)

Application Number: CN202010982716.1A (granted as CN112162837B) — Priority Date: 2020-09-17 — Filing Date: 2020-09-17 — Title: Edge calculation scheduling method and system based on software definition


Publications (2)

Publication Number — Publication Date
CN112162837A (en) — 2021-01-01
CN112162837B (en) — 2022-08-23

Family

ID=73858173

Family Applications (1)

Application Number: CN202010982716.1A (granted as CN112162837B, Active) — Priority Date: 2020-09-17 — Filing Date: 2020-09-17 — Title: Edge calculation scheduling method and system based on software definition

Country Status (1)

Country Link
CN (1) CN112162837B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268341A (en) * 2021-04-30 2021-08-17 国网河北省电力有限公司信息通信分公司 Distribution method, device, equipment and storage medium of power grid edge calculation task
CN115134243A (en) * 2022-09-02 2022-09-30 北京科技大学 Industrial control task distributed deployment method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10037231B1 (en) * 2017-06-07 2018-07-31 Hong Kong Applied Science and Technology Research Institute Company Limited Method and system for jointly determining computational offloading and content prefetching in a cellular communication system
CN109548155A (en) * 2018-03-01 2019-03-29 重庆大学 A kind of non-equilibrium edge cloud network access of distribution and resource allocation mechanism
CN110798858A (en) * 2019-11-07 2020-02-14 华北电力大学(保定) Distributed task unloading method based on cost efficiency
CN110928654A (en) * 2019-11-02 2020-03-27 上海大学 Distributed online task unloading scheduling method in edge computing system
CN111654712A (en) * 2020-06-22 2020-09-11 中国科学技术大学 Dynamic self-adaptive streaming media multicast method suitable for mobile edge computing scene


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MUHAMMET OGUZ OZCAN et al.: "Remote Debugging for Containerized Applications in Edge Computing Environments", 2019 IEEE International Conference on Edge Computing (EDGE) *
WANG Yan et al.: "Computation Offloading Strategy in Cloud-Assisted Mobile Edge Computing", Computer Engineering *


Also Published As

Publication number Publication date
CN112162837B (en) 2022-08-23

Similar Documents

Publication Publication Date Title
CN112162789A Edge computing random offloading decision method and system based on software definition
Cui et al. A blockchain-based containerized edge computing platform for the internet of vehicles
CN107404523A Cloud platform adaptive resource scheduling system and method
Wu et al. Computation offloading method using stochastic games for software defined network-based multi-agent mobile edge computing
CN112148381A Software definition-based edge computing priority offloading decision method and system
CN112162837B (en) Edge computing scheduling method and system based on software definition
CN110086855B (en) Intelligent Spark task perception scheduling method based on ant colony algorithm
CN112650581A Cloud-edge collaborative task scheduling method for intelligent buildings
CN115103404A (en) Node task scheduling method in computational power network
Wu et al. Optimal deploying IoT services on the fog computing: A metaheuristic-based multi-objective approach
CN113553146A (en) Cloud edge cooperative computing task merging and scheduling method
Ren et al. Multi-objective optimization for task offloading based on network calculus in fog environments
Khelifa et al. Combining task scheduling and data replication for SLA compliance and enhancement of provider profit in clouds
CN111324429B (en) Micro-service combination scheduling method based on multi-generation ancestry reference distance
Chen et al. A3C-based and dependency-aware computation offloading and service caching in digital twin edge networks
CN113703984A (en) SOA (service oriented architecture) -based cloud task optimization strategy method under 5G cloud edge collaborative scene
CN113159539A (en) Joint green energy scheduling and dynamic task allocation method in multilayer edge computing system
Li et al. Task computation offloading for multi-access edge computing via attention communication deep reinforcement learning
Nishanbayev et al. The model of forming the structure of the “cloud” data center
CN115086249B (en) Cloud data center resource allocation method based on deep reinforcement learning
Cui et al. Resource-Efficient DNN Training and Inference for Heterogeneous Edge Intelligence in 6G
Fang et al. A Scheduling Strategy for Reduced Power Consumption in Mobile Edge Computing
Aung et al. Data processing model for mobile IoT systems
Duan et al. Lightweight federated reinforcement learning for independent request scheduling in microgrids
CN114035919A (en) Task scheduling system and method based on power distribution network layered distribution characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant