CN110365753B - Low-latency load distribution method and device for Internet of Things services based on edge computing - Google Patents


Info

Publication number
CN110365753B
CN110365753B (application CN201910568179.3A)
Authority
CN
China
Prior art keywords
task
algorithm
terminal
matrix
edge node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910568179.3A
Other languages
Chinese (zh)
Other versions
CN110365753A (en)
Inventor
邵苏杰
徐思雅
周俊
辛辰
牛旭东
亓峰
丰雷
王徐延
赵轩
郭少勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Nanjing Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
Beijing University of Posts and Telecommunications
Nanjing Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications, Nanjing Power Supply Co of State Grid Jiangsu Electric Power Co Ltd filed Critical Beijing University of Posts and Telecommunications
Priority to CN201910568179.3A priority Critical patent/CN110365753B/en
Publication of CN110365753A publication Critical patent/CN110365753A/en
Application granted granted Critical
Publication of CN110365753B publication Critical patent/CN110365753B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5066 Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/101 Server selection for load balancing based on network conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/4557 Distribution of virtual machine instances; Migration and load balancing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/5017 Task decomposition

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An embodiment of the invention provides a low-latency load distribution method and device for Internet of Things services based on edge computing. The method comprises the following steps: acquiring the task request of each application in each terminal and the computing capacity of each edge node; and inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm optimization model improved with an ant colony algorithm and a semidefinite relaxation model improved with a Gaussian randomization algorithm. In the method and device provided by the embodiment, the ant colony algorithm is applied to improve the particle swarm optimization, reducing the convergence time of the algorithm and improving the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semidefinite relaxation problem, improving the quality of the task allocation result; ultimately, service delay is reduced.

Description

Low-latency load distribution method and device for Internet of Things services based on edge computing
Technical Field
The invention relates to the technical field of the Internet of Things, and in particular to a low-latency load distribution method and device for Internet of Things services based on edge computing.
Background
The electric power Internet of Things applies Internet of Things technology to the smart grid. As terminals become more intelligent and their functions diversify, more and more services place extremely high requirements on service delay. In the traditional cloud computing architecture, uploading data to a remote cloud server consumes a large amount of transmission time, and the limited bandwidth of the core network cannot easily bear massive data transmission. Edge computing, a new computing paradigm located closer to the service terminal, has become an effective solution for handling the delay-sensitive services of the power Internet of Things.
An edge node sits at the edge of the network and provides gateway, computing, and storage functions; it bears part or all of the task requests that would otherwise be processed by the cloud server and admits a large number of ubiquitous services. As multiple services develop under this architecture, the applications that a single node must process gradually increase and their delay requirements diversify. A single edge node can process many ubiquitous services, but when terminal requests are frequent, for example when an inspection terminal moves frequently or collection terminals upload large amounts of data simultaneously in an abnormal environment, task queuing caused by the limited computing resources of a single edge node cannot satisfy the delay requirements of all services. On one hand, the task-processing rate can be improved by optimizing how computing resources are allocated within an edge node; on the other hand, the workload of an overloaded edge node can be reduced by reallocating terminal task requests among multiple edge nodes. Therefore, an effective workload distribution mechanism is urgently needed in edge computing to meet the delay requirements of multi-service scenarios.
In the prior art, the following technical schemes are adopted to meet the delay requirements of multi-service scenarios:
Technical scheme 1: Patent CN108848192A, "A method for performing distributed processing on an Internet of Things equipment cluster", mainly comprises three steps: first, determining a data source for processing a workload and calculating the time required to process the workload; second, selecting a group of Internet of Things devices within a threshold distance of the data source within a time range to form a device subset; third, configuring a lightweight application program on the first device in the subset and processing the workload.
Technical scheme 2: Patent CN101126992, "Method and system for distributing multiple tasks among multiple nodes in a network", relates to a computer-implemented method for distributing tasks among processing nodes in a processor network, completed in two steps: first, calculating the task processor consumption values of the tasks, the measured node processor consumption values of the nodes, and the target node processor consumption values of the nodes; second, calculating a load index from the difference between the measured and target node processor consumption values of each node i, and balancing the workload among the nodes so that the calculated load index of each node is substantially zero.
Technical scheme 3: Patent CN108415763A, "An allocation method of an edge computing system", is completed in three steps: first, a computing terminal receives the task information of a task to be distributed sent by a scheduler; second, the terminal calculates the cost of executing the task according to a preset quotation model, determines a quotation, and applies to the scheduler for the task according to that quotation; third, the scheduler receives the quotations of the computing terminals and distributes the tasks through a preset task allocation model so that the sum of the quotations over all tasks is optimal.
However, technical scheme 1 struggles to guarantee the stability of the Internet of Things device sub-cluster, the overhead of maintaining and updating the sub-cluster is relatively high, and the requirements of delay-sensitive services are difficult to meet. Technical scheme 2 only balances load among peer nodes in the network; it does not distinguish the task types on the nodes and cannot satisfy the delay requirements of the various service types encountered under ubiquitous service access. In technical scheme 3, the quotation model of the computing terminals and the parameters of the scheduler's task allocation model must be modified in real time, and every task requires all computing terminals to compute quotations, which wastes their computing resources.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a low-latency load distribution method and device for Internet of Things services based on edge computing that overcome, or at least partially solve, the above problems.
In order to solve the above technical problem, in one aspect, an embodiment of the present invention provides a low latency load distribution method for an internet of things service based on edge computing, including:
acquiring a task request of each application in each terminal and the computing capacity of each edge node;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource distribution matrix and a task distribution matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm.
Further, the inputting the task request of each application in each terminal and the computing power of each edge node into a preset optimization problem model and outputting a resource allocation matrix and a task allocation matrix specifically includes:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm, and outputting a resource allocation matrix;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm, and outputting a task distribution matrix.
Further, the inputting the task request of each application in each terminal and the computing power of each edge node into the particle swarm algorithm model improved by the ant colony algorithm and outputting the resource allocation matrix specifically includes:
obtaining, at the end of the previous iteration, the best particle position found so far;
calculating the pheromone according to that best position;
and, in the next iteration, performing path selection according to the pheromone concentration obtained in the previous iteration and updating the current best particle position, repeating until the preset number of iterations is reached, and outputting the final particle position as the resource allocation matrix.
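The three steps above can be sketched as follows. The text does not give numerical update rules, so this sketch models the pheromone as an exponentially smoothed copy of the best position found so far, adding a third attraction term to the standard particle swarm velocity update; all coefficient values, names, and the [0, 1] search box are illustrative assumptions, not the patent's specification.

```python
import numpy as np


def aco_improved_pso(fitness, dim, n_particles=20, iters=50, seed=0):
    """Sketch of a particle swarm loop improved with an ant-colony-style
    pheromone. `fitness` maps a position vector to a value to minimise;
    positions live in [0, 1]^dim. Coefficients are illustrative."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    pheromone = gbest.copy()  # shared optimisation experience of all particles
    for _ in range(iters):
        # Step 2: deposit pheromone at the best position found so far,
        # with evaporation of older deposits (assumed 0.5/0.5 split).
        pheromone = 0.5 * pheromone + 0.5 * gbest
        for i in range(n_particles):
            r1, r2, r3 = rng.random(3)
            # Step 3: the pheromone term biases the velocity, i.e. the
            # particle's "path selection", toward well-travelled regions.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i])
                      + 0.5 * r3 * (pheromone - pos[i]))
            pos[i] = np.clip(pos[i] + vel[i], 0.0, 1.0)
            val = fitness(pos[i])
            # Step 1: track each particle's best position found so far.
            if val < pbest_val[i]:
                pbest_val[i] = val
                pbest[i] = pos[i].copy()
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

In the patent's setting the returned position would be decoded into the resource allocation matrix; here the function simply returns the best position and its fitness.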
Further, the inputting the task request of each application in each terminal and the computing power of each edge node into a semi-definite relaxation algorithm model improved by using a gaussian random algorithm and outputting a task allocation matrix specifically includes:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is 1, reshaping the current optimal solution into the task allocation matrix; and if the rank of the current optimal solution is not 1, obtaining an approximation of the current optimal solution using the Gaussian randomization algorithm and then reshaping that approximation into the task allocation matrix.
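The rank check and Gaussian-randomization fallback described above can be sketched as follows. Solving the relaxed semidefinite program itself is out of scope here (it would require an SDP solver), so the sketch starts from its positive-semidefinite optimum `X`. The thresholding/rounding rules and the `score` callback (standing in for the delay objective) are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np


def recover_assignment(X, score, n_samples=100, seed=0):
    """Recover a 0/1 assignment vector from the PSD optimum X of the
    relaxed problem. If rank(X) == 1 the leading eigenvector already
    encodes the assignment and is thresholded directly; otherwise
    candidates xi ~ N(0, X) are drawn and rounded, keeping the one with
    the best score (lower is better)."""
    eigval, eigvec = np.linalg.eigh(X)  # eigenvalues in ascending order
    if np.sum(eigval > 1e-8 * eigval[-1]) == 1:
        # Rank-1 case: X = v v^T, recover v and round it to 0/1 entries.
        v = np.abs(np.sqrt(eigval[-1]) * eigvec[:, -1])  # entries nonnegative
        return (v > 0.5).astype(int)
    rng = np.random.default_rng(seed)
    L = eigvec @ np.diag(np.sqrt(np.maximum(eigval, 0.0)))
    best, best_score = None, float("inf")
    for _ in range(n_samples):
        xi = L @ rng.standard_normal(X.shape[0])  # sample xi ~ N(0, X)
        cand = (xi > xi.mean()).astype(int)       # assumed rounding rule
        s = score(cand)
        if s < best_score:
            best, best_score = cand, s
    return best
```

In the patent the recovered vector would then be reshaped ("array remodeling") into the terminal-by-edge-node task allocation matrix; feasibility constraints such as "each task to exactly one node" are omitted from this sketch.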
On the other hand, an embodiment of the present invention provides a low-latency load distribution device for Internet of Things services based on edge computing, including:
the acquisition module is used for acquiring a task request of each application in each terminal and the computing capacity of each edge node;
the distribution module is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model and outputting a resource distribution matrix and a task distribution matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm.
Further, the allocation module comprises a resource allocation submodule and a task allocation submodule:
the resource allocation submodule is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm and outputting a resource allocation matrix;
and the task allocation submodule is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm and outputting a task allocation matrix.
Further, the resource allocation sub-module is specifically configured to:
obtain, at the end of the previous iteration, the best particle position found so far;
calculate the pheromone according to that best position;
and, in the next iteration, perform path selection according to the pheromone concentration obtained in the previous iteration and update the current best particle position, repeating until the preset number of iterations is reached, and output the final particle position as the resource allocation matrix.
Further, the task allocation submodule is specifically configured to:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is 1, reshape the current optimal solution into the task allocation matrix; and if the rank of the current optimal solution is not 1, obtain an approximation of the current optimal solution using the Gaussian randomization algorithm and then reshape that approximation into the task allocation matrix.
In another aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In yet another aspect, the present invention provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the above method.
The low-latency load distribution method and device for Internet of Things services based on edge computing provided by the embodiments of the invention combine a resource allocation method based on an improved particle swarm optimization with a task allocation method based on a semidefinite relaxation algorithm. The pheromone strategy of the ant colony algorithm is applied to improve the particle swarm optimization, reducing the convergence time of the algorithm and improving the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semidefinite relaxation problem, improving the quality of the task allocation result; ultimately, service delay is reduced.
Drawings
Fig. 1 is a schematic diagram of a low-latency load distribution method for an internet of things service based on edge computing according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an internet of things workload distribution model based on edge computing according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a workload distribution flow based on an improved particle swarm optimization and a semidefinite relaxation algorithm according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an internet of things service low-latency load distribution device based on edge computing according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the invention mainly consider the service delay of a terminal UE, i.e. the time from the generation of a task request at the terminal to the completion of its processing at an edge node, which comprises network delay and computation delay. The network delay depends on how terminal tasks are allocated among the edge nodes, and the computation delay depends on how resources are allocated within each edge node. To account for the influence of both, the embodiments combine a resource allocation method based on an improved particle swarm optimization with a task allocation method based on a semidefinite relaxation algorithm: the pheromone strategy of the ant colony algorithm is applied to improve the particle swarm optimization, reducing convergence time and improving the resource allocation result, and Gaussian random variables are applied to handle the rank-1 constraint in the semidefinite relaxation problem, improving the task allocation result and ultimately reducing service delay.
Fig. 1 is a schematic diagram of a low-latency load distribution method for Internet of Things services based on edge computing according to an embodiment of the present invention. As shown in fig. 1, the method is executed by a low-latency load distribution device for Internet of Things services based on edge computing, and comprises:
step S101, acquiring a task request of each application in each terminal and computing capacity of each edge node.
Specifically, a region contains a plurality of edge nodes (ENs) and a plurality of terminals (UEs). When the system is initialized, the load borne by the edge nodes is balanced by a load-balancing technique. As tasks are processed and the task requests of the terminals change continuously, the tasks to be processed by different edge nodes diverge, so load distribution must be performed in a unified manner to improve the overall computing efficiency of all edge nodes in the region, improve the processing efficiency of terminal task requests, and reduce service delay.
Each edge node EN may include a plurality of Virtual Machines (VMs), the CPU processing rate of a VM represents the computing capacity of the VM, and the actual CPU processing rate in the edge node EN represents the actual computing capacity of the edge node EN.
The terminal UE may include multiple Applications (APPs), each APP may generate a respective task request, a task request generated by one APP serves as an indivisible task, and the task request of each APP is only allocated to one edge node EN for processing.
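The system model above (edge nodes hosting virtual machines, terminals hosting applications that each emit one indivisible task request) can be sketched as plain data structures. This is an illustrative Python reading, not the patent's notation; in particular, treating a node's actual computing capacity as the sum of its VMs' CPU rates is an assumption.

```python
from dataclasses import dataclass


@dataclass
class VirtualMachine:
    cpu_rate: float  # CPU processing rate, i.e. the VM's computing capacity


@dataclass
class EdgeNode:
    vms: list  # VirtualMachine instances hosted by this edge node EN

    @property
    def computing_capacity(self) -> float:
        # Assumption: the node's actual computing capacity is the sum
        # of the CPU rates of its virtual machines.
        return sum(vm.cpu_rate for vm in self.vms)


@dataclass
class Application:
    task_cycles: float  # one indivisible task request, served by exactly one EN


@dataclass
class Terminal:
    apps: list  # Application instances (APPs) running on this terminal UE
```

Under this reading, the load distributor's inputs are the `task_cycles` of every `Application` of every `Terminal` plus the `computing_capacity` of every `EdgeNode`.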
When load distribution begins, the task request of each application APP in each terminal UE and the computing capacity of each edge node EN are first acquired.
Step S102, inputting a task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource distribution matrix and a task distribution matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by an ant colony algorithm and a semi-definite relaxation algorithm model improved by a Gaussian random algorithm.
Specifically, after acquiring a task request of each application APP in each terminal UE and the computing power of each edge node EN, the task request of each application APP in each terminal UE and the computing power of each edge node EN are input into a preset optimization problem model, and a resource allocation matrix and a task allocation matrix are output.
The preset optimization problem model aims to minimize the sum of the service delays of all terminal UEs, where the service delay of each terminal UE is the maximum of the delays of all application APPs on that terminal.
The service delay of each terminal UE comprises the network delay from the terminal UE to the edge node EN and the computation delay of its task requests at the edge node EN.
Network delay includes transmission delay, caused by the port rate, and propagation delay, caused by the physical distance. Since the propagation delay is in general much longer than the transmission delay, the network delay in the embodiments of the invention mainly considers the propagation delay.
The computation delay is the delay caused by the CPU computation rate. To simplify the analysis, it is assumed that an edge node EN starts processing only after all task requests have arrived, and the computation delay is taken as the average over all tasks processed by the CPU during that period.
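Putting the delay model above together: each terminal's service delay is the maximum over its applications of propagation delay plus computation delay, and the objective is the sum over terminals. A minimal sketch in Python; the array names and the simple cycles-over-rate form of the computation delay are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np


def total_service_delay(cycles, prop, assign, rate):
    """Objective of the optimization model: the sum over terminals of
    each terminal's service delay, which is the maximum over its
    applications of network delay plus computation delay.

    cycles[u, a] -- CPU cycles demanded by APP a of terminal u
    prop[u, n]   -- propagation delay from terminal u to edge node n
    assign[u, a] -- index of the edge node serving APP a of terminal u
    rate[n]      -- CPU processing rate of edge node n (cycles/second)
    """
    n_ue, n_app = cycles.shape
    total = 0.0
    for u in range(n_ue):
        app_delays = []
        for a in range(n_app):
            n = assign[u, a]
            # network delay (propagation only) + computation delay
            app_delays.append(prop[u, n] + cycles[u, a] / rate[n])
        total += max(app_delays)  # terminal delay = its slowest application
    return total
```

The preset optimization problem then searches over `assign` (the task allocation matrix) and over how each node's rate is split among VMs (the resource allocation matrix) to minimize this quantity.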
The preset optimization problem model comprises a particle swarm algorithm model improved by an ant colony algorithm and a semi-definite relaxation algorithm model improved by a Gaussian random algorithm.
The low-latency load distribution method for Internet of Things services based on edge computing provided by the embodiments of the invention combines a resource allocation method based on an improved particle swarm optimization with a task allocation method based on a semidefinite relaxation algorithm. The pheromone strategy of the ant colony algorithm is applied to improve the particle swarm optimization, reducing the convergence time of the algorithm and improving the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semidefinite relaxation problem, improving the quality of the task allocation result; ultimately, service delay is reduced.
Based on any of the above embodiments, further, the inputting the task request of each application in each terminal and the computing capability of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix specifically includes:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm, and outputting a resource allocation matrix;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm, and outputting a task distribution matrix.
Specifically, the method for solving the resource allocation matrix and the task allocation matrix by using the preset optimization problem model comprises two steps.
Firstly, a task request of each application APP in each terminal UE and the computing capacity of each edge node EN are input into a particle swarm algorithm model improved by an ant colony algorithm, and a resource allocation matrix is output.
The particle swarm algorithm has the advantages of a simple concept, fast search, high computational efficiency, easy implementation and natural parallelism, and is well suited to resource allocation problems. However, it easily falls into local optima, which manifests as premature convergence and a rapid loss of population diversity. To address this problem, in the embodiment of the invention the optimization experience of all particles is stored in the form of pheromones, as in the ant colony algorithm, and the particle swarm velocities are influenced through path selection, so that fast convergence is maintained while local optima are avoided.
Then, the task request of each application in each terminal and the computing capacity of each edge node are input into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm, and a task distribution matrix is output.
And after the resource allocation matrix is calculated, outputting the task allocation matrix by utilizing a semi-definite relaxation algorithm model improved by a Gaussian random algorithm.
The edge-computing-based low-delay load distribution method for Internet of things services provided by the embodiment of the invention combines a resource allocation method based on an improved particle swarm algorithm with a task allocation method based on a semi-definite relaxation algorithm: the pheromone strategy of the ant colony algorithm is applied to improve the particle swarm algorithm, which reduces the convergence time of the algorithm and improves the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semi-definite relaxation problem, which improves the quality of the task allocation result; and finally the service delay is reduced.
Based on any of the above embodiments, further, the inputting the task request of each application in each terminal and the computing capability of each edge node into the particle swarm algorithm model improved by using the ant colony algorithm and outputting the resource allocation matrix specifically includes:
obtaining, during the previous iteration, the best position currently found by each particle;
calculating pheromones according to the currently found best position; and
during the next iteration, performing path selection according to the pheromone concentration obtained in the previous iteration and again obtaining the current best position of each particle, until a preset number of iterations is reached, and outputting the final positions of the particles as the resource allocation matrix.
Specifically, the particle swarm algorithm uses an elite attraction strategy to guide the optimization of all particles; the particles lack information sharing, so the algorithm can fall into a local optimum when the elite particle is not updated in time.
To avoid this situation, in the embodiment of the invention the optimization experience of all particles is stored in the form of pheromones, as in the ant colony algorithm, and the particle swarm velocities are influenced through path selection, so that fast convergence is maintained while local optima are avoided.
Using the pheromone strategy of the ant colony algorithm, when a particle finds its current best position during the previous iteration, it releases pheromones in proportion to the value of the fitness function, where the fitness function is the reciprocal of the service delay function.
At the next iteration, the particles perform path selection according to the pheromone concentration, thereby selecting the best position.
Iteration proceeds in this manner until the preset number of iterations is reached; the best particle position is then obtained and taken as the resource allocation matrix.
The edge-computing-based low-delay load distribution method for Internet of things services provided by the embodiment of the invention combines a resource allocation method based on an improved particle swarm algorithm with a task allocation method based on a semi-definite relaxation algorithm: the pheromone strategy of the ant colony algorithm is applied to improve the particle swarm algorithm, which reduces the convergence time of the algorithm and improves the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semi-definite relaxation problem, which improves the quality of the task allocation result; and finally the service delay is reduced.
Based on any of the above embodiments, further, inputting the task request of each application in each terminal and the computing capability of each edge node into the semi-definite relaxation algorithm model improved by the Gaussian randomization algorithm and outputting the task allocation matrix specifically includes:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is judged to be 1, performing array remodeling on the current optimal solution to obtain a task allocation matrix; and if the rank of the obtained current optimal solution is judged to be not 1, obtaining an approximate value of the current optimal solution by using a Gaussian random algorithm, and then performing array remodeling on the approximate value of the current optimal solution to obtain a task allocation matrix.
Specifically, after the resource allocation matrix is calculated, a semi-definite relaxation algorithm model improved by a Gaussian random algorithm is utilized to output the task allocation matrix.
And inputting the task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm.
If the rank of the optimal solution obtained by the convex optimization method without the rank-1 constraint is 1, i.e. the optimal solution satisfies the rank-1 constraint, the current optimal solution is reshaped into an array to obtain the task allocation matrix. Here reshape is the numpy array operation that changes the shape of an array without changing the number of elements.
If the rank of the optimal solution obtained by the convex optimization method without the rank-1 constraint is not 1, i.e. the optimal solution does not satisfy the rank-1 constraint, an approximation of the current optimal solution is first obtained by the Gaussian randomization algorithm, and that approximation is then reshaped into an array to obtain the task allocation matrix.
The edge-computing-based low-delay load distribution method for Internet of things services provided by the embodiment of the invention combines a resource allocation method based on an improved particle swarm algorithm with a task allocation method based on a semi-definite relaxation algorithm: the pheromone strategy of the ant colony algorithm is applied to improve the particle swarm algorithm, which reduces the convergence time of the algorithm and improves the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semi-definite relaxation problem, which improves the quality of the task allocation result; and finally the service delay is reduced.
Based on any of the above embodiments, fig. 2 is a schematic diagram of a workload distribution model of the internet of things based on edge computing according to an embodiment of the present invention. As shown in fig. 2, the set of ENs (edge nodes) in an area is denoted 𝓘 = {EN_1, EN_2, …, EN_I}, and the set of UEs (user ends) is denoted 𝓙 = {UE_1, UE_2, …, UE_J}, where the set of APPs (applications) of UE_j is 𝓚_j = {APP_1, APP_2, …, APP_K}.
The task request of the k-th APP can be represented by a vector w_jk = [l_jk, ω_jk], where l_jk denotes the amount of data w_jk needs to transmit and ω_jk denotes the workload of w_jk, i.e. the number of instructions the CPU needs to execute; ω_jk obeys a Poisson distribution.
The service delay of a terminal is the maximum of the APP delays in that UE. Assuming that the task request of an APP in a UE is an indivisible task, each APP request is allocated to only one EN for processing. d_ijk denotes the delay of allocating the task w_jk of the k-th class APP of UE_j to EN_i, so the task allocation problem of UE_j is a mapping problem 𝓚_j → 𝓘. Considering all UEs, the overall APP set 𝓚 yields a mapping problem 𝓚 → 𝓘.

d_j = max_{k∈𝓚_j} Σ_{i∈𝓘} x_ijk · d_ijk   (1)

where d_j denotes the service delay of UE_j, 𝓚_j denotes the set of APPs of UE_j, 𝓘 denotes the EN set, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, and d_ijk denotes the delay of allocating w_jk of UE_j to EN_i.
X = (x_ijk)_{I×J×K} is the task allocation array; the value of an array element is given by the following formula:

x_ijk = 1, if w_jk of UE_j is allocated to EN_i; x_ijk = 0, otherwise   (2)

where x_ijk denotes whether w_jk of UE_j is allocated to EN_i, w_jk denotes the task request of the k-th class APP on UE_j, UE_j denotes the j-th UE, and EN_i denotes the i-th EN.
A task request w_jk can be allocated to only one EN for processing, so the following constraint holds:

Σ_{i∈𝓘} x_ijk = 1, ∀j, k   (3)

where 𝓘 denotes the EN set and x_ijk denotes whether w_jk of UE_j is allocated to EN_i.
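A minimal numpy sketch of the allocation indicator array X and the single-node constraint above; the dimensions and the sample assignments are illustrative assumptions:

```python
import numpy as np

I, J, K = 3, 2, 2                 # edge nodes, terminals, APPs per terminal (assumed)
X = np.zeros((I, J, K), dtype=int)
X[0, 0, 0] = 1                    # w_{0,0} handled by EN_0
X[2, 0, 1] = 1                    # w_{0,1} handled by EN_2
X[1, 1, 0] = 1
X[1, 1, 1] = 1

# Constraint: each task request w_jk is processed by exactly one edge node.
one_node_each = bool(np.all(X.sum(axis=0) == 1))
```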
To satisfy the QoS of the terminal, the service delay of the terminal must not exceed a tolerance threshold T_j; thus, the following constraint is satisfied:

d_j ≤ T_j   (4)

where d_j denotes the service delay of UE_j and T_j denotes the service delay threshold of UE_j.
The delay of an APP request is analyzed as comprising the network delay from the UE to the EN and the computation delay of the task request on the EN:

d_ijk = d_ijk^net + d_ijk^comp   (5)

where d_ijk denotes the delay of allocating w_jk of UE_j to EN_i, d_ijk^net denotes the network delay of allocating w_jk of UE_j to EN_i, and d_ijk^comp denotes the computation delay of allocating w_jk of UE_j to EN_i.
Network delay includes the transmission delay caused by the port rate and the propagation delay caused by the physical distance. In general, the propagation delay is much larger than the transmission delay, so the network delay in the present invention mainly considers the propagation delay.
d_ijk^net = r_ij / c   (6)

where d_ijk^net denotes the network delay of allocating w_jk of UE_j to EN_i, r_ij denotes the distance from UE_j to EN_i, and c denotes the propagation speed of the wireless or wired channel.
The computation delay is the delay caused by the CPU processing rate. For ease of analysis, it is assumed that an EN starts processing after all requests have arrived, and the computation delay is the average delay of all tasks processed by the CPU over a period of time:

d_ijk^comp = ( Σ_{j′∈𝓙} x_ij′k · ω_j′k ) / ν_ik   (7)

where d_ijk^comp denotes the computation delay of allocating w_jk of UE_j to EN_i, 𝓙 denotes the UE set, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, ω_jk denotes the workload of task w_jk, ν_ik is an element of the VM allocation matrix V in the EN and denotes the CPU processing rate of VM_ik, VM_ik denotes the k-th VM of EN_i, and V = (ν_ik)_{I×K}.
The sum of the computing power of all VMs in an EN should not exceed the actual computing power of that EN, i.e. the following constraints are satisfied:

Σ_{k∈𝓚} ν_ik ≤ ν_i   (8)
0 ≤ ν_ik ≤ ν_i   (9)

where 𝓚 denotes the APP set, ν_ik denotes the CPU processing rate of VM_ik, and ν_i denotes the CPU processing rate of EN_i.
Thus, the workload distribution problem is as follows:

min_{X,V} Σ_{j∈𝓙} d_j,  s.t. constraints (2), (3), (4), (8), (9)   (10)

where X denotes the three-dimensional array mapping the task requests of the UEs to the ENs, V denotes the VM allocation matrix in the ENs, 𝓙 denotes the UE set, d_j denotes the service delay of UE_j, and constraints (2), (3), (4), (8) and (9) refer to the formulas of the same numbers above.
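Under the delay model above, the objective can be evaluated for a candidate (X, V) as sketched below; the loop structure follows the per-terminal max over APPs of network plus batch computation delay, while the toy inputs are illustrative assumptions:

```python
import numpy as np

def service_delays(X, V, R, omega, c=2e8):
    """d_j: max over APPs k of the network delay r_ij/c plus the batch
    computation delay of all class-k tasks sharing the chosen node's VM."""
    I, J, K = X.shape
    d = np.zeros(J)
    for j in range(J):
        per_app = np.zeros(K)
        for k in range(K):
            for i in range(I):
                if X[i, j, k]:
                    load = sum(X[i, jj, k] * omega[jj, k] for jj in range(J))
                    per_app[k] += R[i, j] / c + load / V[i, k]
        d[j] = per_app.max()
    return d

# Toy instance: one node, one terminal, one APP.
X = np.ones((1, 1, 1), dtype=int)
R = np.array([[2e8]])              # distance chosen so r/c = 1 s
omega = np.array([[100.0]])        # workload (instructions)
V = np.array([[100.0]])            # VM processing rate
d = service_delays(X, V, R, omega)
```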
The embodiment of the invention uses the improved particle swarm algorithm to allocate the computing resources within the edge nodes. The matrix V is normalized so that ν_ik/ν_i = p_ik, yielding a resource allocation matrix P whose element p_ik denotes the proportion of the computing resources of VM_ik in edge node EN_i to the total computing resources of that edge node, where VM_ik denotes the k-th VM of EN_i. That is, the following problem is solved:

min_P Σ_{j∈𝓙} max_{k∈𝓚_j} Σ_{i∈𝓘} x_ijk · ( r_ij/c + Σ_{j′∈𝓙} x_ij′k · ω_j′k / (p_ik · ν_i) )   (11)

The initial value of X is set as follows: with i* = argmin_{i∈𝓘} r_ij denoting the index of the nearest edge node, if i = i* then x_ijk = 1; otherwise x_ijk = 0.
Here V denotes the VM allocation matrix in the ENs, 𝓙 denotes the UE set, 𝓚_j denotes the set of APPs in UE_j, 𝓘 denotes the EN set, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, r_ij denotes the distance from UE_j to EN_i, c denotes the propagation speed of the wireless or wired channel, ω_jk denotes the workload of task w_jk, ν_ik denotes the CPU processing rate of VM_ik, VM_ik denotes the k-th VM of EN_i, and ν_i denotes the CPU processing rate of EN_i.
The particle swarm algorithm has the advantages of a simple concept, fast search, high computational efficiency, easy implementation and natural parallelism, and is well suited to resource allocation problems, but it easily falls into local optima. To address this problem, the invention provides an improved particle swarm algorithm, which stores the optimization experience of all particles in the form of pheromones, as in the ant colony algorithm, and influences the particle swarm velocities through path selection, thereby maintaining fast convergence while avoiding local optima.
Fig. 3 is a schematic diagram of a workload distribution flow based on the improved particle swarm and semi-definite relaxation algorithms according to an embodiment of the present invention. As shown in fig. 3, the attributes of a particle are divided into two parts, position and velocity, where each particle position represents a feasible solution of the resource allocation problem; the position of particle ε is defined as a resource allocation matrix P_ε and its velocity as a matrix U_ε.
The velocity update formula is as follows:

U_ε(n+1) = g[ w·U_ε(n) + c_1·r_1(n)·(Pb_ε(n) − P_ε(n)) + c_2·r_2(n)·(Gb(n) − P_ε(n)) ]   (12)

where U_ε(n+1) denotes the velocity of particle ε after the n-th iteration, g denotes the velocity limiting function, w denotes the inertia weight, U_ε(n) denotes the velocity of particle ε at the n-th iteration, c_1 denotes the first learning factor, r_1(n) denotes a first random number in the interval (0, 1) at the n-th iteration, Pb_ε(n) denotes the individual optimal solution of particle ε at the n-th iteration, P_ε(n) denotes the position of particle ε at the n-th iteration, c_2 denotes the second learning factor, r_2(n) denotes a second random number in the interval (0, 1) at the n-th iteration, Gb(n) denotes the global optimal solution at the n-th iteration, n denotes the iteration number, and ε denotes the particle index.
The position update formula is as follows:

P_ε(n+1) = P_ε(n) + U_ε(n+1)   (13)

where P_ε(n+1) denotes the position of particle ε after the n-th iteration, P_ε(n) denotes the position of particle ε at the n-th iteration, U_ε(n+1) denotes the velocity of particle ε after the n-th iteration, n denotes the iteration number, and ε denotes the particle index.
u_ik denotes the element in the i-th row and k-th column of the particle velocity matrix, u_ik ∈ [−u_max, u_max], where u_max is the maximum particle velocity, which ensures that the particle position does not cross the boundary; the function g limits the velocity to the range [−u_max, u_max]:

g(u_ik) = −u_max, if u_ik < −u_max;  u_ik, if −u_max ≤ u_ik ≤ u_max;  u_max, if u_ik > u_max   (14)

where g denotes the velocity limiting function, u_ik denotes the element in the i-th row and k-th column of the velocity matrix of particle ε, and u_max is the maximum particle velocity.
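The velocity and position updates (12)-(13), together with the limiting function g, can be sketched as follows; the inertia weight, learning factors and u_max are assumed illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(U, u_max):
    # Limiting function: clamp every velocity element into [-u_max, u_max].
    return np.clip(U, -u_max, u_max)

def pso_step(P, U, Pb, Gb, w=0.7, c1=1.5, c2=1.5, u_max=0.1):
    # Velocity update toward the personal and global bests, then position update.
    r1 = rng.random(P.shape)
    r2 = rng.random(P.shape)
    U_next = g(w * U + c1 * r1 * (Pb - P) + c2 * r2 * (Gb - P), u_max)
    return P + U_next, U_next

P = np.zeros((2, 2)); U = np.zeros((2, 2))
Pb = np.ones((2, 2)); Gb = np.ones((2, 2))
P_next, U_next = pso_step(P, U, Pb, Gb)
```

The clamp keeps each step small, so a particle approaches the attractors gradually instead of overshooting the feasible region.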
The objective of the problem is to minimize the service delay, so the fitness function is the reciprocal of the service delay function. The fitness function is as follows:

fitness(P_ε) = 1 / ( Σ_{j∈𝓙} max_{k∈𝓚_j} Σ_{i∈𝓘} x_ijk · ( r_ij/c + Σ_{j′∈𝓙} x_ij′k · ω_j′k / (p_ik · ν_i) ) )   (15)

where P_ε denotes the position of particle ε, p_ik is an element of the position matrix P_ε, 𝓙 denotes the UE set, 𝓚_j denotes the set of APPs in UE_j, 𝓘 denotes the EN set, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, r_ij denotes the distance from UE_j to EN_i, c denotes the propagation speed of the wireless or wired channel, ω_jk denotes the workload of task w_jk, ν_ik denotes the CPU processing rate of VM_ik, VM_ik denotes the k-th VM of EN_i, and ν_i denotes the CPU processing rate of EN_i.
The particle swarm algorithm uses an elite attraction strategy to guide the optimization of all particles; the particles lack information sharing, so the algorithm can fall into a local optimum when the elite particle is not updated in time. By means of the pheromone strategy of the ant colony algorithm, when a particle finds the current best position, it releases pheromones in proportion to the fitness function; at the next iteration, the particles perform path selection according to the pheromone concentration, thereby selecting the best position.
The pheromone matrix updating formula is as follows:
T(n+1)=(1-ρ)T(n)+ΔT(n) (16)
where T(n+1) denotes the pheromone matrix after the n-th iteration, ρ is the pheromone evaporation factor, T(n) denotes the pheromone matrix at the n-th iteration, and ΔT(n) denotes the pheromone increment matrix at the n-th iteration.
The pheromone increment matrix is:

ΔT(n) = Σ_{ε=1}^{E} S · fitness(P_ε(n)) · P_ε(n)   (17)

where ΔT(n) denotes the pheromone increment matrix at the n-th iteration, E is the population size, S is the pheromone intensity, fitness is the fitness function, and P_ε(n) denotes the position of particle ε at the n-th iteration.
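The pheromone update (16) together with the increment step can be sketched as below; the deposition rule — pheromone proportional to each particle's fitness laid on its position matrix — is an assumption consistent with the symbols above:

```python
import numpy as np

def update_pheromone(T, positions, fitnesses, rho=0.1, S=1.0):
    # Evaporate the existing pheromone, then add the increment matrix:
    # each particle deposits S * fitness on its current position.
    delta = sum(S * f * P for f, P in zip(fitnesses, positions))
    return (1.0 - rho) * T + delta

T = np.ones((2, 2))
positions = [np.full((2, 2), 0.5), np.full((2, 2), 0.5)]
fitnesses = [0.4, 0.6]
T_next = update_pheromone(T, positions, fitnesses)
```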
The path transition matrix Φ represents the probability that a particle performs a path transition under the influence of the pheromone; its element φ_ik is calculated as follows:

φ_ik(n) = [τ_ik(n)]^α · [η_ik]^β / Σ_{k′∈𝓚} [τ_ik′(n)]^α · [η_ik′]^β   (18)

where φ_ik(n) is the path transition probability after the n-th iteration, τ_ik(n) is an element of the pheromone matrix at the n-th iteration, η_ik is the heuristic information, which expresses that a task with a large amount of computation should be allocated more computing resources, α indicates the importance of the pheromone, and β indicates the importance of the heuristic information.
η_ik is calculated as follows:

η_ik = Σ_{j∈𝓙} x_ijk · ω_jk / (p_ik · ν_i)   (19)

where η_ik is the heuristic information, p_ik is an element of the position matrix, ν_i denotes the CPU processing rate of EN_i, 𝓙 denotes the UE set, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, and ω_jk denotes the workload of task w_jk.
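The path transition probabilities and the heuristic information η_ik can be sketched as follows; normalizing over the VMs of each edge node (axis 1) is an assumption about the summation index:

```python
import numpy as np

def heuristic(X, omega, P, v):
    # eta_ik = sum_j x_ijk * omega_jk / (p_ik * v_i): heavier task classes
    # relative to the resources already granted get a larger heuristic.
    load = np.einsum('ijk,jk->ik', X, omega)    # class-k workload on EN_i
    return load / (P * v[:, None])

def transition_probs(tau, eta, alpha=1.0, beta=2.0):
    # ACO-style rule: pheromone- and heuristic-weighted probabilities,
    # normalized per edge node.
    weight = tau**alpha * eta**beta
    return weight / weight.sum(axis=1, keepdims=True)

tau = np.ones((2, 3))
eta = np.ones((2, 3))
phi = transition_probs(tau, eta)
```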
Taking the path transition probability into account, the velocity update formula becomes:

U_ε(n+1) = g[ w·U_ε(n) + c_1·r_1(n)·(Pb_ε(n) − P_ε(n)) + c_2·r_2(n)·(Gb(n) − P_ε(n)) + Φ(n) ]   (20)

where U_ε(n+1) denotes the velocity of particle ε after the n-th iteration, g denotes the velocity limiting function, w denotes the inertia weight, U_ε(n) denotes the velocity of particle ε at the n-th iteration, c_1 denotes the first learning factor, r_1(n) denotes a first random number in the interval (0, 1) at the n-th iteration, Pb_ε(n) denotes the individual optimal solution of particle ε at the n-th iteration, P_ε(n) denotes the position of particle ε at the n-th iteration, c_2 denotes the second learning factor, r_2(n) denotes a second random number in the interval (0, 1) at the n-th iteration, Gb(n) denotes the global optimal solution at the n-th iteration, n denotes the iteration number, ε denotes the particle index, and Φ(n) denotes the path transition matrix at the n-th iteration.
The embodiment of the invention solves the task allocation problem using semi-definite relaxation and a Gaussian randomization method. J new variables ψ_j are defined, namely ψ_j = d_j = max_{k∈𝓚_j} Σ_{i∈𝓘} x_ijk · d_ijk; because d_j is a maximum over k, it holds that Σ_{i∈𝓘} x_ijk · d_ijk ≤ ψ_j for every k ∈ 𝓚_j.
The workload distribution problem is transformed into the following:

min_{X,Ψ} Σ_{j∈𝓙} ψ_j
s.t.  Σ_{i∈𝓘} x_ijk · ( r_ij/c + Σ_{j′∈𝓙} x_ij′k · ω_j′k / ν_ik ) ≤ ψ_j, ∀j, k;
      ψ_j ≤ T_j, ∀j;
      Σ_{i∈𝓘} x_ijk = 1, ∀j, k;
      x_ijk ∈ {0, 1}   (21)

where X denotes the three-dimensional array mapping the task requests of the UEs to the ENs, Ψ denotes the column vector composed of the J variables ψ_j, 𝓙 denotes the UE set, ψ_j denotes the newly defined temporary variable, 𝓚_j denotes the application set, 𝓘 denotes the EN set, r_ij denotes the distance from UE_j to EN_i, c denotes the propagation speed of the wireless or wired channel, x_ijk denotes whether w_jk of UE_j is allocated to EN_i, ω_jk denotes the workload of task w_jk, ν_ik denotes the CPU processing rate of VM_ik, VM_ik denotes the k-th VM of EN_i, and T_j denotes the service delay threshold of UE_j.
The binary constraint may be replaced by the following quadratic constraint:

x(x − 1) = 0   (22)

where x denotes a binary variable taking the value 0 or 1.
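The equivalence behind this quadratic replacement — x ∈ {0, 1} exactly when x(x − 1) = 0 — can be checked numerically:

```python
# Only the reals 0 and 1 satisfy the quadratic equality x * (x - 1) = 0,
# so replacing the binary constraint with it loses no solutions.
candidates = [-1.0, 0.0, 0.5, 1.0, 2.0]
roots = [x for x in candidates if x * (x - 1.0) == 0.0]
```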
X and Ψ are expanded into vector form:

x = [x_111, x_211, …, x_IJK]^T   (23)
Ψ = [ψ_1, ψ_2, …, ψ_J]^T   (24)

where x denotes the vector generated from X, x_ijk is an element of x, Ψ denotes the column vector composed of the J variables ψ_j, and ψ_j is the temporary variable defined above.
A new variable y = [x^T, Ψ^T]^T is defined, whose elements y_q satisfy the following constraint:

y_q(y_q − 1) = 0, q = 1, …, I×J×K   (25)

where y denotes the column vector composed of x and Ψ, x denotes the vector generated from X, x_ijk is an element of x, Ψ denotes the column vector composed of the J variables ψ_j, and ψ_j is the temporary variable defined above.
Equation (21) can be converted into the following quadratically constrained form in y:

min_y Σ_{j∈𝓙} ψ_j
s.t.  the service delay constraints built from A_1 = [B_1, −B_2] and B_3;
      A_2·y ≤ b_2  (service delay threshold constraints);
      A_3·y = b_3  (unique task allocation constraints);
      y^T·diag(u_p)·y − u_p^T·y = 0, p = 1, …, I×J×K  (binary constraints)   (26)

where y denotes the column vector composed of x and Ψ, T_j denotes the service delay threshold of UE_j, r_ij denotes the distance from UE_j to EN_i, c denotes the propagation speed of the wireless or wired channel, ω_jk denotes the workload of task w_jk, and ν_ik denotes the CPU processing rate of VM_ik.
Here u_p is an (I×J×K+J)-dimensional column vector whose p-th element is 1 and whose other elements are 0, and diag(u_p) is the (I×J×K+J)×(I×J×K+J) diagonal matrix constructed from the elements of u_p.
The other symbols are explained as follows:
b_2 = [T_1 T_2 … T_J]^T
A_1 = [B_1, −B_2]
A_2 = [0_{I×J×K}, E_{J×J}]
A_3 = [diag(1_{1×I} 1_{1×I} … 1_{1×I})_{(J×K)×(I×J×K)}, 0_{(J×K)×J}]
B_2 = diag(V V … V)_{(I×J×K)×J}
B_3 = diag(W_1 W_2 … W_K)_{(I×K)×(I×J×K)}
W_k = [W_{1k} W_{2k} … W_{Jk}]_{I×(I×J)}
W_{jk} = ω_jk·E_I
V = [ν_11, ν_21, …, ν_I1, ν_12, ν_22, …, ν_I2, …, ν_1K, ν_2K, …, ν_IK]^T
Equation (26) is converted into a semi-definite programming problem through the form of matrix traces. Defining Z = ỹ·ỹ^T with ỹ = [y^T, 1]^T and Q = I×J×K+J, equation (26) is uniformly expressed as follows:

min_Z  Tr(B_0·Z)
s.t.  Tr(F_t·Z) ≤ 0, ∀t;  Tr(D_j·Z) ≤ 0, ∀j;  Tr(G_jk·Z) = 1, ∀j, k;  Tr(H_p·Z) = 0, ∀p;
      z_{Q+1,Q+1} = 1;  Z ⪰ 0;  rank(Z) = 1   (27)

where Tr(·) denotes the trace of a matrix, B_0 represents the objective function, Z represents the workload distribution matrix, F represents the service delay constraints, D represents the service delay threshold constraints, G represents the unique task allocation constraints, and H represents the binary task allocation constraints.
z_{Q+1,Q+1} is the element in row Q+1 and column Q+1 of matrix Z, A_{1,t} is the t-th row vector of matrix A_1, and A_{2,j} is the j-th row vector of matrix A_2. In equation (27), only the constraint rank(Z) = 1 is non-convex. Assume Z* is the optimal solution obtained by a convex optimization method without considering the rank-1 constraint; if Z* satisfies the rank-1 constraint, the optimal solution can be obtained from Z* by the following method.
Extract from Z* the top-left (I×J×K)×(I×J×K) sub-matrix Z′ = x*x*^T. Because x_ijk ∈ {0, 1}, x_ijk·x_ijk = x_ijk, so x* can be recovered from the diagonal of Z′ and converted into the workload distribution matrix X* by a reshape operation, where reshape is the numpy array operation that changes the shape of an array without changing the number of elements.
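A numpy sketch of this rank-1 recovery: with an assumed binary vector x*, the diagonal of Z′ = x*x*^T returns x* exactly, and reshape restores the array shape (the element ordering used by the reshape is an assumption):

```python
import numpy as np

I, J, K = 2, 1, 2
x_true = np.array([1.0, 0.0, 0.0, 1.0])   # assumed binary assignment vector
Z_prime = np.outer(x_true, x_true)        # Z' = x* x*^T (rank 1)
x_star = np.diag(Z_prime).copy()          # x_ijk^2 = x_ijk recovers the vector
X_star = x_star.reshape(I, J, K)          # same elements, new shape
```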
If Z* does not satisfy the rank-1 constraint, an approximate solution of equation (27) can be obtained by Gaussian randomization.
After extracting the top-left (I×J×K)×(I×J×K) sub-matrix Z′, L random (I×J×K)×1 vectors are generated, the l-th vector ξ_l obeying the normal distribution N(0, Z′); the dimension of ξ_l is equal to that of x. In order for the recovered vector to satisfy the constraint x_ijk ∈ {0, 1}, a sigmoid function f(ξ) = 1/(1 + e^{−μξ}) with μ > 1 is used to map each vector ξ_l to a vector x̂_l. Each x̂_l is then converted into a corresponding workload distribution array X̂_l by a reshape operation. For each (j, k) column of X̂_l, the largest element is set to 1 and the rest are set to 0; through this processing, each array X̂_l yields a corresponding array X̃_l that satisfies the constraint Σ_{i∈𝓘} x_ijk = 1. Finally, by searching the L arrays X̃_l for the one whose objective function value is minimal, the solution X_SDR of P1 is obtained. The complexity of this algorithm is O(m^4·n^{1/2}·log(1/ε) + L·I·J·K), where n = I×J×K and m = I×J×K + I + J + K.
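The randomization loop described above can be sketched as follows; the sigmoid steepness μ, the candidate count L and the toy objective are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_rounding(Z_prime, I, J, K, objective, L=50, mu=5.0):
    # Sample xi_l ~ N(0, Z'), squash each sample through a sigmoid, then make
    # each (j, k) column one-hot; keep the candidate with the smallest objective.
    best, best_val = None, np.inf
    for _ in range(L):
        xi = rng.multivariate_normal(np.zeros(Z_prime.shape[0]), Z_prime)
        x_hat = 1.0 / (1.0 + np.exp(-mu * xi))       # map toward {0, 1}
        X_hat = x_hat.reshape(I, J, K)
        X_l = np.zeros_like(X_hat)
        for j in range(J):
            for k in range(K):
                X_l[int(np.argmax(X_hat[:, j, k])), j, k] = 1.0
        val = objective(X_l)
        if val < best_val:
            best, best_val = X_l, val
    return best

X_sdr = gaussian_rounding(np.eye(4) * 0.1, 2, 1, 2,
                          objective=lambda X: float(X.sum()))
```

The per-column argmax guarantees every candidate is feasible for the unique-allocation constraint, so the search only ranks feasible allocations.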
Based on any of the above embodiments, fig. 4 is a schematic diagram of an internet of things service low-latency load distribution device based on edge computing according to an embodiment of the present invention, as shown in fig. 4, an internet of things service low-latency load distribution device based on edge computing according to an embodiment of the present invention includes an obtaining module 401 and a distribution module 402, where:
the obtaining module 401 is configured to obtain a task request of each application in each terminal and a computing capability of each edge node; the allocation module 402 is configured to input a task request of each application in each terminal and a computing capability of each edge node into a preset optimization problem model, and output a resource allocation matrix and a task allocation matrix, where the preset optimization problem model includes a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a gaussian random algorithm.
The edge-computing-based low-delay load distribution device for Internet of things services provided by the embodiment of the invention combines a resource allocation method based on an improved particle swarm algorithm with a task allocation method based on a semi-definite relaxation algorithm: the pheromone strategy of the ant colony algorithm is applied to improve the particle swarm algorithm, which reduces the convergence time of the algorithm and improves the quality of the resource allocation result; Gaussian random variables are applied to handle the rank-1 constraint in the semi-definite relaxation problem, which improves the quality of the task allocation result; and finally the service delay is reduced.
Based on any of the above embodiments, further, the allocation module includes a resource allocation sub-module and a task allocation sub-module:
the resource allocation submodule is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm and outputting a resource allocation matrix;
and the task allocation submodule is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm and outputting a task allocation matrix.
Based on any of the embodiments above, further, the resource allocation sub-module is specifically configured to:
in the last iteration process, the optimal position of the particle currently found is obtained;
calculating pheromones according to the currently found optimal position;
and in the next iteration process, performing path selection according to the concentration of the pheromone acquired in the last iteration process, acquiring the optimal position of the particle at present again until the preset iteration times are reached, and outputting the final position of the particle as a resource allocation matrix.
Based on any of the embodiments described above, further, the task allocation submodule is specifically configured to:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is judged to be 1, performing array remodeling on the current optimal solution to obtain a task allocation matrix; and if the rank of the obtained current optimal solution is judged to be not 1, obtaining an approximate value of the current optimal solution by using a Gaussian random algorithm, and then performing array remodeling on the approximate value of the current optimal solution to obtain a task allocation matrix.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 5, the electronic device includes: a processor (processor) 501, a memory (memory) 502, a bus 503, and computer programs stored on the memory and executable on the processor.
The processor 501 and the memory 502 complete communication with each other through a bus 503;
the processor 501 is configured to call and execute the computer program in the memory 502 to perform the steps in the above method embodiments, including:
acquiring a task request of each application in each terminal and the computing capacity of each edge node;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm.
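As a whole, the acquire-then-allocate flow above reduces to a two-stage pipeline: one stage emits the resource allocation matrix, the other the task allocation matrix. The sketch below uses trivial placeholder allocators (proportional sharing and greedy assignment) as illustrative stand-ins for the two models, not as the patented algorithms:

```python
def solve_resources(demands, capacities):
    # placeholder for the ACO-improved PSO model: split each edge node's
    # capacity across tasks in proportion to their demand
    total = sum(demands)
    return [[c * d / total for d in demands] for c in capacities]

def solve_tasks(demands, capacities):
    # placeholder for the Gaussian-randomized SDR model: greedily send
    # each task to the edge node with the most remaining capacity
    remaining = list(capacities)
    assign = [[0] * len(capacities) for _ in demands]
    for i, d in enumerate(demands):
        j = max(range(len(remaining)), key=lambda k: remaining[k])
        assign[i][j] = 1
        remaining[j] -= d
    return assign

def allocate(demands, capacities):
    """Two-stage flow: resource matrix, then 0/1 task matrix."""
    return solve_resources(demands, capacities), solve_tasks(demands, capacities)

# three task requests (in, e.g., required CPU cycles) and two edge nodes
R, T = allocate([2, 1, 1], [4, 4])
```

Here `R` has one row per edge node and one column per task, while each row of `T` selects exactly one edge node per task, mirroring the shapes of the two output matrices.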
In addition, the logic instructions in the memory may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program codes.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the steps of the above-described method embodiments, for example, including:
acquiring a task request of each application in each terminal and the computing capacity of each edge node;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm.
An embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments, for example, including:
acquiring a task request of each application in each terminal and the computing capacity of each edge node;
and inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm.
The above-described embodiments of the apparatuses and devices are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (4)

1. An Internet of things service low-delay load distribution method based on edge calculation is characterized by comprising the following steps:
acquiring a task request of each application in each terminal and the computing capacity of each edge node;
inputting a task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm;
the method for outputting the resource allocation matrix and the task allocation matrix comprises the following steps of inputting a task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model, and specifically comprises the following steps:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm, and outputting a resource allocation matrix;
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm, and outputting a task distribution matrix;
the method for outputting the resource allocation matrix comprises the following steps of inputting a task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by using an ant colony algorithm, and specifically comprises the following steps:
in the previous iteration, acquiring the optimal position of the particle found so far;
calculating a pheromone according to the optimal position found so far;
in the next iteration, performing path selection according to the pheromone concentration acquired in the previous iteration, acquiring the current optimal position of the particle again until a preset number of iterations is reached, and outputting the final position of the particle as the resource allocation matrix;
the method for outputting the task allocation matrix comprises the following steps of inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm, and outputting the task allocation matrix, and specifically comprises the following steps:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is 1, performing array reshaping on the current optimal solution to obtain the task allocation matrix; and if the rank of the current optimal solution is not 1, obtaining an approximation of the current optimal solution by using the Gaussian random algorithm and then performing array reshaping on the approximation to obtain the task allocation matrix.
2. An Internet of things service low-delay load distribution device based on edge calculation, characterized by comprising:
the acquisition module is used for acquiring a task request of each application in each terminal and the computing capacity of each edge node;
the allocation module is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a preset optimization problem model and outputting a resource allocation matrix and a task allocation matrix, wherein the preset optimization problem model comprises a particle swarm algorithm model improved by using an ant colony algorithm and a semi-definite relaxation algorithm model improved by using a Gaussian random algorithm;
the allocation module comprises a resource allocation submodule and a task allocation submodule:
the resource allocation submodule is used for inputting the task request of each application in each terminal and the computing capacity of each edge node into a particle swarm algorithm model improved by an ant colony algorithm and outputting a resource allocation matrix;
the task allocation submodule is used for inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm and outputting a task allocation matrix;
the resource allocation submodule is specifically configured to:
in the previous iteration, acquiring the optimal position of the particle found so far;
calculating a pheromone according to the optimal position found so far;
in the next iteration, performing path selection according to the pheromone concentration acquired in the previous iteration, acquiring the current optimal position of the particle again until a preset number of iterations is reached, and outputting the final position of the particle as the resource allocation matrix;
the task allocation submodule is specifically configured to:
inputting a task request of each application in each terminal and the computing capacity of each edge node into a semi-definite relaxation algorithm model improved by a Gaussian random algorithm;
if the rank of the current optimal solution is 1, performing array reshaping on the current optimal solution to obtain the task allocation matrix; and if the rank of the current optimal solution is not 1, obtaining an approximation of the current optimal solution by using the Gaussian random algorithm and then performing array reshaping on the approximation to obtain the task allocation matrix.
3. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the Internet of things service low-delay load distribution method based on edge calculation according to claim 1.
4. A non-transitory computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the Internet of things service low-delay load distribution method based on edge calculation according to claim 1.
CN201910568179.3A 2019-06-27 2019-06-27 Low-delay load distribution method and device for Internet of things service based on edge calculation Active CN110365753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910568179.3A CN110365753B (en) 2019-06-27 2019-06-27 Low-delay load distribution method and device for Internet of things service based on edge calculation

Publications (2)

Publication Number Publication Date
CN110365753A CN110365753A (en) 2019-10-22
CN110365753B true CN110365753B (en) 2020-06-23

Family

ID=68215793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910568179.3A Active CN110365753B (en) 2019-06-27 2019-06-27 Low-delay load distribution method and device for Internet of things service based on edge calculation

Country Status (1)

Country Link
CN (1) CN110365753B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110927537B (en) * 2019-11-27 2022-04-05 国网江苏省电力有限公司电力科学研究院 Partial discharge monitoring device and method based on Internet of things edge calculation
CN111506416B (en) * 2019-12-31 2023-09-12 远景智能国际私人投资有限公司 Computing method, scheduling method, related device and medium of edge gateway
CN111224824B (en) * 2020-01-06 2021-05-04 华东师范大学 Edge autonomous model construction method
CN111224825B (en) * 2020-01-06 2021-05-04 华东师范大学 Edge autonomous model construction device
CN111314123B (en) * 2020-02-04 2022-11-04 国网江苏省电力有限公司南京供电分公司 Time delay and energy consumption-oriented power Internet of things work load distribution method
CN111399998B (en) * 2020-02-26 2022-11-15 山东师范大学 Method and system for ensuring performance of multiple delay sensitive programs by using approximate calculation
CN111445111B (en) * 2020-03-09 2022-10-04 国网江苏省电力有限公司南京供电分公司 Electric power Internet of things task allocation method based on edge cooperation
CN111835827B (en) * 2020-06-11 2021-07-27 北京邮电大学 Internet of things edge computing task unloading method and system
CN111858051B (en) * 2020-07-20 2023-09-05 国网四川省电力公司电力科学研究院 Real-time dynamic scheduling method, system and medium suitable for edge computing environment
CN112068952A (en) * 2020-07-21 2020-12-11 清华大学 Cooperative optimization method for network communication, calculation and storage resources of multiple unmanned aerial vehicles
CN111953549A (en) * 2020-08-21 2020-11-17 上海海事大学 Online optimization resource management method for maritime affair edge nodes
CN112148482B (en) * 2020-09-11 2023-08-22 电子科技大学 Edge computing task scheduling method based on combination of load balancing
CN112671830B (en) * 2020-12-02 2023-05-30 武汉联影医疗科技有限公司 Resource scheduling method, system, device, computer equipment and storage medium
CN112511592B (en) * 2020-11-03 2022-07-29 深圳市中博科创信息技术有限公司 Edge artificial intelligence computing method, Internet of things node and storage medium
CN112486665B (en) * 2020-11-03 2022-03-18 深圳市中博科创信息技术有限公司 Edge artificial intelligence computing task scheduling method based on peer-to-peer network
CN112702401B (en) * 2020-12-15 2022-01-04 北京邮电大学 Multi-task cooperative allocation method and device for power Internet of things
CN112783567B (en) * 2021-01-05 2022-06-14 中国科学院计算技术研究所 DNN task unloading decision method based on global information
CN112738272B (en) * 2021-01-12 2022-07-15 浙江工业大学 Edge node load balancing method for minimizing network delay
CN113435103B (en) * 2021-05-19 2024-06-07 深圳供电局有限公司 Power distribution room abnormality detection method, system, server, edge gateway and medium
CN113507519B (en) * 2021-07-08 2022-10-04 燕山大学 Edge computing bandwidth resource allocation method and system for smart home
CN114024970A (en) * 2021-09-28 2022-02-08 国网辽宁省电力有限公司锦州供电公司 Power internet of things work load distribution method based on edge calculation
CN117170885B (en) * 2023-11-03 2024-01-26 国网山东综合能源服务有限公司 Distributed resource optimization allocation method and system based on cloud edge cooperation
CN117649175B (en) * 2024-01-26 2024-03-29 江苏中创供应链服务有限公司 Cross-border bin allocation service method and system based on edge calculation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7277960B2 (en) * 2003-07-25 2007-10-02 Hewlett-Packard Development Company, L.P. Incorporating constraints and preferences for determining placement of distributed application onto distributed resource infrastructure
CN108319502A (en) * 2018-02-06 2018-07-24 广东工业大学 A kind of method and device of the D2D tasks distribution based on mobile edge calculations
CN109218414A (en) * 2018-08-27 2019-01-15 杭州中恒云能源互联网技术有限公司 A kind of distributed computing method of smart grid-oriented hybrid network framework

Also Published As

Publication number Publication date
CN110365753A (en) 2019-10-22

Similar Documents

Publication Publication Date Title
CN110365753B (en) Low-delay load distribution method and device for Internet of things service based on edge calculation
CN107172166B (en) Cloud and mist computing system for industrial intelligent service
Gupta et al. Load balancing based task scheduling with ACO in cloud computing
Wang et al. Load balancing task scheduling based on genetic algorithm in cloud computing
Mousavi et al. Dynamic resource allocation in cloud computing
Maleki et al. Mobility-aware computation offloading in edge computing using machine learning
CN111314123B (en) Time delay and energy consumption-oriented power Internet of things work load distribution method
Mundy et al. An efficient SpiNNaker implementation of the neural engineering framework
CN108418718B (en) Data processing delay optimization method and system based on edge calculation
Nanjappan et al. An adaptive neuro-fuzzy inference system and black widow optimization approach for optimal resource utilization and task scheduling in a cloud environment
CN112882815A (en) Multi-user edge calculation optimization scheduling method based on deep reinforcement learning
Djigal et al. Machine and deep learning for resource allocation in multi-access edge computing: A survey
Keshk et al. Cloud task scheduling for load balancing based on intelligent strategy
CN109656713B (en) Container scheduling method based on edge computing framework
Lin et al. A model-based approach to streamlining distributed training for asynchronous SGD
Saber et al. Hybrid load balance based on genetic algorithm in cloud environment
CN112835684B (en) Virtual machine deployment method for mobile edge computing
CN111309472A (en) Online virtual resource allocation method based on virtual machine pre-deployment
Cho et al. QoS-aware workload distribution in hierarchical edge clouds: a reinforcement learning approach
CN113722112B (en) Service resource load balancing processing method and system
Li et al. Data analytics for fog computing by distributed online learning with asynchronous update
CN115714820A (en) Distributed micro-service scheduling optimization method
CN111027665A (en) Cloud manufacturing scheduling method based on improved chaotic bat swarm algorithm
Huang et al. Computation offloading for multimedia workflows with deadline constraints in cloudlet-based mobile cloud
Moreira et al. Task allocation framework for software-defined fog v-RAN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant