CN117032994A - Unloading allocation decision determining method and device for industrial Internet system - Google Patents

Unloading allocation decision determining method and device for industrial Internet system

Info

Publication number
CN117032994A
Authority
CN
China
Prior art keywords
unloading
allocation
edge
decision
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311288572.XA
Other languages
Chinese (zh)
Inventor
刘克新
李孝丰
池程
谢滨
朱斯语
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
China Academy of Information and Communications Technology CAICT
Original Assignee
Beihang University
China Academy of Information and Communications Technology CAICT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University, China Academy of Information and Communications Technology CAICT filed Critical Beihang University
Priority to CN202311288572.XA priority Critical patent/CN117032994A/en
Publication of CN117032994A publication Critical patent/CN117032994A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44594Unloading
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the disclosure discloses a method and a device for determining an unloading allocation decision of an industrial Internet system. The method comprises the following steps: processing state information of the industrial Internet system in a target time frame by using a deep neural network to obtain a feature vector of the industrial Internet system in the target time frame; converting the feature vector into a feature matrix; determining a plurality of groups of unloading allocation decisions based on the feature matrix; and determining an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the magnitude relation of the plurality of objective function values corresponding to the plurality of groups of unloading allocation strategies. The method and the device can quickly determine the optimal unloading allocation decision of the industrial Internet system, facilitate task unloading and resource allocation according to the optimal unloading allocation decision, and thereby greatly improve the average computation rate and greatly reduce the cloud and edge service cost.

Description

Unloading allocation decision determining method and device for industrial Internet system
Technical Field
The disclosure relates to the technical field of industrial Internet, in particular to a method and a device for determining unloading allocation decisions of an industrial Internet system.
Background
Mobile edge computing (Mobile Edge Computing, MEC) networks may be deployed in the industrial internet. As cloud computing and edge computing evolve, devices with limited computing power may choose to offload heavy computing tasks to a cloud or edge server. However, due to limited computing power and network resources of the server, the offloading performance of the task is affected by time-varying channel power gain and network resource allocation, while the execution efficiency of the task is affected by the allocated computing power. It is therefore necessary to create a method that can reasonably perform task offloading allocation.
In the related art, centralized and distributed alternating direction method of multipliers (Alternating Direction Method of Multipliers, ADMM) approaches, coordinate descent methods, or Lagrangian relaxation methods based on the scheduling problem in the volunteer cloud are adopted for task unloading and resource allocation; however, the computation of these methods is complex, and multiple iterations are needed to obtain a final solution.
How to reasonably carry out task unloading is a problem to be solved urgently.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for determining an unloading allocation decision of an industrial Internet system, so as to solve the problem of how to reasonably unload tasks.
In a first aspect of an embodiment of the present disclosure, there is provided an offload allocation decision determining method of an industrial internet system, the industrial internet system including a cloud center, a plurality of edge servers, and a plurality of edge devices, the method being applied to the edge devices and/or the cloud center, the method including:
processing state information of the industrial Internet system in a target time frame by using a deep neural network to obtain a feature vector of the industrial Internet system in the target time frame;
converting the feature vector into a feature matrix;
determining a plurality of sets of offload allocation decisions based on the feature matrix;
and determining an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the magnitude relation of the plurality of objective function values corresponding to the plurality of groups of unloading allocation strategies.
In one embodiment of the present disclosure, the determining the plurality of sets of offload allocation decisions based on the feature matrix includes:
determining the multiple groups of unloading allocation decisions based on numerical relationships among unloading indication variables corresponding to matrix elements in the feature matrix and a first randomly generated noise matrix;
each matrix element in the feature matrix corresponds to an unloading indication variable, the value of the unloading indication variable is used for selecting, from the plurality of edge servers and the plurality of edge devices, the device for processing the unloading task, the dimension of the feature matrix is identical to the dimension of the feature vector, the dimension of the first noise matrix is identical to the dimension of the feature matrix, and each element in the first noise matrix obeys a standard Gaussian distribution.
In one embodiment of the present disclosure, the number of the plurality of sets of offload allocation decisions is 2N+2, N is an integer greater than 1, and N is the same as the number of edge devices in the plurality of edge devices;
the determining the multiple groups of unloading allocation decisions based on the numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and the randomly generated first noise matrix comprises:
determining a first unloading allocation decision set based on a numerical relation among unloading indication variables corresponding to matrix elements in the feature matrix, wherein the first unloading allocation decision set comprises N+1 groups of unloading allocation decisions;
determining a second noise matrix based on the feature matrix and the first noise matrix;
determining a second unloading allocation decision set based on a numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and the second noise matrix, wherein the second unloading allocation decision set comprises N+1 groups of unloading allocation decisions;
the plurality of sets of offload allocation decisions are determined based on the first set of offload allocation decisions and the second set of offload allocation decisions.
In one embodiment of the present disclosure, before the determining the optimal offload allocation decision from the multiple sets of offload allocation decisions based on the magnitude relation of the values of the multiple objective function values corresponding to the multiple sets of offload allocation policies, the method further includes:
Acquiring the central processing unit frequency of the target edge equipment corresponding to each unloading decision in the plurality of groups of unloading allocation decisions, the uploading power of the edge equipment unloaded to a target edge server and the allocation bandwidth of the target edge server, and the allocation time of the target edge equipment for locally calculating and calling the cloud center service;
and calculating an objective function value corresponding to each unloading allocation decision in the plurality of groups of unloading allocation decisions based on the CPU frequency, the uploading power of the edge equipment unloading to the target edge server, the allocation bandwidth and the allocation time, and obtaining the plurality of objective function values.
In one embodiment of the disclosure, the obtaining the central processor frequency of the target edge device corresponding to each of the plurality of sets of offloading allocation decisions includes:
and solving a preset optimization problem formula aiming at the CPU frequency to obtain the CPU frequency.
In one embodiment of the present disclosure, the method for obtaining the uplink power and the allocated bandwidth includes:
and solving a preset optimization problem formula aiming at the uploading power and the distribution bandwidth to obtain the uploading power and the distribution bandwidth.
In one embodiment of the present disclosure, the method for obtaining the allocation time includes:
and solving a preset optimization problem formula aiming at the distribution time to obtain the distribution time.
In a second aspect of the embodiments of the present disclosure, there is provided an offload allocation decision determining apparatus of an industrial internet system, the industrial internet system including a cloud center, a plurality of edge servers, and a plurality of edge devices, the apparatus being applied to the edge devices and/or the cloud center, the apparatus comprising:
the feature vector determining module is used for processing the state information of the industrial Internet system in the target time frame by using the deep neural network to obtain the feature vector of the industrial Internet system in the target time frame;
the feature matrix determining module is used for converting the feature vector into a feature matrix;
the unloading allocation decision determining module is used for determining the plurality of groups of unloading allocation decisions based on the feature matrix;
and the optimal decision determining module is used for determining an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the magnitude relation of the plurality of objective function values corresponding to the plurality of groups of unloading allocation strategies.
In one embodiment of the disclosure, the unloading allocation decision determining module is configured to determine the multiple sets of unloading allocation decisions based on a numerical relationship between unloading indication variables corresponding to matrix elements in the feature matrix and a first noise matrix that is randomly generated;
each matrix element in the feature matrix corresponds to an unloading indication variable, the value of the unloading indication variable is used for selecting, from the plurality of edge servers and the plurality of edge devices, the device for processing the unloading task, the dimension of the feature matrix is identical to the dimension of the feature vector, the dimension of the first noise matrix is identical to the dimension of the feature matrix, and each element in the first noise matrix obeys a standard Gaussian distribution.
In one embodiment of the present disclosure, the number of the plurality of sets of offload allocation decisions is 2N+2, N is an integer greater than 1, and N is the same as the number of edge devices in the plurality of edge devices;
the unloading allocation decision determining module is used for determining a first unloading allocation decision set based on the numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix, wherein the first unloading allocation decision set comprises N+1 groups of unloading allocation decisions; the unloading allocation decision determining module is further used for determining a second noise matrix based on the feature matrix and the first noise matrix; the unloading allocation decision determining module is further configured to determine a second unloading allocation decision set based on a numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and the second noise matrix, where the second unloading allocation decision set includes N+1 groups of unloading allocation decisions; the unloading allocation decision determining module is further configured to determine the plurality of sets of unloading allocation decisions based on the first unloading allocation decision set and the second unloading allocation decision set.
In one embodiment of the present disclosure, the optimal decision determining module is configured to obtain a central processor frequency of a target edge device corresponding to each set of offloading decisions in the plurality of sets of offloading allocation decisions, an upload power of the edge device offloading to a target edge server, and an allocation bandwidth of the target edge server, and an allocation time of the target edge device locally calculating and invoking the cloud center service; the optimal decision determining module is further configured to calculate an objective function value corresponding to each set of offloading allocation decisions in the multiple sets of offloading allocation decisions based on the central processing unit frequency, the uploading power and the allocation bandwidth of the edge device offloading to the target edge server, and the allocation time, so as to obtain the multiple objective function values.
In one embodiment of the disclosure, the optimal decision determining module is configured to solve a preset optimization problem formula for the cpu frequency, so as to obtain the cpu frequency.
In one embodiment of the present disclosure, the optimal decision determining module is configured to solve a preset optimization problem formula for the uplink power and the allocated bandwidth, so as to obtain the uplink power and the allocated bandwidth.
In one embodiment of the present disclosure, the optimal decision determining module is configured to solve a preset optimization problem formula for the allocation time, so as to obtain the allocation time.
A third aspect of embodiments of the present disclosure provides an industrial internet system, comprising:
a mobile edge computing network comprising a cloud center, a plurality of edge servers, and a plurality of edge devices;
the task processing device of the industrial internet system according to the second aspect.
In a fourth aspect of embodiments of the present disclosure, there is provided an electronic device, including:
a memory for storing a computer program product;
a processor for executing the computer program product stored in the memory, and when the computer program product is executed, implementing the method according to the first aspect.
A fifth aspect of an embodiment of the present disclosure provides a computer readable storage medium having stored thereon computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method according to the first aspect.
According to the industrial Internet system and the unloading allocation decision determining method and device thereof, the state information of the industrial Internet system in the target time frame is processed by the deep neural network to obtain the feature vector of the industrial Internet system in the target time frame. After the feature vector is converted into a feature matrix, multiple groups of unloading allocation decisions of the industrial Internet system in the target time frame can be obtained from the feature matrix, and the optimal unloading allocation decision can be determined from the multiple groups of unloading allocation decisions according to the magnitude relation of the multiple objective function values corresponding to the multiple groups of unloading allocation strategies. Task unloading and resource allocation can then be carried out according to the optimal unloading allocation decision, so that the average computation rate is greatly improved and the cloud and edge service cost is greatly reduced.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a block diagram of an industrial Internet system in an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for determining an offload assignment decision for an industrial Internet system in an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the operation of an unloading allocation decision determination method of an industrial Internet system according to an embodiment of the disclosure;
FIG. 4 is a graph of experimental results of average raw task queue lengths for edge devices in one example of the present disclosure;
FIG. 5 is a graph of experimental results of average raw task queue lengths for edge servers in one example of the present disclosure;
FIG. 6 is a graph of experimental results of average energy consumption of an edge device in one example of the present disclosure;
FIG. 7 is a block diagram of an offloading allocation decision-making device of an industrial Internet system in an embodiment of the disclosure;
FIG. 8 is a block diagram of an industrial Internet system in an embodiment of the present disclosure;
Fig. 9 is a block diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the front and rear association objects are an or relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, small computer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the foregoing, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
In embodiments of the present disclosure, task offloading means that an edge device selects either local computation of the task data or a designated edge server for task processing, and corresponding data processing resources are allocated to the tasks to be processed according to their requirements.
Before describing the task processing method of the industrial internet system of the present disclosure, the architecture of the industrial internet system to which this method is directed and the related modeling are described.
Fig. 1 is a block diagram of an industrial internet system in an embodiment of the present disclosure. As shown in FIG. 1, the industrial Internet system comprises a cloud center, M edge servers and N edge devices, where M and N are integers greater than 1. Time is divided into time frames of equal length T. Within a time frame, the task arrival amount of each edge device is assumed to be independently and identically distributed across time frames, and its second moment exists, is bounded and can be estimated from past observations. The wireless channel gain between an edge device and an edge server is assumed, under a block fading model, to be constant within a time frame and to vary independently between time frames. An edge device may choose to perform its computing task locally or to offload it to an edge server; local execution requires setting the CPU frequency, while offloading requires deciding the uploading power, the bandwidth allocation and the edge server to offload to. An edge server, in turn, can allocate part of the time frame to offloading data to the cloud and the remaining part to executing tasks locally.
Modeling related to the edge devices is as follows. Each edge device is associated with unloading indication variables that take the value 0 or 1, indexed by the edge device and by the offloading target, where the target index ranges over the integers from 0 to M. If the variable whose target index is 0 equals 1, the edge device selects local computation within the time frame and does not offload its task; if the variable whose target index corresponds to a particular edge server equals 1, the edge device offloads to that edge server within the time frame. Considering the mutual exclusivity of these operations, it is assumed that an edge device can choose only one operation within one time frame, namely:
(1)
When a wireless device chooses to perform the computing task locally (i.e., the unloading indication variable with target index 0 equals 1), a dynamic voltage and frequency scaling technique is applied, and the CPU frequency can be set up to a given upper limit. The amount of raw data processed and the energy consumed by locally executing the computing task within the time frame are then:
(2)
(3)
where the first parameter represents the number of computation cycles required to process one bit of raw data, and the second represents the computation energy efficiency parameter.
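The original formula images for (2) and (3) are not reproduced in this text. Purely for reference, a standard dynamic-voltage-frequency-scaling model consistent with the definitions above would take a form such as the following, where the symbols f_i(t) (CPU frequency), φ (cycles per bit), κ (energy efficiency parameter) and T (frame length) are assumed names rather than the patent's own notation:

```latex
% Plausible form of (2)-(3): data processed and energy consumed by local
% execution within one time frame (assumed notation).
D_i^{\mathrm{loc}}(t) = \frac{f_i(t)\,T}{\phi}, \qquad
E_i^{\mathrm{loc}}(t) = \kappa\, f_i^{3}(t)\, T .
```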
When a wireless device chooses to offload the computing task to an edge server (i.e., the unloading indication variable with target index 0 is not 1), decisions need to be made on the uploading power and the bandwidth allocation. The amount of raw data offloaded to the edge and the energy consumed within the time frame are:
(4)
(5)
where the first quantity is the uploading power, the second represents the noise power spectral density, B is the total bandwidth, and the last quantity is the fraction of bandwidth that the edge server allocates to the individual wireless device; the bandwidth allocation therefore needs to satisfy:
(6)
(7)
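The formula images for (4)-(7) are likewise not reproduced. A common edge-offloading model consistent with the surrounding definitions (uploading power p, bandwidth fraction α of the total bandwidth B, channel gain h, noise power spectral density N_0; all symbols assumed) would read, for example:

```latex
% Plausible form of (4)-(5): Shannon-capacity upload volume and transmit
% energy over one frame of length T (assumed notation). Constraints (6)-(7)
% would then bound the uploading power and require the bandwidth fractions
% allocated by each edge server to sum to at most 1.
D_{ij}^{\mathrm{off}}(t) = \alpha_{ij}(t)\, B\, T\,
  \log_2\!\Bigl(1 + \frac{p_i(t)\, h_{ij}(t)}{\alpha_{ij}(t)\, B\, N_0}\Bigr), \qquad
E_{ij}^{\mathrm{off}}(t) = p_i(t)\, T .
```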
For task-queue modeling, a queue length denotes the amount of raw task data queued at each wireless device. Given its initial value, the task queue length of the wireless device evolves according to the update formula:
(8)
and considering the actual physical meaning, the amount of data processed needs to be less than or equal to the amount of task data present in the task queue, so the following constraint needs to be satisfied:
(9)
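The queue update formula (8) and constraint (9) are not reproduced; a standard form matching the description above ("the amount of data processed needs to be less than or equal to the amount of task data present in the queue") would be, with the assumed symbols Q_i(t) for the queue length, D_i(t) for the data processed and A_i(t) for the task arrivals:

```latex
% Plausible form of (8)-(9): queue recursion plus a causality constraint
% on how much data can be processed within a frame (assumed notation).
Q_i(t+1) = Q_i(t) - D_i(t) + A_i(t), \qquad D_i(t) \le Q_i(t).
```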
Modeling related to the cloud and the edge servers is as follows. Each edge server can allocate part of the time frame to offloading data to the cloud and the remaining part to local execution, so the two allocated durations need to satisfy:
(10)
The amount of raw data locally processed by the edge server within the time frame and the cost of invoking the edge server are then:
(11)
(12)
where the first quantity represents the CPU frequency of the edge server and the second is the unit cost of invoking the edge service.
When data is offloaded to the cloud, it is generally assumed that the data processing capacity of the cloud computing center is large enough, so only the constraint imposed by the network bandwidth between the cloud and the edge is considered, and the data offloaded to the cloud by an edge server can be processed within the time frame. The amount of raw data uploaded/processed and the cost of invoking the cloud service are then:
(13)
(14)
where the first quantity represents the data transmission rate between the edge server and the cloud, and the second is the unit cost of invoking the cloud service.
In addition, the bandwidth between the edge servers and the cloud center is limited, i.e., the total amount of data transmitted to the cloud center in one time frame must be less than or equal to the total bandwidth multiplied by the length of the time frame. Dividing both sides of this inequality by the total bandwidth yields the following bandwidth allocation constraint:
(15)
For the task queues of the edge servers, a queue length denotes the amount of raw task data queued at each edge server. Given its initial value, the task queue length of the edge server evolves according to the update formula:
(16)
and considering the actual physical meaning, the amount of data processed needs to be less than or equal to the amount of task data present in the task queue, so the following constraint needs to be satisfied:
(17)
The optimization objective is to reduce the long-term average cost of invoking cloud and edge services while maximizing the long-term average computation rate of the network, under the constraints of long-term task-queue stability and average power limits. The objective function to be optimized within a single time frame is defined as:
(18)
With the above definitions, the optimization problem (P1) obtained by modeling is as follows:
(19)
(20)
(21)
(22)
(23)
(24)
(25)
(26)
(27)
(28)
(29)
Equation (19) represents the long-term average of the difference between the computation amount of the network and the cloud-edge service cost, and the optimization objective is to maximize it. Equation (20) represents the average power limit of the i-th wireless device. Equations (21) and (22) represent the requirement of long-term stability of the task queues of the edge servers and edge devices, where the strong stability notion from queuing theory is employed. The constraints in formulas (23)-(28) have appeared above, and formula (29) gathers the value constraints on all variables to be optimized.
The optimization problem (P1) is difficult to solve directly: doing so requires the random information of all time frames (including the wireless channel gains and the task arrival amounts), and it is difficult to use constraint formulas (20)-(22) directly because they contain sums over infinitely many terms. Even if the random information were obtained and converted into constraints, the computational difficulty and cost of solving the problem would be too high and the solving time too long, so the real-time requirement of a large-scale system would be difficult to meet, or the solution quality would degrade in a complex and uncertain environment. By introducing Lyapunov optimization theory, the problem is therefore decoupled into a deterministic problem for each time frame.
A virtual energy queue is introduced for constraint equation (20); it is defined with an appropriate initial value and has the update formula:
(30)
where the coefficient in the update formula is a positive scaling factor. The queue can be intuitively understood as the "excess amount of energy": when the energy consumed in a time frame exceeds the average power budget, the excess is stored in the virtual energy queue; when the energy consumed is below the budget and the virtual energy queue length is not 0, the energy "saved" in that time frame offsets the previous excess. If the virtual queue is strongly stable, constraint (20) holds, so the constraint can be translated into:
(31)
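Formulas (30)-(31) are not reproduced; a virtual-queue construction matching the verbal description above would look like the following, where Y_i(t), E_i(t), γ_i and ν are assumed symbols for the virtual queue, the per-frame energy consumption, the per-frame energy budget and the positive scaling factor:

```latex
% Plausible form of (30): virtual energy queue that replaces the long-term
% average power constraint (20); constraint (31) then requires this queue
% to be strongly stable (assumed notation).
Y_i(t+1) = \max\bigl(Y_i(t) + \nu\,\bigl(E_i(t) - \gamma_i\bigr),\ 0\bigr).
```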
For convenience of description, the task queues and the virtual energy queues are collected into a single queue vector. The quadratic Lyapunov function is then defined as:
(32)
If the Lyapunov function is bounded for every time frame t, it is easy to see that all task queues and virtual energy queues are bounded, so constraint formulas (20)-(22) hold. To control its value, the Lyapunov drift is defined as follows:
(33)
Further consider the following drift-plus-penalty expression:
(34)
where the weight is a positive constant representing the "importance" of the objective function within the whole expression. The motivation for choosing such an expression is that one must not only minimize the Lyapunov drift but also obtain as large an objective function value as possible. In the original drift-plus-penalty expression the penalty term carries a positive sign, whereas here it carries a negative sign, because a larger objective function value is desired rather than a smaller penalty value. Therefore, solving the original optimization problem (P1) can be converted into minimizing this drift-plus-penalty expression. In Lyapunov optimization theory one generally does not optimize the drift-plus-penalty expression directly but rather an upper bound of it, which is given by the following theorem:
Theorem: assume that within each time frame the channel gains and task arrivals are independently and identically distributed. Then, for all possible queue states and all feasible decisions, the drift-plus-penalty expression satisfies:
(35)
where B is a positive constant that is determined once the system is specified.
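Expressions (32)-(34) above are not reproduced either; with the assumed notation Θ(t) for the concatenated queue vector, L(Θ(t)) for the quadratic Lyapunov function, V for the positive weight and f(t) for the per-frame objective (18), a form consistent with the description would be:

```latex
% Plausible form of (32)-(34): quadratic Lyapunov function, conditional
% one-frame drift, and the drift-minus-reward expression minimized per frame
% (assumed notation).
L(\Theta(t)) = \tfrac{1}{2}\,\lVert \Theta(t) \rVert^{2}, \qquad
\Delta(\Theta(t)) = \mathbb{E}\bigl[L(\Theta(t+1)) - L(\Theta(t)) \,\big|\, \Theta(t)\bigr], \qquad
\Delta(\Theta(t)) - V\,\mathbb{E}\bigl[f(t) \,\big|\, \Theta(t)\bigr].
```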
At the beginning of each time frame, using the opportunistic expectation minimization principle and removing from the upper bound in (35) the parts that are unrelated to the variables to be optimized, the original optimization problem (P1) is transformed into the optimization problem (P2):
(36)
(23)-(29)
For the optimization problem (P2), if the unloading indication variables are known, the problem can be decoupled into three sub-problems that are not difficult to solve. It can therefore be considered that, at the beginning of each time frame, by observing the state information composed of the current channel gains and the queue information, one or more groups of candidate unloading indication matrices are first generated; the optimal resource allocation decision and the resulting objective function value corresponding to each group of unloading decisions are then solved, and the group of decisions with the largest objective function value is selected as the optimal decision. The state information and the wireless channel gain matrix are defined as follows:
(37)
(38)
(39)
However, finding the best possible offloading decision remains challenging. One possible solution is to traverse all possible offloading decisions, which requires solving the optimal resource allocation decision (M+1)^N times; when N and M are large, obtaining such decisions in real time in a deployed practical system is nearly impossible. Deep reinforcement learning is therefore considered: a neural network takes the state information as input and outputs an N(M+1)-dimensional vector, which is reshaped into a matrix of the same shape as the unloading indication matrix, and the reinforcement learning method makes this output approach the optimal unloading decision as closely as possible while interacting with the actual system.
Hereinafter, an unloading allocation decision determining method of an industrial internet system according to an embodiment of the present disclosure will be specifically described.
Fig. 2 is a flow chart of a method for determining an offload allocation decision of an industrial internet system according to an embodiment of the disclosure. As shown in fig. 2, the industrial internet system comprises a cloud center, a plurality of edge servers and a plurality of edge devices, and the unloading allocation decision determining method of the industrial internet system is applied to the edge devices and/or the cloud center. The unloading allocation decision determining method of the industrial Internet system comprises the following steps:
S1: and processing the state information of the industrial Internet system in the target time frame by using a deep neural network to obtain the characteristic vector of the industrial Internet system in the target time frame.
The state information of the industrial Internet system can be used as input and transmitted to the pre-trained deep neural network to output the characteristic vector of the industrial Internet system in the target time frame.
FIG. 3 is a schematic diagram of the operation of the unloading allocation decision determining method of the industrial Internet system according to an embodiment of the present disclosure. As shown in fig. 3, the state information u_t of the current industrial internet system is acquired in the target time frame and passed as a vector to the input layer of a deep neural network (Deep Neural Networks, DNN). The DNN output layer outputs an N(M+1)-dimensional vector. The target time frame may be a designated time frame or each time frame.
The DNN comprises an input layer, hidden layers and an output layer. The input layer receives the state information u_t collected by the observer, expands it into a vector and passes it to the hidden layers. The multiple hidden layers then abstract the state passed on by the input layer into higher-level representations that contain the key features supporting the system decision. Finally, the high-level representation is passed to the output layer, which outputs an N(M+1)-dimensional vector whose elements are real numbers between 0 and 1. The universal approximation theorem indicates that a multi-layer perceptron with enough hidden neurons can approximate any continuous function to any accuracy; therefore, a multi-layer perceptron can be used to make its output approximate the optimal offloading decision. The parameters θ_t of the DNN are initially randomly initialized and obey a standard normal distribution.
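As an illustrative sketch only (the patent does not specify layer sizes, activation functions or a framework), a multi-layer perceptron of this kind could be written with PyTorch as follows; all names and hyperparameters here are assumptions:

```python
import torch
import torch.nn as nn

class OffloadDNN(nn.Module):
    """Maps the state vector u_t to an N*(M+1)-dimensional output in (0, 1)."""
    def __init__(self, state_dim: int, n_devices: int, m_servers: int, hidden: int = 128):
        super().__init__()
        self.n, self.m = n_devices, m_servers
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_devices * (m_servers + 1)),
            nn.Sigmoid(),  # every output element lies between 0 and 1
        )

    def forward(self, u_t: torch.Tensor) -> torch.Tensor:
        # Reshape the N*(M+1) output vector into the N x (M+1) feature matrix.
        return self.net(u_t).view(-1, self.n, self.m + 1)
```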
S2: and converting the feature vector into a feature matrix. I.e.NMVector deformation in +1) dimension intoNM+1) feature matrix
S3: and determining the plurality of groups of unloading allocation decisions based on the feature matrix.
In one example of the present disclosure, a predetermined quantization method is utilized to generate (2N+2) groups of offload allocation decisions from the feature matrix.
S4: and determining an optimal unloading and distributing decision from the plurality of groups of unloading and distributing decisions based on the numerical value size relation of the plurality of objective function values corresponding to the plurality of groups of unloading and distributing strategies.
After the plurality of groups of unloading allocation strategies are obtained, the objective function value corresponding to each group of unloading allocation strategies is calculated according to the calculation formula of the objective function value. The objective function value corresponding to each group represents how good that group of unloading allocation strategies is: the larger the objective function value, the better the corresponding group of unloading allocation strategies.
And after determining the optimal offloading allocation decision, performing task offloading and resource allocation according to the task processing equipment (edge equipment or edge server) corresponding to the optimal offloading allocation decision.
In addition, the state information u_t and the adopted optimal offloading decision can be stored as a state-decision pair in the replay memory of the DNN. Every certain number of time frames, the algorithm randomly samples a batch of state-decision pairs from the replay memory to update the DNN.
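The patent does not detail the replay memory or the loss used for the DNN update; a minimal sketch of one common choice (a bounded buffer plus a cross-entropy fit of the DNN output to the selected binary decision matrix, as in DROO-style schemes) is shown below, with all names assumed:

```python
import random
from collections import deque
import torch
import torch.nn.functional as F

class ReplayMemory:
    """Bounded buffer of (state, best_decision_matrix) pairs."""
    def __init__(self, capacity: int = 1024):
        self.buffer = deque(maxlen=capacity)

    def push(self, state: torch.Tensor, decision: torch.Tensor) -> None:
        self.buffer.append((state, decision))

    def sample(self, batch_size: int):
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        states, decisions = zip(*batch)
        return torch.stack(states), torch.stack(decisions)

def train_step(dnn, memory: ReplayMemory, optimizer, batch_size: int = 64) -> float:
    states, decisions = memory.sample(batch_size)
    pred = dnn(states)                           # N x (M+1) matrices in (0, 1)
    loss = F.binary_cross_entropy(pred, decisions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```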
In this embodiment, the state information of the industrial internet system in the target time frame is processed by the deep neural network to obtain the feature vector of the industrial internet system in the target time frame. After the feature vector is converted into a feature matrix, multiple groups of unloading allocation decisions of the industrial internet system in the target time frame can be obtained from the feature matrix, and the optimal unloading allocation decision can be determined from the multiple groups of unloading allocation decisions according to the magnitude relation of the multiple objective function values corresponding to the multiple groups of unloading allocation strategies. This facilitates task unloading and resource allocation according to the optimal unloading allocation decision, so that the average computation rate is greatly improved and the cloud and edge service cost is greatly reduced.
In one embodiment of the present disclosure, S3 includes: determining the multiple groups of unloading allocation decisions based on numerical relationships between the unloading indication variables corresponding to the matrix elements in the feature matrix and a randomly generated first noise matrix. Each matrix element in the feature matrix corresponds to an unloading indication variable, and the value of the unloading indication variable is used for selecting, from the plurality of edge servers and the plurality of edge devices, the device that processes the unloading task; the dimension of the feature matrix is identical to the dimension of the feature vector, the dimension of the first noise matrix is identical to the dimension of the feature matrix, and each element in the first noise matrix obeys a standard Gaussian distribution.
Since each matrix element in the feature matrix corresponds to an offload indicating variable, and the value of the offload indicating variable is used to select the device that processes the offload task from among the plurality of edge servers and the plurality of edge devices (see equation (1) and the related content above), N+1 groups of offload allocation decisions may be generated from the feature matrix. After the feature matrix is fused with the first noise matrix, another N+1 groups of unloading allocation decisions can be generated, thereby obtaining (2N+2) groups of offload allocation decisions.
In this embodiment, based on the numerical relation between the unloading indication variables corresponding to the matrix elements in the feature matrix and the randomly generated first noise matrix, multiple groups of unloading allocation decisions can be quickly generated.
In one embodiment of the present disclosure, the number of sets of offload allocation decisions is 2N+2, where N is an integer greater than 1 and equals the number of edge devices among the plurality of edge devices; S3 may further include:
S3-1: And determining a first unloading allocation decision set based on the numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix, wherein the first unloading allocation decision set comprises N+1 groups of unloading allocation decisions.
For each edge device, the unloading indication variable whose corresponding feature-matrix entry is the largest in that device's row is set to 1 and the rest are set to 0, generating the 1st group of offloading decisions.
The next N groups of decisions are generated in a similar way, except that each time, for one of the edge devices, the variable set to 1 is not the one whose feature-matrix entry is the largest in that device's row but the one whose entry is the second largest, with the rest set to 0; N groups of decisions can be generated in this way. These N groups of decisions are then ordered from large to small according to the value of the second-largest entry used, thus generating the N groups of decisions.
The 1st group of offloading decisions and these N groups of decisions together compose the first offloading allocation decision set.
S3-2: a second noise matrix is determined based on the feature matrix and the first noise matrix.
The N×(M+1)-dimensional first noise matrix is added to the feature matrix to obtain the second noise matrix.
S3-3: and determining a second unloading allocation decision set based on the numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and the second noise matrix. Wherein the second set of offload allocation decisions comprisesN+1 group offload allocation decision.
Based on the second noise matrix, N+1 groups of decisions are generated by the same method as for the first N+1 groups; these N+1 groups of decisions form the second offloading allocation decision set.
S3-4: a plurality of sets of offload allocation decisions are determined based on the first set of offload allocation decisions and the second set of offload allocation decisions.
In this embodiment, based on the numerical relation between the unloading indication variables corresponding to the matrix elements in the feature matrix, half of the unloading allocation decisions, which are close to the feature matrix, can be generated, and the other half of the unloading allocation decisions, which have a certain exploratory character, can be generated based on the second noise matrix. This achieves a trade-off between exploration and exploitation and facilitates the training of the deep neural network in deep reinforcement learning.
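A compact sketch of one way to implement this candidate generation is given below; it assumes an N×(M+1) feature matrix with one row per edge device and one-hot rows in each decision matrix, and it is an interpretation of steps S3-1 to S3-4 rather than the patent's verbatim algorithm:

```python
import numpy as np

def generate_candidates(feature: np.ndarray) -> list:
    """Return 2N+2 candidate decision matrices from an N x (M+1) feature matrix."""
    def candidates_from(mat: np.ndarray) -> list:
        n, _ = mat.shape
        base = np.zeros_like(mat)
        base[np.arange(n), mat.argmax(axis=1)] = 1.0   # per-device largest entry
        out = [base]
        second = np.sort(mat, axis=1)[:, -2]           # per-device second-largest value
        for i in np.argsort(-second):                  # order devices by that value
            alt = base.copy()
            alt[i] = 0.0
            alt[i, np.argsort(mat[i])[-2]] = 1.0       # switch device i to its 2nd choice
            out.append(alt)
        return out                                     # N + 1 matrices

    noise = np.random.standard_normal(feature.shape)   # first noise matrix
    return candidates_from(feature) + candidates_from(feature + noise)
```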
In one embodiment of the present disclosure, before S4, further comprising:
S-A: and acquiring the central processing unit frequency of the target edge device corresponding to each unloading decision in the plurality of groups of unloading allocation decisions, the uploading power of the edge device unloaded to the target edge server and the allocation bandwidth of the target edge server, and the allocation time of the target edge device for locally calculating and calling the cloud center service.
S-B: and calculating an objective function value corresponding to each unloading allocation decision in a plurality of groups of unloading allocation decisions based on the frequency of the central processing unit, the uploading power and the allocation bandwidth of the unloading of the edge equipment to the target edge server and the allocation time, and obtaining a plurality of objective function values.
When the feature matrix is known, the optimization problem (P2) may be decoupled into three sub-problems, including selecting the CPU frequency of the target edge device that processes the task data locally, selecting the upload power of the target edge device offloaded to the target edge server and the allocation bandwidth of the target edge server, and the allocation time for the edge server to calculate and invoke the cloud center service locally.
The (2N+2) groups of generated offloading decisions are traversed, and the optimal resource allocation decision and the objective function value corresponding to each group are calculated.
In this embodiment, when the feature matrix characterizing the offloading decision is known, the optimization problem of the offloading decision can be decoupled into three sub-problems: the CPU frequency of the target edge device, the uploading power of the target edge device offloading to the target edge server together with the allocated bandwidth of the target edge server, and the allocation time for local computation and for invoking the cloud center service. Multiple groups of offloading decisions can therefore be traversed to obtain the multiple objective function values corresponding to them, which helps to quickly determine the optimal offloading decision based on these objective function values.
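Illustratively (with assumed function names, since the patent only gives the structure of the procedure), the selection of the optimal decision over the 2N+2 candidates could be sketched as:

```python
def best_decision(candidates, solve_resources, objective):
    """Pick the candidate offloading decision with the largest objective value.

    solve_resources(decision) -> resource allocation (CPU frequency, power, bandwidth, time)
    objective(decision, resources) -> scalar value of the per-frame objective (18)
    """
    best, best_val = None, float("-inf")
    for decision in candidates:
        resources = solve_resources(decision)   # solve the three sub-problems
        value = objective(decision, resources)
        if value > best_val:
            best, best_val = decision, value
    return best, best_val
```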
In one embodiment of the present disclosure, S-A comprises:
S-A-1: and solving a preset optimization problem formula aiming at the CPU frequency to obtain the CPU frequency.
For all edge devices that select to process task data locally, the following optimization problem needs to be solved:
(40)
(24),(29)
Therefore, the optimal analytical solution of this problem is easily obtained by taking the derivative and solving for the stationary point:
(41)
in this embodiment, the preset optimization problem formula for the cpu frequency is solved, so that the cpu frequency can be obtained quickly and easily.
In another embodiment of the present disclosure, S-A comprises:
S-A-2: and solving a preset optimization problem formula aiming at the uploading power and the distribution bandwidth to obtain the uploading power and the distribution bandwidth.
For all edge devices that choose to offload to an edge server, the devices offloading to different edge servers do not affect each other, whereas the devices offloading to the same edge server are coupled in the optimization problem. Therefore, for each edge server, the following optimization problem needs to be solved:
(42)
(23),(24),(29)
Because of the form of the objective, the convexity of the function to be optimized in equation (42) cannot be determined directly. However, for those edge devices whose corresponding weight in the objective is zero, only a zero setting of the uploading power and allocated bandwidth makes the corresponding part of the function to be optimized equal to 0, while all other possible values make that part smaller than 0; setting the uploading power and allocated bandwidth of such edge devices to zero is therefore the most reasonable choice. After this part is removed from the function to be optimized, the remaining function is concave, and a convex optimization algorithm can be used to find its maximum.
In this embodiment, the preset optimization problem formula for the uploading power and the allocation bandwidth is solved, so that the uploading power and the allocation bandwidth can be obtained quickly and easily.
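A sketch of how the per-server concave sub-problem could be handled numerically with SciPy, assuming a generic concave objective over the devices' uploading powers and bandwidth fractions (the actual objective and constraints (23), (24), (29) are not reproduced here, so this is only an illustration):

```python
import numpy as np
from scipy.optimize import minimize

def solve_power_bandwidth(concave_objective, n_dev: int, p_max: float):
    """Maximize an assumed concave objective over (p_1..p_n, alpha_1..alpha_n)
    subject to 0 <= p_i <= p_max, alpha_i >= 0 and sum(alpha_i) <= 1."""
    x0 = np.concatenate([np.full(n_dev, 0.5 * p_max), np.full(n_dev, 1.0 / n_dev)])
    bounds = [(0.0, p_max)] * n_dev + [(0.0, 1.0)] * n_dev
    cons = [{"type": "ineq", "fun": lambda x: 1.0 - np.sum(x[n_dev:])}]  # sum(alpha) <= 1
    res = minimize(lambda x: -concave_objective(x[:n_dev], x[n_dev:]),
                   x0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x[:n_dev], res.x[n_dev:]
```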
In yet another embodiment of the present disclosure, S-A comprises:
S-A-3: and solving a preset optimization problem formula aiming at the distribution time to obtain the distribution time.
Regarding the allocation time of the target edge device for locally calculating and calling the cloud center service, the optimization problem to be solved is as follows:
(43)
(25)-(27),(29)
The above-described problem is a linear programming problem, for which many solving algorithms are available. It can be solved with the open-source Python library SciPy and the 'highs' solver provided in it, which automatically chooses between a high-performance parallel dual revised simplex method and an interior point method. For larger-scale and sparse linear programming problems, this is the fastest linear programming solver in SciPy. With this tool, a high-precision solution of this linear programming problem, i.e., the time allocation, can easily be obtained.
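For illustration, a call to the HiGHS solver mentioned above might look as follows; the cost vector and constraint matrices are placeholders standing in for the (not reproduced) coefficients of the time-allocation problem (43):

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data: c, A_ub and b_ub stand in for the coefficients of the
# time-allocation LP (43); the variables are fractions of the frame length T.
c = np.array([-1.0, -0.5])        # linprog minimizes, so negate to maximize
A_ub = np.array([[1.0, 1.0]])     # e.g. cloud-offload time + local time <= T
b_ub = np.array([1.0])            # times normalized by the frame length

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)], method="highs")
print(res.x)                      # optimal time allocation (if res.success)
```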
In this embodiment, the preset optimization problem formula for the allocation time is solved, so that the allocation time of the cloud center service can be calculated and called locally by the target edge device quickly and easily.
In order for those skilled in the art to further understand the offloading allocation decision determining method of the industrial internet system of the embodiment of the present disclosure, the performance of the offloading allocation decision determining method of the industrial internet system of the embodiment of the present disclosure is explained through the following experiments.
All experiments were performed on a computer equipped with Intel Core i7-10700 CPU and 16 GB memory. In addition, performance was compared to the other three algorithms.
Coordinate Descent (CD): when solving (P2), a coordinate descent method is used to obtain a near-optimal offloading decision. First, an offloading decision is randomly generated, and the optimal resource allocation decision and the objective function value in (P2) are calculated. Next, by changing one component of the offloading decision at a time, the change that most increases the objective function is determined. This process is repeated until changing any single component no longer yields a larger objective function value, thereby obtaining a near-optimal offloading decision.
Greedy Offloading (GO): all edge devices attempt to offload data to an edge server, and the edge servers also attempt to offload data to the cloud center. Each edge device selects the edge server with the highest channel power gain. Bandwidth is evenly allocated among the edge devices that offload data to the same edge server, and time is evenly allocated for requesting cloud services.
Local Processing (LP): all edge devices choose to process task data locally. Only the setting of the CPU frequency needs to be considered, under the constraint of average power consumption.
It is assumed that the task arrival amounts follow an exponential distribution, are independently and identically distributed across time frames, and have a finite expected value. It is further assumed that the expectation of the wireless channel gain satisfies:
(44)
where the quantities involved are the antenna gain, the carrier frequency, the path loss exponent, and the distance between the edge device and the edge server. This section considers statically deployed edge devices with fixed coordinates, as well as fixed edge server coordinates. To improve the reproducibility of the experiments, Table 1 gives the detailed configuration of the simulation experiments and the key parameters and hyperparameters of the DNN.
The convergence of the proposed algorithm is evaluated to determine if the queue long-term stability constraint and the average power consumption constraint can be met.
Fig. 4 is a graph of experimental results of average raw task queue lengths for edge devices in one example of the present disclosure. In fig. 4, the x-axis represents time frames and the y-axis represents average queue lengths of wireless edge devices. Fig. 5 is a graph of experimental results of average raw task queue lengths of edge servers in one example of the present disclosure, where in fig. 5, the x-axis represents time frames and the y-axis represents average queue lengths of edge servers. Fig. 6 is a graph of experimental results of average energy consumption of an edge device in one example of the present disclosure, and in fig. 6, the x-axis represents time frames and the y-axis represents average energy consumption of a wireless edge device in watts.
As shown in figs. 4-6, the task processing method of the industrial internet system according to the embodiments of the present disclosure and the CD algorithm both meet the requirements of data queue stability and average power limitation for the wireless edge devices and the edge servers, whereas the GO algorithm and the LP algorithm do not keep the data queues of the edge devices stable. For the GO algorithm, this is because when all edge devices choose to offload, the bandwidth is overloaded, which reduces transmission efficiency. For the LP algorithm, the edge devices clearly have difficulty processing the data without assistance.
Table 2 Comparison of the solution time of the coordinate descent method and the proposed algorithm
In Table 2, the average solution time is calculated by dividing the total running time of the program by the number of time frames; note that this average therefore includes the training time of the neural network.
The results show that the proposed algorithm is tens of times faster than the CD algorithm, which makes it more practical.
Fig. 7 is a block diagram of an offload allocation decision determining apparatus of an industrial internet system according to an embodiment of the present disclosure. As shown in Fig. 7, the industrial internet system includes a cloud center, a plurality of edge servers, and a plurality of edge devices, and the offload allocation decision determining apparatus is applied to the edge devices and/or the cloud center. The offload allocation decision determining apparatus of the industrial internet system includes:
The feature vector determining module 100 is configured to process state information of the industrial internet system in a target time frame by using the deep neural network, so as to obtain a feature vector of the industrial internet system in the target time frame;
a feature matrix determining module 200, configured to convert the feature vector into a feature matrix;
an offload allocation decision determining module 300, configured to determine a plurality of groups of offload allocation decisions based on the feature matrix;
the optimal decision determining module 400 is configured to determine an optimal offloading allocation decision from the plurality of offloading allocation decisions based on a numerical magnitude relation of a plurality of objective function values corresponding to the plurality of offloading allocation policies.
In one embodiment of the present disclosure, the offload allocation decision determining module 300 is configured to determine the plurality of groups of offload allocation decisions based on numerical relationships between the offload indication variables corresponding to the matrix elements in the feature matrix and a randomly generated first noise matrix; each matrix element in the feature matrix corresponds to an offload indication variable, the value of the offload indication variable is used for selecting, from among the plurality of edge servers and the plurality of edge devices, the equipment that processes the offloading task, the dimension of the feature matrix is identical to that of the feature vector, the dimension of the first noise matrix is identical to that of the feature matrix, and each element in the first noise matrix obeys a standard Gaussian distribution.
In one embodiment of the present disclosure, the number of the plurality of groups of offload allocation decisions is 2N+2, where N is an integer greater than 1 and equals the number of edge devices in the plurality of edge devices. The offload allocation decision determining module 300 is configured to determine a first offload allocation decision set based on the numerical relationships among the offload indication variables corresponding to the matrix elements in the feature matrix, where the first offload allocation decision set includes N+1 groups of offload allocation decisions; the offload allocation decision determining module 300 is further configured to determine a second noise matrix based on the feature matrix and the first noise matrix; the offload allocation decision determining module 300 is further configured to determine a second offload allocation decision set based on the numerical relationships between the offload indication variables corresponding to the matrix elements in the feature matrix and the second noise matrix, where the second offload allocation decision set includes N+1 groups of offload allocation decisions; and the offload allocation decision determining module 300 is further configured to determine the plurality of groups of offload allocation decisions based on the first offload allocation decision set and the second offload allocation decision set.
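By way of illustration, the sketch below expands N feature values into 2N+2 candidate offloading decisions by thresholding the raw features and a noise-perturbed copy of them. The thresholding rule follows generic order-preserving quantization ideas and is an assumption; the exact numerical relationship between the feature matrix, the noise matrices, and the decisions is not fully reproduced above.

```python
import numpy as np

def candidate_decisions(feature_values, rng=np.random.default_rng(0)):
    """Build 2N+2 candidate decisions from N feature values (sketch only)."""
    f = np.ravel(feature_values)
    n = f.size
    noise = rng.standard_normal(n)   # first noise matrix: standard Gaussian entries
    noisy = f + noise                # an assumed form of the second noise matrix

    def order_preserving(v):
        decisions = [(v > 0.5).astype(int)]                       # 1 decision
        decisions += [(v >= t).astype(int) for t in np.sort(v)]   # N more decisions
        return decisions                                          # N + 1 in total

    return np.stack(order_preserving(f) + order_preserving(noisy))  # 2N + 2 rows
```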
In one embodiment of the present disclosure, the optimal decision determining module 400 is configured to obtain, for each group of offloading allocation decisions in the plurality of groups of offloading allocation decisions, the central processing unit frequency of the corresponding target edge device, the upload power with which the edge device offloads to the target edge server, the allocation bandwidth of the target edge server, and the allocation time of the target edge device for local computation and for invoking the cloud center service; the optimal decision determining module 400 is further configured to calculate, based on the central processing unit frequency, the upload power from the edge device to the target edge server, the allocation bandwidth, and the allocation time, an objective function value corresponding to each group of offloading allocation decisions in the plurality of groups of offloading allocation decisions, so as to obtain the plurality of objective function values.
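The selection of the optimal decision can then be illustrated as follows; solve_allocation() and objective() are assumed helpers standing in for the resource-allocation subproblems and the objective function of the embodiments.

```python
def select_best_decision(candidates, solve_allocation, objective):
    """Return the candidate offloading decision with the largest objective value."""
    best_x, best_val = None, float("-inf")
    for x in candidates:
        alloc = solve_allocation(x)        # CPU frequency, upload power, bandwidth, time
        val = objective(x, alloc)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val
```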
In one embodiment of the present disclosure, the optimal decision determining module 400 is configured to solve a preset optimization problem formula for the central processing unit frequency, so as to obtain the central processing unit frequency.
In one embodiment of the present disclosure, the optimal decision determining module 400 is configured to solve a preset optimization problem formula for the uploading power and the allocated bandwidth, so as to obtain the uploading power and the allocated bandwidth.
In one embodiment of the present disclosure, the optimal decision determining module 400 is configured to solve a preset optimization problem formula for the allocation time, so as to obtain the allocation time.
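The preset optimization problem formulas themselves are not reproduced above. Purely as an illustration, a bounded one-dimensional subproblem of this kind, such as choosing the central processing unit frequency on an interval, can be solved by ternary search when its objective is unimodal; the function g below is a placeholder, not a formula from the present disclosure.

```python
def ternary_search_max(g, lo, hi, tol=1e-6):
    """Maximise a unimodal function g on [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2
```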
It should be noted that the specific implementation of the offload allocation decision determining apparatus of the industrial internet system in the embodiments of the present disclosure is similar to the specific implementation of the offload allocation decision determining method of the industrial internet system in the embodiments of the present disclosure; reference may be made to the description of the method, and a repeated description is omitted here to reduce redundancy.
Fig. 8 is a block diagram of an industrial internet system according to an embodiment of the present disclosure. As shown in Fig. 8, the industrial internet system includes a mobile edge computing network 10 and an offload allocation decision determining apparatus 20 as described in the above embodiments.
The mobile edge computing network 10 includes a cloud center, a plurality of edge servers, and a plurality of edge devices.
The cloud center serves as a data center and can store all data in the mobile edge computing network. The cloud center can also manage the plurality of edge servers and the plurality of edge devices; for example, when one edge server fails, the cloud center can notify the other edge servers and the plurality of edge devices to suspend sending data and edge computing tasks to the failed edge server. The cloud center can also designate, from among the remaining edge servers, one edge server to take over the failed edge server for edge computing task processing.
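A minimal sketch of this failover behaviour is given below; the assignments mapping and the rule of designating the first surviving edge server are illustrative assumptions rather than the policy of the present disclosure.

```python
def handle_server_failure(failed, servers, assignments):
    """Suspend traffic to a failed edge server and redirect its devices.

    assignments maps device id -> server id; the takeover rule (first surviving
    server) is a placeholder for the cloud center's actual designation policy.
    """
    survivors = [s for s in servers if s != failed]
    if not survivors:
        raise RuntimeError("no edge server available to take over")
    takeover = survivors[0]
    for device, server in assignments.items():
        if server == failed:
            assignments[device] = takeover   # redirect pending tasks and data
    return takeover, assignments
```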
Each edge server among the plurality of edge servers can be scheduled by the cloud center, receive edge computing tasks sent by edge devices among the plurality of edge devices, and periodically store the edge computing task data processed in the corresponding period to the cloud center.
The plurality of edge devices can communicate with the cloud center and the plurality of edge servers, send edge computing tasks to a designated server among the plurality of edge servers through the cloud center, and receive the computation results of the edge computing tasks fed back by the designated server.
The specific function of the offload allocation decision determining apparatus 20 is the same as that described in the above embodiments and will not be described in detail again. It should be noted that the offload allocation decision determining apparatus 20 may be disposed in the cloud center, or may be disposed in other devices.
In addition, the embodiment of the disclosure also provides an electronic device, which comprises:
a memory for storing a computer program;
and a processor for executing the computer program stored in the memory, wherein, when the computer program is executed, the offload allocation decision determining method of the industrial internet system described above is implemented.
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 9. As shown in fig. 9, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device to perform the desired functions.
The memory may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, a random access memory (RAM) and/or a cache memory (cache). The non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, and a flash memory. One or more computer program instructions may be stored on the computer-readable storage medium and run by the processor to implement the offload allocation decision determining method of the industrial internet system of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: input devices and output devices, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various kinds of information to the outside. The output device may include, for example, a display, a speaker, a printer, a communication network and remote output devices connected thereto, and the like.
Of course, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 9 for simplicity, components such as buses, input/output interfaces, and the like being omitted. In addition, the electronic device may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in an offload allocation decision determining method for an industrial internet system according to various embodiments of the present disclosure described in the above section of the present description.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the offload allocation decision determining method of an industrial internet system according to various embodiments of the present disclosure described in the above section of the present description.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, the embodiments are described in a progressive manner, and each embodiment focuses on its differences from the other embodiments, so the same or similar parts between the embodiments may be referred to each other. The system embodiments are described relatively simply because they essentially correspond to the method embodiments, and reference may be made to the description of the method embodiments for the relevant points.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or", unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that, in the apparatuses, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An offload allocation decision determining method of an industrial internet system, wherein the industrial internet system comprises a cloud center, a plurality of edge servers and a plurality of edge devices, the method being applied to the edge devices and/or the cloud center, the method comprising:
processing state information of the industrial Internet system in a target time frame by using a deep neural network to obtain a characteristic vector of the industrial Internet system in the target time frame;
converting the feature vector into a feature matrix;
determining a plurality of groups of unloading allocation decisions based on the feature matrix;
and determining an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the numerical value magnitude relation of a plurality of objective function values corresponding to the plurality of groups of unloading allocation decisions, so that the edge device performs task unloading and resource allocation on the cloud center and/or the plurality of edge servers based on the optimal unloading allocation decision.
2. The method of claim 1, wherein the determining the plurality of groups of unloading allocation decisions based on the feature matrix comprises:
determining the plurality of groups of unloading allocation decisions based on a numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and a randomly generated first noise matrix;
wherein each matrix element in the feature matrix corresponds to an unloading indication variable, a value of the unloading indication variable is used for selecting, from the plurality of edge servers and the plurality of edge devices, the equipment that processes the unloading task, the dimension of the feature matrix is identical to the dimension of the feature vector, the dimension of the first noise matrix is identical to the dimension of the feature matrix, and each element in the first noise matrix obeys a standard Gaussian distribution.
3. The method of claim 2, wherein the number of the plurality of groups of unloading allocation decisions is 2N+2, N is an integer greater than 1, and N is the same as the number of edge devices in the plurality of edge devices;
the determining the plurality of groups of unloading allocation decisions based on the numerical relation between the unloading indication variables corresponding to the matrix elements in the feature matrix and the randomly generated first noise matrix comprises:
determining a first unloading allocation decision set based on a numerical relation among unloading indication variables corresponding to matrix elements in the feature matrix, wherein the first unloading allocation decision set comprises N+1 groups of unloading allocation decisions;
determining a second noise matrix based on the feature matrix and the first noise matrix;
determining a second unloading allocation decision set based on a numerical relation between unloading indication variables corresponding to matrix elements in the feature matrix and the second noise matrix, wherein the second unloading allocation decision set comprises N+1 groups of unloading allocation decisions;
and the plurality of groups of unloading allocation decisions are determined based on the first unloading allocation decision set and the second unloading allocation decision set.
4. The method according to any one of claims 1-3, further comprising, before the determining an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the numerical value magnitude relation of the plurality of objective function values corresponding to the plurality of groups of unloading allocation decisions:
acquiring, for each group of unloading allocation decisions in the plurality of groups of unloading allocation decisions, the central processing unit frequency of the corresponding target edge device, the uploading power of the edge device unloaded to a target edge server and the allocation bandwidth of the target edge server, and the allocation time of the target edge device for locally calculating and for calling the cloud center service;
and calculating an objective function value corresponding to each group of unloading allocation decisions in the plurality of groups of unloading allocation decisions based on the central processing unit frequency, the uploading power of the edge device unloaded to the target edge server, the allocation bandwidth and the allocation time, to obtain the plurality of objective function values.
5. The method of claim 4, wherein the obtaining the central processing unit frequency of the target edge device corresponding to each group of unloading allocation decisions in the plurality of groups of unloading allocation decisions comprises:
solving a preset optimization problem formula for the central processing unit frequency, to obtain the central processing unit frequency.
6. The method of claim 4, wherein the obtaining the uploading power and the allocation bandwidth comprises:
solving a preset optimization problem formula for the uploading power and the allocation bandwidth, to obtain the uploading power and the allocation bandwidth.
7. An apparatus for determining an offload allocation decision for an industrial internet system, wherein the industrial internet system comprises a cloud center, a plurality of edge servers, and a plurality of edge devices, the apparatus being applied to the edge devices and/or the cloud center, the apparatus comprising:
The characteristic vector determining module is used for processing the state information of the industrial Internet system in the target time frame by using the deep neural network to obtain the characteristic vector of the industrial Internet system in the target time frame;
the feature matrix determining module is used for converting the feature vector into a feature matrix;
the unloading allocation decision determining module is used for determining a plurality of groups of unloading allocation decisions based on the feature matrix;
and an optimal decision determining module, configured to determine an optimal unloading allocation decision from the plurality of groups of unloading allocation decisions based on the numerical value magnitude relation of the plurality of objective function values corresponding to the plurality of groups of unloading allocation decisions.
8. An industrial internet system, comprising:
a mobile edge computing network comprising a cloud center, a plurality of edge servers, and a plurality of edge devices;
the apparatus for determining an offload allocation decision for an industrial internet system according to claim 7.
9. An electronic device, comprising:
a memory for storing a computer program product;
a processor for executing a computer program product stored in said memory, which, when executed, implements the method of any of the preceding claims 1-6.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of the preceding claims 1-6.