CN112948114A - Edge computing method and edge computing platform - Google Patents

Info

Publication number
CN112948114A
CN112948114A (application CN202110224975.2A)
Authority
CN
China
Prior art keywords
edge
task
edge calculation
edge node
mec application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110224975.2A
Other languages
Chinese (zh)
Other versions
CN112948114B (en)
Inventor
白龙
黄颢
尹超
马跃睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing United Time And Space Information Technology Co ltd
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
Original Assignee
Beijing United Time And Space Information Technology Co ltd
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing United Time And Space Information Technology Co ltd, Liantong Shike Beijing Information Technology Co ltd, and China United Network Communications Group Co Ltd
Priority to CN202110224975.2A
Publication of CN112948114A
Application granted; publication of CN112948114B
Legal status: Active

Classifications

    • G06F 9/5072 — Grid computing (under G06F 9/50, allocation of resources; G06F 9/5061, partitioning or combining of resources)
    • G06F 9/5027 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06N 3/045 — Neural network architectures: combinations of networks
    • G06N 3/084 — Neural network learning methods: backpropagation, e.g. using gradient descent
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides an edge computing method and an edge computing platform that formulate a multi-access edge computing (MEC) migration strategy based on multi-attribute decision-making, comprehensively considering migration cost, server energy consumption, and other factors. The method includes the following steps: migrating services generated by an MEC application instance; updating the availability of the service; installing flow rules, uninstalling flow rules, or updating parameters of existing flow rules according to requests from the MEC application instance; installing or uninstalling tasks according to requests from the MEC application instance; and assigning different weights to different edge computing tasks in the task edge computing table so that the edge computing tasks are processed according to their weights.

Description

Edge computing method and edge computing platform
Technical Field
The present application relates to the field of communications, and in particular, to an edge computing method and an edge computing platform.
Background
Currently, relying on remote cloud computing alone is not sufficient to meet the millisecond-level delay requirements of 5G computing and communications. Furthermore, data exchange between user equipment and the remote cloud will saturate the backhaul link and degrade backhaul network quality, making it critical to use the edge cloud as a complement to cloud computing. Edge clouds push traffic, computing, and network functions toward the network edge, which is also consistent with a key feature of next-generation networks: information is increasingly generated and consumed locally, driven by the explosion of internet of things (IoT), social networking, and content delivery applications.
At present, edge-layer collaborative storage mechanisms ignore the influence of differences in model data placement on task-migration edge computing at the edge layer, which can lead to high task-migration processing costs, unbalanced edge-server load, and similar problems.
Disclosure of Invention
The application provides an edge computing method and an edge computing platform, which comprehensively consider migration cost, server energy consumption, and other factors by formulating a multi-access edge computing (MEC) migration strategy based on multi-attribute decision-making.
In a first aspect, an edge computing method is provided, including: migrating services generated by an MEC application instance; updating the availability of the service; installing flow rules, uninstalling flow rules, or updating parameters of existing flow rules according to requests from the MEC application instance; installing or uninstalling tasks according to requests from the MEC application instance; and assigning different weights to different edge computing tasks in the task edge computing table so that the edge computing tasks are processed according to their weights.
According to the embodiment of the application, multiple factors such as cost, server computing capacity, the distance between a user and a server, and server energy consumption are considered comprehensively; the computation migration of the MEC layer is abstracted into a multi-attribute decision model, and an MEC computation migration strategy based on multi-attribute decision-making is formulated, reducing cost and improving load balance.
With reference to the first aspect, in some implementations of the first aspect, updating the availability of the service includes: initializing a kernel of at least one edge node; sending the initialized kernel and training samples to at least one MEC application device; obtaining parameters fed back by the at least one MEC application device, where the parameters are obtained by the at least one MEC application device through neural-network backpropagation; and updating the availability of the service according to the parameters and a preset learning rate.
With reference to the first aspect, in some implementations of the first aspect, assigning different weights to different edge calculation tasks in the task edge calculation table, so as to process the edge calculation tasks according to the weights, includes: arranging the edge calculation processing standards of the at least one edge node in a descending order to generate a task edge calculation table; and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
With reference to the first aspect, in some implementations of the first aspect, the maximum task amount Ω for which the at least one edge node can perform task edge computing is determined by the sum of a counter Deficit and a register Quantum; sequentially accessing the at least one edge node in the order of the task edge computing table includes: when the maximum task amount Ω for which a first edge node among the at least one edge node can perform task edge computing is less than the total amount O_e of task data to be distributed, ending the task edge computing of the first edge node; or, when the maximum task amount Ω for which the first edge node can perform task edge computing is greater than or equal to the total amount O_e of task data to be distributed, performing the task edge computing of the first edge node.
In a second aspect, an edge computing platform is provided, comprising: the migration module is used for migrating the service generated by the MEC application program instance; a processing module for updating the availability of the service; the processing module is used for installing the flow rule, uninstalling the flow rule or updating the parameters of the existing flow rule according to the request of the MEC application program example; the processing module is used for installing tasks or uninstalling tasks according to the request of the MEC application program instance; the processing module is further configured to: and allocating different weights to different edge calculation tasks in the task edge calculation table so as to process the edge calculation tasks according to the weights.
With reference to the second aspect, in some implementations of the second aspect, the processing module is specifically configured to: initializing a kernel of the at least one edge node; sending the initialization kernel and training samples to at least one MEC application device; obtaining parameters fed back by the at least one MEC application device, wherein the parameters are obtained by back propagation of the at least one MEC application device through a neural network; and updating the availability of the service according to the parameter and the preset learning rate.
With reference to the second aspect, in some implementations of the second aspect, the processing module is specifically configured to: arranging the edge calculation processing standards of the at least one edge node in a descending order to generate a task edge calculation table; and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
With reference to the second aspect, in some implementations of the second aspect, the maximum task amount Ω for which the at least one edge node can perform task edge computing is determined by the sum of a counter Deficit and a register Quantum; the processing module is specifically configured to: when the maximum task amount Ω for which a first edge node among the at least one edge node can perform task edge computing is less than the total amount O_e of task data to be distributed, end the task edge computing of the first edge node; or, when the maximum task amount Ω for which the first edge node can perform task edge computing is greater than or equal to the total amount O_e of task data to be distributed, perform the task edge computing of the first edge node.
In a third aspect, an edge computing device is provided, which includes a processor coupled to a memory and configured to execute instructions in the memory to implement the method in any one of the possible implementations of the first aspect. Optionally, the apparatus further comprises a memory. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In a fourth aspect, a processor is provided, comprising: input circuit, output circuit and processing circuit. The processing circuit is configured to receive a signal via the input circuit and transmit a signal via the output circuit, so that the processor performs the method of any one of the possible implementations of the first aspect.
In a specific implementation process, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, and the like. The input signal received by the input circuit may be received and input by, for example and without limitation, a receiver; the signal output by the output circuit may be output to and transmitted by, for example and without limitation, a transmitter; and the input circuit and the output circuit may be the same circuit, serving as input circuit and output circuit at different times. The embodiment of the present application does not limit the specific implementation of the processor and the various circuits.
In a fifth aspect, a processing apparatus is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory, and may receive signals via the receiver and transmit signals via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, there are one or more processors and one or more memories.
Alternatively, the memory may be integrated with the processor, or provided separately from the processor.
In a specific implementation process, the memory may be a non-transient memory, such as a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips.
It will be appreciated that the associated data interaction process, for example, sending the indication information, may be a process of outputting the indication information from the processor, and receiving the capability information may be a process of receiving the input capability information from the processor. In particular, the data output by the processor may be output to a transmitter and the input data received by the processor may be from a receiver. The transmitter and receiver may be collectively referred to as a transceiver, among others.
The processing device in the fifth aspect may be a chip, the processor may be implemented by hardware or software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a sixth aspect, there is provided a computer program product, including a computer program (also called code or instructions) that, when executed, causes a computer to perform the method in any of the possible implementations of the first aspect.
In a seventh aspect, a computer-readable storage medium is provided, which stores a computer program (which may also be referred to as code or instructions) that, when executed on a computer, causes the computer to perform the method in any of the possible implementations of the first aspect.
Drawings
FIG. 1 is a schematic flow chart of a 5G edge calculation method provided by the present invention;
FIG. 2 is a schematic block diagram of an edge computing platform provided by an embodiment of the present application;
fig. 3 is a schematic block diagram of an edge computing device according to an embodiment of the present disclosure.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting in scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
In future 5G networks, ubiquitous cloud computing will be supported. The mobile internet and cloud computing technology are merging into mobile cloud computing. A large number of new applications and services are emerging, such as real-time online gaming, virtual reality, and ultra-high-definition task flows, all of which require unprecedented access speeds and low latency. The past decade has witnessed the rapid rise of next-generation internet visions, including the IoT, the tactile internet (millisecond latency), and social networks. As Cisco predicted, by 2020 the internet would add about 50 billion IoT devices (e.g., sensors and wearable devices), most of whose resources are used for computing, communication, and storage, and which must rely on remote or edge clouds to increase their processing power. Currently, it is widely believed that relying on remote cloud computing alone is not sufficient to meet the millisecond-level delay requirements of 5G computing and communications. Furthermore, data exchange between user equipment and the remote cloud will saturate the backhaul link and degrade backhaul network quality, making it critical to use the edge cloud as a complement to cloud computing. The edge cloud pushes traffic, computing, and network functionality toward the network edge, consistent with a key feature of next-generation networks: information is increasingly generated and consumed locally, driven by the explosion of IoT, social networking, and content delivery applications.
As an application of MEC technology acting on the user side, frequent deployment on the edge side of the 5G data network is required, but existing cloud computing platforms have large system sizes, high construction complexity, and demanding deployment-environment requirements, and cannot meet the large number of deployment demands; moreover, most existing cloud computing platforms follow 4G-era deployment standards, so 5G application services cannot be migrated. The actual edge computing environment is heterogeneous, containing both strong edge servers resembling small data centers and weak edge servers with limited computing or storage capacity. When a weak edge server is overloaded or cannot meet the data requirements of an application, real-time data processing tasks cannot be completed within their deadlines, resulting in task failure. In addition, existing edge-layer collaborative storage mechanisms ignore the influence of differences in model data placement on task-migration edge computing at the edge layer, which can lead to high task-migration processing costs and unbalanced edge-server load.
In view of this, in the embodiment of the present application, a monitoring center and a task analysis processing architecture under edge computing are constructed, a specific task analysis scheme is formulated, an effective atomic filtering algorithm is provided, multiple factors such as cost, server computing capacity, distance between a user and a server, and server energy consumption are considered comprehensively, computing migration of an MEC layer is abstracted into a multi-attribute decision model, and an MEC computing migration strategy based on multi-attribute decision is formulated.
Before introducing the edge calculation method provided by the embodiment of the present application, the following points are explained:
First, in the embodiments shown below, terms and English abbreviations such as multi-access edge computing and availability of a service are exemplary names given for convenience of description and should not limit the present application in any way. This application does not exclude the possibility that other terms performing the same or similar functions may be defined in existing or future protocols.
Second, the first, second and various numerical numbers in the embodiments shown below are merely for convenience of description and are not intended to limit the scope of the embodiments of the present application.
Third, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, and c, may represent: a, or b, or c, or a and b, or a and c, or b and c, or a, b and c, wherein a, b and c can be single or multiple.
The steps and/or the flow of the edge calculation method in the embodiment of the present application may be performed by an edge calculation device, which may be, for example, a server or the like having a function of performing the steps and/or the flow of the edge calculation method, and the embodiment of the present application is not limited herein.
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
FIG. 1 is a schematic flow chart of a 5G edge calculation method 100 provided by the present invention. As shown in fig. 1, the method 100 includes the steps of:
s101, migrating the service generated by the MEC application program instance.
And S102, updating the availability of the service.
And S103, installing the flow rule, uninstalling the flow rule or updating the parameters of the existing flow rule according to the request of the MEC application program instance.
And S104, installing a task or uninstalling the task according to the request of the MEC application program instance.
And S105, distributing different weights to different edge calculation tasks in the task edge calculation table so as to process the edge calculation tasks according to the weights.
It should be understood that the weights (or priorities) are assigned according to each task's influence on the completion of all edge computing tasks, so that the system delay is reduced as much as possible, the stability of the edge-node task edge computing optimization system is ensured to the greatest extent, and multi-service edge computing processing of 5G data is completed.
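As a rough illustration of the weight-based processing in S105, the sketch below drains a set of tasks in descending weight order, breaking ties by the earlier deadline; the data model and function name are assumptions, since the text does not fix a concrete one.

```python
import heapq

def process_by_weight(tasks):
    """Drain edge computing tasks in descending weight order.

    `tasks` holds (weight, deadline, name) tuples -- an assumed data model.
    heapq is a min-heap, so weights are negated to pop the highest first;
    ties fall back to the earlier deadline.
    """
    heap = [(-weight, deadline, name) for weight, deadline, name in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order
```

For example, with tasks [(3, 5, "video"), (1, 2, "log"), (3, 2, "sensor")], the sensor task runs first because it shares the top weight but has the earlier deadline.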
The embodiment of the application provides basic functions such as service registration, flow control and task control required by using the MEC service by interacting with the MEC application program and the data plane; introducing edge nodes, constructing an edge node edge calculation optimization model, establishing an edge node task control center, defining and giving an edge node edge calculation optimization scheme, and providing a task edge calculation optimization algorithm based on an edge node edge calculator. A plurality of factors such as cost, server computing capacity, distance between a user and a server, server energy consumption and the like are comprehensively considered.
As an alternative embodiment, S101 includes: in response to receiving a service migration message sent by the MEC application instance, migrating the new service and updating the availability status of the migrated new service; determining the computation level of the task in the vertical direction and migrating the task to the optimal vertical level before considering computation migration in the horizontal direction; and notifying other related MEC application instances after the service migration unit completes migrating the new service.
As an alternative embodiment, S102 includes: the edge server initializes the kernel, and can determine the size and number of update services according to the training samples and the depth of the neural network; the edge server sends the initialized kernel and training set to each cooperating MEC application device (for a single MEC application device, the edge server sends all update services); and the edge server divides the training set and sends one part to each MEC application device, with the size of each part depending on the size of the training set and the number of MEC application devices.
After receiving the training set and the update service, the MEC application device executes the remaining computation of a convolutional neural network (CNN), including convolution, the linear rectification function (ReLU), pooling, full connection, and the corresponding backpropagation; the MEC application device feeds the parameters obtained through backpropagation back to the edge server, the edge server updates the update service according to the feedback from the intelligent MEC application devices and the preset learning rate, and then the edge server sends the updated update service to the intelligent MEC application devices again.
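The aggregation step described above — the edge server collecting backpropagated parameter feedback from the cooperating MEC application devices and applying a preset learning rate — can be sketched as follows; `server_round` and its simple averaging of device feedback are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def server_round(kernel, device_feedback, learning_rate=0.1):
    """One hypothetical aggregation round at the edge server.

    `device_feedback` is the list of parameter gradients each MEC
    application device obtained via backpropagation; the server averages
    them and applies the preset learning rate to the current kernel.
    """
    avg_grad = np.mean(device_feedback, axis=0)
    # Gradient-descent-style update of the "update service" parameters
    return kernel - learning_rate * avg_grad
```

A usage note: with two devices reporting gradients [1, 1] and [3, 3] on a zero kernel and learning rate 0.1, the updated parameters are [-0.2, -0.2].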
As an alternative embodiment, S103 includes: in response to receiving a flow-rule installation, uninstallation, or update request sent by an MEC application instance, installing, uninstalling, or updating the flow rule; forwarding the flow-rule installation, uninstallation, or update request to the data plane; and sending the response that the data plane generates after operating on the installation, uninstallation, or update request back to the MEC application instance.
As an alternative embodiment, S104 includes: in response to receiving a task installation request sent by an MEC application instance, installing the task in a DNS server or a proxy, or, in response to receiving a task uninstallation request sent by the MEC application instance, deleting the task from the DNS server or the proxy; and generating a response according to the result of the task installation or uninstallation and sending it to the MEC application instance. When an edge user generates an application task, a task offloading request is sent to the edge cloud service center, and the request information includes the task data size and the maximum tolerable delay of the task.
When the edge cloud service center receives the request information, it comprehensively analyzes the entire offloading environment, which includes the requesting user's hardware performance, power, mobility, and wireless network environment, as well as parameters such as bandwidth resources and edge-server states; after analysis and optimal configuration, the offloading control center issues the offloading decision information to the user, including the task division and execution mode, the offloading transmission power, and the allocated wireless resources.
When the offloaded task is completed in the edge cloud, the data result is returned to the user through the downlink.
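The offloading handshake above — a request carrying the task data size and maximum tolerable delay, followed by a decision at the edge cloud service center — might be modeled minimally like this; all names and the simple latency estimate are assumptions, and the real decision also weighs power, mobility, bandwidth, and server state.

```python
from dataclasses import dataclass

@dataclass
class OffloadRequest:
    data_size_mb: float    # size of the task data, in megabytes
    max_delay_ms: float    # maximum tolerable time delay, in milliseconds

def offload_decision(req, uplink_mbps, proc_ms_per_mb):
    """Toy admission check: accept the offload only if the estimated uplink
    transfer time plus edge processing time fits the task's deadline."""
    transfer_ms = req.data_size_mb * 8.0 / uplink_mbps * 1000.0
    total_ms = transfer_ms + req.data_size_mb * proc_ms_per_mb
    return total_ms <= req.max_delay_ms
```

With a 1 MB task, a 100 Mbit/s uplink, and 50 ms of processing per MB, the estimate is 80 ms + 50 ms = 130 ms, so a 200 ms deadline is accepted while a 100 ms deadline is rejected.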
In S103 and S104 above, basic functions required for using the MEC service, such as service registration, flow control, and task control, are provided by interacting with the MEC application and the data plane, which helps optimize the performance of the entire edge computing network.
As an alternative embodiment, S105 includes:
(1) Allocating the task processing workload: denote the task atoms filtered out by the atomic algorithm within time T as O_e; O_e is then the total amount of task data to be distributed and processed within time T.
(2) Generating the task edge computing table: sort s_E in descending order to generate the task edge computing table; after the task edge computing of a single edge node finishes, update the task edge computing table, and preferentially process edge computing tasks with lower weight values that are close to their processing end times; s_E is the edge node's processing standard for task edge computing.
(3) Ordering task edge computing by the least-squares method: the edge computing tasks to be processed access the edge nodes sequentially in the order of the task edge computing table. The sum of a single edge node's counter Deficit and register Quantum is the maximum task amount Ω for which that node can perform task edge computing. When Ω < O_e, the current task edge computing ends. When Ω ≥ O_e, the node's counter at the next moment becomes Ω − O_e, and the task edge computing of the current node completes. Task edge computing serialization continues until the edge nodes in the task edge computing table have all been accessed in order.
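One pass of the counter/quantum rule in step (3) can be sketched as follows, under the assumption that the task edge computing table is a list of per-node records sorted by processing standard; the field and function names are illustrative.

```python
def edge_round(nodes, o_e):
    """One pass over the task edge computing table.

    `nodes` is assumed to be a list of dicts already sorted by processing
    standard (highest first). A node's capacity Ω is its deficit counter
    plus its quantum; a node that cannot absorb the whole workload o_e is
    skipped, otherwise its counter carries Ω - o_e into the next moment.
    Returns the names of the nodes that accepted the workload.
    """
    served = []
    for node in nodes:
        omega = node["deficit"] + node["quantum"]
        if omega < o_e:
            continue                       # Ω < O_e: this node's round ends
        node["deficit"] = omega - o_e      # counter for the next moment
        served.append(node["name"])
    return served
```

For instance, a node with deficit 2 and quantum 3 (Ω = 5) accepts a workload of 4 and carries a counter of 1 forward, while a node with Ω = 1 is skipped.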
The method and the device introduce the edge nodes, construct an edge node edge calculation optimization model, establish an edge node task control center, define and give an edge node edge calculation optimization scheme, and provide a task edge calculation optimization algorithm based on an edge node edge calculator.
As an optional embodiment, the least square method specifically includes the following steps:
First, the weight between the hidden layer and the characterization layer of the continuous Fourier neural network is calculated according to the following formula:

T″(t+1) = T″(t) − μ · ∂E/∂T″(t) + α · ΔT″(t)

where T″(t+1) represents the weight between the hidden layer and the characterization layer at the (t+1)-th recursion, t represents the number of recursions of the continuous Fourier neural network weight training, T″(t) represents the weight between the hidden layer and the characterization layer at the t-th recursion, μ represents the learning rate of the weight between the hidden layer and the characterization layer, generally in the range 0 < μ < 1, ∂E/∂T″(t) represents the partial derivative of the sample's absolute error E with respect to the weight between the hidden layer and the characterization layer at the t-th recursion, α is a momentum variable, generally in the range 0.9 < α < 1, and ΔT″(t) represents the weight offset between the hidden layer and the characterization layer at the t-th recursion.
Secondly, calculate the weight between the calling layer and the hidden layer of the continuous Fourier neural network according to the following formula:

T′_{t+1} = T′_t − μ · ∂E/∂T′_t + α · ΔT′_t

where T′_{t+1} represents the weight between the calling layer and the hidden layer at the (t+1)-th recursion, t represents the number of recursions of the continuous Fourier neural network weight training, T′_t represents the weight between the calling layer and the hidden layer at the t-th recursion, μ represents the learning rate of the weight between the calling layer and the hidden layer (value range 0 < μ < 1), ∂E/∂T′_t represents the partial derivative of the sample's absolute error E with respect to the weight between the calling layer and the hidden layer at the t-th recursion, α is the momentum variable (general value range 0.9 < α < 1), and ΔT′_t represents the weight correction offset between the calling layer and the hidden layer at the t-th recursion.
Thirdly, calculate the scaling variable of the hidden-layer Fourier activation function of the continuous Fourier neural network according to the following formula:

m_{t+1} = m_t − μ · ∂E/∂m_t + α · Δm_t

where m_{t+1} represents the scaling variable of the hidden-layer Fourier activation function at the (t+1)-th recursion, t represents the number of recursions of the continuous Fourier neural network weight training, m_t represents the scaling variable of the hidden-layer Fourier activation function at the t-th recursion, μ represents the learning rate of the scaling variable of the hidden-layer Fourier activation function (general value range 0 < μ < 1), ∂E/∂m_t represents the partial derivative of the sample's absolute error E with respect to the hidden-layer Fourier activation function scaling variable at the t-th recursion, α is the momentum variable (general value range 0.9 < α < 1), and Δm_t represents the correction offset of the scaling variable of the hidden-layer Fourier activation function at the t-th recursion.
Fourthly, calculate the displacement variable of the hidden-layer Fourier activation function of the continuous Fourier neural network according to the following formula:

n_{t+1} = n_t − μ · ∂E/∂n_t + α · Δn_t

where n_{t+1} represents the displacement variable of the hidden-layer Fourier activation function at the (t+1)-th recursion, t represents the number of recursions of the continuous Fourier neural network weight training, n_t represents the displacement variable of the hidden-layer Fourier activation function at the t-th recursion, μ represents the learning rate of the displacement variable of the hidden-layer Fourier activation function (general value range 0 < μ < 1), ∂E/∂n_t represents the partial derivative of the sample's absolute error E with respect to the hidden-layer Fourier activation function displacement variable at the t-th recursion, α is the momentum variable (general value range 0.9 < α < 1), and Δn_t represents the correction offset of the displacement variable of the hidden-layer Fourier activation function at the t-th recursion.
Fifthly, judge whether the maximum number of recursions has been reached; if not, return to the first step; if so, stop the recursion to obtain the optimal network weights, the optimal scaling and displacement variables of the Fourier activation function, and the optimal hidden layer representation.
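The four recursions above all share the same form: a gradient step scaled by the learning rate μ plus a momentum term scaled by α. A minimal Python sketch of that shared update, applied to a toy one-dimensional error, might look as follows (the quadratic error and the values of μ and α are illustrative assumptions, not the patent's training setup):

```python
def momentum_step(param, grad, prev_delta, mu=0.1, alpha=0.9):
    # Shared update form of steps one to four:
    #   param_{t+1} = param_t - mu * dE/dparam_t + alpha * delta_t
    # where delta_t is the previous correction offset (momentum term).
    delta = -mu * grad + alpha * prev_delta
    return param + delta, delta

# Toy one-dimensional recursion on the error E(w) = 0.5 * (w - 3)^2,
# whose gradient is dE/dw = w - 3.
w, delta = 0.0, 0.0
for t in range(200):
    grad = w - 3.0
    w, delta = momentum_step(w, grad, delta)
```

In the patent's setting the same step would be applied, per recursion, to each weight matrix and to the activation-function scaling and displacement variables, with the loop terminated at the maximum recursion count as step five describes.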
It should be understood that the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The edge calculation method according to the embodiment of the present application is described in detail above with reference to fig. 1, and the edge calculation platform and the edge calculation apparatus according to the embodiment of the present application are described in detail below with reference to fig. 2 and 3.
Fig. 2 shows a schematic block diagram of an edge computing platform 200 provided by an embodiment of the present application, where the edge computing platform 200 includes: a migration module 210 and a processing module 220.
The migration module 210 is configured to migrate services generated by an MEC application program instance. The processing module 220 is configured to: update the availability of the service; install flow rules, uninstall flow rules, or update parameters of existing flow rules according to requests of the MEC application program instance; install or uninstall tasks according to requests of the MEC application program instance; and allocate different weights to different edge calculation tasks in the task edge calculation table, so that the edge calculation tasks are processed according to their weights.
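As a rough illustration of how the module split of Fig. 2 could be organized in code, the skeleton below groups the migration and processing responsibilities into one class. All method names and internal stores are hypothetical, since the patent does not specify an API:

```python
class EdgeComputingPlatform:
    """Skeleton mirroring Fig. 2: a migration module plus a processing module
    that handles flow rules and tasks on request of an MEC application
    instance. Method names and the internal stores are illustrative."""

    def __init__(self):
        self.flow_rules = {}  # rule id -> rule parameters
        self.tasks = set()

    # migration module 210
    def migrate_service(self, service):
        return {"service": service, "state": "migrated"}

    # processing module 220: flow-rule handling
    def install_flow_rule(self, rule_id, params):
        self.flow_rules[rule_id] = dict(params)

    def update_flow_rule(self, rule_id, params):
        if rule_id in self.flow_rules:
            self.flow_rules[rule_id].update(params)

    def uninstall_flow_rule(self, rule_id):
        self.flow_rules.pop(rule_id, None)

    # processing module 220: task handling
    def install_task(self, task_id):
        self.tasks.add(task_id)

    def uninstall_task(self, task_id):
        self.tasks.discard(task_id)
```

The point of the sketch is only the separation of concerns: one entry point per request type named in the description, with flow-rule updates restricted to rules already installed.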
Optionally, the processing module 220 is configured to initialize a kernel of the at least one edge node. The edge computing platform 200 further comprises a sending module, configured to send the initialized kernel and training samples to at least one MEC application device, and an obtaining module, configured to obtain parameters fed back by the at least one MEC application device, where the parameters are obtained by the at least one MEC application device through neural network back propagation. The processing module 220 is configured to update the availability of the service according to the parameters and a preset learning rate.
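The feedback loop described here — send the kernel and samples out, collect back-propagated parameters, update availability by a preset learning rate — could be sketched as follows. The averaging aggregation rule is an assumption; the patent only states that availability is updated according to the parameters and the learning rate:

```python
def update_service_availability(availability, feedback_params, learning_rate=0.5):
    # Aggregate the parameters fed back by the MEC application devices
    # (here: a plain average, which is an assumption) and move the
    # availability estimate toward the aggregate by the preset learning rate.
    target = sum(feedback_params) / len(feedback_params)
    return availability + learning_rate * (target - availability)

# Three devices feed back parameters obtained via neural-network back propagation.
new_availability = update_service_availability(0.5, [0.9, 0.7, 0.8])
```

A learning rate below 1 keeps the update incremental, so one round of noisy device feedback cannot swing the availability estimate outright.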
Optionally, the processing module 220 is configured to: arranging the edge calculation processing standards of the at least one edge node in a descending order to generate a task edge calculation table; and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
Optionally, the maximum task amount Ω on which at least one edge node can perform task edge calculation is determined by the sum of the counter Deficit and the register Quantum. The processing module 220 is configured to: when the maximum task amount Ω on which a first edge node of the at least one edge node can perform task edge calculation is less than the total amount O_e of task data to be distributed, end the task edge calculation of the first edge node; or, when the maximum task amount Ω of the first edge node is greater than or equal to the total amount O_e of task data to be distributed, perform the task edge calculation of the first edge node.
It should be appreciated that the edge computing platform 200 herein is embodied in the form of a functional module. The term module herein may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an alternative example, it may be understood by those skilled in the art that the edge computing platform 200 may be embodied as an edge computing device in the foregoing embodiment, or functions of the edge computing device in the foregoing embodiment may be integrated in the edge computing platform 200, and the edge computing platform 200 may be configured to execute each procedure and/or step corresponding to the edge computing device in the foregoing method embodiment, and in order to avoid repetition, details are not described here again.
The edge computing platform 200 has functions of implementing corresponding steps executed by the edge computing device in the method; the above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. For example, the obtaining module may be a communication interface, such as a transceiver interface.
In an embodiment of the present application, the edge computing platform 200 in fig. 2 may also be a chip or a chip system, such as: system on chip (SoC).
Fig. 3 shows a schematic block diagram of an edge computing device 300 provided by an embodiment of the present application. The device 300 includes a processor 310, a transceiver 320, and a memory 330, which communicate with one another through an internal connection path. The memory 330 is used to store instructions, and the processor 310 is used to execute the instructions stored in the memory 330 to control the transceiver 320 to transmit and/or receive signals.
It should be understood that the device 300 may be embodied as the edge computing device in the above embodiment, or the functions of the edge computing device in the above embodiment may be integrated in the device 300, and the device 300 may be configured to perform each step and/or flow corresponding to the edge computing device in the above method embodiments. Optionally, the memory 330 may include both read-only memory and random access memory, and provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information. The processor 310 may be configured to execute the instructions stored in the memory, and when the processor executes the instructions, it may perform the steps and/or flows corresponding to the edge computing device in the above method embodiments.
It should be understood that, in the embodiment of the present application, the processor 310 may be a Central Processing Unit (CPU), and the processor may also be other general processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The steps of the method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EPROM, or registers. The storage medium is located in the memory, and the processor executes the instructions in the memory and, in combination with its hardware, performs the steps of the above method. To avoid repetition, details are not described here again.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An edge calculation method, comprising:
migrating services generated by a multi-access edge computing MEC application program instance;
updating availability of the service;
installing a flow rule, uninstalling the flow rule or updating the parameters of the existing flow rule according to the request of the MEC application program instance;
installing a task or uninstalling a task according to the request of the MEC application program instance;
and distributing different weights for different edge calculation tasks in a task edge calculation table so as to process the edge calculation tasks according to the weights.
2. The method of claim 1, wherein the updating the availability of the service comprises:
initializing a kernel of at least one edge node;
sending the initialization kernel and training samples to at least one MEC application device;
obtaining parameters fed back by the at least one MEC application device, wherein the parameters are obtained by back propagation of the at least one MEC application device through a neural network;
and updating the availability of the service according to the parameters and a preset learning rate.
3. The method according to claim 2, wherein the assigning different weights to different edge calculation tasks in a task edge calculation table to process the edge calculation tasks according to the weights comprises:
arranging the edge calculation processing standards of the at least one edge node in a descending order to generate a task edge calculation table;
and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
4. The method of claim 3, wherein the maximum number of tasks Ω that the at least one edge node can perform task edge computation is determined by the sum of a counter and a register;
the sequentially accessing the at least one edge node according to the sequence of the task edge calculation table comprises:
when the maximum task amount Ω on which a first edge node of the at least one edge node can perform task edge calculation is less than the total amount O_e of task data to be distributed, ending the task edge calculation of the first edge node; or,
when the maximum task amount Ω on which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, performing the task edge calculation of the first edge node.
5. An edge computing platform, comprising:
the migration module is used for migrating the service generated by the multi-access edge computing MEC application program instance;
a processing module for updating the availability of the service; installing a flow rule, uninstalling the flow rule or updating the parameters of the existing flow rule according to the request of the MEC application program instance; installing a task or uninstalling a task according to the request of the MEC application program instance; and distributing different weights for different edge calculation tasks in a task edge calculation table so as to process the edge calculation tasks according to the weights.
6. The edge computing platform of claim 5, wherein the processing module is specifically configured to:
initializing a kernel of at least one edge node;
sending the initialization kernel and training samples to at least one MEC application device;
obtaining parameters fed back by the at least one MEC application device, wherein the parameters are obtained by back propagation of the at least one MEC application device through a neural network;
and updating the availability of the service according to the parameters and a preset learning rate.
7. The edge computing platform of claim 6, wherein the processing module is specifically configured to:
arranging the edge calculation processing standards of the at least one edge node in a descending order to generate a task edge calculation table;
and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
8. The edge computing platform of claim 7, wherein a maximum task amount Ω that the at least one edge node can perform task edge computation is determined by a sum of a counter and a register;
the processing module is specifically configured to:
when the maximum task amount Ω on which a first edge node of the at least one edge node can perform task edge calculation is less than the total amount O_e of task data to be distributed, end the task edge calculation of the first edge node; or,
when the maximum task amount Ω on which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, perform the task edge calculation of the first edge node.
9. An edge computing device, comprising: a processor coupled with a memory for storing a computer program that, when invoked by the processor, causes the apparatus to perform the method of any of claims 1 to 4.
10. A computer-readable storage medium for storing a computer program comprising instructions for implementing the method of any one of claims 1 to 4.
CN202110224975.2A 2021-03-01 2021-03-01 Edge computing method and edge computing platform Active CN112948114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110224975.2A CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110224975.2A CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Publications (2)

Publication Number Publication Date
CN112948114A true CN112948114A (en) 2021-06-11
CN112948114B CN112948114B (en) 2023-11-10

Family

ID=76246911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224975.2A Active CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Country Status (1)

Country Link
CN (1) CN112948114B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453255A (en) * 2021-06-25 2021-09-28 国网湖南省电力有限公司 Method and device for balancing and optimizing service data transmission load of edge device container

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067583A (en) * 2018-08-08 2018-12-21 深圳先进技术研究院 A kind of resource prediction method and system based on edge calculations
US20200145337A1 (en) * 2019-12-20 2020-05-07 Brian Andrew Keating Automated platform resource management in edge computing environments
CN111132235A (en) * 2019-12-27 2020-05-08 东北大学秦皇岛分校 Mobile offload migration algorithm based on improved HRRN algorithm and multi-attribute decision
CN111615128A (en) * 2020-05-25 2020-09-01 浙江九州云信息科技有限公司 Multi-access edge computing method, platform and system
CN111953759A (en) * 2020-08-04 2020-11-17 国网河南省电力公司信息通信公司 Collaborative computing task unloading and transferring method and device based on reinforcement learning
US20200374740A1 (en) * 2019-05-22 2020-11-26 Affirmed Networks, Inc. Systems and methods for distribution of application logic in digital networks
CN112134916A (en) * 2020-07-21 2020-12-25 南京邮电大学 Cloud edge collaborative computing migration method based on deep reinforcement learning
US20210011765A1 (en) * 2020-09-22 2021-01-14 Kshitij Arun Doshi Adaptive limited-duration edge resource management

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067583A (en) * 2018-08-08 2018-12-21 深圳先进技术研究院 A kind of resource prediction method and system based on edge calculations
US20200374740A1 (en) * 2019-05-22 2020-11-26 Affirmed Networks, Inc. Systems and methods for distribution of application logic in digital networks
US20200145337A1 (en) * 2019-12-20 2020-05-07 Brian Andrew Keating Automated platform resource management in edge computing environments
CN111132235A (en) * 2019-12-27 2020-05-08 东北大学秦皇岛分校 Mobile offload migration algorithm based on improved HRRN algorithm and multi-attribute decision
CN111615128A (en) * 2020-05-25 2020-09-01 浙江九州云信息科技有限公司 Multi-access edge computing method, platform and system
CN112134916A (en) * 2020-07-21 2020-12-25 南京邮电大学 Cloud edge collaborative computing migration method based on deep reinforcement learning
CN111953759A (en) * 2020-08-04 2020-11-17 国网河南省电力公司信息通信公司 Collaborative computing task unloading and transferring method and device based on reinforcement learning
US20210011765A1 (en) * 2020-09-22 2021-01-14 Kshitij Arun Doshi Adaptive limited-duration edge resource management

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JINGYA ZHOU 等: "Task Offloading for Social Sensing Applications in Mobile Edge Computing", 《2019 SEVENTH INTERNATIONAL CONFERENCE ON ADVANCED CLOUD AND BIG DATA (CBD)》, pages 333 - 338 *
乐光学 et al.: "Multi-constraint trusted collaborative task migration strategy for edge computing", Telecommunications Science, no. 11, pages 36-50 *
唐思宇: "Research and verification of key technologies of edge control systems for Internet of Things applications", China Master's Theses Full-text Database, Information Science and Technology, no. 2, pages 136-1038 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453255A (en) * 2021-06-25 2021-09-28 国网湖南省电力有限公司 Method and device for balancing and optimizing service data transmission load of edge device container
CN113453255B (en) * 2021-06-25 2023-03-24 国网湖南省电力有限公司 Method and device for balancing and optimizing service data transmission load of edge device container

Also Published As

Publication number Publication date
CN112948114B (en) 2023-11-10

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant