CN112948114B - Edge computing method and edge computing platform

Edge computing method and edge computing platform

Info

Publication number
CN112948114B
CN112948114B CN202110224975.2A
Authority
CN
China
Prior art keywords
edge
task
edge computing
calculation
mec application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110224975.2A
Other languages
Chinese (zh)
Other versions
CN112948114A (en)
Inventor
白龙
黄颢
尹超
马跃睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing United Time And Space Information Technology Co ltd
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
Original Assignee
Beijing United Time And Space Information Technology Co ltd
Liantong Shike Beijing Information Technology Co ltd
China United Network Communications Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing United Time And Space Information Technology Co ltd, Liantong Shike Beijing Information Technology Co ltd, and China United Network Communications Group Co Ltd
Priority to CN202110224975.2A
Publication of CN112948114A
Application granted
Publication of CN112948114B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Stored Programmes (AREA)

Abstract

The application provides an edge computing method and an edge computing platform, which comprehensively consider migration cost, server energy consumption, and other factors by formulating a multi-attribute-decision-based MEC migration strategy. The method comprises the following steps: migrating services generated by an MEC application program instance; updating the availability of the services; installing flow rules, uninstalling flow rules, or updating the parameters of existing flow rules according to requests from the MEC application program instance; installing or uninstalling tasks according to requests from the MEC application program instance; and assigning different weights to different edge computing tasks in a task edge computing table so that the edge computing tasks are processed according to the weights.

Description

Edge computing method and edge computing platform
Technical Field
The present application relates to the field of communications, and more particularly, to an edge computing method and an edge computing platform.
Background
Currently, relying on remote cloud computing alone is not sufficient to meet the millisecond-level delay requirements of 5G computing and communication. Furthermore, data exchange between user devices and the remote cloud will saturate the backhaul link and degrade backhaul network quality, which makes it critical to use the edge cloud as a complement to cloud computing. Edge clouds push traffic, computing, and network functions towards the network edge, which also coincides with a key feature of next-generation networks: information is increasingly generated and consumed locally, a trend driven by the explosive growth of the internet of things (IoT), social networks, and content delivery applications.
At present, edge-layer collaborative storage mechanisms ignore the influence of model data placement differences on edge-layer task migration, which can lead to problems such as high task migration processing costs and unbalanced load across edge servers.
Disclosure of Invention
The application provides an edge computing method and an edge computing platform, which comprehensively consider migration cost, server energy consumption, and other factors by formulating a multi-attribute-decision-based multi-access edge computing (MEC) migration strategy.
In a first aspect, an edge computing method is provided, including: migrating services generated by an MEC application program instance; updating the availability of the services; installing flow rules, uninstalling flow rules, or updating the parameters of existing flow rules according to requests from the MEC application program instance; installing or uninstalling tasks according to requests from the MEC application program instance; and assigning different weights to different edge computing tasks in a task edge computing table so that the edge computing tasks are processed according to the weights.
According to the embodiments of the application, multiple factors such as cost, server computing capacity, user-to-server distance, and server energy consumption are considered together: the computation migration of the MEC layer is abstracted into a multi-attribute decision model, and an MEC computation migration strategy based on multi-attribute decision is formulated, thereby reducing cost and improving load balancing.
With reference to the first aspect, in certain implementations of the first aspect, updating the availability of the services includes: initializing a kernel of at least one edge node; transmitting the initialized kernel and training samples to at least one MEC application device; acquiring parameters fed back by the at least one MEC application device, where the parameters are obtained by the at least one MEC application device through back propagation of the neural network; and updating the availability of the services according to the parameters and a preset learning rate.
With reference to the first aspect, in some implementations of the first aspect, assigning different weights to different edge computing tasks in the task edge computing table to process the edge computing tasks according to the weights includes: arranging edge computing processing standards of the at least one edge node in a descending order to generate a task edge computing table; the at least one edge node is accessed sequentially according to the order of the task edge computation table.
With reference to the first aspect, in certain implementations of the first aspect, the maximum task amount Ω for which at least one edge node can perform task edge calculation is determined by the sum of a counter (Deficit) and a register (Quantum); sequentially accessing the at least one edge node in the order of the task edge computation table includes: when the maximum task amount Ω for which a first edge node of the at least one edge node can perform task edge calculation is smaller than the total amount O_e of task data to be distributed, ending the task edge calculation of the first edge node; or, when the maximum task amount Ω for which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, executing the task edge calculation of the first edge node.
In a second aspect, an edge computing platform is provided, including: a migration module, configured to migrate services generated by the MEC application program instance; and a processing module, configured to update the availability of the services; to install flow rules, uninstall flow rules, or update the parameters of existing flow rules according to requests from the MEC application program instance; to install or uninstall tasks according to requests from the MEC application program instance; and to assign different weights to different edge computing tasks in a task edge computing table so that the edge computing tasks are processed according to the weights.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is specifically configured to: initialize a kernel of the at least one edge node; transmit the initialized kernel and training samples to at least one MEC application device; acquire parameters fed back by the at least one MEC application device, where the parameters are obtained by the at least one MEC application device through back propagation of the neural network; and update the availability of the services according to the parameters and a preset learning rate.
With reference to the second aspect, in certain implementations of the second aspect, the processing module is specifically configured to: arranging edge computing processing standards of the at least one edge node in a descending order to generate a task edge computing table; the at least one edge node is accessed sequentially according to the order of the task edge computation table.
With reference to the second aspect, in certain implementations of the second aspect, the maximum task amount Ω for which at least one edge node can perform task edge calculation is determined by the sum of a counter (Deficit) and a register (Quantum); the processing module is specifically configured to: when the maximum task amount Ω for which a first edge node of the at least one edge node can perform task edge calculation is smaller than the total amount O_e of task data to be distributed, end the task edge calculation of the first edge node; or, when the maximum task amount Ω for which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, execute the task edge calculation of the first edge node.
In a third aspect, there is provided an edge computing device comprising a processor coupled to a memory operable to execute instructions in the memory to implement a method as in any one of the possible implementations of the first aspect. Optionally, the apparatus further comprises a memory. Optionally, the apparatus further comprises a communication interface, the processor being coupled to the communication interface.
In a fourth aspect, there is provided a processor comprising: input circuit, output circuit and processing circuit. The processing circuitry is configured to receive signals via the input circuitry and to transmit signals via the output circuitry such that the processor performs the method of any one of the possible implementations of the first aspect described above.
In a specific implementation, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, or the like. The input signal received by the input circuit may, for example and without limitation, be received and fed in by a receiver; the signal output by the output circuit may, for example and without limitation, be output to and then transmitted by a transmitter; and the input circuit and the output circuit may be the same circuit, which serves as the input circuit and the output circuit at different times. The embodiments of the application do not limit the specific implementations of the processor and the various circuits.
In a fifth aspect, a processing device is provided that includes a processor and a memory. The processor is configured to read instructions stored in the memory and to receive signals via the receiver and to transmit signals via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, the processor is one or more and the memory is one or more.
Alternatively, the memory may be integrated with the processor or the memory may be separate from the processor.
In a specific implementation process, the memory may be a non-transient (non-transitory) memory, for example, a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips.
It should be appreciated that, in the related data interaction process, for example, transmitting the indication information may be a process of outputting the indication information from the processor, and receiving the capability information may be a process of the processor receiving the input capability information. Specifically, the data output by the processor may be output to the transmitter, and the input data received by the processor may come from the receiver. The transmitter and the receiver may be collectively referred to as a transceiver.
The processing means in the fifth aspect may be a chip, and the processor may be implemented by hardware or by software, and when implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor, implemented by reading software code stored in a memory, which may be integrated in the processor, or may reside outside the processor, and exist separately.
In a sixth aspect, there is provided a computer program product comprising: a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method of any one of the possible implementations of the first aspect.
In a seventh aspect, a computer readable storage medium is provided, which stores a computer program (which may also be referred to as code, or instructions) which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
Drawings
FIG. 1 is a schematic flow chart of a 5G edge computing method provided by the application;
FIG. 2 is a schematic block diagram of an edge computing platform provided by an embodiment of the present application;
fig. 3 is a schematic block diagram of an edge computing device according to an embodiment of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; for a person skilled in the art, other related drawings may be obtained from these drawings without inventive effort.
In future 5G networks, ubiquitous cloud computing will be supported. The mobile internet and cloud computing technology are merging to form mobile cloud computing. A large number of new applications and services are emerging, such as real-time online gaming, virtual reality, and ultra-high-definition task streams, all of which require unprecedented access speeds and low latency. The last decade witnessed the advent of several next-generation internet landscapes, including the IoT, the tactile internet (with millisecond delay), and social networks. Cisco predicted that by 2020 the internet would add roughly 50 billion IoT devices (e.g., sensors and wearable devices), most of which are used for computing, communication, and storage and must rely on remote clouds or edge clouds to increase their processing power. It is now widely recognized that relying on remote cloud computing alone is not sufficient to meet the millisecond-level delay requirements of 5G computing and communication. Furthermore, data exchange between user devices and the remote cloud will saturate the backhaul link and degrade backhaul network quality, which makes it critical to use the edge cloud as a complement to cloud computing. The edge cloud pushes traffic, computing, and network functions towards the network edge, which also coincides with a key feature of next-generation networks: information is increasingly generated and consumed locally, a trend driven by the explosive growth of IoT, social networks, and content delivery applications.
As an application of MEC technology on the near-user side, frequent deployment on the edge side of the 5G data network is required, yet existing cloud computing platforms have large system footprints, high construction complexity, and demanding deployment-environment requirements, and cannot meet such large-scale deployment needs. In addition, most existing cloud computing platforms follow the deployment standards of the 4G era and cannot migrate 5G application services. Real edge computing environments are heterogeneous, containing both strong edge servers, such as small data centers, and weak edge servers with limited computing or storage capabilities. When a weak edge server is overloaded or cannot meet an application's data requirements, real-time data processing tasks cannot be completed before their deadlines, resulting in task failure. In addition, existing edge-layer collaborative storage mechanisms ignore the influence of model data placement differences on edge-layer task migration, which can cause problems such as high task migration processing costs and unbalanced edge server load.
In view of this, the embodiments of the application construct a monitoring center and a task analysis and processing architecture under edge computing, formulate a concrete task analysis scheme, provide an effective atomic filtering algorithm, comprehensively consider multiple factors such as cost, server computing capacity, user-to-server distance, and server energy consumption, abstract the computation migration of the MEC layer into a multi-attribute decision model, and formulate an MEC computation migration strategy based on multi-attribute decision.
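As a minimal illustration of how such a multi-attribute decision might be scored, the following Python sketch ranks candidate servers by a weighted combination of the four factors named above. The attribute weights, the min-max normalization, and all names here are illustrative assumptions, not values prescribed by the application.

```python
# Hypothetical multi-attribute scoring for MEC computation migration.
# Weights, normalization, and candidate values are illustrative only.
from dataclasses import dataclass

@dataclass
class ServerCandidate:
    name: str
    cost: float      # migration cost (lower is better)
    capacity: float  # available computing capacity (higher is better)
    distance: float  # user-to-server distance (lower is better)
    energy: float    # server energy consumption (lower is better)

def normalize(values):
    """Min-max normalize raw attribute values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def rank_candidates(candidates, weights=(0.3, 0.3, 0.2, 0.2)):
    """Score candidates: the capacity attribute adds, cost-type attributes subtract."""
    cost_n = normalize([c.cost for c in candidates])
    cap_n = normalize([c.capacity for c in candidates])
    dist_n = normalize([c.distance for c in candidates])
    eng_n = normalize([c.energy for c in candidates])
    w_cost, w_cap, w_dist, w_eng = weights
    scored = [(w_cap * cap_n[i] - w_cost * cost_n[i]
               - w_dist * dist_n[i] - w_eng * eng_n[i], c.name)
              for i, c in enumerate(candidates)]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    servers = [
        ServerCandidate("edge-a", cost=2.0, capacity=8.0, distance=1.0, energy=3.0),
        ServerCandidate("edge-b", cost=1.0, capacity=4.0, distance=2.0, energy=2.0),
    ]
    print(rank_candidates(servers))  # best migration target first
```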
Before introducing the edge computing method provided by the embodiment of the application, the following description is made:
first, in the embodiments shown below, terms and English abbreviations, such as multi-access edge computing and availability of services, are given as examples for convenience of description and should not be construed as limiting the present application in any way. The present application does not exclude the possibility that other terms performing the same or similar functions may be defined in existing or future protocols.
Second, the first, second and various numerical numbers in the embodiments shown below are merely for convenience of description and are not intended to limit the scope of the embodiments of the present application.
Third, "at least one" means one or more, and "a plurality" means two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a alone, a and B together, and B alone, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, and c may represent: a, b, or c, or a and b, or a and c, or b and c, or a, b and c, wherein a, b and c can be single or multiple.
The steps and/or flows of the edge computing method in the embodiments of the present application may be performed by an edge computing device, which may be, for example, a server or the like having a function of performing the steps and/or flows of the edge computing method, and the embodiments of the present application are not limited herein.
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
Fig. 1 is a schematic flow chart of a 5G edge computing method 100 provided by the present application. As shown in fig. 1, the method 100 includes the steps of:
s101, migrating services generated by MEC application program instances.
S102, updating the availability of the service.
S103, installing the flow rule, uninstalling the flow rule or updating the parameters of the existing flow rule according to the request of the MEC application program instance.
S104, installing or unloading tasks according to the request of the MEC application program instance.
S105, different weights are allocated to different edge calculation tasks in the task edge calculation table, so that the edge calculation tasks are processed according to the weights.
It should be understood that the weights (or priorities) are classified according to their influence on completing all edge computing tasks, so that the system delay is reduced as far as possible, the stability of the edge node task edge computing optimization system is ensured to the greatest extent, and the 5G data multi-service edge computing process is completed.
The embodiments of the application provide, through interaction with the MEC application program and the data plane, the basic functions required for using the MEC service, such as service registration, flow control, and task control; they introduce edge nodes, construct an edge node edge computing optimization model, establish an edge node task regulation center, define and provide an edge node edge computing optimization scheme, and provide a task edge computing optimization algorithm based on the edge node edge calculator. Multiple factors such as cost, server computing capacity, user-to-server distance, and server energy consumption are considered together.
As an alternative embodiment, S101 includes: in response to receiving a service migration message sent by an MEC application program instance, migrating a new service, and updating the availability status of the migrated new service; determining a calculation level of a task in a vertical direction, transferring the task to an optimal vertical level, and then considering calculation transfer in a horizontal direction; after the service migration unit completes migrating the new service, other relevant MEC application program instances are notified.
As an alternative embodiment, S102 includes: the edge server initializes the kernel, and can determine the size and the number of the update services according to the training samples and the depth of the neural network; the edge server sends the initialized kernel and training set to each cooperative MEC application device, and for a single MEC application device, the edge server will send all update services; the edge server will divide the training set and send portions thereof to each MEC application, the size of the division being dependent on the training set and the number of MEC application devices.
After receiving the training set and the update services, the MEC application device performs the rest of the convolutional neural network (CNN) computation, including convolution, rectified linear unit (ReLU) activation, pooling, fully-connected layers, and the corresponding backward propagation. The MEC application device feeds the parameters obtained by back propagation back to the edge server, which updates the update services according to the feedback from the intelligent MEC application devices and the preset learning rate; the edge server then sends the updated update services to the intelligent MEC application devices again.
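A minimal sketch of the server-side loop described in this embodiment is given below, assuming NumPy, gradient averaging across devices, and a stubbed device-side CNN pass; the function names and the averaging rule are assumptions for illustration, not the prescribed implementation.

```python
# Sketch of the edge server's kernel-update round: send the kernel to
# each cooperating MEC application device, collect back-propagated
# gradients, and update with the preset learning rate.
import numpy as np

def device_backprop(kernel, shard):
    """Stub for an MEC application device: run the CNN (convolution,
    ReLU, pooling, fully-connected layers) and its back propagation on
    the device's share of the training set, returning the kernel
    gradient. Faked here with small random values of matching shape."""
    return np.random.randn(*kernel.shape) * 0.01

def server_round(kernel, shards, learning_rate=0.05):
    """One round: distribute the kernel, aggregate device feedback,
    and apply the preset learning rate."""
    grads = [device_backprop(kernel, shard) for shard in shards]
    return kernel - learning_rate * np.mean(grads, axis=0)

if __name__ == "__main__":
    kernel = np.random.randn(3, 3)   # initialized kernel
    shards = list(range(4))          # training-set partitions, one per device
    for _ in range(10):
        kernel = server_round(kernel, shards)
```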
As an alternative embodiment, S103 includes: responding to a flow rule installation, uninstallation or update request sent by an MEC application program instance, and installing, uninstalling or updating the flow rule; forwarding the traffic rule installation, uninstallation or update request to a data plane; and sending a response generated after the data plane operates according to the installation, uninstallation or update request back to the MEC application program instance.
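A hypothetical sketch of this control path follows: the platform applies the MEC application program instance's flow-rule request locally, forwards it to the data plane, and relays the data plane's response. The rule format and the data-plane client are stand-ins, not interfaces defined by the application.

```python
# Illustrative flow-rule dispatcher for S103; the rule format and the
# data-plane client are assumptions for the sketch.
flow_rules = {}  # rule_id -> rule parameters

class StubDataPlane:
    """Stand-in for the data plane that would enforce the rules."""
    def apply(self, action, rule_id, rule):
        return {"applied": action, "rule_id": rule_id, "rule": rule}

def handle_flow_rule_request(action, rule_id, params, data_plane):
    """Install, uninstall, or update a flow rule, forward the request
    to the data plane, and return its response to the caller."""
    if action == "install":
        flow_rules[rule_id] = dict(params)
    elif action == "uninstall":
        flow_rules.pop(rule_id, None)
    elif action == "update":
        flow_rules.setdefault(rule_id, {}).update(params)
    else:
        return {"status": "error", "reason": "unknown action"}
    response = data_plane.apply(action, rule_id, flow_rules.get(rule_id))
    return {"status": "ok", "data_plane_response": response}

if __name__ == "__main__":
    dp = StubDataPlane()
    print(handle_flow_rule_request("install", "r1", {"dst": "10.0.0.1"}, dp))
    print(handle_flow_rule_request("update", "r1", {"priority": 5}, dp))
```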
As an alternative embodiment, S104 includes: installing a task in a DNS server or proxy in response to receiving a task installation request sent by the MEC application program instance, or deleting the task from the DNS server or proxy in response to receiving a task uninstallation request sent by the MEC application program instance; and generating a response according to the task installation or uninstallation result and sending it to the MEC application program instance. When an edge user generates an application task, a task offloading request is first sent to the edge cloud service center; the request information includes the task data size and the maximum tolerable delay of the task.
When the edge cloud service center receives the request information, it analyzes the entire offloading environment, including parameters such as the requesting user's hardware performance, power, mobility, wireless network environment, bandwidth resources, and edge server state; through this analysis and optimal configuration, the offloading control center delivers the offloading decision information, including the task partitioning and execution mode, the offloading transmission power, and the allocated radio resources, to the user.
When the offloaded task is completed in the edge cloud, the data result is returned to the user through the downlink.
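The request and decision exchanged in this flow can be pictured with the following sketch; all field names and the toy decision policy are assumptions, since the application only states that the request carries the task data size and the maximum tolerable delay, and that the decision covers task division, transmission power, and radio resources.

```python
# Illustrative offloading handshake for S104; field names and the
# decision policy are assumptions.
from dataclasses import dataclass

@dataclass
class OffloadRequest:
    task_id: str
    data_size_bits: float  # task data size
    max_delay_s: float     # maximum tolerable delay

@dataclass
class OffloadDecision:
    local_fraction: float  # share of the task kept on the device
    tx_power_dbm: float    # offloading transmission power
    bandwidth_hz: float    # allocated radio resources

def decide(req: OffloadRequest, uplink_rate_bps: float) -> OffloadDecision:
    """Toy policy: offload only as much data as the uplink can carry
    within the deadline; power and bandwidth are fixed placeholders."""
    max_offload_bits = uplink_rate_bps * req.max_delay_s
    offload_fraction = min(1.0, max_offload_bits / req.data_size_bits)
    return OffloadDecision(local_fraction=1.0 - offload_fraction,
                           tx_power_dbm=20.0, bandwidth_hz=1e6)

if __name__ == "__main__":
    req = OffloadRequest("t1", data_size_bits=8e6, max_delay_s=0.05)
    print(decide(req, uplink_rate_bps=100e6))
```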
In the above S103 and S104, the basic functions required for using the MEC service, such as service registration, flow control, and task control, are provided through interaction with the MEC application program and the data plane, which helps optimize overall edge computing network performance.
As an alternative embodiment, S105 includes:
(1) Allocating the task processing workload: the task atoms filtered by the atomic filtering algorithm within time T are denoted O_e; O_e is thus the total amount of task data waiting to be allocated at time T.
(2) Generating the task edge computing table: the values S_E are arranged in descending order to generate the task edge computing table; the table is updated after the task edge calculation of a single edge node is completed, and edge computing tasks with low weight values that are close to their processing deadlines are processed preferentially. S_E is the processing criterion by which an edge node performs task edge calculation.
(3) Performing task edge calculation ordering using the least squares method: the edge computing tasks to be processed access the edge nodes sequentially in the order of the task edge computing table. For a single edge node, the sum of its counter (Deficit) and register (Quantum) is the maximum task amount Ω for which that edge node can perform task edge calculation. When Ω < O_e, the current task edge calculation ends. When Ω ≥ O_e, the node's counter at the next moment becomes Ω − O_e, and the task edge calculation of the current node is completed. Task edge calculation ordering continues until the edge nodes in the task edge computing table have been accessed in sequence.
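This counter/register mechanism behaves like deficit round robin. The sketch below walks the task edge computing table in order and applies the Ω = Deficit + Quantum rule; the node values are illustrative, and the interpretation that a node with Ω < O_e is simply skipped for the round is an assumption drawn from the step above.

```python
# Deficit-round-robin-style walk over the task edge computing table.
def assign_tasks(nodes, o_e):
    """nodes: dicts with 'name', 'deficit', 'quantum', already sorted
    by the task edge computing table (descending S_E).
    o_e: total amount of task data waiting to be allocated."""
    for node in nodes:
        omega = node["deficit"] + node["quantum"]  # max workload this visit
        if omega < o_e:
            continue  # node cannot take the batch; its calculation ends
        node["deficit"] = omega - o_e  # next-moment counter is the surplus
        return node["name"]            # this node performs the calculation
    return None  # no node in the table could absorb the batch

if __name__ == "__main__":
    table = [{"name": "n1", "deficit": 2, "quantum": 5},
             {"name": "n2", "deficit": 0, "quantum": 10}]
    print(assign_tasks(table, o_e=8))  # -> "n2"
```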
According to the embodiment of the application, edge nodes are introduced, an edge node edge calculation optimization model is constructed, an edge node task regulation center is established, an edge node edge calculation optimization scheme is defined and provided, and a task edge calculation optimization algorithm based on an edge node edge calculator is provided.
As an alternative embodiment, the least square method specifically includes the following steps:
the first step, calculating weights between hidden layers and characterization layers of the continuous Fourier neural network according to the following formula:
wherein T is t+1 Representing weight between hidden layer and characterization layer at t+1st recursion, T representing number of continuous Fourier neural network weight training recursion, T t "represents weight between hidden layer and characterization layer at t-th recursion, μ represents learning rate of weight between hidden layer and characterization layer, and the range of weight is 0<μ<1,Representing partial derivative operation of absolute error of sample between hidden layer and characterization layer in t-th recursion, alpha is momentum variable, and the general value range is 0.9<α<1,ΔT t "represents the weight offset between the hidden layer and the token layer at the t-th recursion.
In the second step, the weights between the calling layer and the hidden layer of the continuous Fourier neural network are calculated according to the following formula:

T′_{t+1} = T′_t − μ·(∂E/∂T′_t) + α·ΔT′_t

where T′_{t+1} represents the weight between the calling layer and the hidden layer at the (t+1)-th recursion, t represents the number of weight-training recursions of the continuous Fourier neural network, T′_t represents the weight between the calling layer and the hidden layer at the t-th recursion, μ represents the learning rate of the weight between the calling layer and the hidden layer, with typical range 0 < μ < 1, ∂E/∂T′_t represents the partial derivative of the absolute sample error with respect to the weight between the calling layer and the hidden layer at the t-th recursion, α is the momentum variable, with typical range 0.9 < α < 1, and ΔT′_t represents the weight correction between the calling layer and the hidden layer at the t-th recursion.
In the third step, the scaling variable of the hidden-layer Fourier activation function of the continuous Fourier neural network is calculated according to the following formula:

m_{t+1} = m_t − μ·(∂E/∂m_t) + α·Δm_t

where m_{t+1} represents the scaling variable of the hidden-layer Fourier activation function at the (t+1)-th recursion, t represents the number of weight-training recursions of the continuous Fourier neural network, m_t represents the scaling variable of the hidden-layer Fourier activation function at the t-th recursion, μ represents the learning rate of the scaling variable, with typical range 0 < μ < 1, ∂E/∂m_t represents the partial derivative of the absolute sample error with respect to the scaling variable at the t-th recursion, α is the momentum variable, with typical range 0.9 < α < 1, and Δm_t represents the correction of the hidden-layer Fourier activation function scaling variable at the t-th recursion.
In the fourth step, the displacement variable of the hidden-layer Fourier activation function of the continuous Fourier neural network is calculated according to the following formula:

n_{t+1} = n_t − μ·(∂E/∂n_t) + α·Δn_t

where n_{t+1} represents the displacement variable of the hidden-layer Fourier activation function at the (t+1)-th recursion, t represents the number of weight-training recursions of the continuous Fourier neural network, n_t represents the displacement variable of the hidden-layer Fourier activation function at the t-th recursion, μ represents the learning rate of the displacement variable, with typical range 0 < μ < 1, ∂E/∂n_t represents the partial derivative of the absolute sample error with respect to the displacement variable at the t-th recursion, α is the momentum variable, with typical range 0.9 < α < 1, and Δn_t represents the correction of the hidden-layer Fourier activation function displacement variable at the t-th recursion.
In the fifth step, judge whether the maximum number of recursions has been reached; if not, return to the first step; if so, stop the recursion to obtain the optimal network weights, the optimal scaling and displacement variables of the Fourier activation function, and the optimal hidden-layer representation.
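All four steps share one recursive form: a gradient step scaled by the learning rate μ plus a momentum term scaled by α. The sketch below applies that shared rule; the gradient is a placeholder, since the application does not specify how the absolute sample error is evaluated, and the loop bound stands in for the maximum recursion count.

```python
# Shared momentum update used in steps one through four:
# param_{t+1} = param_t - mu * grad + alpha * prev_delta
import numpy as np

def momentum_update(param, grad, prev_delta, mu=0.1, alpha=0.95):
    """One recursion of the update applied alike to the layer weights
    (T, T') and to the Fourier activation function's scaling and
    displacement variables (m, n)."""
    delta = -mu * grad + alpha * prev_delta
    return param + delta, delta

if __name__ == "__main__":
    T = np.random.randn(4, 4)   # hidden-to-characterization weights
    dT = np.zeros_like(T)       # previous correction, Delta T
    for _ in range(100):        # until the maximum recursion count
        grad = np.random.randn(*T.shape) * 0.01  # stand-in for dE/dT
        T, dT = momentum_update(T, grad, dT)
```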
It should be understood that the sequence numbers of the above processes do not mean the order of execution, and the execution order of the processes should be determined by the functions and internal logic of the processes, and should not be construed as limiting the implementation process of the embodiments of the present application.
The edge computing method according to the embodiment of the present application is described in detail above with reference to fig. 1, and the edge computing platform and the edge computing device according to the embodiment of the present application will be described in detail below with reference to fig. 2 and 3.
FIG. 2 shows a schematic block diagram of an edge computing platform 200 provided by an embodiment of the present application, the edge computing platform 200 comprising: a migration module 210 and a processing module 220.
The migration module 210 is configured to migrate services generated by the MEC application program instance. The processing module 220 is configured to: update the availability of the services; install flow rules, uninstall flow rules, or update the parameters of existing flow rules according to requests from the MEC application program instance; install or uninstall tasks according to requests from the MEC application program instance; and assign different weights to different edge computing tasks in the task edge computing table so that the edge computing tasks are processed according to the weights.
Optionally, the processing module 220 is configured to initialize a kernel of the at least one edge node; the edge computing platform 200 further includes a transmitting module, configured to transmit the initialized kernel and training samples to at least one MEC application device; the edge computing platform 200 further includes an acquiring module, configured to acquire parameters fed back by the at least one MEC application device, where the parameters are obtained by the at least one MEC application device through back propagation of the neural network; and the processing module 220 is configured to update the availability of the services according to the parameters and the preset learning rate.
Optionally, the processing module 220 is configured to: arranging edge computing processing standards of the at least one edge node in a descending order to generate a task edge computing table; the at least one edge node is accessed sequentially according to the order of the task edge computation table.
Optionally, the maximum task amount Ω for which the at least one edge node can perform task edge calculation is determined by the sum of a counter (Deficit) and a register (Quantum); the processing module 220 is configured to: when the maximum task amount Ω for which a first edge node of the at least one edge node can perform task edge calculation is smaller than the total amount O_e of task data to be distributed, end the task edge calculation of the first edge node; or, when the maximum task amount Ω for which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, execute the task edge calculation of the first edge node.
It should be appreciated that the edge computing platform 200 herein is embodied in the form of functional modules. The term module herein may refer to an application specific integrated circuit (application specific integrated circuit, ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor, etc.) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an alternative example, it will be understood by those skilled in the art that the edge computing platform 200 may be specifically an edge computing device in the foregoing embodiment, or the functions of the edge computing device in the foregoing embodiment may be integrated into the edge computing platform 200, and the edge computing platform 200 may be used to execute each flow and/or step corresponding to the edge computing device in the foregoing method embodiment, which is not repeated herein for avoiding repetition.
The edge computing platform 200 has functions of implementing corresponding steps executed by the edge computing device in the method; the above functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above. For example, the acquisition module may be a communication interface, such as a transceiver interface.
In an embodiment of the present application, the edge computing platform 200 in fig. 2 may also be a chip or a system of chips, for example: system on chip (SoC).
Fig. 3 shows a schematic block diagram of an edge computing device 300 provided by an embodiment of the application. The apparatus 300 includes a processor 310, a transceiver 320, and a memory 330. Wherein the processor 310, the transceiver 320 and the memory 330 are in communication with each other through an internal connection path, the memory 330 is used for storing instructions, and the processor 310 is used for executing the instructions stored in the memory 330 to control the transceiver 320 to transmit signals and/or receive signals.
It should be understood that the apparatus 300 may be specifically an edge computing device in the foregoing embodiment, or the functions of the edge computing device in the foregoing embodiment may be integrated in the apparatus 300, and the apparatus 300 may be used to perform the steps and/or flows corresponding to the edge computing device in the foregoing method embodiment. Alternatively, the memory 330 may include read-only memory and random access memory and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type. The processor 310 may be configured to execute instructions stored in the memory, and when the processor executes the instructions, the processor may perform the steps and/or processes described above in connection with the edge computing device in the method embodiments.
It should be appreciated that, in embodiments of the present application, the processor 310 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor executes instructions in the memory to perform the steps of the method described above in conjunction with its hardware. To avoid repetition, a detailed description is not provided herein.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. An edge computing method, comprising:
migrating services generated by the multi-access edge computing MEC application program instance;
updating the availability of the service;
installing flow rules, uninstalling flow rules or updating parameters of the existing flow rules according to the request of the MEC application program instance;
installing or uninstalling tasks according to the requests of the MEC application program instances;
different weights are distributed to different edge computing tasks in a task edge computing table, so that the edge computing tasks are processed according to the weights;
the updating the availability of the service comprises the following steps:
initializing a kernel of at least one edge node;
transmitting the initialized kernel and training samples to at least one MEC application device;
acquiring parameters fed back by the at least one MEC application device, wherein the parameters are acquired by the at least one MEC application device through back propagation of a neural network;
and updating the availability of the service according to the parameters and a preset learning rate.
2. The method of claim 1, wherein assigning different weights to different edge computing tasks in a task edge computing table to process the edge computing tasks according to the weights comprises:
arranging processing standards of task edge calculation of the at least one edge node in descending order, preferentially processing the edge calculation task with a lower weight value and close to the cut-off processing time of the edge calculation task, and generating a task edge calculation table;
and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
3. The method according to claim 2, characterized in that the maximum task amount Ω of task edge calculation enabled by the at least one edge node is determined by the sum of a counter and a register;
the sequentially accessing the at least one edge node according to the order of the task edge calculation table includes:
when the maximum task amount Ω for which a first edge node of the at least one edge node can perform task edge calculation is smaller than the total amount O_e of task data to be distributed, ending the task edge calculation of the first edge node; or,
when the maximum task amount Ω for which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, executing the task edge calculation of the first edge node.
4. An edge computing platform, comprising:
the migration module is used for migrating the service generated by the multi-access edge computing MEC application program instance;
a processing module, configured to update availability of the service; installing flow rules, uninstalling flow rules or updating parameters of the existing flow rules according to the request of the MEC application program instance; installing or uninstalling tasks according to the requests of the MEC application program instances; and allocating different weights for different edge computing tasks in a task edge computing table, so as to process the edge computing tasks according to the weights;
the processing module is specifically configured to:
initializing a kernel of at least one edge node;
transmitting the initialized kernel and training samples to at least one MEC application device;
acquiring parameters fed back by the at least one MEC application device, wherein the parameters are acquired by the at least one MEC application device through back propagation of a neural network;
and updating the availability of the service according to the parameters and a preset learning rate.
5. The edge computing platform of claim 4, wherein the processing module is specifically configured to:
arranging processing standards of task edge calculation of the at least one edge node in descending order, preferentially processing the edge calculation task with a lower weight value and close to the cut-off processing time of the edge calculation task, and generating a task edge calculation table;
and sequentially accessing the at least one edge node according to the sequence of the task edge calculation table.
6. The edge computing platform of claim 5, wherein a maximum amount of tasks Ω that the at least one edge node can perform task edge calculations is determined by a sum of a counter and a register;
the processing module is specifically configured to:
when the maximum task amount Ω for which a first edge node of the at least one edge node can perform task edge calculation is smaller than the total amount O_e of task data to be distributed, ending the task edge calculation of the first edge node; or,
when the maximum task amount Ω for which the first edge node can perform task edge calculation is greater than or equal to the total amount O_e of task data to be distributed, executing the task edge calculation of the first edge node.
7. An edge computing device, comprising: a processor coupled to a memory for storing a computer program which, when invoked by the processor, causes the apparatus to perform the method of any one of claims 1 to 3.
8. A computer readable storage medium storing a computer program comprising instructions for implementing the method of any one of claims 1 to 3.
CN202110224975.2A 2021-03-01 2021-03-01 Edge computing method and edge computing platform Active CN112948114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110224975.2A CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110224975.2A CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Publications (2)

Publication Number Publication Date
CN112948114A CN112948114A (en) 2021-06-11
CN112948114B true CN112948114B (en) 2023-11-10

Family

ID=76246911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110224975.2A Active CN112948114B (en) 2021-03-01 2021-03-01 Edge computing method and edge computing platform

Country Status (1)

Country Link
CN (1) CN112948114B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453255B (en) * 2021-06-25 2023-03-24 国网湖南省电力有限公司 Method and device for balancing and optimizing service data transmission load of edge device container

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067583A (en) * 2018-08-08 2018-12-21 深圳先进技术研究院 A kind of resource prediction method and system based on edge calculations
CN111132235A (en) * 2019-12-27 2020-05-08 东北大学秦皇岛分校 Mobile offload migration algorithm based on improved HRRN algorithm and multi-attribute decision
CN111615128A (en) * 2020-05-25 2020-09-01 浙江九州云信息科技有限公司 Multi-access edge computing method, platform and system
CN111953759A (en) * 2020-08-04 2020-11-17 国网河南省电力公司信息通信公司 Collaborative computing task unloading and transferring method and device based on reinforcement learning
CN112134916A (en) * 2020-07-21 2020-12-25 南京邮电大学 Cloud edge collaborative computing migration method based on deep reinforcement learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11184794B2 (en) * 2019-05-22 2021-11-23 Microsoft Technology Licensing, Llc Systems and methods for distribution of application logic in digital networks
US11824784B2 (en) * 2019-12-20 2023-11-21 Intel Corporation Automated platform resource management in edge computing environments
US11630706B2 (en) * 2020-09-22 2023-04-18 Intel Corporation Adaptive limited-duration edge resource management

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109067583A (en) * 2018-08-08 2018-12-21 深圳先进技术研究院 A kind of resource prediction method and system based on edge calculations
CN111132235A (en) * 2019-12-27 2020-05-08 东北大学秦皇岛分校 Mobile offload migration algorithm based on improved HRRN algorithm and multi-attribute decision
CN111615128A (en) * 2020-05-25 2020-09-01 浙江九州云信息科技有限公司 Multi-access edge computing method, platform and system
CN112134916A (en) * 2020-07-21 2020-12-25 南京邮电大学 Cloud edge collaborative computing migration method based on deep reinforcement learning
CN111953759A (en) * 2020-08-04 2020-11-17 国网河南省电力公司信息通信公司 Collaborative computing task unloading and transferring method and device based on reinforcement learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Task Offloading for Social Sensing Applications in Mobile Edge Computing; Jingya Zhou et al.; 2019 Seventh International Conference on Advanced Cloud and Big Data (CBD); 333-338 *
Multi-constraint trusted collaborative task migration strategy for edge computing; Yue Guangxue et al.; Telecommunications Science (No. 11); 36-50 *
Research and verification of key technologies of edge control systems for internet of things applications; Tang Siyu; China Master's Theses Full-text Database, Information Science and Technology (No. 2); I136-1038 *

Also Published As

Publication number Publication date
CN112948114A (en) 2021-06-11

Similar Documents

Publication Publication Date Title
US11429449B2 (en) Method for fast scheduling for balanced resource allocation in distributed and collaborative container platform environment
CN111400001B (en) Online computing task unloading scheduling method facing edge computing environment
CN108228347A (en) The Docker self-adapting dispatching systems that a kind of task perceives
WO2018176385A1 (en) System and method for network slicing for service-oriented networks
CN105144109B (en) Distributive data center technology
US9501326B2 (en) Processing control system, processing control method, and processing control program
CN111131486B (en) Load adjustment method and device of execution node, server and storage medium
CN112148492B (en) Service deployment and resource allocation method considering multi-user mobility
Santos et al. Zeus: A resource allocation algorithm for the cloud of sensors
KR101578177B1 (en) Method and system for migration based on resource utilization rate in cloud computing
Shekhar et al. URMILA: Dynamically trading-off fog and edge resources for performance and mobility-aware IoT services
Cicconetti et al. Uncoordinated access to serverless computing in MEC systems for IoT
Yao et al. Forecasting assisted VNF scaling in NFV-enabled networks
CN112948114B (en) Edge computing method and edge computing platform
CN117135131A (en) Task resource demand perception method for cloud edge cooperative scene
Xu et al. Online learning algorithms for offloading augmented reality requests with uncertain demands in MECs
CN108769253A (en) A kind of adaptive prefetching control method of distributed system access performance optimization
Alqarni et al. A survey of computational offloading in cloud/edge-based architectures: strategies, optimization models and challenges
Abdelwahab et al. Flocking virtual machines in quest for responsive iot cloud services
More et al. Energy-aware VM migration using dragonfly–crow optimization and support vector regression model in Cloud
CN116472726A (en) Techniques for selecting network protocols
CN113821317A (en) Edge cloud collaborative micro-service scheduling method, device and equipment
CN116996941A (en) Calculation force unloading method, device and system based on cooperation of cloud edge ends of distribution network
Huang et al. Intelligent task migration with deep Qlearning in multi‐access edge computing
KR20220150126A (en) Coded and Incentive-based Mechanism for Distributed Training of Machine Learning in IoT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant