CN113315806B - Multi-access edge computing architecture for cloud network fusion - Google Patents

Multi-access edge computing architecture for cloud network fusion

Info

Publication number
CN113315806B
CN113315806B CN202110400752.7A
Authority
CN
China
Prior art keywords
network
user
access
edge computing
mac
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110400752.7A
Other languages
Chinese (zh)
Other versions
CN113315806A (en)
Inventor
王璐
张健浩
伍楷舜
王廷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202110400752.7A priority Critical patent/CN113315806B/en
Publication of CN113315806A publication Critical patent/CN113315806A/en
Application granted granted Critical
Publication of CN113315806B publication Critical patent/CN113315806B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08: Configuration management of networks or network elements
    • H04L41/0893: Assignment of logical groups to network elements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/145: Network analysis or design involving simulating, designing, planning or modelling of a network
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H04L67/101: Server selection for load balancing based on network conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a multi-access edge computing architecture for cloud network fusion. On the access network side of the architecture, a plurality of edge computing nodes are deployed; the physical channel of the access network side is divided into a plurality of sub-channels, each supporting one MAC access mode, and a software-defined network controller is provided in the architecture to be responsible for resource allocation of the physical-layer sub-channels and the MAC-layer protocols and to control the offloading of end-user tasks to edge computing nodes or the cloud center. The invention enables fine-grained control and cooperative management of network resources and computing resources, providing more effective computation offloading and service enhancement.

Description

Multi-access edge computing architecture for cloud network fusion
Technical Field
The invention relates to the technical field of computer networks, in particular to a multi-access edge computing architecture for cloud network fusion.
Background
In recent years, tablet computers, smart phones, large-scale sensors, and a wide variety of heterogeneous Internet-of-Things devices have become increasingly popular and have become a major computing resource in daily life. A conservative estimate is that by 2022, 50 billion terminals will be interconnected. With the explosive growth of terminal devices, a great number of applications designed for them have emerged, such as interactive games, natural language processing, facial recognition, and augmented reality. Applications of this type often require large amounts of resources, including intensive computing resources and high-speed transmission resources. With the growing richness of novel interactive mobile applications and the increasing power of terminal devices, mobile computing is now undergoing a major revolution.
Recent research advances have witnessed a paradigm shift in mobile computing. Driven by the massive data continuously generated by terminal devices, centralized mobile cloud computing is migrating toward mobile edge computing, in which computation, storage, and network resources are all integrated at the Base Station (BS) side. The large amount of idle computing resources and storage at the edge of the network can be fully utilized to accomplish computation-intensive and delay-critical computing tasks. As various computing and storage resources move ever closer to end users, mobile edge computing is expected to provide services with ultra-low latency and ultra-low network congestion for resource-hungry applications.
The explosive growth of terminal devices makes wireless connectivity one of the key technologies for exploiting the potential of mobile edge computing. Accordingly, the applicability of mobile edge computing has been extended into Radio Access Networks (RANs) to provide edge computing capability there; this is also known as multi-access edge computing (MEC). In a multi-access computing architecture, edge computing resources may be deployed in an LTE base station (eNodeB), a 3G Radio Network Controller (RNC), or a multi-antenna aggregation base station. Multi-access edge computing deeply fuses the theories and technologies of the two disciplines of mobile computing and wireless communication, and as one of the typical technologies of cloud network fusion it is advocated by researchers in both academia and industry. Combining new wireless network technology with a service-oriented edge-cloud architecture is expected to significantly reduce network congestion and user delay, improve users' quality of service (QoS) and quality of experience (QoE), and provide better services for end applications, content providers, and third-party operators.
Despite researchers' continual and extensive efforts in multi-access edge computing, it still faces many challenges due to the physical hardware limitations of terminal devices and the connection capability limitations of wireless channels. For example, existing schemes typically consider only coarse-grained resource allocation policies covering transmission, computation, and storage resources; this lack of fine-grained control over all possible resources becomes a major obstacle to implementing delay-sensitive services.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a cloud network fusion-oriented multi-access edge computing architecture: a new technical scheme of a fine-grained, software-defined multi-access edge computing architecture that can allocate resources flexibly and at fine granularity.
The technical scheme of the invention provides a multi-access edge computing architecture oriented to cloud network fusion. On the access network side of the architecture, a plurality of edge computing nodes are deployed; the physical channel of the access network side is divided into a plurality of sub-channels, each supporting one MAC access mode, and a software-defined network controller is provided in the architecture to be responsible for resource allocation of the physical-layer sub-channels and the MAC-layer protocols and to control the offloading of end-user tasks to edge computing nodes or the cloud center.
Compared with the prior art, the invention has the advantage of providing a fine-grained, software-defined multi-access edge computing architecture capable of fine-grained control and cooperative management of network resources and computing resources. In addition, a two-stage resource allocation strategy based on deep reinforcement learning (Q-learning) is designed, providing more effective computation offloading and service enhancement.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic diagram of an edge computing protocol stack for cloud network convergence according to an embodiment of the present invention;
Fig. 2 is a physical layer/MAC layer slice diagram according to one embodiment of the invention;
Fig. 3 is a schematic diagram of a fine-grained multi-access edge computing architecture oriented to cloud network convergence according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a software-defined multi-access edge computing-based distributed node architecture according to one embodiment of the invention;
Fig. 5 is a schematic diagram of a resource allocation model for cloud network convergence according to an embodiment of the invention;
Fig. 6 is a flow diagram of SDN-based resource allocation and task offloading in accordance with one embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The invention provides a fine-grained, software-defined multi-access edge computing architecture that mainly involves three aspects. First, the physical layer/MAC layer design adopts a fine-grained physical layer/MAC layer slicing technique oriented to cloud network fusion; starting from the bottom layer of the network, it realizes fine-grained hybrid-MAC concurrent transmission in the access network through physical layer/MAC layer slicing. Second, the resources of the multi-access mobile edge computing nodes are cooperatively optimized: with the software-defined fine-grained multi-access edge computing architecture, edge computing can better exploit the access network characteristics and the resources of the MEC nodes for cooperative optimization. Third, an adaptive resource allocation algorithm is provided: a fine-grained adaptive resource allocation learning algorithm designed on the basis of Q-Learning and oriented to cloud network fusion.
Specific examples of the above three aspects will be described in detail below.
1) Physical layer/MAC layer design
Starting from the bottom layer of the access network, a slice design based on the physical layer and the MAC layer is provided. Referring to Fig. 1, the original physical channel is divided into a plurality of fine-grained units (e.g., sub-channels composed of several sub-carriers), and each sub-channel can support one MAC access method. In this way, different terminal devices can access the same channel through different MAC protocols according to their channel quality and transmission requirements, so that the channel resources are utilized to the maximum extent.
Specifically, the physical layer/MAC layer slice is built on top of Orthogonal Frequency Division Multiplexing (OFDM). The physical-layer subcarriers are decoupled, realizing fine-grained MAC access. The physical layer/MAC layer slice provides a flexible, adaptive transmission commitment for end users accessing the network, and the concurrent transmission of multiple MAC accesses in the frequency domain can better satisfy end users' dynamic and diverse transmission requirements. In this context, the protocol stack is further provided with an SDN (Software Defined Network) controller running above the physical layer/MAC layer slice and responsible for the allocation of physical-layer sub-channels and MAC-layer protocols. By adjusting the physical-layer sub-channel resources and MAC-layer access protocols in a timely manner, the SDN controller aims to exploit the diversity of the underlying channels to cooperatively optimize physical-layer and MAC-layer resources and maximize resource utilization.
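As a purely illustrative sketch (not part of the claimed embodiment; all names and values below are assumptions), the slicing just described can be modeled as a small data structure mapping sub-carrier ranges to MAC access modes:
************************************************
from dataclasses import dataclass
from enum import Enum

class MacProtocol(Enum):
    TDMA = "TDMA"    # reservation-type MAC
    CSMA = "CSMA"    # contention-type MAC

@dataclass
class SubChannel:
    subcarriers: range    # the physical-layer sub-carriers forming this slice
    mac: MacProtocol      # the MAC access mode assigned by the SDN controller

def slice_channel(n_subcarriers: int, macs: list[MacProtocol]) -> list[SubChannel]:
    """Divide an OFDM channel into equal sub-channels, one MAC protocol each."""
    width = n_subcarriers // len(macs)
    return [SubChannel(range(i * width, (i + 1) * width), m)
            for i, m in enumerate(macs)]

# Example mirroring Fig. 2: five sub-channels, with sub-channels 2 and 5
# contention-type (CSMA) and sub-channels 1, 3, 4 reservation-type (TDMA).
slices = slice_channel(60, [MacProtocol.TDMA, MacProtocol.CSMA,
                            MacProtocol.TDMA, MacProtocol.TDMA,
                            MacProtocol.CSMA])
************************************************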
In addition, since the conventional contention policy for sub-channel resources significantly increases transmission cost, in a preferred embodiment the invention further designs a frequency-domain contention policy. Under this policy, end users can request resources simultaneously on different frequency bands; compared with traditional time-domain contention, frequency-domain contention can fully utilize frequency-domain resources, avoid the transmission conflicts of time-domain contention, and markedly reduce contention overhead.
For example, to suit the fine-grained edge computing architecture, a two-stage F-RTS/F-CTS structure is designed (F-RTS denotes a frequency-domain request-to-send and F-CTS a frequency-domain clear-to-send), an example of which is given in Fig. 2. The entire channel is divided into 5 sub-channels. After the network is initialized, the end user first waits for a distributed inter-frame space (DIFS) to complete synchronization. Subsequently, during the polling period, the end user submits its transmission/computation demands through the frequency-domain contention policy. After the SDN controller obtains the demands of all users, it runs a resource allocation policy, allocates all possible resources (such as transmission, computation, and storage resources) according to the end users' channel quality and transmission/computation demands, and feeds back the allocation result in the F-CTS. After waiting for a PCF inter-frame space (PIFS), the end user may access the channel resources allocated to it according to the notification result.
Specifically, the two-stage F-RTS/F-CTS structure implements two-stage contention polling: a contention/notification polling stage and a transmission polling stage. The first-stage contention/notification polling is responsible for allocating channel and computing resources via F-RTS/F-CTS. In the second-stage transmission polling, the F-RTS/F-CTS serves another purpose. If an end user is assigned a contention-type MAC protocol, such as Carrier Sense Multiple Access (CSMA), i.e., sub-channels 2 and 5 in Fig. 2, then the F-RTS/F-CTS of these two sub-channels is used to organize the end users' access to the access network. If an end user is assigned a reservation-type MAC protocol, as on sub-channels 1, 3, and 4 in Fig. 2, then the F-RTS/F-CTS of these three sub-channels is used to schedule transmissions. The F-RTS/F-CTS therefore has different frame formats in the two stages. In the first stage, the F-RTS/F-CTS is used for resource contention, so in the frequency domain it comprises two parts: one part serves as an identifier and the other as the contention/notification band. The identifier part, placed at the beginning of the frame, indicates whether the current frame is an F-RTS or an F-CTS frame. Taking a 64-point FFT as an example, 16 sub-carriers are generally suitable to ensure the interference resistance of the BAM. In the contention/notification band, to ensure anti-interference performance, transmission requests are made with every 4 sub-carriers as a base unit and computing-resource requests with every 12 sub-carriers as a base unit. Correspondingly, in the notification phase, the sub-carriers are also divided into two parts: one confirms the allocation of the MAC protocol and the other confirms the allocation of the sub-channels. In the second stage, the F-RTS/F-CTS is used for fine-grained access network resource access; since the transmission requirements of terminal devices are dynamic and diverse, the second-stage transmission supports concurrent transmission of different types of MAC protocols, with the specific allocation again planned by the SDN controller. For example, a reservation-type MAC protocol, such as Time Division Multiple Access (TDMA), can be given its transmission schedule through the F-CTS, while a contention-type MAC protocol, such as CSMA, can use F-RTS/F-CTS for fine-grained sub-channel access contention. In second-stage CSMA contention, a user can randomly select one sub-carrier as its identity and use it to transmit a BAM symbol during the F-RTS. If the end user's access is granted, the SDN controller notifies it on the corresponding sub-carrier during the F-CTS. In this way, fine-grained access network resource allocation can be realized, and the spectrum utilization and system efficiency of the access network are improved to the maximum extent.
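To make the 64-point FFT example above concrete, the following sketch computes how many request slots the contention band can carry. It is illustrative only: the 16/4/12 sub-carrier figures come from the text, while the slot bookkeeping itself is an assumption.
************************************************
FFT_POINTS = 64
ID_SUBCARRIERS = 16      # identifier: marks the frame as F-RTS or F-CTS
TX_REQ_UNIT = 4          # one transmission request per 4 sub-carriers
COMP_REQ_UNIT = 12       # one computing-resource request per 12 sub-carriers

def contention_band_slots(n_comp_requests: int) -> int:
    """Transmission-request slots left after reserving n_comp_requests
    computing-request units in the contention/notification band."""
    band = FFT_POINTS - ID_SUBCARRIERS          # 48 sub-carriers
    remaining = band - n_comp_requests * COMP_REQ_UNIT
    if remaining < 0:
        raise ValueError("computing requests exceed the contention band")
    return remaining // TX_REQ_UNIT             # whole transmission-request slots

# Example: reserving 2 computing-request units leaves 24 sub-carriers,
# i.e. 6 transmission-request slots.
assert contention_band_slots(2) == 6
************************************************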
2) Fine-grained multi-access edge computing architecture based on software definition
In the software-defined multi-access edge computing architecture shown in Fig. 3, MEC nodes (e.g., servers) with storage and computing resources are deployed within the access networks to provide resilient network services, such as computation offloading and service caching, for end users. These MEC nodes may be deployed in eNodeBs, BSs, macro stations, or small base stations; they may also be deployed in residential areas and accessed through an edge switch or an integrated switch. In addition to the MEC nodes in the access network, an MEC Data Center (DC) of appropriate size is deployed between the access network and the aggregation network as needed. Typically, the aggregation nodes of a convergence network, such as Public Switched Telephone Network (PSTN) central offices or Mobile Telephone Switching Offices (MTSO), are ideal sites for deploying MEC data centers, since all traffic passes through these nodes before reaching the Internet. Moreover, the MEC DC is a software-defined data center, in which MEC nodes (i.e., resource pools containing computing and storage capacity) are controlled by one or more SDN controllers as needed.
Fig. 4 shows a schematic diagram of a software-defined multi-access edge computing distributed node architecture. The architecture includes a plurality of MEC nodes having different functions and roles, including, for example, a common node (CNode), a regional proxy node (RANode), a super node (SNode), and a certificate authority node (CA).
Common node CNode: CNodes are the most common MEC nodes, distributed throughout the access network. A CNode provides computation offloading services for end users and provides storage resources for the service cache of remote Internet/cloud services. CNodes are highly virtualized, and Virtual Machines (VMs) can be remotely installed on or migrated to CNodes under the control of an SNode.
Regional agent node RANode: a RANode is a regional agent MEC node located in the access network. It is selected by the super node SNode, which determines the appropriate RANode from the CNodes within the scope of the access network. The RANode is responsible for resource discovery and for managing/monitoring the status of all CNodes within its area. Although it acts as a regional agent, the RANode is itself also a CNode and can perform computation offloading.
Super node SNode: an SNode is a super intelligent node located in the MEC data center between the access network and the aggregation network. Each SNode is responsible for managing the CNodes and RANodes that the SDN controller allocates to it. The tasks of an SNode include managing remote VM installations on CNodes/RANodes, node join/leave (seamless extension), node configuration, user management, and so on. SNodes are controlled by the SDN controller and may communicate with other SNodes. An SNode may also cache Internet services and cloud services offloaded from a remote data center, and it can further offload part of the caching service to CNodes or RANodes closer to the end user, thereby greatly improving computation offloading efficiency.
Certificate authority node CA: the CA is a node located in the MEC data center that is responsible for certificate generation and management for users; it provides signing, authorization, and certificate functions and retains the information of all authorized users in the certificate store.
3) Adaptive resource allocation algorithm
In the embodiment of the invention, an adaptive resource allocation strategy based on deep reinforcement learning is adopted. Specifically, for the adaptive resource allocation algorithm, the fine-grained edge architecture oriented to cloud network fusion is abstracted into the system model shown in Fig. 5. The model consists of m user clients Client_{1,2,...,m}, n edge computing nodes CNode_{1,2,...,n}, the SDN controller, the SNode, and the central cloud, which together form a multi-layer network. Based on this system model, the resource allocation algorithm is modeled as follows:
User tasks: ClientTask_i (i ∈ {1, 2, ..., m}) represents the task request submitted by user i. ClientTask_i can be subdivided as Task_i = {Mem_i, f_i}, where Mem_i is the memory size required by the task, representing its required storage capacity, and f_i is the number of clock cycles required to process 1 bit of data, representing the computational power required by the task.
Service node resources: ServerSource_j (j ∈ {1, 2, ..., n}) represents the amount of resources owned by service node j. It can likewise be subdivided as ServerSource_j = {Mem_j, f_j}.
MAC protocol: MAC protocol = {TDMA, CSMA}.
Channel quality CSI: CSI = {CSI_c1, CSI_c2, ..., CSI_cm}.
The model must satisfy the following constraint, i.e., a user task must be smaller than the service node's resources: Mem_i ≤ Mem_j and f_i ≤ f_j for the service node j that handles task i.
The following relationship also exists between the MAC protocol selected by a user and the channel quality CSI: MAC = λ · CSI, where λ is the probability of selecting a certain MAC protocol.
Given user task ClientTask_i and channel quality CSI_i, the goal is to optimize the network utility and computational utility of the overall system. [Equation image: objective function maximizing the combined network utility and computational utility of the system.]
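For illustration, the above model can be sketched in code as follows. This is a minimal sketch assuming a per-task/per-node feasibility check; the utility functions themselves are left abstract because the text gives them only as an optimization target.
************************************************
from dataclasses import dataclass

@dataclass
class ClientTask:
    mem: float   # Mem_i: storage capacity required by the task
    f: float     # f_i: clock cycles required to process 1 bit of data

@dataclass
class ServerSource:
    mem: float   # Mem_j: storage owned by service node j
    f: float     # f_j: computing capacity of service node j

def feasible(task: ClientTask, node: ServerSource) -> bool:
    """Constraint: the user task must be smaller than the node's resources."""
    return task.mem <= node.mem and task.f <= node.f

# Example: a task needing 2 units of memory fits on a node offering 4.
assert feasible(ClientTask(mem=2.0, f=1.0), ServerSource(mem=4.0, f=3.0))
************************************************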
in the description herein, a service node means a node for providing a computing offload service for an end user and providing a storage resource for a service cache of a remote Internet/cloud service, and may be, for example, a public node (CNode).
Based on the above modeling, a collaborative learning optimization strategy fusing network and cloud characteristics is preferably designed: fine-grained learning of transmission resource allocation is first performed from the end users' access network characteristics, and then, based on the learned transmission resource result and the computing performance of the edge computing nodes, policy learning for computation offloading is performed, so that network and computing resources are utilized to the maximum extent.
Specifically, the collaborative learning optimization strategy fusing the network and cloud characteristics comprises the following steps:
step S110, according to the channel quality of the terminal user, the physical layer/MAC layer strategy of each user is self-adaptively learned.
The SNode first performs adaptive learning of each user's physical layer/MAC layer policy based on the end users' channel quality CSI_{1,2,...,m}. Assume there are currently S sub-channels Channel_{1,2,...,S}. The computational tasks that user Client_i needs to offload can be denoted task_i = {s_i, g_i}, i ∈ {1, 2, ..., Q}, where Q is the total number of tasks offloaded by user i, s_i is the size of the task's input data, and g_i is the size of the requested server computing resources. The SNode then needs to determine the MAC policy MAC_i to be used by user Client_i.
State (State): the control has all end-user channel quality, so the overall system state represented by the control can be expressed as:
State={Subch 1 ,Subch 2 ,...,Subch n ,MAC i }
wherein, subch represents channel resource, MAC i Indicating the subchannel MAC protocol assigned by the ith user.
Action: an action is the controller's assignment of a MAC protocol, i.e., the action set can be expressed as:
{MAC_1, MAC_2, ..., MAC_n}
reward (Reward): the reward component is determined by the status of the environmental information before and after the action is performed.
For example, with delay as the environmental indicator, if State_t = 100 s and State_{t+1} = 120 s, there is an increase of 0.2, which for a delay indicator is a negative benefit. The indicator growth ratio is computed as:
IncreaseRatio = |State_t − State_{t+1}| / State_t
in the above formula, the molecular portion takes the absolute value form because the effects brought by the index growth ratio are not uniform based on the different index forms. For the delay index, the index value increase brings negative benefits, while for the indexes such as throughput, the index value increase is positive benefits. Therefore, it is necessary to set an index increase ratio calculation formula in accordance with a specific index form.
From the above analysis the indicator growth ratio can be obtained; however, to prevent overfitting and to keep Q-table updates rational, the reward formula is set as follows:
reward = 0, if the dominant MAC protocol has already been acquired; reward = λ · IncreaseRatio, otherwise.
for the above equation, after the dominant MAC protocol has been acquired, reward is set to 0 to prevent Q table from being too large and Q _ Learning from converging too fast. λ is the attenuation factor to prevent the increase ratio (IncreaseRatio) from being too large, which results in too large a reward and too fast convergence.
The following describes an adaptive MAC allocation protocol based on Q-learning.
************************************************
Algorithm 1: MAC protocol selection algorithm based on Q-Learning
Input: network indicator type
Output: network performance indicator
1. Initialize the Q table and the initial state
2. Repeat:
3.   Determine the current network state from the current state and the network indicator type
4.   Select a MAC protocol for the current network conditions using the Q table or an ε-greedy strategy
5.   The user switches to the selected MAC protocol and reports the new state
6.   Determine the new network state from the new state and the network indicator type
7.   Compute the action reward by comparing network states
8.   Update the Q table with the obtained reward and update the current state
9.   Record the network performance indicator
Termination condition: a terminal state is reached or the number of training episodes is reached
Return: network performance indicator
**************************************************
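A runnable sketch of Algorithm 1 is given below. It is illustrative only: the toy environment, hyperparameters, and two-protocol action set are assumptions, not part of the embodiment.
************************************************
import random
from collections import defaultdict

MACS = ["TDMA", "CSMA"]

def select_mac(q, state, eps):
    """ε-greedy MAC selection from the Q table."""
    if random.random() < eps:
        return random.choice(MACS)
    return max(MACS, key=lambda m: q[(state, m)])

def q_learning_mac(env_step, episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    """env_step(state, mac) -> (reward, next_state); returns the learned Q table."""
    q = defaultdict(float)              # Q[(state, mac)] initialized to 0
    state = 0                           # initial network state
    for _ in range(episodes):
        mac = select_mac(q, state, eps)             # choose a MAC protocol
        reward, nxt = env_step(state, mac)          # user switches MAC, reports state
        best_next = max(q[(nxt, m)] for m in MACS)  # greedy value of next state
        q[(state, mac)] += alpha * (reward + gamma * best_next - q[(state, mac)])
        state = nxt
    return q

# Toy environment (assumption): CSMA works better in state 0, TDMA in state 1.
def toy_env(state, mac):
    good = (mac == "CSMA") == (state == 0)
    return (1.0 if good else -0.1), 1 - state

q_table = q_learning_mac(toy_env)
************************************************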
Step S120: the SNode further performs computation offloading policy learning for the end user.
After determining the end user's physical layer/MAC layer policy, the SNode further performs computation offloading policy learning for the end user. Suppose the resources of server Server_j can be represented as Source_j = {f_j}, j ∈ {1, 2, ..., n}, where f_j is a comprehensive evaluation of the server's resources. The resources allocated for communication are described using the MAC protocol, i.e., c_k = {TDMA, CSMA}, k ∈ {1, ..., n}.
State (State): the control possesses all the edge server information and the task information submitted by the user, so the overall system state represented by the control can be represented as:
State=
{EdgeServerSource 1 ,EdgeServerSource 2 ,...,EdgeServerSource n ,Task u }
wherein, the EdgeServerSource n Indicating the resources owned by the nth edge server, Task u Indicating the task request information of the u-th user.
Action: an action is the controller's selection of an offloading server for the user, i.e., the action set can be expressed as:
Action = {EdgeServer_1, EdgeServer_2, ..., EdgeServer_n, cloud}
where EdgeServer_n denotes the n-th edge server and cloud denotes the cloud server.
Reward: as the objective of computation offloading indicates, the shorter the delay consumed in completing a task, the better the user's task offloading request is served. It follows that when a user task is offloaded to an edge server in the local area, the delay consumed is much smaller than when offloading to an edge server in another area, and far smaller still than when offloading to the cloud server. Similarly, the cost of offloading to an edge server in another area is slightly larger than that of the local edge server but smaller than that of the cloud server. Different priorities are therefore set to identify different server levels, for example: high priority for the local edge server, medium priority for an edge server in another area, and low priority for the cloud server. Different returns are obtained according to priority: high priority is expected to yield a better return, medium priority a relatively smaller one, while the cloud server yields a negative return owing to its high long-distance latency. The reward formula is as follows: [Equation image: reward formula scaling the priority-dependent return by an attenuation factor, with a negative return for the cloud server], where λ represents the attenuation factor that prevents the network from converging too fast.
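For illustration, the priority-based reward can be sketched as follows. The concrete return values are assumptions; the text fixes only their ordering and the negative return for the cloud.
************************************************
def offload_reward(target: str, lam: float = 0.9) -> float:
    """λ-attenuated return per offloading target (values are assumptions)."""
    returns = {"local_edge": 1.0,    # high priority: local edge server
               "remote_edge": 0.5,   # medium priority: edge server elsewhere
               "cloud": -0.5}        # negative return: long-distance latency
    return lam * returns[target]

assert offload_reward("local_edge") > offload_reward("remote_edge") > 0
assert offload_reward("cloud") < 0
************************************************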
The following describes the Deep Q-learning (DQN)-based adaptive learning process for computation offloading.
****************************************************
Algorithm 2: DQN-based computation offloading algorithm
Input: prediction neural network, target neural network, experience pool
Output: network performance indicator
1. Initialize the experience pool and the neural network parameters
2. Repeat:
3.   Determine the current state from the edge server information and the task request
4.   Select an action (offloading node) based on the current state using the prediction neural network or an ε-greedy strategy
5.   The user offloads to the selected edge server or the cloud server
6.   Obtain the new state and compute the action reward
7.   Store the quadruple (current state, action, reward, new state) in the experience pool
8.   Randomly sample from the experience pool as training data for the prediction neural network
9.   Compute the loss function from the prediction neural network and the target neural network
10.  Update the prediction neural network parameters using the loss function
11.  After a certain number of training and exploration steps, replace the target neural network parameters with the prediction neural network parameters
12.  Record the network performance indicator
Termination condition: a terminal state is reached or the number of training episodes is reached
Return: network performance indicator
****************************************************
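A compact sketch of Algorithm 2 using PyTorch (a framework chosen here for illustration; the patent names none) is given below. Network sizes, hyperparameters, and the env_step interface are assumptions; the loop follows the experience-pool/target-network procedure listed above.
************************************************
import random
from collections import deque
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

def dqn_offload(env_step, state_dim, n_actions, steps=200, batch=32,
                gamma=0.9, eps=0.1, sync_every=50):
    """env_step(state, action) -> (reward, next_state tensor)."""
    pred, target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
    target.load_state_dict(pred.state_dict())
    opt = torch.optim.Adam(pred.parameters(), lr=1e-3)
    pool = deque(maxlen=10_000)                      # experience pool
    state = torch.zeros(state_dim)
    for step in range(steps):
        if random.random() < eps:                    # ε-greedy offload choice
            action = random.randrange(n_actions)
        else:
            action = pred(state).argmax().item()
        reward, nxt = env_step(state, action)        # offload and observe
        pool.append((state, action, reward, nxt))    # store the quadruple
        state = nxt
        if len(pool) >= batch:                       # train on a random sample
            s, a, r, ns = zip(*random.sample(pool, batch))
            s, ns = torch.stack(s), torch.stack(ns)
            a, r = torch.tensor(a), torch.tensor(r)
            q = pred(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():                    # target-network bootstrap
                y = r + gamma * target(ns).max(dim=1).values
            loss = nn.functional.mse_loss(q, y)
            opt.zero_grad(); loss.backward(); opt.step()
        if step % sync_every == 0:                   # periodic parameter copy
            target.load_state_dict(pred.state_dict())
    return pred
************************************************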
More specifically, the working mechanism of the SDN in the multi-access edge computing architecture is as follows:
First, Internet service providers proactively offload their associated services to MEC nodes. As shown in the SDN-based resource allocation and task offloading flow of Fig. 6, assume a mobile user is accessing an Internet-based game front-end server over a regular data path. The gaming service provider registers to use the MEC nodes, proactively offloads its services, and caches and stores them in the appropriate SNode. The SNode then pushes the services toward end users by replicating them to CNodes. Meanwhile, each CNode collects the channel information (CSI) of the mobile users in its coverage area at fixed time intervals and reports it to the SNode.
After the computing services are offloaded, when a mobile user wants to access the game server, the user submits a request to the SNode. The SNode first runs the physical layer/MAC layer allocation algorithm according to the user's channel quality. Next, according to the MAC protocol assigned to the end user and the access requirements, the SNode selects the optimal CNode for the user's offloading; if no suitable CNode exists, the user task is offloaded to the central cloud. In this way, the associated traffic is successfully offloaded from the network core and the Internet, significantly reducing the traffic burden.
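This decision flow can be sketched as follows. All names are hypothetical, and the "most spare capacity" selection rule stands in for the optimal-CNode choice, which the text does not specify.
************************************************
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mem: float   # available storage
    f: float     # available computing capacity

def select_offload_target(task_mem: float, task_f: float,
                          cnodes: list[Node]) -> str:
    """Return the chosen offload target: a CNode name or 'cloud'."""
    candidates = [c for c in cnodes if c.mem >= task_mem and c.f >= task_f]
    if candidates:
        # prefer the CNode with the most spare computing capacity
        return max(candidates, key=lambda c: c.f - task_f).name
    return "cloud"   # no suitable CNode: offload to the central cloud

# Example: two CNodes, only one of which can host the task.
assert select_offload_target(2.0, 1.5, [Node("cnode-1", 1.0, 1.0),
                                        Node("cnode-2", 4.0, 2.0)]) == "cnode-2"
************************************************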
In summary, the invention provides a fine-grained, software-defined multi-access edge computing architecture that enables fine-grained control and cooperative management of network resources and computing resources. In addition, a two-stage resource allocation strategy based on deep reinforcement learning (Q-learning) is designed. Extensive simulation experiments demonstrate that the architecture provides more effective computation offloading and service enhancement.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, Python, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), with state information of computer-readable program instructions, which can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (8)

1. A multi-access edge computing architecture oriented to cloud network fusion, with a plurality of edge computing nodes provided on the access network side, characterized in that a physical channel on the access network side is divided into a plurality of sub-channels, each sub-channel supports one MAC access mode, and a software-defined network controller is provided in the architecture to be responsible for resource allocation of the physical-layer sub-channels and the MAC-layer protocols and to control the offloading of end-user tasks to the edge computing nodes or the cloud center;
the access network side is provided with various types of edge computing nodes, including common nodes, regional agent nodes, super nodes, and certificate authority nodes, wherein the common nodes are distributed throughout the access network and are used for providing computation offloading services for end users and providing storage resources for the service cache of remote Internet or cloud services; a regional agent node is a regional agent edge computing node located in the access network and is responsible for resource discovery and for managing or monitoring the status of the common nodes in its region; the super nodes are located in the data center between the access network and the convergence network, and each super node is responsible for managing the common nodes and regional agent nodes allocated to it by the software-defined network controller; the certificate authority node is a node located in the data center and is responsible for generating and managing users' certificates;
wherein the super node SNode allocates the MAC protocol for end users according to the end users' channel quality CSI_{1,2,...,m} using reinforcement learning Q-Learning, wherein the State is the channel quality of the end users, and the overall system state is represented as: State = {Subch_1, Subch_2, ..., Subch_n, MAC_i}; the Action represents the MAC protocol selected for a user, and the action set is denoted {MAC_1, MAC_2, ..., MAC_n}; the Reward is determined by the environmental state before and after the action is executed; Subch denotes a channel resource, MAC_i denotes the sub-channel MAC protocol allocated to the i-th user, m is the number of end users, and n is the number of edge computing nodes.
2. The cloud network convergence-oriented multi-access edge computing architecture of claim 1, wherein, in response to a terminal's request for transmission or computing requirements, the software-defined network controller runs a resource allocation policy, allocates resources according to the end user's channel quality, transmission requirements, or computing requirements, and feeds back the allocation result in an F-CTS; after waiting for a PCF inter-frame space, the end user accesses the allocated channel resources according to the notification result.
3. The cloud-fusion-oriented multi-access edge computing architecture of claim 1, wherein, given a user task ClientTask_i and channel quality CSI_i, the optimized network utility and computational utility are taken as the resource allocation objective, expressed as:
[Equation image: objective function maximizing the combined network utility and computational utility.]
wherein the user task ClientTask_i (i ∈ {1, 2, ..., m}) represents the task request submitted by user i, ServerSource_j (j ∈ {1, 2, ..., n}) represents the amount of resources owned by service node j, the MAC protocol is denoted MAC protocol = {TDMA, CSMA}, and the channel quality CSI is denoted CSI = {CSI_c1, CSI_c2, ..., CSI_cm}.
4. The cloud network convergence-oriented multi-access edge computing architecture of claim 3, wherein the following constraints apply when solving for the resource allocation objective:
the user task is smaller than the service node's resources;
the relation between the MAC protocol selected by the user and the channel quality CSI is: MAC = λ · CSI, where λ is the probability of selecting a certain MAC protocol.
5. The cloud network convergence-oriented multi-access edge computing architecture of claim 1, wherein the reward is computed as:
reward = 0, if the dominant MAC protocol has already been acquired; reward = λ · IncreaseRatio, otherwise,
where λ is the attenuation factor.
6. The cloud network convergence-oriented multi-access edge computing architecture of claim 1, wherein the super node SNode employs reinforcement learning Q-Learning to perform policy learning for the computation offloading of end users, wherein the State is the edge server information and the task information submitted by a user, and the overall system state is expressed as:
State = {EdgeServerSource_1, EdgeServerSource_2, ..., EdgeServerSource_n, Task_u},
the Action represents the user's selection of an offloading server, and the action set is expressed as: Action = {EdgeServer_1, EdgeServer_2, ..., EdgeServer_n, cloud}; the Reward targets the delay consumed in completing the user's task offloading request;
wherein EdgeServerSource_n denotes the resources owned by the n-th edge server, Task_u denotes the task request information of the u-th user, EdgeServer_n denotes the n-th edge server, and cloud denotes the cloud server.
7. The cloud network convergence-oriented multi-access edge computing architecture of claim 6, wherein the reward is computed as:
[Equation image: reward formula scaling the priority-dependent return by the attenuation factor λ, with a negative return for the cloud server.]
wherein λ represents the attenuation factor, and the priority is set high if offloading to a local server, medium if offloading to a server in another region, and low if offloading to the cloud server.
8. A computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements a software defined network controller of a multi-access edge computing architecture according to any of claims 1 to 7.
CN202110400752.7A 2021-04-14 2021-04-14 Multi-access edge computing architecture for cloud network fusion Active CN113315806B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400752.7A CN113315806B (en) 2021-04-14 2021-04-14 Multi-access edge computing architecture for cloud network fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110400752.7A CN113315806B (en) 2021-04-14 2021-04-14 Multi-access edge computing architecture for cloud network fusion

Publications (2)

Publication Number Publication Date
CN113315806A CN113315806A (en) 2021-08-27
CN113315806B (en) 2022-09-27

Family

ID=77372121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110400752.7A Active CN113315806B (en) 2021-04-14 2021-04-14 Multi-access edge computing architecture for cloud network fusion

Country Status (1)

Country Link
CN (1) CN113315806B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115766721A (en) * 2022-11-21 2023-03-07 中国联合网络通信集团有限公司 Service transmission method, device and storage medium thereof
CN117155845B (en) * 2023-10-31 2024-01-23 北京邮电大学 Internet of things data interaction method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418416A (en) * 2019-07-26 2019-11-05 东南大学 Resource allocation methods based on multiple agent intensified learning in mobile edge calculations system
CN110933692A (en) * 2019-12-02 2020-03-27 山东大学 Optimized cache system based on edge computing framework and application thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109302709B (en) * 2018-09-14 2022-04-05 重庆邮电大学 Mobile edge computing-oriented vehicle networking task unloading and resource allocation strategy
CN109981753B (en) * 2019-03-07 2021-04-27 中南大学 Software-defined edge computing system and resource allocation method for Internet of things
US11844100B2 (en) * 2019-03-12 2023-12-12 Nec Corporation Virtual radio access network control

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110418416A (en) * 2019-07-26 2019-11-05 东南大学 Resource allocation methods based on multiple agent intensified learning in mobile edge calculations system
CN110933692A (en) * 2019-12-02 2020-03-27 山东大学 Optimized cache system based on edge computing framework and application thereof

Also Published As

Publication number Publication date
CN113315806A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
Yang et al. Catalyzing cloud-fog interoperation in 5G wireless networks: An SDN approach
CN109951821B (en) Task unloading scheme for minimizing vehicle energy consumption based on mobile edge calculation
Wang et al. Intelligent cognitive radio in 5G: AI-based hierarchical cognitive cellular networks
Sun et al. Autonomous resource slicing for virtualized vehicular networks with D2D communications based on deep reinforcement learning
CN112822050B (en) Method and apparatus for deploying network slices
CN113315806B (en) Multi-access edge computing architecture for cloud network fusion
CN111083634A (en) CDN and MEC-based vehicle networking mobility management method
WO2019129169A1 (en) Electronic apparatus and method used in wireless communications, and computer readable storage medium
CN112887999B (en) Intelligent access control and resource allocation method based on distributed A-C
Xu et al. PDMA: Probabilistic service migration approach for delay‐aware and mobility‐aware mobile edge computing
CN114390057A (en) Multi-interface self-adaptive data unloading method based on reinforcement learning under MEC environment
WO2021045957A1 (en) Facilitating service continuity and quality of experience through dynamic prioritized distribution in the citizens broadband radio spectrum
Zhou et al. Communications, caching, and computing for next generation HetNets
Zheng et al. 5G network-oriented hierarchical distributed cloud computing system resource optimization scheduling and allocation
US20240031427A1 (en) Cloud-network integration oriented multi-access edge computing architecture
Haitao et al. Multipath transmission workload balancing optimization scheme based on mobile edge computing in vehicular heterogeneous network
Garg et al. SDN-NFV-aided edge-cloud interplay for 5G-envisioned energy internet ecosystem
CN114531478A (en) Method, apparatus and computer program product for edge resource aggregation
Yang et al. Cooperative task offloading for mobile edge computing based on multi-agent deep reinforcement learning
Lin et al. Deep reinforcement learning-based task scheduling and resource allocation for NOMA-MEC in Industrial Internet of Things
KR101924628B1 (en) Apparatus and Method for controlling traffic offloading
Wu et al. Dynamic handoff policy for RAN slicing by exploiting deep reinforcement learning
KR102391956B1 (en) Coalitional Method for Optimization of Computing Offloading in Multiple Access Edge Computing (MEC) supporting Non-Orthogonal Multiple Access (NOMA)
Huang et al. An efficient spectrum scheduling mechanism using Markov decision chain for 5G mobile network
Malazi et al. Distributed service placement and workload orchestration in a multi-access edge computing environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant