CN115967962B - Intelligent super-surface-assisted end-edge collaborative computing migration method and system


Info

Publication number: CN115967962B (application CN202211683047.3A; earlier publication CN115967962A)
Authority: CN (China)
Legal status: Active (granted)
Other languages: Chinese (zh)
Inventors: 郭棉, 柳秀山, 丁家俊, 谭龙, 许乘源, 曾繁成, 郭发
Current Assignee / Original Assignee: Guangdong Polytechnic Normal University
Events: application filed by Guangdong Polytechnic Normal University; priority to CN202211683047.3A; publication of CN115967962A; application granted; publication of CN115967962B

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses an intelligent super-surface-assisted end-edge collaborative computing migration method and system. In the method, an Internet of things terminal set receives computing tasks issued by an application program and sends the task information and local resource information of the computing tasks to a migration decision controller; the migration decision controller executes a computing migration decision process, generates a decision result, and sends it to the Internet of things terminal set, the intelligent super-surface controller and the edge server; the Internet of things terminal set obtains its task processing instructions and executes the task processing process; the intelligent super-surface controller obtains the phase shift information and configures the phase shifts of the reflecting units; and the edge server obtains the edge computing decision information, executes the task processing process, and returns the processing results to the Internet of things terminal set. The invention improves wireless transmission performance and resource utility, can meet the low-delay requirements of Internet of things applications, and can be widely applied to the technical field of edge computing.

Description

Intelligent super-surface-assisted end-edge collaborative computing migration method and system
Technical Field
The invention relates to the technical field of edge computing, in particular to an intelligent super-surface-assisted end-edge collaborative computing migration method and system.
Background
With the development of intelligent technology, devices and products in a smart factory are embedded with sensors and communication modules and are even endowed with a certain computing capability; at the same time, high-definition cameras, industrial robots and the like with communication capability and some computing capability are distributed throughout the factory. In Internet of things technology, such devices, products and robots with communication and computing capabilities are called Internet of things terminals. During industrial production, Internet of things terminals generate and collect massive amounts of data. To mine the maximum value of these data, they must be processed quickly and in a timely manner. Taking remote operation as an example, after a remote controller sends a control command to the devices and industrial robots in a factory, the execution status of the command and the environmental state of the factory should be collected as soon as possible, and data analysis based on artificial intelligence algorithms is then performed to quickly make the next control decision.
In the traditional cloud computing mode, data generated by Internet of things terminals must traverse the access network, the core network and so on to reach a remote cloud computing center, so the data often experience network delays of hundreds of milliseconds or more, which makes it difficult to meet the low-delay requirements of industrial Internet of things applications. The edge computing concept that has emerged in recent years provides a new data-processing paradigm for industrial Internet of things applications. In edge computing, data generated by a data source are migrated to nearby edge servers at the edge of the network or computed in local computing nodes, so the network delay of data transmission is effectively reduced through computing migration. However, compared with cloud computing, the computing capabilities of Internet of things terminals and edge servers are limited, and a single node can hardly meet the high-performance computing requirements of massive data. End-edge collaboration is therefore needed: part of the data is processed locally at the Internet of things terminal while the remainder is migrated to an edge server for processing, and the overall edge computing capability is improved through this collaboration.
Researchers have actively studied end-edge collaborative edge computing strategies with goals such as minimizing delay or energy consumption. However, most existing end-edge collaborative computing migration methods consider only the computing migration policy, or jointly optimize the computing migration policy with the wireless bandwidth and/or the resource allocation of the edge server. Existing approaches usually assume that the channel quality of the direct link from the Internet of things terminal to the base station can support uploading the data/computing tasks, or that this requirement can be met by increasing the bandwidth allocation. In a real edge computing environment, however, an obstacle or an excessive distance between the Internet of things terminal and the base station can make the path loss of the direct link very large and the wireless link quality very poor, so the data/computing task cannot be uploaded over the direct link and can only be computed locally. If the local resources also fail to meet the task's computing requirements, the task demand cannot be satisfied. In addition, the joint optimization of the computing migration strategy and resource allocation for distributed data sources/Internet of things terminals is already a challenging problem, and adding the joint optimization of the wireless link makes solving the low-delay end-edge collaborative computing migration problem even more challenging.
Disclosure of Invention
In view of this, the embodiment of the invention provides an intelligent super-surface assisted end-edge collaborative computing migration method and system, so as to improve wireless transmission performance, improve resource utility of an edge computing system and meet low-delay requirements of internet of things application.
An aspect of the embodiments of the invention provides an intelligent super-surface-assisted end-edge collaborative computing migration method, which comprises the following steps:
S101, an Internet of things terminal set receives a computing task submitted by a user running an application program, and the task information and local resource information of the computing task are sent to a migration decision controller, wherein the task information comprises the size of the task in bits or bytes and the size of the task in computing resource requirements, and the local resource information comprises the available computing resources of the Internet of things terminal;
S102, after receiving the task information and the local resource information, the migration decision controller executes a computing migration decision process and generates a decision result, wherein the decision result comprises the computing migration strategy of the computing task, the computing resources allocated to the computing task, and the phase shift of the intelligent super-surface; the phase shift information of the intelligent super-surface in the decision result is sent to the intelligent super-surface controller, and the computing migration strategy of the computing task and the information on the computing resources allocated to the computing task are sent to the Internet of things terminal set and the edge server;
S103, after receiving the phase shift information of the intelligent super-surface in the decision result, the intelligent super-surface controller configures the phase shifts of the reflecting units;
S104, the Internet of things terminal set executes a task processing process according to the received decision result information;
S105, the edge server executes a task processing process according to the received decision result information and returns the processing result to the Internet of things terminal set;
wherein the computing migration decision process executed by the migration decision controller in S102 comprises:
S201, setting the initialized computing migration strategy of each task in the task set to edge computing, and updating the edge computing set and the local computing set;
S202, calculating the system average delay and marking it as the system average delay before the decision update;
S203, selecting from the edge computing set the edge computing task whose obtained upload rate is the lowest among all tasks in the edge computing set whose computing migration strategy is undetermined;
S204, updating the computing migration strategy of the edge computing task to local computing, and migrating the edge computing task from the edge computing set to the local computing set;
S205, executing the edge computing resource configuration strategy on the edge computing set, and determining the phase shift of the intelligent super-surface, the upload rates of the tasks in the edge computing set, the computing resources allocated by the edge server to the tasks in the edge computing set, and the end-to-end delays of the tasks in the edge computing set;
S206, executing the local computing resource configuration strategy on the local computing set, and determining the end-to-end delays of the tasks in the local computing set;
S207, calculating the new system average delay;
S208, judging whether the new system average delay is not lower than the system average delay before the decision update: if so, go to S209; if not, go to S210;
S209, restoring the computing migration strategy of the task selected in S203 and updated in S204 to edge computing, migrating the task from the local computing set back to the edge computing set, marking its computing migration strategy as determined, and going to S210;
S210, judging whether every task in the edge computing set has been marked as having its computing migration strategy determined: if so, go to S211; if not, return to S202;
S211, determining the current computing migration strategy as the computing migration strategy result for the tasks of the Internet of things terminal set, determining the current edge computing set and local computing set as the edge computing set and the local computing set of the tasks of the Internet of things terminal set, executing the edge computing resource configuration strategy on the edge computing set, and determining the phase shift information of the intelligent super-surface and the computing resources allocated by the edge server to the tasks in the edge computing set; and executing the local computing resource configuration strategy on the local computing set, and determining the computing resources allocated by the local Internet of things terminals to the tasks whose computing migration strategy is local computing.
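For clarity, the following is a minimal Python sketch of the greedy decision loop S201–S211 as described above; the helpers edge_delays, local_delays and upload_rate are hypothetical stand-ins for the edge/local resource configuration strategies and the rate computation, not part of the claimed method.

```python
def migration_decision(tasks, edge_delays, local_delays, upload_rate):
    """Greedy end-edge partitioning sketch (S201-S211).

    edge_delays(edge_set) / local_delays(local_set) are assumed to return a
    dict {task: end-to-end delay} after running the corresponding resource
    configuration strategy; upload_rate(task) returns the task's upload rate.
    """
    edge_set, local_set, determined = set(tasks), set(), set()      # S201

    while edge_set - determined:                                    # S210
        d_pre = _system_average_delay(edge_delays(edge_set),
                                      local_delays(local_set))      # S202
        m = min(edge_set - determined, key=upload_rate)             # S203
        edge_set.discard(m); local_set.add(m)                       # S204
        d_new = _system_average_delay(edge_delays(edge_set),
                                      local_delays(local_set))      # S205-S207
        if d_new >= d_pre:                                          # S208
            local_set.discard(m); edge_set.add(m)                   # S209: revert
            determined.add(m)                                       # mark as determined

    return edge_set, local_set                                      # S211


def _system_average_delay(edge_d, local_d):
    """Mean end-to-end delay over all tasks in the task set."""
    delays = list(edge_d.values()) + list(local_d.values())
    return sum(delays) / len(delays)
```

Under this sketch, a task stays local only if moving it lowers the system average delay; otherwise it is restored to edge computing and marked as determined, matching S208–S210.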
Optionally, performing the edge computing resource configuration strategy on the edge computing set comprises:
determining the phase shift of the intelligent super-surface, and calculating the upload delay of each task in the edge computing set;
allocating the computing resources of the edge server to each task in the edge computing set, and calculating the edge computing processing delay of each task;
and calculating the end-to-end delay of each task in the edge computing set.
Optionally, the determining the phase shift information of the intelligent super surface includes:
calculating the transmission gain factor of each task in the edge computing set;
the transmission gain factor is calculated as $a_m = \dfrac{h_{m,0}^{H}}{\left\| h_{m,0} \right\|}$,
where m denotes the m-th task in the edge computing set, the complex number $h_{m,0}$ denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, $(X)^{H}$ denotes the conjugate transpose of X, and $\|X\|$ denotes the norm of X;
determining the sum of the direct-link transmission channel gains of the tasks in the edge computing set as the channel gain sum before the phase shift optimization of the reflecting array;
the channel gain sum of the direct transmission links is calculated as $\mathrm{SNR}_{tot,pre} = \sum_{m \in \mathcal{M}_0} \left| h_{m,0} \right|^{2}$,
where $\mathrm{SNR}_{tot,pre}$ denotes the channel gain sum before the phase shift optimization of the reflecting array, $\mathcal{M}_0$ denotes the edge computing set, m denotes the m-th task in the edge computing set, the complex number $h_{m,0}$ denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, and $|X|^{2}$ denotes the square of the modulus of the complex number X;
and calculating, for each task in the edge computing set, the task-based phase shift of the intelligent super-surface according to the transmission gain factors and the channel gain sum before the phase shift optimization of the reflecting array.
Optionally, calculating, for each task in the edge computing set, the task-based phase shift of the intelligent super-surface according to the transmission gain factors and the channel gain sum before the phase shift optimization of the reflecting array comprises:
marking the phase shift (argument) of the product of the conjugate transpose of the channel coefficient of the task's direct wireless link and its transmission gain factor as the 0th phase shift parameter;
for each reflecting unit in the reflecting array of the intelligent super-surface, taking the phase shift corresponding to the channel coefficient of the wireless link from the reflecting unit to the base station as the 1st phase shift parameter for the corresponding task;
taking the phase shift corresponding to the product of the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the reflecting unit of the intelligent super-surface and the transmission gain factor of the task as the 2nd phase shift parameter for the task;
taking the 0th phase shift parameter minus the 1st and 2nd phase shift parameters as the phase shift of the reflecting unit based on the current task;
arranging the exponential form of the phase shift of each reflecting unit in the intelligent super-surface reflecting array, in the position order of the reflecting units, into a diagonal matrix, and marking the diagonal matrix as the reflection coefficient matrix based on the current task;
calculating the channel gain sum under the reflection coefficient matrix based on the current task;
the channel gain sum of the reflection coefficient matrix $\Lambda_m$ based on the current task m is calculated as $\mathrm{SNR}_{tot}(\Lambda_m) = \left| h_{m,0} + g^{H}\Lambda_m h^{r}_{m} \right|^{2} + \sum_{m' \in \mathcal{M}_0 \setminus \{m\}} \left| h_{m',0} + g^{H}\Lambda_m h^{r}_{m'} \right|^{2}$,
where g is an N×1 complex vector denoting the channel coefficient vector from the intelligent super-surface reflecting array to the base station, the natural number N denotes that the reflecting array has N reflecting units, $h^{r}_{m}$ and $h^{r}_{m'}$ are N×1 complex vectors denoting the channel coefficient vectors from tasks m and m' to the reflecting array, respectively, and the summation runs over the tasks m' in the edge computing set $\mathcal{M}_0$ other than m;
selecting, from the set of task-based reflection coefficient matrices of the intelligent super-surface, the reflection coefficient matrix that maximizes the channel gain sum;
and judging whether the channel gain sum of this reflection coefficient matrix is larger than the channel gain sum before the phase shift optimization of the reflecting array: if so, taking this reflection coefficient matrix as the reflection coefficient matrix of the intelligent super-surface and the corresponding phase shifts as the phase shifts of the intelligent super-surface; if not, keeping the most recent reflection coefficient matrix of the intelligent super-surface as the reflection coefficient matrix of the intelligent super-surface and its corresponding phase shifts as the phase shifts of the intelligent super-surface.
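As an illustration only, the following NumPy sketch mirrors the per-task reflection-matrix construction and the selection step just described; it assumes the standard reflected-link model in which a task's effective channel is its direct coefficient plus $g^{H}\Lambda h^{r}$, and the variable names (h_direct, h_ris, gain factor a, etc.) are illustrative rather than taken from the patent.

```python
import numpy as np

def task_based_reflection_matrix(h_direct_m, h_ris_m, g, a_m):
    """Reflection coefficient matrix built from one task's channels (sketch).

    h_direct_m: direct terminal->base-station coefficient (complex scalar);
    h_ris_m:    terminal->reflecting-array channel vector, shape (N,);
    g:          reflecting-array->base-station channel vector, shape (N,);
    a_m:        transmission gain factor of the task.
    """
    theta0 = np.angle(np.conj(h_direct_m) * a_m)   # 0th phase shift parameter
    theta1 = np.angle(g)                           # 1st parameter, one per unit
    theta2 = np.angle(h_ris_m * a_m)               # 2nd parameter, one per unit
    theta = theta0 - theta1 - theta2               # per-unit phase shift
    return np.diag(np.exp(1j * theta))             # diagonal reflection matrix


def channel_gain_sum(Lam, h_direct, h_ris, g):
    """Sum of effective channel gains over the edge set for one candidate
    matrix, assuming effective channel = direct path + reflected path."""
    return sum(abs(h_direct[m] + np.conj(g) @ Lam @ h_ris[m]) ** 2
               for m in h_direct)


def select_reflection_matrix(h_direct, h_ris, g, a, Lam_current=None):
    """Pick the task-based candidate with the largest gain sum and keep it only
    if it beats the gain sum before optimization (here: the direct links)."""
    snr_pre = sum(abs(h) ** 2 for h in h_direct.values())
    candidates = {m: task_based_reflection_matrix(h_direct[m], h_ris[m], g, a[m])
                  for m in h_direct}
    best = max(candidates,
               key=lambda m: channel_gain_sum(candidates[m], h_direct, h_ris, g))
    if channel_gain_sum(candidates[best], h_direct, h_ris, g) > snr_pre:
        return candidates[best]
    return Lam_current   # keep the most recent reflection matrix unchanged
```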
Optionally, the calculating the uploading delay of each task in the edge calculating set specifically includes:
determining the uploading delay of the task according to the ratio of the size of the task in bits or bytes to the uploading rate of the task;
the computing resources of the edge server are allocated to each task in the edge computing set, specifically:
for tasks in the edge computing set, computing resources of the edge server are distributed according to the ratio of the size of computing resources of each task to the total computing resource demand size;
the edge calculation processing delay of each task is calculated, specifically:
determining edge calculation processing delay of a task according to the ratio of the size of the calculation resources of the task to the size of the calculation resources allocated to the task by an edge server;
the end-to-end delay of each task in the edge computing set is calculated, specifically:
and determining the end-to-end delay of each task according to the sum of the uploading delay and the edge calculation processing delay of the task.
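A minimal sketch of the delay bookkeeping just described, assuming a total edge-server capacity F_edge and per-task dictionaries for size, resource demand and upload rate (illustrative names, not the patent's notation):

```python
def edge_allocation_and_delays(size_bits, demand, rate, F_edge):
    """Proportional edge allocation plus upload, processing and end-to-end delays.

    size_bits[m]: task size in bits or bytes; demand[m]: computing-resource
    requirement; rate[m]: upload rate; F_edge: total edge-server resources.
    """
    total_demand = sum(demand.values())
    alloc = {m: F_edge * demand[m] / total_demand for m in demand}   # fair share
    d_up = {m: size_bits[m] / rate[m] for m in size_bits}            # upload delay
    d_proc = {m: demand[m] / alloc[m] for m in demand}               # processing delay
    d_e2e = {m: d_up[m] + d_proc[m] for m in size_bits}              # end-to-end delay
    return alloc, d_e2e
```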
Optionally, the task processing instruction in the decision result is acquired through the internet of things terminal set, and a task processing process is executed, including:
judging whether the migration decision is local computing: if so, executing the task according to the size of the computing resources that the decision result allocates, at the local Internet of things terminal, to the task whose strategy is local computing, and returning the processing result to the application program; if not, sending the local task to the edge server for processing through the intelligent super-surface-assisted wireless communication system, and returning the result returned by the edge server to the application program.
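A trivial sketch of this terminal-side branch, with run_locally, offload_via_ris and decision as assumed placeholders for the local execution routine, the intelligent super-surface-assisted upload, and the received decision result:

```python
def process_task_at_terminal(task, decision, run_locally, offload_via_ris):
    """Dispatch one task according to the received migration decision."""
    if decision[task].is_local:
        # local computing with the resources assigned in the decision result
        result = run_locally(task, cpu=decision[task].local_resources)
    else:
        # upload over the intelligent super-surface-assisted link; the edge
        # server returns the processing result
        result = offload_via_ris(task)
    return result   # returned to the application program
```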
In another aspect of the embodiment of the present invention, there is provided an intelligent subsurface-assisted end-edge collaborative computing migration system, including:
the system comprises an Internet of things terminal set, a migration decision controller and a migration decision controller, wherein the Internet of things terminal set is used for receiving a computing task issued by an application program and sending task information and local resource information of the computing task to the migration decision controller; acquiring a task processing instruction in the decision result, and executing a task processing process;
The migration decision controller is used for executing a calculation migration decision process after receiving the task information and the local resource information, generating a decision result and sending the decision result to the terminal set of the Internet of things, the intelligent super-surface controller and the edge server;
the intelligent super-surface controller is used for acquiring phase shift information in the decision result and configuring the phase shift of the reflecting unit;
the edge server is used for acquiring edge calculation decision information in the decision result, executing a task processing process and returning a processing result to the terminal set of the Internet of things;
the migration decision controller is specifically configured to:
setting an initialization migration strategy of each task in a task set as edge calculation, and updating an edge calculation set and a local calculation set;
calculating the system average delay, and marking the system average delay as the system average delay before decision updating;
selecting an edge computing task from the edge computing set, updating the migration decision of the edge computing task to local computing, and migrating the edge computing task from the edge computing set to the local computing set; the upload rate obtained by the edge computing task is the lowest among the non-traversed tasks in the edge computing set;
Executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift of the intelligent super surface, uploading rate of tasks in the edge computing set, computing resources allocated to the tasks in the edge computing set by an edge server and end-to-end delay of the tasks in the edge computing set;
executing a local computing resource configuration strategy on the local computing set, and determining end-to-end delay of tasks in the local computing set;
calculating a new system average delay, and when the new system average delay is not lower than the system average delay before decision updating, restoring the migration decision of the edge calculation task into edge calculation, and migrating the edge calculation task from a local calculation set back to an edge calculation set, and marking the edge calculation task as traversed;
updating the edge computing set and the local computing set according to the current migration decision once all tasks have been traversed, executing the edge computing resource configuration strategy on the edge computing set, and determining the phase shift information of the intelligent super-surface and the computing resources allocated by the edge server to the tasks in the edge computing set; and executing the local computing resource configuration strategy on the local computing set, and determining the computing resources allocated by the local Internet of things terminals to the tasks whose strategy is local computing, thereby completing the computing migration decision process.
Another aspect of the embodiment of the invention also provides an electronic device, which includes a processor and a memory;
the memory is used for storing programs;
the processor executes the program to implement the method as described above.
Another aspect of the embodiments of the present invention also provides a computer-readable storage medium storing a program that is executed by a processor to implement a method as described above.
Another aspect of embodiments of the invention also provides a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
According to the embodiments of the invention, the Internet of things terminal set receives computing tasks issued by the application program and sends their task information and local resource information to the migration decision controller; after receiving the task information and the local resource information, the migration decision controller executes the computing migration decision process, generates a decision result, and sends it to the Internet of things terminal set, the intelligent super-surface controller and the edge server; the Internet of things terminal set obtains its task processing instructions from the decision result and executes the task processing process; the intelligent super-surface controller obtains the phase shift information from the decision result and configures the phase shifts of the reflecting units; and the edge server obtains the edge computing decision information from the decision result, executes the task processing process, and returns the processing results to the Internet of things terminal set. The invention improves wireless transmission performance, improves the resource utility of the edge computing system, and can meet the low-delay requirements of Internet of things applications.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a general flow chart of an intelligent subsurface assisted end-edge collaborative computing migration method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of an edge computing system according to an embodiment of the present invention;
FIG. 3 is a flow chart of a migration decision calculation provided by an embodiment of the present invention;
FIG. 4 is a flowchart of an edge computing resource allocation policy provided by an embodiment of the present invention;
FIG. 5 is a flowchart of determining an intelligent subsurface phase shift for an intelligent subsurface-assisted end-edge collaborative computing migration method provided by an embodiment of the present invention;
fig. 6 is a flowchart of task processing executed by the terminal of the internet of things according to the decision result provided by the embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Aiming at the problems in the prior art, the invention provides an intelligent super-surface-assisted end-edge collaborative computing migration method in which an intelligent super-surface reflecting array is placed between the Internet of things terminals and the edge server, and the computing migration strategy, the phase shift of the intelligent super-surface, and the computing resources of the local computing nodes and the edge server are jointly optimized, so that wireless transmission performance is improved, the resource utility of the edge computing system is improved, and the low-delay requirements of Internet of things applications are met.
The intelligent super-surface-assisted end-edge collaborative computing migration method provided by the invention relies on an edge computing system. The edge computing system comprises an Internet of things terminal set, a base station, an edge server, an intelligent super-surface reflecting array, a migration decision controller, and an intelligent super-surface controller. The Internet of things terminal set comprises one or more Internet of things terminals with computing capability; the intelligent super-surface array is placed between the Internet of things terminals and the base station so that reflected beams/links are formed between the Internet of things terminals and the base station; the edge server is located at the base station; and the Internet of things terminals communicate with the edge server through the base station. When an Internet of things terminal receives a computing task from an application, the intelligent super-surface-assisted end-edge collaborative computing migration process is triggered, and the method comprises the following steps:
And S101, the terminal set of the Internet of things receives a calculation task from an application and sends task information and local resource information to the migration decision controller.
And S102, after receiving the task information and the local resource information of the terminal set of the Internet of things, the migration decision controller executes a calculation migration decision process, determines a calculation migration strategy of a task in the task set, calculation resources allocated to the task and phase shift of the intelligent super surface, returns the decision result to the terminal set of the Internet of things, sends the phase shift information of the intelligent super surface to the intelligent super surface controller, and sends decision information related to edge calculation to an edge server.
And S103, the intelligent super-surface controller sets the phase shift of the reflecting unit according to the phase shift information.
S104, executing a task processing process by the Internet of things terminal in the Internet of things terminal set according to the decision result.
And S105, the edge server executes a task processing process according to the edge calculation related decision information, and returns a result to the terminal of the Internet of things.
The calculation migration decision process of the migration decision controller in step S102 specifically includes the following steps:
s201, setting an initialization migration strategy of a task in a task set as edge calculation, and updating the edge calculation set and a local calculation set.
S202: calculating a system average delay and marking the system average delay as a system average delay before decision updating.
S203, selecting a task from the edge calculation set, wherein the uploading rate obtained by the task is the lowest among the tasks in the edge calculation set that are not marked as traversed.
And S204, updating the migration decision of the task into local calculation, and migrating the task from the edge calculation set to the local calculation set.
S205, executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift of the intelligent super surface, uploading rate of tasks in the edge computing set, computing resources allocated to the tasks in the edge computing set by an edge server and end-to-end delay of the tasks in the edge computing set.
S206, executing a local computing resource configuration strategy on the local computing set, and determining the end-to-end delay of the tasks in the local computing set.
S207, calculating the average delay of the system.
S208, judging whether the system average delay is not lower than the system average delay before decision updating: if yes, go to step S209; if not, go to step S210.
S209, restoring the migration decision to the decision before decision updating, namely restoring the migration decision of the task in the step S204 to edge calculation, and migrating the task from the local calculation set to the edge calculation set, marking the task as traversed, and proceeding to the step S210.
S210, judging whether each task in the edge calculation set is marked as traversed or not: if yes, go to step S211; if not, the process goes to step S202.
S211, updating an edge computing set and a local computing set according to a current migration decision, executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift of the intelligent super surface and computing resources allocated to tasks in the edge computing set by an edge server; and executing a local computing resource configuration strategy on the local computing set, determining the computing resource allocated to the task of which the strategy is local computing by the local Internet of things terminal, and ending the computing migration decision process.
Optionally, the task information in the step S101 or S102 includes a size of the task in bits or bytes and a size of the task in computing resource requirements, and the local resource information includes a size of computing resources available to the terminal of the internet of things.
Optionally, the method for updating the edge calculation set and the local calculation set in step S201 or S211 is as follows: for each task in the task set, reading a calculation migration strategy of the task, and judging whether the calculation migration strategy is local calculation or not: if yes, the task is put into a local computing set; and if not, putting the task into an edge calculation set.
Optionally, the specific method for executing the edge computing resource configuration policy on the edge computing set in step S205 or S211 is as follows:
s301, determining phase shift of the intelligent super surface, and calculating uploading delay of tasks in the edge calculation set.
And S302, distributing computing resources of an edge server for the tasks in the edge computing set, and computing edge computing processing delay of the tasks.
S303, calculating end-to-end delay of tasks in the edge calculation set.
The method for determining the intelligent super-surface phase shift in the step S301 specifically includes:
S401, calculating the transmission gain factor of each task in the edge computing set according to the formula $a_m = \dfrac{h_{m,0}^{H}}{\left\| h_{m,0} \right\|}$, where m denotes the m-th task in the edge computing set, the complex number $h_{m,0}$ denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, $(X)^{H}$ denotes the conjugate transpose of X, and $\|X\|$ denotes the norm of X.
S402, recording the sum of the direct-link transmission channel gains of the tasks in the edge computing set as the channel gain sum before the phase shift optimization of the reflecting array, i.e. marking $\mathrm{SNR}_{tot,pre} = \sum_{m \in \mathcal{M}_0} \left| h_{m,0} \right|^{2}$ as the channel gain sum before the phase shift optimization of the reflecting array, where $\mathcal{M}_0$ denotes the edge computing set, m denotes the m-th task in the edge computing set, the complex number $h_{m,0}$ denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, and $|X|^{2}$ denotes the square of the modulus of the complex number X.
S403, for each task in the edge computing set, calculating the task-based phase shift of the intelligent super-surface as follows:
S403-1, marking the phase shift of the product of the conjugate transpose of the channel coefficient of the task's direct wireless link and its transmission gain factor as the 0th phase shift parameter, i.e. letting $\theta^{0}_{m} = \arg\!\left( h_{m,0}^{H} a_m \right)$, where $\theta^{0}_{m}$ denotes the 0th phase shift parameter for task m, $\arg(X)$ denotes the phase (argument) of the complex number X, and $\mathcal{M}_0$ denotes the edge computing set with $m \in \mathcal{M}_0$.
S403-2, for each reflecting unit in the reflecting array of the intelligent super-surface, calculating the task-based phase shift as follows:
1) taking the phase shift corresponding to the channel coefficient of the wireless link from the reflecting unit to the base station as the 1st phase shift parameter for the task, i.e. letting $\theta^{1}_{m,n} = \arg\!\left( g_n \right)$, where $\theta^{1}_{m,n}$ denotes the 1st phase shift parameter of reflecting unit n of the intelligent super-surface reflecting array based on task m, the complex number $g_n$ denotes the channel coefficient of the wireless link from the reflecting unit to the base station, and $\mathcal{N}$ denotes the set of reflecting units in the intelligent super-surface reflecting array with $n \in \mathcal{N}$.
2) taking the phase shift corresponding to the product of the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the reflecting unit of the intelligent super-surface and the transmission gain factor of the task as the 2nd phase shift parameter for the task, i.e. letting $\theta^{2}_{m,n} = \arg\!\left( h^{r}_{m,n}\, a_m \right)$, where $\theta^{2}_{m,n}$ denotes the 2nd phase shift parameter of reflecting unit n of the intelligent super-surface reflecting array based on task m, and the complex number $h^{r}_{m,n}$ denotes the channel coefficient of the direct wireless link from Internet of things terminal m to reflecting unit n of the intelligent super-surface.
3) taking the difference of the 0th phase shift parameter minus the 1st and 2nd phase shift parameters as the task-based phase shift of the reflecting unit, i.e. letting $\theta_{m,n} = \theta^{0}_{m} - \theta^{1}_{m,n} - \theta^{2}_{m,n}$, where $\theta_{m,n}$ denotes the phase shift of reflecting unit n based on task m.
S403-3, arranging the exponential form of the phase shift of each reflecting unit in the intelligent super-surface reflecting array, in the position order of the reflecting units, into a diagonal matrix, and marking the diagonal matrix as the reflection coefficient matrix based on the task, i.e. expressing the task-m-based reflection coefficient matrix as $\Lambda_m = \mathrm{diag}\!\left( e^{j\theta_{m,1}}, e^{j\theta_{m,2}}, \ldots, e^{j\theta_{m,N}} \right)$, where $e^{j\theta_{m,n}}$ denotes the exponential form of the phase shift of the n-th reflecting unit in the intelligent super-surface reflecting array and $\mathrm{diag}(X)$ denotes the diagonal matrix formed from X.
S403-4, calculating the channel gain sum of the task-based reflection coefficient matrix through the formula $\mathrm{SNR}_{tot}(\Lambda_m) = \left| h_{m,0} + g^{H}\Lambda_m h^{r}_{m} \right|^{2} + \sum_{m' \in \mathcal{M}_0 \setminus \{m\}} \left| h_{m',0} + g^{H}\Lambda_m h^{r}_{m'} \right|^{2}$, where g is an N×1 complex vector denoting the channel coefficient vector from the intelligent super-surface reflecting array to the base station, the natural number N denotes that the reflecting array has N reflecting units, $h^{r}_{m}$ and $h^{r}_{m'}$ are N×1 complex vectors denoting the channel coefficient vectors from tasks m and m' to the reflecting array, respectively, and the summation runs over the tasks m' in the edge computing set $\mathcal{M}_0$ other than m.
S404, selecting one reflection coefficient matrix from the set of task-based reflection coefficient matrices of the intelligent super-surface, namely the matrix that maximizes the channel gain sum: the reflection coefficient matrix $\Lambda_{m'}$ based on task m' is selected if its channel gain sum satisfies $\Lambda_{m'} = \arg\max_{\Lambda_m,\, m \in \mathcal{M}_0} \mathrm{SNR}_{tot}(\Lambda_m)$, where $\arg\max_{x} Y$ denotes finding, over the set of variables x, the variable that maximizes Y.
S405, judging whether the channel gain sum of this reflection coefficient matrix is larger than the channel gain sum before the phase shift optimization of the reflecting array, i.e. judging whether $\mathrm{SNR}_{tot}(\Lambda_{m'}) > \mathrm{SNR}_{tot,pre}$ holds: if so, go to step S406; if not, go to step S407.
S406, taking this reflection coefficient matrix as the reflection coefficient matrix of the intelligent super-surface and the corresponding phase shifts as the phase shifts of the intelligent super-surface, i.e. letting $\Lambda = \Lambda_{m'}$ and $\Theta \triangleq \{\theta_{m',1}, \ldots, \theta_{m',N}\}$, where the symbol $\triangleq$ denotes a definition; updating the channel gain sum before the phase shift optimization of the reflecting array to the channel gain sum of this reflection coefficient matrix, i.e. letting $\mathrm{SNR}_{tot,pre} = \mathrm{SNR}_{tot}(\Lambda_{m'})$; updating the transmission gain factors of the tasks in the edge computing set according to the formula $a_m = \dfrac{\left( h_{m,0} + g^{H}\Lambda h^{r}_{m} \right)^{H}}{\left\| h_{m,0} + g^{H}\Lambda h^{r}_{m} \right\|}$; and returning to step S403.
S407, taking the most recent reflection coefficient matrix of the intelligent super-surface as the reflection coefficient matrix of the intelligent super-surface, taking the corresponding phase shifts as the phase shifts of the intelligent super-surface, and ending the process.
The method for calculating the upload delay of a task in the edge computing set in step S301 is as follows: the upload delay of a task in the edge computing task set is determined by the ratio of the size of the task in bits or bytes to the upload rate of the task, wherein the upload rate of the task is determined by the formula $R_m = B \log_2\!\left( 1 + \dfrac{p_m \left| h_{m,0} + g^{H}\Lambda h^{r}_{m} \right|^{2}}{\sigma} \right)$, where B is the link bandwidth, $p_m$ is the transmit power of the Internet of things terminal m that generated the task, and σ is the channel noise power.
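As a hedged illustration of the rate formula above (a Shannon-capacity expression over the combined direct and reflected channel, which is an assumption of this sketch rather than a quotation of the patent):

```python
import numpy as np

def upload_rate(B, p_m, h_direct_m, h_ris_m, g, Lam, noise):
    """Upload rate of one task over the RIS-assisted link (sketch).

    B: link bandwidth; p_m: terminal transmit power; noise: channel noise
    power; the effective channel is assumed to be direct + reflected path.
    """
    h_eff = h_direct_m + np.conj(g) @ Lam @ h_ris_m
    return B * np.log2(1 + p_m * abs(h_eff) ** 2 / noise)
```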
In the step S302, the method for allocating computing resources of the edge server to the tasks in the edge computing set includes: for a task in an edge computing set, computing resources of an edge server are fairly allocated in a ratio of the size of its computing resources to the total computing resource demand size, i.e., the computing resources allocated to the task are the product of the ratio and the total available computing resources of the edge server, where the total computing resource demand size is the sum of the sizes of the computing resources of the tasks in the edge computing set.
The method for calculating the edge calculation processing delay of the task in step S302 is as follows: the edge computing processing delay of a task in an edge computing task set is determined by the ratio of the size of the computing resources of the task to the size of the computing resources allocated to the task by an edge server.
The method for calculating the end-to-end delay of the task in the edge calculation set in step S303 is as follows: for a task in the edge computation set, its end-to-end delay is determined by the sum of the upload delay of the task and the edge computation processing delay.
The method for executing the local computing resource configuration strategy on the local computing set in the step S206 or S211 specifically comprises the following steps: and for the tasks in the local computing set, distributing all locally available computing resources of the Internet of things terminal generating the tasks to the tasks.
The method for determining the end-to-end delay of the tasks in the local computing set comprises the following steps: for a task in a local computing set, the end-to-end delay of the task is determined by the ratio of the size of the computing resource requirement of the task to the size of the computing resource allocated to the task by the Internet of things terminal generating the task.
Optionally, the method for calculating the average delay of the system in step S202 or S207 is as follows: and taking the average value of the end-to-end delays of the tasks in the task set as the average delay of the system.
Optionally, the specific method for the intelligent super-surface controller to set the phase shifts of the reflecting units according to the phase shift information in step S103 is as follows: setting the phase shifts of the reflecting units in the intelligent super-surface reflecting array according to the phase shift values in the phase shift information.
Optionally, the task processing process executed by the internet of things terminal in the internet of things terminal set in step S104 according to the decision result includes the following steps:
s501, judging whether the migration decision is local calculation or not: if yes, go to step S502; if not, the process goes to step S503.
S502, executing the task according to the size of the computing resources allocated, in the decision result, by the local Internet of things terminal to the locally computed task, and returning the processing result to the application.
And S503, the local task is sent to the edge server for processing through the intelligent super-surface assisted wireless communication system, and the result returned by the edge server is returned to the application.
Optionally, the task processing process executed by the edge server in step S105 according to the decision information related to the edge calculation specifically includes: the tasks are performed on each task in the edge computing set in accordance with the size of its assigned computing resources.
Compared with the prior art, the invention has the following advantages:
1. according to the intelligent super-surface-assisted end-edge collaborative computing migration method disclosed by the invention, the wireless transmission performance between the terminal and the base station of the Internet of things is enhanced in a mode of placing the intelligent super-surface reflection array between the terminal and the base station of the Internet of things and optimizing the phase shift of the intelligent super-surface, and the mode of enhancing the wireless transmission performance by reflecting signals through the intelligent super-surface reflection array does not need to consume extra transmitting power, so that the mode not only reduces data uploading delay, but also improves energy efficiency.
2. According to the intelligent super-surface-assisted end-edge collaborative computing migration method disclosed by the invention, the computing tasks of multiple users are cooperatively processed in a mode of jointly optimizing computing migration strategies, phase shifting of the intelligent super-surface and computing resource allocation of the end edges, so that the utilization rate of computing resources of the end and the edge is improved, and the end-to-end delay of the computing tasks is reduced.
3. According to the intelligent super-surface-assisted end-edge collaborative computing migration method disclosed by the invention, the algorithm complexity is reduced by searching an optimized computing migration decision, intelligent super-surface phase shift and computing resource configuration mode through iteration and a distributed algorithm, so that the algorithm can be quickly executed and solved in an edge computing system with limited computing capacity.
The implementation process of the terminal edge collaborative computing migration method in the specific application scene is described in detail below with reference to the attached drawings of the specification:
example 1
Fig. 1 is a general flow diagram of the intelligent super-surface-assisted end-edge collaborative computing migration method provided by the embodiment of the invention. In combination with fig. 2, the method relies on an edge computing system comprising an Internet of things terminal set, a base station, an edge server, an intelligent super-surface reflecting array, a migration decision controller, and an intelligent super-surface controller. The Internet of things terminal set comprises one or more Internet of things terminals with computing capability; the intelligent super-surface array is placed between the Internet of things terminals and the base station so that reflected beams/links are formed between them; the edge server is located at the base station; and the Internet of things terminals communicate with the edge server through the base station. When an Internet of things terminal receives a computing task from an application, the intelligent super-surface-assisted end-edge collaborative computing migration process is triggered, and the method comprises the following steps:
S101, the Internet of things terminal set receives computing tasks from an application and sends the task information $\{S_1, S_2, \ldots, S_m, \ldots, S_M;\ W_1, W_2, \ldots, W_m, \ldots, W_M\}$ and the local resource information, which records the available computing resources of each Internet of things terminal m (1 ≤ m ≤ M), to the migration decision controller, where the natural number M denotes the number of Internet of things terminals in the Internet of things terminal set, $S_m$ denotes the size, in bits or bytes, of the computing task generated by Internet of things terminal m (1 ≤ m ≤ M), and $W_m$ denotes the size, in computing resource requirements, of the computing task generated by Internet of things terminal m (1 ≤ m ≤ M).
S102, after the migration decision controller receives the task information and the local resource information of the Internet of things terminal set, it executes the computing migration decision process and determines the computing migration strategy $\{I_1, I_2, \ldots, I_m, \ldots, I_M\}$ of the tasks in the task set, the computing resources $\{f_1, f_2, \ldots, f_m, \ldots, f_M\}$ allocated to the tasks, and the phase shift $\Theta = \{\theta_1, \theta_2, \ldots, \theta_n, \ldots, \theta_N\}$ of the intelligent super-surface, where $I_m$ and $f_m$ respectively denote the computing migration decision and the allocated computing resources of task m (1 ≤ m ≤ M) and $\theta_n$ denotes the phase shift of the n-th (1 ≤ n ≤ N) reflecting unit of the intelligent super-surface reflecting array; the decision result is returned to the Internet of things terminal set, the phase shift information of the intelligent super-surface is sent to the intelligent super-surface controller, and the decision information related to edge computing is sent to the edge server.
S103, the intelligent super-surface controller sets the phase shifts of the reflecting units according to the phase shift information, i.e. the phase shift of reflecting unit n (1 ≤ n ≤ N) is set to $\theta_n$.
And S104, the Internet of things terminal in the Internet of things terminal set executes a task processing process according to the decision result and returns the result to the application.
And S105, the edge server executes a task processing process according to the edge calculation related decision information, and returns a result to the terminal of the Internet of things.
Example 2
Fig. 3 is a flowchart of a calculation migration decision in an intelligent super-surface-assisted end-edge collaborative calculation migration method according to an embodiment of the present invention. With reference to fig. 1, fig. 2, and fig. 3, it is illustrated how, after the migration decision controller receives the task information and the resource information of the terminal set of the internet of things, the calculation migration decision process is performed to determine the calculation migration policy of the task in the task set, the calculation resources allocated to the task, and the phase shift of the intelligent super surface. After the migration decision controller receives the task information and the resource information of the terminal set of the Internet of things, the following calculation migration decision process is executed:
S201, setting the initialized migration strategy of the tasks in the task set to edge computing, i.e. for every task m (1 ≤ m ≤ M) in the task set $\mathcal{M}$, letting $I_m = 1$, where $\mathcal{M}$ denotes the task set; and updating the edge computing set and the local computing set, i.e. letting the edge computing set be $\mathcal{M}_0 = \mathcal{M}$ and the local computing set be $\mathcal{M}_1 = \varnothing$.
S202, calculating the system average delay and marking it as the system average delay before the decision update, $D_{pre}$.
S203, selecting a task from the edge computing set whose obtained upload rate is the lowest among the tasks in the edge computing set that are not marked as traversed, i.e. selecting the task m' that satisfies $m' = \arg\min_{m \in \mathcal{M}_0 \setminus \mathcal{M}_{ep}} R_m$, where $\arg\min_{x} Y$ denotes finding the variable x that minimizes Y, $\mathcal{M}_{ep}$ denotes the set of tasks in the edge computing set marked as traversed, and $R_m$ denotes the upload rate of task m.
S204, updating the migration decision of the task to local computing, i.e. letting $I_{m'} = 0$, and migrating the task from the edge computing set to the local computing set, i.e. letting $\mathcal{M}_1 = \mathcal{M}_1 \cup \{m'\}$ and $\mathcal{M}_0 = \mathcal{M}_0 \setminus \{m'\}$.
S205, executing the edge computing resource configuration strategy on the edge computing set to determine the phase shift $\Theta$ of the intelligent super-surface, the upload rates $\{R_m\}_{m \in \mathcal{M}_0}$ of the tasks in the edge computing set, the computing resources $\{f_m\}_{m \in \mathcal{M}_0}$ allocated by the edge server to the tasks in the edge computing set, and the end-to-end delays $\{D_m\}_{m \in \mathcal{M}_0}$ of the tasks in the edge computing set.
S206, executing the local computing resource configuration strategy on the local computing set, and determining the end-to-end delays $\{D_m\}_{m \in \mathcal{M}_1}$ of the tasks in the local computing set.
S207, calculating the system average delay; specifically, taking the mean of the end-to-end delays of the tasks in the task set as the system average delay, i.e. calculating the system average delay as $D = \frac{1}{M}\sum_{m \in \mathcal{M}} D_m$.
S208, judging whether the system average delay is not lower than the system average delay before the decision update, i.e. whether $D \geq D_{pre}$ holds: if so, go to step S209; if not, go to step S210.
S209, restoring the migration decision to the decision before the decision update, namely restoring the migration decision of the task in step S204 to edge computing, i.e. letting $I_{m'} = 1$, migrating the task from the local computing set back to the edge computing set, i.e. letting $\mathcal{M}_0 = \mathcal{M}_0 \cup \{m'\}$ and $\mathcal{M}_1 = \mathcal{M}_1 \setminus \{m'\}$, and marking the task as traversed, i.e. letting $\mathcal{M}_{ep} = \mathcal{M}_{ep} \cup \{m'\}$, where $\mathcal{M}_{ep}$ denotes the set of tasks in the edge computing set marked as traversed; go to step S210.
S210, judging whether every task in the edge computing set has been marked as traversed, i.e. judging whether the sets $\mathcal{M}_{ep}$ and $\mathcal{M}_0$ are the same: if so, go to step S211; if not, go to step S202.
S211, updating an edge computing set and a local computing set according to a current migration decision, executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift of the intelligent super surface and computing resources distributed to each task in the edge computing set by an edge server; and executing a local computing resource configuration strategy on the local computing set, determining the computing resource allocated to the task of which the strategy is local computing by the local Internet of things terminal, and ending the computing migration decision process.
Example 3
Fig. 4 is a flowchart of an edge computing resource configuration strategy of an intelligent super-surface assisted end-edge collaborative computing migration method according to an embodiment of the present invention, and fig. 5 is a flowchart for determining an intelligent super-surface phase shift. In connection with fig. 1, 2, 3, 4 and 5, an edge computing resource allocation policy enforcement procedure is illustrated:
S301, determining the phase shift $\Theta$ of the intelligent super-surface, and calculating the upload delay of each task in the edge computing set.
S302, allocating the computing resources $\{f_m\}_{m \in \mathcal{M}_0}$ of the edge server to the tasks in the edge computing set, and calculating the edge computing processing delay of each task.
S303, calculating the end-to-end delay of each task in the edge computing set.
The method for determining the intelligent super-surface phase shift θ in step S301 specifically includes:
S401, calculating the transmission gain factor of each task in the edge computation set from the channel coefficient of its direct wireless link, wherein m denotes the m-th task in the edge computation set M_0, the complex number h_m denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, (X)^H denotes the conjugate transpose of X, and ||X|| denotes the norm of X.
S402, recording the sum of the direct-link transmission channel gains of the tasks in the edge computation set as the sum of channel gains before the reflection array phase shift optimization, i.e. SNR_tot,pre = Σ_{m∈M_0} |h_m|^2, wherein M_0 denotes the edge computation set, m denotes the m-th task in the edge computation set, the complex number h_m denotes the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, and |X|^2 denotes the square of the modulus of the complex number X.
S403, for each task m in the edge computation set, calculating the task-m-based phase shift θ_m = (θ_m,1, θ_m,2, …, θ_m,N) of the intelligent super-surface and the corresponding sum of channel gains, wherein the natural number N denotes the number of reflection units in the intelligent super-surface reflection array; the calculation is performed as follows:
S403-1, marking the phase of the product of the conjugate transpose of the channel coefficient of the task's direct-connection wireless link and its transmission gain factor as the 0-th phase shift parameter, i.e. letting θ^0_m = arg(h_m^H w_m), m ∈ M_0, wherein θ^0_m denotes the 0-th phase shift parameter facing task m, w_m denotes the transmission gain factor of task m, arg(X) denotes the phase of the complex number X, and M_0 denotes the edge computation set.
S403-2, for each reflection unit n in the reflection array of the intelligent super-surface, calculating the task-based phase shift θ_m,n as follows:
1) Taking the phase corresponding to the channel coefficient of the wireless link from the reflection unit to the base station as the 1-st phase shift parameter facing the task, i.e. letting θ^1_m,n = arg(g_n), n ∈ N_R, wherein θ^1_m,n denotes the 1-st phase shift parameter of reflection unit n of the intelligent super-surface reflection array based on task m, the complex number g_n denotes the channel coefficient of the wireless link from the reflection unit to the base station, and N_R denotes the set of reflection units of the intelligent super-surface reflection array.
2) Taking the phase corresponding to the product of the channel coefficient of the direct-connection wireless link from the Internet of things terminal generating the task to the reflection unit of the intelligent super-surface and the transmission gain factor of the task as the 2-nd phase shift parameter facing the task, i.e. letting θ^2_m,n = arg(h_r,m,n · w_m), wherein θ^2_m,n denotes the 2-nd phase shift parameter of reflection unit n of the intelligent super-surface reflection array based on task m, and the complex number h_r,m,n denotes the channel coefficient of the direct-connection wireless link from Internet of things terminal m to reflection unit n of the intelligent super-surface.
3) Taking the 0-th phase shift parameter minus the 1-st phase shift parameter and the 2-nd phase shift parameter as the task-based phase shift of the reflection unit, i.e. letting θ_m,n = θ^0_m − θ^1_m,n − θ^2_m,n, wherein θ_m,n denotes the phase shift of reflection unit n based on task m.
S403-3, arranging the exponential form of the phase shift of each reflection unit in the intelligent super-surface reflection array according to the position order of the reflection units to form a diagonal matrix, and marking the diagonal matrix as the task-based reflection coefficient matrix, i.e. expressing the reflection coefficient matrix based on task m as Λ_m = diag(e^{jθ_m,1}, e^{jθ_m,2}, …, e^{jθ_m,N}), wherein e^{jθ_m,n} denotes the exponential form of the phase shift of the n-th reflection unit in the intelligent super-surface reflection array and diag(X) denotes the diagonal matrix formed from X.
S403-4, calculating the sum of channel gains of the task-m-based reflection coefficient matrix by the formula SNR_tot(Λ_m) = |h_m + g^H Λ_m h_r,m|^2 + Σ_{m′∈M_0, m′≠m} |h_m′ + g^H Λ_m h_r,m′|^2, wherein g is an N×1 complex vector denoting the channel coefficient vector from the intelligent super-surface reflection array to the base station, the natural number N denotes that the reflection array has N reflection units, h_r,m and h_r,m′ are N×1 complex vectors denoting the channel coefficient vectors from task m and task m′ to the reflection array, respectively, and Σ_{m′∈M_0, m′≠m} denotes summation over all tasks m′ in the edge computation set M_0 except m.
S404, from the set of task-based reflection coefficient matrices of the intelligent super-surface, selecting the reflection coefficient matrix that maximizes the sum of channel gains, i.e. the task-m-based reflection coefficient matrix Λ_m is selected if its sum of channel gains satisfies Λ_m = argmax_{Λ_m′: m′∈M_0} SNR_tot(Λ_m′), wherein argmax_x Y denotes finding, from the set of variables x, the variable that maximizes Y.
S405, judging whether the sum of channel gains based on the selected reflection coefficient matrix is larger than the sum of channel gains before the reflection array phase shift optimization, i.e. judging whether SNR_tot(Λ_m) > SNR_tot,pre holds: if yes, go to step S406; if not, go to step S407.
S406, taking the selected reflection coefficient matrix as the reflection coefficient matrix of the intelligent super-surface and taking the corresponding phase shift as the phase shift of the intelligent super-surface, i.e. letting Λ ≜ Λ_m and θ ≜ θ_m, wherein the symbol ≜ denotes definition; updating the sum of channel gains before the reflection array phase shift optimization to the sum of channel gains based on the selected reflection coefficient matrix, i.e. letting SNR_tot,pre ≜ SNR_tot(Λ_m); updating the transmission gain factors of the tasks in the edge computation set according to the combined channels formed by the direct links and the reflection array under the new reflection coefficient matrix; and returning to step S403.
S407, taking the latest reflection coefficient matrix Λ of the intelligent super-surface as the reflection coefficient matrix of the intelligent super-surface, taking the phase shift θ corresponding to the reflection coefficient matrix as the phase shift of the intelligent super-surface, and ending the process of determining the phase shift of the intelligent super-surface.
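For illustration, the per-task phase alignment and acceptance test of steps S401 to S407 can be sketched with NumPy as follows. The normalisation of the transmission gain factor (w_m = conj(h_m)/|h_m|) and the combined-channel expression used for the gain sum are assumptions made only for this sketch; h_direct, H_ris and g are illustrative channel inputs, not symbols from the specification.

```python
# Minimal NumPy sketch of the phase-shift determination of steps S401-S407.
# h_direct[m]: scalar direct channel of task m (terminal -> base station),
# H_ris[m]:   N-element channel vector of task m (terminal -> reflection array),
# g:          N-element channel vector (reflection array -> base station).
import numpy as np

def ris_phase_shift(h_direct, H_ris, g, max_iter=50):
    M, N = len(h_direct), len(g)
    w = np.conj(h_direct) / np.abs(h_direct)              # S401 (assumed normalisation)
    snr_pre = np.sum(np.abs(h_direct) ** 2)               # S402: gains before optimisation
    theta_best = np.zeros(N)
    for _ in range(max_iter):
        best_snr, best_theta = -np.inf, None
        for m in range(M):                                # S403: task-based phase shift
            theta0 = np.angle(np.conj(h_direct[m]) * w[m])              # S403-1
            theta_m = theta0 - np.angle(g) - np.angle(H_ris[m] * w[m])  # S403-2
            lam = np.exp(1j * theta_m)                                  # S403-3 (diagonal entries)
            snr = sum(np.abs(h_direct[k] + (np.conj(g) * lam) @ H_ris[k]) ** 2
                      for k in range(M))                                # S403-4: gain sum
            if snr > best_snr:                                          # S404: best task-based matrix
                best_snr, best_theta = snr, theta_m
        if best_snr > snr_pre:                                          # S405/S406: accept and iterate
            snr_pre, theta_best = best_snr, best_theta
            lam = np.exp(1j * theta_best)
            h_eff = np.array([h_direct[k] + (np.conj(g) * lam) @ H_ris[k]
                              for k in range(M)])
            w = np.conj(h_eff) / np.abs(h_eff)            # update transmission gain factors
        else:                                             # S407: no further improvement
            break
    return theta_best
```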
The method for calculating the upload delay of the tasks in the edge computation set in step S301 is as follows: the upload delay of a task in the edge computation task set is determined by the ratio of the size of the task in bits or bytes to the upload rate of the task, i.e. D^u_m = L_m / R_m, wherein L_m denotes the size of task m in bits or bytes; the upload rate of the task is determined by the formula R_m = B log_2(1 + P_m |h_m + g^H Λ h_r,m|^2 / σ), wherein B is the link bandwidth, P_m is the transmit power of the Internet of things terminal m that generated the task, |h_m + g^H Λ h_r,m|^2 is the combined channel gain of the direct link and the reflected link of task m, and σ is the channel noise power.
In step S302, the method for allocating the computing resources of the edge server to the tasks in the edge computation set is as follows: for the tasks in the edge computation set, the computing resources of the edge server are allocated fairly in proportion to the ratio of each task's computing resource requirement to the total computing resource demand, wherein the total computing resource demand is the sum of the computing resource requirements of the tasks in the edge computation set; i.e., letting W_m denote the computing resource requirement of task m and f_o denote the available computing resources of the edge server, the computing resources allocated to task m by the edge server can be expressed as f^o_m = (W_m / Σ_{m′∈M_0} W_m′) · f_o.
The method for calculating the edge computation processing delay of the tasks in step S302 is as follows: the edge computation processing delay of a task in the edge computation task set is determined by the ratio of the task's computing resource requirement to the computing resources allocated to the task by the edge server, i.e. D^p_m = W_m / f^o_m.
The method for calculating the end-to-end delay of the tasks in the edge computation set in step S303 is as follows: for a task in the edge computation set, its end-to-end delay is determined by the sum of the upload delay of the task and its edge computation processing delay, i.e. the end-to-end delay of task m can be determined as D_m = D^u_m + D^p_m = L_m / R_m + W_m / f^o_m.
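As a compact illustration of steps S301 to S303, the following Python sketch computes the upload delay, the proportional edge resource allocation and the end-to-end delay of the edge tasks. The Shannon-type rate expression and the array inputs (task_bits, cpu_req, p_tx, h_eff) are assumptions made only for this sketch.

```python
# Short sketch of the edge computing resource configuration of steps
# S301-S303. task_bits (L_m), cpu_req (W_m), p_tx (P_m) and the combined
# channel coefficients h_eff are illustrative NumPy arrays over the edge
# tasks; f_edge is the total computing resource of the edge server.
import numpy as np

def edge_allocation(task_bits, cpu_req, p_tx, h_eff, bandwidth, noise, f_edge):
    rate = bandwidth * np.log2(1.0 + p_tx * np.abs(h_eff) ** 2 / noise)  # upload rate R_m
    d_up = task_bits / rate                          # S301: upload delay L_m / R_m
    f_alloc = cpu_req / np.sum(cpu_req) * f_edge     # S302: proportional allocation f_m^o
    d_proc = cpu_req / f_alloc                       # edge processing delay W_m / f_m^o
    return d_up + d_proc, rate, f_alloc              # S303: end-to-end delay of each task
```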
Example 4
Fig. 6 is a flowchart of task processing executed by the terminal of the internet of things according to the decision result provided by the embodiment of the invention. With reference to fig. 1, fig. 2, fig. 3, and fig. 6, a process of executing task processing by the internet of things terminal after the migration decision controller sends the migration decision result and the resource allocation result to the internet of things terminal is illustrated:
Let I = (I_1, I_2, …, I_m, …, I_M) denote the migration decisions received by the Internet of things terminal set, wherein I_m (1 ≤ m ≤ M) denotes the decision on task m, I_m = 0 denotes local computation, and I_m = 1 denotes edge computation. With reference to fig. 6, for any terminal m (1 ≤ m ≤ M) in the Internet of things terminal set, the task processing process performed according to the decision result comprises the following steps:
S501, judging whether I_m = 0 holds: if yes, go to step S502; if not, go to step S503.
S502, assigning the computing resource f_m to the task, wherein f_m is the computing resource that the local Internet of things terminal allocates to the task according to the decision result, executing the task processing, and returning the result to the application.
S503, sending the task to the edge server through the intelligent super-surface-assisted communication system shown in fig. 2, and returning the result returned by the edge server to the application.
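The terminal-side dispatch of steps S501 to S503 reduces to a simple branch on the decision bit. In the following sketch, process_locally() and offload_via_ris() are hypothetical stand-ins for local execution and for offloading over the intelligent super-surface-assisted link to the edge server.

```python
# Minimal sketch of the terminal-side dispatch of steps S501-S503.
def handle_task(task, decision_bit, f_local, process_locally, offload_via_ris):
    if decision_bit == 0:                       # S501/S502: local computation
        return process_locally(task, f_local)   # run with the allocated local resources
    return offload_via_ris(task)                # S503: result returned by the edge server
```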
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments described above, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and these equivalent modifications or substitutions are included in the scope of the present invention as defined in the appended claims.

Claims (8)

1. An intelligent super-surface-assisted end-edge collaborative computing migration method, characterized by comprising the following steps:
S101, an Internet of things terminal set receives a calculation task submitted by a user running an application program, and sends task information and local resource information of the calculation task to a migration decision controller, wherein the task information comprises the size of the task in bits or bytes and the computing resource requirement of the task, and the local resource information comprises the available computing resource size of the Internet of things terminal;
S102, after receiving the task information and the local resource information, the migration decision controller executes a calculation migration decision process to generate a decision result, wherein the decision result comprises a calculation migration strategy of the calculation task, calculation resources allocated to the calculation task and phase shift of an intelligent super-surface, the phase shift information of the intelligent super-surface in the decision result is sent to the intelligent super-surface controller, and the calculation migration strategy of the calculation task and the calculation resource information allocated to the calculation task are sent to an Internet of things terminal set and an edge server;
S103, after receiving the phase shift information of the intelligent super surface in the decision result, the intelligent super surface controller configures the phase shift of the reflecting unit;
S104, the terminal set of the Internet of things executes a task processing process according to the received decision result information;
S105, the edge server executes a task processing process according to the received decision result information and returns a processing result to the terminal set of the Internet of things;
wherein the calculation migration decision process executed by the migration decision controller in S102 includes:
S201, setting the initialization calculation migration strategy of each task in a task set as edge calculation, and updating the edge calculation set and a local calculation set;
S202, calculating system average delay, and marking the system average delay as system average delay before decision updating;
S203, selecting an edge calculation task from the edge calculation set, wherein the uploading rate obtained by the edge calculation task is the lowest uploading rate obtained by all tasks with undetermined calculation migration strategies in the edge calculation set;
S204, updating a calculation migration strategy of the edge calculation task into local calculation, and migrating the edge calculation task from the edge calculation set to the local calculation set;
S205, executing an edge computing resource configuration strategy on the edge computing set, and determining the phase shift of the intelligent super surface, the uploading rate of tasks in the edge computing set, the computing resources allocated to the tasks in the edge computing set by an edge server, and the end-to-end delay of the tasks in the edge computing set;
S206, executing a local computing resource configuration strategy on the local computing set, and determining the end-to-end delay of tasks in the local computing set;
S207, calculating a new system average delay;
S208, judging whether the new system average delay is not lower than the system average delay before decision updating: if yes, go to S209; if not, go to S210;
S209, restoring the calculation migration strategy of the task in S203 or S204 to edge calculation, transferring the edge calculation task from the local calculation set back to the edge calculation set, marking the edge calculation task as determined by the calculation migration strategy, and going to S210;
S210, judging whether each task in the edge computing set is marked as being determined by a computing migration strategy: if yes, go to S211; if not, returning to S202;
S211, determining the current calculation migration policy as the calculation migration policy result of the tasks of the Internet of things terminal set, determining the current edge calculation set and local calculation set as the edge calculation set and the local calculation set of the tasks of the Internet of things terminal set, respectively, executing an edge calculation resource configuration policy on the edge calculation set, and determining the phase shift information of the intelligent super surface and the calculation resources allocated by the edge server to the tasks in the edge calculation set; and executing a local computing resource configuration strategy on the local computing set, and determining the computing resources allocated by the local Internet of things terminal to the tasks whose calculation migration strategy is local calculation.
2. The intelligent super-surface-assisted end-edge collaborative computing migration method according to claim 1, wherein the executing an edge computing resource configuration strategy on the edge computing set comprises:
determining the phase shift of the intelligent super surface, and calculating the uploading delay of each task in the edge calculation set;
distributing computing resources of an edge server for each task in the edge computing set, and computing edge computing processing delay of each task;
End-to-end delays for each task in the edge computation set are computed.
3. The intelligent super-surface-assisted end-edge collaborative computing migration method according to claim 1 or 2, wherein determining the phase shift of the intelligent super-surface comprises:
calculating a transmission gain factor of each task in the edge calculation set;
the transmission gain factor of each task is calculated from the channel coefficient of the direct wireless link of the task,
wherein m represents the m-th task in the edge computing set, the complex number h_m represents the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, (X)^H represents the conjugate transpose of X, and ||X|| represents the norm of X;
determining the sum of the direct-connection transmission channel gains of each task in the edge calculation set as the sum of the channel gains before the phase shift optimization of the reflection array;
the sum of the direct-connection transmission channel gains is calculated as SNR_tot,pre = Σ_{m∈M_0} |h_m|^2,
wherein SNR_tot,pre represents the sum of channel gains before the phase shift optimization of the reflection array, M_0 represents the edge calculation set, m represents the m-th task in the edge calculation set, the complex number h_m represents the channel coefficient of the direct wireless link from the Internet of things terminal generating the task to the base station, and |X|^2 represents the square of the modulus of the complex number X;
calculating, for each task in the edge computing set, the task-based phase shift of the intelligent super-surface according to the transmission gain factor and the sum of channel gains before the reflection array phase shift optimization;
wherein calculating, for each task in the edge computing set, the task-based phase shift of the intelligent super-surface according to the transmission gain factor and the sum of channel gains before the reflection array phase shift optimization comprises the following steps:
marking the phase shift of the product of the conjugate transposed matrix of the channel coefficient of the task direct-connection wireless link and the transmission gain factor as the 0 th phase shift parameter;
for each reflection unit in the reflection array of the intelligent super surface, taking the phase shift corresponding to the channel coefficient of the wireless link from the reflection unit to the base station as the 1 st phase shift parameter facing the corresponding task;
taking the phase shift corresponding to the product of the channel coefficient of the direct connection wireless link from the Internet of things terminal generating the task to the reflecting unit of the intelligent super surface and the transmission gain factor of the task as the 2 nd phase shift parameter facing the task;
taking the difference between the 0 th phase shift parameter and the 1 st phase shift parameter and the 2 nd phase shift parameter as the phase shift of the reflection unit based on the current task;
Arranging the exponential form of the phase shift of each reflection unit in the intelligent super-surface reflection array according to the position sequence of the reflection units to form a diagonal matrix, and marking the diagonal matrix as a reflection coefficient matrix based on the current task;
calculating the sum of channel gains of the reflection coefficient matrix based on the current task to obtain the phase shift based on the current task;
the channel gain sum of the reflection coefficient matrix of the current task is calculated as SNR_tot(Λ_m) = |h_m + g^H Λ_m h_r,m|^2 + Σ_{m′∈M_0, m′≠m} |h_m′ + g^H Λ_m h_r,m′|^2,
wherein SNR_tot(Λ_m) represents the sum of channel gains of the reflection coefficient matrix of the current task; g is an N×1 complex vector representing the channel coefficient vector from the intelligent super-surface reflection array to the base station; Λ_m represents the reflection coefficient matrix of task m; (X)^H represents the conjugate transpose of X; the complex number h_m represents the channel coefficient of the direct-connection wireless link from the Internet of things terminal generating the task to the base station; h_r,m and h_r,m′ represent the channel coefficient vectors from task m and task m′ to the reflection array, respectively; and the summation runs over all tasks m′ in the edge computing set M_0 except m;
selecting a reflection coefficient matrix from the task-based set of reflection coefficient matrices of the intelligent super-surface, wherein the selected reflection coefficient matrix is the one in the set that maximizes the sum of channel gains;
Judging whether the channel gain sum based on the reflection coefficient matrix is larger than the channel gain sum before the phase shift optimization of the reflection array, if so, taking the reflection coefficient array as the reflection coefficient array of the intelligent super-surface, and taking the phase shift corresponding to the reflection coefficient array as the phase shift of the intelligent super-surface; if not, the latest reflection coefficient array of the intelligent super-surface is used as the reflection coefficient array of the intelligent super-surface, and the phase shift corresponding to the reflection coefficient array is used as the phase shift of the intelligent super-surface.
4. The intelligent super-surface-assisted end-edge collaborative computing migration method according to claim 2, wherein
the uploading delay of each task in the edge computing set is calculated specifically as follows:
determining the uploading delay of the task according to the ratio of the size of the task in bits or bytes to the uploading rate of the task;
the computing resources of the edge server are allocated to each task in the edge computing set, specifically:
for tasks in the edge computing set, computing resources of the edge server are distributed according to the ratio of the size of computing resources of each task to the total computing resource demand size;
the edge calculation processing delay of each task is calculated, specifically:
Determining edge calculation processing delay of a task according to the ratio of the size of the calculation resources of the task to the size of the calculation resources allocated to the task by an edge server;
the end-to-end delay of each task in the edge computing set is calculated specifically as follows:
and determining the end-to-end delay of each task according to the sum of the uploading delay and the edge calculation processing delay of the task.
5. The intelligent super-surface-assisted end-edge collaborative computing migration method according to claim 1, wherein the internet of things terminal set performs a task processing procedure according to the received decision result information, and the method comprises:
judging whether the migration decision is local computation: if so, executing the task with the computing resources that the decision result allocates to the local Internet of things terminal for the task whose strategy is local computation, and returning the processing result to the application program; if not, sending the local task to the edge server for processing through the intelligent super-surface-assisted wireless communication system, and returning the result returned by the edge server to the application program.
6. An intelligent super-surface-assisted end-edge collaborative computing migration system, comprising:
the system comprises an Internet of things terminal set, a migration decision controller and a migration decision controller, wherein the Internet of things terminal set is used for receiving a computing task issued by an application program and sending task information and local resource information of the computing task to the migration decision controller; acquiring a task processing instruction in the decision result, and executing a task processing process;
The migration decision controller is used for executing a calculation migration decision process after receiving the task information and the local resource information, generating a decision result and sending the decision result to the terminal set of the Internet of things, the intelligent super-surface controller and the edge server;
the intelligent super-surface controller is used for acquiring phase shift information in the decision result and configuring the phase shift of the reflecting unit;
the edge server is used for acquiring edge calculation decision information in the decision result, executing a task processing process and returning a processing result to the terminal set of the Internet of things;
the migration decision controller is specifically configured to:
setting an initialization migration strategy of each task in a task set as edge calculation, and updating an edge calculation set and a local calculation set;
calculating the system average delay, and marking the system average delay as the system average delay before decision updating;
selecting one edge computing task from the edge computing set, updating the migration decision of the edge computing task into local computing, and migrating the edge computing task from the edge computing set to the local computing set, wherein the uploading rate obtained by the edge computing task is the lowest among the non-traversed tasks in the edge computing set;
Executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift of the intelligent super surface, uploading rate of tasks in the edge computing set, computing resources allocated to the tasks in the edge computing set by an edge server and end-to-end delay of the tasks in the edge computing set;
executing a local computing resource configuration strategy on the local computing set, and determining end-to-end delay of tasks in the local computing set;
calculating a new system average delay, and when the new system average delay is not lower than the system average delay before decision updating, restoring the migration decision of the edge calculation task into edge calculation, and migrating the edge calculation task from a local calculation set back to an edge calculation set, and marking the edge calculation task as traversed;
updating the edge computing set and the local computing set according to the current migration decision, executing an edge computing resource configuration strategy on the edge computing set, and determining phase shift information of the intelligent super surface and computing resources distributed to tasks in the edge computing set by an edge server; executing a local computing resource configuration strategy on the local computing set, and determining the computing resources of tasks with the strategy being the local computing assigned by the local Internet of things terminal until the assignment of all the non-traversed tasks is completed, thereby completing the computing migration decision process.
7. An electronic device comprising a processor and a memory;
the memory is used for storing programs;
the processor executing the program implements the method of any one of claims 1 to 5.
8. A computer-readable storage medium, characterized in that the storage medium stores a program that is executed by a processor to implement the method of any one of claims 1 to 5.
CN202211683047.3A 2022-12-27 2022-12-27 Intelligent super-surface-assisted end-edge collaborative computing migration method and system Active CN115967962B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211683047.3A CN115967962B (en) 2022-12-27 2022-12-27 Intelligent super-surface-assisted end-edge collaborative computing migration method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211683047.3A CN115967962B (en) 2022-12-27 2022-12-27 Intelligent super-surface-assisted end-edge collaborative computing migration method and system

Publications (2)

Publication Number Publication Date
CN115967962A CN115967962A (en) 2023-04-14
CN115967962B true CN115967962B (en) 2023-08-01

Family

ID=87361104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211683047.3A Active CN115967962B (en) 2022-12-27 2022-12-27 Intelligent super-surface-assisted end-edge collaborative computing migration method and system

Country Status (1)

Country Link
CN (1) CN115967962B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109041130B (en) * 2018-08-09 2021-11-16 北京邮电大学 Resource allocation method based on mobile edge calculation
CN111445111B (en) * 2020-03-09 2022-10-04 国网江苏省电力有限公司南京供电分公司 Electric power Internet of things task allocation method based on edge cooperation
CN112217879B (en) * 2020-09-24 2023-08-01 江苏方天电力技术有限公司 Edge computing technology and cloud edge cooperation method based on power distribution Internet of things
CN112188551B (en) * 2020-09-29 2023-04-07 广东石油化工学院 Computation migration method, computation terminal equipment and edge server equipment

Also Published As

Publication number Publication date
CN115967962A (en) 2023-04-14

Similar Documents

Publication Publication Date Title
CN113543176B (en) Unloading decision method of mobile edge computing system based on intelligent reflecting surface assistance
CN111556461B (en) Vehicle-mounted edge network task distribution and unloading method based on deep Q network
CN111953759A (en) Collaborative computing task unloading and transferring method and device based on reinforcement learning
CN113098714B (en) Low-delay network slicing method based on reinforcement learning
CN109710374A (en) The VM migration strategy of task unloading expense is minimized under mobile edge calculations environment
CN113794494B (en) Edge computing system and computing unloading optimization method for low-orbit satellite network
WO2022171066A1 (en) Task allocation method and apparatus based on internet-of-things device, and network training method and apparatus
CN114189892A (en) Cloud-edge collaborative Internet of things system resource allocation method based on block chain and collective reinforcement learning
Li et al. Task offloading scheme based on improved contract net protocol and beetle antennae search algorithm in fog computing networks
CN113573363B (en) MEC calculation unloading and resource allocation method based on deep reinforcement learning
CN112118312A (en) Network burst load evacuation method facing edge server
Sadiki et al. Deep reinforcement learning for the computation offloading in MIMO-based Edge Computing
Hu et al. Dynamic task offloading in MEC-enabled IoT networks: A hybrid DDPG-D3QN approach
CN115967962B (en) Intelligent super-surface-assisted end-edge collaborative computing migration method and system
Han et al. Dynamic task offloading and service migration optimization in edge networks
CN117424633A (en) Secure communication transmission strategy and system under deep learning auxiliary active ARIS
Chen et al. Profit-aware cooperative offloading in uav-enabled mec systems using lightweight deep reinforcement learning
CN115955685B (en) Multi-agent cooperative routing method, equipment and computer storage medium
US8468041B1 (en) Using reinforcement learning to facilitate dynamic resource allocation
KR20200144887A (en) Neural Network Apparatus for Resource Efficient Inference
CN115942494A (en) Multi-target safe Massive MIMO resource allocation method based on intelligent reflecting surface
CN113157344B (en) DRL-based energy consumption perception task unloading method in mobile edge computing environment
Wu et al. UAV-Mounted RIS-Aided Mobile Edge Computing System: A DDQN-Based Optimization Approach
CN114980160A (en) Unmanned aerial vehicle-assisted terahertz communication network joint optimization method and device
CN114938512A (en) Broadband capacity optimization method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant