CN108920279A - Mobile edge computing task offloading method for multi-user scenarios - Google Patents

Mobile edge computing task offloading method for multi-user scenarios

Info

Publication number
CN108920279A
CN108920279A (application CN201810774689.1A)
Authority
CN
China
Prior art keywords
task
mobile device
decision
mec
user
Prior art date
Legal status
Granted
Application number
CN201810774689.1A
Other languages
Chinese (zh)
Other versions
CN108920279B (en)
Inventor
张伟哲
方滨兴
何慧
刘川意
余翔湛
刘亚维
刘国强
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN201810774689.1A
Publication of CN108920279A
Application granted
Publication of CN108920279B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5044Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering hardware capabilities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F9/4856Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
    • G06F9/4862Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate
    • G06F9/4875Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate with migration policy, e.g. auction, contract negotiation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A mobile edge computing (MEC) task offloading method for multi-user scenarios, relating to the technical field of mobile computing systems. The purpose of the invention is to reduce the response delay and energy consumption of mobile devices. In the multi-user scenario, multiple mobile devices are connected to an MEC server; each mobile device selects one of several channels between the mobile devices and the MEC server for communication, and the MEC server is connected to a central cloud server through a backbone network. The method proceeds as follows: construct the task offloading model for the multi-user scenario; then apply a two-stage, game-theory-based offloading strategy. The first-stage strategy decides whether a task executes on the mobile device or on the MEC server; the second-stage strategy decides, when MEC server resources are insufficient, whether the task waits at the MEC server or executes on the central cloud server. The invention accommodates the individual needs of users while guaranteeing quality of service and fairness.

Description

Mobile edge computing task offloading method for multi-user scenarios
Technical field
The present invention relates to a task offloading method for multi-user scenarios, in the technical field of mobile computing systems.
Background technique
The rapid development of the mobile Internet and the Internet of Things, together with new mobile applications offering advanced features, places enormous pressure on mobile computing systems. The limited processing capacity and battery capacity of mobile devices are obstacles to meeting this demand. Mobile edge computing (Mobile Edge Computing, MEC) has emerged as a promising technology to solve this problem: compared with traditional cloud computing systems that use a remote public cloud, it provides computing capability within the radio access network. By offloading computation-intensive tasks from mobile devices to nearby MEC servers, the quality of experience (including delay and device energy consumption) can be greatly improved. However, the efficiency of an MEC system depends heavily on the computation offloading strategy employed, which must be carefully designed with the characteristics of the computing tasks and the wireless channels in mind.
Summary of the invention
The technical problem to be solved by the present invention: the object of the present invention is to provide a mobile edge computing task offloading method for multi-user scenarios, so as to reduce the response delay and energy consumption of mobile devices.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
A mobile edge computing task offloading method under a multi-user scenario, in which multiple mobile devices are connected to an MEC server; each mobile device may select one of several channels between the mobile devices and the MEC server for communication, and the MEC server is connected to a central cloud server through a backbone network.
Detailed process is:
Step 1: construct the multi-user task offloading model
Construct the task offloading model under the multi-user scenario; it comprises a communication model, a computing task model, and a computing task load model.
Step 2: apply the two-stage, game-theory-based task offloading strategy
The first-stage strategy decides whether a task executes on the mobile device or on the MEC server; the second-stage strategy decides, when MEC server resources are insufficient, whether the task waits at the MEC server or executes on the central cloud server.
In Step 1, the task offloading model under the multi-user scenario is constructed as follows:
Construction of the communication model
The communication model includes the mobile devices, the MEC server, and the central cloud server.
Mobile devices and MEC server: the mobile devices and the MEC server communicate wirelessly. Assuming each base station has M channels, the set of channels can be expressed as M = {0, 1, ..., M-1}.
Mobile device: the first-stage decision of mobile device n is denoted by the symbol a_n, where a_n ∈ {-1} ∪ M. When a_n = -1, the computing task executes on the mobile device's local CPU; when a_n ∈ M, the task is offloaded to the MEC server through channel a_n. The second-stage decision is denoted by the symbol x_n, where x_n ∈ {0, 1}. When x_n = 1, the task executes on the MEC server, continuing to wait in the MEC queue even if MEC resources are insufficient; when x_n = 0, the task is further offloaded to the central cloud server for execution.
For all mobile devices we thus obtain decision vectors a = (a_1, a_2, ..., a_N) and x = (x_1, x_2, ..., x_N). From the decision vector, the upload data rate of each channel can be calculated as

r_n(a) = w · log2( 1 + q_n g_{n,s} / (σ² + Σ_{m ≠ n, a_m = a_n} q_m g_{m,s}) )

where w is the channel bandwidth, q_n is the transmit power of mobile device n, g_{n,s} is the channel gain between mobile device n and base station s, and σ² is the background noise power; the sum in the denominator is the interference from the other devices sharing the same channel.
The downlink data rate between a mobile device and the MEC server is assumed identical to the upload data rate.
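As a concrete illustration of the rate computation just described, the following sketch evaluates the per-device upload rate under a decision vector, assuming the standard Shannon-rate form with co-channel interference implied by the text (the function name, variable names, and sample numbers are our own, not the patent's):

```python
import math

def upload_rate(n, a, w, q, g, sigma2):
    """Uplink rate of device n under decision vector a: Shannon capacity
    with interference from the devices sharing channel a[n]."""
    interference = sum(q[m] * g[m] for m in range(len(a))
                       if m != n and a[m] == a[n])
    return w * math.log2(1 + q[n] * g[n] / (sigma2 + interference))

# Two devices sharing a channel see a lower rate than a device alone.
q = [0.1, 0.1, 0.1]        # transmit powers q_n (W), illustrative
g = [1e-6, 1e-6, 1e-6]     # channel gains g_{n,s}, illustrative
alone = upload_rate(0, [0, 1, 2], 5e6, q, g, 1e-10)
shared = upload_rate(0, [0, 0, 2], 5e6, q, g, 1e-10)
```

Running it confirms the effect the text relies on: the more devices that pick the same channel, the lower each one's upload rate, which is what drives the channel-selection game.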
MEC server and central cloud server: the MEC server and the central cloud server are connected by wire through the backbone network. The transmission data rate between them is denoted by the symbol R_ac and is assumed constant throughout system execution; that is, every task that must be offloaded to the central cloud server has the same data transmission rate.
Computing task model
A computing task is represented by a triple T_n = (b_n, d_n, r_n), where b_n is the amount of data the task needs, including program code and input parameters; if the task is to be offloaded to the MEC server, this data must be uploaded to the server by the mobile device's transmission unit (TU) module. d_n is the amount of computation the task requires, expressed in CPU operations. r_n is the task's result data, which must be downloaded from the MEC server to the mobile device if task offloading is used.
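The task triple can be captured directly in code. This is a minimal sketch; the class name, field names, and the sample numbers are illustrative assumptions, not values from the patent:

```python
from dataclasses import dataclass

@dataclass
class Task:
    """Computing-task triple T_n = (b_n, d_n, r_n) from the text."""
    b: float  # data to upload: program code + input parameters (bits)
    d: float  # computation required (CPU operations)
    r: float  # result data to download (bits)

# An illustrative task: 2 Mb of input, 5e8 CPU operations, 1e5-bit result.
example = Task(b=2e6, d=5e8, r=1e5)
```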
Computing task load model, comprising:
(1) Local computing model
When user n decides to execute computing task T_n locally, the only process involved is the mobile device's local CPU executing the task. Assuming the computing capability of the mobile device is f_n^l, the time for local execution is

t_n^l = d_n / f_n^l

and the energy consumed in executing the task is

e_n^l = c_n · t_n^l

where c_n is the power of the mobile device's local CPU.
From the task's delay and energy consumption, the overall load model for execution on the local CPU is established:

Z_n^l = λ_n^t · t_n^l + λ_n^e · e_n^l

where the coefficients λ_n^t and λ_n^e are the weights mobile device n assigns to task delay and energy consumption when making the offloading decision; the two coefficients satisfy λ_n^t + λ_n^e = 1.
A larger λ_n^t indicates that device n cares more about task delay and is delay-sensitive; a larger λ_n^e indicates that the device's battery level is low.
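The local load model above can be sketched as a small function. The relations t = d_n / f_n^l and e = c_n · t and the unit-sum weight constraint follow the text's definitions, but the function name, signature, and sample numbers are our own:

```python
def local_load(d_n, f_n, c_n, lam_t, lam_e):
    """Overall load of local execution: weighted sum of delay and
    energy, with the two weights summing to 1 as the text requires."""
    assert abs(lam_t + lam_e - 1.0) < 1e-9
    t = d_n / f_n   # execution time: CPU operations / local CPU speed
    e = c_n * t     # energy: local CPU power x execution time
    return lam_t * t + lam_e * e

# A delay-sensitive user weights delay heavily; a low-battery user
# weights energy heavily, as the text describes.
sensitive = local_load(1e9, 1e9, 0.5, lam_t=0.9, lam_e=0.1)
low_battery = local_load(1e9, 1e9, 0.5, lam_t=0.1, lam_e=0.9)
```

With the illustrative numbers (t = 1 s, e = 0.5 J), the two users assign different loads to the same task, which is exactly the personalization the model is after.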
(2) MEC server computing model
When a mobile user decides to offload the computing task to the MEC server, the mobile device first uploads task T_n to the MEC server over a suitable channel, and the MEC server then executes the task on the mobile device's behalf. The task is completed in three steps: task upload, cloud execution, and result return.
In the task upload stage of the multi-user scenario, each mobile device must select one channel over which to communicate with the MEC server.
In the upload stage, mobile device n incurs additional delay and energy consumption to offload the task. The device first selects a channel and uploads the task data; the upload delay is

t_n^up = b_n / r_n(a)

where b_n is the amount of data the device uploads and r_n(a) is the data rate of the channel selected by device n. When uploading the task, the TU module also consumes a certain amount of energy:

e_n^up = q_n · t_n^up + L_n

where q_n is the transmit power of mobile device n and L_n is the extra energy the device consumes after transmitting the data.
Once device n has successfully uploaded the task to the MEC server, execution begins. Assuming the MEC server allocates a virtual machine of computing capability f_n^c to device n, the execution time on the MEC server can be expressed as

t_n^exec = d_n / f_n^c

When the MEC server's computing capability cannot satisfy the demands of all users, later users' tasks must make the second-stage decision at the MEC server. If the second-stage decision is to wait at the MEC, the execution time becomes

t_n^exec = t_wait + d_n / f_n^c

where t_wait is the queueing delay caused by insufficient MEC computing resources; when MEC computing resources suffice, t_wait = 0. When MEC resources are insufficient, t_wait is predicted using the queueing-theory-based MEC server wait-time prediction.
The time to return the computed result can be expressed as

t_n^down = r_n / r_n(a)

and the energy the mobile device consumes receiving the result is

e_n^down = p_n · t_n^down

where p_n is the receive power of the mobile device. The aggregate delay of execution on the MEC server is then

t_n^c = t_n^up + t_n^exec + t_n^down

and the aggregate energy consumption of execution on the MEC server is

e_n^c = e_n^up + e_n^down
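The three offloading steps described above (upload, execute with a possible wait, return the result) can be sketched as one load function. The decomposition follows the text, but the function name, parameter names, and sample numbers are assumptions:

```python
def mec_load(b_n, d_n, r_result, rate, f_mec, q_n, L_n, p_n,
             lam_t, lam_e, t_wait=0.0):
    """Weighted load of offloading to the MEC server: upload delay and
    energy, (queueing wait +) execution, and result return."""
    t_up = b_n / rate              # upload delay
    e_up = q_n * t_up + L_n        # upload energy + tail energy L_n
    t_exec = t_wait + d_n / f_mec  # queueing wait + MEC execution
    t_down = r_result / rate       # result return (downlink = uplink rate)
    e_down = p_n * t_down          # reception energy
    delay = t_up + t_exec + t_down
    energy = e_up + e_down
    return lam_t * delay + lam_e * energy

# Illustrative numbers; a queueing wait increases the delay-weighted load.
no_wait = mec_load(1e6, 1e9, 1e5, 1e6, 1e10, 0.1, 0.01, 0.05, 1.0, 0.0)
with_wait = mec_load(1e6, 1e9, 1e5, 1e6, 1e10, 0.1, 0.01, 0.05, 1.0, 0.0,
                     t_wait=0.5)
```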
(3) Central cloud server computing model
If the second stage chooses to offload the task to the central cloud server, the overall delay is obtained by replacing the MEC execution time with three components: the time to upload the task from the MEC to the central cloud server, the execution time on the central cloud server, and the time to return the result from the central cloud server to the MEC; i.e.

t_ac^up = b_n / R_ac,  t_n^cc,exec = d_n / f^cc,  t_ac^down = r_n / R_ac

where f^cc is the computing capability of the central cloud server.
(4) Server integrated load model
The overall load of a computing task executed on a server (MEC server or central cloud server) is

Z_n^s = λ_n^t · [ x_n · t_n^c + (1 - x_n) · t_n^cc ] + λ_n^e · e_n^c

where the coefficients λ_n^t and λ_n^e have the same meaning as in the local computing load model, and x_n is the second-stage decision.
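The integrated server load, with the second-stage decision x_n switching between waiting at the MEC server and forwarding over the backbone to the central cloud, might be sketched as follows. The cloud-path delay decomposition follows the text; the function and parameter names are assumptions:

```python
def server_load(x_n, b_n, d_n, r_res, rate, f_mec, f_cc, R_ac,
                q_n, L_n, p_n, lam_t, lam_e, t_wait):
    """Second-stage load: x_n = 1 waits and executes on the MEC server;
    x_n = 0 forwards the task to the central cloud (backbone rate R_ac).
    The device's own upload/return legs are common to both paths."""
    t_up = b_n / rate
    t_down = r_res / rate
    energy = q_n * t_up + L_n + p_n * t_down
    if x_n == 1:                   # wait, then execute on the MEC server
        t_exec = t_wait + d_n / f_mec
    else:                          # MEC -> cloud, execute, cloud -> MEC
        t_exec = b_n / R_ac + d_n / f_cc + r_res / R_ac
    delay = t_up + t_exec + t_down
    return lam_t * delay + lam_e * energy

# With a short wait, staying at the MEC beats the backbone round trip.
mec = server_load(1, 1e6, 1e9, 1e5, 1e6, 1e10, 1e11, 1e7,
                  0.1, 0.0, 0.0, 1.0, 0.0, 0.0)
cloud = server_load(0, 1e6, 1e9, 1e5, 1e6, 1e10, 1e11, 1e7,
                    0.1, 0.0, 0.0, 1.0, 0.0, 0.0)
```

The second-stage decision itself is then just the comparison of the two values: pick x_n = 1 when the MEC-wait load is no larger than the cloud load, and x_n = 0 otherwise.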
The queueing-theory-based MEC server wait-time prediction proceeds as follows:
According to Little's Law in queueing theory, under equilibrium the average time a task waits at the MEC server equals the system's average waiting-queue length divided by the tasks' average arrival rate, i.e.

t_wait = L̄ / λ̄

where L̄ is the average waiting-queue length and λ̄ is the average task arrival rate; both parameters are measured at the MEC server.
For L̄, the number of waiting tasks N_t - C is counted in each time slot t, and the average waiting-queue length is computed over time:

L̄ = (1/T) Σ_t (N_t - C)

where N_t is the total number of tasks at time t and C is the number of users the MEC server can serve simultaneously; their difference is exactly the number of tasks that must queue. The average task arrival rate is obtained at the same time:

λ̄ = (N_t - N_0) / t

where N_0 is the number of tasks in the system when decision-making starts, not the number of tasks at system initialization.
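The measurement scheme just described can be sketched as a small running-average predictor. The Little's-Law relation W = L̄ / λ̄ and the N_t - C queue count follow the text; the class name, attribute names, and sample numbers are assumptions:

```python
class WaitPredictor:
    """Little's-Law wait-time predictor for the MEC server queue."""
    def __init__(self, C, n0):
        self.C = C            # tasks the MEC server can serve at once
        self.n0 = n0          # tasks in the system when decisions start
        self.queue_sum = 0.0  # running sum of per-slot queue lengths
        self.arrived = 0      # tasks arrived since decisions started
        self.slots = 0

    def observe(self, n_t):
        """Record the total task count N_t seen in one time slot."""
        self.slots += 1
        self.queue_sum += max(n_t - self.C, 0)  # tasks forced to queue
        self.arrived = n_t - self.n0

    def wait(self):
        """Predicted average wait: avg queue length / avg arrival rate."""
        if self.slots == 0 or self.arrived <= 0:
            return 0.0  # resources suffice: t_wait = 0
        L = self.queue_sum / self.slots
        lam = self.arrived / self.slots
        return L / lam

# Two slots, 4 tasks each, capacity 2: queue length 2, arrival rate 2.
p = WaitPredictor(C=2, n0=0)
p.observe(4)
p.observe(4)
```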
In Step 2, the two-stage, game-theory-based task offloading strategy proceeds as follows:
The server integrated load model Z_n^s obtained above is used to solve the task offloading problem under the multi-user scenario; the problem is converted into a multi-user game, and a method for solving it is then given.
Step 1: determine the optimization objective
Under the multi-user scenario, the objective is to make the number of users who reach a beneficial state through task offloading as large as possible; this optimization objective can be expressed as

max_a Σ_n I( Z_n^s(a) < Z_n^l )

subject to a_n ∈ {-1} ∪ M for all n, where I(·) is an indicator function that equals 1 when its argument holds and 0 otherwise.
Step 2: construct the multi-user game model
Let a_{-n} = (a_1, ..., a_{n-1}, a_{n+1}, ..., a_N) denote the decisions of all users other than user n. Given everyone else's decisions a_{-n}, user n must make its offloading decision based on the available information: a_n = -1 means executing on the local CPU; a_n ≥ 0 means selecting a channel and offloading the task.
The decision rule is

a_n = argmin_{a_n ∈ {-1} ∪ M} Z_n(a_n, a_{-n})

where Z_n(a_n, a_{-n}) is user n's load function, equal to Z_n^l when a_n = -1 and to Z_n^s(a) otherwise.
The game is Γ = (N, {A_n}_{n∈N}, {Z_n}_{n∈N}), where N is the user set, A_n is the strategy set, and Z_n is the minimal computational load each user seeks.
Γ denotes the multi-user computing-task offloading decision game.
The Nash equilibrium of this game is given as follows.
Definition: a decision set a* is a Nash equilibrium of the multi-user computing-task offloading decision game if, under a*, no user can reduce its own computational load by changing its own decision; i.e.

Z_n(a_n*, a_{-n}*) ≤ Z_n(a_n, a_{-n}*) for every user n and every a_n ∈ {-1} ∪ M.

The multi-user computing-task offloading decision game has a Nash equilibrium, and the equilibrium state can be reached through a finite number of iterations.
For a user whose decision is to offload, the second-stage decision is also needed: x_n is obtained by comparing the load of waiting at the MEC server with the load of executing on the central cloud server and choosing the smaller.
Step 3: solve for the Nash equilibrium with the two-stage task offloading algorithm. The process is:
Stage 1: air-interference measurement
The receive power of each channel is calculated as the sum of q_m · g_{m,s} over the devices transmitting on that channel.
The base station then sends each channel's receive power to all mobile devices, and each mobile device n can compute its interference:
For the channel a_n(t) selected by mobile device n, the interference equals the channel's total receive power minus device n's own contribution; for any other channel, the interference is simply that channel's receive power.
Stage 2: MEC wait-delay prediction
The MEC server must predict the average waiting time: when it can meet the service demand, the waiting time t_wait = 0; when it cannot, it predicts t_wait = L̄ / λ̄ according to Little's Law, and sends this value to the mobile devices together with the air-interference data from Stage 1.
Stage 3: offloading-decision update
Each mobile device has obtained every channel's interference in Stage 1 and the waiting delay at the MEC server in Stage 2. In the offloading-decision update, each mobile device uses these two pieces of data to compute its best-response set Δ_n(t), of which ã_n is an element.
If the computed Δ_n(t) is non-empty, this mobile device has not yet reached the Nash equilibrium state and can reduce its computational load by updating its decision; it therefore selects one decision from the set and sends a request-to-update (RTU) signal to the base station. After receiving all RTU signals, the base station randomly selects one or more mutually non-interfering mobile devices and permits them to update their decisions; devices that do not receive an update-allowed (UA) signal keep their current decisions in the next time slot. Users who must wait also make the second-stage decision according to formula (2-26). After a finite number of iterations, all mobile devices reach the Nash equilibrium state; that is, no mobile device can reduce its computational load by updating its own decision.
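The three-stage update loop above (measure interference, predict the wait, grant one decision update per slot) can be sketched as a best-response iteration. This is a simplified toy version, not the patent's algorithm: the load model omits the waiting/second-stage terms, the device fields are invented, and the single-update-per-slot rule stands in for the RTU/UA signalling:

```python
import math
import random

def best_response_iteration(devices, channels, w, sigma2, max_slots=1000):
    """First-stage game: each slot, every device computes its best
    response to the others' current decisions; the base station grants
    one requester an update. Stops at a Nash equilibrium (no device can
    lower its load by changing its decision)."""
    a = {d["id"]: -1 for d in devices}  # all devices start local

    def load(d, ch):
        if ch == -1:  # local execution load
            t = d["d"] / d["f_loc"]
            return d["lt"] * t + d["le"] * d["c"] * t
        interf = sum(o["q"] * o["g"] for o in devices
                     if o["id"] != d["id"] and a[o["id"]] == ch)
        rate = w * math.log2(1 + d["q"] * d["g"] / (sigma2 + interf))
        t = d["b"] / rate + d["d"] / d["f_mec"]   # upload + MEC exec
        e = d["q"] * d["b"] / rate                # upload energy
        return d["lt"] * t + d["le"] * e

    for _ in range(max_slots):
        requests = []
        for d in devices:
            best = min([-1] + channels, key=lambda ch: load(d, ch))
            if load(d, best) < load(d, a[d["id"]]) - 1e-12:
                requests.append((d["id"], best))  # "RTU signal"
        if not requests:
            return a                              # Nash equilibrium
        dev_id, ch = random.choice(requests)      # one update per slot
        a[dev_id] = ch
    return a
```

Usage: with identical devices for which offloading beats local execution even on a shared channel, the iteration spreads the devices over the channels and then stops, since no unilateral change improves any load.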
The beneficial effects of the invention are as follows:
To cope with the new challenges of such multi-user usage scenarios, the present invention establishes a personalized task offloading model under a multi-user scenario, including a task model, a communication model, and a computational load model. Meanwhile, to satisfy the computing demands of multiple users, a central cloud (Centralized Cloud, CC) layer is added above the MEC-server-only offloading scenario to absorb excess computing demand. By comprehensively considering each user's individual characteristics, a personalized task load model is established that meets individual needs under the multi-user scenario. After the model is built, the relevant knowledge of game theory is introduced to convert the multi-user task offloading problem into a game problem; by proving that this game has a Nash equilibrium, the solution to the problem is obtained by finding the Nash equilibrium, yielding the two-stage task offloading strategy under the multi-user scenario. The invention accommodates users' individual needs while guaranteeing quality of service and fairness.
Description of the drawings
Fig. 1 shows the number of users in a beneficial offloading state as the number of iterations increases;
Fig. 2 is a curve of the number of iterations the system requires to reach a Nash equilibrium for different numbers of users;
Fig. 3 shows how user load varies as the number of iterations increases;
Fig. 4 compares the number of users in a beneficial offloading state under different schemes; the legend entries in Fig. 4, from top to bottom, are: two-stage task offloading, ignore queueing delay, execute on cloud;
Fig. 5 is a detailed comparison of TCO and IWD; the legend entries in Fig. 5, from top to bottom, are: two-stage task offloading, ignore queueing delay, improvement;
Fig. 6 compares the system's average overall load under different schemes; the legend entries in Fig. 6, from top to bottom, are: execute locally, ignore queueing delay, two-stage task offloading, execute on cloud.
Specific embodiments
Embodiment 1: as shown in Figs. 1 to 6, the mobile edge computing task offloading method under a multi-user scenario described in this embodiment is implemented as follows.
1. When mobile edge computing is deployed jointly with base stations, each MEC server must serve more than one mobile user. Each mobile device communicates with the MEC server over the radio access network, and the MEC server can connect to a remote central cloud server through the core network. If multiple devices communicate with the MEC server over the same channel, the mutual interference between devices increases, and offloading a task to the MEC server may become worse than executing it locally. Meanwhile, the resources of an MEC server deployed at the network edge are limited and cannot satisfy an excessive amount of computing demand at once.
2. Based on the above, the specific technical solution of the task offloading method under the multi-user scenario is given.
2.1 Construction of the multi-user task offloading model
To facilitate study of the problem, the following task offloading model under the multi-user scenario is constructed, comprising a communication model, a computing task model, and a computing task load model.
2.1.1 Communication model
Mobile devices and MEC server: the mobile devices and the MEC server communicate wirelessly. Assuming each base station has M channels, the set of channels can be expressed as M = {0, 1, ..., M-1}. The first-stage decision of mobile device n is denoted by the symbol a_n, where a_n ∈ {-1} ∪ M. When a_n = -1, the computing task executes on the device's local CPU; when a_n ∈ M, the task is offloaded to the MEC server through channel a_n. The second-stage decision is denoted by x_n, where x_n ∈ {0, 1}: x_n = 1 means executing on the MEC server, continuing to wait in the MEC queue even if MEC resources are insufficient; x_n = 0 means the task is further offloaded to the central cloud server for execution. For all mobile devices we thus obtain decision vectors a = (a_1, a_2, ..., a_N) and x = (x_1, x_2, ..., x_N). With the decision vector, the upload data rate of each channel can be calculated as

r_n(a) = w · log2( 1 + q_n g_{n,s} / (σ² + Σ_{m ≠ n, a_m = a_n} q_m g_{m,s}) )

where w is the channel bandwidth, q_n is the transmit power of mobile device n, g_{n,s} is the channel gain between device n and base station s, and σ² is the background noise power.
From the formula above it can be seen that if too many mobile devices select the same channel to communicate with the base station, the mutual interference becomes large, the upload data rate drops, and the communication delay increases. The downlink data rate between a mobile device and the MEC server is assumed identical to the upload data rate.
MEC server and central cloud server: the MEC server and the central cloud server are connected by wire through the backbone network. The transmission data rate between them is denoted by the symbol R_ac and is assumed constant throughout system execution; that is, every task that must be offloaded to the central cloud server has the same data transmission rate.
2.1.2 Computing task model
To ease the description of later problems, a computing task model is constructed here. A computing task is represented by a triple T_n = (b_n, d_n, r_n), where b_n is the amount of data the task needs (program code, input parameters, etc.); if the task is to be offloaded to the MEC server, this data must be uploaded to the server by the device's TU module. d_n is the computation the task requires, expressed in CPU operations; r_n is the task's result data, which must be downloaded from the MEC server to the mobile device if offloading is used. A mobile device can obtain a task's b_n, d_n, and r_n using program call-graph analysis. To meet personalized demands, different application types are provided, each with different data. With these data, the delay and energy consumption of executing a task locally and in the cloud can be analyzed, so that delay and energy-consumption load models under the multi-user scenario can be established.
2.1.3 Personalized computing task load model
In most existing offloading strategies, the optimization factor is either reducing computing delay or reducing battery energy consumption, or delay and energy are lumped together uniformly, ignoring the fact that each user is an individual with different demands. In this section we therefore attempt to establish a personalized computing task load model that fully respects and satisfies each user's individual demands.
(1) Local computing model
When user n decides to execute computing task T_n locally, the only process involved is the mobile device's local CPU executing the task. Assume the computing capability of the mobile device is f_n^l (that is, CPU operations per second); to be more realistic, each mobile device's computing capability here is different. The time for local execution is then

t_n^l = d_n / f_n^l

and the energy consumed in executing the task can be expressed as

e_n^l = c_n · t_n^l

where c_n is the power of the mobile device's local CPU.
With the task's delay and energy-consumption models, the overall load model for execution on the local CPU can be established:

Z_n^l = λ_n^t · t_n^l + λ_n^e · e_n^l

where the coefficients λ_n^t and λ_n^e are the weights mobile device n assigns to task delay and energy consumption when making the offloading decision; the two coefficients satisfy λ_n^t + λ_n^e = 1.
A larger λ_n^t indicates that device n cares more about task delay and is delay-sensitive; a larger λ_n^e indicates that the device's battery level is low and, to extend its usage time, it cares more about the task's energy consumption. In this way, mobile users can choose the weight coefficients appropriately according to their own current circumstances.
(2) MEC server computing model
When a mobile user decides to offload the computing task to the MEC server, the mobile device first uploads task T_n to the MEC server over a suitable channel, and the MEC server then executes the task on the mobile device's behalf.
The task is thus completed in three steps: task upload, cloud execution, and result return. Under the multi-user scenario, in the task upload stage each mobile device must select a channel over which to communicate with the MEC server. As in most studies, the effect of result return on the overall analysis is ignored here because the computed result is comparatively small.
In the upload stage, mobile device n incurs additional delay and energy consumption to offload the task. The device first selects a channel and uploads the task data; the upload delay is

t_n^up = b_n / r_n(a)

where b_n is the amount of data the device uploads and r_n(a) is the data rate of the channel selected by device n. When uploading the task, the TU module also consumes a certain amount of energy:

e_n^up = q_n · t_n^up + L_n

where q_n is the transmit power of mobile device n and L_n is the extra energy the device consumes after transmitting the data; this extra "tail" energy consumption is a phenomenon common to all mobile devices.
Once device n has successfully uploaded the task to the MEC server, execution begins. Assuming the MEC server allocates a virtual machine of computing capability f_n^c to device n, the execution time on the MEC server can be expressed as

t_n^exec = d_n / f_n^c

When the MEC server's computing capability cannot satisfy the demands of all users, later users' tasks must make the second-stage decision at the MEC server. If the second-stage decision is to wait at the MEC, the execution time can be expressed as

t_n^exec = t_wait + d_n / f_n^c

where t_wait is the queueing delay caused by insufficient MEC computing resources; when MEC computing resources suffice, t_wait = 0. The calculation of t_wait when MEC resources are insufficient is given in Section 2.1.4.
The time to return the calculated result can be expressed as:
t_n^down = r_n / r_n(a)
where r_n is the result data volume of the task (the downlink rate equals the upload rate r_n(a)). The energy the mobile device consumes receiving the result can be expressed as:
e_n^down = p_n · t_n^down
where p_n is the receive power of the mobile device. The aggregate delay of execution on the MEC server can then be calculated as:
T_n^MEC = t_n^up + t_n^exe + t_n^down
and the aggregate energy consumption of execution on the MEC server is obtained as:
E_n^MEC = e_n^up + e_n^down
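The MEC-side aggregate delay and energy can be sketched as follows; all names and the example values are illustrative assumptions.

```python
def mec_totals(t_up, e_up, d_n, f_mec, t_wait, r_result, r_down, p_n):
    """Aggregate delay and energy when the task runs on the MEC server (sketch).

    d_n / f_mec -- execution time on the allocated virtual machine
    t_wait      -- queueing delay (0 when MEC resources suffice)
    r_result    -- result data volume; r_down -- downlink rate
    p_n         -- receive power of the device
    """
    t_exe = d_n / f_mec + t_wait
    t_down = r_result / r_down
    total_delay = t_up + t_exe + t_down
    # The device only spends energy transmitting and receiving;
    # remote execution itself costs the device nothing.
    total_energy = e_up + p_n * t_down
    return total_delay, total_energy
```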
(3) central cloud server computation model
If the second stage selects offloading to the central cloud server, the overall delay can be expressed as:
T_n^cloud = t_ac^up + t_n^c-exe + t_ac^down
where t_ac^up, t_n^c-exe and t_ac^down respectively denote the time to upload the task from the MEC to the central cloud server, the execution time on the central cloud server, and the time to return the result from the central cloud server to the MEC. They can be calculated as:
t_ac^up = b_n / R_ac,  t_n^c-exe = d_n / f^cloud,  t_ac^down = r_n / R_ac
where f^cloud is the computing capability of the central cloud server, considered here to be identical for all tasks.
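The central-cloud branch of the delay model can be sketched in the same style; names are illustrative.

```python
def cloud_delay(b_n, d_n, r_result, R_ac, f_cloud):
    """Overall delay when the second stage offloads to the central cloud (sketch).

    The task data go MEC -> cloud and the result comes cloud -> MEC,
    both at the fixed backbone rate R_ac; execution runs at f_cloud.
    """
    return b_n / R_ac + d_n / f_cloud + r_result / R_ac
```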
(4) server integrated load model
Summarizing the above, the overall load of a computing task when executed on a server (the MEC server or the central cloud server) can be obtained as the weighted combination:
Z_n^server = λ_t^n · T_n^server + λ_e^n · E_n^server
where T_n^server and E_n^server are the delay and energy of server execution: on the MEC server (including t_wait) when x_n = 1, or on the central cloud server when x_n = 0. Here the coefficients λ_t^n and λ_e^n carry the same meaning as in the local computing load model, and x_n is exactly the decision result of the second stage.
2.1.4 Queueing-theory-based MEC server waiting-time prediction
In this section we discuss the prediction of the queue waiting time when the MEC server serves multiple tasks. From the scenario description in Section 4.2, the MEC server is a single-queue, multi-server queueing model. According to Little's Law in queueing theory, under equilibrium conditions the average time a task waits at the MEC server equals the system's average waiting-queue length divided by the average arrival rate of tasks, i.e.:
W = L̄ / λ̄
where L̄ is the average waiting-queue length and λ̄ is the average arrival rate of tasks. These two parameters must next be measured, and the measurement is carried out at the MEC server.
For L̄, in each time slot t we can count the number of tasks waiting at the MEC, N_t − C, and average over time to obtain the average waiting-queue length, where N_t is the total number of tasks at time t and C is the number of users the MEC server can serve simultaneously. At the same time, the average arrival rate of tasks can be obtained as:
λ̄ = (N_t − N_0) / t
where N_0 is the number of tasks in the system when the decision starts, not the number of tasks when the system is initialized.
In this way, the waiting time at any moment can be predicted; this predicted value plays an important role in the second-stage decision.
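The waiting-time prediction above can be sketched as follows; the `max(..., 0)` guard for slots with no queue, and all parameter names, are illustrative assumptions.

```python
def predict_wait(N_history, N0, C):
    """Little's-Law waiting-time prediction at the MEC server (sketch).

    N_history -- observed total task counts N_t for slots t = 1..T
    N0        -- tasks in the system when the decision starts
    C         -- users the MEC server can serve simultaneously

    Average queue length L_bar is the time-average of the waiting tasks
    (N_t - C, clamped at 0 as an assumption); average arrival rate
    lam_bar is (N_T - N0) / T; Little's Law gives W = L_bar / lam_bar.
    """
    T = len(N_history)
    L_bar = sum(max(n - C, 0) for n in N_history) / T
    lam_bar = (N_history[-1] - N0) / T
    if lam_bar <= 0:
        return 0.0           # no net arrivals: no predicted wait
    return L_bar / lam_bar
```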
2.2 Two-stage task offloading strategy based on game theory
In this section, using the load models obtained above, we attempt to solve the task-offloading strategy under the multi-user scenario: the problem is transformed into a multi-user game, and a method for solving it is then given.
2.2.1 Beneficial task offloading
From the examination of the traffic model and the computing-task load model above, we can see that if too many mobile devices select the same channel for task offloading, their mutual interference becomes severe: the data rate between each mobile device and the base station drops, uploading the task data takes longer, and the longer upload time in turn costs the mobile devices more energy. In such cases a mobile device is better suited to executing the task locally, avoiding the excessive load caused by over-offloading.
Since each mobile user is an individual, under the multi-user scenario each one considers only its own interests, namely completing its computing task with the smallest energy consumption and the shortest delay. The concept of beneficial task offloading is defined as follows:
Definition 1: given a multi-user task-offloading decision vector a, for a user n that has chosen to offload its task (decision result a_n ≥ 0), if the load of executing the computing task offloaded on the MEC server is smaller than the load of executing it locally, the offloading is called beneficial task offloading.
The concept of beneficial task offloading is of great significance in the task-offloading strategy of mobile edge computing. On the one hand, from the mobile user's perspective, a user will not offload its task to the MEC server if doing so cannot achieve a smaller load than local execution; only when offloading yields a smaller load does the user have an incentive to offload. On the other hand, from the perspective of the MEC server operator, more users reaching the beneficial offloading state means more users for the MEC server, i.e. higher revenue. Therefore, we can compare the computational loads (the local load versus the load when offloaded to the MEC) according to the beneficial-offloading concept to obtain the offloading strategy.
With the concept of beneficial task offloading, our objective under a multi-user scenario is to make as many users as possible reach the beneficial state through task offloading. This model can be expressed by the following formula:
max_a Σ_{n∈N} I(Z_n^server(a) < Z_n^local)
subject to: a_n ∈ {−1} ∪ M for every n ∈ N,
where I(·) is an indicator function, defined to equal 1 when its condition holds and 0 otherwise.
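The objective just defined, counting users in the beneficial state, can be sketched as follows; names are illustrative.

```python
def beneficial_count(Z_local, Z_server, a):
    """Number of users in the beneficial offloading state (sketch).

    Z_local, Z_server -- per-user loads for local vs. server execution
    a -- decision vector; a[n] >= 0 means user n offloads on channel a[n]

    A user counts as beneficial when it offloads AND its server-side
    load is strictly below its local load (the indicator I(.) above).
    """
    return sum(
        1
        for n, a_n in enumerate(a)
        if a_n >= 0 and Z_server[n] < Z_local[n]
    )
```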
The task-offloading policy problem under the multi-user scenario can now be stated as: make as many users as possible reach the beneficial offloading state through computing-task offloading. However, by relating this problem to the maximum-cardinality bin packing problem with multiple bins, it can be shown that the problem is NP-hard. This is proved below.
To prove this, first introduce the maximum-cardinality bin packing problem with multiple bins. Given N objects, where object i has weight w_i, i ∈ {1, …, N}, and M bins each of capacity C, the goal is to pack objects into the bins so that as many objects as possible are packed, subject to the capacity constraint of each bin.
This maximum bin packing problem is known to be NP-hard. Now map each user's mobile device to an object in the maximum bin packing problem and each channel to a bin: the weight of an object can be expressed as w_i = q_n · g_{n,s}, and the capacity of a bin can be expressed as the data rate of the channel. If a device n selects channel m, this can be interpreted as placing object n into bin m; if the device has reached the beneficial offloading state, bin m does not exceed its capacity limit. In this way, the problem of this chapter is transformed into the maximum bin packing problem.
Therefore, the problem of this chapter can be shown to be NP-hard, and an exact solution cannot be obtained by conventional means. Below, the problem is instead transformed into a multi-user game.
2.2.2 Constructing the multi-user game model
Next we construct the multi-user game model. Let a_{−n} = (a_1, …, a_{n−1}, a_{n+1}, …, a_N) denote the decision results of all users other than user n. If user n has obtained everyone else's decision results a_{−n}, user n needs to make its offloading decision based on this information: either execute on the local CPU (a_n = −1), or select a channel and offload the task (a_n ≥ 0). The decision rule is as follows:
a_n = argmin_{a_n ∈ {−1}∪M} Z_n(a_n, a_{−n})
where Z_n(a_n, a_{−n}) denotes the load function of user n, equal to the local load when a_n = −1 and to the server load when a_n ≥ 0.
In this way, the multi-user computing-task offloading decision game Γ = (N, {A_n}_{n∈N}, {Z_n}_{n∈N}) is constructed, where N is the user set, A_n is the strategy set, and Z_n is the computational load each user seeks to minimize. The Nash equilibrium of this game is introduced next.
Definition 2: a decision-result set a* = (a_1*, …, a_N*) is a Nash equilibrium of the multi-user computing-task offloading decision game if, under the decision-result set a*, no user can reduce its own computational load by changing its own decision result, i.e.:
Z_n(a_n*, a_{−n}*) ≤ Z_n(a_n, a_{−n}*) for all a_n ∈ {−1} ∪ M and all n ∈ N.
According to existing research, the multi-user computing-task offloading decision game possesses a Nash equilibrium, and the Nash-equilibrium state can be reached through a finite number of iterations.
For a user whose decision is to offload, the second-stage decision is further required; x_n is obtained by comparing the load of waiting at the MEC server with the load of further offloading to the central cloud server and choosing the smaller.
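The second-stage rule for x_n can be sketched as a direct load comparison; this is a minimal illustration of the decision semantics, not the patent's exact formula.

```python
def second_stage(Z_wait, Z_cloud):
    """Second-stage decision x_n when MEC resources are insufficient (sketch).

    Z_wait  -- load if the task queues and executes on the MEC server
    Z_cloud -- load if the task is further offloaded to the central cloud
    Returns x_n: 1 (wait at the MEC) or 0 (offload to the central cloud),
    whichever load is smaller.
    """
    return 1 if Z_wait <= Z_cloud else 0
```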
2.2.3 Solving the Nash equilibrium with the two-stage task offloading algorithm
Next, a two-stage task-offloading algorithm that solves for the Nash equilibrium is given.
The algorithm is designed as centrally coordinated distributed decision-making. First, the central base station has a time-synchronization function, so the operations of all mobile devices can be synchronized by the base-station clock. In each time slot, every mobile device attempts to update its own decision result to reduce its computational load, but not every update request is granted by the central base station. The decision-result update in each time slot therefore consists of three steps:
Step 1: Wireless interference measurement
At this stage, each mobile device obtains the essential information of all channels from the base station, from which it can calculate the channel interference. Every mobile device that currently selects offloading (i.e. a_n(t) ≥ 0) sends a marking signal to the base station; this marking signal can be the ID of the channel this mobile device has selected. After receiving all the marking signals, the base station can calculate the received power of each channel m:
P_m = Σ_{n: a_n(t) = m} q_n · g_{n,s}
The base station then sends this information to all mobile devices, and each mobile device n can calculate the interference.
That is, for the channel a_n(t) selected by mobile device n, the resulting interference equals the channel's total received power minus the power of mobile device n; for any other channel, the interference is simply that channel's received power.
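The per-channel interference rule just stated can be sketched as follows; the parameter names are illustrative.

```python
def interference(total_power, own_power=0.0, selected=False):
    """Interference a device sees on one channel (sketch).

    total_power -- total received power of the channel at the base station
    own_power   -- q_n * g_{n,s}, the device's own contribution
    selected    -- True if the device itself transmits on this channel

    On the device's own channel, interference is the total received power
    minus its own contribution; on any other channel it is the total power.
    """
    return total_power - own_power if selected else total_power
```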
Step 2: MEC waiting-delay prediction
At this stage, the MEC server needs to predict the average waiting time. When the service demand is satisfied, the waiting time t_wait = 0; when the service demand is not satisfied, t_wait must be predicted according to formula (2-17), and this value is sent to the mobile devices together with the wireless interference from Step 1.
Step 3: Offloading decision update
Each mobile device has obtained the interference of every channel in Step 1 and the waiting delay at the MEC server in Step 2. In this stage, each mobile device uses these data to calculate its best-response set according to the corresponding formula.
If the calculated Δ_n(t) is non-empty, this mobile device has not yet reached the Nash-equilibrium state and can reduce its computational load by updating its decision; it therefore selects a decision result and sends an RTU signal to the base station. After the base station receives all RTU signals, it randomly selects one or more mutually non-interfering mobile devices and permits their decision updates; the mobile devices that do not receive a UA signal do not update their decisions in the next time slot. Meanwhile, a user who needs to wait must make the second-stage decision according to formula (2-26). After a finite number of iterations, all mobile devices reach the Nash-equilibrium state, i.e. no mobile device can reduce its computational load by updating its own decision. The algorithm thus solves the computing-task offloading decision problem under the multi-user scenario.
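The three-step update loop above can be sketched as a toy best-response iteration; the load function, the random base-station permission, and all parameters are illustrative assumptions, abstracting away the concrete channel model.

```python
import random

def best_response_game(loads_fn, n_users, n_channels, slots=500, seed=1):
    """Toy distributed best-response loop for the offloading game (sketch).

    loads_fn(n, a) -- load of user n under joint decision vector a, where
                      a[n] in {-1} (local) or {0..n_channels-1} (offload).
    Each slot, every user computes choices that strictly reduce its load
    (its best-response set); the base station grants one improving user
    permission to update, modelled here by a seeded random choice. The
    loop stops when no user can improve: a Nash equilibrium.
    """
    rng = random.Random(seed)
    a = [-1] * n_users                       # start with local execution
    for _ in range(slots):
        improvers = []
        for n in range(n_users):
            best, best_load = a[n], loads_fn(n, a)
            for choice in [-1] + list(range(n_channels)):
                trial = a[:]
                trial[n] = choice
                if loads_fn(n, trial) < best_load:
                    best, best_load = choice, loads_fn(n, trial)
            if best != a[n]:
                improvers.append((n, best))
        if not improvers:                    # Nash equilibrium reached
            return a
        n, choice = rng.choice(improvers)    # base-station permission
        a[n] = choice
    return a
```

With a simple congestion-style load (local cost 1.0, channel cost growing with occupancy), the users spread across channels and the loop terminates at an equilibrium.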
3 Demonstration of the effect of the present invention
In this section, the algorithm above is verified by simulation experiments. The multi-user simulation scenario is as follows: the cell center has one MEC server, which serves all mobile users in the cell. This MEC server is assumed to have limited computing capability and can serve only a certain number of mobile users simultaneously; users beyond that number need to make the second-stage decision. The mobile users to be served in the cell are distributed within a certain range according to a random probability distribution.
3.1 Simulation experiment parameter selection
The parameters of the simulation experiments are given in the following table:
Table 3-1 Simulation experiment parameter selection
To reflect the individual demands of different mobile users, four typical mobile computing tasks are chosen here; different tasks have different delay requirements, and the computation amounts and data volumes of the tasks also differ. The specific parameters are as follows:
Table 3-2 Parameter selection of different applications
In the simulation experiments, a task type is selected at random for each mobile user, so that each user has its own data volume and computation amount.
3.2 Simulation experiment results and analysis
Fig. 1 shows the number of users in the beneficial offloading state during the iterations:
As can be seen from Fig. 1, the number of users in the beneficial offloading state increases continuously during the iterations and finally reaches a stable state, showing that the algorithm reaches the Nash-equilibrium state within a finite number of iterations (35 iterations in the current simulation experiment); of the 30 users, 25 chose to offload their tasks to the MEC server for execution.
Fig. 2 shows the number of iterations required for the system to reach the Nash-equilibrium state under different numbers of users:
As can be seen from Fig. 2, the number of iterations required to reach the Nash equilibrium grows roughly linearly with the number of users, showing that the multi-user distributed task-offloading algorithm proposed in this chapter has good performance.
Fig. 3 shows the stacked line chart of all user loads as the number of iterations increases. As can be seen from Fig. 3, the user loads keep changing with the iterations but finally stabilize, and the system as a whole reaches an equilibrium state, i.e. the Nash equilibrium of the multi-user game. Meanwhile, the topmost line, which is the system overall load curve, decreases continuously and finally stabilizes.
Next, the proposed multi-user two-phase computing offloading algorithm (TCO) is compared with several schemes to test its performance. The comparison schemes are as follows:
(1) Local Executed (LE): every user's task is executed locally on the mobile device and is not offloaded to any server;
(2) Cloud Computing (CC): every user randomly selects a channel and offloads its computing task through this channel to the MEC server for execution;
(3) Ignore Waiting Delay (IWD): when the MEC server's computing resources are limited, this scheme ignores the waiting delay of offloaded tasks queueing at the MEC; regardless of whether the MEC server has sufficient computing resources, tasks always execute on the MEC and are never further offloaded to the central cloud server.
In the simulation experiments, several different runs are carried out for each scheme. The number of mobile users N is chosen from N = 15, 20, …, 50, while the MEC server is assumed to serve at most 30 users simultaneously. For each N, the experiment is repeated 100 times and the average is taken as the final result, as shown in Fig. 4 and Fig. 5.
Fig. 4 shows the number of users in the beneficial offloading state under each scheme for different numbers of users. Scheme LE does not appear in Fig. 4, because when all users execute locally and none offloads, the concept of beneficial task offloading is meaningless. As can be seen from the figure, under different numbers of users the multi-user distributed task-offloading algorithm proposed in this chapter places the most users in the beneficial offloading state, 27.4% more than executing everything on the cloud. Comparing scheme TCO with scheme IWD, when N ≤ 30 the two have the same number of users in the beneficial offloading state; when N > 30, although IWD places more users in the beneficial state than CC, it is still slightly below TCO. This is because the only difference between the two is that TCO takes the computing limit of the MEC server into account and makes the two-stage decision when MEC resources are insufficient, whereas IWD does not: even when the MEC server's computing resources are insufficient, tasks are not offloaded further, causing IWD more queueing delay.
Specifically, Fig. 5 shows the comparison of the number of users in the beneficial offloading state between TCO and IWD when the number of users exceeds the maximum the MEC can serve:
As can be seen from the figure, TCO achieves a certain performance improvement over IWD when MEC resources are insufficient, although the improvement is small. This is because further offloading a task from the MEC to the central cloud server requires communication over the backbone network, whose rate is much lower than the wireless rate between the mobile device and the MEC, so the communication load between the MEC server and the central cloud server is high. Nevertheless, the proposed two-tier server offloading structure is significant for further research. For example, when multiple MEC servers are connected to a central cloud server and each MEC server serves multiple users, this two-tier offloading structure will work very well in the case of users moving between MEC servers. Due to time and space constraints, that part is not covered here.
Fig. 6 shows the system overall load under each scheme for different numbers of users. As can be seen from Fig. 6, scheme LE has the largest system overall load, and all the other schemes reduce the overall load relative to LE, showing that offloading tasks for cloud execution clearly benefits users. Among the remaining offloading strategies, the multi-user distributed task-offloading algorithm proposed in this chapter has the smallest system overall load, on average 67.5% lower than scheme LE, an obviously significant performance improvement. Compared with scheme CC, since CC offloads all tasks to the cloud without considering the users' mutual interference, some of its offloads bring no performance improvement. The comparison with scheme IWD is similar, again because IWD ignores the waiting delay at the MEC server.
Meanwhile multi-user's two stages task that this chapter is proposed unloads algorithm when unloading only by user Necessary sub-fraction data are uploaded, and specific decision is still locally carried out in mobile device, ensures that number in this way According to it is privately owned with it is safe.From the result and analysis of simulated experiment above, we are it can be concluded that several conclusions as follows:
1. task unloading is necessary.If task can bring huge load without unloading.By the way that task is unloaded It is downloaded on cloud and executes, the load of task execution can be greatly reduced;
2. unloading strategy is of great significance in task unloading.It is negative that suitable unloading algorithm can be substantially reduced system It carries, brings benefit to mobile subscriber and network operation commercial city;
3. multi-user's two stages task, which unloads algorithm, is unloaded to center cloud service by whether continuing in second stage decision Device has taken into account the limited problem of MEC server computing resource, can preferably adapt to real usage scenario;
4. simultaneously, multi-user's two stages task unloading algorithm fully considers and meet the demand of user individual, pass through The different focus of user are integrated into a task execution load by weight factor, can be good at the personalized need for meeting user It asks.And user can be according to suitably selecting different weights the case where itself.

Claims (4)

1. A mobile edge computing task offloading method under a multi-user scenario, characterized in that the multi-user scenario is: a plurality of mobile devices are connected with an MEC server, each mobile device may select one of a plurality of channels between the mobile device and the MEC server for communication, and the MEC server is connected with a central cloud server through a backbone network;
The detailed process is:
Step 1: constructing the task offloading model under the multi-user scenario
The task offloading model under the multi-user scenario is constructed; it comprises a traffic model, a computing task model and a computing task load model;
Step 2: the two-stage task offloading strategy based on game theory
The first-stage offloading strategy determines whether execution takes place on the mobile device or on the MEC server; the second-stage offloading strategy is: when the MEC server's resources are insufficient, determine whether the task waits on the MEC server or is executed in the central cloud server.
2. The mobile edge computing task offloading method under a multi-user scenario according to claim 1, characterized in that,
in Step 1, the process of constructing the task offloading model under the multi-user scenario is:
Construction of the traffic model
The traffic model includes the mobile devices, the MEC server and the central cloud server;
Mobile device and MEC server: the mobile device and the MEC server communicate wirelessly; assuming each base station has M channels, the set of channels can be expressed as M = {0, 1, …, M−1};
Mobile device: the first-stage decision result of mobile device n is denoted by the symbol a_n, where a_n ∈ {−1} ∪ M; when a_n = −1, the computing task is executed on the local CPU of the mobile device; when a_n ∈ M, the computing task is offloaded through channel a_n to the MEC server for execution; the second-stage decision result is denoted by the symbol x_n, where x_n ∈ {0, 1}; when x_n = 1 the task executes on the MEC server, i.e. even if MEC resources are insufficient it continues to wait in the MEC queue; when x_n = 0 the task is further offloaded to the central cloud server for execution;
For all mobile devices, the decision-result vectors a = (a_1, a_2, …, a_N) and x = (x_1, x_2, …, x_N) are obtained; the upload data rate of each channel can be calculated from the decision-result vector:
r_n(a) = w · log2(1 + q_n·g_{n,s} / (σ_0 + Σ_{m≠n: a_m = a_n} q_m·g_{m,s}))
where w is the bandwidth of the channel, q_n denotes the transmit power of the mobile device, g_{n,s} denotes the channel gain between mobile device n and base station s, and σ_0 denotes the background noise power;
The downlink data rate between the mobile device and the MEC server is identical to the upload data rate;
MEC server and central cloud server: the MEC server and the central cloud server have a wired connection through the backbone network; the symbol R_ac denotes the transmission data rate between the MEC server and the central cloud server, and this rate is assumed constant throughout system execution, i.e. every task that needs to be offloaded to the central cloud server has the same data transmission rate;
Computing task model
The computing task model is expressed as a triple T_n = (b_n, d_n, r_n), where b_n denotes the data volume the computing task needs, including program code and input parameters; if the task is to be offloaded to the MEC server, this part of the data needs to be uploaded to the server through the TU module of the mobile device; d_n denotes the computation amount the task requires, expressed in CPU operations; r_n denotes the result data of the computing task, which needs to be transmitted from the MEC server down to the mobile device if the task was offloaded;
The computing task load model includes:
(1) Local computing model
When user n decides to execute computing task T_n locally, the only process involved is the local CPU of the mobile device executing the task; assuming the computing capability of the mobile device is f_n^local, the local execution time is as follows:
t_n^local = d_n / f_n^local
The energy consumption of executing the computing task is expressed as:
e_n^local = c_n · t_n^local
where c_n denotes the power of the local CPU of the mobile device;
From the delay and energy consumption of the computing task, the overall load model when the task executes on the local CPU is established:
Z_n^local = λ_t^n · t_n^local + λ_e^n · e_n^local
where the coefficients λ_t^n and λ_e^n respectively denote the weights mobile device n places on task delay and on energy consumption when making the offloading decision; the two coefficients satisfy the relationship λ_t^n + λ_e^n = 1;
When λ_t^n is larger, mobile device n places more emphasis on task delay and is more delay-sensitive; when λ_e^n is larger, the battery level of mobile device n is lower;
(2) MEC server computing model
When the mobile user decides to offload the computing task to the MEC server for execution, the mobile device first needs to upload the computing task T_n to the MEC server through a suitable channel; the MEC server then executes the specific task on behalf of the mobile device; the computing task is completed in three steps: task upload, cloud execution and result return;
In the task-upload stage under the multi-user scenario, each mobile device needs to select one channel over which to communicate with the MEC server;
In the task-upload stage, mobile device n needs additional delay and energy consumption to complete the offloading; the mobile device first needs to select a channel to upload the task data, and the upload delay is:
t_n^up = b_n / r_n(a)
where b_n denotes the data volume the mobile device needs to upload and r_n(a) denotes the data rate of the channel selected by mobile device n; when uploading the computing task, the TU module needs to consume a certain amount of energy:
e_n^up = q_n · t_n^up + L_n
where q_n denotes the transmit power of mobile device n, and L_n denotes the additional energy the mobile device needs to consume after transmitting data;
When mobile device n has successfully uploaded the computing task to the MEC server, task execution begins; assuming the computing capability of the virtual machine the MEC server allocates to mobile device n is f_n^MEC, the execution time on the MEC server can be expressed as:
t_n^exe = d_n / f_n^MEC
When the computing capability of the MEC server cannot satisfy the computing demand of all users, later users' tasks require the second-stage decision at the MEC server; if the second-stage decision is to wait at the MEC, the execution time can be expressed as:
t_n^exe = d_n / f_n^MEC + t_wait
where t_wait denotes the queueing delay caused by insufficient MEC computing resources; when MEC computing resources are sufficient, t_wait = 0;
When MEC resources are insufficient, t_wait is calculated using the queueing-theory-based MEC server waiting-time prediction;
The time to return the calculated result can be expressed as:
t_n^down = r_n / r_n(a)
The energy the mobile device consumes receiving the result can be expressed as:
e_n^down = p_n · t_n^down
where p_n is the receive power of the mobile device; the aggregate delay of execution on the MEC server is calculated as follows:
T_n^MEC = t_n^up + t_n^exe + t_n^down
The aggregate energy consumption of execution on the MEC server is obtained as:
E_n^MEC = e_n^up + e_n^down
(3) central cloud server computation model
If the second stage selects offloading to the central cloud server, the overall delay can be expressed as:
T_n^cloud = t_ac^up + t_n^c-exe + t_ac^down
where t_ac^up, t_n^c-exe and t_ac^down respectively denote the time for the task to upload from the MEC to the central cloud server, to execute on the central cloud server, and to return the result from the central cloud server to the MEC:
t_ac^up = b_n / R_ac,  t_n^c-exe = d_n / f^cloud,  t_ac^down = r_n / R_ac
where f^cloud is the computing capability of the central cloud server;
(4) Server integrated load model
The overall load of the computing task when executed on the MEC server or the central cloud server is the weighted combination (using λ_t^n and λ_e^n) of the server-side delay and energy, where x_n selects between the MEC branch (x_n = 1, including t_wait) and the central-cloud branch (x_n = 0);
the coefficients λ_t^n and λ_e^n carry the same meaning as in the local computing load model, and x_n is the decision result of the second stage.
3. The mobile edge computing task offloading method under a multi-user scenario according to claim 2, characterized in that the process of the queueing-theory-based MEC server waiting-time prediction is:
According to Little's Law in queueing theory, under equilibrium conditions the average time a task waits at the MEC server equals the system's average waiting-queue length divided by the average arrival rate of tasks, i.e.:
W = L̄ / λ̄
where L̄ is the average waiting-queue length and λ̄ is the average arrival rate of tasks; the measurement of these two parameters is carried out at the MEC server;
For L̄, the number of tasks waiting at the MEC, N_t − C, is counted in each time slot t, and the average waiting-queue length is calculated as time increases;
N_t denotes the total number of tasks at time t, and C is the number of users the MEC server can serve simultaneously; their difference is exactly the number of tasks that must queue;
At the same time, the average arrival rate of tasks is obtained as:
λ̄ = (N_t − N_0) / t
where N_0 is the number of tasks in the system when the decision starts, not the number of tasks when the system is initialized.
4. The mobile edge computing task offloading method under a multi-user scenario according to claim 3, characterized in that, in Step 2, the process of the two-stage task offloading strategy based on game theory is:
The server integrated load model obtained above is used to solve the task-offloading problem under the multi-user scenario; the problem is transformed into a multi-user game, and the method of solving it is then given;
Step 1, determining the optimization objective
The objective is to make as many users as possible reach the beneficial state through task offloading under the multi-user scenario; this optimization objective can be expressed by the following formula:
max_a Σ_{n∈N} I(Z_n^server < Z_n^local)
where I(·) is an indicator function, defined to equal 1 when its condition holds and 0 otherwise;
Step 2: build the multi-user game model

Let $a_{-n} = (a_1, \ldots, a_{n-1}, a_{n+1}, \ldots, a_N)$ denote the decisions of all users other than user $n$. Once user $n$ has obtained everyone else's decisions $a_{-n}$, it must make its own offloading decision based on the available information: when $a_n = -1$ the task executes on the local CPU, and when $a_n \ge 0$ user $n$ selects channel $a_n$ for task offloading;

The decision is made according to:

$a_n = \arg\min_{a_n \in A_n} Z_n(a_n, a_{-n})$

where $Z_n(a_n, a_{-n})$ is the load function of user $n$;

$\Gamma = (\mathcal{N}, \{A_n\}_{n\in\mathcal{N}}, \{Z_n\}_{n\in\mathcal{N}})$, where $\mathcal{N}$ is the user set, $A_n$ is the strategy set of user $n$, and $Z_n$ is the minimum computational load each user obtains.

$\Gamma$ denotes the multi-user computing-task offloading decision game;
The Nash equilibrium of this game is given as follows.

Definition: a decision profile $a^* = (a_1^*, \ldots, a_N^*)$ is a Nash equilibrium of the multi-user computing-task offloading decision game if, under the profile $a^*$, no user can reduce its own computational load by changing only its own decision; i.e.:

$Z_n(a_n^*, a_{-n}^*) \le Z_n(a_n, a_{-n}^*) \quad \forall a_n \in A_n,\ \forall n \in \mathcal{N}$

The multi-user computing-task offloading decision game has a Nash equilibrium, and the Nash equilibrium state can be reached within a finite number of iterations;
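The definition above can be checked mechanically: a profile is a Nash equilibrium exactly when no unilateral deviation lowers any user's load. A hedged sketch, with an illustrative congestion-style load function standing in for the patent's $Z_n$:

```python
def is_nash_equilibrium(profile, strategy_sets, load):
    """profile[n] is user n's decision; strategy_sets[n] its options A_n;
    load(n, a_n, profile) returns Z_n given everyone else's decisions."""
    for n, a_star in enumerate(profile):
        current = load(n, a_star, profile)
        for alt in strategy_sets[n]:
            if load(n, alt, profile) < current:
                return False   # user n could deviate profitably
    return True

# Illustrative load (ours, not the patent's): offloading to a channel
# costs 1 plus 2 per other user sharing it; local execution costs 2.
def example_load(n, a_n, profile):
    if a_n == -1:
        return 2
    sharing = sum(1 for m, a in enumerate(profile) if m != n and a == a_n)
    return 1 + 2 * sharing
```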
For a user whose decision is to offload, the second-stage decision must also be carried out, and $x_n$ is obtained by formula (2-26);
Step 3: solve for the Nash equilibrium with the two-stage task offloading algorithm; the process is:

Stage 1: wireless interference measurement

The base station calculates the received power of each channel and sends the per-channel received power to all mobile devices; each mobile device $n$ can then calculate the interference as follows:

For the channel $a_n(t)$ selected by mobile device $n$, the resulting interference equals the channel's total received power minus the power of mobile device $n$ itself; for every other channel, the interference is simply that channel's received power;
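Stage 1's interference rule can be sketched as follows, assuming the base station reports the total received power per channel and each device knows its own received-power contribution; the function name and data shapes are illustrative.

```python
def interference(channel_rx_power, own_channel, own_power):
    """Per-channel interference seen by one mobile device.

    channel_rx_power -- total received power per channel, {channel: power}
    own_channel      -- channel a_n(t) chosen by this device (-1 = local)
    own_power        -- this device's own received power at the base station
    """
    result = {}
    for ch, p in channel_rx_power.items():
        if ch == own_channel:
            # on its own channel, subtract the device's own contribution
            result[ch] = p - own_power
        else:
            # on other channels, interference is the total received power
            result[ch] = p
    return result
```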
Stage 2: MEC waiting-delay prediction

The MEC server must predict the average waiting time: when the service demand is met, the waiting time is $t_{wait} = 0$; when the service demand is not met, the waiting time is predicted according to the formula $t_{wait} = \bar{L}/\bar{\lambda}$, and this prediction is sent to the mobile devices together with the wireless interference from stage 1;
Stage 3: offloading decision update

Each mobile device has obtained the interference of every channel in stage 1 and the waiting delay at the MEC server in stage 2. In the offloading decision update, each mobile device combines these two pieces of data to calculate its best response set $\Delta_n(t)$;

If the calculated $\Delta_n(t)$ is non-empty, this mobile device has not yet reached the Nash equilibrium state and can reduce its computational load by updating its decision; it therefore selects a decision result and sends an RTU signal to the base station. After the base station has received all RTU signals, it randomly selects one or more mutually non-interfering mobile devices and allows them to update their decisions; the other mobile devices, which do not receive the UA signal, do not update their decisions in the next time slot. Meanwhile, users that need to wait carry out the second-stage decision according to formula (2-26). After a finite number of iterations, all mobile devices reach the Nash equilibrium state; that is, no mobile device can reduce its computational load by updating its own decision.
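Putting the three stages together, the decision-update loop can be sketched as a best-response iteration in which one requesting device is granted an update per slot. This is our simplified reading of the RTU/UA exchange, with an illustrative shared-channel load function in place of the patent's formulas.

```python
import random

def best_response_iteration(n_users, strategy_sets, load, max_slots=1000):
    profile = [-1] * n_users          # everyone starts on the local CPU
    for _ in range(max_slots):
        requests = {}
        for n in range(n_users):
            current = load(n, profile[n], profile)
            better = [a for a in strategy_sets[n]
                      if load(n, a, profile) < current]
            if better:                 # Delta_n(t) non-empty: send RTU
                requests[n] = min(better, key=lambda a: load(n, a, profile))
        if not requests:               # no RTUs: Nash equilibrium reached
            return profile
        winner = random.choice(sorted(requests))   # base station grants UA
        profile[winner] = requests[winner]
    return profile

# Illustrative shared-channel load (ours): offloading cost grows with
# channel congestion; local execution costs a flat amount.
def shared_load(n, a_n, profile):
    if a_n == -1:
        return 2.0
    sharing = sum(1 for m, a in enumerate(profile) if m != n and a == a_n)
    return 1.0 + 2.0 * sharing
```

With two users and one channel, exactly one user ends up offloading, after which neither can improve unilaterally.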
CN201810774689.1A 2018-07-13 2018-07-13 Mobile edge computing task unloading method under multi-user scene Active CN108920279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810774689.1A CN108920279B (en) 2018-07-13 2018-07-13 Mobile edge computing task unloading method under multi-user scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810774689.1A CN108920279B (en) 2018-07-13 2018-07-13 Mobile edge computing task unloading method under multi-user scene

Publications (2)

Publication Number Publication Date
CN108920279A true CN108920279A (en) 2018-11-30
CN108920279B CN108920279B (en) 2021-06-08

Family

ID=64411784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810774689.1A Active CN108920279B (en) 2018-07-13 2018-07-13 Mobile edge computing task unloading method under multi-user scene

Country Status (1)

Country Link
CN (1) CN108920279B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170013495A1 (en) * 2015-07-10 2017-01-12 Lg Electronics Inc. Method and apparatus for an input data processing via a local computing or offloading based on power harvesting in a wireless communication system
CN107466482A (en) * 2017-06-07 2017-12-12 香港应用科技研究院有限公司 Joint determines the method and system for calculating unloading and content prefetches in a cellular communication system
CN107682443A (en) * 2017-10-19 2018-02-09 北京工业大学 Joint considers the efficient discharging method of the mobile edge calculations system-computed task of delay and energy expenditure
CN107819840A (en) * 2017-10-31 2018-03-20 北京邮电大学 Distributed mobile edge calculations discharging method in the super-intensive network architecture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUNFENG GUO ET AL.: "Energy-Efficient Resource Allocation for Multi-User Mobile Edge Computing", 《GLOBECOM 2017 - 2017 IEEE GLOBAL COMMUNICATIONS CONFERENCE》 *
YIBO YANG ET AL.: "Joint Optimization of Energy Consumption and Packet Scheduling for Mobile Edge Computing in Cyber-Physical Networks", 《SPECIAL SECTION ON CYBER-PHYSICAL-SOCIAL COMPUTING AND NETWORKING》 *

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767117A (en) * 2019-01-11 2019-05-17 中南林业科技大学 The power distribution method of Joint Task scheduling in mobile edge calculations
CN109767117B (en) * 2019-01-11 2021-05-18 中南林业科技大学 Power distribution method for joint task scheduling in mobile edge computing
CN109951897A (en) * 2019-03-08 2019-06-28 东华大学 A kind of MEC discharging method under energy consumption and deferred constraint
CN110113140A (en) * 2019-03-08 2019-08-09 北京邮电大学 A kind of mist calculates the calculating discharging method in wireless network
CN110113140B (en) * 2019-03-08 2020-08-11 北京邮电大学 Calculation unloading method in fog calculation wireless network
CN110062026A (en) * 2019-03-15 2019-07-26 重庆邮电大学 Mobile edge calculations resources in network distribution and calculating unloading combined optimization scheme
CN109756912B (en) * 2019-03-25 2022-03-08 重庆邮电大学 Multi-user multi-base station joint task unloading and resource allocation method
CN109756912A (en) * 2019-03-25 2019-05-14 重庆邮电大学 A kind of multiple base stations united task unloading of multi-user and resource allocation methods
CN110022381A (en) * 2019-05-14 2019-07-16 中国联合网络通信集团有限公司 A kind of load sharing method and device
CN110377353A (en) * 2019-05-21 2019-10-25 湖南大学 Calculating task uninstalling system and method
CN110377353B (en) * 2019-05-21 2022-02-08 湖南大学 System and method for unloading computing tasks
CN110167059A (en) * 2019-05-22 2019-08-23 电子科技大学 BTS service amount prediction technique under a kind of edge calculations scene
CN110149401A (en) * 2019-05-22 2019-08-20 湖南大学 It is a kind of for optimizing the method and system of edge calculations task
CN110287024B (en) * 2019-06-12 2021-09-28 浙江理工大学 Multi-server and multi-user oriented scheduling method in industrial intelligent edge computing
CN110287024A (en) * 2019-06-12 2019-09-27 浙江理工大学 The dispatching method of multi-service oriented device multi-user in a kind of industrial intelligent edge calculations
CN110418356A (en) * 2019-06-18 2019-11-05 深圳大学 A kind of calculating task discharging method, device and computer readable storage medium
CN110493313A (en) * 2019-07-19 2019-11-22 北京邮电大学 A kind of method and system of the dispatch service use-case in based on mobile edge calculations network
CN110401714A (en) * 2019-07-25 2019-11-01 南京邮电大学 A kind of unloading target in edge calculations based on Chebyshev's distance determines method
CN110460650B (en) * 2019-07-25 2022-02-15 北京信息科技大学 Decision-making method and device for computation unloading in multi-edge server scene
CN110401714B (en) * 2019-07-25 2022-02-01 南京邮电大学 Unloading target determination method based on Chebyshev distance in edge calculation
CN110460650A (en) * 2019-07-25 2019-11-15 北京信息科技大学 The decision-making technique and device of unloading are calculated under multiple edge server scene
CN110399210A (en) * 2019-07-30 2019-11-01 中国联合网络通信集团有限公司 Method for scheduling task and device based on edge cloud
CN110399210B (en) * 2019-07-30 2021-10-01 中国联合网络通信集团有限公司 Task scheduling method and device based on edge cloud
WO2021023042A1 (en) * 2019-08-07 2021-02-11 华为技术有限公司 Method for searching edge computing server and related device
CN110536308A (en) * 2019-08-07 2019-12-03 中科边缘智慧信息科技(苏州)有限公司 A kind of multinode calculating discharging method based on game
CN110765365A (en) * 2019-10-25 2020-02-07 国网河南省电力公司信息通信公司 Method, device, equipment and medium for realizing distributed edge cloud collaborative caching strategy
CN110990130B (en) * 2019-10-28 2023-05-12 华东师范大学 Reproducible adaptive calculation unloading layering service quality optimization method
CN110990130A (en) * 2019-10-28 2020-04-10 华东师范大学 Reproducible self-adaptive computation unloading layered service quality optimization method
CN110708713A (en) * 2019-10-29 2020-01-17 安徽大学 Mobile edge calculation mobile terminal energy efficiency optimization method adopting multidimensional game
CN110708713B (en) * 2019-10-29 2022-07-29 安徽大学 Mobile edge calculation mobile terminal energy efficiency optimization method adopting multidimensional game
CN110809291B (en) * 2019-10-31 2021-08-27 东华大学 Double-layer load balancing method of mobile edge computing system based on energy acquisition equipment
CN110809291A (en) * 2019-10-31 2020-02-18 东华大学 Double-layer load balancing method of mobile edge computing system based on energy acquisition equipment
CN111124666A (en) * 2019-11-25 2020-05-08 哈尔滨工业大学 Efficient and safe multi-user multi-task unloading method in mobile Internet of things
CN111124666B (en) * 2019-11-25 2023-05-12 哈尔滨工业大学 Efficient and safe multi-user multi-task unloading method in mobile Internet of things
CN111160525A (en) * 2019-12-17 2020-05-15 天津大学 Task unloading intelligent decision method based on unmanned aerial vehicle group in edge computing environment
CN111258677B (en) * 2020-01-16 2023-12-15 北京兴汉网际股份有限公司 Task unloading method for heterogeneous network edge computing
CN111258677A (en) * 2020-01-16 2020-06-09 重庆邮电大学 Task unloading method for heterogeneous network edge computing
CN111262947A (en) * 2020-02-10 2020-06-09 深圳清华大学研究院 Calculation-intensive data state updating implementation method based on mobile edge calculation
CN111400001B (en) * 2020-03-09 2022-09-23 清华大学 Online computing task unloading scheduling method facing edge computing environment
CN111400001A (en) * 2020-03-09 2020-07-10 清华大学 Online computing task unloading scheduling method facing edge computing environment
CN111556143A (en) * 2020-04-27 2020-08-18 中南林业科技大学 Method for minimizing time delay under cooperative unloading mechanism in mobile edge computing
CN111930436A (en) * 2020-07-13 2020-11-13 兰州理工大学 Random task queuing and unloading optimization method based on edge calculation
CN111935677A (en) * 2020-08-10 2020-11-13 无锡太湖学院 Internet of vehicles V2I mode task unloading method and system
CN111935677B (en) * 2020-08-10 2023-05-16 无锡太湖学院 Internet of vehicles V2I mode task unloading method and system
CN112004239B (en) * 2020-08-11 2023-11-21 中国科学院计算机网络信息中心 Cloud edge collaboration-based computing and unloading method and system
CN112004239A (en) * 2020-08-11 2020-11-27 中国科学院计算机网络信息中心 Computing unloading method and system based on cloud edge cooperation
CN112039965B (en) * 2020-08-24 2022-07-12 重庆邮电大学 Multitask unloading method and system in time-sensitive network
CN112039965A (en) * 2020-08-24 2020-12-04 重庆邮电大学 Multitask unloading method and system in time-sensitive network
CN112835637B (en) * 2021-01-26 2022-05-17 天津理工大学 Task unloading method for vehicle user mobile edge calculation
CN112835637A (en) * 2021-01-26 2021-05-25 天津理工大学 Task unloading method for vehicle user mobile edge calculation
CN113238814A (en) * 2021-05-11 2021-08-10 燕山大学 MEC task unloading system and optimization method based on multiple users and classification tasks
CN113238814B (en) * 2021-05-11 2022-07-15 燕山大学 MEC task unloading system and optimization method based on multiple users and classification tasks
CN113347267B (en) * 2021-06-22 2022-03-18 中南大学 MEC server deployment method in mobile edge cloud computing network
CN113347267A (en) * 2021-06-22 2021-09-03 中南大学 MEC server deployment method in mobile edge cloud computing network
CN113791878A (en) * 2021-07-21 2021-12-14 南京大学 Distributed task unloading method for deadline perception in edge calculation
CN113791878B (en) * 2021-07-21 2023-11-17 南京大学 Distributed task unloading method for perceiving expiration date in edge calculation
CN113613260A (en) * 2021-08-12 2021-11-05 西北工业大学 Method and system for optimizing distance-distance cooperative perception delay moving edge calculation
CN113613260B (en) * 2021-08-12 2022-08-19 西北工业大学 Method and system for optimizing distance-distance cooperative perception delay moving edge calculation
CN113687876A (en) * 2021-08-17 2021-11-23 华北电力大学(保定) Information processing method, automatic driving control method and electronic equipment
CN113687876B (en) * 2021-08-17 2023-05-23 华北电力大学(保定) Information processing method, automatic driving control method and electronic device
CN113743012A (en) * 2021-09-06 2021-12-03 山东大学 Cloud-edge collaborative mode task unloading optimization method under multi-user scene
CN113743012B (en) * 2021-09-06 2023-10-10 山东大学 Cloud-edge collaborative mode task unloading optimization method under multi-user scene
CN113950103B (en) * 2021-09-10 2022-11-04 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113950103A (en) * 2021-09-10 2022-01-18 西安电子科技大学 Multi-server complete computing unloading method and system under mobile edge environment
CN113934534A (en) * 2021-09-27 2022-01-14 苏州大学 Method and system for computing and unloading multi-user sequence tasks under heterogeneous edge environment
CN114051266A (en) * 2021-11-08 2022-02-15 首都师范大学 Wireless body area network task unloading method based on mobile cloud-edge computing
CN114051266B (en) * 2021-11-08 2024-01-12 首都师范大学 Wireless body area network task unloading method based on mobile cloud-edge calculation
CN115134364A (en) * 2022-06-28 2022-09-30 西华大学 Energy-saving calculation unloading system and method based on O-RAN internet of things system
CN115134364B (en) * 2022-06-28 2023-06-16 西华大学 Energy-saving computing and unloading system and method based on O-RAN (O-radio Access network) Internet of things system
CN115696405A (en) * 2023-01-05 2023-02-03 山东省计算中心(国家超级计算济南中心) Computing task unloading optimization method and system considering fairness
CN115696405B (en) * 2023-01-05 2023-04-07 山东省计算中心(国家超级计算济南中心) Computing task unloading optimization method and system considering fairness
CN116600348A (en) * 2023-07-18 2023-08-15 北京航空航天大学 Mobile edge computing device computing unloading method based on game theory
CN116600348B (en) * 2023-07-18 2023-09-15 北京航空航天大学 Mobile edge computing device computing unloading method based on game theory

Also Published As

Publication number Publication date
CN108920279B (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN108920279A (en) A kind of mobile edge calculations task discharging method under multi-user scene
CN108809695B (en) Distributed uplink unloading strategy facing mobile edge calculation
KR101766708B1 (en) Service provisioning using abstracted network resource requirements
Shi et al. Large-scale convex optimization for ultra-dense cloud-RAN
CN110351754B (en) Industrial Internet machine equipment user data calculation unloading decision method based on Q-learning
CN110708713B (en) Mobile edge calculation mobile terminal energy efficiency optimization method adopting multidimensional game
CN107819840A (en) Distributed mobile edge calculations discharging method in the super-intensive network architecture
CN108541027A (en) A kind of communication computing resource method of replacing based on edge cloud network
CN107682443A (en) Joint considers the efficient discharging method of the mobile edge calculations system-computed task of delay and energy expenditure
Baştuğ et al. Proactive caching in 5G small cell networks
CN111182570A (en) User association and edge computing unloading method for improving utility of operator
CN113810233B (en) Distributed computation unloading method based on computation network cooperation in random network
CN107734482B (en) The content distribution method unloaded based on D2D and business
Zhao et al. Task proactive caching based computation offloading and resource allocation in mobile-edge computing systems
CN110519849B (en) Communication and computing resource joint allocation method for mobile edge computing
CN107105455A (en) It is a kind of that load-balancing method is accessed based on the user perceived from backhaul
CN107645731A (en) Load-balancing method based on self-organizing resource allocation in a kind of non-orthogonal multiple access system
CN108600020A (en) Method for processing business, device and server
CN110430593B (en) Method for unloading tasks of edge computing user
Le et al. Joint cache allocation with incentive and user association in cloud radio access networks using hierarchical game
Zhou et al. Knowledge transfer based radio and computation resource allocation for 5G RAN slicing
CN106105282B (en) The system and method for carrying out traffic engineering using link buffer zone state
CN113038583A (en) Inter-cell downlink interference control method, device and system suitable for ultra-dense network
CN114615705B (en) Single-user resource allocation strategy method based on 5G network
Park et al. Successful edge computing probability analysis in heterogeneous networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: He Hui

Inventor after: Zhang Weizhe

Inventor after: Fang Binxing

Inventor after: Liu Chuanyi

Inventor after: Yu Xiangzhan

Inventor after: Liu Yawei

Inventor after: Liu Guoqiang

Inventor before: Zhang Weizhe

Inventor before: Fang Binxing

Inventor before: He Hui

Inventor before: Liu Chuanyi

Inventor before: Yu Xiangzhan

Inventor before: Liu Yawei

Inventor before: Liu Guoqiang

GR01 Patent grant