CN109362064A - MEC-based task cache allocation strategy in mobile edge computing networks - Google Patents
MEC-based task cache allocation strategy in mobile edge computing networks
- Publication number
- CN109362064A (Application CN201811074167.7A)
- Authority
- CN
- China
- Prior art keywords
- video
- mec
- caching
- mec server
- server
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/50—Service provisioning or reconfiguring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/433—Content storage operation, e.g. storage operation in response to a pause request, caching operations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W28/00—Network traffic management; Network resource management
- H04W28/02—Traffic management, e.g. flow control or congestion control
- H04W28/10—Flow control between communication endpoints
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Information Transfer Between Computers (AREA)
Abstract
The present invention discloses an MEC-based task cache allocation strategy for mobile edge computing networks, comprising: adopting an edge caching system architecture built on MEC servers; exploiting the cooperation among MEC servers to perform cache merging and cache splitting of tasks; analyzing access delay and energy consumption under the given constraints; and optimizing access delay and energy consumption with a coordinate descent algorithm. The invention can effectively cache more videos on MEC servers while greatly reducing access delay and energy consumption.
Description
Technical field
The present invention relates to the field of wireless communication technology, and in particular to MEC-based task cache allocation in mobile edge computing networks.
Background technique
According to Cisco's mobile data forecast, global mobile data traffic is expected to reach 49 EB per month by 2021. Video traffic currently accounts for 60% of total smartphone traffic and is expected to rise to 78% by 2021. Telecom operators therefore face enormous pressure to expand their network capacity to cope with this traffic.
Video traffic accessed over cellular networks is intermittent: it is usually much higher during peak periods than off-peak. This bursty video traffic behavior makes it difficult for telecom operators to manage their networks effectively.
Summary of the invention
To address the above deficiencies of the prior art, the present invention proposes an MEC-based task cache allocation strategy for mobile edge computing networks that can effectively cache more videos on MEC servers while greatly reducing access delay and energy consumption.
An MEC-based task cache allocation strategy for mobile edge computing networks comprises the following steps:
Step 101: adopt an edge caching system architecture based on MEC servers;
Step 102: use the cooperation among MEC servers to perform cache merging and cache splitting of tasks;
Step 103: analyze access delay and energy consumption under the given constraints;
Step 104: optimize access delay and energy consumption using a coordinate descent algorithm.
Preferably, step 101 adopts an edge caching system architecture based on MEC servers: the MEC network consists of multiple MEC servers connected by backhaul links. Each MEC server is deployed alongside an eNB in a cellular network and provides computing, storage, and network functions to support context-aware and delay-sensitive applications close to users. The processing and storage capacity of the MEC servers is used for video transrating and caching, and the servers can cooperatively share their computing and storage resources. Upon receiving a video request, an MEC server can serve the content from its cache (if available) or download it from the Internet, serving the user while caching the content for future access. If a higher-bitrate version of the requested video is in the cache, the MEC server can transrate it to the requested lower-bitrate version to serve the user. Transrating, i.e., compressing a higher-bitrate video into a lower-bitrate version, is a computation-intensive task whose cost can be measured in transrating CPU cycles on the MEC server.
Preferably, step 102 uses the cooperation among MEC servers to perform cache merging and cache splitting of tasks. The proposed solution uses both the caching and the processing capacity of the MEC servers to satisfy user requests for different bitrate versions of a video: an MEC server can use its processing capacity to transrate a video to a lower bitrate to serve a request.
Preferably, step 103 analyzes access delay and energy consumption. First, compute the delays of local computing and mobile edge cloud computing, T_l and T_c, and the corresponding energy costs E_l and E_c. For the task caching problem we define a cache decision variable x_k with 0 ≤ x_k ≤ 1: task k is fully cached at the edge cloud when x_k = 1, and not cached when x_k = 0. When more cache is allocated to storing complete videos, the average access delay increases, because fewer videos can be stored in the cache, which lowers the hit rate. When more storage is allocated to caching complete videos, the external traffic load decreases, because every hit on a fully cached video incurs zero external traffic. Choosing an appropriate cache split ratio is difficult, because reducing access delay and reducing external traffic load are conflicting objectives.
Considering the combination of task caching and the mobile edge cloud, the total task duration for a user is
T = x_k T_c + (1 − x_k) T_l (1)
and the total task energy consumption is
E = x_k E_c + (1 − x_k) E_l (2)
The problem is then to minimize the joint cost function G(x),
G(x) = T + E (3)
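The weighted model of Eqs. (1)-(3) can be sketched directly in code. This is a minimal illustration of the formulas, not an implementation from the patent; the numeric values in the example are invented.

```python
def total_delay(x_k, T_c, T_l):
    """Eq. (1): total task duration under cache decision x_k (0 <= x_k <= 1)."""
    return x_k * T_c + (1 - x_k) * T_l

def total_energy(x_k, E_c, E_l):
    """Eq. (2): total task energy consumption under cache decision x_k."""
    return x_k * E_c + (1 - x_k) * E_l

def joint_cost(x_k, T_c, T_l, E_c, E_l):
    """Eq. (3): joint cost G(x) = T + E to be minimized."""
    return total_delay(x_k, T_c, T_l) + total_energy(x_k, E_c, E_l)

# Illustrative values: edge execution is faster (T_c < T_l) but costs the
# system more energy (E_c > E_l); x_k trades the two off.
cost_half = joint_cost(0.5, T_c=0.2, T_l=1.0, E_c=0.8, E_l=0.3)
```

Note that with x_k = 1 the cost collapses to T_c + E_c (fully cached at the edge), and with x_k = 0 to T_l + E_l (not cached), matching the two extremes of the decision variable.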
Fig. 3 shows how average access delay and energy consumption vary with different cache splits. The figure makes clear that when more cache is allocated to storing complete videos, the average access delay increases, because fewer videos can be stored in the cache, which lowers the hit rate. When more storage is allocated to caching complete videos, the additional energy consumption decreases. Choosing an appropriate cache split ratio is difficult, because reducing access delay and reducing external traffic load are conflicting objectives.
Preferably, in step 104, based on the access delay and energy consumption analysis, task caching is allocated using a coordinate descent algorithm. Replicating complete videos across the MEC network reduces access delay, but it also reduces the number of videos that can be cached, which directly lowers the hit rate. Caching the initial segment of each video can reduce access delay, and because complete videos are already cached on the MEC server, the energy consumption of the mobile device is determined only by the communication cost of offloading the task content to the edge cloud. Therefore, to reduce access delay, the initial segments of videos are replicated to the network with the lower storage cost.
Brief description of the drawings
Fig. 1: flow chart of the MEC-based task cache allocation strategy for mobile edge computing networks;
Fig. 2: edge caching system architecture of the MEC servers;
Fig. 3: average access delay and energy consumption for different cache splits.
Specific embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 shows the flow chart of the MEC-based task cache allocation strategy for mobile edge computing networks. The strategy comprises the following steps:
Step 101: adopt an edge caching system architecture based on MEC servers;
Step 102: use the cooperation among MEC servers to perform cache merging and cache splitting of tasks;
Step 103: analyze access delay and energy consumption under the given constraints;
Step 104: optimize access delay and energy consumption using a coordinate descent algorithm.
Fig. 2 shows the system model of the MEC-based task cache allocation strategy for mobile edge computing networks: the MEC network consists of multiple MEC servers connected by backhaul links. Each MEC server is deployed alongside an eNB in a cellular network and provides computing, storage, and network functions to support context-aware and delay-sensitive applications close to users. The processing and storage capacity of the MEC servers is used for video transrating and caching, and the servers can cooperatively share their computing and storage resources. Upon receiving a video request, an MEC server can serve the content from its cache (if available) or download it from the Internet, serving the user while caching the content for future access. If a higher-bitrate version of the requested video is in the cache, the MEC server transrates it to the requested lower-bitrate version to serve the user. Transrating, i.e., compressing a higher-bitrate video into a lower-bitrate version, is a computation-intensive task whose cost can be measured in transrating CPU cycles on the MEC server.
The following events may occur when a user requests a video:
1) The video is fetched from the MEC cache of the connected eNB.
2) A higher-bitrate version of the video is transrated from the cache of the connected eNB to the desired bitrate and delivered to the user.
3) The video is retrieved from the MEC cache of a neighboring eNB or from the origin content server.
4) A higher-bitrate version in the MEC cache of a neighboring eNB is transrated by the co-located transrater and then delivered to the connected eNB.
5) As in 4), but the transrating is performed on the MEC server of the connected eNB.
6) The video is not cached anywhere and must be downloaded from the content server over the Internet.
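The event list above can be read as a lookup hierarchy. The following sketch makes that ordering explicit; all names (`serve_request`, the cache representation as sets of `(video_id, bitrate)` pairs) are illustrative assumptions, not identifiers from the patent, and events 4 and 5 are collapsed into one branch since they differ only in where the transrating runs.

```python
def serve_request(video, bitrate, local_cache, neighbor_caches):
    """Return which of the six events handles a request for (video, bitrate).

    local_cache is a set of (video_id, bitrate) pairs at the connected eNB;
    neighbor_caches is a list of such sets at neighboring eNBs.
    """
    if (video, bitrate) in local_cache:
        return 1  # fetched from the connected eNB's MEC cache
    if any(v == video and b > bitrate for (v, b) in local_cache):
        return 2  # transrated from a higher-bitrate copy at the connected eNB
    for cache in neighbor_caches:
        if (video, bitrate) in cache:
            return 3  # retrieved from a neighboring eNB's MEC cache
    for cache in neighbor_caches:
        if any(v == video and b > bitrate for (v, b) in cache):
            return 4  # transrated at the neighbor (event 5: at the connected eNB)
    return 6  # not cached anywhere: download from the content server

# Example: the local cache holds a 4000 kbps copy of v1, so a 2000 kbps
# request for v1 is served by local transrating (event 2).
local = {("v1", 4000)}
neighbors = [{("v2", 2000)}]
assert serve_request("v1", 2000, local, neighbors) == 2
```

The strict order of the checks matters: cheaper sources (local hit, local transrate) are tried before neighbor retrieval, and the Internet download is the fallback of last resort.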
The cache merging and cache splitting of step 102, which exploit the cooperation among MEC servers, are implemented as follows:
The proposed solution uses both the caching and the processing capacity of the MEC servers to satisfy user requests for different bitrate versions of a video: an MEC server can use its processing capacity to transrate a video to a lower bitrate to serve a request.
A. Cache merging: if there is enough processing capacity to transrate a video from a higher-bitrate version to a lower-bitrate one, then there is no need to cache the lower-bitrate video while the higher-bitrate version is cached. We extend the cooperative caching paradigm by merging caches. Using transrating together with MEC cooperation to reduce access delay and external traffic load, we propose the following solution: by exploiting the cooperation among MEC servers, the caches at the network edge can be merged. In a cooperative data-sharing environment, the same video need not be replicated on different MEC servers; instead of replicating identical content, the requested content can be transferred from one MEC server to another. Through cache merging, more videos can be cached across the MEC servers, which greatly improves the hit rate and reduces the external traffic load.
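The merging rule described above (keep only the highest-bitrate copy of each video across the cooperating caches, since lower bitrates can be regenerated by transrating) can be sketched as follows. This is an illustrative reading of the text, not the patent's implementation; the cache representation is our assumption.

```python
def merge_caches(caches):
    """Merge cooperating MEC caches: for each video, keep only the highest
    cached bitrate, since any lower-bitrate version can be produced by
    transrating. `caches` is a list of sets of (video_id, bitrate) pairs;
    the de-duplicated merged set is returned.
    """
    highest = {}
    for cache in caches:
        for video, bitrate in cache:
            highest[video] = max(bitrate, highest.get(video, 0))
    return {(v, b) for v, b in highest.items()}

# Three servers cache overlapping copies; after merging, the redundant
# lower-bitrate copy of v1 and the duplicate of v2 are dropped.
caches = [{("v1", 2000), ("v2", 1000)},
          {("v1", 4000)},
          {("v2", 1000)}]
merged = merge_caches(caches)
assert merged == {("v1", 4000), ("v2", 1000)}
```

The freed storage is what allows more distinct videos to be cached at the edge, raising the hit rate as the text claims.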
B. Cache splitting: in video streaming, the initial access delay depends on the time the player needs to download the initial segment of the video. A user does not need to download the complete video to start watching: once the video player has buffered the initial segment, playback starts while the remaining segments are downloaded. Caching the initial segments is therefore enough to reduce the access delay of videos. It also yields a better hit rate, because many more videos can be cached on an MEC server than when only complete videos are cached. However, if all cache storage is used for the initial segments of videos, then for every hit the remainder of the video still has to be downloaded from the content server over the Internet. To balance external traffic load and access delay, we propose a logical split of the cache storage.
One part of the cache storage caches complete videos, while the other part caches the initial segments of videos. In cache splitting, choosing the split level is critical, because reducing delay and reducing external energy consumption are conflicting objectives.
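The logical split can be sketched as a simple budget calculation: a fraction of the capacity stores complete videos, the rest stores initial segments. The function name, parameters, and sizes below are illustrative assumptions; the patent does not fix these quantities.

```python
def allocate_cache(capacity, split_ratio, full_size, segment_size):
    """Logical cache split: a fraction `split_ratio` of `capacity` stores
    complete videos, the remainder stores initial segments. Returns the
    number of complete videos and of initial segments that fit."""
    full_budget = capacity * split_ratio
    segment_budget = capacity - full_budget
    return int(full_budget // full_size), int(segment_budget // segment_size)

# 100 GB cache, 60% for full 2 GB videos, 40% for 0.5 GB initial segments:
# 30 complete videos plus initial segments for 80 further videos can be held,
# showing why a larger segment share raises the hit rate but also the
# external load for the non-cached remainders.
full, segments = allocate_cache(100, 0.6, 2, 0.5)
```

Sweeping `split_ratio` over [0, 1] while evaluating the resulting delay and external energy cost is exactly the trade-off that Fig. 3 visualizes.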
Further, the analysis of access delay and energy consumption under the constraints in step 103 of the flow chart of Fig. 1 proceeds as follows:
First, compute the delays of local computing and edge cloud computing, T_l and T_c, and the corresponding energy costs E_l and E_c. For the task caching problem we define a cache decision variable x_k with 0 ≤ x_k ≤ 1: task k is fully cached at the edge cloud when x_k = 1, and not cached when x_k = 0.
Considering the combination of task caching and the mobile edge cloud, the total task duration for a user is
T = x_k T_c + (1 − x_k) T_l (1)
and the total task energy consumption is
E = x_k E_c + (1 − x_k) E_l (2)
The problem is then to minimize the joint cost function G(x),
G(x) = T + E (3)
Fig. 3 shows the average access delay and energy consumption for different cache splits. When more cache is allocated to storing complete videos, the average access delay increases, because fewer videos can be stored in the cache, which lowers the hit rate. When more storage is allocated to caching complete videos, the external energy consumption decreases. Choosing an appropriate cache split ratio is difficult, because reducing access delay and reducing the additional energy consumption are conflicting objectives.
In step 104, based on the analysis of access delay and external energy consumption, task caching is allocated using a coordinate descent algorithm. Replicating complete videos across the MEC network reduces access delay, but it also reduces the number of videos that can be cached, which directly lowers the hit rate. Caching the initial segments of videos can reduce access delay; and because complete videos are already cached on the MEC server, no additional external energy cost is incurred. Coordinate descent is therefore used to find the optimal video split ratio, which allows more videos to be cached effectively on the MEC servers while greatly reducing access delay and energy consumption.
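The patent names coordinate descent but gives no algorithmic details, so the following is a generic sketch under our own assumptions: each sweep minimizes the cost over one coordinate at a time on a grid in [0, 1], stopping when no coordinate move improves the joint cost. The toy objective stands in for G(x) = T + E, with delay falling and energy rising as the cached shares grow.

```python
def coordinate_descent(cost, x0, step=0.05, sweeps=50):
    """Grid-based coordinate descent over variables constrained to [0, 1].

    `cost` maps a list of coordinates to a scalar; each sweep fixes all
    coordinates but one and picks the grid value minimizing the cost.
    """
    x = list(x0)
    grid = [round(k * step, 10) for k in range(int(round(1 / step)) + 1)]
    for _ in range(sweeps):
        changed = False
        for i in range(len(x)):
            best_v, best_c = x[i], cost(x)
            for v in grid:
                x[i] = v
                c = cost(x)
                if c < best_c:
                    best_v, best_c = v, c
                    changed = True
            x[i] = best_v
        if not changed:  # no coordinate improved: a coordinate-wise minimum
            break
    return x

# Toy stand-in for G(x): quadratic delay terms that fall as the full-video
# share x[0] and initial-segment share x[1] grow, plus a linear energy term
# that rises with them. The per-coordinate minimizer is at 0.75.
g = lambda x: (1 - x[0]) ** 2 + (1 - x[1]) ** 2 + 0.5 * (x[0] + x[1])
x_opt = coordinate_descent(g, [0.0, 0.0])
```

Because G(x) in Eqs. (1)-(3) is separable and linear in each x_k, a single sweep per coordinate already reaches the coordinate-wise optimum in that case; the iterative form above covers coupled objectives such as a shared cache budget.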
The above embodiments further describe the objects, technical solutions, and advantages of the present invention in detail. It should be understood that the embodiments provided above are only preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its scope of protection.
Claims (5)
1. An MEC-based task cache allocation strategy for mobile edge computing networks, characterized by comprising the following steps:
Step 101: adopt an edge caching system architecture based on MEC servers;
Step 102: use the cooperation among MEC servers to perform cache merging and cache splitting of tasks;
Step 103: analyze access delay and energy consumption under the given constraints;
Step 104: optimize access delay and energy consumption using a coordinate descent algorithm.
2. The MEC-based task cache allocation strategy for mobile edge computing networks according to claim 1, characterized in that step 101 adopts an MEC-server edge caching architecture (as shown in Fig. 2), comprising: an MEC network consisting of multiple MEC servers connected by backhaul links; each MEC server is deployed alongside an eNB in a cellular network and provides computing, storage, and network functions to support context-aware and delay-sensitive applications close to users; the processing and storage capacity of the MEC servers is used for video transrating and caching, and the servers can cooperatively share their computing and storage resources; upon receiving a video request, an MEC server can serve the content from its cache (if available) or download it from the Internet, serving the user while caching the content for future access; if a higher-bitrate version of the requested video is in the cache, the MEC server can transrate it to the requested lower-bitrate version to serve the user, where transrating, i.e., compressing a higher-bitrate video into a lower-bitrate version, is a computation-intensive task whose cost can be measured in transrating CPU cycles on the MEC server;
the following events may occur when a user requests a video:
1) the video is fetched from the MEC cache of the connected eNB;
2) a higher-bitrate version of the video is transrated from the cache of the connected eNB to the desired bitrate and delivered to the user;
3) the video is retrieved from the MEC cache of a neighboring eNB or from the origin content server;
4) the video is not cached and must be downloaded from the origin content server over the Internet.
3. The MEC-based task cache allocation strategy for mobile edge computing networks according to claim 1, characterized in that step 102 uses the cooperation among MEC servers to perform cache merging and cache splitting; the proposed solution uses both the caching and the processing capacity of the MEC servers to satisfy user requests for different bitrate versions of a video, an MEC server using its processing capacity to transrate a video to a lower bitrate to serve a request;
A. cache merging: if there is enough processing capacity to transrate a video from a higher-bitrate version to a lower-bitrate one, then there is no need to cache the lower-bitrate video while the higher-bitrate version is cached; we extend the cooperative caching paradigm by merging caches; using transrating together with MEC cooperation to reduce access delay and external traffic load, the caches at the network edge can be merged by exploiting the cooperation among MEC servers; in a cooperative data-sharing environment, the same video need not be replicated on different MEC servers; instead of replicating identical content, the requested content can be transferred from one MEC server to another; through cache merging, more videos can be cached on the MEC servers, greatly improving the hit rate and reducing the traffic load;
B. cache splitting: in video streaming, the initial access delay depends on the time the player needs to download the initial segment of the video; a user does not need to download the complete video to start watching: once the video player has buffered the initial segment, playback starts while the remaining segments are downloaded; caching the initial segments is enough to reduce the access delay of videos and also yields a better hit rate, because many more videos can be cached on an MEC server than when only complete videos are cached; however, if all cache storage is used for the initial segments of videos, then for every hit the remainder of the video still has to be downloaded from the origin content server over the Internet, which incurs energy consumption; to balance external energy consumption and access delay, we propose a logical split of the cache storage: one part caches complete videos while the other part caches the initial segments of videos; in cache splitting, choosing the split level is critical, because reducing delay and reducing energy consumption are conflicting objectives; to understand this thoroughly, the access delay and energy consumption are analyzed theoretically in step 103.
4. The MEC-based task cache allocation strategy for mobile edge computing networks according to claim 1, characterized in that step 103 analyzes access delay and energy consumption: when more cache is allocated to storing complete videos, the average access delay increases, because fewer videos can be stored on the MEC server, which lowers the hit rate and forces tasks to be downloaded from the origin content server over the Internet; when more storage is allocated to caching complete videos, the external traffic load decreases, because every hit on a fully cached video incurs zero external traffic; choosing an appropriate cache split ratio is difficult, because reducing access delay and reducing external traffic load are conflicting objectives.
5. The MEC-based task cache allocation strategy for mobile edge computing networks according to claim 1, characterized in that in step 104, based on the access delay and energy consumption analysis, task caching is allocated using a coordinate descent algorithm; replicating complete videos across the MEC network reduces access delay, but it also reduces the number of videos that can be cached, directly lowering the hit rate and increasing energy consumption; caching the initial segments of videos can reduce access delay, because complete videos are already cached on the MEC server and the energy consumption of the mobile device is determined only by the communication cost of offloading the task content to the edge cloud; therefore, to reduce access delay, the initial segments of videos are replicated to the network with the lower storage cost, yielding the same benefit in access delay.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811074167.7A CN109362064A (en) | 2018-09-14 | 2018-09-14 | The task buffer allocation strategy based on MEC in mobile edge calculations network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109362064A true CN109362064A (en) | 2019-02-19 |
Family
ID=65350768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811074167.7A Pending CN109362064A (en) | 2018-09-14 | 2018-09-14 | The task buffer allocation strategy based on MEC in mobile edge calculations network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109362064A (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109995865A (en) * | 2019-04-03 | 2019-07-09 | 广东工业大学 | A kind of request responding method and device of the data information based on mobile edge calculations |
CN110022482A (en) * | 2019-03-05 | 2019-07-16 | 咪咕视讯科技有限公司 | Video playing starting method, video service system and storage medium |
CN110351760A (en) * | 2019-07-19 | 2019-10-18 | 重庆邮电大学 | A kind of mobile edge calculations system dynamic task unloading and resource allocation methods |
CN110362400A (en) * | 2019-06-17 | 2019-10-22 | 中国平安人寿保险股份有限公司 | Distribution method, device, equipment and the storage medium of caching resource |
CN110446158A (en) * | 2019-08-09 | 2019-11-12 | 南京邮电大学 | User device association method in cloud Radio Access Network based on edge cache |
CN110572432A (en) * | 2019-08-05 | 2019-12-13 | 西北工业大学 | Spatial cooperation caching and optimizing method for heterogeneous network |
CN110602103A (en) * | 2019-09-17 | 2019-12-20 | 中国联合网络通信集团有限公司 | Electronic lock protocol conversion optimization method and electronic lock protocol conversion optimizer |
CN110765365A (en) * | 2019-10-25 | 2020-02-07 | 国网河南省电力公司信息通信公司 | Method, device, equipment and medium for realizing distributed edge cloud collaborative caching strategy |
CN110913239A (en) * | 2019-11-12 | 2020-03-24 | 西安交通大学 | Video cache updating method for refined mobile edge calculation |
CN111324839A (en) * | 2020-02-20 | 2020-06-23 | 盈嘉互联(北京)科技有限公司 | Building big data caching method and device |
CN111432004A (en) * | 2020-03-27 | 2020-07-17 | 北京邮电大学 | Mobile communication system and cache method thereof |
CN111491331A (en) * | 2020-04-14 | 2020-08-04 | 重庆邮电大学 | Network perception self-adaptive caching method based on transfer learning in fog computing network |
CN113114733A (en) * | 2021-03-24 | 2021-07-13 | 重庆邮电大学 | Distributed task unloading and computing resource management method based on energy collection |
CN113157344A (en) * | 2021-04-30 | 2021-07-23 | 杭州电子科技大学 | DRL-based energy consumption perception task unloading method in mobile edge computing environment |
CN113411826A (en) * | 2021-06-17 | 2021-09-17 | 天津大学 | Edge network equipment caching method based on attention mechanism reinforcement learning |
CN113542330A (en) * | 2020-04-21 | 2021-10-22 | 中移(上海)信息通信科技有限公司 | Method and system for acquiring mobile edge calculation data |
CN113810931A (en) * | 2021-08-27 | 2021-12-17 | 南京邮电大学 | Self-adaptive video caching method facing mobile edge computing network |
CN114301922A (en) * | 2020-10-07 | 2022-04-08 | 智捷科技股份有限公司 | Reverse proxy method with delay perception load balancing and storage device |
CN115051996A (en) * | 2022-06-16 | 2022-09-13 | 桂林电子科技大学 | Video cache management method based on local video utility value under multi-access edge calculation |
US11558928B2 (en) * | 2020-05-18 | 2023-01-17 | At&T Intellectual Property I, L.P. | Proactive content placement for low latency mobile access |
- 2018-09-14: application CN201811074167.7A filed in China; published as CN109362064A, legal status pending
Non-Patent Citations (2)
Title |
---|
BAOFENG LIU et al.: "Proxy Caching Based on Segments for Layered Encoded Video over the Internet", IEEE |
TUYEN X. TRAN et al.: "Collaborative Multi-bitrate Video Caching and Processing in Mobile-Edge Computing Networks", 2017 13th Annual Conference on Wireless On-demand Network Systems and Services (WONS) |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110022482A (en) * | 2019-03-05 | 2019-07-16 | 咪咕视讯科技有限公司 | Video playing starting method, video service system and storage medium |
CN110022482B (en) * | 2019-03-05 | 2021-07-27 | 咪咕视讯科技有限公司 | Video playing starting method, video service system and storage medium |
CN109995865A (en) * | 2019-04-03 | 2019-07-09 | 广东工业大学 | A kind of request responding method and device of the data information based on mobile edge calculations |
CN110362400A (en) * | 2019-06-17 | 2019-10-22 | 中国平安人寿保险股份有限公司 | Distribution method, device, equipment and the storage medium of caching resource |
CN110362400B (en) * | 2019-06-17 | 2022-06-17 | 中国平安人寿保险股份有限公司 | Resource cache allocation method, device, equipment and storage medium |
CN110351760B (en) * | 2019-07-19 | 2022-06-03 | 重庆邮电大学 | Dynamic task unloading and resource allocation method for mobile edge computing system |
CN110351760A (en) * | 2019-07-19 | 2019-10-18 | 重庆邮电大学 | A kind of mobile edge calculations system dynamic task unloading and resource allocation methods |
CN110572432A (en) * | 2019-08-05 | 2019-12-13 | 西北工业大学 | Spatial cooperation caching and optimizing method for heterogeneous network |
CN110572432B (en) * | 2019-08-05 | 2020-12-04 | 西北工业大学 | Spatial cooperation caching and optimizing method for heterogeneous network |
CN110446158A (en) * | 2019-08-09 | 2019-11-12 | 南京邮电大学 | User device association method in cloud Radio Access Network based on edge cache |
CN110446158B (en) * | 2019-08-09 | 2020-12-01 | 南京邮电大学 | User equipment association method in cloud wireless access network based on edge cache |
CN110602103A (en) * | 2019-09-17 | 2019-12-20 | 中国联合网络通信集团有限公司 | Electronic lock protocol conversion optimization method and electronic lock protocol conversion optimizer |
CN110602103B (en) * | 2019-09-17 | 2021-09-10 | 中国联合网络通信集团有限公司 | Electronic lock protocol conversion optimization method and electronic lock protocol conversion optimizer |
CN110765365B (en) * | 2019-10-25 | 2023-07-21 | 国网河南省电力公司信息通信公司 | Method, device, equipment and medium for realizing distributed edge-cloud collaborative caching strategy |
CN110765365A (en) * | 2019-10-25 | 2020-02-07 | 国网河南省电力公司信息通信公司 | Method, device, equipment and medium for realizing distributed edge cloud collaborative caching strategy |
CN110913239A (en) * | 2019-11-12 | 2020-03-24 | 西安交通大学 | Video cache updating method for refined mobile edge computing |
CN111324839B (en) * | 2020-02-20 | 2021-07-27 | 盈嘉互联(北京)科技有限公司 | Building big data caching method and device |
CN111324839A (en) * | 2020-02-20 | 2020-06-23 | 盈嘉互联(北京)科技有限公司 | Building big data caching method and device |
CN111432004A (en) * | 2020-03-27 | 2020-07-17 | 北京邮电大学 | Mobile communication system and cache method thereof |
CN111491331A (en) * | 2020-04-14 | 2020-08-04 | 重庆邮电大学 | Network perception self-adaptive caching method based on transfer learning in fog computing network |
CN111491331B (en) * | 2020-04-14 | 2022-04-15 | 重庆邮电大学 | Network perception self-adaptive caching method based on transfer learning in fog computing network |
CN113542330B (en) * | 2020-04-21 | 2023-10-27 | 中移(上海)信息通信科技有限公司 | Mobile edge computing data acquisition method and system |
CN113542330A (en) * | 2020-04-21 | 2021-10-22 | 中移(上海)信息通信科技有限公司 | Method and system for acquiring mobile edge computing data |
US11558928B2 (en) * | 2020-05-18 | 2023-01-17 | At&T Intellectual Property I, L.P. | Proactive content placement for low latency mobile access |
CN114301922A (en) * | 2020-10-07 | 2022-04-08 | 智捷科技股份有限公司 | Reverse proxy method with latency-aware load balancing, and storage device |
CN113114733B (en) * | 2021-03-24 | 2022-07-08 | 重庆邮电大学 | Distributed task offloading and computing resource management method based on energy harvesting |
CN113114733A (en) * | 2021-03-24 | 2021-07-13 | 重庆邮电大学 | Distributed task offloading and computing resource management method based on energy harvesting |
CN113157344B (en) * | 2021-04-30 | 2022-06-14 | 杭州电子科技大学 | DRL-based energy-aware task offloading method in mobile edge computing environments |
CN113157344A (en) * | 2021-04-30 | 2021-07-23 | 杭州电子科技大学 | DRL-based energy-aware task offloading method in mobile edge computing environments |
CN113411826B (en) * | 2021-06-17 | 2022-05-20 | 天津大学 | Edge network equipment caching method based on attention mechanism reinforcement learning |
CN113411826A (en) * | 2021-06-17 | 2021-09-17 | 天津大学 | Edge network equipment caching method based on attention mechanism reinforcement learning |
CN113810931A (en) * | 2021-08-27 | 2021-12-17 | 南京邮电大学 | Self-adaptive video caching method for mobile edge computing network |
CN113810931B (en) * | 2021-08-27 | 2023-08-22 | 南京邮电大学 | Self-adaptive video caching method for mobile edge computing network |
CN115051996A (en) * | 2022-06-16 | 2022-09-13 | 桂林电子科技大学 | Video cache management method based on local video utility value under multi-access edge computing |
CN115051996B (en) * | 2022-06-16 | 2023-07-11 | 桂林电子科技大学 | Video cache management method based on local video utility value under multi-access edge computing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109362064A (en) | MEC-based task caching allocation strategy in mobile edge computing networks | |
CN105263050B (en) | Mobile terminal real-time rendering system and method based on cloud platform | |
CN110213627A (en) | Streaming media cache allocation device and working method based on multi-cell user mobility | |
CN110069341B (en) | On-demand scheduling method for dependent tasks combined with functions in edge computing | |
CN110062357B (en) | D2D-assisted device caching system and caching method based on reinforcement learning | |
CN107122249A (en) | Task offloading decision method based on an edge-cloud pricing mechanism | |
CN108093435B (en) | Cellular downlink network energy efficiency optimization system and method based on cached popular content | |
Jokhio et al. | A computation and storage trade-off strategy for cost-efficient video transcoding in the cloud | |
WO2023116460A1 (en) | Multi-user multi-task computing offloading method and system in mobile edge computing environment | |
CN108833352A (en) | Caching method and system | |
CN102857548A (en) | Mobile cloud computing resource optimal allocation method | |
US20200196209A1 (en) | Framework for a 6g ubiquitous access network | |
CN113810931B (en) | Self-adaptive video caching method for mobile edge computing network | |
US20220086664A1 (en) | Apparatuses and methods for network resource dimensioning in accordance with differentiated quality of service | |
Wu et al. | Deep reinforcement learning-based video quality selection and radio bearer control for mobile edge computing supported short video applications | |
CN107820278A (en) | Task offloading method balancing delay and cost in cellular networks | |
Engidayehu et al. | Deep reinforcement learning-based task offloading and resource allocation in MEC-enabled wireless networks | |
Mashaly et al. | Load balancing in cloud-based content delivery networks using adaptive server activation/deactivation | |
Li et al. | Collaborative optimization of edge-cloud computation offloading in internet of vehicles | |
CN113010317A (en) | Method, device, computer equipment and medium for joint service deployment and task unloading | |
US20230231813A1 (en) | Enhanced network with data flow differentiation | |
CN102520879B (en) | Priority-based file information storage method, device and system | |
CN105704037B (en) | Table entry storage method and controller | |
CN111447506A (en) | Streaming media content placement method based on delay and cost balance in cloud edge environment | |
Zhang et al. | Joint optimization of multi-user computing offloading and service caching in mobile edge computing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190219 |