CN109040298A - Data processing method and device based on edge computing technology - Google Patents

Data processing method and device based on edge computing technology

Info

Publication number
CN109040298A
CN109040298A
Authority
CN
China
Prior art keywords
server
terminal
edge server
data
service data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811019205.9A
Other languages
Chinese (zh)
Inventor
覃毅芳
周旭
范鹏飞
李灵玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Computer Network Information Center of CAS
Original Assignee
Computer Network Information Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Computer Network Information Center of CAS filed Critical Computer Network Information Center of CAS
Priority claimed from application CN201811019205.9A
Publication of CN109040298A
Legal status: Pending

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63 - Routing a service request depending on the request content or context
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/10 - Flow control between communication endpoints
    • H04W 28/14 - Flow control between communication endpoints using intermediate storage

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a data processing method and device based on edge computing technology. The method includes: caching service data of a first terminal to a first edge server; sending the service data to a central server through the first edge server; determining, through the central server, the address of a second edge server where a second terminal is located; and sending the service data to the second edge server through the central server, so that the second edge server caches the service data. The service data of the first terminal is thus cached on the second edge server of a second terminal whose login user is associated, through a social network, with the login user on the first terminal. The invention solves the technical problems that caching schemes in the related art are limited by the accuracy of hot-content prediction and are unsuitable for network optimization scenarios in which user-generated content is shared in real time through social network channels in existing networks, which leads to network data congestion and low network optimization efficiency.

Description

Data processing method and device based on edge computing technology
Technical Field
The invention relates to the field of communication, in particular to a data processing method and device based on an edge computing technology.
Background
With the development of the social economy and continuous scientific and technological progress, wireless mobile communication technology has developed rapidly and been continuously upgraded. Currently, fourth-generation mobile communication technology (4G) has begun large-scale commercial deployment worldwide. To meet network development needs for 2020 and beyond, fifth-generation mobile communication technology (5G) is being researched and explored in industry and academia. It is anticipated that future 5G networks will span many aspects of people's daily work, learning, and social life, such as mobile office, smart home, wireless payment, telemedicine, and augmented reality; meanwhile, 5G networks will be deeply integrated with traditional industries such as electric power, transportation, manufacturing, and home furnishing.
Meanwhile, with the popularization of intelligent terminals and the explosive growth of mobile service applications, the mobile internet and the internet of things are developing explosively in terms of the number of users, the number of connected terminals, service traffic, and other dimensions. Widely applied mobile internet and internet-of-things services will therefore become a strong driving force for the development of future 5G networks. Statistics show that wireless data traffic will grow at a rate approaching 100% per year. User-generated Content (UGC) produced by intelligent terminals is an important component of this traffic. User-generated content takes multiple forms, including video, audio, files, pictures, and text; it can be published on various internet platforms and shared through channels such as social networks, websites, and virtual communities, especially social channels (such as the instant messaging applications WeChat and QQ). Massive user-generated content places a huge burden on existing wireless networks in terms of both data traffic and signaling traffic, leading to network overload, increased call-drop rates, and frequent network incidents such as data congestion.
To address this problem, caching techniques have been introduced into wireless communication networks. Existing caching techniques generally deploy additional storage devices at the edge of the wireless network (such as the core network, macro-cellular base stations, small base stations, micro base stations, home base stations, and wireless access points), predict hot network content and its distribution trend by analyzing traffic data, user data, and network data, and push the hot content to suitable edge storage devices in advance during off-peak periods. This reduces the bandwidth occupancy of the wireless backhaul network/core network and improves overall network capacity, while serving locally cached data at the network edge during busy periods improves the user service experience. However, the effect of existing caching schemes is often limited by the accuracy of hot-content prediction; meanwhile, existing caching schemes in the related art are unsuitable for network optimization scenarios in existing networks in which user-generated content is shared in real time through social network channels.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a data processing method and device based on edge computing technology, to at least solve the technical problems of network data congestion and low network optimization efficiency caused by the fact that caching schemes in the related art are limited by the accuracy of hot-content prediction and are unsuitable for network optimization scenarios in existing networks in which user-generated content is shared in real time through social network channels.
According to an aspect of the embodiments of the present invention, there is provided a data processing method based on an edge computing technology, including: caching service data of a first terminal to a first edge server, wherein the first terminal is connected with the first edge server; the service data are sent to a central server through the first edge server, so that the central server stores the service data, wherein the central server is connected with the first edge server through a core network; determining, by the central server, an address of a second edge server where a second terminal is located, where a login user on the second terminal is associated with a login user on the first terminal through a social network; and sending the service data to the second edge server through the central server so that the second edge server caches the service data.
Further, the sending the service data to a central server by the first edge server includes: generating summary data corresponding to the business data through the first edge server; and sending the abstract data and the business data to the central server through the first edge server so that the central server stores the business data and the abstract data.
Further, after the summary data and the business data are sent to the central server through the first edge server, the method further includes: after the second terminal receives a first operation instruction, sending a summary data request to the central server through the second terminal, wherein the first operation instruction is used for controlling the second terminal to request the central server for acquiring the summary data; receiving the summary data sent by the central server at the second terminal; and after receiving a second operation instruction, the second terminal sends a service data request to the second edge server, wherein the second operation instruction is used for controlling the second terminal to acquire the service data.
Further, after the second terminal receives the second operation instruction and sends a service data request to the second edge server through the second terminal, the method further includes: inquiring a local cache of the second edge server according to the service data request; judging whether the service data exists in a local cache of the second edge server or not; under the condition that the service data exist in the local cache of the second edge server, the service data are sent to the second terminal through the second edge server; and sending the service data request to the central server through the second edge server under the condition that the service data does not exist in the local cache of the second edge server.
Further, the method further comprises: under the condition that a third edge server has a plurality of terminals requesting the same service data, adding the terminals into a multicast aggregation group according to the service data requests of the terminals received within a preset time interval; and transmitting the service data requested by the plurality of terminals to the plurality of terminals through the multicast aggregation group by the third edge server.
According to another aspect of the embodiments of the present invention, there is also provided a data processing apparatus based on an edge computing technology, including: the system comprises a caching unit, a processing unit and a processing unit, wherein the caching unit is used for caching service data of a first terminal to a first edge server, and the first terminal is connected with the first edge server; a first sending unit, configured to send the service data to a central server through the first edge server, so that the central server stores the service data, where the central server is connected to the first edge server through a core network; the determining unit is used for determining the address of a second edge server where a second terminal is located through the central server, wherein a login user on the second terminal is associated with a login user on the first terminal through a social network; and the second sending unit is used for sending the service data to the second edge server through the central server so as to enable the second edge server to cache the service data.
Further, the first transmitting unit includes: the processing module is used for generating abstract data corresponding to the business data through the first edge server; and the sending module is used for sending the summary data and the service data to the central server through the first edge server so that the central server stores the service data and the summary data.
Further, the apparatus further comprises: a third sending unit, configured to send, after the first edge server sends the summary data and the service data to the central server and the second terminal receives a first operation instruction, a summary data request to the central server through the second terminal, where the first operation instruction is used to control the second terminal to request the central server to obtain the summary data; a receiving unit, configured to receive, at the second terminal, the summary data sent by the central server; and a fourth sending unit, configured to send a service data request to the second edge server through the second terminal after the second terminal receives a second operation instruction, where the second operation instruction is used to control the second terminal to obtain the service data.
Further, the apparatus further comprises: the query unit is used for querying a local cache of the second edge server according to the service data request after the second terminal receives a second operation instruction and the second terminal sends the service data request to the second edge server; a determining unit, configured to determine whether the service data exists in a local cache of the second edge server; a fifth sending unit, configured to send the service data to the second terminal through the second edge server when the service data exists in the local cache of the second edge server; a sixth sending unit, configured to send the service data request to the central server through the second edge server when the service data does not exist in the local cache of the second edge server.
Further, the apparatus further comprises: the aggregation unit is used for adding the plurality of terminals into a multicast aggregation group through the third edge server according to the service data requests of the plurality of terminals received within a preset time interval under the condition that the plurality of terminals request the same service data in the third edge server; and a seventh sending unit, configured to send the service data requested by the multiple terminals to the multiple terminals through the multicast aggregation group.
According to another aspect of the embodiments of the present invention, there is also provided a storage medium including a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the data processing method based on the edge computing technology.
According to another aspect of the embodiments of the present invention, there is also provided a processor, configured to execute a program, where the program executes a data processing method based on an edge computing technology as described above.
In the embodiments of the invention, the service data of the first terminal is cached to the first edge server, the service data is sent to the central server through the first edge server, the address of the second edge server where the second terminal is located is determined through the central server, and the service data is sent to the second edge server through the central server, so that the second edge server caches the service data. By caching the service data of the first terminal on the second edge server of the second terminal logged in by a user associated with the login user on the first terminal, the embodiments achieve the purpose of offloading traffic from the central server, thereby achieving the technical effect of improving overall network capacity and solving the technical problems that caching schemes in the related art are limited by the accuracy of hot-content prediction and are unsuitable for network optimization scenarios in existing networks in which user-generated content is shared in real time through social network channels, which leads to network data congestion and low network optimization efficiency.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a network topology diagram of an alternative data processing method based on edge computing techniques, according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating an alternative data processing method based on edge computing techniques according to an embodiment of the present invention;
FIG. 3 is a network topology diagram of an alternative data processing method based on edge computing techniques, according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of an alternative method for local data caching based on an MEC server according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating an alternative MEC server-based traffic offload method according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an alternative data processing apparatus based on an edge computing technique according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example 1
In an embodiment of the present invention, an embodiment of the above data processing method based on an edge computing technology is provided. As an alternative embodiment, the data processing method based on the edge computing technology can be applied, but not limited to, in the application environment as shown in fig. 1. The method comprises the steps that a first terminal 101 caches service data of the first terminal to a first edge server 103, wherein the first terminal 101 is connected with the first edge server 103; sending the service data to the central server 110 through the first edge server 103 so that the central server 110 stores the service data, wherein the central server 110 is connected to the first edge server 103 through the core network 120; determining, by the central server 110, an address of a second edge server 104 where the second terminal 102 is located, wherein a logged-on user on the second terminal 102 is associated with a logged-on user on the first terminal 101 through a social network; the service data is sent to the second edge server 104 through the central server 110 so that the second edge server 104 caches the service data.
In this embodiment, the first terminal 101 and the second terminal 102 are terminals close to the user side; the user performs operations on the first terminal 101 to generate related application data traffic and performs data interaction with the second terminal 102 via the central server 110. In this process, the central server 110 is located in the core network and is a cloud server, while the first edge server 103 and the second edge server 104 are typically general-purpose servers deployed on the wireless network access side that provide IT and cloud computing capabilities at the network edge, such as Mobile Edge Computing (MEC) servers.
It should be noted that the edge server described above may be deployed by an operator and opened to a content provider. The content provider can utilize the virtualization and storage functions provided by the local server (e.g., MEC server) to sink the services of the data center cloud service server into the virtual machine of the local server to run so as to form an edge server, thereby providing the localization service for the user at the network edge.
Optionally, in this embodiment, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a desktop PC, a digital television, a vehicle-mounted terminal, and other terminals installed with applications that need data interaction. The mobile network in the above network is only an example, and the network may include but is not limited to at least one of the following: wide area networks, metropolitan area networks, and local area networks. The above is only an example, and the present embodiment is not limited to this.
According to an embodiment of the present invention, there is provided a data processing method based on an edge computing technique, as shown in fig. 2, the method including:
s202, caching the service data of the first terminal to a first edge server, wherein the first terminal is connected with the first edge server;
s204, the service data are sent to a central server through a first edge server so that the central server stores the service data, wherein the central server is connected with the first edge server through a core network;
s206, determining the address of a second edge server where a second terminal is located through a central server, wherein a login user on the second terminal is associated with a login user on the first terminal through a social network;
and S208, sending the service data to the second edge server through the central server so that the second edge server caches the service data.
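Steps S202-S208 can be sketched as a small in-memory simulation. All class and method names below are hypothetical illustrations; the patent does not define an API, and real edge and central servers would communicate over a network rather than through direct method calls.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)  # eq=False keeps instances hashable by identity, so they can go in a set
class EdgeServer:
    name: str
    cache: dict = field(default_factory=dict)

    def cache_data(self, key, data):
        """S202 / S208: cache service data locally at the network edge."""
        self.cache[key] = data

@dataclass
class CentralServer:
    store: dict = field(default_factory=dict)         # S204: central storage
    social_graph: dict = field(default_factory=dict)  # user -> set of associated users
    edge_of_user: dict = field(default_factory=dict)  # user -> serving edge server

    def receive(self, key, data, publisher):
        """S204-S208: store the data, resolve the edge servers of users
        associated with the publisher, and push the data to those edges."""
        self.store[key] = data
        targets = {self.edge_of_user[u]
                   for u in self.social_graph.get(publisher, set())
                   if u in self.edge_of_user}
        for edge in targets:
            edge.cache_data(key, data)
        return targets

# Usage: user A publishes on edge1; A's associate B is served by edge2,
# so the data is pre-cached on edge2 (and nowhere else).
edge1, edge2 = EdgeServer("edge1"), EdgeServer("edge2")
central = CentralServer(social_graph={"A": {"B"}},
                        edge_of_user={"A": edge1, "B": edge2})
edge1.cache_data("v1", b"video")                        # S202
central.receive("v1", edge1.cache["v1"], publisher="A")  # S204-S208
assert edge2.cache["v1"] == b"video"                    # pre-cached near user B
```

Note how a user with no social association to the publisher never has data pushed to their edge server, matching the selective caching the method describes.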
In this embodiment, the login user on the first terminal is associated with the login user on the second terminal, and the two may be associated on a social network, such as a social application; meanwhile, some attributes may be associated, for example, an "affinity number" service in an operator service, and terminal numbers of the two terminals are associated.
Preferably, in this embodiment, the first terminal and the second terminal are both located in the wireless communication network and are connected to the first edge server and the second edge server, respectively, through the wireless communication network. The edge servers are configured as follows: a first local server and a second local server support virtualization and storage functions and are configured to sink the services of the central server into them; the central server's services run in the first local server and the second local server to form the first edge server and the second edge server for the terminals, thereby providing localized services to users at the edge of the core network.
In a specific application scenario, a corresponding service application (e.g., a social application) runs on the first terminal, and service data is uploaded to the central server through the service application; after being received by the first local server, the service data is redirected to the first edge server. The first edge server receives, in real time, the service data uploaded by the login user on the first terminal and caches it locally. At the same time, the first edge server starts a background asynchronous service process and uploads the service data to the central server. While receiving the service data sent by the first edge server, the central server determines the login user on a second terminal who is associated, through a social network, with the login user (the service data publisher) on the first terminal, and obtains the address of the second edge server connected to that second terminal. The central server caches the service data and sends it to the second edge server. After receiving the service data sent by the central server, the second edge server caches it locally.
In the above process, if there is a login user on the third terminal that is not associated with the login user on the first terminal, the central server does not send the service data to the third edge server where the third terminal is located, so as to implement accurate caching of the service data.
It should be noted that, by caching the service data of the first terminal to the first edge server, sending the service data to the central server through the first edge server, determining the address of the second edge server where the second terminal is located through the central server, and sending the service data to the second edge server through the central server, the second edge server caches the service data. Caching the service data of the first terminal on the second edge server of the second terminal logged in by a user associated with the login user on the first terminal achieves the purpose of offloading traffic from the central server, thereby achieving the technical effect of improving overall network capacity.
Optionally, in this embodiment, the service data is sent to the central server through the first edge server, which includes but is not limited to: generating abstract data corresponding to the business data through a first edge server; and sending the abstract data and the business data to the central server through the first edge server so that the central server stores the business data and the abstract data.
In a specific application scenario, when the first edge server receives and caches the service data sent by the first terminal, it generates summary data corresponding to the service data. The first edge server caches the summary data and sends it to the central server. The summary data is used to form the preview data a user browses in the terminal application; by selecting an item of summary data, the user requests the corresponding service data from the edge server or the central server. This saves the traffic cost of the user's application and further reduces the processing pressure on the central server and the edge servers.
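A minimal sketch of summary ("preview") generation follows, assuming the summary is a small record carrying just enough metadata for a feed preview. The patent does not specify the summary format, so every field name here is an assumption.

```python
import hashlib

def make_summary(key: str, data: bytes, preview_len: int = 64) -> dict:
    """Build a hypothetical summary record for one item of service data.
    The full data stays cached at the edge; only this small record travels
    to the central server and, later, to browsing terminals."""
    return {
        "key": key,                                  # identifies the full service data
        "size": len(data),                           # lets the client show a size hint
        "digest": hashlib.sha256(data).hexdigest(),  # integrity check on later fetch
        "preview": data[:preview_len],               # small snippet for the feed UI
    }

summary = make_summary("v1", b"user-generated video bytes")
assert summary["size"] == 26
```

The point of the design is that the summary is orders of magnitude smaller than the service data, so refreshing a feed costs little traffic and the heavy payload moves only on an explicit user request.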
Optionally, in this embodiment, after the first edge server sends the summary data and the service data to the central server, the method further includes: after receiving a first operation instruction, a second terminal sends a summary data request to a central server, wherein the first operation instruction is used for controlling the second terminal to request the central server for obtaining summary data; receiving the summary data sent by the central server at the second terminal; and after the second terminal receives a second operation instruction, sending a service data request to the second edge server through the second terminal, wherein the second operation instruction is used for controlling the second terminal to acquire service data.
In a specific application scenario, when the user executes a first operation instruction on the second terminal, the first operation instruction controls the second terminal to request the summary data from the central server, for example, a refresh operation in the friends feed of a social application. The summary data here is the summary data of other users associated with the login user on the second terminal. After receiving the summary data request of the second terminal, the central server sends the summary data of those associated users to the second terminal; the second terminal receives the summary data and forms a summary data preview interface. When the user selects a specific item of summary data on the preview interface of the second terminal, that is, after a second operation instruction is received on the second terminal, a service data request is sent to the second edge server through the second terminal; the service data request is used to request the service data corresponding to the summary data selected by the user.
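The two-instruction flow above can be sketched as follows. Function names, the summary record shape, and the social graph lookup are illustrative assumptions, not an API defined by the patent.

```python
def refresh_feed(all_summaries, user, social_graph):
    """First operation instruction: the terminal asks the central server for
    the summaries published by users associated with `user` (shapes assumed)."""
    associates = social_graph.get(user, set())
    return [s for s in all_summaries if s["publisher"] in associates]

def select_summary(summary, edge_fetch):
    """Second operation instruction: request the full service data that the
    selected summary refers to, normally from the second edge server."""
    return edge_fetch(summary["key"])

# Usage: B refreshes the feed and sees only associate A's item, then
# selects it; the edge cache answers the service data request.
summaries = [{"key": "v1", "publisher": "A"},
             {"key": "v2", "publisher": "C"}]
feed = refresh_feed(summaries, "B", {"B": {"A"}})
edge_cache = {"v1": b"video"}
assert select_summary(feed[0], edge_cache.__getitem__) == b"video"
```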
Optionally, in this embodiment, after the second terminal receives the second operation instruction and sends a service data request to the second edge server through the second terminal, the method further includes: inquiring a local cache of a second edge server according to the service data request; judging whether business data exist in a local cache of the second edge server or not; under the condition that the service data exist in the local cache of the second edge server, the service data are sent to the second terminal through the second edge server; and under the condition that the service data does not exist in the local cache of the second edge server, sending the service data request to the central server through the second edge server.
In a specific application scenario, the local cache of the second edge server is queried according to the service data request to determine whether the service data corresponding to the summary data selected by the user, that is, the service data requested by the second terminal, exists in the local cache. There are two cases:
1) the local cache of the second edge server has the service data requested by the second terminal, and the service data is sent to the second terminal through the second edge server.
2) If the service data does not exist in the local cache of the second edge server, the second edge server sends the service data request of the second terminal to the central server, so that the central server queries the corresponding service data according to the summary data requested by the second terminal and then sends the service data to the second terminal.
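Both cases reduce to a standard read-through cache at the edge. The sketch below is an assumption about how such a lookup could be written, with `fetch_from_central` standing in for the request the second edge server forwards to the central server on a miss.

```python
def handle_request(edge_cache: dict, key: str, fetch_from_central):
    """Serve a service data request at the edge.
    Case 1: cache hit, answer locally.
    Case 2: cache miss, forward to the central server, and cache the result
    so subsequent requests for the same data are served at the edge."""
    if key in edge_cache:
        return edge_cache[key], "edge"
    data = fetch_from_central(key)
    edge_cache[key] = data            # read-through: populate the local cache
    return data, "central"

# Usage: the first request misses and goes to the central store; the
# second identical request is answered from the edge cache.
central_store = {"v1": b"video"}
cache = {}
assert handle_request(cache, "v1", central_store.__getitem__) == (b"video", "central")
assert handle_request(cache, "v1", central_store.__getitem__) == (b"video", "edge")
```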
Optionally, in this embodiment, the method further includes: under the condition that a plurality of terminals request the same service data in a network governed by a third edge server, adding the plurality of terminals into a multicast aggregation group through the third edge server according to the service data requests of the plurality of terminals received within a preset time interval; and transmitting the service data requested by the plurality of terminals to the plurality of terminals through the multicast aggregation group.
Specifically, when multiple terminals whose logged-in users are in the network governed by a third edge server request the same service data, the edge server may use network multicast (Multicast) technology to aggregate the terminals requesting the same service data within a preset time interval, that is, the multiple terminals join a multicast aggregation group, and one multicast stream serves all the requesting users. For example, a plurality of mobile terminals access the wireless network of the third edge server and send service requests to it; when the service requests of the plurality of terminals all request the service data of the same user, the third edge server aggregates the terminals whose service data requests were received within a certain time and adds them to a multicast aggregation group. When the service data exists on the third edge server, it is sent to the plurality of mobile terminals through the multicast aggregation group; when the third edge server does not have the service data requested by the multicast aggregation group, it forwards the request to the central server, and sends the service data returned by the central server to the plurality of mobile terminals through multicast. In this way, the edge server can adopt an optimized transmission mode to achieve efficient distribution of the service data.
In the above embodiment, the third edge server adds the transmission links of the service requests of the plurality of mobile terminals to the multicast aggregation group, and sends the service data requested by those requests to the plurality of mobile terminals through the group. In a preferred technical solution, the plurality of mobile terminals may also join the multicast aggregation group at the application layer, and the third edge server sends the requested service data to the plurality of mobile terminals by application-layer multicast.
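The time-window aggregation described above can be sketched as follows. This is an illustrative model only: the class name, the window length, and the flush step are assumptions, and real IP or application-layer multicast group management is abstracted away.

```python
# Illustrative sketch of aggregating requests for the same data within a
# preset time interval into one "multicast" delivery group.
import time


class MulticastAggregator:
    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.groups = {}  # data_id -> (first_request_time, [terminals])

    def request(self, data_id, terminal, now=None):
        now = time.monotonic() if now is None else now
        first, members = self.groups.get(data_id, (now, []))
        if now - first > self.window:
            # Window expired: start a new aggregation group.
            first, members = now, []
        members.append(terminal)
        self.groups[data_id] = (first, members)

    def flush(self, data_id, payload):
        # One multicast "stream": deliver the payload to every member once.
        _, members = self.groups.pop(data_id, (None, []))
        return {t: payload for t in members}


agg = MulticastAggregator(window_seconds=5.0)
agg.request("video-42", "terminal-A", now=0.0)
agg.request("video-42", "terminal-B", now=2.0)  # within the window: same group
delivered = agg.flush("video-42", b"chunk")
assert set(delivered) == {"terminal-A", "terminal-B"}
```

A request arriving after the window would start a fresh group, so the payload is fetched from the edge cache (or central server) at most once per window rather than once per terminal.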
According to this embodiment, the service data of the first terminal is cached on the first edge server, the service data is sent to the central server through the first edge server, the address of the second edge server where the second terminal is located is determined by the central server, and the service data is sent to the second edge server through the central server, so that the second edge server caches the service data. By caching the service data of the first terminal on the second edge server of the second terminal, whose login user is associated with the login user on the first terminal, the purpose of offloading traffic from the central server is achieved, thereby achieving the technical effect of improving the overall capacity of the network.
The data processing method based on the edge computing technology in the above embodiment is described below by using a specific embodiment. As shown in fig. 3, an MEC local data caching network corresponds to the network formed by the first terminal, the first edge server, and the central server in the foregoing embodiment.
In the following description, the MEC server configured with server virtualization is equivalent to the edge server in the foregoing embodiment, and the data center cloud service server is equivalent to the central server.
For convenience of explaining the data processing method based on the edge computing technology in this specific embodiment, the following setup is assumed:
(1) user 1 and user 2 access the network through wireless access point 1, and wireless access point 1 is connected to the core network through the MEC1 server to communicate with the data center cloud service server;
(2) the user 3 and the user 4 access the network through the wireless access point 2, and the wireless access point 2 is connected with the core network through the MEC2 server to realize communication with the cloud service server of the data center;
(3) the users 5 and 6 access the network through the wireless access point 3, and the wireless access point 3 is connected with the core network through the MEC3 server to realize communication with the cloud service server of the data center;
(4) user 1, user 2, user 3, user 4, user 5, and user 6 use the same social application. The user 1, the user 2 and the user 3 have social friend relationships and are associated with each other, and content data can be shared among the users by using the same social application program (for example, a friend circle content sharing function among friends, friend dynamic browsing and the like); the user 1, the user 4, the user 5, and the user 6 do not have a social friend relationship, and cannot share content data with each other.
In this embodiment, the data processing method based on the edge computing technology is divided into two parts: MEC-server-based local data caching and traffic offloading.
As shown in fig. 4, the flow of the method for caching local data based on the MEC server is as follows:
s401, constructing a local service server;
specifically, the data center cloud service server sinks the service to the MEC1 server, the MEC2 server and the MEC3 server, and virtual machines on the MEC1 server, the MEC2 server and the MEC3 server run the service to form respective local service servers;
s402, uploading original data;
specifically, the user 1 uploads original data (equivalent to the aforementioned service data) to the cloud service server of the data center in real time through the social application program, and the original data is captured by the MEC1 server and redirected to the local service server;
s403, the local service server on the MEC1 receives the original data uploaded by the user 1 in real time and stores the original data in a local disk;
s404, the MEC server forms abstract data and uploads the abstract data and the original data to a cloud service server of the data center;
specifically, the local business server on the MEC1 generates summary data and uploads the summary data to the cloud business server of the data center; meanwhile, the local business server starts a background asynchronous service process and uploads original data to a cloud business server of the data center;
s405, the data center cloud service server determines a related MEC server;
specifically, the data center cloud service server receives the original data content, synchronously starts an original data judgment process while receiving the original data content, and judges the MEC server addresses of the positions of other users who have social relations with the original data publisher (in this example, user 2, user 3 and user 1 have social relations, and the associated MEC servers include an MEC1 server and an MEC2 server);
s406, the data center cloud service server sends the original data to the associated MEC server;
specifically, the data center cloud service server starts a data active caching process, selects an MEC server of which the original data needs to be actively cached, and pushes the original data to a corresponding MEC server (in this example, an MEC2 server);
s407, the related MEC server caches the original data;
specifically, a local service server on the MEC2 server receives original data pushed by a cloud service server of the data center, and performs local caching;
s408, the local caching task of the MEC server in the network is completed.
As can be known from the above, in the process of uploading the original data to the data center cloud service server, the local service server on the MEC1 server captures the data and caches it locally. This process is completely transparent to user 1. Meanwhile, the effective transmission path of the original data is shortened from the distance between user 1 and the data center cloud service server to the distance between user 1 and the local service server on the MEC1 server, so that the data transmission completion time is greatly shortened, the error probability of service transmission is reduced, and the service experience of the user is improved.
In addition, after the local business server on the MEC1 server receives all the original data, the summary data and the original data can be synchronously transmitted to the data center cloud business server, and the data center cloud business server accurately distributes the original data to the MEC server of the network where other social users associated with the data are located according to the social relationship, so that accurate caching is realized, and the defect that the traditional scheme depends on hot content and the prediction accuracy of the distribution trend is overcome.
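Steps S402-S407 can be sketched end to end as follows: the upload is captured at the local MEC server, cached, the summary and original data go to the cloud server, and the cloud server pushes the original data to the associated MEC servers. All class and method names, and the toy summary function, are assumptions for illustration.

```python
# Hedged sketch of the caching flow S402-S407.
def make_summary(original):
    return original[:20]  # toy "summary": a short content preview


class LocalServer:
    def __init__(self, name):
        self.name = name
        self.disk = {}  # local cache on the MEC server

    def capture_upload(self, data_id, original, cloud, associated_mecs):
        self.disk[data_id] = original  # S403: store in the local disk
        summary = make_summary(original)
        # S404: upload summary and original data to the cloud server.
        cloud.receive(data_id, summary, original, associated_mecs)


class CloudServer:
    def __init__(self):
        self.store = {}

    def receive(self, data_id, summary, original, associated_mecs):
        self.store[data_id] = (summary, original)  # central storage
        for mec in associated_mecs:                # S405-S406: active push
            mec.disk[data_id] = original           # S407: peer MEC caches


cloud = CloudServer()
mec1, mec2 = LocalServer("MEC1"), LocalServer("MEC2")
mec1.capture_upload("post-1", "a long shared content payload", cloud, [mec2])
assert mec2.disk["post-1"] == "a long shared content payload"  # proactively cached
assert "post-1" in cloud.store
```

The key point mirrored here is that the push targets are chosen from the social relation (step S405), not predicted from content popularity.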
As shown in fig. 5, the flow of the traffic offload method based on the MEC server is as follows:
s501, a user sends a summary data request;
specifically, the user 2 sends a summary data request to a cloud service server of the data center by using a social application program to acquire information of latest shared contents of friends;
s502, a user receives abstract data sent by a cloud service server of a data center;
specifically, the data center cloud service server returns summary data of the latest shared content of the friend user 1 to the user 2, wherein the summary data comprises the summary data of the shared content of the user 1;
s503, the user sends out an original data request corresponding to the summary data according to the summary data;
specifically, the user 2 analyzes data, and sends an original data request to a cloud service server of the data center by using a social application program to request the user 1 to share the original data of the content;
s504, the original data request of the user is directed to a local service server;
specifically, the original data request of the user 2 reaches the MEC1 server, is captured by the MEC1 server, and redirects the request to the local service server;
s505, the local service server on the MEC1 server parses the request, and searches whether the original data exists in the local cache;
Specifically, if there is the original data, step S506 is executed; if the original data does not exist, jumping to step S507;
s506, the local service server on the MEC1 server finds out the original data cached locally, sends the original data to the user 2, and ends the task;
s507, the local service server on the MEC1 server forwards the request to the cloud service server of the data center;
and S508, the data center cloud service server responds to the original data request of the user, sends the original data to the user, and ends the task.
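The request path in S504-S508 can be sketched as a single dispatch function: the MEC server transparently captures a request addressed to the cloud, redirects it to its local service server, and only forwards upstream on a cache miss. Names and return values here are illustrative assumptions.

```python
# Minimal sketch of the traffic-offload request path (S504-S508).
def handle_user_request(data_id, local_cache, cloud_store):
    # S504: the request destined for the cloud is captured by the MEC server
    # and redirected to the local service server.
    # S505/S506: serve from the local cache when the original data exists.
    if data_id in local_cache:
        return ("MEC", local_cache[data_id])
    # S507/S508: otherwise forward to the data center cloud service server.
    return ("cloud", cloud_store[data_id])


local_cache = {"post-1": "shared content"}
cloud_store = {"post-1": "shared content", "post-2": "cold content"}
assert handle_user_request("post-1", local_cache, cloud_store) == ("MEC", "shared content")
assert handle_user_request("post-2", local_cache, cloud_store) == ("cloud", "cold content")
```

In this embodiment the hit case is the common one, because the active push in S405-S407 has already placed the friend's original data on the MEC server before the request arrives.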
In another embodiment of the present invention, still taking the network topology in fig. 3 as an example, if multiple users under the same wireless access point request the same original data, the MEC server may use network multicast technology to join the terminals where the multiple users are located into a network multicast aggregation group, aggregate the requests for the same original data within the same time interval, and use one multicast stream to serve all the requesting users.
The time for the user 2 to obtain the content shared by the user 1 can be calculated by the following formula:

T = T1 + T2 + T3 + T4

wherein T is the time interval for user 2 to obtain the content shared by user 1; T1 is the time taken for user 1 to transfer the original data to the MEC1 server; T2 is the time taken for the MEC1 server to transmit the summary data to the data center cloud service server; T3 is the time taken for the data center cloud service server to transmit the summary data to user 2; and T4 is the time taken for the MEC1 server to transmit the original data to user 2.
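As a toy numeric illustration, assume the total time is the sum of the four component times described above (user 1 to MEC1, MEC1 to cloud for the summary, cloud to user 2 for the summary, and MEC1 to user 2 for the original data). The values below are made up for illustration, not measurements from the patent.

```python
# Toy illustration of T = T1 + T2 + T3 + T4 with made-up link times (seconds).
T1 = 0.8  # user 1 -> MEC1 server (original data upload)
T2 = 0.1  # MEC1 server -> cloud service server (summary data)
T3 = 0.1  # cloud service server -> user 2 (summary data)
T4 = 0.5  # MEC1 server -> user 2 (original data, served locally)
T = T1 + T2 + T3 + T4
assert abs(T - 1.5) < 1e-9
```

Note that T4 is a local-network hop; without the MEC cache, the original data would instead traverse the core network from the data center, making the final term substantially larger.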
As can be known from the above, since the original data of the user 1 is stored in the local service server on the MEC1 server, the user 2 can locally obtain the original data shared by the user 1 from the MEC1 server, which greatly reduces the transmission distance of the data in the network, shortens the completion time of data transmission, and improves the service experience of the user.
According to the technical scheme of the embodiment of the invention, the service data of the first terminal is cached on the first edge server, the service data is sent to the central server through the first edge server, the address of the second edge server where the second terminal is located is determined by the central server, and the service data is sent to the second edge server through the central server, so that the second edge server caches the service data. By caching the service data of the first terminal on the second edge server of the second terminal, whose login user is associated with the login user on the first terminal, the purpose of offloading traffic from the central server is achieved, thereby achieving the technical effect of improving the overall capacity of the network.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is also provided an edge computing technology-based data processing apparatus for implementing the above-mentioned edge computing technology-based data processing method, as shown in fig. 6, the apparatus including:
1) a caching unit 602, configured to cache service data of a first terminal to a first edge server, where the first terminal is connected to the first edge server;
2) a first sending unit 604, configured to send the service data to a central server through the first edge server, so that the central server stores the service data, where the central server is connected to the first edge server through a core network;
3) a determining unit 606, configured to determine, by using the central server, an address of a second edge server where a second terminal is located, where a login user on the second terminal is associated with a login user on the first terminal through a social network;
4) a second sending unit 608, configured to send the service data to the second edge server through the central server, so that the second edge server caches the service data.
Further, the first sending unit 604 includes:
1) the processing module is used for generating summary data corresponding to the service data through the first edge server;
2) and the sending module is used for sending the summary data and the service data to the central server through the first edge server so that the central server stores the service data and the summary data.
Further, the apparatus further comprises:
1) a third sending unit, configured to send, after the first edge server sends the summary data and the service data to the center server and the second terminal receives a first operation instruction, a summary data request to the center server through the second terminal, where the first operation instruction is used to control the second terminal to request the center server to obtain the summary data;
2) a receiving unit, configured to receive, at the second terminal, the summary data sent by the central server;
3) and a fourth sending unit, configured to send a service data request to the second edge server through the second terminal after the second terminal receives a second operation instruction, where the second operation instruction is used to control the second terminal to obtain the service data.
Further, the apparatus further comprises:
1) the query unit is used for querying a local cache of the second edge server according to the service data request after the second terminal receives a second operation instruction and the second terminal sends the service data request to the second edge server;
2) a judging unit, configured to judge whether the service data exists in a local cache of the second edge server;
3) a fifth sending unit, configured to send the service data to the second terminal through the second edge server when the service data exists in the local cache of the second edge server;
4) a sixth sending unit, configured to send the service data request to the central server through the second edge server when the service data does not exist in the local cache of the second edge server.
Further, the apparatus further comprises:
1) the aggregation unit is used for adding the plurality of terminals into a multicast aggregation group through the third edge server according to the service data requests of the plurality of terminals received within a preset time interval under the condition that the plurality of terminals request the same service data in the network governed by the third edge server;
2) and a seventh sending unit, configured to send the service data requested by the multiple terminals to the multiple terminals through the multicast aggregation group.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
Example 3
According to an embodiment of the present invention, there is also provided a storage medium including a stored program, where the apparatus on which the storage medium is located is controlled to execute the data processing method based on the edge computing technology as described above when the program runs.
Optionally, in this embodiment, the storage medium is configured to store program codes for performing the following steps:
s1, caching the service data of a first terminal to a first edge server, wherein the first terminal is connected with the first edge server;
s2, sending the service data to a central server through the first edge server, so that the central server stores the service data, wherein the central server is connected to the first edge server through a core network;
s3, determining the address of a second edge server where a second terminal is located through the central server, wherein a login user on the second terminal is associated with a login user on the first terminal through a social network;
s4, sending the service data to the second edge server through the central server, so that the second edge server caches the service data.
Optionally, the specific example in this embodiment may refer to the example described in embodiment 1 above, and this embodiment is not described again here.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, which can store program codes.
Example 4
Embodiments of the present invention also provide a processor, configured to execute a program, where the program executes a data processing method based on an edge computing technique as described above.
Optionally, in this embodiment, the processor is configured to execute the program code of the following steps:
s1, caching the service data of a first terminal to a first edge server, wherein the first terminal is connected with the first edge server;
s2, sending the service data to a central server through the first edge server, so that the central server stores the service data, wherein the central server is connected to the first edge server through a core network;
s3, determining the address of a second edge server where a second terminal is located through the central server, wherein a login user on the second terminal is associated with a login user on the first terminal through a social network;
s4, sending the service data to the second edge server through the central server, so that the second edge server caches the service data.
Optionally, the storage medium is further configured to store program codes for executing the steps included in the method in embodiment 1, which is not described in detail in this embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (12)

1. A data processing method based on an edge computing technology is characterized by comprising the following steps:
caching service data of a first terminal to a first edge server, wherein the first terminal is connected with the first edge server;
the service data are sent to a central server through the first edge server, so that the central server stores the service data, wherein the central server is connected with the first edge server through a core network;
determining, by the central server, an address of a second edge server where a second terminal is located, where a login user on the second terminal is associated with a login user on the first terminal through a social network;
and sending the service data to the second edge server through the central server so that the second edge server caches the service data.
2. The method of claim 1, wherein sending the traffic data to a central server via the first edge server comprises:
generating summary data corresponding to the service data through the first edge server;
and sending the abstract data and the business data to the central server through the first edge server so that the central server stores the business data and the abstract data.
3. The method of claim 2, wherein after sending the summary data and the traffic data to the central server via the first edge server, the method further comprises:
after the second terminal receives a first operation instruction, sending a summary data request to the central server through the second terminal, wherein the first operation instruction is used for controlling the second terminal to request the central server for acquiring the summary data;
receiving the summary data sent by the central server at the second terminal;
and after receiving a second operation instruction, the second terminal sends a service data request to the second edge server, wherein the second operation instruction is used for controlling the second terminal to acquire the service data.
4. The method according to claim 3, wherein after the second terminal receives the second operation instruction and sends a service data request to the second edge server through the second terminal, the method further comprises:
inquiring a local cache of the second edge server according to the service data request;
judging whether the service data exists in a local cache of the second edge server or not;
under the condition that the service data exist in the local cache of the second edge server, the service data are sent to the second terminal through the second edge server;
and sending the service data request to the central server through the second edge server under the condition that the service data does not exist in the local cache of the second edge server.
5. The method of claim 1, further comprising:
under the condition that a plurality of terminals request the same service data in a network governed by a third edge server, adding the plurality of terminals into a multicast aggregation group through the third edge server according to the service data requests of the plurality of terminals received within a preset time interval;
and transmitting the service data requested by the plurality of terminals to the plurality of terminals through the multicast aggregation group.
6. A data processing apparatus based on edge computing technology, comprising:
the system comprises a caching unit, a processing unit and a processing unit, wherein the caching unit is used for caching service data of a first terminal to a first edge server, and the first terminal is connected with the first edge server;
a first sending unit, configured to send the service data to a central server through the first edge server, so that the central server stores the service data, where the central server is connected to the first edge server through a core network;
the determining unit is used for determining the address of a second edge server where a second terminal is located through the central server, wherein a login user on the second terminal is associated with a login user on the first terminal through a social network;
and the second sending unit is used for sending the service data to the second edge server through the central server so as to enable the second edge server to cache the service data.
7. The apparatus of claim 6, wherein the first sending unit comprises:
the processing module is used for generating summary data corresponding to the service data through the first edge server;
and the sending module is used for sending the summary data and the service data to the central server through the first edge server so that the central server stores the service data and the summary data.
8. The apparatus of claim 7, further comprising:
a third sending unit, configured to send, after the first edge server sends the summary data and the service data to the center server and the second terminal receives a first operation instruction, a summary data request to the center server through the second terminal, where the first operation instruction is used to control the second terminal to request the center server to obtain the summary data;
a receiving unit, configured to receive, at the second terminal, the summary data sent by the central server;
and a fourth sending unit, configured to send a service data request to the second edge server through the second terminal after the second terminal receives a second operation instruction, where the second operation instruction is used to control the second terminal to obtain the service data.
9. The apparatus of claim 8, further comprising:
the query unit is used for querying a local cache of the second edge server according to the service data request after the second terminal receives a second operation instruction and the second terminal sends the service data request to the second edge server;
a judging unit, configured to judge whether the service data exists in a local cache of the second edge server;
a fifth sending unit, configured to send the service data to the second terminal through the second edge server when the service data exists in the local cache of the second edge server;
a sixth sending unit, configured to send the service data request to the central server through the second edge server when the service data does not exist in the local cache of the second edge server.
10. The apparatus of claim 6, further comprising:
the aggregation unit is used for adding the plurality of terminals into a multicast aggregation group through the third edge server according to the service data requests of the plurality of terminals received within a preset time interval under the condition that the plurality of terminals request the same service data in the network governed by the third edge server;
and a seventh sending unit, configured to send the service data requested by the multiple terminals to the multiple terminals through the multicast aggregation group.
11. A storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, a device on which the storage medium is located is controlled to execute the data processing method based on the edge computing technology according to any one of claims 1 to 5.
12. A processor, characterized in that the processor is configured to run a program, wherein the program, when run, executes the data processing method based on the edge computing technology according to any one of claims 1 to 5.
CN201811019205.9A 2018-08-31 2018-08-31 Data processing method and device based on edge computing technology Pending CN109040298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811019205.9A CN109040298A (en) 2018-08-31 2018-08-31 Data processing method and device based on edge computing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811019205.9A CN109040298A (en) 2018-08-31 2018-08-31 Data processing method and device based on edge computing technology

Publications (1)

Publication Number Publication Date
CN109040298A true CN109040298A (en) 2018-12-18

Family

ID=64622848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811019205.9A Pending CN109040298A (en) 2018-08-31 2018-08-31 Data processing method and device based on edge computing technology

Country Status (1)

Country Link
CN (1) CN109040298A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102014053A (en) * 2010-11-17 2011-04-13 Huawei Technologies Co., Ltd. Service transmission method, device and communication system
CN102594875A (en) * 2012-01-13 2012-07-18 Huawei Technologies Co., Ltd. Content distribution method and device, and access network device
CN102869003A (en) * 2012-08-28 2013-01-09 ZTE Corporation Method for distributing service contents in heterogeneous network and service management platform
CN103959740A (en) * 2011-09-12 2014-07-30 SCA IPLA Holdings Inc. Communications terminal and method
US20150120368A1 (en) * 2013-10-29 2015-04-30 Steelwedge Software, Inc. Retail and downstream supply chain optimization through massively parallel processing of data using a distributed computing environment
CN104967642A (en) * 2014-08-21 2015-10-07 Tencent Technology (Shenzhen) Co., Ltd. Content distribution method and apparatus
CN107707616A (en) * 2017-08-21 2018-02-16 Guizhou Baishancloud Technology Co., Ltd. Data transmission method and system
CN107908695A (en) * 2017-10-31 2018-04-13 Ping An Puhui Enterprise Management Co., Ltd. Service system operation method, device, system and readable storage medium

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109451517A (en) * 2018-12-27 2019-03-08 Tongji University Cache placement optimization method based on a mobile edge caching network
CN109819459A (en) * 2019-02-20 2019-05-28 Beijing University of Posts and Telecommunications Content cache deployment method and device
CN109819459B (en) * 2019-02-20 2020-09-18 Beijing University of Posts and Telecommunications Content cache deployment method and device
CN110032576A (en) * 2019-03-12 2019-07-19 Ping An Technology (Shenzhen) Co., Ltd. Service processing method and device
CN110032576B (en) * 2019-03-12 2023-06-16 Ping An Technology (Shenzhen) Co., Ltd. Service processing method and device
CN110418194B (en) * 2019-07-19 2022-03-25 MIGU Culture Technology Co., Ltd. Video distribution method and base station
CN110418194A (en) * 2019-07-19 2019-11-05 MIGU Culture Technology Co., Ltd. Video distribution method and base station
CN110944033A (en) * 2019-10-14 2020-03-31 Gree Electric Appliances, Inc. of Zhuhai Equipment control method, device, edge layer server, system and storage medium
CN110944033B (en) * 2019-10-14 2021-01-08 Gree Electric Appliances, Inc. of Zhuhai Equipment control method, device, edge layer server, system and storage medium
WO2021087778A1 (en) * 2019-11-05 2021-05-14 Beijing Xiaomi Mobile Software Co., Ltd. Data processing system, method, and apparatus, device, and readable storage medium
CN113542330A (en) * 2020-04-21 2021-10-22 China Mobile (Shanghai) Information and Communication Technology Co., Ltd. Method and system for acquiring mobile edge computing data
CN113542330B (en) * 2020-04-21 2023-10-27 China Mobile (Shanghai) Information and Communication Technology Co., Ltd. Mobile edge computing data acquisition method and system
CN111935246A (en) * 2020-07-21 2020-11-13 Shandong Computer Science Center (National Supercomputer Center in Jinan) User-generated content uploading method and system based on cloud-edge collaboration
CN112584439A (en) * 2020-11-27 2021-03-30 Chongqing University of Posts and Telecommunications Caching method in edge computing
CN112560946A (en) * 2020-12-14 2021-03-26 Wuhan University Edge server hot spot prediction method for online and offline associated reasoning
CN112560946B (en) * 2020-12-14 2022-04-29 Wuhan University Edge server hot spot prediction method for online and offline associated reasoning
CN117729585A (en) * 2023-12-14 2024-03-19 Sunshine Kaixun (Beijing) Technology Co., Ltd. Space-based information distribution method and system based on 5G communication

Similar Documents

Publication Publication Date Title
CN109040298A (en) Data processing method and device based on edge computing technology
US10601947B2 (en) Application service delivery through an application service avatar
CN108513290B (en) Network slice selection method and device
JP5885310B2 (en) Sharing content between mobile devices
CN106549878B (en) Service distribution method and device
US10812580B2 (en) Using resource timing data for server push
MX2014007165A (en) Application-driven CDN pre-caching
US20150088995A1 (en) Method and apparatus for sharing contents using information of group change in content oriented network environment
US11372937B1 (en) Throttling client requests for web scraping
CN113301079B (en) Data acquisition method, system, computing device and storage medium
CN105100158A (en) Message pushing and obtaining methods and apparatuses
CN104967642A (en) Content distribution method and apparatus
CN105045873A (en) Data file pushing method, apparatus and system
US20140289307A1 (en) Method for transmitting data between electronic devices
CN105025042B (en) Method and system for determining data information, and proxy server
Zhang et al. Processing geo-dispersed big data in an advanced MapReduce framework
EP2999266B1 (en) Method, device and system for obtaining mobile network data resources
CN113966602A (en) Distributed storage of blocks in a blockchain
EP3040931A1 (en) Application service delivery through an application service avatar
CN105677829A (en) Retrieving method and system
US11086822B1 (en) Application-based compression
CN105162720A (en) Communication network and method for reducing data transmission
EP3128711A1 (en) Information object acquisition method, server and user equipment
KR20190119497A (en) Offering system for large scale multi vod streaming service based on distributed file system and method thereof
KR102329074B1 (en) Content transmission method and apparatus for transmitting user's preferred content extracted using reinforcement learning module to a plurality of edge nodes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181218