WO2014040244A1 - Service data cache processing method, device, and system - Google Patents

Service data cache processing method, device, and system

Info

Publication number
WO2014040244A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
service data
data
statistical information
request
Prior art date
Application number
PCT/CN2012/081299
Other languages
English (en)
French (fr)
Inventor
韦安妮
熊春山
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201280004558.6A (CN103875227B)
Priority to EP12884619.3A (EP2887618B1)
Priority to PCT/CN2012/081299
Publication of WO2014040244A1
Priority to US14/656,416 (US10257306B2)


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682 Policies or rules for updating, deleting or replacing the stored data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/55 Push-based network services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/56 Provisioning of proxy services
    • H04L 67/568 Storing data temporarily at an intermediate stage, e.g. caching

Definitions

  • The embodiments of the present invention relate to communications technologies, and in particular to a service data cache processing method, device, and system. Background art:
  • In the prior art, the process by which a user equipment (UE) accesses or downloads service data such as audio and video is as follows: the UE sends a service request message to the access network device; the access network device forwards the service request message to the core network device; the core network device sends the service request to a service provider (SP) through a packet data network (PDN); the SP returns the requested service data to the core network device through the PDN; and the core network device delivers the audio, video, or other service data to the UE through the access network device, so that the UE completes the access or download process.
  • In the course of implementing the embodiments of the present invention, the inventors found that in the prior art a large number of UEs accessing or downloading service data occupies considerable network transmission resources, so access and downloads are often slow.
  • the embodiment of the invention provides a service data cache processing method, device and system, which are used to accelerate the speed at which a UE accesses or downloads service data.
  • In one aspect, an embodiment of the present invention provides a service data cache processing method, including: receiving statistical information of service data; and sending, according to the statistical information, a service data push request to a service provider (SP) device, so that the SP device sends the service data corresponding to the push request to a primary cache deployed in the core network and/or an edge cache deployed in the access network.
  • An embodiment of the present invention further provides another service data cache processing method, including: collecting statistics on service data to obtain statistical information; and sending the statistical information to a cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the SP device to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • the embodiment of the present invention further provides a cache policy control entity, including: a receiving module, configured to receive statistical information of service data;
  • a sending module, configured to send, according to the statistical information, a service data push request to a service provider SP device, so that the SP device sends the service data corresponding to the push request to a primary cache deployed in the core network and/or an edge cache deployed in the access network.
  • An embodiment of the present invention further provides a network device, including: an obtaining module, configured to collect statistics on service data and obtain statistical information; and a sending module, configured to send the statistical information to the cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the service provider SP device to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • the embodiment of the present invention further provides a service data cache processing system, including the cache policy control entity and the network device provided above.
  • With the service data cache processing method, device, and system provided by the embodiments of the present invention, the cache policy control entity sends a service data push request to the service provider SP device according to the statistical information, so that the SP device sends the service data corresponding to the push request to the primary cache deployed in the core network and/or the edge cache deployed in the access network. When the UE accesses or downloads the service data, it can obtain the data from the primary cache and the edge cache, which speeds up UE access to and downloading of service data.
  • FIG. 1 is a schematic flowchart of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of a service data cache processing method according to another embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a cache policy control entity according to an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of a cache policy control entity according to another embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a cache policy control entity according to yet another embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of a cache policy control entity according to still another embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of a network device according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a network device according to another embodiment of the present invention;
  • FIG. 9 is a schematic diagram of a service data cache processing system according to an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of devices and related interfaces in a 3GPP network according to an embodiment of the present invention;
  • FIG. 11 is signaling diagram 1 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 12 is signaling diagram 2 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 13 is signaling diagram 3 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 14 is signaling diagram 4 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 15 is signaling diagram 5 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 16 is signaling diagram 6 of a service data cache processing method according to an embodiment of the present invention;
  • FIG. 17 is signaling diagram 7 of a service data cache processing method according to an embodiment of the present invention.
  • The technical solutions of the present invention can be applied to various communication systems, for example: the Global System for Mobile communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), General Packet Radio Service (GPRS), Long Term Evolution (LTE), Advanced Long Term Evolution (LTE-A), and Universal Mobile Telecommunication System (UMTS). The embodiments of the present invention are not limited in this respect, but for convenience of description the LTE network is taken as an example.
  • Different systems may include different network elements. For example, in LTE and LTE-A the network elements of the Radio Access Network (RAN) include the evolved base station (eNB), whereas in WCDMA the radio access network elements include the Radio Network Controller (RNC) and the NodeB. Similarly, other wireless networks such as Worldwide Interoperability for Microwave Access (WiMAX) may also use solutions similar to those in the embodiments of the present invention, except that the related modules in the base station system may differ; the embodiments of the present invention are not limited in this respect, but for convenience of description the following embodiments take the eNodeB as an example.
  • It should also be understood that in the embodiments of the present invention, the terminal may also be referred to as a user equipment (UE), a mobile station (MS), a mobile terminal (MT), and so on. The terminal can communicate with one or more core networks via the radio access network. For example, the terminal may be a mobile phone (also called a "cellular" phone) or a computer with a communication function; it may also be a portable, pocket-sized, handheld, computer built-in, or vehicle-mounted mobile device.
  • FIG. 1 is a schematic flowchart of a method for processing a service data cache according to an embodiment of the present invention.
  • The execution entity of this embodiment is a cache policy control entity. As shown in FIG. 1, the method includes the following steps:
  • Step S101: Receive statistical information of service data;
  • Step S102: Send, according to the statistical information, a service data push request to the service provider SP device, so that the SP device sends the service data corresponding to the service data push request to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • In step S101, the cache policy control entity may receive the statistical information of the service data periodically or at a preset time.
  • The statistical information of the service data may be, for example, a hit-count ranking, an attention ranking, or an evaluation-score ranking of resources such as popular videos, popular audio, or popular news, which is not specifically limited here.
  • In step S102, the cache policy control entity sends a service data push request to the SP device.
  • The SP device can send the service data corresponding to the push request directly to the primary cache deployed in the core network.
  • The SP device can also send the service data corresponding to the push request to the edge cache deployed in the access network, in either of two ways: first, the SP device sends the service data directly to the edge cache deployed in the access network; second, the SP device first sends the service data to the primary cache deployed in the core network, and the cache policy control entity then sends a service data push request to the primary cache, so that the primary cache forwards the service data to the edge cache in the access network.
  • The manner in which the cache policy control entity requests the SP device to deliver the service data may be pull or push.
  • In the pull mode, the primary cache and the edge cache actively obtain the service data from the SP device; in the push mode, the SP device actively sends the service data to the primary cache and the edge cache. A minimal sketch of the two modes follows.
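  • Purely as an illustration of the two retrieval modes, and not as part of the patent itself, the following minimal Python sketch contrasts pull and push; every class and method name here is hypothetical.

```python
# Illustrative sketch only: hypothetical names, not the patented implementation.

class SPDevice:
    """Service provider device holding the original service data."""
    def __init__(self, catalog):
        self.catalog = catalog                  # resource_id -> content

    def fetch(self, resource_id):               # used in the pull mode
        return self.catalog[resource_id]

    def push_to(self, cache, resource_id):      # used in the push mode
        cache.store(resource_id, self.catalog[resource_id])


class Cache:
    """A primary cache or an edge cache."""
    def __init__(self):
        self.entries = {}

    def store(self, resource_id, content):
        self.entries[resource_id] = content

    def pull_from(self, sp, resource_id):       # pull: the cache fetches actively
        self.store(resource_id, sp.fetch(resource_id))


sp = SPDevice({"video-1": b"..."})
primary = Cache()
primary.pull_from(sp, "video-1")   # pull mode: the cache asks the SP device
sp.push_to(primary, "video-1")     # push mode: the SP device sends proactively
```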
  • In the service data cache processing method provided by this embodiment, the cache policy control entity sends a service data push request to the service provider SP device according to the statistical information, so that the SP device sends the service data corresponding to the push request to the primary cache deployed in the core network and/or the edge cache deployed in the access network. The primary cache and the edge cache can thus be managed and maintained, and when the UE accesses or downloads service data it no longer needs to fetch it from the SP: it can obtain the data directly from the primary cache in the core network and the edge cache in the access network, which speeds up UE access to and downloading of service data.
  • Optionally, in this embodiment, sending the service data push request to the SP device according to the statistical information includes: determining a data cache policy according to the statistical information, and sending the service data push request to the SP device according to the cache policy.
  • Specifically, the data cache policy that decides whether the service data in the statistical information is stored in the primary cache or the edge cache is determined from statistics such as the access count and the degree of attention of that service data.
  • The specific data cache policy may be one or more preset policies.
  • For example, service data whose access count within a preset time is greater than a preset value may be stored in the primary cache or the edge cache; or service data whose degree of attention within a preset time is greater than a preset value may be stored there; or service data whose access count is greater than a preset value may be stored until the total occupied space reaches the storage limit of the primary cache or the edge cache; or the 100 most accessed items in the statistical information may be stored in the primary cache or the edge cache.
  • For example, when the access count of certain service data in the statistical information first reaches the preset value within the preset time, the cache policy control entity determines to store that service data in the primary cache or edge cache it manages; the cache policy control entity then sends a service data download request to the SP device during idle network time and stores the service data in the primary cache. The data cache policy may be determined from the statistical information in many ways, which are not specifically limited here.
  • In the service data cache processing method provided by this embodiment, determining a data cache policy and then sending the service data push request to the SP device enables the SP device to determine how much service data to push to the primary cache and the edge cache, as sketched below.
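  • The paragraphs above leave the concrete policy open. Purely as an illustration, the sketch below shows one way such a policy could be expressed: keep the resources whose hit count exceeds a preset value within the time window, rank them, and stop at the cache's storage limit. All names, fields, and thresholds here are assumptions, not part of the patent.

```python
# Illustrative sketch of one possible data cache policy; hypothetical names.
from dataclasses import dataclass

@dataclass
class ResourceStats:
    resource_id: str
    access_count: int      # hits within the preset time window
    attention: float       # e.g. follower / favourite score
    size_bytes: int

def select_for_cache(stats, min_hits=1000, capacity_bytes=10 * 2**30, top_n=100):
    """Pick resources whose hit count exceeds the preset value, ranked by hits,
    stopping when the cache storage limit would be exceeded."""
    hot = [s for s in stats if s.access_count > min_hits]
    hot.sort(key=lambda s: s.access_count, reverse=True)
    chosen, used = [], 0
    for s in hot[:top_n]:
        if used + s.size_bytes > capacity_bytes:
            break
        chosen.append(s.resource_id)
        used += s.size_bytes
    return chosen

# The cache policy control entity would then, during idle network time, send a
# service data push/download request to the SP device for the chosen resources.
stats = [ResourceStats("video-1", 5200, 0.9, 700_000_000),
         ResourceStats("video-2", 800, 0.4, 500_000_000)]
print(select_for_cache(stats))   # -> ['video-1']
```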
  • Optionally, in this embodiment, receiving the statistical information of the service data may include: receiving statistical information obtained by the SP device from the service data requests sent by the user equipment; or receiving statistical information obtained by the primary cache from the service data requests sent by the user equipment; or receiving statistical information obtained by the edge cache from the service data requests sent by the user equipment; or receiving the statistical information obtained by the SP device from the service data requests collected from the various service platforms.
  • In a specific implementation, the information to be collected in this embodiment mainly concerns video and audio resources that have already been published on the network; the SP device, the primary cache, and the edge cache can all compile statistics on these resources from the service data requests sent by the user equipment. For example, a hit-count ranking can be derived from the number of clicks on popular resources.
  • the SP device can also collect service data requests from various service platforms. For example, the SP device can comprehensively evaluate the hot resources according to the attention, preference, and score evaluation of the popular resources in each service platform, and count the hot resources.
  • Each of the business platforms may be a portal website, a video download website, or various forums, and is not particularly limited herein.
  • In this embodiment, by receiving the statistical information obtained by the SP device, the primary cache, and the edge cache, the cache policy control entity can formulate a cache policy based on that information.
  • Sending the service data push request to the SP device according to the hot-resource statistical information includes: determining, according to the statistical information, a first data cache policy corresponding to the primary cache and a second data cache policy corresponding to the edge cache; sending a first service data download request to the SP device according to the first data cache policy, so that the SP device sends the service data to the primary cache according to that request; and sending a second service data download request to the SP device according to the second data cache policy, so that the SP device sends the service data to the edge cache according to that request.
  • After the hot-resource statistics have been compiled, the cache policy control entity can use them to determine the first data cache policy corresponding to the primary cache and the second data cache policy corresponding to the edge cache.
  • In a specific implementation, the first data cache policy determines the service data that the SP device sends to the primary cache. For example, the cache policy control entity may decide, according to the first data cache policy, to cache in the primary cache the 100 resources with the highest attention within the preset time; it then sends a first service data download request to the SP device during idle network time, requesting that these 100 resources be stored in the primary cache.
  • Correspondingly, the second data cache policy determines the service data that the SP device sends to the edge cache. If the cache policy control entity decides, according to the second data cache policy, to cache the 100 resources with the highest attention or popularity directly in the edge cache, it sends a second service data download request to the SP device during idle network time, requesting that these 100 resources be stored in the edge cache.
  • With the technical solution provided by this embodiment, hot resources can be stored in the primary cache deployed in the core network and in the edge cache deployed in the access network. Because the hot resources are pre-positioned, no network congestion occurs when users access or download them.
  • Optionally, after the first service data download request has been sent to the SP device according to the first data cache policy so that the SP device sends the service data to the primary cache, the method further includes: determining, according to the statistical information, a third data cache policy corresponding to the edge cache; and sending a third service data download request to the primary cache according to the third data cache policy, so that the primary cache sends the service data to the edge cache.
  • Specifically, if the cache policy control entity decides, according to the third data cache policy, to store in the edge cache the 100 resources with the highest attention or popularity within the preset time, it sends the third service data download request to the primary cache during idle network time, requesting that these 100 resources be forwarded to the edge cache.
  • the technical solution provided by the embodiment of the present invention can store the hot resources in the edge cache of the access network, and pre-allocate the hot resources. When the user accesses or downloads the hot resources, network congestion does not occur.
  • Preferably, the statistical information of the service data includes any one or a combination of the following: statistical information of service data whose access count is greater than a preset value; statistical information of service data whose degree of attention is greater than a preset value; and statistical information of service data containing a preset keyword.
  • Because the statistical information of the service data can take diverse forms, a data cache policy that matches actual demand can be determined from it.
  • Optionally, the foregoing service data cache processing method further includes: receiving an update message of the statistical information sent by the SP device; and sending a data update request message to the SP device according to the update message, so that the SP device sends the service data corresponding to the update request to the primary cache and/or the edge cache according to the data update request message.
  • Specifically, the cache policy control entity may not only periodically receive update messages of the statistical information sent by the SP device, but may also actively initiate an update request to the SP device according to the expiry period of the hot resources in the statistical information.
  • During idle network time, the cache policy control entity requests the SP device to update the cached resources in the primary cache, and requests the primary cache to update the expired resources stored in the edge cache.
  • In the technical solution provided by this embodiment, the cache policy control entity drives the update of the resources in the primary cache and the edge cache, so that the UE can quickly access the latest hot resources; a sketch of the two update triggers follows.
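  • To make the two update triggers above concrete, here is a hedged sketch: one handler reacts to an update message from the SP device, the other fires on the control entity's own expiry period, and in both cases the refresh happens only during idle network time. The class, its collaborator methods (send_update, push_update_to_edges), and the idle-time test are all hypothetical.

```python
# Illustrative sketch of the two update triggers; hypothetical names throughout.
import time

class CachePolicyControl:
    def __init__(self, sp_device, primary_cache, refresh_period_s=3600):
        self.sp = sp_device
        self.primary = primary_cache
        self.refresh_period_s = refresh_period_s
        self._last_refresh = 0.0

    def network_is_idle(self):
        # Placeholder: e.g. off-peak hours or low measured link utilisation.
        return True

    def on_sp_update_message(self, updated_ids):
        """Trigger 1: the SP device reports that statistics / resources changed."""
        if self.network_is_idle():
            self.sp.send_update(self.primary, updated_ids)   # refresh the primary cache
            self.primary.push_update_to_edges(updated_ids)   # refresh the edge caches

    def periodic_refresh(self, expired_ids):
        """Trigger 2: the control entity itself requests an update per expiry period."""
        now = time.time()
        if now - self._last_refresh >= self.refresh_period_s and self.network_is_idle():
            self.sp.send_update(self.primary, expired_ids)
            self.primary.push_update_to_edges(expired_ids)
            self._last_refresh = now
```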
  • FIG. 2 is a schematic flowchart of a service data cache processing method according to Embodiment 2 of the present invention.
  • The execution entity of this embodiment may be the SP device, the primary cache, or the edge cache. As shown in FIG. 2, the method includes the following steps:
  • Step S201: Collect statistics on the service data to obtain statistical information;
  • Step S202: Send the statistical information to the cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the service provider SP device to send the service data corresponding to the request to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • In a specific implementation, in step S201, the SP device, the primary cache, or the edge cache collects statistics on the service data to obtain the statistical information, which may be, for example, a hit-count ranking, an attention ranking, or an evaluation-score ranking of resources such as popular videos, popular audio, or popular news; this is not specifically limited here.
  • In step S202, the SP device, the primary cache, or the edge cache feeds the statistical information back to the cache policy control entity, so that the cache policy control entity requests the service provider SP device, according to the statistical information, to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network. For the process by which the SP device sends the service data to the primary cache and the edge cache, refer to the description of the foregoing embodiment; details are not repeated here.
  • In the service data cache processing method provided by this embodiment, the SP device, the primary cache, and the edge cache can feed the statistical information back to the cache policy control entity, and the cache policy control entity can then request the service provider SP device, according to the statistical information, to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network. When the UE accesses or downloads service data it does not need to fetch it from the SP; it can obtain the data directly from the primary cache in the core network and the edge cache in the access network, which speeds up UE access to and downloading of service data.
  • Optionally, collecting statistics on the service data to obtain the statistical information includes: performing statistical processing on the service data requests sent by the user equipment and obtaining the statistical information.
  • In this embodiment, the statistical processing of the service data requests sent by the user equipment mainly concerns video and audio resources already published on the network. The SP device, the primary cache, and the edge cache can compile statistics on the specific service requests of the user equipment; for example, a hit-count ranking can be derived from the user equipment's clicks on popular resources.
  • Optionally, collecting statistics on the service data to obtain the statistical information includes: performing statistical processing on the service data requests collected from the various service platforms and obtaining the statistical information. The SP device can collect service data requests from each service platform; for example, it can comprehensively evaluate hot resources according to their attention, preference, and score ratings on each platform and compile statistics on them. Each service platform may be a portal website, a video download website, or a forum, which is not specifically limited here.
  • In this embodiment, the service data requests sent by the user equipment, or the service data requests collected from the various service platforms, can be statistically processed to obtain the statistical information, so that the statistics cover a wide range and the data are reliable. Both sources are sketched below.
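  • As an illustration of the two statistics sources described above, the sketch below counts hits from user-equipment requests and aggregates attention / preference / score reports collected from service platforms into a single hotness score. The equal weighting and all field names are assumptions made for the example.

```python
# Illustrative sketch of both statistics sources; hypothetical field names.
from collections import Counter

def stats_from_ue_requests(request_log):
    """Count hits per resource from service data requests sent by user equipment."""
    return Counter(req["resource_id"] for req in request_log)

def stats_from_platforms(platform_reports):
    """Aggregate attention / preference / score figures collected from portals,
    video download sites, forums, etc. into a single hotness score."""
    score = Counter()
    for report in platform_reports:
        for rid, metrics in report.items():
            score[rid] += (metrics.get("attention", 0)
                           + metrics.get("preference", 0)
                           + metrics.get("rating", 0))
    return score

hits = stats_from_ue_requests([{"resource_id": "video-1"}, {"resource_id": "video-1"}])
hotness = stats_from_platforms([{"video-1": {"attention": 10, "rating": 4.5}}])
print(hits.most_common(1), hotness.most_common(1))
```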
  • Preferably, the statistical information of the service data includes any one or a combination of the following: statistical information of service data whose access count is greater than a preset value; statistical information of service data whose degree of attention is greater than a preset value; and statistical information of service data containing a preset keyword.
  • Because the statistical information of the service data can take diverse forms, a data cache policy that matches actual demand can be determined from it.
  • Optionally, the foregoing service data cache processing method further includes: sending an update message of the statistical information to the cache policy control entity; receiving a data update request message sent by the cache policy control entity according to the update message; and sending the service data corresponding to the update request to the primary cache and/or the edge cache according to the data update request message.
  • Specifically, the SP device may periodically send an update message of the statistical information to the cache policy control entity and, when the cache policy control entity returns a data update request message, send the service data corresponding to the updated statistical information to the primary cache and/or the edge cache.
  • the process of transmitting the service data to the primary cache and/or the edge cache by the SP device refers specifically to the description of the above embodiment, and details are not described herein again.
  • In the technical solution provided by this embodiment, the cache policy control entity drives the update of the resources in the primary cache and the edge cache, so that the UE can quickly access the latest hot resources.
  • FIG. 3 is a schematic structural diagram of a cache policy control entity according to an embodiment of the present invention.
  • the cache policy control entity 10 provided by the embodiment of the present invention includes a receiving module 101 and a sending module 102.
  • the receiving module 101 is configured to receive the statistical information of the service data.
  • The sending module 102 is configured to send, according to the statistical information, a service data push request to the service provider SP device, so that the SP device sends the service data corresponding to the service data push request to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • With the cache policy control entity provided by this embodiment, the service data corresponding to the statistical information can be sent to the primary cache deployed in the core network and/or the edge cache deployed in the access network according to the statistics, and the cache policy control entity can maintain and manage the primary cache and the edge cache. When the UE accesses or downloads service data, it can obtain the data directly from the primary cache in the core network and the edge cache in the access network, which speeds up UE access to and downloading of service data.
  • FIG. 4 is a schematic structural diagram of a cache policy control entity according to another embodiment of the present invention.
  • the sending module 102 provided by the embodiment of the present invention includes a first determining unit 1021 and a first sending unit 1022.
  • the first determining unit 1021 is configured to determine a data caching policy according to the statistical information.
  • the first sending unit 1022 is configured to send a service data pushing request to the SP device according to the data caching policy.
  • In the cache policy control entity provided by this embodiment, the first determining unit 1021 determines the data cache policy and the first sending unit 1022 sends the service data push request to the SP device, so that the SP device can determine how much service data to push to the primary cache and the edge cache.
  • The receiving module 101 provided by this embodiment is specifically configured to receive the statistical information obtained by the SP device from the service data requests sent by the user equipment; or to receive the statistical information obtained by the primary cache from the service data requests sent by the user equipment; or to receive the statistical information obtained by the edge cache from the service data requests sent by the user equipment; or to receive the statistical information obtained by the SP device from the service data requests collected from the various service platforms.
  • the receiving module 101 receives the statistics information obtained by the SP device, the primary cache, and the edge cache, so that the cache policy control entity can generate a cache policy according to the statistics.
  • FIG. 5 is a schematic structural diagram of a cache policy control entity according to another embodiment of the present invention.
  • the sending module 102 includes a second determining unit 1025 and a second sending unit 1026.
  • the second determining unit 1025 is configured to determine, according to the statistical information, a first data caching policy corresponding to the main cache and a second data caching policy corresponding to the edge cache.
  • the second sending unit 1026 is configured to send, according to the first data caching policy, a first service data download request to the SP device, so that the SP device sends the service data to the main cache according to the first service data download request; according to the second data caching policy And sending a second service data download request to the SP device, so that the SP device sends the service data to the edge cache according to the second service data download request.
  • the second determining unit 1025 is further configured to determine, according to the statistical information, a third data caching policy corresponding to the edge cache.
  • the second sending unit 1026 is further configured to send, according to the third data caching policy, a third service data download request to the primary cache, so that the primary cache sends the service data to the edge cache according to the third service data download request.
  • With the technical solution provided by this embodiment, hot resources can be stored in the primary cache deployed in the core network and in the edge cache deployed in the access network. Because the hot resources are pre-positioned, no network congestion occurs when users access or download them.
  • Preferably, the statistical information of the service data includes any one or a combination of the following: statistical information of service data whose access count is greater than a preset value; statistical information of service data whose degree of attention is greater than a preset value; and statistical information of service data containing a preset keyword.
  • Because the statistical information of the service data can take diverse forms, a data cache policy that matches actual demand can be determined from it.
  • FIG. 6 is a schematic structural diagram of a cache policy control entity according to another embodiment of the present invention.
  • The cache policy control entity 10 provided by this embodiment builds on any of the foregoing embodiments and, in addition to the receiving module 101 and the sending module 102, further includes an update module 103.
  • The update module 103 is specifically configured to receive an update message of the statistical information sent by the SP device and to send a data update request message to the SP device according to the update message, so that the SP device sends the service data corresponding to the update request to the primary cache and/or the edge cache according to the data update request message. A structural sketch of the modules follows.
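  • The module split described above (receiving module, sending module with its determining and sending units, update module) can be pictured roughly as follows. This is only a structural sketch with hypothetical names and placeholder behaviour, not the patented entity.

```python
# Illustrative structural sketch of the cache policy control entity; hypothetical names.

class ReceivingModule:                                     # module 101
    def receive_statistics(self, statistics):
        return statistics

class SendingModule:                                       # module 102
    def determine_policy(self, statistics):                # determining unit(s)
        return {"primary": statistics[:100], "edge": statistics[:20]}

    def send_push_request(self, target, resources):        # sending unit(s)
        print(f"push request to {target}: {resources}")

class UpdateModule:                                        # module 103
    def on_update_message(self, sp_device, updated):
        print(f"data update request to {sp_device}: {updated}")

class CachePolicyControlEntity:
    def __init__(self):
        self.receiving = ReceivingModule()
        self.sending = SendingModule()
        self.update = UpdateModule()
```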
  • the technical solution provided by the embodiment of the present invention can update the resources of the primary cache and the edge cache by the update module, so that the UE can access the latest hot resource.
  • FIG. 7 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • the network device 20 provided by the embodiment of the present invention includes an obtaining module 201 and a sending module 202.
  • the obtaining module 201 is configured to perform statistics on the service data, and obtain the statistics information.
  • The sending module 202 is configured to send the statistical information to the cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the service provider SP device to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network.
  • With the network device provided by this embodiment, the sending module can send the hot-resource statistical information to the cache policy control entity, so that the cache policy control entity requests the service provider SP device, according to that information, to send the corresponding service data to the primary cache deployed in the core network and/or the edge cache deployed in the access network. The UE then does not need to fetch the service data from the SP when accessing or downloading it; it can obtain the data directly from the primary cache in the core network and the edge cache in the access network, which speeds up UE access to and downloading of service data.
  • the obtaining module 201 is specifically configured to perform statistical processing on the service data request sent by the user equipment, and obtain statistics.
  • the network device is an SP device, a primary cache, or an edge cache.
  • the obtaining module 201 is specifically configured to perform statistical processing on the service data request collected from each service platform to obtain statistical information.
  • In this embodiment, the service data requests sent by the user equipment, or the service data requests collected from the various service platforms, can be statistically processed to obtain the statistical information, so that the statistics cover a wide range and the data are reliable.
  • Preferably, the statistical information of the service data includes any one or a combination of the following: statistical information of service data whose access count is greater than a preset value; statistical information of service data whose degree of attention is greater than a preset value; and statistical information of service data containing a preset keyword.
  • Because the statistical information of the service data can take diverse forms, a data cache policy that matches actual demand can be determined from it.
  • FIG. 8 is a schematic structural diagram of a network device according to another embodiment of the present invention.
  • the network device provided by the embodiment of the present invention includes an update module 203 in addition to the obtaining module 201 and the sending module 202.
  • The update module 203 is configured to send an update message of the statistical information to the cache policy control entity, receive the data update request message sent by the cache policy control entity according to the update message, and send the service data corresponding to the update request to the primary cache and/or the edge cache according to the data update request message.
  • In the technical solution provided by this embodiment, the cache policy control entity drives the update of the resources in the primary cache and the edge cache, so that the UE can quickly access the latest hot resources.
  • FIG. 9 is a schematic diagram of a service data cache processing system according to an embodiment of the present invention.
  • the service data cache processing system 30 provided by the embodiment of the present invention includes the cache policy control entity 10 and the network device 20 described in any of the above.
  • With the service data cache processing system provided by this embodiment, the cache policy control entity can manage, maintain, and update the network device, so that the UE does not need to fetch service data from the SP when accessing or downloading it; it can obtain the data directly from the primary cache deployed in the core network and the edge cache deployed in the access network, which speeds up UE access to and downloading of service data.
  • The cache policy control entity and the primary cache in the network device may be set in the same device in the form of functional entities; they may also be set in different devices in the form of functional entities; or one of them may be set in the form of a functional entity while the other is an independently deployed network device.
  • the primary cache device can be set in the P-GW or GGSN, or on the SGi or Gi interface between the P-GW or the GGSN and the SP device.
  • The cache policy control entity can be a standalone network device, or it can be set as a functional module in the Policy and Charging Rules Function (PCRF).
  • The edge cache is generally placed on a user-plane data path close to the UE, and may also be set in the eNB, the RNC, or the BSC in the form of a functional entity; the edge cache device can also be set in a WiFi access point (AP).
  • FIG. 10 is a schematic diagram of a device and related interfaces in a 3GPP network according to an embodiment of the present invention.
  • the primary cache and the cache policy control entity are independently set network devices.
  • The dotted lines between the devices indicate interfaces that do not carry content but only control signaling and content-related metadata; the solid lines indicate interfaces that carry content (the same applies below).
  • the following functional entities are set:
  • A primary cache on the SGi-U interface, used to directly cache the content of the external SP device; the primary cache is located on the data path between the SP device and the P-GW (or GGSN).
  • An edge cache built into the RAN node, connected to the primary cache through the C-D interface; multiple edge caches can be deployed to directly cache the external SP device content provided by the primary cache.
  • A Cache Policy Control Function (CPCF), connected to the primary cache through the C-M interface and to the external SP device through the SGi-C interface, used to receive content cache requests from the external SP device and to control the caching of the related content in the primary cache and the edge cache.
  • The interface between the primary cache and the external SP device is denoted SGi-U. It is a logical interface that may be part of the SGi/Gi interface function: the existing SGi/Gi interface can be extended to provide the SGi-U function, or SGi-U may be a newly defined interface that provides a content cache connection between the primary cache and the external SP device.
  • Similarly, for the interface between the CPCF and the external SP device, denoted SGi-C, the SGi/Gi interface function can be extended on the basis of the prior art to provide the SGi-C function, or SGi-C can be a newly defined interface.
  • the interface between the main cache and the CPCF is represented by C-M.
  • the CPCF controls the main cache through the C-M interface to obtain the content of the external SP device, and stores the content locally in the main cache.
  • the interface between the main cache and the edge cache is represented by C-D.
  • the edge cache obtains the content obtained by the main cache from the external SP device through the C-D interface, and stores the content locally.
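  • As a reading aid for the architecture above, the sketch below records the four nodes and the four logical interfaces (SGi-U, SGi-C, C-M, C-D) as plain data, marking which interfaces carry content and which carry only control signaling and metadata. The representation itself is an assumption made for illustration.

```python
# Illustrative data model of the two-layer cache architecture; hypothetical names.
from dataclasses import dataclass

@dataclass
class Node:
    name: str

@dataclass
class Interface:
    name: str
    a: Node
    b: Node
    carries_content: bool   # True = solid line (content), False = dotted line (control/metadata)

sp      = Node("external SP device")
primary = Node("primary cache (on the SGi-U path between the SP device and the P-GW/GGSN)")
edge    = Node("edge cache (built into the RAN node)")
cpcf    = Node("cache policy control function (CPCF)")

interfaces = [
    Interface("SGi-U", sp, primary, carries_content=True),     # SP -> primary cache content
    Interface("C-D",   primary, edge, carries_content=True),   # primary -> edge cache content
    Interface("SGi-C", sp, cpcf, carries_content=False),       # content cache requests from the SP
    Interface("C-M",   primary, cpcf, carries_content=False),  # CPCF controls the primary cache
]
for itf in interfaces:
    print(itf.name, "content" if itf.carries_content else "control/metadata")
```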
  • The processes described in the following embodiments may use the system architecture of the example in FIG. 10, a two-layer cache architecture divided into a primary cache and an edge cache; the architecture itself is not described again in the following embodiments.
  • The network devices in the signaling diagrams of the service data cache processing methods in the following embodiments may be the devices shown in FIG. 3 to FIG. 9, and the methods in those signaling diagrams may follow the service data cache processing methods of FIG. 1 and FIG. 2 above.
  • FIG. 11 is a signaling diagram 1 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 11, the method includes:
  • Step 401 The user equipment sends a service data request to the SP device.
  • Step 402 The SP device performs hot statistics according to the service data request sent by the user equipment, and obtains statistics information.
  • the statistics information may be the ranking of the clicks, the attention ranking, the evaluation score, and the like of the hot video, the popular audio, or the popular news.
  • Step 403 The SP device sends the statistics information to the cache policy control entity.
  • Step 404 The cache policy control entity executes the first data cache policy according to the statistical information, and may determine, according to this policy, that the top-ranked hot resources in the statistics are to be stored in the primary cache it manages;
  • Step 405 The cache policy control entity sends a service data push request to the SP device, requesting that the top-ranked hot resources be stored in the primary cache;
  • Step 406 After receiving the service data push request from the cache policy control entity, the SP device sends the service data to the primary cache, that is, it sends the top-ranked hot resources to the primary cache;
  • Step 407 The primary cache performs data storage.
  • The service data cache processing method of this embodiment mainly covers the case where the user equipment sends service data requests to the SP device and, via the cache policy control entity, the SP device sends the top-ranked service data to the primary cache; the flow is sketched below.
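  • Signaling flow 1 above can be summarised, purely for illustration, as the sequence of calls below; each line maps to one numbered step, and every function name is hypothetical.

```python
# Illustrative summary of signaling flow 1 (steps 401-407); hypothetical names.

def signaling_flow_1(ue_requests, sp_device, cpcf, primary_cache):
    sp_device.record_requests(ue_requests)             # 401: UE -> SP service data requests
    stats = sp_device.build_statistics()                # 402: SP compiles hot statistics
    cpcf.receive_statistics(stats)                      # 403: SP -> CPCF statistical information
    policy = cpcf.first_data_cache_policy(stats)        # 404: CPCF decides the primary-cache policy
    cpcf.send_push_request(sp_device, policy)           # 405: CPCF -> SP service data push request
    data = sp_device.export(policy)                     # 406: SP sends the top-ranked resources
    primary_cache.store_many(data)                      # 407: the primary cache stores the data
```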
  • FIG. 12 is a signaling diagram 2 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 12, the method includes:
  • Step 501 The user equipment sends a service data request to the primary cache.
  • Step 502 The primary cache performs hot statistics according to the service data request sent by the user equipment, and obtains statistical information.
  • Step 503 The primary cache sends the statistics to the cache policy control entity.
  • Step 504 The cache policy control entity executes a third data cache policy according to the statistical information, and may determine, according to this policy, that the top-ranked hot resources in the statistics are to be stored in the edge cache it manages;
  • Step 505 The cache policy control entity sends a service data push request to the primary cache, and requests to store the top ranked hot resource in the edge cache.
  • Step 506 After receiving the service data push request from the cache policy control entity, the primary cache sends the service data to the edge cache, that is, it sends the top-ranked hot resources to the edge cache.
  • The service data cache processing method of this embodiment mainly covers the case where the user equipment sends service data requests to the primary cache and, via the cache policy control entity, the primary cache sends the top-ranked service data to the edge cache. It should be noted that in step 505, if the cache policy control entity finds that some of the service data has not yet been cached in the primary cache, that service data is first cached in the primary cache according to the method described earlier for caching data in the primary cache.
  • FIG. 13 is a signaling diagram 3 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 13, the method includes:
  • Step 601 The user equipment sends a service data request to the edge cache.
  • Step 602 The edge cache performs hot statistics according to the service data request sent by the user equipment, and obtains statistical information.
  • Step 603 The edge cache sends the statistics to the cache policy control entity.
  • Step 604 The cache policy control entity executes a third data cache policy according to the statistics.
  • the cache policy control entity finds that some hot resources are not cached in the edge cache, and the data cache policy may decide to store the top ranked resources in the statistics in its managed edge cache;
  • Step 605 The cache policy control entity sends a service data push request to the primary cache, requesting that the top-ranked hot resources be stored in the edge cache;
  • Step 606 After receiving the service data push request from the cache policy control entity, the primary cache sends the service data to the edge cache, that is, it sends the top-ranked hot resources to the edge cache.
  • The service data cache processing method of this embodiment mainly covers the case where the user equipment sends service data requests to the edge cache and, via the cache policy control entity, the top-ranked service data is stored in the edge cache.
  • The service data cache processing methods shown in FIG. 11 to FIG. 13 can store the hot resources in the primary cache and the edge cache, respectively, according to the service data requests of the user equipment, so that users can quickly access and download the service data they need.
  • FIG. 14 is a signaling diagram 4 of a service data buffer processing method according to an embodiment of the present invention. As shown in FIG. 14, the method includes:
  • Step 701 The SP device performs popular statistics on the service data requests collected from the service platforms to obtain statistical information.
  • Step 702 The SP device sends the statistics information to the cache policy control entity.
  • Step 703 The cache policy control entity executes a data cache policy according to the statistical information;
  • In a specific implementation, the cache policy control entity may determine, according to the first data cache policy, that the top-ranked hot resources in the statistics are to be stored in the primary cache it manages;
  • Step 704 The cache policy control entity sends a service data push request to the SP device, requesting that the top-ranked hot resources be stored in the primary cache;
  • Step 705 After receiving the service data push request from the cache policy control entity, the SP device sends the service data to the primary cache, that is, it sends the top-ranked hot resources to the primary cache;
  • Step 706 The main cache performs data storage.
  • In the service data cache processing method of this embodiment, the SP device compiles hot-resource statistics from the service data requests collected from the various service platforms and sends the statistical information to the cache policy control entity, so that hot resources can be cached in the primary cache and users can quickly access the latest resources.
  • FIG. 15 is signaling diagram 5 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 15, the method includes:
  • Step 801 The cache policy control entity performs a third data cache decision according to the statistical information, and determines that the top ranked hot resource is stored in the edge cache.
  • Step 802 The cache policy control entity sends a service data push request to the primary cache, and requests to store the top ranked hot resource in the edge cache.
  • Step 803 After receiving the service data push request from the cache policy control entity, the primary cache sends the service data to the edge cache, that is, it sends the top-ranked hot resources to the edge cache.
  • the cache policy control entity may select the hot resource in the main cache according to the hot statistics information and pre-allocate it in the edge cache, so that the user can quickly access the latest resource.
  • FIG. 16 is signaling diagram 6 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 16, the method includes:
  • Step 901 The SP device periodically sends an update request to the cache policy control entity.
  • Step 902 The cache policy control entity performs a second data cache decision to determine to update the service data in the main cache.
  • Step 903 The cache policy control entity sends an update request to the SP device during the idle time of the network, requesting to update the cache resource in the primary cache.
  • Step 904 The SP device sends service data to the primary cache to update the cache resource in the primary cache.
  • Step 905 The main cache updates the service data.
  • Step 906 The cache policy control entity sends a service data push request to the primary cache when the network is idle, requesting to update the expired resource in the edge cache.
  • Step 907 The primary cache sends the service data to the edge cache, and updates the expired resources in the edge cache.
  • In the service data cache processing method of this embodiment, the SP device periodically sends an update request to the cache policy control entity, the cache policy control entity makes a data cache decision, and the expired resources in the primary cache and the edge cache can be updated.
  • FIG. 17 is a signaling diagram 7 of a service data cache processing method according to an embodiment of the present invention. As shown in FIG. 17, the method includes:
  • Step 1001 The cache policy control entity actively sends an update request to the SP device according to the update period of the resources;
  • Step 1002 The SP device sends service data to the primary cache to update the cache resource in the primary cache.
  • Step 1003 The main cache updates the service data.
  • Step 1004 The cache policy control entity sends a service data push request to the primary cache when the network is idle, requesting to update the expired resource in the edge cache.
  • Step 1005 The primary cache sends the service data to the edge cache, and updates the expired resources in the edge cache.
  • the cache policy control entity may further update the expired service data in the primary cache and the edge cache according to the update period of the resource.
  • In this service data cache processing method, the difference from the foregoing embodiments is that the information exchange between the cache policy control entity and the primary cache is an internal implementation; the remaining parts are the same and are not described again here.
  • A person of ordinary skill in the art can understand that all or part of the steps of the foregoing method embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The embodiments of the present invention provide a service data cache processing method, device, and system. The method includes: receiving statistical information of service data; and sending, according to the statistical information, a service data push request to a service provider SP device, so that the SP device sends the service data to a primary cache deployed in the core network and/or an edge cache deployed in the access network. The service data cache processing method, device, and system provided by the embodiments of the present invention speed up UE access to and downloading of service data.

Description

Service data cache processing method, device, and system

Technical Field

The embodiments of the present invention relate to communications technologies, and in particular to a service data cache processing method, device, and system.

Background Art

As the capabilities of user equipment (UE) keep expanding, users can access or download service data such as audio and video to the UE through the wireless network anytime and anywhere, which makes listening to audio and watching video more convenient.

In the prior art, the process by which a UE accesses or downloads service data such as audio and video is as follows: the UE sends a service request message to the access network device; the access network device forwards the service request message to the core network device; the core network device sends the service request to a service provider (SP) through a packet data network (PDN); the SP sends the service data that the UE requested to download to the core network device through the PDN; and the core network device delivers the audio, video, or other service data to the UE through the access network device, so that the UE completes the access or download process.

In the course of implementing the embodiments of the present invention, the inventors found that in the prior art, a large number of UEs accessing or downloading service data occupies considerable network transmission resources, so access and downloads are often slow.

Summary of the Invention

The embodiments of the present invention provide a service data cache processing method, device, and system, which are used to speed up UE access to and downloading of service data.

In one aspect, an embodiment of the present invention provides a service data cache processing method, including: receiving statistical information of service data; and

sending, according to the statistical information, a service data push request to a service provider SP device, so that the SP device sends the service data corresponding to the service data push request to a primary cache deployed in the core network and/or an edge cache deployed in the access network.

An embodiment of the present invention further provides another service data cache processing method, including: collecting statistics on service data to obtain statistical information; and

sending the statistical information to a cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the service provider SP device to send the service data corresponding to the request to the primary cache deployed in the core network and/or the edge cache deployed in the access network.

In another aspect, an embodiment of the present invention further provides a cache policy control entity, including: a receiving module, configured to receive statistical information of service data; and

a sending module, configured to send, according to the statistical information, a service data push request to a service provider SP device, so that the SP device sends the service data corresponding to the service data push request to a primary cache deployed in the core network and/or an edge cache deployed in the access network.

An embodiment of the present invention further provides a network device, including:

an obtaining module, configured to collect statistics on service data and obtain statistical information; and

a sending module, configured to send the statistical information to a cache policy control entity, so that the cache policy control entity, according to the statistical information, requests the service provider SP device to send the service data corresponding to the request to the primary cache deployed in the core network and/or the edge cache deployed in the access network.

In yet another aspect, an embodiment of the present invention further provides a service data cache processing system, including the cache policy control entity and the network device provided above.

With the service data cache processing method, device, and system provided by the embodiments of the present invention, the cache policy control entity sends a service data push request to the service provider SP device according to the statistical information, so that the SP device sends the service data corresponding to the push request to the primary cache deployed in the core network and/or the edge cache deployed in the access network. When the UE accesses or downloads service data, it can obtain the data by accessing the primary cache and the edge cache, which speeds up UE access to and downloading of service data.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of a service data cache processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a service data cache processing method according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a cache policy control entity according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a cache policy control entity according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a cache policy control entity according to yet another embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a cache policy control entity according to still another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a network device according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a network device according to another embodiment of the present invention;
FIG. 9 is a schematic diagram of a service data cache processing system according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of devices and related interfaces in a 3GPP network according to an embodiment of the present invention;
FIG. 11 is signaling diagram 1 of a service data cache processing method according to an embodiment of the present invention;
FIG. 12 is signaling diagram 2 of a service data cache processing method according to an embodiment of the present invention;
FIG. 13 is signaling diagram 3 of a service data cache processing method according to an embodiment of the present invention;
FIG. 14 is signaling diagram 4 of a service data cache processing method according to an embodiment of the present invention;
FIG. 15 is signaling diagram 5 of a service data cache processing method according to an embodiment of the present invention;
FIG. 16 is signaling diagram 6 of a service data cache processing method according to an embodiment of the present invention;
FIG. 17 is signaling diagram 7 of a service data cache processing method according to an embodiment of the present invention.

Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

The technical solutions of the present invention can be applied to various communication systems, for example: the Global System for Mobile communications (GSM) system, the Code Division Multiple Access (CDMA) system, the Wideband Code Division Multiple Access (WCDMA) system, the General Packet Radio Service (GPRS), the Long Term Evolution (LTE) system, the Advanced Long Term Evolution (LTE-A) system, and the Universal Mobile Telecommunication System (UMTS). The embodiments of the present invention are not limited in this respect, but for convenience of description, the embodiments of the present invention take the LTE network as an example.

Different systems may include different network elements. For example, in LTE and LTE-A the network elements of the Radio Access Network (RAN) include the evolved base station (eNB), whereas in WCDMA the radio access network elements include the Radio Network Controller (RNC) and the NodeB. Similarly, other wireless networks such as Worldwide Interoperability for Microwave Access (WiMAX) may also use solutions similar to those in the embodiments of the present invention, except that the related modules in the base station system may differ; the embodiments of the present invention are not limited in this respect, but for convenience of description, the following embodiments take the eNodeB as an example.

It should also be understood that in the embodiments of the present invention, the terminal may also be referred to as a user equipment (UE), a mobile station (MS), a mobile terminal (MT), and so on. The terminal can communicate with one or more core networks via the radio access network. For example, the terminal may be a mobile phone (also called a "cellular" phone) or a computer with a communication function; it may also be a portable, pocket-sized, handheld, computer built-in, or vehicle-mounted mobile device.
图 1为本发明一实施例提供的业务数据緩存处理方法的流程示意图, 本 发明实施例执行主体为緩存策略控制实体, 如图 1所示, 该方法具体包括如下 步骤:
步骤 S101 : 接收业务数据的统计信息;
步骤 S102: 根据所述统计信息, 向服务提供商 SP设备发送业务数据 推送请求, 以使所述 SP设备将与所述业务数据推送请求对应的业务数据 发送给部署在核心网中的主緩存和 /或部署在接入网中的边缘緩存。
在步骤 S 101 中, 緩存策略控制实体可以周期性的或按照预设时间接 收业务数据的统计信息, 业务数据的统计信息可以为热门视频, 热门音频 或者热门新闻等资源的点击量排行、 关注度排行、 评价得分排行等, 在此 不作特别限制。
在步骤 S 102中,緩存策略控制实体向 SP设备发送业务数据推送请求, SP 设备可直接将与业务数据推送请求对应的业务数据发送给部署在核心 网中的主緩存。 SP 设备还可将与业务数据推送请求对应的业务数据发送 给部署在接入网中的边缘緩存, 其实现方式有两种, 一是直接将业务数据 发送给部署在接入网中的边缘緩存, 二是 SP设备先将业务数据发送给部 署在核心网的主緩存, 然后緩存策略控制实体再向主緩存发送业务数据推 送请求, 使主緩存将业务数据发送给接入网中的边缘緩存。
上述的緩存策略控制实体向 SP设备发送业务数据的推送请求的方式 可以为拉(pull ) 或推 (push ) 。 当釆用拉的方式时, 主緩存和边缘緩存 主动向 SP设备获取业务数据, 当釆用推的方式时, SP设备主动向主緩存 和边缘緩存发送业务数据。
本发明实施例提供的业务数据緩存处理方法, 通过緩存策略控制实体 根据统计信息, 向服务提供商 SP设备发送业务数据推送请求, 以使 SP设 备将与业务数据推送请求对应的业务数据发送给部署在核心网中的主緩 存和 /或部署在接入网中的边缘緩存,可对主緩存和边缘緩存进行管理和维 护, 使 UE访问或者下载业务数据时, 不需从 SP获取, 可直接通过访问 部署在核心网上的主緩存和部署在接入网的边缘緩存获得业务数据, 加速 了 UE访问或下载业务数据的速度。
可选地, 在本实施例中, 上述根据统计信息, 向 SP设备发送业务数据 推送请求包括: 根据统计信息, 确定数据緩存策略; 根据緩存策略, 向 SP 设备发送业务数据推送请求。
具体地,根据统计信息中业务数据的访问量信息、关注度等统计信息, 确定将统计信息中的业务数据存储在主緩存或边缘緩存中的数据緩存策 略。 具体的数据緩存策略可以为预设的一个或多个数据緩存策略。 例如, 可将统计信息中的在预设时间内访问量大于预设值的业务数据存储在主 緩存或边缘緩存, 或者将统计信息中的在预设时间内关注度大于预设值的 业务数据存储在主緩存或边缘緩存中, 或者将统计信息中的访问量大于预 设值的业务数据且其总和的存储空间达到主緩存或边缘緩存的存储上限 的业务数据存储在主緩存或边缘緩存中, 或者将统计信息中访问量大于预 设值的前 100个业务数据存储在主緩存或边缘緩存。 例如, 在预设时间内, 统计信息中的业务数据的访问量首先达到预设值时, 緩存策略控制实体确 定将访问量达到预设值的业务数据存储在其管理的主緩存或边缘緩存中, 则緩存策略控制实体会在网络空闲的时间内向 SP设备发送业务数据下载 请求, 将这些业务数据存储在主緩存中。 根据统计信息, 确定数据緩存策 略可以有多种方式, 在此不作特别限制。
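作为理解上述数据緩存策略的一个辅助说明, 以下给出一段仅作示意的 Python 代码草稿 (并非本发明实施例的实现, 其中的函数名、字段名和阈值均为说明性假设): 该草稿按访问量阈值筛选业务数据, 并在不超过緩存存储上限的前提下选出待推送的条目。

```python
from dataclasses import dataclass

@dataclass
class ItemStat:
    content_id: str
    hits: int          # access count within the preset time window
    size_bytes: int    # storage footprint of the content item

def select_items_to_cache(stats, hit_threshold, cache_capacity_bytes):
    """Pick items whose access count exceeds the preset threshold,
    ordered by popularity, without exceeding the cache storage limit.
    All names and thresholds here are illustrative assumptions."""
    hot = sorted((s for s in stats if s.hits > hit_threshold),
                 key=lambda s: s.hits, reverse=True)
    selected, used = [], 0
    for item in hot:
        if used + item.size_bytes > cache_capacity_bytes:
            break
        selected.append(item.content_id)
        used += item.size_bytes
    return selected

# Example: keep the most-requested items that fit into a 10 GB cache.
stats = [ItemStat("video-001", 5200, 700_000_000),
         ItemStat("video-002", 180, 500_000_000),
         ItemStat("news-017", 2300, 2_000_000)]
print(select_items_to_cache(stats, hit_threshold=1000,
                            cache_capacity_bytes=10 * 1024**3))
```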
本发明实施例提供的业务数据緩存处理方法, 通过确定数据緩存策 略, 向 SP设备发送业务数据推送请求, 可使 SP设备确定向主緩存和边缘緩 存推送业务数据的数量。
可选地, 在本实施例中, 接收业务数据的统计信息可以包括: 接收 SP 设备根据用户设备发送的业务数据请求统计获取的统计信息; 或者, 接收 主緩存根据用户设备发送的业务数据请求统计获取的统计信息; 或者, 接 收边缘緩存根据用户设备发送的业务数据请求统计获取的统计信息; 或 者,接收所述 SP设备根据从各业务平台收集的业务数据请求统计获取的所 述统计信息。
在具体实现过程中, 本实施例中需要统计的信息主要为已经发布在网 络上的视频音频资源等, SP 设备、 主緩存以及边缘緩存均可根据用户设 备发送的业务数据的请求, 对资源做出统计。 例如, 可以根据热门资源的 点击量, 统计出点击量排名。 SP 设备还可从各业务平台收集业务数据请 求, 例如 SP设备可以根据各业务平台中热门资源的关注度、 喜爱度、 得 分评价等, 对热门资源做出综合评价, 统计出热门资源。 其中各业务平台 可以为门户网站、 视频下载网站或者各种论坛等, 在此不作特别限制。
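为便于理解上述统计过程, 以下给出一段示意性的 Python 代码草稿 (非本发明实施例的一部分, 类名与方法名均为假设): 该草稿对收到的业务数据请求按内容标识计数, 并生成点击量排行作为统计信息。

```python
from collections import Counter

class RequestStatistics:
    """Illustrative sketch: count service-data requests per content ID
    and report a click-count ranking as the statistical information."""

    def __init__(self):
        self.hits = Counter()

    def record_request(self, content_id: str) -> None:
        # Called for every service data request received from a UE.
        self.hits[content_id] += 1

    def top_n(self, n: int = 100):
        # Ranking of the most requested items within the window.
        return self.hits.most_common(n)

stats = RequestStatistics()
for cid in ["video-001", "video-001", "news-017", "video-001"]:
    stats.record_request(cid)
print(stats.top_n(2))   # e.g. [('video-001', 3), ('news-017', 1)]
```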
本实施例中 ,通过接收 SP设备、主緩存以及边缘緩存获取的统计信息, 緩存策略控制实体可以根据统计信息做出緩存策略。
上述的根据热门统计信息, 向 SP设备发送业务数据推送请求, 包括: 根据统计信息, 确定与主緩存对应的第一数据緩存策略以及与边缘緩存对 应的第二数据緩存策略; 根据第一数据緩存策略, 向 SP设备发送第一业 务数据下载请求, 以使 SP设备根据第一业务数据下载请求将业务数据发 送给主緩存; 根据第二数据緩存策略, 向 SP设备发送第二业务数据下载 请求, 以使 SP设备根据第二业务数据下载请求将业务数据发送给边缘緩 存。
在统计出热门信息之后, 緩存策略控制实体可通过统计信息, 确定与 主緩存对应的第一数据緩存策略以及与边缘緩存对应的第二数据緩存策 略。
在具体实施过程中,第一数据緩存策略决定了 SP设备发送给主緩存的 业务数据, 例如緩存策略控制实体可根据第一数据緩存策略, 确定将预设 时间内关注度排名前 100的资源緩存在主緩存中, 则其会在网络空闲的时 间内, 向 SP设备发送第一业务数据下载请求, 请求将这 100个资源存储在 主緩存中。 对应地, 第二数据緩存策略确定了 SP设备发送给边缘緩存的业 务数据。 緩存策略控制实体根据第二数据緩存策略, 决定将关注度或喜爱 度等排名前 100的资源直接緩存在边缘緩存中, 则其会在网络空闲的时间 内, 向 SP设备发送第二业务数据下载请求, 请求将该 100个资源存储在边 缘緩存中。
本发明实施例提供的技术方案, 可将热门资源存储在部署在核心网中 的主緩存和部署在接入网的边缘緩存中, 可对热门资源进行预分配, 当用 户访问或下载热门资源时, 不会出现网络拥堵。
在上述的根据第一数据緩存策略, 向 SP设备发送第一业务数据下载 请求, 以使 SP设备根据第一业务数据下载请求将业务数据发送给主緩存 之后, 还包括:
根据统计信息, 确定与边缘緩存对应的第三数据緩存策略;
根据第三数据緩存策略, 向主緩存发送第三业务数据下载请求, 以使 主緩存根据第三业务数据下载请求将业务数据发送给边缘緩存。
具体地, 緩存策略控制实体根据第三数据緩存策略, 决定将预设时间内关注度或喜爱度等排名前 100 的资源存储在边缘緩存中, 则其会在网络空闲的时间内, 向主緩存发送第三业务数据下载请求, 请求将该 100 个资源存储在边缘緩存中。
本发明实施例提供的技术方案, 可将热门资源存储在部署在接入网的 边缘緩存中, 可对热门资源进行预分配, 当用户访问或下载热门资源时, 不会出现网络拥堵。
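结合上述第一、第二、第三数据緩存策略, 以下给出一段示意性的 Python 代码草稿 (非本发明实施例的实现, 其中的类名、消息名均为假设), 用于说明緩存策略控制实体如何在网络空闲时分别向 SP设备和主緩存下发下载请求, 从而把热门资源分层推送到主緩存和边缘緩存:

```python
class CpcfSenderStub:
    """Minimal stand-in for the cache policy control entity's message senders."""
    def send_to_sp(self, message, **fields):
        print("CPCF -> SP device:", message, fields)

    def send_to_main_cache(self, message, **fields):
        print("CPCF -> main cache:", message, fields)

def distribute_hot_content(cpcf, for_main_cache, for_edge_cache, network_idle):
    """Issue download requests only while the network is idle:
    the first policy pushes items into the main cache via the SP device,
    the third policy pushes items from the main cache down to edge caches."""
    if not network_idle:
        return
    for content_id in for_main_cache:
        cpcf.send_to_sp("first_service_data_download_request",
                        content_id=content_id)
    for content_id in for_edge_cache:
        cpcf.send_to_main_cache("third_service_data_download_request",
                                content_id=content_id)

distribute_hot_content(CpcfSenderStub(),
                       for_main_cache=["video-001", "news-017"],
                       for_edge_cache=["video-001"],
                       network_idle=True)
```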
优选地, 上述业务数据的统计信息包括如下中的任一或其组合: 访问 量大于预设值的所述业务数据的统计信息; 关注度大于预设值的所述业务 数据的统计信息; 包含预设关键词的所述业务数据的统计信息。 本发明实施例, 通过业务数据的统计信息的形式的多样化, 可根据统 计信息的多样化, 确定符合实际需求的数据緩存策略。
可选地, 上述的业务数据緩存处理方法还包括: 接收 SP设备发送的 统计信息的更新消息; 根据更新消息向 SP设备发送数据更新请求消息, 以使 SP设备根据数据更新请求消息将与更新请求对应的业务数据发送给 主緩存和 /或边缘緩存。
具体地, 緩存策略控制实体不仅可以定期接收 SP设备发送的统计信息的更新请求, 还可以根据统计信息中的热门资源的更新期限主动向 SP设备发起更新请求。緩存策略控制实体在网络空闲的时间内向 SP设备请求更新主緩存中的緩存资源, 并在网络空闲的时间内向主緩存请求将边缘緩存中存储的过期资源进行更新。
本发明实施例提供的技术方案, 通过緩存策略控制实体对主緩存和边 缘緩存中资源的更新, 使 UE可以快速访问到最新的热门资源。
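以下为上述更新机制的一个示意性 Python 代码草稿 (非本发明实施例的实现, 过期时间与函数名均为说明性假设): 緩存策略控制实体按资源的更新期限判断是否过期, 并仅在网络空闲时段发起更新请求。

```python
import time

EXPIRY_SECONDS = 24 * 3600          # assumed refresh period for hot content

def refresh_expired(cache_index, now, network_idle, send_update_request):
    """cache_index maps content_id -> last_update_timestamp.
    send_update_request is the hook that asks the SP device (for the main
    cache) or the main cache (for edge caches) to resend fresh content."""
    if not network_idle:
        return []
    expired = [cid for cid, ts in cache_index.items()
               if now - ts > EXPIRY_SECONDS]
    for cid in expired:
        send_update_request(cid)
    return expired

# Example run with a stubbed sender.
index = {"video-001": time.time() - 2 * 24 * 3600, "news-017": time.time()}
print(refresh_expired(index, time.time(), True,
                      lambda cid: print("update request for", cid)))
```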
图 2为本发明实施例二提供的业务数据緩存处理方法的流程示意图, 本 实施例的执行主体可以为 SP设备, 主緩存以及边缘緩存, 如图 2所示, 该方法 具体包括如下步骤:
步骤 S201 : 对业务数据进行统计, 获取统计信息;
步骤 S202: 将所述统计信息发送给緩存策略控制实体, 以使所述緩存 策略控制实体根据所述统计信息请求服务提供商 SP设备将与所述请求对 应的业务数据发送给部署在核心网中的主緩存和 /或部署在接入网中的边 缘緩存。
在具体实现过程中, 在步骤 S201 , SP设备, 主緩存及边缘緩存可以对业 务数据进行统计, 得到统计信息, 统计信息可以为热门视频, 热门音频或者 热门新闻等资源的点击量排行、 关注度排行、 评价得分排行等, 在此不作 特别限制。
在步骤 S202中, SP设备, 主緩存及边缘緩存可将统计信息的结果反 馈给緩存策略控制实体, 以使緩存策略控制实体根据统计信息请求服务提 供商 SP设备将与请求对应的业务数据发送给部署在核心网中的主緩存和 / 或部署在接入网中的边缘緩存。 SP 设备将业务数据发送给主緩存和边缘 緩存的过程具体参照上述实施例的描述, 在此不再赘述。
本发明实施例提供的业务数据緩存处理方法, 通过 SP设备, 主緩存 及边缘緩存可将统计信息的结果反馈给緩存策略控制实体, 可以使緩存策 略控制实体根据统计信息请求服务提供商 SP设备将与请求对应的业务数 据发送给部署在核心网中的主緩存和 /或部署在接入网中的边缘緩存, 使 UE访问或者下载业务数据时, 不需从 SP获取, 可直接通过访问部署在核 心网上的主緩存和部署在接入网的边缘緩存获得业务数据, 加速了 UE访 问或下载业务数据的速度。
可选地, 对业务数据进行统计, 获取统计信息包括: 对用户设备发送 的业务数据请求进行统计处理, 获取统计信息。
本实施例中对用户设备发送的业务数据请求进行统计处理, 主要为已 经发布在网络上的视频音频资源等。 SP 设备、 主緩存以及边缘緩存可对 用户设备具体的业务请求作出统计, 例如可以根据用户设备对热门资源的 点击量, 统计出点击量排名。
可选地, 对业务数据进行统计, 获取统计信息包括: 对从各业务平台 收集的业务数据请求进行统计处理, 获取统计信息。
SP设备可以对从各业务平台收集的业务数据请求进行统计, 例如 SP设备可以根据各业务平台中热门资源的关注度、喜爱度、得分评价等, 对热门资源做出综合评价, 统计出热门资源。其中各业务平台可以为门户网站、视频下载网站或者各种论坛等, 在此不作特别限制。
基于上述, 可对用户设备发送的业务数据请求进行统计处理, 或者对 从各业务平台收集的业务数据请求进行统计处理, 获取统计信息, 使统计 信息的统计范围大, 统计数据可靠。
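针对上述从各业务平台收集并综合评价的过程, 以下给出一段示意性的 Python 代码草稿 (非本发明实施例的实现, 权重与字段名均为假设): 对各平台上报的关注度、喜爱度和得分做加权汇总, 得到热门资源的综合排序。

```python
def aggregate_platform_scores(platform_reports, weights=None):
    """platform_reports: list of dicts like
       {"content_id": ..., "attention": ..., "likes": ..., "rating": ...}
       collected from portals, video sites, forums, etc.
       The weighting scheme below is purely an illustrative assumption."""
    weights = weights or {"attention": 0.5, "likes": 0.3, "rating": 0.2}
    totals = {}
    for report in platform_reports:
        score = sum(weights[k] * report.get(k, 0) for k in weights)
        totals[report["content_id"]] = totals.get(report["content_id"], 0) + score
    # Return content IDs ordered from most to least popular.
    return sorted(totals, key=totals.get, reverse=True)

reports = [{"content_id": "video-001", "attention": 900, "likes": 300, "rating": 4.8},
           {"content_id": "news-017", "attention": 400, "likes": 120, "rating": 4.2},
           {"content_id": "video-001", "attention": 250, "likes": 80, "rating": 4.5}]
print(aggregate_platform_scores(reports))
```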
优选地, 上述业务数据的统计信息包括如下中的任一或其组合: 访问 量大于预设值的所述业务数据的统计信息; 关注度大于预设值的所述业务 数据的统计信息; 包含预设关键词的所述业务数据的统计信息。
本发明实施例, 通过业务数据的统计信息的形式的多样化, 可根据统 计信息的多样化, 确定符合实际需求的数据緩存策略。
可选地, 上述的业务数据緩存处理方法还包括: 向緩存策略控制实体 发送统计信息的更新消息; 接收緩存策略控制实体根据更新消息发送的数 据更新请求消息; 根据数据更新请求消息将与所述更新请求对应的业务数 据发送给主緩存和 /或边缘緩存。
在具体实现过程中, SP 设备可以定期向緩存策略控制实体发送统计信息的更新消息, 并在緩存策略控制实体向其发送数据更新请求消息时, 将更新的统计信息对应的业务数据发送给主緩存和 /或边缘緩存。 SP设备将业务数据发送给主緩存和 /或边缘緩存的过程具体参照上述实施例的描述, 在此不再赘述。
本发明实施例提供的技术方案, 通过緩存策略控制实体对主緩存和边 缘緩存中资源的更新, 使 UE可以快速访问到最新的热门资源。
图 3为本发明一实施例提供的緩存策略控制实体结构示意图, 如图 3 所示, 本发明实施例提供的緩存策略控制实体 10包括接收模块 101 ,发送 模块 102。 其中, 接收模块 101用于接收业务数据的统计信息; 发送模块 102用于根据统计信息, 向服务提供商 SP设备发送业务数据推送请求, 以使 SP设备将与业务数据推送请求对应的业务数据发送给部署在核心网 中的主緩存和 /或部署在接入网中的边缘緩存。
本发明实施例提供的緩存策略控制实体, 可根据统计信息, 将统计信 息对应的业务数据发送给部署在核心网中的主緩存和 /或部署在接入网中 的边缘緩存, 可实现緩存策略控制实体对主緩存和边缘緩存的维护和管 理, 使 UE访问或者下载业务数据时, 不需从 SP获取, 可直接通过访问 部署在核心网上的主緩存和部署在接入网的边缘緩存获得业务数据, 加速 了 UE访问或下载业务数据的速度。
图 4为本发明又一实施例提供的緩存策略控制实体结构示意图, 如图 4所示, 本发明实施例提供的发送模块 102包括第一确定单元 1021、 第一 发送单元 1022。 其中, 第一确定单元 1021用于根据统计信息, 确定数据 緩存策略; 第一发送单元 1022用于根据数据緩存策略, 向 SP设备发送业 务数据推送请求。
本发明实施例提供的緩存策略控制实体, 通过第一确定单元 1021确定 数据緩存策略, 第一发送单元 1022向 SP设备发送业务数据下载请求, 可使 SP设备确定向主緩存和边缘緩存推送业务数据的数量。
可选地, 本发明实施例提供的接收模块 101具体用于接收所述 SP设 备根据用户设备发送的业务数据请求统计获取的所述统计信息; 或者, 接 收所述主緩存根据用户设备发送的业务数据请求统计获取的所述统计信 息; 或者, 接收所述边缘緩存根据用户设备发送的业务数据请求统计获取 的所述统计信息; 或者, 接收 SP设备根据从各业务平台收集的业务数据 请求统计获取的统计信息。
本实施例中, 接收模块 101通过接收 SP设备、 主緩存以及边缘緩存获 取的统计信息, 使緩存策略控制实体可以根据统计信息做出緩存策略。
图 5为本发明再一实施例提供的緩存策略控制实体结构示意图, 如图 5 所示, 发送模块 102包括第二确定单元 1025 , 第二发送单元 1026。
第二确定单元 1025 用于根据统计信息, 确定与主緩存对应的第一数 据緩存策略以及与边缘緩存对应的第二数据緩存策略;
第二发送单元 1026用于根据第一数据緩存策略, 向 SP设备发送第一 业务数据下载请求, 以使 SP设备根据第一业务数据下载请求将业务数据 发送给主緩存; 根据第二数据緩存策略, 向 SP设备发送第二业务数据下 载请求, 以使 SP设备根据第二业务数据下载请求将业务数据发送给边缘 緩存。
上述的第二确定单元 1025还用于根据统计信息, 确定与边缘緩存对 应的第三数据緩存策略;
第二发送单元 1026还用于根据第三数据緩存策略, 向主緩存发送第 三业务数据下载请求, 以使主緩存根据第三业务数据下载请求将业务数据 发送给边缘緩存。
本发明实施例提供的技术方案, 可将热门资源存储在部署在核心网中 的主緩存和部署在接入网的边缘緩存中, 可对热门资源进行预分配, 当用 户访问或下载热门资源时, 不会出现网络拥堵。
优选地, 上述业务数据的统计信息包括如下中的任一或其组合: 访问 量大于预设值的所述业务数据的统计信息; 关注度大于预设值的所述业务 数据的统计信息; 包含预设关键词的所述业务数据的统计信息。
本发明实施例, 通过业务数据的统计信息的形式的多样化, 可根据统 计信息的多样化, 确定符合实际需求的数据緩存策略。
图 6为本发明另一实施例提供的緩存策略控制实体结构示意图; 如图 6所示,本发明实施例提供的緩存策略控制实体 10包括上述任一实施例所 述的实体, 除了包括接收模块 101、 发送模块 102之外, 还包括更新模块 103。 更新模块 103具体用于接收 SP设备发送的统计信息的更新消息; 根 据更新消息向 SP设备发送数据更新请求消息,以使 SP设备根据数据更新 请求消息将与更新请求对应的业务数据发送给主緩存和 /或边缘緩存。
本发明实施例提供的技术方案, 通过更新模块对主緩存和边缘緩存中 资源的更新, 使 UE可以访问到最新的热门资源。
所属领域的技术人员可以清楚地了解到, 为描述的方便和简洁, 上述 緩存策略控制实体的具体工作过程, 可以参考前述方法实施例中的对应过 程, 在此不再赘述。
图 7为本发明一实施例提供的网络设备结构示意图; 如图 7所示, 本 发明实施例提供的网络设备 20包括获取模块 201和发送模块 202。 其中, 获取模块 201用于对业务数据进行统计, 获取统计信息, 发送模块 202用 于将统计信息发送给緩存策略控制实体, 以使緩存策略控制实体根据统计 信息请求服务提供商 SP设备将与请求对应的业务数据发送给部署在核心 网中的主緩存和 /或部署在接入网中的边缘緩存。
本发明实施例提供的网络设备, 通过发送模块可将热门统计信息反馈 给緩存策略控制实体, 可以使緩存策略控制实体根据热门统计信息请求服 务提供商 SP设备将与请求对应的业务数据发送给部署在核心网中的主緩 存和 /或部署在接入网中的边缘緩存, 使 UE访问或者下载业务数据时, 不 需从 SP获取, 可直接通过访问部署在核心网上的主緩存和部署在接入网 的边缘緩存获得业务数据, 加速了 UE访问或下载业务数据的速度。
可选地, 获取模块 201具体用于对用户设备发送的业务数据请求进行 统计处理, 获取统计信息。
可选地, 网络设备为 SP设备、 主緩存或者边缘緩存。
可选地, 获取模块 201具体用于对从各业务平台收集的业务数据请求 进行统计处理, 获取统计信息。
基于上述, 可对用户设备发送的业务数据请求进行统计处理, 或者对 从各业务平台收集的业务数据请求进行统计处理, 获取统计信息, 使统计 信息的统计范围大, 统计数据可靠。 优选地, 上述业务数据的统计信息包括如下中的任一或其组合: 访问 量大于预设值的所述业务数据的统计信息; 关注度大于预设值的所述业务 数据的统计信息; 包含预设关键词的所述业务数据的统计信息。
本发明实施例, 通过业务数据的统计信息的形式的多样化, 可根据统 计信息的多样化, 确定符合实际需求的数据緩存策略。
图 8为本发明又一实施例提供的网络设备结构示意图, 如图 8所示, 本发明实施例提供的网络设备除了获取模块 201 , 发送模块 202外, 还包 括更新模块 203。 更新模块 203用于向緩存策略控制实体发送统计信息的 更新消息; 接收緩存策略控制实体根据更新消息发送的数据更新请求消 息; 根据数据更新请求消息将与更新请求对应的的业务数据发送给主緩存 和 /或边缘緩存。
本发明实施例提供的技术方案, 通过緩存策略控制实体对主緩存和边 缘緩存中资源的更新, 使 UE可以快速访问到最新的热门资源。
所属领域的技术人员可以清楚地了解到, 为描述的方便和简洁, 上述 网络设备的具体工作过程, 可以参考前述方法实施例中的对应过程, 在此 不再赘述。
图 9为本发明一实施例提供的业务数据緩存处理系统示意图。 如图 9 所示, 本发明实施例提供的业务数据緩存处理系统 30 包括上述任一所述 的緩存策略控制实体 10和网络设备 20。
本发明实施例提供的数据緩存处理系统, 可实现緩存策略控制实体对 网络设备的管理、 维护和更新, 使 UE访问或者下载业务数据时, 不需从 SP 获取, 可直接通过访问部署在核心网上的主緩存和部署在接入网的边 缘緩存获得业务数据, 加速了 UE访问或下载业务数据的速度。
在具体实施例中, 緩存策略控制实体和网络设备中的主緩存可以是以 功能实体的形式设置在同一设备中; 当然, 也可以以功能实体的形式分别 设置在不同的设备中; 也可以其中一个以功能实体的形式设置某一设备 中, 另一个则为独立设置的网络设备。
如当网络系统为 2G、3G 或 4G 网络时, 可将主緩存设备设置在 P-GW 或 GGSN 中, 或是设置在 P-GW 或 GGSN 与 SP设备之间的 SGi 或 Gi 接口上。緩存策略控制实体可以为独立设置的网络设备, 也可以将緩存策略控制实体设置为策略与计费规则功能 ( Policy and Charging Rules Function, PCRF ) 中的功能模块。而边缘緩存一般是放置在较接近于 UE 的网络用户面数据通道上, 也可以以功能实体的形式设置在 eNB、RNC、BSC 之中。在 WiFi 网络中, 则可以将边缘緩存设备设置在 WiFi 接入点 ( Access Point, AP ) 中。当然, 在上述方案中并没有描述各设备或功能模块之间的接口, 以及其与网络中其他设备或功能模块之间的接口。考虑到这些接口具体与网络的实际类型相关, 在本发明实施例中不能一一举例描述, 以下仅以一种网络类型描述该接口和接口关系, 对于本领域普通技术人员而言, 这些接口只是为了描述清楚而进行的定义, 实际中实现该接口功能的任何名称或者定义的接口皆在本发明保护范围之内, 以下不再一一描述。
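针对上述各功能实体在不同网络中的部署位置, 以下用一段仅作整理说明的 Python 代码草稿加以归纳 (非本发明实施例定义的配置格式, 字段名为假设):

```python
# Illustrative summary of the deployment options described above
# (2G/3G/4G and WiFi networks); purely descriptive, not a configuration
# format defined by this application.
DEPLOYMENT_OPTIONS = {
    "main_cache": ["inside P-GW or GGSN",
                   "on the SGi/Gi interface between P-GW/GGSN and the SP device"],
    "cache_policy_control": ["standalone network device",
                             "function module inside the PCRF"],
    "edge_cache": ["user-plane data path close to the UE",
                   "inside eNB / RNC / BSC",
                   "inside a WiFi Access Point (AP)"],
}

for entity, options in DEPLOYMENT_OPTIONS.items():
    print(entity, "->", " | ".join(options))
```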
图 10为本发明一实施例提供的一种 3GPP网络中的设备和相关接口的 示意图。 其中, 主緩存与緩存策略控制实体为独立设置的网络设备。 各设 备之间用虚线表示的是不传输内容的接口而只是传输与内容相关的控制 与内容的元数据; 用实线表示的是传输内容的接口 (下同) 。 在本例中, 设置了如下功能实体:
SGi-U接口上的主緩存, 用于直接緩存外部 SP设备的内容。 在本实 施例中, 主緩存位于 SP设备与 P-GW (或 GGSN )之间的数据通道上。
RAN节点上的内置的边缘緩存, 与主緩存通过 C-D 接口连接, 其中 边缘緩存可设置为多个,用于直接緩存主緩存所提供的外部 SP设备内容。
緩存策略控制实体( CPCF, Cache Policy Control Function ) ,通过 C-M 接口与主緩存连接, 通过 SGi-C接口与外部 SP设备连接, 用于接收外部 SP 设备的内容緩存的请求, 并控制主緩存及边缘緩存进行相关内容的緩 存。
相关的接口包括:
主緩存与外部 SP设备之间的接口, 用 SGi-U来表示, 该 SGi-U接口 是一个逻辑上的接口,该接口可以是 SGi/Gi接口功能的一部分,可在现有 技术的基础上扩展该 SGi/Gi 接口功能, 使其具有 SGi-U接口的功能, 也 可以是新定义的一个接口, 实现主緩存与外部 SP设备之间内容緩存的连 接接口。
CPCF与外部 SP设备之间的接口, 用 SGi-C 来表示, 该 SGi-C接口 是一个逻辑上的接口, 该接口可以是 SGi/Gi 接口功能的一部分, 可在现 有技术的基础上扩展该 SGi/Gi 接口功能, 使其具有 SGi-C 接口的功能, 也可以是新定义的一个接口, 实现 CPCF 与外部 SP设备之间内容緩存控 制作用的连接接口。
主緩存与 CPCF 之间的接口, 用 C-M 来表示, CPCF 通过该 C-M接 口控制主緩存获取外部 SP设备的内容, 并将该内容存储在主緩存本地。
主緩存与边缘緩存之间的接口,用 C-D 来表示,边缘緩存通过该 C-D 接口获取主緩存从外部 SP设备获取的内容, 并将该内容存贮在其本地。
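为便于对照图 10 中的逻辑接口, 以下给出一段仅作整理说明的 Python 代码草稿 (非本发明实施例的定义, 数据结构为假设), 其中以布尔值区分实线 (传输内容) 与虚线 (仅传输控制与内容的元数据) 接口:

```python
# Each entry: interface name -> (endpoint A, endpoint B, carries content?)
LOGICAL_INTERFACES = {
    "SGi-U": ("main cache", "external SP device", True),   # content caching path
    "SGi-C": ("CPCF", "external SP device", False),        # cache control only
    "C-M":   ("CPCF", "main cache", False),                # CPCF controls the main cache
    "C-D":   ("main cache", "edge cache", True),           # content pushed to the edge
}

def content_bearing_interfaces():
    """Interfaces drawn with solid lines in Fig. 10 (they carry content)."""
    return [name for name, (_, _, carries) in LOGICAL_INTERFACES.items() if carries]

print(content_bearing_interfaces())   # ['SGi-U', 'C-D']
```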
需要说明的是, 图 10 中仅描述与緩存相关的网络的结构, 关于网络 的完整结构则没有描述, 应当理解为其他结构与现有技术中的一致, 此处 不做赘述。
下述实施例中描述的流程的系统架构可参考图 10 的示例, 该系统架构为双层緩存架构, 分为主緩存和边缘緩存, 在下述实施例的描述中不再对系统架构一一赘述。下述实施例中的业务数据緩存处理方法信令图中的网络设备, 均可采用上述图 3至图 9所示的网络设备; 下述实施例中的业务数据緩存处理方法信令图中的业务数据緩存处理方法, 均可采用上述图 1至图 2的业务数据緩存处理方法。
图 11为本发明实施例提供的业务数据緩存处理方法信令图一,如图 11 所示, 该方法包括:
步骤 401 : 用户设备向 SP设备发送业务数据请求;
步骤 402: SP设备根据用户设备发送的业务数据请求,进行热门统计, 获得统计信息, 统计信息可以为热门视频, 热门音频或者热门新闻等资源 的点击量排行、 关注度排行、 评价得分排行等。
步骤 403: SP设备将统计信息发送给緩存策略控制实体;
步骤 404: 緩存策略控制实体根据统计信息执行第一数据緩存策略, 緩存策略控制实体根据第一数据緩存策略可决定将统计信息中排名在前 的热门资源存储在其管理的主緩存中;
步骤 405: 緩存策略控制实体向 SP设备发送业务数据推送请求, 请 求将该排名在前的资源存储在主緩存中;
步骤 406: SP设备收到緩存策略控制实体的业务数据推送请求后, 向 主緩存发送业务数据, 即将该排名在前的热门资源发送给主緩存; 步骤 407: 主緩存进行数据存储。
本发明实施例提供的业务数据緩存处理方法, 主要为用户设备向 SP 设备发送业务数据请求, 通过緩存策略控制实体, SP 设备将排名在前的 业务数据发送给主緩存。
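下面给出与图 11 的信令顺序对应的一段示意性 Python 代码草稿 (仅用于按步骤复述上述流程, 非本发明实施例的实现):

```python
def simulate_figure_11():
    """Illustrative trace of the Fig. 11 signalling (steps 401-407).
    The printed messages only mirror the order described in the text."""
    steps = [
        ("401", "UE -> SP device",          "service data request"),
        ("402", "SP device",                "builds popularity statistics"),
        ("403", "SP device -> CPCF",        "statistical information"),
        ("404", "CPCF",                     "applies first data cache policy"),
        ("405", "CPCF -> SP device",        "service data push request"),
        ("406", "SP device -> main cache",  "pushes top-ranked service data"),
        ("407", "main cache",               "stores the service data"),
    ]
    for number, actors, action in steps:
        print(f"step {number}: {actors}: {action}")

simulate_figure_11()
```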
图 12为本发明实施例提供的业务数据緩存处理方法信令图二,如图 12 所示, 该方法包括:
步骤 501 : 用户设备向主緩存发送业务数据请求;
步骤 502: 主緩存根据用户设备发送的业务数据请求, 进行热门统计, 获得统计信息;
步骤 503 : 主緩存将统计信息发送给緩存策略控制实体;
步骤 504: 緩存策略控制实体根据统计信息执行第三数据緩存策略, 緩存策略控制实体根据数据緩存策略可决定将统计信息中排名在前的热 门资源存储在其管理的边缘緩存中;
步骤 505 : 緩存策略控制实体向主緩存发送业务数据推送请求, 请求 将该排名在前的热门资源存储在边缘緩存中;
步骤 506: 主緩存收到緩存策略控制实体的业务数据推送请求后, 向 边缘緩存发送业务数据, 即将该排名在前的热门资源发送给边缘緩存。
本发明实施例提供的业务数据緩存处理方法, 主要为用户设备向主緩存发送业务数据请求, 通过緩存策略控制实体, 主緩存将排名在前的业务数据发送给边缘緩存。 需要说明的是, 在步骤 505中, 若緩存策略控制实体发现一些业务数据还未緩存于主緩存中, 则会按照图 11 所述的方法, 将这些业务数据緩存在主緩存中。
图 13为本发明实施例提供的业务数据緩存处理方法信令图三, 如图 13 所示, 该方法包括:
步骤 601 : 用户设备向边缘緩存发送业务数据请求;
步骤 602: 边缘緩存根据用户设备发送的业务数据请求, 进行热门统 计, 获得统计信息;
步骤 603 : 边缘緩存将统计信息发送给緩存策略控制实体;
步骤 604: 緩存策略控制实体根据统计信息执行第三数据緩存策略, 緩存策略控制实体发现一些热门资源还未緩存在边缘緩存中, 数据緩存策 略可决定将统计信息中排名在前的热门资源存储在其管理的边缘緩存中; 步骤 605 : 緩存策略控制实体向主緩存发送业务数据推送请求, 请求 将该排名在前的热门资源存储在边缘緩存中;
步骤 606: 主緩存收到緩存策略控制实体的业务数据推送请求后, 向 边缘緩存发送业务数据, 即将该排名在前的热门资源发送给边缘緩存。
本发明实施例提供的业务数据緩存处理方法, 主要为用户设备向边缘 緩存发送业务数据请求, 通过緩存策略控制实体, 边缘緩存可存储排名在 前的业务数据。
上述图 1 1至图 13所示的业务数据緩存处理方法, 可根据用户设备的 业务数据请求, 将热门资源分别存储在主緩存和边缘緩存中, 使用户能够 快速访问和下载所需业务数据。
图 14为本发明实施例提供的业务数据緩存处理方法信令图四, 如图 14 所示, 该方法包括:
步骤 701 : SP 设备对从各业务平台收集的业务数据请求进行热门统 计, 获取统计信息;
步骤 702: SP设备将统计信息发送给緩存策略控制实体;
步骤 703 : 緩存策略控制实体根据统计信息执行数据緩存策略, 緩存 策略控制实体根据第一数据緩存策略可决定将统计信息中排名在前的热 门资源存储在其管理的主緩存中;
步骤 704: 緩存策略控制实体向 SP设备发送业务数据推送请求, 请 求将该排名在前的热门资源存储在主緩存中;
步骤 705 : SP设备收到緩存策略控制实体的业务数据推送请求后, 向 主緩存发送业务数据, 即将该排名在前的热门资源发送给主緩存;
步骤 706: 主緩存进行数据存储。
本发明实施例提供的业务数据緩存处理方法, SP 设备对从各业务平 台收集的业务数据请求进行热门统计, 获取统计信息, 并将统计信息发送 给緩存策略控制实体, 以使主緩存可以緩存最新的热门资源, 使用户能够 快速访问到最新资源。
图 15为本发明实施例提供的业务数据緩存处理方法信令图五, 如图 15 所示, 该方法包括:
步骤 801 : 緩存策略控制实体根据统计信息,执行第三数据緩存决策, 确定将排名在前的热门资源存储在边缘緩存中;
步骤 802: 緩存策略控制实体向主緩存发送业务数据推送请求, 请求 将该排名在前的热门资源存储在边缘緩存中;
步骤 803 : 主緩存收到緩存策略控制实体的业务数据推送请求后, 向 边缘緩存发送业务数据, 即将该排名在前的热门资源发送给边缘緩存。
本发明实施例提供的业务数据緩存处理方法, 緩存策略控制实体根据 热门统计信息, 可选取主緩存中的热门资源并预先分配存储在边缘緩存 中, 使用户能够快速访问到最新资源中。
图 16本发明实施例提供的业务数据緩存处理方法信令图六,如图 16所 示, 该方法包括:
步骤 901 : SP设备定期向緩存策略控制实体发送更新请求;
步骤 902: 緩存策略控制实体执行第二数据緩存决策, 确定对主緩存 中的业务数据进行更新;
步骤 903 : 緩存策略控制实体在网络空闲的时间内向 SP设备发送更 新请求, 请求更新在主緩存中的緩存资源;
步骤 904: SP设备向主緩存发送业务数据, 以更新在主緩存中的緩存 资源;
步骤 905 : 主緩存对业务数据进行更新;
步骤 906: 緩存策略控制实体在网络空闲时, 向主緩存发送业务数据 推送请求, 请求将边缘緩存中过期资源进行更新;
步骤 907:主緩存向边缘緩存发送业务数据, 更新边缘緩存中的过期资 源。
本发明实施例提供的业务数据緩存处理方法, 通过 SP设备定期向緩 存策略控制实体发送更新请求, 以及緩存策略控制实体执行数据緩存决 策, 可将主緩存和边缘緩存中过期资源进行更新。
图 17为本发明实施例提供的业务数据緩存处理方法信令图七, 如图 17 所示, 该方法包括:
步骤 1001 : 緩存策略控制实体根据资源的更新期限, 主动向 SP设备 发送更新请求;
步骤 1002: SP设备向主緩存发送业务数据, 以更新在主緩存中的緩 存资源;
步骤 1003 : 主緩存对业务数据进行更新;
步骤 1004: 緩存策略控制实体在网络空闲时, 向主緩存发送业务数据 推送请求, 请求将边缘緩存中过期资源进行更新;
步骤 1005: 主緩存向边缘緩存发送业务数据, 更新边缘緩存中的过期 资源。
本发明实施例提供的业务数据緩存处理方法, 緩存策略控制实体还可 以根据资源的更新期限, 更新主緩存和边缘緩存中的过期业务数据。
在上述的实施例中, 若緩存策略控制实体与主緩存处于同一功能实 体, 则业务数据緩存处理方法与上述实施例的区别为緩存策略控制实体与 主緩存之间的信息交互为内部实现, 其余部分相同, 在此不再赘述。
本领域普通技术人员可以理解: 实现上述各方法实施例的全部或部分 步骤可以通过程序指令相关的硬件来完成。 前述的程序可以存储于一计算 机可读取存储介质中。 该程序在执行时, 执行包括上述各方法实施例的步 骤; 而前述的存储介质包括: ROM、 RAM, 磁碟或者光盘等各种可以存 储程序代码的介质。
最后应说明的是: 以上各实施例仅用以说明本发明的技术方案, 而非对 其限制; 尽管参照前述各实施例对本发明进行了详细的说明, 本领域的普通 技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改, 或者对其中部分或者全部技术特征进行等同替换; 而这些修改或者替换, 并 不使相应技术方案的本质脱离本发明各实施例技术方案的范围。

Claims

权利要求书
1、 一种业务数据緩存处理方法, 其特征在于, 包括:
接收业务数据的统计信息;
根据所述统计信息, 向服务提供商 SP设备发送业务数据推送请求, 以使所述 SP设备将与所述业务数据推送请求对应的业务数据发送给部署 在核心网中的主緩存和 /或部署在接入网中的边缘緩存。
2、 根据权利要求 1 所述的方法, 其特征在于, 所述根据所述统计信 息, 向 SP设备发送业务数据推送请求, 包括:
根据所述统计信息, 确定数据緩存策略;
根据所述数据緩存策略, 向所述 SP设备发送业务数据推送请求。
3、 根据权利要求 1 所述的方法, 其特征在于, 所述接收业务数据的 统计信息, 包括:
接收所述 SP设备根据用户设备发送的业务数据请求统计获取的所述 统计信息; 或者,
接收所述主緩存根据用户设备发送的业务数据请求统计获取的所述 统计信息; 或者,
接收所述边缘緩存根据用户设备发送的业务数据请求统计获取的所 述统计信息; 或者
接收所述 SP设备根据从各业务平台收集的业务数据请求统计获取的 所述统计信息。
4、 根据权利要求 3 所述的方法, 其特征在于, 所述根据所述统计信 息, 向 SP设备发送业务数据推送请求, 包括:
根据所述统计信息, 确定与所述主緩存对应的第一数据緩存策略以及 与所述边缘緩存对应的第二数据緩存策略;
根据所述第一数据緩存策略, 向所述 SP设备发送第一业务数据下载 请求, 以使所述 SP设备根据所述第一业务数据下载请求将业务数据发送 给所述主緩存;
根据所述第二数据緩存策略, 向所述 SP设备发送第二业务数据下载 请求, 以使所述 SP设备根据所述第二业务数据下载请求将业务数据发送 给所述边缘緩存。
5、 根据权利要求 4所述的方法, 其特征在于, 所述根据所述第一数 据緩存策略, 向所述 SP设备发送第一业务数据下载请求, 以使所述 SP设 备根据所述第一业务数据下载请求将业务数据发送给所述主緩存之后, 还 包括:
根据所述统计信息, 确定与所述边缘緩存对应的第三数据緩存策略; 根据所述第三数据緩存策略, 向所述主緩存发送第三业务数据下载请 求, 以使所述主緩存根据所述第三业务数据下载请求将业务数据发送给边 缘緩存。
6、 根据权利要求 1〜5 中任一项所述的方法, 其特征在于, 所述业务 数据的统计信息包括如下中的任一或其组合:
访问量大于预设值的所述业务数据的统计信息;
关注度大于预设值的所述业务数据的统计信息;
包含预设关键词的所述业务数据的统计信息。
7、 根据权利要求 1〜5中任一项所述的方法, 其特征在于, 还包括: 接收所述 SP设备发送的所述统计信息的更新消息;
根据所述更新消息向所述 SP设备发送数据更新请求消息, 以使所述 SP 设备根据所述数据更新请求消息将与更新请求对应的业务数据发送给 所述主緩存和 /或所述边缘緩存。
8、 一种业务数据緩存处理方法, 其特征在于, 包括:
对业务数据进行统计, 获取统计信息;
将所述统计信息发送给緩存策略控制实体, 以使所述緩存策略控制实 体根据所述统计信息请求服务提供商 SP设备将与所述请求对应的业务数 据发送给部署在核心网中的主緩存和 /或部署在接入网中的边缘緩存。
9、 根据权利要求 8 所述的方法, 其特征在于, 所述对业务数据进行 统计, 获取统计信息, 包括:
对用户设备发送的业务数据请求进行统计处理, 获取所述统计信息。
10、 根据权利要求 8所述的方法, 其特征在于, 所述对业务数据进行 统计, 获取统计信息, 包括:
对从各业务平台收集的业务数据请求进行统计处理, 获取所述统计信 息。
11、 根据权利要求 8〜10 中任一项所述的方法, 其特征在于, 所述统计信息包括如下中的任一或其组合:
访问量大于预设值的所述业务数据的信息;
关注度大于预设值的所述业务数据的信息;
包含预设关键词的所述业务数据的信息。
12、 根据权利要求 8〜10中任一项所述的方法, 其特征在于, 还包括: 向所述緩存策略控制实体发送统计信息的更新消息;
接收所述緩存策略控制实体根据所述更新消息发送的数据更新请求 消息;
根据所述数据更新请求消息将与所述更新请求对应的业务数据发送给所述主緩存和 /或所述边缘緩存。
13、 一种緩存策略控制实体, 其特征在于, 包括:
接收模块, 用于接收业务数据的统计信息;
发送模块, 用于根据所述统计信息, 向服务提供商 SP设备发送业务 数据推送请求, 以使所述 SP设备将与所述业务数据推送请求对应的业务 数据发送给部署在核心网中的主緩存和 /或部署在接入网中的边缘緩存。
14、 根据权利要求 13 所述的实体, 其特征在于, 所述发送模块, 包 括:
第一确定单元, 用于根据所述统计信息, 确定数据緩存策略; 第一发送单元, 用于根据所述数据緩存策略, 向所述 SP设备发送业 务数据推送请求。
15、 根据权利要求 13 所述的实体, 其特征在于, 所述接收模块, 具 体用于:
接收所述 SP设备根据用户设备发送的业务数据请求统计获取的所述 统计信息; 或者,
接收所述主緩存根据用户设备发送的业务数据请求统计获取的所述 统计信息; 或者,
接收所述边缘緩存根据用户设备发送的业务数据请求统计获取的所 述统计信息; 或者
接收所述 SP设备根据从各业务平台收集的业务数据请求统计获取的 所述统计信息。
16、 根据权利要求 15 所述的实体, 其特征在于, 所述发送模块, 包 括:
第二确定单元, 用于根据所述统计信息, 确定与所述主緩存对应的第 一数据緩存策略以及与所述边缘緩存对应的第二数据緩存策略;
第二发送单元, 用于根据所述第一数据緩存策略, 向所述 SP设备发 送第一业务数据下载请求, 以使所述 SP设备根据所述第一业务数据下载 请求将业务数据发送给所述主緩存; 根据所述第二数据緩存策略, 向所述
SP设备发送第二业务数据下载请求, 以使所述 SP设备根据所述第二业务 数据下载请求将业务数据发送给所述边缘緩存。
17、 根据权利要求 16所述的实体, 其特征在于:
所述第二确定单元: 还用于根据所述统计信息, 确定与所述边缘緩存 对应的第三数据緩存策略;
所述第二发送单元: 还用于根据所述第三数据緩存策略, 向所述主緩 存发送第三业务数据下载请求, 以使所述主緩存根据所述第三业务数据下 载请求将业务数据发送给边缘緩存。
18、 根据权利要求 13〜17中任一项所述的实体, 其特征在于, 所述统 计信息包括如下中的任一或其组合:
访问量大于预设值的所述业务数据的信息;
关注度大于预设值的所述业务数据的信息;
包含预设关键词的所述业务数据的信息。
19、根据权利要求 13〜17中任一项所述的实体, 其特征在于,还包括: 更新模块, 用于接收所述 SP设备发送的统计信息的更新消息; 根据 所述更新消息向所述 SP设备发送数据更新请求消息,以使所述 SP设备根 据所述数据更新请求消息将与更新请求对应的业务数据发送给所述主緩 存和 /或所述边缘緩存。
20、 一种网络设备, 其特征在于, 包括:
获取模块, 用于对业务数据进行统计, 获取统计信息;
发送模块, 用于将所述统计信息发送给緩存策略控制实体, 以使所述 緩存策略控制实体根据所述统计信息请求服务提供商 SP设备将与所述请 求对应的业务数据发送给部署在核心网中的主緩存和 /或部署在接入网中 的边缘緩存。
21、 根据权利要求 20所述的网络设备, 其特征在于, 所述获取模块, 具体用于: 对用户设备发送的业务数据请求进行统计处理, 获取所述统计 信息。
22、 根据权利要求 20或 21所述的网络设备, 其特征在于, 所述网络 设备为所述 SP设备、 所述主緩存或者所述边缘緩存。
23、 根据权利要求 20所述的网络设备, 其特征在于, 所述获取模块, 具体用于: 对从各业务平台收集的业务数据请求进行统计处理, 获取所述 统计信息。
24、 根据权利要求 20、 21或 23所述的网络设备, 其特征在于, 还包 括:
更新模块, 用于向所述緩存策略控制实体发送统计信息的更新消息; 接收所述緩存策略控制实体根据所述更新消息发送的数据更新请求消息; 根据所述数据更新请求消息将与更新请求对应的业务数据发送给所述主 緩存和 /或所述边缘緩存。
25、 根据权利要求 24所述的网络设备, 其特征在于, 所述网络设备 为 SP设备。
26、 根据权利要求 20〜25中任一项所述的网络设备, 其特征在于, 所述统计信息包括如下中的任一或其组合:
访问量大于预设值的所述业务数据的信息;
关注度大于预设值的所述业务数据的信息;
包含预设关键词的所述业务数据的信息。
27、 一种业务数据緩存处理系统,其特征在于, 包括: 权利要求 13〜19 中任一项所述的緩存策略控制实体和权利要求 20〜26中任一项所述的网络 设备。
PCT/CN2012/081299 2012-09-12 2012-09-12 业务数据缓存处理方法、设备和系统 WO2014040244A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201280004558.6A CN103875227B (zh) 2012-09-12 2012-09-12 业务数据缓存处理方法、设备和系统
EP12884619.3A EP2887618B1 (en) 2012-09-12 2012-09-12 Service data caching processing method, device and system
PCT/CN2012/081299 WO2014040244A1 (zh) 2012-09-12 2012-09-12 业务数据缓存处理方法、设备和系统
US14/656,416 US10257306B2 (en) 2012-09-12 2015-03-12 Service data cache processing method and system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/081299 WO2014040244A1 (zh) 2012-09-12 2012-09-12 业务数据缓存处理方法、设备和系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/656,416 Continuation US10257306B2 (en) 2012-09-12 2015-03-12 Service data cache processing method and system and device

Publications (1)

Publication Number Publication Date
WO2014040244A1 true WO2014040244A1 (zh) 2014-03-20

Family

ID=50277491

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/081299 WO2014040244A1 (zh) 2012-09-12 2012-09-12 业务数据缓存处理方法、设备和系统

Country Status (4)

Country Link
US (1) US10257306B2 (zh)
EP (1) EP2887618B1 (zh)
CN (1) CN103875227B (zh)
WO (1) WO2014040244A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106488504B (zh) * 2015-08-28 2019-12-24 华为技术有限公司 网络系统和网络通信的方法
CN109995836B (zh) 2017-12-29 2021-12-03 华为技术有限公司 缓存决策方法及装置
CN108833556B (zh) * 2018-06-22 2021-04-27 中国联合网络通信集团有限公司 边缘缓存更新方法及系统
CN109614347B (zh) * 2018-10-22 2023-07-21 中国平安人寿保险股份有限公司 多级缓存数据的处理方法、装置、存储介质及服务器
CN112084219A (zh) * 2020-09-16 2020-12-15 京东数字科技控股股份有限公司 用于处理数据的方法、装置、电子设备和介质
US20220109616A1 (en) * 2020-10-02 2022-04-07 Arris Enterprises Llc Multi access point-cloud controller for collecting network statistical data


Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6076082A (en) * 1995-09-04 2000-06-13 Matsushita Electric Industrial Co., Ltd. Information filtering method and apparatus for preferentially taking out information having a high necessity
US7143170B2 (en) * 2003-04-30 2006-11-28 Akamai Technologies, Inc. Automatic migration of data via a distributed computer network
US8849793B2 (en) * 2007-06-05 2014-09-30 SafePeak Technologies Ltd. Devices for providing distributable middleware data proxy between application servers and database servers
EP2053831B1 (en) * 2007-10-26 2016-09-07 Alcatel Lucent Method for caching content data packages in caching nodes
CN101404585B (zh) * 2008-11-20 2011-08-31 中国电信股份有限公司 实现内容分发网络内容管理的策略化系统和方法
EP2420035B1 (en) * 2009-04-15 2017-09-27 Telefonaktiebolaget LM Ericsson (publ) Method and apparatus for reducing traffic in a communications network
WO2011116819A1 (en) * 2010-03-25 2011-09-29 Telefonaktiebolaget Lm Ericsson (Publ) Caching in mobile networks
US8863204B2 (en) * 2010-12-20 2014-10-14 Comcast Cable Communications, Llc Cache management in a video content distribution network
US8874845B2 (en) * 2012-04-10 2014-10-28 Cisco Technology, Inc. Cache storage optimization in a cache network
US9529724B2 (en) * 2012-07-06 2016-12-27 Seagate Technology Llc Layered architecture for hybrid controller

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101448019A (zh) * 2007-11-02 2009-06-03 阿尔卡泰尔卢森特公司 被管理的多媒体传送网络中的有弹性的业务质量
CN101764828A (zh) * 2008-12-23 2010-06-30 华为终端有限公司 推送会话的建立方法、推送系统和相关设备
CN101827309A (zh) * 2009-03-06 2010-09-08 华为技术有限公司 一种推送消息的发送方法、终端、服务器及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2887618A4 *

Also Published As

Publication number Publication date
US20150189040A1 (en) 2015-07-02
US10257306B2 (en) 2019-04-09
EP2887618A1 (en) 2015-06-24
CN103875227A (zh) 2014-06-18
EP2887618A4 (en) 2015-06-24
CN103875227B (zh) 2017-11-24
EP2887618B1 (en) 2018-10-17

Similar Documents

Publication Publication Date Title
JP6941679B2 (ja) ネットワークスライス選択方法、ユーザ機器、及びネットワーク装置
KR102711784B1 (ko) 홈 plmn 구성 변경으로 인한 ue에서의 vplmn 구성 업데이트 관리
US11245626B2 (en) Congestion notification method, related device, and system
US10257306B2 (en) Service data cache processing method and system and device
US8725128B2 (en) Pre-fetching of assets to user equipment
US20120184258A1 (en) Hierarchical Device type Recognition, Caching Control & Enhanced CDN communication in a Wireless Mobile Network
US8527648B2 (en) Systems, methods, and computer program products for optimizing content distribution in data networks
JP2014527654A (ja) データ使用頻度値をキャッシュするための同期方法、ならびに分散キャッシングの方法、デバイス、およびシステム
EP2866406A1 (en) Policy control method and apparatus
WO2022171051A1 (zh) 一种通信方法和通信装置
US20180069919A1 (en) System, method, and device for providing application service
US20140310339A1 (en) Content delivery method and apparatus, and access network device
WO2011144164A1 (zh) 数据传输方法、设备及系统
US9544944B2 (en) Adaptive, personal localized cache control server
TW201233211A (en) Caching at the wireless tower with remote charging services
US20150142882A1 (en) Content processing method and network side device
EP2651152A1 (en) Optimizing backhaul and wireless link capacity in mobile telecommunication systems
WO2013097184A1 (zh) 业务分发方法、设备和系统
WO2015062062A1 (zh) 控制应用接入网络的方法和设备
EP2997489B1 (en) Method and device for efficient mobile data transmission
WO2016041365A1 (zh) 数据传输的方法和设备
CN113678490A (zh) 用于在上行链路中发送后台数据的机制

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12884619

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2012884619

Country of ref document: EP