WO2022121124A1 - A service caching method for a cross-border service network - Google Patents

A service caching method for a cross-border service network Download PDF

Info

Publication number
WO2022121124A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
area
service
node
value
Prior art date
Application number
PCT/CN2021/078804
Other languages
English (en)
French (fr)
Inventor
尹建伟
郑邦蓬
邓水光
张欢
庞盛业
郭玉成
张毛林
Original Assignee
浙江大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江大学 filed Critical 浙江大学
Priority to US17/622,693 priority Critical patent/US11743359B2/en
Publication of WO2022121124A1 publication Critical patent/WO2022121124A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network

Definitions

  • the invention belongs to the field of cross-border service integration and computing, and particularly relates to a service caching method oriented to a cross-border service network.
  • Web services are an important carrier of Internet product research and development; their introduction improves engineers' development efficiency and shortens product iteration cycles.
  • Web service is a service-oriented architecture technology. It is a software system that provides services through standard Web protocols, hides service implementation details, supports communication between different machines, and ensures that cross-platform, cross-language, and cross-protocol application services can interoperate.
  • the Chinese patent document with publication number CN109286530A discloses an operation and support architecture for a cross-border service network, which is defined as an undirected graph given by the tuple (V, E, ρ, f, event), where V is the node set, E is the edge set of the undirected graph, ρ is the node quality evaluation function, f is the mapping between services and the service switch and service router nodes, and event is an event; the service switch node is responsible for converting enterprise services into a unified service style and opening them to the cross-border service network; the service router node synchronizes the services opened by the service switches into the cross-border service network, forwards service consumers' requests to accelerate service consumption, and provides a support carrier for service standardization and service composition; the service super node is responsible for managing service routers, service switches and message queues; in the cross-border service network, the node communication mechanisms include a service information event broadcast mechanism and a service call routing mechanism.
  • in the cross-border service network architecture, a routing path is established between service nodes through the service routing mechanism, and a service call is then initiated to obtain service resources.
  • as the number of service calls grows, direct service calls between nodes put enormous pressure on the service nodes, and thus burden the entire service network.
  • the increase in the number of nodes makes the network topology more complex, and direct service calls often return relatively slowly, affecting the return speed of user service calls.
  • a service caching technique is therefore needed: the result of a service call is cached, service resources are no longer occupied when a cache hit occurs during the call phase, and the call is accelerated by returning directly from the cache, which improves service call speed, reduces the overall burden on the cross-border service network, and improves the user experience of calling services.
  • the purpose of the present invention is to propose a service caching method oriented to a cross-border service network, which can improve the cache utilization efficiency in the cross-border service network, thereby accelerating service invocation.
  • the present invention provides the following technical solutions:
  • a service caching method oriented to a cross-border service network comprising:
  • the cache space of the service switch node is divided into a resident area, a changing area, a pre-recycling area and a maintenance index area; the cache hit frequency satisfies: resident area > changing area > pre-recycling area, and the maintenance index area is used to separately store service call paths;
  • the cache content in the cache space is replaced according to the cache value of the missed cache or the hit cache;
  • the service router and the service switch nodes in the corresponding area jointly form a hierarchical cache mode; when the cache space of any service switch node is insufficient, the service switch nodes in the same area perform cooperative caching and store content on other service switch nodes by means of indexes.
  • the content cached in the resident area is content that has recently been called frequently (high cache hit frequency), the content cached in the pre-recycling area is content that is rarely called, and the call frequency of content cached in the changing area lies between the two.
  • the cache hit frequency ranges of the three areas can be set as needed, for example: 81%-100% for the resident area, 51%-80% for the changing area, and 50% or below for the pre-recycling area.
  • the resident area and the pre-recycling area each occupy at most 20% of the cache space,
  • the changing area occupies at most 60% of the cache space,
  • and the remaining cache space forms the maintenance index area.
  • the resident area stores the hotspot information among cached service calls, i.e. entries that have been called frequently in the recent period; for this class of service call information, the node allocates greater cache tolerance and cache survival time.
  • the service call information stored in the changing area is frequently replaced in the cache, and the node allocates it relatively less cache tolerance and cache survival time;
  • the pre-recycling area stores cache content with a low cache hit rate, i.e. infrequently used caches; for this class of information the node allocates the least cache tolerance and cache survival time.
  • the cache content in the resident area and the changing area can be swapped with each other, cache content in the changing area can be replaced into the pre-recycling area, and cache content in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space. That is, content cached in the resident and changing areas can convert into each other, content cached in the changing and pre-recycling areas can convert into each other, and content cached in the pre-recycling area cannot convert directly into the resident area.
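The zone conversion rules above can be sketched in a few lines of Python. This is an illustrative assumption, not code from the patent: the class and method names are invented, and the example hit-frequency ranges from the text (81%-100%, 51%-80%, 50% and below) are treated as fractional thresholds.

```python
# Illustrative sketch (not from the patent) of the three-zone promotion and
# demotion rules, using the example hit-frequency ranges from the text:
# resident 81%-100%, changing 51%-80%, pre-recycling 50% and below.
RESIDENT, CHANGING, PRE_RECYCLE = "resident", "changing", "pre_recycle"

class ZonedCache:
    def __init__(self):
        self.zone = {}      # cache key -> current zone marker
        self.hit_freq = {}  # cache key -> hit frequency in [0, 1]

    def insert(self, key):
        # Entries enter the changing area when first brought into the cache.
        self.zone[key] = CHANGING
        self.hit_freq[key] = 0.0

    def update_zone(self, key, freq):
        self.hit_freq[key] = freq
        z = self.zone[key]
        # Resident <-> changing and changing <-> pre-recycle may convert;
        # pre-recycle content never converts directly into resident.
        if z == CHANGING and freq >= 0.81:
            self.zone[key] = RESIDENT
        elif z == CHANGING and freq <= 0.50:
            self.zone[key] = PRE_RECYCLE
        elif z == RESIDENT and freq < 0.81:
            self.zone[key] = CHANGING
        elif z == PRE_RECYCLE and freq > 0.50:
            self.zone[key] = CHANGING
```

Note that an entry whose frequency collapses while resident still passes through the changing area on the way down, matching the rule that no direct resident/pre-recycling conversion occurs.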
  • the method for replacing caches in the cache space according to the cache value of a missed or hit cache includes:
  • (1) when a service call is generated, if the cache is hit, step (2) is performed; if the cache is missed, step (3) is performed;
  • in step (2), the cache value is calculated from:
  • V, which represents the cache value,
  • Size(i), which represents the size of the i-th parameter in the service call information that needs to be cached,
  • Fr, a function related to the access frequency, whose value increases by one on a cache hit and is unchanged on a miss,
  • T_now, which represents the current time,
  • and T_score, which represents the time recorded when the cache entry was created.
  • in step (2), when the cache is completely hit, the location marker of the cache is checked: if the cache is in the changing area and its current cache value reaches the changing-area threshold, the cache is moved from the changing area to the resident area; if the cache is in the pre-recycling area and exceeds the pre-recycling-area threshold, the cache is moved from the pre-recycling area to the changing area.
  • in step (2), when the cache is partially hit, the cache content is updated and the information that needs to be stored is partially replaced into the cache space; the location marker of the cache is checked, and if the cache reaches the threshold of the corresponding area, its area marker is changed.
  • when a cache is replaced out of the cache space, the content in the storage space is replaced, and the service call path corresponding to the cache is retained in the index area; the index area is managed with an LRU strategy to improve its efficiency.
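Since the cache-value formula appears in this publication only as an image, the sketch below assumes one plausible reading of the listed terms, V = Fr * sum(Size(i)) / (T_now - T_score); the function and class names are likewise illustrative. It shows the lowest-value entry being evicted from the pre-recycling area while its service call path is retained in an LRU-managed index area.

```python
from collections import OrderedDict

def cache_value(entry, t_now):
    # ASSUMED form: V = Fr * sum(Size(i)) / (T_now - T_score).
    # entry["fr"] is the hit counter: +1 on each hit, unchanged on a miss.
    total_size = sum(entry["sizes"])            # Size(i) of each parameter
    age = max(t_now - entry["t_score"], 1e-9)   # T_now - T_score
    return entry["fr"] * total_size / age

class IndexArea:
    """Maintenance index area: stores only service call paths, LRU-managed."""
    def __init__(self, capacity=128):
        self.paths = OrderedDict()
        self.capacity = capacity

    def retain(self, key, call_path):
        self.paths[key] = call_path
        self.paths.move_to_end(key)             # most recently used at the end
        if len(self.paths) > self.capacity:
            self.paths.popitem(last=False)      # evict least recently used

def evict_lowest(pre_recycle, index_area, t_now):
    # Replace the lowest-value entry out of the pre-recycling area while
    # retaining its service call path in the index area.
    key = min(pre_recycle, key=lambda k: cache_value(pre_recycle[k], t_now))
    index_area.retain(key, pre_recycle[key]["call_path"])
    del pre_recycle[key]
    return key
```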
  • when the cache space of any service switch node is insufficient, the service switch nodes in the same area perform cooperative caching; the method of storing to other service switch nodes by means of indexes includes:
  • the service router maintains the node value of each service switch node in the area;
  • after a hit on the cooperative cache in node i, node i initiates a cache-hit request to node j through the index, attaching the address of the service invocation initiator, and node j, after learning the address, returns the cached service invocation result to the service invocation initiator;
  • when a service hotspot occurs in node j and the cache value of its remaining caches is greater than the value of the cooperative cache initiated by node i, node j replaces the cooperative cache and at the same time sends a cache invalidation message to node i; after receiving the cache invalidation message, node i re-initiates a cooperative cache request in the area;
  • after the service hotspot phenomenon in node i disappears, node i initiates a request to all nodes holding its cooperative caches, and those nodes invalidate the corresponding cooperative caches after receiving the message.
  • the service hotspot phenomenon in steps (5) and (6) refers to insufficient cache space.
  • in step (2), the node value is calculated from:
  • Load_i, the current load of node i,
  • n, the number of nodes in the area,
  • Value(i)_static, the static node value calculated from the network topology,
  • R_jk, the number of shortest paths between any two nodes j and k in the area,
  • R_jk(i), the number of those shortest paths that pass through node i,
  • V(i), the node cache value,
  • and ρ, the remaining rate of cache space in the area.
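The R_jk / R_jk(i) term is a betweenness-style measure: the share of pairwise shortest paths in the area that pass through node i. Below is a minimal sketch under stated assumptions: the topology is an unweighted graph, and the fraction is averaged over node pairs (the exact normalization in the patent's formula is available only as an image, so this is an illustration, not the patented formula).

```python
from collections import deque

def bfs_counts(adj, src):
    # Distance and number of shortest paths from src to every reachable node.
    dist, cnt = {src: 0}, {src: 1}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                cnt[v] = 0
                q.append(v)
            if dist[v] == dist[u] + 1:
                cnt[v] += cnt[u]
    return dist, cnt

def static_node_value(adj, i):
    # Average over pairs (j, k) of R_jk(i) / R_jk: the fraction of shortest
    # j-k paths that pass through node i (betweenness-style measure).
    dist_i, cnt_i = bfs_counts(adj, i)
    nodes = [n for n in adj if n != i]
    total, pairs = 0.0, 0
    for a in range(len(nodes)):
        j = nodes[a]
        dist_j, cnt_j = bfs_counts(adj, j)
        for b in range(a + 1, len(nodes)):
            k = nodes[b]
            if k not in dist_j:
                continue  # j and k are disconnected
            pairs += 1
            # A shortest j-k path runs through i iff d(j,i)+d(i,k) == d(j,k);
            # then R_jk(i) = cnt_j[i] * cnt_i[k].
            if i in dist_j and k in dist_i and dist_j[i] + dist_i[k] == dist_j[k]:
                total += (cnt_j[i] * cnt_i[k]) / cnt_j[k]
    return total / pairs if pairs else 0.0

# Path graph a-b-c-d: node b lies on the shortest paths of 2 of the 3
# pairs among {a, c, d}, so its static value is 2/3.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
```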
  • the node caches in the area improve the cache utilization efficiency of the entire area through cooperative caching, expanding the logical cache space of a single node and relieving the cache pressure on a single node during service invocation peaks.
  • the present invention has the beneficial effects that: by dividing the cache space, the problem of service caches being occupied for too long during short-term service call peaks is solved, improving the utilization efficiency of the cache space.
  • by calculating cache values, the cache content in the cache space is replaced optimally, which effectively improves the utilization efficiency of the node cache space and thereby accelerates service calls.
  • through cooperative caching, the logical cache space of a single node can be expanded within the same area, i.e. with the total cache space of the area unchanged, the cache space of each node is used optimally, reducing single-node cache pressure at service call peaks and improving cache efficiency.
  • Fig. 1 is the cache space division diagram of the service switch node
  • Fig. 2 is a partial replacement diagram of service invocation information in the cache space
  • Fig. 3 is the flow chart of service cache replacement method
  • FIG. 4 is a flow chart of a method for collaborative caching within a region.
  • the cross-border service network-oriented service caching method provided by the present invention includes:
  • the cache space of the service switch node is divided into a resident area, a changing area, a pre-recycling area and a maintenance index area; the cache hit frequency satisfies: resident area > changing area > pre-recycling area, and the maintenance index area is used to separately store service call paths;
  • the cache content in the cache space is replaced according to the cache value of the missed cache or the hit cache;
  • the service router and the service switch nodes in the corresponding area jointly form a hierarchical cache mode.
  • when the cache space of any service switch node is insufficient, the service switch nodes in the same area perform cooperative caching and store content on other service switch nodes by means of indexes.
  • the resident area stores the hotspot information in the cache service call, that is, it has been called frequently in the recent period of time.
  • the node will allocate a larger cache tolerance and cache life time.
  • the service invocation information stored in the change area is frequently replaced in the cache.
  • the node will allocate a relatively small cache tolerance and cache lifetime.
  • the pre-recycling area stores cache content with a low cache hit rate, i.e. infrequently used caches; for this class of information the node allocates the least cache tolerance and cache survival time.
  • FIG. 1 is a diagram of the cache space partition within a service switch node.
  • for example, the space allocation of the three areas is that the resident area and the pre-recycling area each account for about 20% of the cache space, and the changing area accounts for about 60% of the cache space.
  • according to the cache hit frequency, the cache content in the resident area and the changing area can be swapped with each other, cache content in the changing area can be replaced into the pre-recycling area, and cache content in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space.
  • if the service invocation information in the changing area is used relatively frequently and exceeds the upper cache-hit-frequency threshold of the changing area, it is promoted from the changing area to the resident area.
  • if the usage frequency of a cache in the changing area decreases and falls below the lower cache-hit-frequency threshold of the changing area, it is demoted from the changing area to the pre-recycling area.
  • service information in the resident area is transferred from the resident area to the changing area after its hit frequency drops to the lower cache-hit-frequency threshold of the resident area.
  • service information in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space.
  • service information is placed in the changing area when it is first brought into the cache.
  • Figure 2 is a partial replacement diagram of the service call information in the cache space.
  • when the service call Query hits the cache, the parameters required by the Query are checked first; suppose the three parameters Parameter_1, Parameter_2 and Parameter_k are involved, and the content cached for Parameter_2 now differs from what the service actually provides, so it must be replaced with Parameter_2_new, while the content of Parameter_k is missed.
  • the required content is then brought into the cache, achieving partial replacement, so the service call information is never replaced as a whole.
  • the method for replacing the cache in the cache space according to the cache value of the missed cache or the hit cache includes:
  • V represents the cache value,
  • Size(i) represents the size of the i-th parameter in the service call information that needs to be cached,
  • Fr is a function related to the access frequency, whose value increases by one on a cache hit and is unchanged on a miss,
  • T_now represents the current time,
  • and T_score represents the time recorded when the cache entry was created.
  • (3) in the case of a cache hit, check whether the cache is completely hit; if the cache is completely hit, go to step (4), and if the cache is partially hit, go to step (5);
  • (4) when the cache is completely hit, check the location marker of the cache: if the cache is in the changing area and the current cache value reaches the changing-area threshold, the cache is moved from the changing area to the resident area; if the cache is in the pre-recycling area and exceeds the pre-recycling-area threshold, the cache is moved from the pre-recycling area to the changing area.
  • when a cache is replaced out of the cache space, the content in the storage space is replaced, and the service call path corresponding to the cache is retained in the index area; the index area is managed with an LRU strategy to improve its efficiency.
  • the service router maintains the node value of the service switch node in the area.
  • the node value is calculated from:
  • Load_i, the current load of node i,
  • n, the number of nodes in the area,
  • Value(i)_static, the static node value calculated from the network topology,
  • R_jk, the number of shortest paths between any two nodes j and k in the area,
  • R_jk(i), the number of those shortest paths that pass through node i,
  • V(i), the node cache value described in claim 3,
  • and ρ, the remaining rate of cache space in the area.
  • node j with the lowest node value in the area is selected; node i forwards the content to be cached to node j, which saves it and returns the specific storage location of the cache to node i; node i saves the index in the form <IP_j, index>.
  • after a hit on the cooperative cache in node i, node i initiates a cache-hit request to node j through the index, attaching the address of the service invocation initiator, and node j, after learning the address, returns the cached service invocation result to the service invocation initiator.
  • when a service hotspot phenomenon also occurs in node j and the cache value of its remaining caches is greater than the value of the cooperative cache initiated by node i, node j replaces the cooperative cache and at the same time sends a cache invalidation message to node i; after receiving the cache invalidation message, node i re-initiates a cooperative cache request in the area.
  • after the service hotspot phenomenon in node i disappears, node i initiates a request to all nodes holding its cooperative caches, and those nodes invalidate the corresponding cooperative caches after receiving the message.
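The cooperative caching steps above can be sketched as a message flow between two in-memory node objects. All class, method and field names here are illustrative assumptions, not part of the patent; real nodes would exchange these messages over the network.

```python
# Sketch of the intra-area cooperative caching handshake described above.
# Names are illustrative; a real deployment would use network messages.
class SwitchNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.cache = {}        # local cache: location -> (value, owner id)
        self.coop_index = {}   # key -> (peer id, index) for remote copies

    def coop_store(self, key, value, peer):
        # Step (3): forward content to the lowest-value peer; it saves the
        # content and returns the storage location, kept as an
        # <IP_j, index>-style entry (node ids stand in for IP addresses).
        location = peer.accept_coop(key, value, owner=self.node_id)
        self.coop_index[key] = (peer.node_id, location)

    def accept_coop(self, key, value, owner):
        self.cache[key] = (value, owner)
        return key             # the "index": a handle to the stored entry

    def coop_hit(self, key, peers, caller_addr):
        # Step (4): on a hit in the index, ask the holding node to return
        # the cached result directly to the service invocation initiator.
        peer_id, location = self.coop_index[key]
        value, _owner = peers[peer_id].cache[location]
        return (caller_addr, value)

    def invalidate(self, key):
        # Steps (5)-(6): drop a cooperative cache entry on invalidation.
        self.cache.pop(key, None)
```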

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a service caching method for a cross-border service network: the cache space of a service switch node is divided into a resident area, a changing area, a pre-recycling area and a maintenance index area, where the cache hit frequency satisfies resident area > changing area > pre-recycling area, and the maintenance index area separately stores service call paths; when a service call is generated, cache content in the cache space is replaced according to the cache value of the missed or hit cache; the service router and the service switch nodes of the corresponding area together form a hierarchical cache mode, and when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, storing content on other service switch nodes by means of indexes. The method provided by the invention improves cache utilization efficiency in a cross-border service network and thereby accelerates service invocation.

Description

A Service Caching Method for a Cross-Border Service Network
Technical Field
The invention belongs to the field of cross-border service integration and computing, and particularly relates to a service caching method for a cross-border service network.
Background Art
With the development of network information technology, the Internet era has fully arrived. Web services are an important carrier of Internet product development; their introduction improves engineers' development efficiency and shortens product iteration cycles. A Web service is a service-oriented architecture technology: a software system that provides services through standard Web protocols, hides service implementation details, supports communication between different machines, and ensures that applications can interoperate across platforms, languages and protocols.
More and more Web services are being published on the Internet by enterprises; opening up services has become an Internet development trend, and third-party Web services have permeated everyday life and raised its quality. In 2017 the general API resource website Programmable Web announced that the APIs in its directory had reached 5,000 and predicted that eventually every company, even including government departments, would have its own API. Opening services, sharing data and attaching value to data can bring extra economic benefit to an enterprise and serve as its development strategy. Today, however, most enterprises open services independently, leaving services isolated. Web services that cross the boundaries of industries, organizations, regions, value chains and so on are called cross-border services, and integrating and reusing them is a major research challenge. For example, the Chinese patent document with publication number CN109286530A discloses an operation and support architecture for a cross-border service network, which is defined as an undirected graph given by the tuple (V, E, ρ, f, event), where V is the node set, E is the edge set, ρ is the node quality evaluation function, f is the mapping between services and the service switch and service router nodes, and event is an event. The service switch node converts enterprise services into a unified service style and opens them to the cross-border service network; the service router node synchronizes the services opened by service switches into the network, forwards service consumers' requests to accelerate service consumption, and provides a support carrier for service standardization and composition; the service super node manages service routers, service switches and message queues. In the cross-border service network, node communication mechanisms include a service information event broadcast mechanism and a service call routing mechanism.
In the cross-border service network architecture, service nodes establish routing paths through the service routing mechanism and then initiate service calls to obtain service resources. As the number of service calls grows, direct calls between nodes put enormous pressure on service nodes and thus burden the whole service network; the increase in the number of nodes makes the topology more complex, and direct calls often return relatively slowly, affecting the return speed of user service calls. A service caching technique is therefore needed: service call results are cached, service resources are no longer occupied when a cache hit occurs during the call phase, and the call is accelerated by returning directly from the cache, which raises call speed, lowers the overall burden on the cross-border service network, and improves the user experience of calling services.
Summary of the Invention
The purpose of the invention is to propose a service caching method for a cross-border service network that improves cache utilization efficiency in the network and thereby accelerates service invocation.
To achieve the above purpose, the invention provides the following technical solution:
A service caching method for a cross-border service network, the method comprising:
dividing the cache space of a service switch node into a resident area, a changing area, a pre-recycling area and a maintenance index area, where the cache hit frequency satisfies resident area > changing area > pre-recycling area, and the maintenance index area separately stores service call paths;
when a service call is generated, replacing cache content in the cache space according to the cache value of the missed or hit cache;
the service router and the service switch nodes of the corresponding area together forming a hierarchical cache mode; when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, storing content on other service switch nodes by means of indexes.
The content cached in the resident area is content that has recently been called frequently (high cache hit frequency), the pre-recycling area holds rarely called content, and the changing area holds content whose call frequency lies in between. The hit-frequency ranges of the three areas may be set as needed, e.g. 81%-100% for the resident area, 51%-80% for the changing area, and 50% or below for the pre-recycling area.
Preferably, the space allocation of the cache is: changing area > resident area = pre-recycling area > maintenance index area; cache tolerance and cache survival time are: resident area > changing area > pre-recycling area. For example, the resident and pre-recycling areas each occupy at most 20% of the cache space, the changing area at most 60%, and the remaining space forms the maintenance index area.
Regarding the allocation of cache tolerance and cache survival time: the resident area stores hotspot information among cached service calls, i.e. entries called frequently in the recent period, and for this class of service call information the node allocates greater cache tolerance and survival time; service call information in the changing area is frequently replaced in the cache and is allocated relatively less tolerance and survival time; the pre-recycling area stores content with a low hit rate, i.e. infrequently used caches, which receive the least tolerance and survival time.
Preferably, according to cache hit frequency, cache content in the resident and changing areas is swapped with each other, content in the changing area is replaced into the pre-recycling area, and content in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space. That is, content cached in the resident and changing areas can convert into each other, content cached in the changing and pre-recycling areas can convert into each other, and content cached in the pre-recycling area cannot convert directly into the resident area.
The method of replacing caches in the cache space according to the cache value of the missed or hit cache comprises:
(1) when a service call is generated, executing step (2) if the cache is hit, and step (3) if the cache is missed;
(2) updating the cache value of the hit cache and judging from the cache value whether the hit is complete or partial; on a complete hit the service call returns, and on a partial hit the cache is partially replaced; checking the area the cache is in and, if a partition adjustment is needed, performing a cache area replacement;
(3) checking whether the index area holds call path information for the corresponding cache; if so, making the corresponding service call, otherwise re-initiating the service call process;
replacing the lowest-value cache in the pre-recycling area according to the service call result and moving the needed cache into the changing area; if the changing-area storage space is full, the lowest-value cache there is moved into the pre-recycling area according to the cache values in the changing area.
In step (2), the cache value is calculated as:
Figure PCTCN2021078804-appb-000001
where V denotes the cache value, Size(i) denotes the size of the i-th parameter in the service call information to be cached, Fr is a function related to the access frequency, T_now denotes the current time, and T_score denotes the time recorded when the cache entry was created;
Figure PCTCN2021078804-appb-000002
that is, on a cache hit the function value increases by one, and on a cache miss the function value is unchanged.
In step (2), on a complete cache hit, the location marker of the cache is checked: if the cache is in the changing area and its current cache value reaches the changing-area threshold, it is moved from the changing area to the resident area; if the cache is in the pre-recycling area and exceeds the pre-recycling-area threshold, it is moved from the pre-recycling area to the changing area.
In step (2), on a partial cache hit, the cache content is updated and the information to be stored is partially replaced into the cache space; the location marker of the cache is checked, and if the cache reaches the threshold of the corresponding area, its area marker is changed.
When a cache is replaced out of the cache space, the content in the storage space is replaced while the service call path corresponding to the cache is retained in the index area; the index area is managed with an LRU strategy to improve its efficiency.
When the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, and the method of storing to other service switch nodes by means of indexes comprises:
(1) when the cache space of service switch node i is insufficient, initiating a cooperative caching process within the area;
(2) the service router maintaining the node value of each service switch node in the area;
(3) selecting the node j with the lowest node value in the area; node i forwards the content to be cached to node j, which saves it and returns the specific storage location of the cache to node i; node i saves the index in the form <IP_j, index>;
(4) after a hit on the cooperative cache in node i, a cache-hit request carrying the address of the service invocation initiator is sent to node j through the index, and node j, having learned the address, returns the cached service call result to the initiator;
(5) when a service hotspot occurs in node j and the cache value of its remaining caches is greater than the value of the cooperative cache initiated by node i, node j replaces the cooperative cache and at the same time sends a cache invalidation message to node i, which, after receiving the message, re-initiates a cooperative caching request within the area;
(6) after the service hotspot phenomenon in node i disappears, node i sends a request to all cooperatively caching nodes, and upon receiving the message those nodes invalidate the cooperative caches they hold.
The service hotspot phenomenon in steps (5) and (6) refers to insufficient cache space.
In step (2), the node value is calculated as:
Figure PCTCN2021078804-appb-000003
where Load_i is the current load of node i, n is the number of nodes in the area, and Value(i)_static is the static node value calculated from the network topology;
Figure PCTCN2021078804-appb-000004
where R_jk is the number of shortest paths between any two nodes j and k in the area, R_jk(i) is the number of those shortest paths that pass through node i, and n is the number of nodes in the area;
Figure PCTCN2021078804-appb-000005
where V(i) is the node cache value and ρ is the remaining rate of cache space in the area.
In the invention, the node caches within an area improve area-wide cache utilization through cooperative caching, expanding the logical cache space of a single node and relieving single-node cache pressure during service call peaks.
Compared with the prior art, the beneficial effects of the invention are: dividing the cache space solves the problem of service caches being occupied for too long during short-term service call peaks, improving cache space utilization; computing cache values allows the cache content in the cache space to be replaced optimally, effectively improving node cache space utilization and thereby accelerating service calls; cooperative caching expands the logical cache space of a single node within an area, i.e. with the total area cache space unchanged, the cache space of each node is used optimally, reducing single-node cache pressure at service call peaks and improving cache efficiency.
Brief Description of the Drawings
Fig. 1 is a diagram of the cache space partition of a service switch node;
Fig. 2 is a diagram of the partial replacement of service call information in the cache space;
Fig. 3 is a flow chart of the service cache replacement method;
Fig. 4 is a flow chart of the intra-area cooperative caching method.
Detailed Description
To make the purpose, technical solution and advantages of the invention clearer, the invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit its scope of protection.
The service caching method for a cross-border service network provided by the invention comprises:
dividing the cache space of a service switch node into a resident area, a changing area, a pre-recycling area and a maintenance index area, where the cache hit frequency satisfies resident area > changing area > pre-recycling area, and the maintenance index area separately stores service call paths;
when a service call is generated, replacing cache content in the cache space according to the cache value of the missed or hit cache;
the service router and the service switch nodes of the corresponding area together forming a hierarchical cache mode; when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, storing content on other service switch nodes by means of indexes.
The space allocation of the cache is: changing area > resident area = pre-recycling area > maintenance index area; cache tolerance and cache survival time are: resident area > changing area > pre-recycling area. The resident area stores hotspot information among cached service calls, i.e. entries called frequently in the recent period, which are allocated greater cache tolerance and survival time. Service call information in the changing area is frequently replaced in the cache and is allocated relatively less tolerance and survival time. The pre-recycling area stores content with a low hit rate, i.e. infrequently used caches, which receive the least tolerance and survival time. Fig. 1 shows the cache space partition within a service switch node; for example, the resident and pre-recycling areas each occupy about 20% of the cache space and the changing area about 60%.
According to cache hit frequency, cache content in the resident and changing areas is swapped with each other, content in the changing area is replaced into the pre-recycling area, and content in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space. If service call information in the changing area is used relatively frequently and exceeds the upper hit-frequency threshold of the changing area, it is promoted from the changing area to the resident area; if the usage frequency of a cache in the changing area drops below the lower hit-frequency threshold of the changing area, it moves from the changing area to the pre-recycling area. Service information in the resident area is moved to the changing area once its hit frequency drops to the lower hit-frequency threshold of the resident area, and service information in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space. Service information is placed in the changing area when it is first brought into the cache.
Fig. 2 shows the partial replacement of service call information in the cache space. When a service call Query hits the cache, the parameters required by the Query are checked first; suppose the three parameters Parameter_1, Parameter_2 and Parameter_k are involved, the cached content of Parameter_2 now differs from what the service actually provides and must be replaced by Parameter_2_new, while Parameter_k misses. The needed content is then brought into the cache, achieving partial replacement, so the service call entry is never replaced as a whole.
As shown in Fig. 3, the method of replacing caches in the cache space according to the cache value of the missed or hit cache comprises:
(1) when a service call is generated, executing step (2) if the cache is fully or partially hit, and step (6) if it is missed;
(2) updating the cache value of the hit cache, the cache value being calculated as:
Figure PCTCN2021078804-appb-000006
where V denotes the cache value, Size(i) denotes the size of the i-th parameter in the service call information to be cached, Fr is a function related to the access frequency, T_now denotes the current time, and T_score denotes the time recorded when the cache entry was created.
Figure PCTCN2021078804-appb-000007
That is, on a cache hit the function value increases by one, and on a cache miss the function value is unchanged.
(3) in the case of a cache hit, checking whether the hit is complete; if the cache is completely hit, executing step (4), and if it is partially hit, executing step (5);
(4) on a complete hit, checking the location marker of the cache: if the cache is in the changing area and its current cache value reaches the changing-area threshold, moving it from the changing area to the resident area; if the cache is in the pre-recycling area and exceeds the pre-recycling-area threshold, moving it from the pre-recycling area to the changing area;
(5) updating the cache content and partially replacing the information to be stored into the cache space; checking the location marker of the cache and, if the cache reaches the threshold of the corresponding area, changing its area marker;
when a cache is replaced out of the cache space, the content in the storage space is replaced while the service call path corresponding to the cache is retained in the index area; the index area is managed with an LRU strategy to improve its efficiency.
(6) checking whether the index area holds call path information for the corresponding cache; if so, making the corresponding service call, otherwise re-initiating the service call process. The lowest-value content in the pre-recycling area is replaced and the needed cache is moved into the changing area; if the changing-area storage space is full, the lowest-value content there is moved into the pre-recycling area according to the changing-area cache values.
As shown in Fig. 4, when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, and the method of storing to other service switch nodes by means of indexes comprises:
(1) when the cache space of service switch node i is insufficient, initiating a cooperative caching process within the area;
(2) the service router maintaining the node value of each service switch node in the area, the node value being calculated as:
Figure PCTCN2021078804-appb-000008
where Load_i is the current load of node i, n is the number of nodes in the area, and Value(i)_static is the static node value calculated from the network topology.
Figure PCTCN2021078804-appb-000009
where R_jk is the number of shortest paths between any two nodes j and k in the area, R_jk(i) is the number of those shortest paths that pass through node i, and n is the number of nodes in the area.
Figure PCTCN2021078804-appb-000010
where V(i) is the node cache value described in claim 3 and ρ is the remaining rate of cache space in the area.
(3) selecting the node j with the lowest node value in the area; node i forwards the content to be cached to node j, which saves it and returns the specific storage location of the cache to node i; node i saves the index in the form <IP_j, index>.
(4) after a hit on the cooperative cache in node i, a cache-hit request carrying the address of the service invocation initiator is sent to node j through the index, and node j, having learned the address, returns the cached service call result to the initiator.
(5) when a service hotspot phenomenon also occurs in node j and the cache value of its remaining caches is greater than the value of the cooperative cache initiated by node i, node j replaces the cooperative cache and at the same time sends a cache invalidation message to node i, which, after receiving the message, re-initiates a cooperative caching request within the area.
(6) after the service hotspot phenomenon in node i disappears, node i sends a request to all cooperatively caching nodes, and upon receiving the message those nodes invalidate the cooperative caches they hold.
The specific embodiments described above explain the technical solution and beneficial effects of the invention in detail. It should be understood that they are only the most preferred embodiments of the invention and do not limit it; any modification, supplement or equivalent replacement made within the principles of the invention shall fall within its scope of protection.

Claims (10)

  1. A service caching method for a cross-border service network, characterized in that the method comprises:
    dividing the cache space of a service switch node into a resident area, a changing area, a pre-recycling area and a maintenance index area, where the cache hit frequency satisfies resident area > changing area > pre-recycling area, and the maintenance index area separately stores service call paths;
    when a service call is generated, replacing cache content in the cache space according to the cache value of the missed or hit cache;
    the service router and the service switch nodes of the corresponding area together forming a hierarchical cache mode; when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, storing content on other service switch nodes by means of indexes.
  2. The service caching method for a cross-border service network according to claim 1, characterized in that the space allocation of the cache is: changing area > resident area = pre-recycling area > maintenance index area; and the cache tolerance and cache survival time are: resident area > changing area > pre-recycling area.
  3. The service caching method for a cross-border service network according to claim 2, characterized in that, according to cache hit frequency, cache content in the resident and changing areas is swapped with each other, content in the changing area is replaced into the pre-recycling area, and content in the pre-recycling area is removed from the cache space after losing its cache tolerance and cache survival space.
  4. The service caching method for a cross-border service network according to claim 1, characterized in that the method of replacing caches in the cache space according to the cache value of the missed or hit cache comprises:
    (1) when a service call is generated, executing step (2) if the cache is hit, and step (3) if the cache is missed;
    (2) updating the cache value of the hit cache and judging from the cache value whether the hit is complete or partial; on a complete hit the service call returns, and on a partial hit the cache is partially replaced; checking the area the cache is in and, if a partition adjustment is needed, performing a cache area replacement;
    (3) checking whether the index area holds call path information for the corresponding cache; if so, making the corresponding service call, otherwise re-initiating the service call process;
    replacing the lowest-value cache in the pre-recycling area according to the service call result and moving the needed cache into the changing area; if the changing-area storage space is full, the lowest-value cache there is moved into the pre-recycling area according to the cache values in the changing area.
  5. The service caching method for a cross-border service network according to claim 4, characterized in that in step (2) the cache value is calculated as:
    Figure PCTCN2021078804-appb-100001
    where V denotes the cache value, Size(i) denotes the size of the i-th parameter in the service call information to be cached, Fr is a function related to the access frequency, T_now denotes the current time, and T_score denotes the time recorded when the cache entry was created;
    Figure PCTCN2021078804-appb-100002
    that is, on a cache hit the function value increases by one, and on a cache miss the function value is unchanged.
  6. The service caching method for a cross-border service network according to claim 4, characterized in that in step (2), on a complete cache hit, the location marker of the cache is checked: if the cache is in the changing area and its current cache value reaches the changing-area threshold, it is moved from the changing area to the resident area; if the cache is in the pre-recycling area and exceeds the pre-recycling-area threshold, it is moved from the pre-recycling area to the changing area.
  7. The service caching method for a cross-border service network according to claim 4, characterized in that in step (2), on a partial cache hit, the cache content is updated and the information to be stored is partially replaced into the cache space; the location marker of the cache is checked, and if the cache reaches the threshold of the corresponding area, its area marker is changed.
  8. The service caching method for a cross-border service network according to claim 7, characterized in that when a cache is replaced out of the cache space, the content in the storage space is replaced while the service call path corresponding to the cache is retained in the index area; the index area is managed with an LRU strategy to improve its efficiency.
  9. The service caching method for a cross-border service network according to claim 1, characterized in that when the cache space of any service switch node is insufficient, the service switch nodes within the same area perform cooperative caching, and the method of storing to other service switch nodes by means of indexes comprises:
    (1) when the cache space of service switch node i is insufficient, initiating a cooperative caching process within the area;
    (2) the service router maintaining the node value of each service switch node in the area;
    (3) selecting the node j with the lowest node value in the area; node i forwards the content to be cached to node j, which saves it and returns the specific storage location of the cache to node i; node i saves the index in the form <IP_j, index>;
    (4) after a hit on the cooperative cache in node i, a cache-hit request carrying the address of the service invocation initiator is sent to node j through the index, and node j, having learned the address, returns the cached service call result to the initiator;
    (5) when a service hotspot occurs in node j and the cache value of its remaining caches is greater than the value of the cooperative cache initiated by node i, node j replaces the cooperative cache and at the same time sends a cache invalidation message to node i, which, after receiving the message, re-initiates a cooperative caching request within the area;
    (6) after the service hotspot phenomenon in node i disappears, node i sends a request to all cooperatively caching nodes, and upon receiving the message those nodes invalidate the cooperative caches they hold.
  10. The service caching method for a cross-border service network according to claim 9, characterized in that in step (2) the node value is calculated as:
    Figure PCTCN2021078804-appb-100003
    where Load_i is the current load of node i, n is the number of nodes in the area, and Value(i)_static is the static node value calculated from the network topology;
    Figure PCTCN2021078804-appb-100004
    where R_jk is the number of shortest paths between any two nodes j and k in the area, R_jk(i) is the number of those shortest paths that pass through node i, and n is the number of nodes in the area;
    Figure PCTCN2021078804-appb-100005
    where V(i) is the node cache value and ρ is the remaining rate of cache space in the area.
PCT/CN2021/078804 2020-12-07 2021-03-03 Service caching method for a cross-border service network WO2022121124A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/622,693 US11743359B2 (en) 2020-12-07 2021-03-03 Service caching method for a cross-border service network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011437726.3A CN112565437B (zh) 2020-12-07 2020-12-07 Service caching method for a cross-border service network
CN202011437726.3 2020-12-07

Publications (1)

Publication Number Publication Date
WO2022121124A1 true WO2022121124A1 (zh) 2022-06-16

Family

ID=75060381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/078804 WO2022121124A1 (zh) 2020-12-07 2021-03-03 Service caching method for a cross-border service network

Country Status (3)

Country Link
US (1) US11743359B2 (zh)
CN (1) CN112565437B (zh)
WO (1) WO2022121124A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174471B * 2021-04-07 2024-03-26 中国科学院声学研究所 Cache management method for an ICN router storage unit

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103634231A (zh) * 2013-12-02 2014-03-12 江苏大学 一种基于内容流行度的ccn缓存分区置换方法
US20150052314A1 (en) * 2013-08-13 2015-02-19 Fujitsu Limited Cache memory control program, processor incorporating cache memory, and cache memory control method
CN106131182A (zh) * 2016-07-12 2016-11-16 重庆邮电大学 命名数据网络中一种基于流行度预测的协作缓存方法
CN107948247A (zh) * 2017-11-01 2018-04-20 西安交通大学 一种软件定义网络的虚拟缓存通道缓存管理方法
CN110933692A (zh) * 2019-12-02 2020-03-27 山东大学 一种基于边缘计算框架的优化缓存系统及其应用

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5594885A (en) * 1991-03-05 1997-01-14 Zitel Corporation Method for operating a cache memory system using a recycled register for identifying a reuse status of a corresponding cache entry
US5590300A (en) * 1991-03-05 1996-12-31 Zitel Corporation Cache memory utilizing address translation table
US5499354A (en) * 1993-05-19 1996-03-12 International Business Machines Corporation Method and means for dynamic cache management by variable space and time binding and rebinding of cache extents to DASD cylinders
US5745729A (en) * 1995-02-16 1998-04-28 Sun Microsystems, Inc. Methods and apparatuses for servicing load instructions
US5668987A (en) * 1995-08-31 1997-09-16 Sybase, Inc. Database system with subquery optimizer
US6286080B1 (en) * 1999-02-16 2001-09-04 International Business Machines Corporation Advanced read cache emulation
US6338115B1 (en) * 1999-02-16 2002-01-08 International Business Machines Corporation Advanced read cache management
US6711652B2 (en) * 2001-06-21 2004-03-23 International Business Machines Corporation Non-uniform memory access (NUMA) data processing system that provides precise notification of remote deallocation of modified data
US6857045B2 (en) * 2002-01-25 2005-02-15 International Business Machines Corporation Method and system for updating data in a compressed read cache
US7337271B2 (en) * 2003-12-01 2008-02-26 International Business Machines Corporation Context look ahead storage structures
US7447869B2 (en) * 2005-04-07 2008-11-04 Ati Technologies, Inc. Method and apparatus for fragment processing in a virtual memory system
US8195890B1 (en) * 2006-08-22 2012-06-05 Sawyer Law Group, P.C. Method for maintaining cache coherence using a distributed directory with event driven updates
US8732163B2 (en) * 2009-08-04 2014-05-20 Sybase, Inc. Query optimization with memory I/O awareness
US8352687B2 (en) * 2010-06-23 2013-01-08 International Business Machines Corporation Performance optimization and dynamic resource reservation for guaranteed coherency updates in a multi-level cache hierarchy
US8560803B2 (en) * 2010-06-23 2013-10-15 International Business Machines Corporation Dynamic cache queue allocation based on destination availability
US8566532B2 (en) * 2010-06-23 2013-10-22 International Business Machines Corporation Management of multipurpose command queues in a multilevel cache hierarchy
US8447905B2 (en) * 2010-06-23 2013-05-21 International Business Machines Corporation Dynamic multi-level cache including resource access fairness scheme
US20120297009A1 (en) * 2011-05-18 2012-11-22 Oversi Networks Ltd. Method and system for caching in mobile RAN
US9372810B2 (en) * 2012-04-27 2016-06-21 Hewlett Packard Enterprise Development Lp Collaborative caching
US9003125B2 (en) * 2012-06-14 2015-04-07 International Business Machines Corporation Cache coherency protocol for allowing parallel data fetches and eviction to the same addressable index
US20140281232A1 (en) * 2013-03-14 2014-09-18 Hagersten Optimization AB System and Method for Capturing Behaviour Information from a Program and Inserting Software Prefetch Instructions
JP6131170B2 (ja) * 2013-10-29 2017-05-17 Hitachi, Ltd. Computer system and data placement control method
CN103716254A (zh) * 2013-12-27 2014-04-09 Institute of Acoustics, Chinese Academy of Sciences Self-aggregating cooperative caching method in content-centric networking
US20150378900A1 (en) * 2014-06-27 2015-12-31 International Business Machines Corporation Co-processor memory accesses in a transactional memory
US9740614B2 (en) * 2014-06-27 2017-08-22 International Business Machines Corporation Processor directly storing address range of co-processor memory accesses in a transactional memory where co-processor supplements functions of the processor
US10067960B2 (en) * 2015-06-04 2018-09-04 Microsoft Technology Licensing, Llc Controlling atomic updates of indexes using hardware transactional memory
US20170091117A1 (en) * 2015-09-25 2017-03-30 Qualcomm Incorporated Method and apparatus for cache line deduplication via data matching
EP3206348B1 (en) * 2016-02-15 2019-07-31 Tata Consultancy Services Limited Method and system for co-operative on-path and off-path caching policy for information centric networks
US10691613B1 (en) * 2016-09-27 2020-06-23 EMC IP Holding Company LLC Caching algorithms for multiple caches
US10282299B2 (en) * 2017-06-23 2019-05-07 Cavium, Llc Managing cache partitions based on cache usage information
CN110035092A (zh) * 2018-01-11 2019-07-19 Institute of Acoustics, Chinese Academy of Sciences LCD-based implicit caching strategy in an ICN network
CN109450820B (zh) * 2018-11-09 2020-07-07 Zhejiang University Service-network-oriented service switch and service network system
CN109450795B (zh) * 2018-11-09 2020-08-11 Zhejiang University Service-network-oriented service router and service network system
US11470176B2 (en) * 2019-01-29 2022-10-11 Cisco Technology, Inc. Efficient and flexible load-balancing for clusters of caches under latency constraint
CN110222251B (zh) * 2019-05-27 2022-04-01 Zhejiang University Service wrapping method based on web page segmentation and search algorithms
US11030115B2 (en) * 2019-06-27 2021-06-08 Lenovo Enterprise Solutions (Singapore) Pte. Ltd Dataless cache entry
US11010210B2 (en) * 2019-07-31 2021-05-18 International Business Machines Corporation Controller address contention assumption
US11080195B2 (en) * 2019-09-10 2021-08-03 Marvell Asia Pte, Ltd. Method of cache prefetching that increases the hit rate of a next faster cache
US11449397B2 (en) * 2019-09-11 2022-09-20 International Business Machines Corporation Cache array macro micro-masking
CN113138851B (zh) * 2020-01-16 2023-07-14 Huawei Technologies Co., Ltd. Data management method, related apparatus, and system
KR20210097345A (ko) * 2020-01-30 2021-08-09 Samsung Electronics Co., Ltd. Cache memory device, system including the same, and method of operating the cache memory device
US11037269B1 (en) * 2020-03-27 2021-06-15 Intel Corporation High-speed resume for GPU applications
US11294829B2 (en) * 2020-04-21 2022-04-05 International Business Machines Corporation Cache replacement with no additional memory space
CN111782612B (zh) * 2020-05-14 2022-07-26 Beihang University Edge caching method for file data in a cross-domain virtual data space
US20220269615A1 (en) * 2021-02-22 2022-08-25 Microsoft Technology Licensing, Llc Cache-based trace logging using tags in system memory
US20220283296A1 (en) * 2021-03-03 2022-09-08 Qualcomm Incorporated Facial recognition using radio frequency sensing
US20220291955A1 (en) * 2021-03-09 2022-09-15 Intel Corporation Asynchronous input dependency resolution mechanism
KR102474288B1 (ko) * 2021-04-01 2022-12-05 Seoul National University R&DB Foundation Phase-change memory module that mitigates the write disturbance problem
US11836080B2 (en) * 2021-05-07 2023-12-05 Ventana Micro Systems Inc. Physical address proxy (PAP) residency determination for reduction of PAP reuse
US20220358048A1 (en) * 2021-05-07 2022-11-10 Ventana Micro Systems Inc. Virtually-indexed cache coherency using physical address proxies
US11860794B2 (en) * 2021-05-07 2024-01-02 Ventana Micro Systems Inc. Generational physical address proxies
US11868263B2 (en) * 2021-05-07 2024-01-09 Ventana Micro Systems Inc. Using physical address proxies to handle synonyms when writing store data to a virtually-indexed cache
US11487672B1 (en) * 2021-08-20 2022-11-01 International Business Machines Corporation Multiple copy scoping bits for cache memory


Also Published As

Publication number Publication date
US20220407940A1 (en) 2022-12-22
US11743359B2 (en) 2023-08-29
CN112565437B (zh) 2021-11-19
CN112565437A (zh) 2021-03-26

Similar Documents

Publication Publication Date Title
US11431791B2 (en) Content delivery method, virtual server management method, cloud platform, and system
CN111966284B (zh) Elastic energy-saving and efficient lookup system and method for large-scale OpenFlow flow tables
CN105022700A (zh) Named data networking cache management system and method based on cache space partitioning and content similarity
CN108900570B (zh) Content-value-based cache replacement method
CN110808910A (zh) QoS-supporting energy-saving OpenFlow flow table storage architecture and application thereof
WO2022121124A1 (zh) Service caching method for cross-border service networks
Liu et al. Efficient FIB caching using minimal non-overlapping prefixes
CN103236989B (zh) Cache control method, device, and system in a content delivery network
CN108900599B (zh) Software-defined content-centric network device and clustering-based cache decision method thereof
CN108965479B (zh) Domain cooperative caching method and device based on content-centric networking
CN109450795A (zh) Service-network-oriented service router and service network system
WO2017117942A1 (zh) Information query method and system for multi-layer SDN controllers
CN106790469A (zh) Cache control method, device, and system
Li et al. Taming the wildcards: Towards dependency-free rule caching with freecache
CN106790705A (zh) Implementation system and method for local caching in distributed applications
CN103905539A (zh) Optimal cache placement method based on content popularity in content-centric networking
Miao et al. Multi-level plru cache algorithm for content delivery networks
Li et al. SDN flow entry adaptive timeout mechanism based on resource preference
Yufei et al. A centralized control caching strategy based on popularity and betweenness centrality in ccn
Ugwuanyi et al. A novel predictive-collaborative-replacement (PCR) intelligent caching scheme for multi-access edge computing
Amemiya et al. Layer-integrated edge distributed data store for real-time and stateful services
CN105657054A (zh) Content-centric network caching method based on the k-means algorithm
CN115134300A (zh) Rule cache management method for switching devices in software-defined networks
CN109726146B (zh) Scalable caching method with a customizable eviction policy based on block height
Li et al. Content caching strategy for edge and cloud cooperation computing

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21901841

Country of ref document: EP

Kind code of ref document: A1