CN111797341B - Programmable switch-based in-network caching method - Google Patents

Programmable switch-based in-network caching method

Info

Publication number
CN111797341B
Authority
CN
China
Prior art keywords
content
cache
data packet
request data
content request
Prior art date
Legal status
Active
Application number
CN202010572744.6A
Other languages
Chinese (zh)
Other versions
CN111797341A (en)
Inventor
王雄 (Wang Xiong)
周坪 (Zhou Ping)
任婧 (Ren Jing)
徐世中 (Xu Shizhong)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010572744.6A priority Critical patent/CN111797341B/en
Publication of CN111797341A publication Critical patent/CN111797341A/en
Application granted granted Critical
Publication of CN111797341B publication Critical patent/CN111797341B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/50Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses an in-network caching method based on a programmable switch. Part of the nodes in a network are selected as content cache nodes, each consisting of a cache server and a programmable switch. Requests for content cached in the cache server are answered by the cache server and need not be transmitted to the content providing server. Because the invention caches hot content inside the network, the response delay of users' hot-content requests is reduced; because hot-content requests are answered at the content cache nodes, the number of requests the content providing server must process decreases, lowering its load; and because most content-request traffic is answered inside the network, network traffic is reduced.

Description

Programmable switch-based in-network caching method
Technical Field
The invention belongs to the technical field of content caching, and particularly relates to an in-network caching method based on a programmable switch.
Background
To meet users' demand for efficient network services and to reduce the server load of network content service providers, content service providers generally use content caching technology to offload user requests to content caching nodes that are closer to the users.
Because the hardware resources of a content caching node are limited, it cannot cache all content. Hot content typically receives far more requests than other content, which is the premise on which content caching is effective: by caching only part of the content, namely the hot content, a content caching node can still substantially reduce request response delay and server load.
Existing content caching schemes fall mainly into two categories, cache-server caching and in-network caching, whose most classical representatives are the CDN and the CCN respectively. Both still have drawbacks. A CDN (Content Delivery Network) caches content by deploying cache servers at the network edge, without caching content inside the network; in addition, it requires a complex DNS resolution mechanism and consumes considerable hardware resources. A CCN (Content-Centric Network) uses routers as in-network caches, but it currently lacks mature hardware support and its deployment requires replacing the existing network architecture, so bringing it into production would take a long time.
Disclosure of Invention
The invention aims to overcome the defects of the existing content caching scheme and provide an in-network caching method based on a programmable switch, so as to reduce the response delay of content requests, reduce the load of servers and reduce the network flow.
In order to achieve the above object, the in-network caching method based on the programmable switch of the present invention is characterized by comprising the following steps:
(1) Hardware processing of networks
Selecting a part of nodes in a network to be tested as content cache nodes, wherein the content cache nodes are composed of a programmable switch and a cache server, the programmable switch is configured with a match-action table for identifying hot content (content with higher popularity in the network), and the cache server caches the hot content;
(2) User content acquisition
(2.1) the user packages the content name of the content required to be acquired into a content request data packet and sends the content request data packet to a specified content providing server;
(2.2) two cases are distinguished according to whether the content request data packet passes through a content cache node: if not, the content request data packet is sent to the content providing server and step (2.6) is executed; if so, step (2.3) is executed;
(2.3) when the content request data packet passes through the content cache node, the programmable switch in the content cache node analyzes the content request data packet and identifies the content name in the content request data packet;
(2.4) the programmable switch matches the content name of the content request data packet with the hot content name in the match-action table, if the matching is not successful, the corresponding content is not cached in the cache server, the content request data packet is sent to the content providing server, the step (2.6) is executed, if the matching is successful, the corresponding content is indicated to be the hot content and cached in the cache server, and the step (2.5) is executed;
(2.5) the programmable switch modifies the destination IP address of the content request data packet, forwards the content request data packet to a cache server in the content cache node, and executes the step (2.7);
(2.6) the content providing server receives the content request data packet and responds to the content request; the procedure ends after the response is complete;
and (2.7) the cache server receives the content request data packet and responds to the content request; the procedure ends after the response is complete.
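The per-packet logic of steps (2.2) to (2.5) can be sketched as a small simulation. This is an illustrative model only, not the switch's actual data-plane program; the class, the field names, and the cache server address are assumptions.

```python
# Sketch of the decision a content cache node makes for each content
# request packet (steps (2.3)-(2.5)). All names here are hypothetical.

CACHE_SERVER_IP = "10.0.0.100"  # assumed address of the co-located cache server

class ContentCacheNode:
    def __init__(self, hot_content_names):
        # match-action table: names of hot content cached at this node
        self.match_action_table = set(hot_content_names)

    def process(self, packet):
        """Return where the request is forwarded; may rewrite dst_ip."""
        cname = packet["cname"]                 # step (2.3): parse content name
        if cname in self.match_action_table:    # step (2.4): table match
            packet["dst_ip"] = CACHE_SERVER_IP  # step (2.5): rewrite dst IP
            return "cache_server"
        return "content_provider"               # miss: forward unchanged

node = ContentCacheNode({"video/1", "video/2"})
hit = {"cname": "video/1", "dst_ip": "192.0.2.1"}
miss = {"cname": "video/9", "dst_ip": "192.0.2.1"}
assert node.process(hit) == "cache_server" and hit["dst_ip"] == CACHE_SERVER_IP
assert node.process(miss) == "content_provider" and miss["dst_ip"] == "192.0.2.1"
```

On the real hardware this lookup is a single match-action table apply in the switch pipeline, which is why a hit adds no software processing on the request path.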
The above object of the invention is achieved as follows:
The in-network caching method based on a programmable switch of the invention builds on the in-network caching idea of CCN (Content-Centric Networking) and exploits the programmable switch's ability to flexibly parse packets. Part of the nodes in the network are selected as content cache nodes, each consisting of a cache server and a programmable switch: the cache server caches the content with higher popularity in the network (hot content), while the programmable switch acts as the device that identifies content request data packets (content requests for short) and, when a content request passes through a content cache node, judges whether the cache server in that node has cached the requested (hot) content. Requests for content cached in the cache server are answered by the cache server and need not be transmitted to the content providing server. Because the invention caches hot content inside the network, the response delay of users' hot-content requests is reduced; because hot-content requests are answered at the content cache nodes, the number of requests the content providing server must process decreases, lowering its load; and because most content-request traffic is answered inside the network, network traffic is reduced.
Drawings
FIG. 1 is a flow chart of an embodiment of the in-network caching method based on a programmable switch according to the present invention;
FIG. 2 is a diagram illustrating an embodiment of the present invention;
FIG. 3 is a diagram of an embodiment of a content request packet structure according to the present invention;
FIG. 4 is a diagram of different numbers of content cache nodes selected in a GEANT topology, wherein (a) is one content cache node, (b) is two content cache nodes, (c) is three content cache nodes, and (d) is four content cache nodes;
fig. 5 is a diagram of different numbers of content cache nodes selected in the BICS topology, wherein (a) is one content cache node, (b) is two content cache nodes, (c) is three content cache nodes, (d) is four content cache nodes, (e) is five content cache nodes, and (f) is six content cache nodes;
FIG. 6 is a graph of content request response latency for different numbers of content cache nodes and different cache hit rates, where (a) is a GEANT topology and (b) is a BICS topology;
fig. 7 is a graph of the number of packets processed by the server with different numbers of content cache nodes and different cache hit rates, wherein (a) is the genant topology and (b) is the BICS topology.
Detailed Description
The following describes embodiments of the invention with reference to the accompanying drawings so that those skilled in the art may better understand the invention. It should be expressly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the invention.
Fig. 1 is a flow chart of an embodiment of the in-network caching method based on the programmable switch.
In this embodiment, as shown in fig. 1, the in-network caching method based on a programmable switch of the present invention includes the following steps:
step S1: hardware processing of networks
A part of nodes are selected from a network to be tested as content cache nodes, the content cache nodes are composed of a programmable switch and a cache server, the programmable switch is provided with a match-action table for identifying hot content (content with higher popularity in the network), and the cache server caches the hot content.
In this embodiment, the network architecture is as shown in fig. 2: users 1 and 2 access the content providing server through the content caching node, which consists of a programmable switch and a cache server.
Step S2: user content acquisition
Step S2.1: and the user packages the content name of the content required to be acquired into a content request data packet and sends the content request data packet to the specified content providing server.
In this embodiment, the user puts the content name of the content to be obtained into the cname field of the content request packet (whose format is shown in fig. 3), sets the type field to 0 to indicate that the packet is a request packet, and sends the packet to the specified content providing server.
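As a sketch, the request packet of fig. 3 can be modeled with a fixed-width encoding: a 1-byte type field (0 = request, 1 = content) followed by a NUL-padded cname field. The field widths here are assumptions; the patent shows the fields but not their exact on-wire sizes.

```python
import struct

CNAME_LEN = 32  # assumed width of the cname field

def build_request(cname: str) -> bytes:
    # type = 0 marks a request packet; struct pads cname with NUL bytes
    return struct.pack(f"!B{CNAME_LEN}s", 0, cname.encode())

def parse_packet(pkt: bytes):
    ptype, raw = struct.unpack(f"!B{CNAME_LEN}s", pkt)
    return ptype, raw.rstrip(b"\x00").decode()

pkt = build_request("video/1")
assert len(pkt) == 1 + CNAME_LEN
assert parse_packet(pkt) == (0, "video/1")
```

A fixed-width cname is what makes the name cheap to match in a switch pipeline, since match-action tables operate on fixed-length header fields.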
Step S2.2: two cases are distinguished according to whether the content request data packet passes through a content cache node: if not, the content request packet is sent to the content providing server and step S2.6 is performed; if so, step S2.3 is performed.
Step S2.3: when the content request data packet passes through the content cache node, the programmable switch in the content cache node analyzes the content request data packet and identifies the content name in the content request data packet.
In this embodiment, as shown in fig. 2, the content request packets sent by the users 1 and 2 pass through the content cache node, and at this time, the programmable switch in the content cache node parses the content request packet to identify the content name in the content request packet.
Step S2.4: the programmable switch matches the content name of the content request data packet with the hot content name in the match-action table, if the matching is not successful, the corresponding content is not cached in the cache server, the content request data packet is sent to the content providing server, the step S2.6 is executed, if the matching is successful, the corresponding content is indicated to be the hot content, the hot content is cached in the cache server, and the step S2.5 is executed.
In this embodiment, as shown in fig. 2, the content name of the content request packet sent by the user 1 is successfully matched with the hotspot content name in the match-action table; however, the content name of the content request packet sent by the user 2 is not successfully matched with the hotspot content name in the match-action table, and at this time, the content request packet is sent to the content providing server.
Step S2.5: the programmable switch modifies the destination IP address of the content request packet, forwards the content request packet to the cache server in the content cache node, and performs step S2.7.
Step S2.6: and the content providing server receives the content request data packet, responds to the content request and finishes responding after the response is finished.
The content providing server receives the content request packet sent by user 2, responds to the content request by encapsulating the content into the content field of a data packet, and sends the content data packet to the user; the type field of the content data packet is set to 1, indicating that the packet is a content packet.
Step S2.7: and the cache server receives the content request data packet, responds to the content request and finishes responding after the response is finished.
The cache server receives the content request data packet sent by user 1, responds to the content request by encapsulating the content into the content field of a data packet, and sends the content data packet to the user; the type field of the content data packet is set to 1, indicating that the packet is a content packet.
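The response path of steps S2.6 and S2.7 can be sketched as follows; the content store and function names are hypothetical, and the dictionary stands in for the actual packet fields of fig. 3.

```python
# Hypothetical content store of a cache server (the content providing
# server would answer the same way, but holds the full catalog).
CONTENT_STORE = {"video/1": b"hot content bytes"}

def respond(request):
    """Answer a request packet (type 0) with a content packet (type 1)."""
    assert request["type"] == 0
    cname = request["cname"]
    # On a cache hit the name was already matched by the switch, so the
    # content is guaranteed to be present in the store.
    return {"type": 1, "cname": cname, "content": CONTENT_STORE[cname]}

resp = respond({"type": 0, "cname": "video/1"})
assert resp["type"] == 1 and resp["content"] == b"hot content bytes"
```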
Example of the invention
10000 content requests are generated for each user according to a Zipf distribution, and each user sends its 10000 requests to acquire content. Different numbers of cache nodes are placed in each network topology; the placements are shown in fig. 4 and fig. 5, where circles represent switch nodes, the numbers in the circles are switch numbers, and both users and content providing servers are represented by squares. Fig. 4 shows the hardware processing results of placing 1 to 4 cache nodes in the GEANT topology, as shown in fig. 4 (a) to (d); there are three users, P1, P2 and P3, and four content providing servers, S1, S2, S3 and S4. Fig. 5 shows the hardware processing results of placing 1 to 6 cache nodes in the BICS topology, as shown in fig. 5 (a) to (f); there are three users, P1, P2 and P3, and six content providing servers, S1 to S6. With different numbers of cache nodes and different amounts of hot content cached at the cache servers, the sum of the response delays of all user requests and the number of content requests each content providing server must process are measured.
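The Zipf-distributed request workload described above can be reproduced with a short sketch. The Zipf exponent, the catalog size, and the seed are assumptions, since the patent does not state them.

```python
import random

def zipf_requests(n_requests, n_contents, s=1.0, seed=0):
    """Draw n_requests content indices with Zipf-like popularity (rank^-s)."""
    rng = random.Random(seed)
    weights = [1.0 / rank ** s for rank in range(1, n_contents + 1)]
    return rng.choices(range(n_contents), weights=weights, k=n_requests)

# 10000 requests per user, as in the example; 1000 contents is an assumption.
reqs = zipf_requests(10000, 1000)
assert len(reqs) == 10000
# The skew is what makes caching work: the hottest content (rank 0)
# is requested far more often than the coldest (rank 999).
assert reqs.count(0) > reqs.count(999)
```

The heavier this skew, the higher the hit rate achievable from caching only a small number of hot names in the match-action table.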
Fig. 6 is a graph of content request response latency for different numbers of content cache nodes and different cache hit rates, where (a) is the GEANT topology and (b) is the BICS topology. The cache hit rate is the probability that a request packet passing through a cache node can be processed by that node. As fig. 6 shows, latency is reduced by up to 51% in the GEANT topology and by up to 58% in the BICS topology, and latency decreases as the number of cache nodes in the topology increases. The cache hit rate is also an important factor affecting latency: a higher hit rate means that more content request packets can be answered at nodes closer to the user. Latency decreases as the cache hit rate rises from 50% to 80%, but at 90% it increases slightly relative to 80%. The main reason is that achieving a higher hit rate requires storing more content at the content cache nodes, which increases the number of entries in the match-action table and therefore the matching time on the programmable switch.
Fig. 7 is a graph of the number of packets processed by the server for different numbers of content cache nodes and different cache hit rates, where (a) is the GEANT topology and (b) is the BICS topology. As fig. 7 shows, the number of packets processed by the server decreases as the number of content cache nodes and the cache hit rate increase.
This example shows that placing cache nodes in the topology increases the users' request response speed, reduces the load of the content providing servers, and also reduces the traffic in the network.
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand the invention, it should be understood that the invention is not limited to the scope of those embodiments. To those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined by the appended claims, and all inventions that make use of the inventive concept are protected.

Claims (1)

1. An in-network caching method based on a programmable switch is characterized by comprising the following steps:
(1) Hardware processing of networks
Selecting a part of nodes in a network to be tested as content cache nodes, wherein the content cache nodes are composed of a programmable switch and a cache server, the programmable switch is provided with a match-action table for identifying hot content, and the cache server caches the hot content;
(2) User's content acquisition
(2.1) the user packages the content name of the content required to be acquired into a content request data packet and sends the content request data packet to a specified content providing server;
(2.2) two cases are distinguished according to whether the content request data packet passes through a content cache node: if not, the content request data packet is sent to the content providing server and step (2.6) is executed; if so, step (2.3) is executed;
(2.3) when the content request data packet passes through the content cache node, the programmable switch in the content cache node analyzes the content request data packet and identifies the content name in the content request data packet;
(2.4) the programmable switch matches the content name of the content request data packet with the hot content name in the match-action table, if the matching is not successful, the corresponding content is not cached in the cache server, the content request data packet is sent to the content providing server, the step (2.6) is executed, if the matching is successful, the corresponding content is indicated to be the hot content and cached in the cache server, and the step (2.5) is executed;
(2.5) the programmable switch modifies the destination IP address of the content request data packet, forwards the content request data packet to a cache server in the content cache node, and executes the step (2.7);
(2.6) the content providing server receives the content request data packet and responds to the content request; the procedure ends after the response is complete;
and (2.7) the cache server receives the content request data packet and responds to the content request; the procedure ends after the response is complete.
CN202010572744.6A 2020-06-22 2020-06-22 Programmable switch-based in-network caching method Active CN111797341B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010572744.6A CN111797341B (en) 2020-06-22 2020-06-22 Programmable switch-based in-network caching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010572744.6A CN111797341B (en) 2020-06-22 2020-06-22 Programmable switch-based in-network caching method

Publications (2)

Publication Number Publication Date
CN111797341A CN111797341A (en) 2020-10-20
CN111797341B (en) 2023-04-18

Family

ID=72803658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010572744.6A Active CN111797341B (en) 2020-06-22 2020-06-22 Programmable switch-based in-network caching method

Country Status (1)

Country Link
CN (1) CN111797341B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114844846A (en) * 2022-04-14 2022-08-02 南京大学 Multi-level cache distributed key value storage system based on programmable switch

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2003256276A (en) * 2002-02-27 2003-09-10 Nec Corp Switch device with incorporated cache with inter-switch data transfer function, and control method
CN102523165A (en) * 2011-12-23 2012-06-27 中山大学 Programmable switchboard system applicable to future internet
CN109905480A (en) * 2019-03-04 2019-06-18 陕西师范大学 Probability cache contents laying method based on content center
CN110290092A (en) * 2018-03-19 2019-09-27 中国科学院沈阳自动化研究所 A kind of SDN network configuring management method based on programmable switch
CN115022283A (en) * 2022-05-24 2022-09-06 中国科学院计算技术研究所 Programmable switch supporting domain name resolution and network message processing method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9436501B2 (en) * 2014-08-26 2016-09-06 International Business Machines Corporation Thread-based cache content saving for task switching

Non-Patent Citations (2)

Title
Zhou Ping. Research on in-network computing technology and its applications. China Master's Theses Full-text Database, Information Science and Technology. 2020, I139-13. *
Chen Chen. Design and implementation of an SDN-based ICN network and its caching strategy. China Master's Theses Full-text Database, Information Science and Technology. 2019, I139-64. *

Also Published As

Publication number Publication date
CN111797341A (en) 2020-10-20

Similar Documents

Publication Publication Date Title
US11336614B2 (en) Content node network address selection for content delivery
US6850980B1 (en) Content routing service protocol
US9705799B2 (en) Server-side load balancing using parent-child link aggregation groups
US6928485B1 (en) Method for network-aware clustering of clients in a network
US8694675B2 (en) Generalized dual-mode data forwarding plane for information-centric network
US10581797B2 (en) Hybrid access DNS optimization for multi-source download
CN109040243B (en) Message processing method and device
EP2475132A1 (en) Name-to-address mapping system, data transmission method and name-to-address mapping maintenance method
US9602378B2 (en) Route decision method, content delivery apparatus, and content delivery network interconnection system
CN101656765A (en) Address mapping system and data transmission method of identifier/locator separation network
WO2013029569A1 (en) A Generalized Dual-Mode Data Forwarding Plane for Information-Centric Network
Pitkänen et al. Opportunistic web access via wlan hotspots
WO2011116726A2 (en) Method and system for network caching, domain name system redirection sub-system thereof
CN105357281B (en) A kind of Mobile Access Network distributed content cache access control method and system
CN109743414B (en) Method for improving address translation availability using redundant connections and computer readable storage medium
CN108769252B (en) ICN network pre-caching method based on request content relevance
CN111797341B (en) Programmable switch-based in-network caching method
EP1324546A1 (en) Dynamic content delivery method and network
WO2017071591A1 (en) Icn based distributed resource directory for iot resource discovery and routing
CN112087382A (en) Service routing method and device
CN110958186A (en) Network equipment data processing method and system
CN111917658B (en) Privacy protection cooperative caching method based on grouping under named data network
WO2023097856A1 (en) Popularity-based channel-associated collaborative caching method in icn
Azgin et al. H 2 N4: Packet forwarding on hierarchical hash-based names for content centric networks
CN113660162A (en) Semi-centralized routing method and system for sensing adjacent cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant