CN108900509B - Copy selector based on programmable network equipment - Google Patents

Copy selector based on programmable network equipment

Info

Publication number
CN108900509B
CN108900509B CN201810700159.2A
Authority
CN
China
Prior art keywords
data packet
netrs
packet
request
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810700159.2A
Other languages
Chinese (zh)
Other versions
CN108900509A (en)
Inventor
冯丹
苏毅
华宇
施展
曹孟媛
朱挺炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201810700159.2A
Publication of CN108900509A
Application granted
Publication of CN108900509B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/06 Notations for structuring of protocol data, e.g. abstract syntax notation one [ASN.1]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a replica selector based on programmable network devices, serving a distributed key-value storage system in a data center, where the data center uses a multi-layer network topology based on a tree structure. The hardware of the replica selector comprises a programmable switch and network accelerators, with one or more network accelerators directly connected to one programmable switch to form the replica selector. The software of the replica selector comprises forwarding rules and an executor: the forwarding rules run on the programmable switch and are responsible for forwarding data packets, and the executor runs on the network accelerator and is responsible for selecting a replica for each distributed key-value storage request. The invention exploits the programmability of programmable network devices to move the replica selection task from the end hosts to the network devices, thereby reducing the probability of the "herd effect"; it also designs separate packet formats for NetRS requests and NetRS responses to reduce the extra network overhead introduced by the NetRS protocol.

Description

Copy selector based on programmable network equipment
Technical Field
The invention belongs to the field of distributed key-value storage, and particularly relates to a replica selector based on programmable network devices.
Background
Distributed key-value stores are a core component of modern Web applications. Given the interactivity of these applications, reducing the response latency of key-value stores is important: a single end-user request can trigger hundreds or thousands of memory accesses, so the high tail latency of the key-value store alone can significantly affect user-perceived latency.
To achieve high availability and high reliability, distributed key-value stores typically keep copies (replicas) of data on multiple servers. A read request can obtain the data from any replica. However, because servers often exhibit performance fluctuations (especially in multi-tenant cloud environments with shared resources), replica selection directly affects the response latency of read requests. Since key-value workloads are predominantly reads, the replica selection scheme plays a very important role in reducing response latency.
In conventional replica selection schemes, the replica selection task is typically performed by the clients (hosts) of the distributed key-value store. However, when clients act as replica selection nodes (RSNodes), the large number of clients in the system makes the "herd effect" likely to occur while the replica selection algorithm runs, reducing the algorithm's effectiveness. The "herd effect" refers to the situation in which multiple RSNodes select the same replica at the same time, degrading that replica's performance.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to solve the technical problems of low execution efficiency in existing replica selection schemes and high response latency in distributed key-value storage systems.
To achieve the above object, the present invention provides a replica selector based on programmable network devices, the replica selector serving a distributed key-value storage system located in a data center, where the data center uses a tree-structure-based multi-layer network topology;
the hardware of the replica selector comprises programmable switches and network accelerators, where one or more network accelerators are directly connected to one programmable switch to form the replica selector;
the software of the replica selector comprises forwarding rules and an executor, where the forwarding rules run on the programmable switch and are responsible for forwarding data packets, and the executor runs on the network accelerator and is responsible for selecting a replica for each distributed key-value storage request.
Specifically, the core switches, aggregation switches, and top-of-rack switches in the data center are all programmable switches.
Specifically, the data packets are divided into NetRS data packets and non-NetRS data packets, and the NetRS data packets are divided into NetRS request data packets and NetRS response data packets.
Specifically, the programmable switch applies the forwarding rules only to NetRS data packets; non-NetRS data packets are forwarded directly to their destination IP.
Specifically, for a NetRS request packet, the programmable switch determines the next-hop node of the request packet according to the forwarding rules, where the possible next-hop nodes include a network accelerator directly connected to the programmable switch, a programmable switch toward the replica selection node, and a programmable switch toward the target replica server; for a NetRS response packet, the programmable switch determines the next-hop node of the response packet according to the forwarding rules and whether the response packet needs to be cloned to a network accelerator.
Specifically, the format of the NetRS request data packet is: RID + MF + RV + RGID + application payload, where RID + MF + RV + RGID is the header part and the application payload is the data part. The fields are described as follows:
RID (replica selection node identifier): for a request packet, the RID stores the ID of the replica selector where the replica selection node is located, the replica selection node being the node that performs the replica selection operation for the request;
MF (special field): a label of the packet type;
RV (reserved value): for a request packet, the reserved value is set by its RSNode;
RGID (replica group identifier): corresponds to the set of replicas to which the request relates;
application payload: the content of the distributed key-value storage request.
The format of the NetRS response data packet is: RID + MF + RV + SSL + SS + application payload, where RID + MF + RV + SSL + SS is the header part and the application payload is the data part. The fields are described as follows:
RID (replica selection node identifier): for a response packet, the RID stores the ID of the replica selector where the replica selection node is located, the replica selection node being the node that performed the replica selection operation for the corresponding request;
MF (special field): a label of the packet type;
RV (reserved value): for a response packet, the reserved value is the reserved value in the request packet corresponding to the response;
SSL (server status length): the length of the server status piggybacked in the response packet;
SS (server status): the server status piggybacked in the response packet;
application payload: the content of the distributed key-value storage response.
Specifically, the forwarding rules are as follows: the programmable switch determines the packet type from the MF field of the data packet:
if the packet is a NetRS request packet, its traffic group ID is obtained at the top-of-rack switch and its RID is set according to the traffic group ID; the local ID stored in the programmable switch is compared with the RID field of the request packet, and if the RID field of the request packet differs from the local ID of the programmable switch, the programmable switch forwards the request packet to the next-hop switch toward the RSNode; otherwise, the programmable switch forwards the request packet to a network accelerator running the executor, and the network accelerator converts the request packet into a non-NetRS packet;
if the packet is a NetRS response packet, the local ID stored in the programmable switch is compared with the RID field of the response packet, and if the RID field of the response packet differs from the local ID of the programmable switch, the programmable switch forwards the response packet to the next-hop switch toward the RSNode; otherwise, the programmable switch clones the response packet to the network accelerator, then modifies the MF field of the response packet to mark it as a non-NetRS packet;
and if the packet is a non-NetRS packet, it is sent through the processing pipeline to an egress port of the programmable switch.
Specifically, the executor operates as follows:
for a NetRS request packet, the executor first extracts the replica group identifier RGID from the request packet; the executor then looks up its local database using the RGID to determine candidate replica servers, selects a target replica server from the candidate replica servers, and finally sends the reconstructed packet to the switch;
and for a NetRS response packet, the executor updates its local state information according to the SS field of the response packet and then discards the response packet.
Specifically, when the executor reconstructs a packet, it sets the special field MF of the packet to f(M_resp), where f is a reversible function that must satisfy both f(M_resp) ≠ M_req and f(M_resp) ≠ M_resp, M_req being the constant value used in the programmable switch to determine whether a packet is a NetRS request packet, and M_resp being the constant value used in the programmable switch to determine whether a packet is a NetRS response packet.
Generally, compared with the prior art, the above technical solution conceived by the present invention has the following beneficial effects:
(1) The invention uses the programmability of programmable network devices to move the replica selection task from the end hosts (the clients of the distributed key-value store) to the network devices. Because there are fewer network devices than clients, the probability of the "herd effect" occurring during replica selection is reduced, which improves the execution efficiency of the replica selection algorithm and reduces the response latency of the distributed key-value storage system.
(2) The invention designs separate packet formats for the NetRS request and the NetRS response, and specifies the forwarding rules and the executor workflow; it can support different replica selection algorithms, making NetRS more broadly applicable.
Drawings
FIG. 1 is a schematic diagram of a modern data center multi-layer tree network topology provided by the present invention.
Fig. 2 is a schematic diagram of a UDP packet format provided in the present invention.
Fig. 3 is a flow chart of the programmable switch packet processing based on forwarding rules according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 is a schematic diagram of a modern data center multi-layer tree network topology provided by the present invention. As shown in fig. 1, the data center uses a tree-structure-based multi-layer network topology. The end hosts are organized into racks, each containing approximately 20 to 40 end hosts. The end hosts in a rack are connected to a ToR (Top-of-Rack) switch at the top of the rack. One ToR switch is connected to multiple aggregation switches (Ag) to achieve higher reliability and performance. One aggregation switch is connected to multiple core switches (Core). The redundant aggregation and core switches establish multiple network paths between any two end hosts that are not in the same rack. In addition, there is a centralized SDN controller C, connected to all switches via low-speed links.
The hardware of NetRS (Network-based Replica Selector) comprises programmable switches and network accelerators; one programmable switch is directly connected to one or more network accelerators. A programmable switch parses application-specific packet headers, matches custom fields in the header, and performs the corresponding operations. The core switches (Core), aggregation switches (Ag), and top-of-rack switches (ToR) are all programmable switches.
The software composition of NetRS includes forwarding rules and executors. NetRS relies on forwarding rules to forward packets to the correct location.
To reduce the latency overhead of read requests, the invention uses a stateless network protocol (such as UDP) for the NetRS packet format. Meanwhile, to reduce the bandwidth overhead of the NetRS protocol, the invention designs separate packet formats for requests and responses. Fig. 2 is a schematic diagram of the UDP packet format provided by the present invention. A NetRS packet is encapsulated in a UDP packet as the data portion of the UDP packet. NetRS packets comprise NetRS request packets and NetRS response packets.
As shown in fig. 2, the NetRS request packet contains the following fields: RID + MF + RV + RGID + application payload, where RID + MF + RV + RGID is the header part of the NetRS request packet and the application payload is its data part. The fields are described as follows:
RID (replica selection node identifier): [2 bytes] For a request packet, the RID stores the ID of the NetRS where the replica selection node is located; the replica selection node is the node that performs the replica selection operation for the request.
MF (special field): [6 bytes] A label used by the programmable switch to determine the packet type.
RV (reserved value): [2 bytes] For a request packet, the reserved value is set by its RSNode. The RSNode can use the RV field to collect input information needed by the replica selection algorithm. For example, suppose the algorithm requires the response latency of each request as input. The RSNode can set the RV of a request packet to a timestamp when sending the request; when the response reaches the RSNode, the RSNode subtracts the RV value carried in the response packet from the timestamp at which the response was received, yielding the request's response latency.
RGID (replica group identifier): [3 bytes] Corresponds to the set of replicas to which the request relates. The executor can query its local database with the replica group identifier to obtain the set of all candidate replica servers.
Application payload: [variable length] The content of the distributed key-value storage request.
As shown in fig. 2, the NetRS response packet contains the following fields: RID + MF + RV + SSL + SS + application payload, where RID + MF + RV + SSL + SS is the header part of the NetRS response packet and the application payload is its data part. The fields are described as follows:
RID (replica selection node identifier): [2 bytes] For a response packet, the RID stores the ID of the NetRS where the replica selection node is located; the replica selection node is the node that performed the replica selection operation for the corresponding request.
MF (special field): [6 bytes] A label used by the programmable switch to determine the packet type.
RV (reserved value): [2 bytes] For a response packet, the reserved value is the reserved value from the request packet corresponding to the response.
SSL (server status length): [2 bytes] The length of the server status piggybacked in the response packet.
SS (server status): [variable length] The server status piggybacked in the response packet.
Application payload: [variable length] The content of the distributed key-value storage response.
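As an illustration only, the following Python sketch packs and unpacks the two layouts above using the listed field widths (RID 2 bytes, MF 6 bytes, RV 2 bytes, RGID 3 bytes, SSL 2 bytes). The byte order and helper names are assumptions of this sketch, not fixed by the patent; in deployment these packets travel as the data portion of a UDP datagram.

```python
import struct

# Field widths from the lists above: RID 2 B, MF 6 B, RV 2 B, RGID 3 B, SSL 2 B.
# Network (big-endian) byte order is an assumption; the patent does not fix one.

def pack_request(rid: int, mf: bytes, rv: int, rgid: int, payload: bytes) -> bytes:
    """Build a NetRS request: RID + MF + RV + RGID + application payload."""
    assert len(mf) == 6, "MF is a 6-byte packet-type label"
    return (struct.pack("!H", rid) + mf + struct.pack("!H", rv)
            + rgid.to_bytes(3, "big") + payload)

def unpack_request(pkt: bytes):
    """Split a NetRS request back into its header fields and payload."""
    rid, = struct.unpack("!H", pkt[0:2])
    mf = pkt[2:8]
    rv, = struct.unpack("!H", pkt[8:10])
    rgid = int.from_bytes(pkt[10:13], "big")
    return rid, mf, rv, rgid, pkt[13:]

def pack_response(rid: int, mf: bytes, rv: int, ss: bytes, payload: bytes) -> bytes:
    """Build a NetRS response: RID + MF + RV + SSL + SS + application payload.
    SSL is derived from the piggybacked server status SS."""
    assert len(mf) == 6
    return (struct.pack("!H", rid) + mf + struct.pack("!H", rv)
            + struct.pack("!H", len(ss)) + ss + payload)
```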
Each NetRS has a unique ID, which it stores in its local programmable switch. The RID field of a NetRS packet stores the ID of the NetRS where the packet's replica selection node is located: whichever NetRS performs replica selection for a packet, that packet's RID field is set to that NetRS's ID.
The replica selection node of a request is determined by the traffic group to which the request belongs, where a traffic group is the set of all requests sharing certain characteristics; for example, requests from the same host can be considered to belong to the same traffic group. To reduce the modifications required for a distributed key-value storage system to use NetRS, both traffic groups and replica selection nodes are transparent to the end hosts (both clients and servers). NetRS uses the ToR switch directly connected to the host to set the replica selection node identifier of each NetRS request packet. Compared with other types of switches, the ToR switch has additional forwarding rules for processing NetRS request packets: it matches the source IP of the packet to obtain a traffic group ID, and sets the replica selection node identifier according to that traffic group ID. A NetRS response packet does not need to obtain its replica selection node identifier from the ToR switch, because the server copies the replica selection node identifier from the corresponding request into the NetRS response packet.
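For illustration, a minimal sketch of the two-step ToR rule just described: match the packet's source IP to a traffic group, then set the RID from that group. The table contents, names, and IDs below are hypothetical; how the tables are populated (e.g. by the SDN controller) is left open by the patent.

```python
# Hypothetical tables; the patent specifies only the two-step mapping.
TRAFFIC_GROUP_BY_SRC_IP = {"10.0.1.5": 7, "10.0.1.6": 7}  # same-host requests, same group
RSNODE_ID_BY_GROUP = {7: 42}  # traffic group 7 selects replicas on NetRS 42

def tor_set_rid(src_ip: str) -> int:
    """Return the RID a ToR switch stamps into a NetRS request packet."""
    group = TRAFFIC_GROUP_BY_SRC_IP[src_ip]  # rule 1: source IP -> traffic group
    return RSNODE_ID_BY_GROUP[group]         # rule 2: traffic group -> RSNode ID
```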
The processing pipeline of the programmable switch includes two stages: ingress processing and egress processing. The forwarding rules reside in the ingress processing pipeline of the programmable switch. The programmable switch classifies packets into three types according to the value of the special field MF: non-NetRS packets, NetRS request packets, and NetRS response packets. The switch applies the forwarding rules only to NetRS packets and forwards non-NetRS packets directly to their destination IP. For a NetRS request packet, the switch determines the packet's next-hop node according to the forwarding rules, where the possible next-hop nodes include a network accelerator directly connected to the switch, a programmable switch toward the replica selection node, and a programmable switch toward the target replica server. For a NetRS response packet, the switch determines the packet's next-hop node according to the forwarding rules and whether the response packet needs to be cloned to a network accelerator.
Fig. 3 is a flow chart of forwarding-rule-based packet processing in the programmable switch according to the present invention. As shown in fig. 3, after a packet arrives at an ingress port of the programmable switch, the switch determines the packet type from its MF field:
If the packet is a NetRS request packet, its traffic group ID was obtained at the ToR switch and its RID was set according to that traffic group ID. The switch compares the local ID stored in the programmable switch with the replica selection node identifier field in the NetRS packet; if the identifier in the packet differs from the switch's local ID, the switch forwards the packet to the next-hop switch toward the RSNode; otherwise, the switch forwards the packet to a network accelerator running the executor, and the network accelerator converts the NetRS request packet into a non-NetRS packet.
If the packet is a non-NetRS packet, it is sent through the processing pipeline to an egress port of the programmable switch.
If the packet is a NetRS response packet, the switch compares the local ID stored in the programmable switch with the replica selection node identifier field in the NetRS packet; if the identifier in the packet differs from the switch's local ID, the switch forwards the packet to the next-hop switch toward the RSNode; otherwise, the programmable switch sends a clone of the packet to the network accelerator, modifies the special field of the packet to mark it as a non-NetRS packet, and finally pushes the modified packet back into the regular ingress processing pipeline.
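To make the three-way branch of fig. 3 concrete, here is a hedged Python sketch of the ingress decision. The MF constants, symbolic port names, and the dict-of-fields packet representation are assumptions of the sketch; on real hardware this logic would be expressed as match-action tables.

```python
M_REQ = b"\x00\x00\x00\x00\x00\x01"      # assumed 6-byte NetRS-request marker
M_RESP = b"\x00\x00\x00\x00\x00\x02"     # assumed 6-byte NetRS-response marker
NON_NETRS = b"\x00\x00\x00\x00\x00\x00"  # assumed non-NetRS marker

def ingress(local_id: int, pkt: dict) -> list:
    """Return (output, packet) pairs for one packet, mirroring fig. 3."""
    if pkt["mf"] == M_REQ:                       # NetRS request
        if pkt["rid"] != local_id:
            return [("toward_rsnode", pkt)]      # not this replica selector
        return [("local_accelerator", pkt)]      # executor converts it there
    if pkt["mf"] == M_RESP:                      # NetRS response
        if pkt["rid"] != local_id:
            return [("toward_rsnode", pkt)]
        clone = dict(pkt)                        # clone goes to the executor
        original = dict(pkt, mf=NON_NETRS)       # relabeled, re-enters ingress
        return [("local_accelerator", clone), ("ingress_pipeline", original)]
    return [("egress_to_dest_ip", pkt)]          # non-NetRS: normal forwarding
```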
Considering that a network accelerator typically has a multi-core low-power processor and several GB of memory, the executor runs on the network accelerator and is responsible for performing replica selection for distributed key-value storage requests and for maintaining the corresponding local state information. Its operations include ranking the replicas, updating the local state information used by the replica selection algorithm, and reconstructing the packet according to the selected replica.
The executor first extracts the NetRS packet from the UDP packet. For a NetRS request packet, the executor determines the packet's target replica server from its local state information: it first extracts the replica group identifier RGID from the NetRS packet, then looks up its local database using the RGID to determine the candidate replica servers and selects a target replica server among them; finally, the executor sends the reconstructed packet to the switch. For a NetRS response packet, the executor updates its local state information according to the information carried in the packet and then discards the packet.
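A condensed sketch of this executor loop, with in-memory stand-ins for the local database and state. The least-loaded policy and the SS encoding are assumptions of the sketch; the patent deliberately leaves the replica selection algorithm pluggable.

```python
import socket
import struct

replica_groups = {5: ["10.0.2.11", "10.0.2.12", "10.0.2.13"]}  # RGID -> candidates
server_load = {}                                               # IP -> reported load

def select_replica(rgid: int) -> str:
    """For a NetRS request: pick a target among the RGID's candidates
    (here: least reported load, one possible algorithm)."""
    return min(replica_groups[rgid], key=lambda ip: server_load.get(ip, 0))

def absorb_response(ss: bytes) -> None:
    """For a NetRS response: update local state from the piggybacked SS
    field, then drop the packet. The SS layout assumed here is an IPv4
    address followed by a 32-bit load counter."""
    ip_raw, load = struct.unpack("!4sI", ss[:8])
    server_load[socket.inet_ntoa(ip_raw)] = load
```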
When the executor reconstructs a packet, it sets the packet's special field MF to f(M_resp), where f is a reversible function that must satisfy both f(M_resp) ≠ M_req and f(M_resp) ≠ M_resp, M_req being the constant value used in the switch to determine whether a packet is a NetRS request packet, and M_resp being the constant value used in the switch to determine whether a packet is a NetRS response packet.
The server sets the special field of the NetRS response packet to f⁻¹(m), where m is the special-field value of the corresponding request. This mechanism ensures that the server marks a response packet as a NetRS response packet only if its corresponding request flowed through an executor.
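One concrete f that satisfies both constraints is XOR with a fixed key (an involution, so f⁻¹ = f): any K with K ≠ 0 and K ≠ M_req ⊕ M_resp works. The constants below are placeholders, not values fixed by the patent.

```python
M_REQ, M_RESP = 0x01, 0x02  # placeholder MF constants, treated as integers
K = 0x2A                    # K != 0 and K != M_REQ ^ M_RESP

def f(m: int) -> int:
    return m ^ K            # reversible: f(f(m)) == m

f_inverse = f               # XOR with a constant is its own inverse

assert f(M_RESP) != M_REQ and f(M_RESP) != M_RESP

# Executor: the rebuilt request carries MF = f(M_RESP).
request_mf = f(M_RESP)
# Server: echoes f_inverse of the request's MF into the response header, so a
# response is tagged M_RESP only if its request passed through an executor.
assert f_inverse(request_mf) == M_RESP
```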
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. A replica selector based on programmable network devices, the replica selector serving a distributed key-value storage system located in a data center, wherein the data center uses a tree-structure-based multi-layer network topology,
the hardware of the replica selector comprising programmable switches and network accelerators, wherein one or more network accelerators are directly connected to one programmable switch to form the replica selector;
the software of the replica selector comprising forwarding rules and an executor, wherein the forwarding rules run on the programmable switch and are responsible for forwarding data packets, and the executor runs on the network accelerator and is responsible for selecting a replica for each distributed key-value storage request;
the data packets being divided into NetRS data packets and non-NetRS data packets, and the NetRS data packets being divided into NetRS request data packets and NetRS response data packets, wherein the programmable switch applies the forwarding rules only to NetRS data packets and forwards non-NetRS data packets directly to their destination IP;
the format of the NetRS request data packet being: RID + MF + RV + RGID + application payload, wherein RID + MF + RV + RGID is the header part and the application payload is the data part, the fields being described as follows:
RID: a replica selection node identifier; for a request packet, the RID stores the ID of the replica selector where the replica selection node is located, the replica selection node being the node that performs the replica selection operation for the request;
MF: a special field; a label of the packet type;
RV: a reserved value; for a request packet, the reserved value is set by its RSNode;
RGID: a replica group identifier; corresponding to the set of replicas to which the request relates;
application payload: the content of the distributed key-value storage request;
the format of the NetRS response data packet being: RID + MF + RV + SSL + SS + application payload, wherein RID + MF + RV + SSL + SS is the header part and the application payload is the data part, the fields being described as follows:
RID: a replica selection node identifier; for a response packet, the RID stores the ID of the replica selector where the replica selection node is located, the replica selection node being the node that performed the replica selection operation for the corresponding request;
MF: a special field; a label of the packet type;
RV: a reserved value; for a response packet, the reserved value is the reserved value in the request packet corresponding to the response;
SSL: server status length; the length of the server status piggybacked in the response packet;
SS: server status; the server status piggybacked in the response packet;
application payload: the content of the distributed key-value storage response.
2. The replica selector of claim 1, wherein the core switches, aggregation switches, and top-of-rack switches in the data center are all programmable switches.
3. The replica selector of claim 1, wherein, for a NetRS request packet, the programmable switch determines the next-hop node of the request packet according to the forwarding rules, the possible next-hop nodes including a network accelerator directly connected to the programmable switch, a programmable switch toward the replica selection node, and a programmable switch toward the target replica server; and for a NetRS response packet, the programmable switch determines the next-hop node of the response packet according to the forwarding rules and whether the response packet needs to be cloned to a network accelerator.
4. The replica selector of claim 1, wherein the forwarding rules are as follows: the programmable switch determines the packet type from the MF field of the data packet:
if the packet is a NetRS request packet, its traffic group ID is obtained at the top-of-rack switch and its RID is set according to the traffic group ID; the local ID stored in the programmable switch is compared with the RID field of the request packet, and if the RID field of the request packet differs from the local ID of the programmable switch, the programmable switch forwards the request packet to the next-hop switch toward the RSNode; otherwise, the programmable switch forwards the request packet to a network accelerator running the executor, and the network accelerator converts the request packet into a non-NetRS packet;
if the packet is a NetRS response packet, the local ID stored in the programmable switch is compared with the RID field of the response packet, and if the RID field of the response packet differs from the local ID of the programmable switch, the programmable switch forwards the response packet to the next-hop switch toward the RSNode; otherwise, the programmable switch clones the response packet to the network accelerator, then modifies the MF field of the response packet to mark it as a non-NetRS packet;
and if the packet is a non-NetRS packet, it is sent through the processing pipeline to an egress port of the programmable switch.
5. The replica selector of claim 1, wherein the executor operates as follows:
for a NetRS request packet, the executor first extracts the replica group identifier RGID from the request packet; the executor then looks up its local database using the RGID to determine candidate replica servers, selects a target replica server from the candidate replica servers, and finally sends the reconstructed packet to the switch;
and for a NetRS response packet, the executor updates its local state information according to the SS field of the response packet and then discards the response packet.
6. The replica selector of claim 5, wherein, upon reconstructing a packet, the executor sets the special field MF of the packet to f(M_resp), where f is a reversible function that must satisfy both f(M_resp) ≠ M_req and f(M_resp) ≠ M_resp, M_req being the constant value used in the programmable switch to determine whether a packet is a NetRS request packet, and M_resp being the constant value used in the programmable switch to determine whether a packet is a NetRS response packet.
CN201810700159.2A 2018-06-29 2018-06-29 Copy selector based on programmable network equipment Active CN108900509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810700159.2A CN108900509B (en) 2018-06-29 2018-06-29 Copy selector based on programmable network equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810700159.2A CN108900509B (en) 2018-06-29 2018-06-29 Copy selector based on programmable network equipment

Publications (2)

Publication Number Publication Date
CN108900509A CN108900509A (en) 2018-11-27
CN108900509B true CN108900509B (en) 2020-06-02

Family

ID=64347311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810700159.2A Active CN108900509B (en) 2018-06-29 2018-06-29 Copy selector based on programmable network equipment

Country Status (1)

Country Link
CN (1) CN108900509B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115065632B (en) * 2022-03-31 2023-11-17 重庆金美通信有限责任公司 Lightweight tree network data forwarding method


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8904224B2 (en) * 2012-07-20 2014-12-02 International Business Machines Corporation Providing replication and fail-over as a network service in data centers

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103327116A * 2013-07-05 2013-09-25 山东大学 Dynamic replica storage method for network files
CN106209563A * 2016-08-07 2016-12-07 付宏伟 Cloud computing platform network virtualization implementation method with corresponding plug-in and agent
CN107241442A * 2017-07-28 2017-10-10 中南大学 Prediction-based replica selection method for key-value databases

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
IncBricks: Toward In-Network Computation with an In-Network Cache; Ming Liu et al.; ACM; 2017-04-30; Sections 1-8 *
NetCache: Balancing Key-Value Stores with Fast In-Network Caching; Xin Jin et al.; ACM; 2017-10-30; entire document *
Performance Analysis and Improvement of Replica Selection Algorithms for Key-Value Stores; Wanchun Jiang et al.; 2017 IEEE 10th International Conference on Cloud Computing; 2017-12-30; entire document *
High-Performance Distributed Storage System for Massive High-Definition Video Data (面向海量高清视频数据的高性能分布式存储系统); Cao Shunde (操顺德) et al.; Journal of Software (软件学报), Vol. 28, No. 8; 2017-08-30; entire document *

Also Published As

Publication number Publication date
CN108900509A (en) 2018-11-27

Similar Documents

Publication Publication Date Title
US10999184B2 (en) Health checking in a distributed load balancer
US10567506B2 (en) Data storage method, SDN controller, and distributed network storage system
US7979671B2 (en) Dual hash indexing system and methodology
US20180375928A1 (en) Distributed load balancer
CA2909686C (en) Asymmetric packet flow in a distributed load balancer
EP2987303B1 (en) Connection publishing in a distributed load balancer
US9559961B1 (en) Message bus for testing distributed load balancers
CN108023812B (en) Content distribution method and device of cloud computing system, computing node and system
CN106936662B Method, device, and system for realizing a heartbeat mechanism
US11637787B2 (en) Preventing duplication of packets in a network
US20050097300A1 (en) Processing system and method including a dedicated collective offload engine providing collective processing in a distributed computing environment
US10652142B2 (en) SDN-based ARP implementation method and apparatus
US20160216891A1 (en) Dynamic storage fabric
US10103988B2 (en) Switching device, controller, method for configuring switching device, and method and system for processing packet
US20210368006A1 (en) Request response method, device, and system applied to bit torrent system
Takruri et al. FLAIR: Accelerating reads with consistency-aware network routing
CN112367278A (en) Cloud gateway system based on programmable data switch and message processing method thereof
CN108900509B (en) Copy selector based on programmable network equipment
Wu et al. N-DISE: NDN-based data distribution for large-scale data-intensive science
CN112087382A (en) Service routing method and device
CN113268540A (en) Data synchronization method and device
WO2022267909A1 (en) Method for reading and writing data and related apparatus
US10608956B2 (en) Adaptive fabric multicast schemes
Chen et al. Pache: a packet management scheme of cache in data center networks
CN108900334B ILP model-based layout method for replica selection nodes on network equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant