WO2019127915A1 - Data reading method and apparatus implemented based on a distributed consistency protocol - Google Patents

Data reading method and apparatus implemented based on a distributed consistency protocol

Info

Publication number
WO2019127915A1
WO2019127915A1 (PCT/CN2018/079028)
Authority
WO
WIPO (PCT)
Prior art keywords
node
service node
primary service
data
nodes
Prior art date
Application number
PCT/CN2018/079028
Other languages
English (en)
French (fr)
Inventor
陈宗志
Original Assignee
北京奇虎科技有限公司 (Beijing Qihu Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京奇虎科技有限公司 (Beijing Qihu Technology Co., Ltd.)
Publication of WO2019127915A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/22: Arrangements for detecting or preventing errors in the information received, using redundant apparatus to increase reliability
    • H04L 41/0668: Management of faults, events, alarms or notifications using network fault recovery by dynamic selection of recovery network elements, e.g. replacement by the most appropriate element after failure
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 67/62: Establishing a time schedule for servicing the requests

Definitions

  • The present disclosure relates to the field of data processing technologies, and in particular to a data reading method, apparatus, computing device, and non-transitory computer readable storage medium implemented based on a distributed consistency protocol.
  • A complete distributed system is formed by many service nodes at different locations connected together through a network, and massive amounts of data are distributed among the different service nodes of the entire network system. All clients connected to the distributed system can access data from any one of the service nodes.
  • In the existing data reading method based on a distributed consistency protocol, every data read operation must follow the distributed consensus protocol (the Raft protocol): the service node that receives a data read request sends the request over the network to the other service nodes of the distributed system, and the data is returned to the client only after more than half of the service nodes in the distributed system have confirmed the data corresponding to the data read request; the operation must also be recorded in the service node's log. As a result, the overhead of reading data is too large, which degrades read performance.
  • The present disclosure has been made in order to provide a data reading method, apparatus, computing device, and non-transitory computer readable storage medium based on a distributed consistency protocol that overcome the above problems or at least partially solve them.
  • According to one aspect, a data reading method based on a distributed consistency protocol is provided, applied in a distributed system containing multiple nodes, the method comprising: selecting one node from the multiple nodes as the master node, setting the lease time of the master node, and broadcasting the node information of the master node to the other nodes;
  • when any node receives a data read request sent by the client, comparing the node's own node information with the node information of the master node to determine whether the node is the master node; and
  • if the node is not the master node, forwarding the data read request to the master node so as to return the data stored by the master node to the client.
  • According to another aspect, a data reading apparatus implemented based on a distributed consistency protocol is provided, applied in a distributed system containing multiple nodes, the apparatus comprising:
  • a master node processing module adapted to select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes;
  • a comparison module adapted to, when any node receives a data read request sent by the client, compare the node's own node information with the node information of the master node to determine whether the node is the master node;
  • a forwarding module adapted to forward the data read request to the master node if the node is not the master node; and
  • a sending module adapted to return the data stored by the master node to the client.
  • According to yet another aspect, a computing device is provided, including: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
  • the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol.
  • According to yet another aspect, a computer program is provided, comprising:
  • computer readable code that, when run on a computing device, causes the computing device to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol described above.
  • According to a further aspect, a non-transitory computer readable storage medium is provided, storing at least one executable instruction that causes a processor to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol described above.
  • According to the provided solution, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by the client, the data read request is forwarded to the master node, which returns the data it stores to the client. Data read requests can thus be answered in time, improving data reading efficiency and read performance, and overcoming the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes over the network and data is returned to the client only after more than half of the nodes reach agreement, which makes the overhead of reading data large.
  • FIG. 1 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to another embodiment of the present disclosure;
  • FIG. 3 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to another embodiment of the present disclosure;
  • FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure.
  • FIG. 1 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to an embodiment of the present disclosure. The method is applied in a distributed system containing multiple nodes and, as shown in FIG. 1, includes the following steps:
  • Step S100: select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes.
  • In the embodiment of the present disclosure, the master node is used to provide the data reading service to the client. This avoids requiring more than half of the nodes to reach agreement before data can be returned for every read, which would result in poor data reading performance and overly long read times. To achieve read consistency, a lease time is set for the master node: the lease time refers specifically to the period during which the master node provides service, a new master node can be selected once the lease expires, and the lease times of different master nodes differ, so that there is exactly one master node at any moment.
  • After the master node is selected and its lease time is set, the node information of the master node also needs to be broadcast to the other nodes in the distributed system, so that when another node receives a data read request sent by the client it can forward the data read request to the master node.
  • Step S101: when any node receives a data read request sent by the client, compare the node's own node information with the node information of the master node to determine whether the node is the master node.
  • Each node in the distributed system can establish a connection with the client and provide data processing services to the client. When any node receives a data read request sent by the client, the node needs to determine whether it is itself the master node. Specifically, the node's own node information may be compared with the node information of the master node: if they are consistent, the node is the master node; if they are inconsistent, the node is not the master node.
  • Step S102: if the node is not the master node, forward the data read request to the master node so as to return the data stored by the master node to the client.
  • When it is determined that the node is not the master node, the data read request needs to be forwarded to the master node; the master node responds to the data read request and returns the data it stores to the client.
  • Optionally, the above node is a service node.
  • According to the method provided by this embodiment, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by the client, the data read request is forwarded to the master node so as to return the data stored by the master node to the client. Data read requests can thus be answered in time, improving data reading efficiency and read performance. This overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes through the network service, data is returned to the client only after more than half of the nodes reach agreement, and the master node's log must additionally be copied to the non-master nodes, all of which makes the overhead of reading data large, including network overhead and log-write overhead.
  • FIG. 2 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to another embodiment of the present disclosure. The method is applied in a distributed system containing multiple service nodes and, as shown in FIG. 2, includes the following steps:
  • Step S200: select the service node with the largest amount of log data among the multiple service nodes as the primary service node, set the lease time of the primary service node, and broadcast the node information of the primary service node to the other service nodes.
  • In a distributed system, logs record the various operations performed on data, so the amount of log data reflects the state of the data stored by a service node: the larger the amount of log data, the newer and the more complete the data stored by that service node. The service node with the largest amount of log data is therefore selected as the primary service node.
  • Specifically, the amount of log data is directly reflected in the space currently occupied by the log, which can be measured in KB, MB, or GB; the more space the log occupies, the larger the amount of log data. By comparing the space currently occupied by the logs of the multiple service nodes, the service node with the largest amount of log data can be determined and selected as the primary service node. In subsequent operations, the primary service node only appends log entries; it does not delete or overwrite entries in the log. A selection sketch follows.
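  • As an illustration of this selection rule, the following minimal Go sketch picks the service node whose log currently occupies the most bytes. The NodeInfo type and its LogBytes field are assumptions made for the example, not names taken from the disclosure.

```go
package main

import "fmt"

// NodeInfo is a hypothetical descriptor for a service node; the disclosure
// only requires that the space currently occupied by each node's log be
// comparable across nodes (here, a byte count).
type NodeInfo struct {
	ID       string
	LogBytes int64 // space currently occupied by this node's log
}

// selectPrimary returns the node with the largest log, i.e. the node assumed
// to hold the newest and most complete data.
func selectPrimary(nodes []NodeInfo) (NodeInfo, bool) {
	if len(nodes) == 0 {
		return NodeInfo{}, false
	}
	best := nodes[0]
	for _, n := range nodes[1:] {
		if n.LogBytes > best.LogBytes {
			best = n
		}
	}
	return best, true
}

func main() {
	nodes := []NodeInfo{{"n1", 4 << 20}, {"n2", 9 << 20}, {"n3", 7 << 20}}
	if p, ok := selectPrimary(nodes); ok {
		fmt.Println("primary service node:", p.ID) // n2, the largest log
	}
}
```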
  • After the primary service node is selected, its lease time also needs to be set. The lease time defines the period during which that service node provides service as the primary service node; after the lease expires, the service nodes in the distributed system can select a new primary service node, and every service node has the opportunity to become the primary service node.
  • To keep the distributed system highly available and avoid it becoming unusable when the primary service node goes down or otherwise fails, the lease time of the primary service node is generally specified as 60 seconds. Those skilled in the art may of course set it according to actual needs, but in general the lease time should not be set too long; otherwise, if the primary service node has gone down but its lease has not yet expired, the newly selected primary service node would be unable to provide service for a long time.
  • Step S201: when any service node receives a data read request sent by the client, compare the service node's own node information with the node information of the primary service node to determine whether the service node is the primary service node.
  • Each service node in the distributed system can establish a connection with the client and provide data processing services to the client. When any service node receives a data read request sent by the client, the service node needs to determine whether it is itself the primary service node. Specifically, the service node's own node information may be compared with the node information of the primary service node, where the node information may include a node identifier, the node's IP address, and a port number (this is merely an example and is not limiting).
  • In the embodiment of the present disclosure, whether a service node is the primary service node may be determined by comparing the service node's own node information with the node information of the primary service node item by item: if all items are consistent, the service node is the primary service node; if at least one item is inconsistent, the service node is not the primary service node. An illustrative sketch of this comparison follows.
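  • A minimal sketch of the item-by-item comparison, assuming node information consists of exactly the three items named above (node identifier, IP address, port number); a real node descriptor could carry further fields, in which case each of them would be compared as well.

```go
package main

import "fmt"

// NodeInfo holds the three items of node information named in the disclosure.
type NodeInfo struct {
	ID   string
	IP   string
	Port int
}

// isPrimary reports whether this node's information matches the broadcast
// information of the primary service node in every item; a single mismatched
// item means the receiving node is not the primary service node.
func isPrimary(self, primary NodeInfo) bool {
	return self.ID == primary.ID &&
		self.IP == primary.IP &&
		self.Port == primary.Port
}

func main() {
	primary := NodeInfo{ID: "node-2", IP: "10.0.0.2", Port: 7000}
	self := NodeInfo{ID: "node-5", IP: "10.0.0.5", Port: 7000}
	fmt.Println("this node is the primary:", isPrimary(self, primary)) // false
}
```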
  • Step S202: if the service node is not the primary service node, forward the data read request to the primary service node so as to return the data stored by the primary service node to the client.
  • When the service node is not the primary service node, in order to respond to the data read request quickly, the request received by the service node may be sent to the primary service node whose lease time is still valid; the primary service node responds to the data read request and, after receiving it, returns its stored data to the client. Data is thus returned without more than half of the service nodes reaching agreement, saving network overhead.
  • Step S203: if the service node is the primary service node, return the data stored by the service node to the client.
  • When the service node is the primary service node, the data stored by the service node can be returned directly to the client without sending a data read request to the other service nodes over the network, that is, without more than half of the service nodes reaching agreement. This saves network overhead and also avoids the log-write overhead of first writing the log locally at the service node when requesting the primary service node. The combined read path is sketched below.
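  • Steps S201-S203 together form the read path sketched below. This is a sketch only: forward stands in for the real network call to the primary service node, the in-memory Store stands in for the node's storage engine, and the lease check of steps S205-S207 is omitted for brevity.

```go
package main

import "fmt"

// NodeInfo identifies a service node (identifier, IP address, port number).
type NodeInfo struct {
	ID   string
	IP   string
	Port int
}

// Node is a service node with its own info, the broadcast primary info,
// and a stand-in local data store.
type Node struct {
	Info    NodeInfo
	Primary NodeInfo          // broadcast when the primary is selected
	Store   map[string][]byte // the node's locally stored data
}

// handleRead serves a read locally when this node is the primary service
// node (step S203) and otherwise forwards it to the primary (step S202).
func (n *Node) handleRead(key string) []byte {
	if n.Info == n.Primary { // item-by-item match via struct comparison
		return n.Store[key] // no quorum round and no local log write
	}
	return forward(n.Primary, key) // non-primary: hand off to the primary
}

// forward is a placeholder for the network call to the primary service node.
func forward(primary NodeInfo, key string) []byte {
	fmt.Printf("forwarding read of %q to %s:%d\n", key, primary.IP, primary.Port)
	return nil
}

func main() {
	p := NodeInfo{ID: "node-1", IP: "10.0.0.1", Port: 7000}
	n := &Node{
		Info:    NodeInfo{ID: "node-3", IP: "10.0.0.3", Port: 7000},
		Primary: p,
		Store:   map[string][]byte{},
	}
	n.handleRead("user:42") // not the primary, so the read is forwarded
}
```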
  • In a distributed system it can easily happen that the selected primary service node goes down within its lease time, or suffers some other failure, so that it can no longer provide service; a new primary service node must then be selected, for which step S204 can be adopted:
  • Step S204: if the selected primary service node goes down within its lease time, select one service node from the other service nodes as the primary service node and set the lease time of the new primary service node, where the lease time of the new primary service node follows on from the lease time of the previous primary service node.
  • If the selected primary service node goes down within its lease time, the service node with the largest amount of log data is selected from the other service nodes as the new primary service node, and the lease time of the new primary service node is then set so that it follows on from the lease time of the previous primary service node. For example, suppose the lease time of the selected primary service node is [14:08:00, 14:09:00) and that node goes down at 14:08:30. The other service nodes will re-select a new primary service node, perhaps at 14:08:40; although the new primary service node has already been selected, its lease time is set to [14:09:00, 14:10:00), i.e., at any moment there is exactly one primary service node. A sketch of this lease bookkeeping follows.
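  • A minimal sketch of the lease bookkeeping in step S204, under the assumption that leases are fixed-length, half-open intervals as in the [14:08:00, 14:09:00) example: the new lease starts exactly where the previous one ends, regardless of when the new primary is actually elected.

```go
package main

import (
	"fmt"
	"time"
)

// Lease is a half-open interval [Start, End) during which exactly one
// service node acts as the primary.
type Lease struct {
	Start, End time.Time
}

// nextLease continues the previous lease: even if re-election finishes while
// the old lease is still running (e.g. at 14:08:40), the new lease begins
// where the old one ends, so at most one primary exists at any moment.
func nextLease(prev Lease, length time.Duration) Lease {
	return Lease{Start: prev.End, End: prev.End.Add(length)}
}

func main() {
	t := time.Date(2017, 12, 29, 14, 8, 0, 0, time.UTC)
	old := Lease{Start: t, End: t.Add(time.Minute)} // [14:08:00, 14:09:00)
	next := nextLease(old, time.Minute)             // [14:09:00, 14:10:00)
	fmt.Println(next.Start.Format("15:04:05"), "to", next.End.Format("15:04:05"))
}
```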
  • Step S205: detect whether the current time is within the lease time of the new primary service node; if so, execute step S206; if not, execute step S207.
  • To guarantee consistency, when the new primary service node receives a data read request forwarded by a non-primary service node, it needs to check whether the current time is within its lease time to determine whether to provide service: if the current time is within its lease time, it can provide service to the client; if not, it needs to keep waiting until its lease time arrives.
  • Step S206: return the data stored by the new primary service node to the client.
  • If the current time is detected to be within the lease time of the new primary service node, the data stored by the new primary service node may be returned directly to the client without sending a data read request to the other service nodes over the network, that is, without more than half of the service nodes reaching agreement, saving network overhead.
  • Step S207: continue waiting until the lease time of the new primary service node arrives, without providing the data processing service.
  • For example, if the current time is 14:08:50 and the lease time of the new primary service node is [14:09:00, 14:10:00), the lease time of the new primary service node has not yet arrived. To guarantee consistency, the new primary service node does not provide service until the time reaches 14:09:00, even though the distributed system already has a new primary service node. The gate is sketched below.
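  • The gate of steps S205-S207 could look like the sketch below, reusing the half-open Lease window from the previous sketch: the newly selected primary answers reads only while the clock is inside its window, and before the window opens it simply keeps waiting.

```go
package main

import (
	"fmt"
	"time"
)

// Lease is a half-open interval [Start, End), as in the previous sketch.
type Lease struct {
	Start, End time.Time
}

// canServe implements the check of step S205: the primary may answer a read
// only when the current time lies inside its lease window.
func canServe(l Lease, now time.Time) bool {
	return !now.Before(l.Start) && now.Before(l.End)
}

// waitForLease blocks, as in step S207, until the lease window opens
// (e.g. from 14:08:50 until 14:09:00 in the example above).
func waitForLease(l Lease, now time.Time) {
	if now.Before(l.Start) {
		time.Sleep(l.Start.Sub(now))
	}
}

func main() {
	now := time.Now()
	l := Lease{Start: now.Add(2 * time.Second), End: now.Add(time.Minute)}
	fmt.Println("can serve now:", canServe(l, time.Now())) // false: lease not started
	waitForLease(l, time.Now())                            // step S207: wait
	fmt.Println("can serve now:", canServe(l, time.Now())) // true: inside the window
}
```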
  • According to the method provided by this embodiment, selecting the service node with the largest amount of log data as the primary service node ensures that the data read is up to date, meeting the client's requirements for data consistency. Setting a lease time for the primary service node guarantees that exactly one primary service node provides service at any moment: even if the selected primary service node goes down or otherwise fails within its lease time and a new primary service node is selected within that lease time, the new primary service node provides service only when its own lease time arrives, which guarantees consistency. All data read requests are forwarded to the primary service node, which responds to them and returns data to the client. Data read requests can therefore be answered in time, data reading efficiency and read performance are improved, and the method overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent through the network service to the other service nodes, data is returned to the client only after more than half of the service nodes agree, and the primary service node's log must additionally be copied to the non-primary service nodes, which makes the overhead of reading data large (including network overhead and log-write overhead).
  • FIG. 3 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to an embodiment of the present disclosure. The apparatus is applied in a distributed system containing multiple nodes and includes: a master node processing module 300, a comparison module 310, a forwarding module 320, and a sending module 330.
  • The master node processing module 300 is adapted to select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes.
  • The comparison module 310 is adapted to, when any node receives a data read request sent by the client, compare the node's own node information with the node information of the master node to determine whether the node is the master node.
  • The forwarding module 320 is adapted to forward the data read request to the master node if the node is not the master node.
  • The sending module 330 is adapted to return the data stored by the master node to the client.
  • Optionally, the above node is a service node.
  • According to the apparatus provided by this embodiment, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by the client, the data read request is forwarded to the master node so as to return the data stored by the master node to the client. Data read requests can thus be answered in time, improving data reading efficiency and read performance, and overcoming the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes via the network service and data is returned to the client only after more than half of the nodes reach agreement, which makes the overhead of reading data large.
  • FIG. 4 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to another embodiment of the present disclosure. The apparatus is applied in a distributed system containing multiple service nodes and includes: a primary service node processing module 400, a comparison module 410, a forwarding module 420, and a sending module 430.
  • The primary service node processing module 400 is adapted to select the service node with the largest amount of log data among the multiple service nodes as the primary service node, set the lease time of the primary service node, and broadcast the node information of the primary service node to the other service nodes.
  • The comparison module 410 is adapted to, when any service node receives a data read request sent by the client, compare the service node's own node information with the node information of the primary service node to determine whether the service node is the primary service node.
  • The node information includes: a node identifier, the node's IP address, and a port number.
  • The forwarding module 420 is adapted to forward the data read request to the primary service node if the service node is not the primary service node.
  • The sending module 430 is adapted to return the data stored by the primary service node to the client.
  • The sending module 430 is further adapted to: if the service node is the primary service node, return the data stored by the service node to the client.
  • If the selected primary service node goes down within its lease time, the primary service node processing module 400 is further adapted to: re-select one service node from the other service nodes as the primary service node and set the lease time of the new primary service node, where the lease time of the new primary service node follows on from the lease time of the previous primary service node.
  • Although a new primary service node is selected, to guarantee data consistency the apparatus further includes: a detection module 440 adapted to detect whether the current time is within the lease time of the new primary service node.
  • The sending module 430 is further adapted to: if the current time is within the lease time of the new primary service node, return the data stored by the new primary service node to the client.
  • The apparatus further includes: an exit module 450 adapted to exit the data processing service of the new primary service node if the current time is not within the lease time of the new primary service node.
  • According to the apparatus provided by this embodiment, selecting the service node with the largest amount of log data as the primary service node ensures that the data read is up to date, meeting the client's requirements for data consistency. Setting a lease time for the primary service node guarantees that exactly one primary service node provides service at any moment: even if the selected primary service node goes down or otherwise fails within its lease time and a new primary service node is selected within that lease time, the new primary service node provides service only when its own lease time arrives, which guarantees consistency. All data read requests are forwarded to the primary service node, which responds to them and returns data to the client. Data read requests can therefore be answered in time, data reading efficiency and read performance are improved, and the apparatus overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent through the network service to the other service nodes, data is returned to the client only after more than half of the service nodes agree, and the primary service node's log must additionally be copied to the non-primary service nodes, which makes the overhead of reading data large (including network overhead and log-write overhead).
  • The present disclosure also provides a non-transitory computer readable storage medium storing at least one executable instruction; the computer executable instruction can execute the data reading method implemented based on the distributed consistency protocol in any of the method embodiments described above.
  • FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure; the specific embodiments of the present disclosure do not limit the specific implementation of the computing device.
  • As shown in FIG. 5, the computing device can include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
  • The processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508.
  • The communication interface 504 is configured to communicate with network elements of other devices, such as clients or other servers.
  • The processor 502 is configured to execute the program 510, and may specifically execute the related steps in the foregoing embodiments of the data reading method implemented based on the distributed consistency protocol.
  • Specifically, the program 510 may include program code, and the program code includes computer operating instructions.
  • The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
  • The memory 506 is configured to store the program 510. The memory 506 may include a high-speed RAM memory and may also include a non-volatile memory, such as at least one disk memory.
  • The program 510 may specifically be configured to cause the processor 502 to perform the data reading method implemented based on the distributed consistency protocol in any of the above method embodiments.
  • For the specific implementation of each step in the program 510, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments of the data reading method implemented based on the distributed consistency protocol, which are not repeated here.
  • A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
  • The modules in the devices of the embodiments can be adaptively changed and arranged in one or more devices different from those of the embodiments.
  • The modules or units or components of the embodiments may be combined into one module or unit or component, and in addition they may be divided into multiple sub-modules or sub-units or sub-components.
  • Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination.
  • Each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
  • The various component embodiments of the present disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • A microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of a data reading device implemented based on a distributed consistency protocol according to embodiments of the present disclosure.
  • The present disclosure may also be implemented as a device or apparatus program (e.g., a computer program and a computer program product) for performing part or all of the methods described herein.
  • Such a program implementing the present disclosure may be stored on a computer readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Multi Processors (AREA)

Abstract

The present disclosure discloses a data reading method and apparatus implemented based on a distributed consistency protocol, a computing device, and a computer storage medium. The method includes: selecting one node from multiple nodes as the master node, setting the lease time of the master node, and broadcasting the node information of the master node to the other nodes; when any node receives a data read request sent by a client, comparing the node's own node information with the node information of the master node to determine whether the node is the master node; and, if the node is not the master node, forwarding the data read request to the master node so as to return the data stored by the master node to the client.

Description

Data reading method and apparatus implemented based on a distributed consistency protocol
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 201711478287.9, filed with the Chinese Patent Office on December 29, 2017 and entitled "Data reading method and apparatus implemented based on a distributed consistency protocol", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular to a data reading method and apparatus, a computing device, and a non-transitory computer readable storage medium implemented based on a distributed consistency protocol.
Background
A complete distributed system is formed by many service nodes at different locations connected together through a network, and massive amounts of data are distributed among the different service nodes of the entire network system. All clients connected to the distributed system can access the data on any one of the service nodes.
In the existing data reading method based on a distributed consistency protocol, every data read operation must follow the distributed consistency protocol (the Raft protocol): the service node that receives a data read request sends the request over the network to the other service nodes of the distributed system, and the data is returned to the client only after more than half of the service nodes in the distributed system have confirmed the data corresponding to the data read request; the operation must also be recorded in the service node's log. As a result, the overhead of reading data is too large, which degrades read performance.
Summary
In view of the above problems, the present disclosure is proposed in order to provide a data reading method and apparatus, a computing device, and a non-transitory computer readable storage medium implemented based on a distributed consistency protocol that overcome the above problems or at least partially solve them.
According to one aspect of the present disclosure, a data reading method implemented based on a distributed consistency protocol is provided. The method is applied in a distributed system containing multiple nodes and comprises:
selecting one node from the multiple nodes as the master node, setting the lease time of the master node, and broadcasting the node information of the master node to the other nodes;
when any node receives a data read request sent by a client, comparing the node's own node information with the node information of the master node to determine whether the node is the master node; and
if the node is not the master node, forwarding the data read request to the master node so as to return the data stored by the master node to the client.
According to another aspect of the present disclosure, a data reading apparatus implemented based on a distributed consistency protocol is provided. The apparatus is applied in a distributed system containing multiple nodes and comprises:
a master node processing module adapted to select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes;
a comparison module adapted to, when any node receives a data read request sent by a client, compare the node's own node information with the node information of the master node to determine whether the node is the master node;
a forwarding module adapted to forward the data read request to the master node if the node is not the master node; and
a sending module adapted to return the data stored by the master node to the client.
According to yet another aspect of the present disclosure, a computing device is provided, comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol described above.
According to yet another aspect of the present disclosure, a computer program is provided, comprising:
computer readable code that, when run on a computing device, causes the computing device to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol described above.
According to a further aspect of the present disclosure, a non-transitory computer readable storage medium is provided, storing at least one executable instruction that causes a processor to perform the operations corresponding to the data reading method implemented based on the distributed consistency protocol described above.
According to the solution provided by the present disclosure, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by a client, the data read request is forwarded to the master node so as to return the data stored by the master node to the client. Data read requests can thus be answered in time, data reading efficiency and read performance are improved, and the solution overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes over the network and data is returned to the client only after more than half of the nodes reach agreement, which makes the overhead of reading data large.
The above description is only an overview of the technical solution of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent, specific embodiments of the present disclosure are set forth below.
Brief Description of the Drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are intended only to illustrate the preferred embodiments and are not to be construed as limiting the present disclosure. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
FIG. 1 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to another embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure.
Preferred Embodiments of the Present Disclosure
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that the scope of the present disclosure can be conveyed completely to those skilled in the art.
FIG. 1 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to an embodiment of the present disclosure. The method is applied in a distributed system containing multiple nodes. As shown in FIG. 1, the method includes the following steps:
Step S100: select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes.
In the embodiment of the present disclosure, the master node is mainly used to provide the data reading service to the client, avoiding the problem that every read requires more than half of the nodes to reach agreement before data can be returned to the client, which causes poor data reading performance and overly long read times. To achieve data reading consistency, a lease time is set here for the master node, where the lease time specifically refers to the period during which the master node provides service; after the lease time of the master node expires, a new master node can be selected, and the lease times of different master nodes differ, thereby guaranteeing that there is exactly one master node at any moment.
After the master node is selected and its lease time is set, the node information of the master node also needs to be broadcast to the other nodes in the distributed system, so that after receiving a data read request sent by the client, another node can conveniently forward the data read request to the master node.
Step S101: when any node receives a data read request sent by the client, compare the node's own node information with the node information of the master node to determine whether the node is the master node.
Each node in the distributed system can establish a connection with the client and provide data processing services to the client. When any node receives a data read request sent by the client, the node needs to determine whether it is itself the master node. Specifically, the node's own node information may be compared with the node information of the master node: if the node's own node information is consistent with the node information of the master node, the node is the master node; if the node's own node information is inconsistent with the node information of the master node, the node is not the master node.
Step S102: if the node is not the master node, forward the data read request to the master node so as to return the data stored by the master node to the client.
When it is determined that the node is not the master node, the data read request needs to be forwarded to the master node; the master node responds to the data read request and returns the data it stores to the client.
Optionally, the above node is a service node.
According to the method provided by the above embodiment of the present disclosure, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by the client, the data read request is forwarded to the master node so as to return the data stored by the master node to the client. Data read requests can thus be answered in time, data reading efficiency and read performance are improved, and the method overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes through the network service, data is returned to the client only after more than half of the nodes reach agreement, and in addition the master node's log must be copied to the non-master nodes, which makes the overhead of reading data large, including network overhead and log-write overhead.
FIG. 2 is a schematic flowchart of a data reading method implemented based on a distributed consistency protocol according to another embodiment of the present disclosure. The method is applied in a distributed system containing multiple service nodes. As shown in FIG. 2, the method includes the following steps:
Step S200: select the service node with the largest amount of log data among the multiple service nodes as the primary service node, set the lease time of the primary service node, and broadcast the node information of the primary service node to the other service nodes.
In a distributed system, logs are used to record the various operations on data, and the amount of log data reflects the state of the data stored by a service node: the larger the amount of log data, the newer and the more complete the data stored by the service node. Therefore, the service node with the largest amount of log data can be selected as the primary service node.
Specifically, the amount of log data is directly reflected in the size of the space currently occupied by the log, which can be measured in KB, MB, or GB; the larger the space currently occupied by the log, the larger the amount of log data. By comparing the sizes of the space currently occupied by the logs of the multiple service nodes, the service node with the largest amount of log data can be determined and selected as the primary service node. In subsequent operations, the primary service node only adds log entries and does not delete or overwrite entries in the log.
After the primary service node is selected, its lease time also needs to be set, where the lease time defines the period during which that service node provides service as the primary service node; after the lease time expires, the service nodes in the distributed system can select a new primary service node, and every service node has the opportunity to become the primary service node.
To guarantee the high availability of the distributed system and avoid the system becoming unavailable because the primary service node goes down or suffers another failure, the lease time of the primary service node is generally specified as 60 seconds. Of course, those skilled in the art may set it according to actual needs, but in general the lease time should not be set too long, lest the primary service node has already gone down but, because the lease time has not yet expired, the newly selected primary service node cannot provide service for a long time.
Step S201: when any service node receives a data read request sent by the client, compare the service node's own node information with the node information of the primary service node to determine whether the service node is the primary service node.
Each service node in the distributed system can establish a connection with the client and provide data processing services to the client. When any service node receives a data read request sent by the client, the service node needs to determine whether it is itself the primary service node. Specifically, the service node's own node information may be compared with the node information of the primary service node, where the node information may include a node identifier, the node's IP address, and a port number; this is merely an example and is not limiting in any way.
In the embodiment of the present disclosure, whether a service node is the primary service node may be determined by comparing the service node's own node information with the node information of the primary service node item by item: if all items of the service node's own node information are consistent with the node information of the primary service node, the service node is the primary service node; if at least one item is inconsistent, the service node is not the primary service node.
Step S202: if the service node is not the primary service node, forward the data read request to the primary service node so as to return the data stored by the primary service node to the client.
When it is determined that the service node is not the primary service node, in order to respond to the data read request quickly, the data read request received by the service node may be sent to the primary service node whose lease time is still valid; the primary service node responds to the data read request and, after receiving it, returns its stored data to the client. In this way, data can be returned to the client without more than half of the service nodes reaching agreement, saving network overhead; in addition, there is no log-write overhead caused by having to write the log locally at the service node before requesting the primary service node.
Step S203: if the service node is the primary service node, return the data stored by the service node to the client.
When it is determined that the service node is the primary service node, the data stored by the service node can be returned directly to the client without sending a data read request to the other service nodes over the network; that is, data is returned to the client without more than half of the service nodes reaching agreement, saving network overhead, and there is likewise no log-write overhead caused by having to write the log locally at the service node before requesting the primary service node.
In a distributed system it can easily happen that the selected primary service node goes down within its lease time, or suffers some other failure, so that it can no longer provide service. In that case a new primary service node must be re-selected, which can specifically be done by the method in step S204:
Step S204: if the selected primary service node goes down within its lease time, select one service node from the other service nodes as the primary service node and set the lease time of the new primary service node, where the lease time of the new primary service node follows on from the lease time of the previous primary service node.
If the selected primary service node goes down within its lease time, the service node with the largest amount of log data is selected from the other service nodes as the new primary service node. After the new primary service node is selected, its lease time also needs to be set. In the embodiment of the present disclosure, the lease time of the new primary service node follows on from the lease time of the previous primary service node. For example, suppose the lease time of the selected primary service node is [14:08:00, 14:09:00) and that node goes down at 14:08:30. The other service nodes will re-select a new primary service node; for example, the new primary service node may be selected at 14:08:40. Although the new primary service node has been selected, its lease time is set to [14:09:00, 14:10:00); that is, at any moment there is exactly one primary service node.
Step S205: detect whether the current time is within the lease time of the new primary service node; if so, execute step S206; if not, execute step S207.
To guarantee consistency, when the new primary service node receives a data read request forwarded by a non-primary service node, it needs to detect whether the current time is within its lease time in order to determine whether to provide service. If the current time is within its lease time, it can provide service to the client; if the current time is not within its lease time, it needs to keep waiting until its lease time arrives.
Step S206: return the data stored by the new primary service node to the client.
If it is detected that the current time is within the lease time of the new primary service node, the data stored by the new primary service node can be returned directly to the client without sending a data read request to the other service nodes over the network; that is, data is returned to the client without more than half of the service nodes reaching agreement, saving network overhead, and there is no log-write overhead caused by having to write the log locally at the service node before requesting the primary service node.
Step S207: continue waiting until the lease time of the new primary service node arrives, without providing the data processing service.
For example, if the current time is 14:08:50 and the lease time of the new primary service node is [14:09:00, 14:10:00), the lease time of the new primary service node has not yet arrived. To guarantee consistency, even though the distributed system already has a new primary service node, that node does not provide service because its lease time has not yet arrived; only when the time reaches 14:09:00 can the new primary service node provide service.
According to the method provided by the above embodiment of the present disclosure, selecting the service node with the largest amount of log data as the primary service node ensures that the data read is up to date, meeting the client's requirements for data consistency. Setting a lease time for the primary service node guarantees that exactly one primary service node provides service at any moment; thus, even if the selected primary service node goes down or otherwise fails within its lease time and a new primary service node is selected within that lease time, the new primary service node provides service only when its own lease time arrives, which guarantees consistency. All data read requests are forwarded to the primary service node, which responds to them and returns data to the client. Data read requests can therefore be answered in time, data reading efficiency and read performance are improved, and the method overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent over the network to the other service nodes, data is returned to the client only after more than half of the service nodes reach agreement, and the primary service node's log must additionally be copied to the non-primary service nodes, which makes the overhead of reading data large (including network overhead and log-write overhead).
FIG. 3 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to an embodiment of the present disclosure. The apparatus is applied in a distributed system containing multiple nodes. As shown in FIG. 3, the apparatus includes: a master node processing module 300, a comparison module 310, a forwarding module 320, and a sending module 330.
The master node processing module 300 is adapted to select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes.
The comparison module 310 is adapted to, when any node receives a data read request sent by the client, compare the node's own node information with the node information of the master node to determine whether the node is the master node.
The forwarding module 320 is adapted to forward the data read request to the master node if the node is not the master node.
The sending module 330 is adapted to return the data stored by the master node to the client.
Optionally, the above node is a service node.
According to the apparatus provided by the above embodiment of the present disclosure, one node is selected from multiple nodes as the master node and the lease time of the master node is set; when any node receives a data read request sent by the client, the data read request is forwarded to the master node so as to return the data stored by the master node to the client. Data read requests can thus be answered in time, data reading efficiency and read performance are improved, and the apparatus overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent to the other nodes via the network service and data is returned to the client only after more than half of the nodes reach agreement, which makes the overhead of reading data large.
FIG. 4 is a schematic structural diagram of a data reading apparatus implemented based on a distributed consistency protocol according to another embodiment of the present disclosure. The apparatus is applied in a distributed system containing multiple service nodes. As shown in FIG. 4, the apparatus includes: a primary service node processing module 400, a comparison module 410, a forwarding module 420, and a sending module 430.
The primary service node processing module 400 is adapted to select the service node with the largest amount of log data among the multiple service nodes as the primary service node, set the lease time of the primary service node, and broadcast the node information of the primary service node to the other service nodes.
The comparison module 410 is adapted to, when any service node receives a data read request sent by the client, compare the service node's own node information with the node information of the primary service node to determine whether the service node is the primary service node.
The node information includes: a node identifier, the node's IP address, and a port number.
The forwarding module 420 is adapted to forward the data read request to the primary service node if the service node is not the primary service node.
The sending module 430 is adapted to return the data stored by the primary service node to the client.
The sending module 430 is further adapted to: if the service node is the primary service node, return the data stored by the service node to the client.
If the selected primary service node goes down within its lease time, the primary service node processing module 400 is further adapted to: re-select one service node from the other service nodes as the primary service node and set the lease time of the new primary service node, where the lease time of the new primary service node follows on from the lease time of the previous primary service node.
Although a new primary service node is selected, in order to guarantee data consistency the apparatus further includes: a detection module 440 adapted to detect whether the current time is within the lease time of the new primary service node.
The sending module 430 is further adapted to: if the current time is within the lease time of the new primary service node, return the data stored by the new primary service node to the client.
The apparatus further includes: an exit module 450 adapted to exit the data processing service of the new primary service node if the current time is not within the lease time of the new primary service node.
According to the apparatus provided by the above embodiment of the present disclosure, selecting the service node with the largest amount of log data as the primary service node ensures that the data read is up to date, meeting the client's requirements for data consistency. Setting a lease time for the primary service node guarantees that exactly one primary service node provides service at any moment; thus, even if the selected primary service node goes down or otherwise fails within its lease time and a new primary service node is selected within that lease time, the new primary service node provides service only when its own lease time arrives, which guarantees consistency. All data read requests are forwarded to the primary service node, which responds to them and returns data to the client. Data read requests can therefore be answered in time, data reading efficiency and read performance are improved, and the apparatus overcomes the prior-art problem that, when data is read based on a distributed consistency protocol, the request must be sent over the network to the other service nodes, data is returned to the client only after more than half of the service nodes reach agreement, and the primary service node's log must additionally be copied to the non-primary service nodes, which makes the overhead of reading data large (including network overhead and log-write overhead).
The present disclosure also provides a non-transitory computer readable storage medium storing at least one executable instruction; the computer executable instruction can execute the data reading method implemented based on the distributed consistency protocol in any of the above method embodiments.
FIG. 5 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure; the specific embodiments of the present disclosure do not limit the specific implementation of the computing device.
As shown in FIG. 5, the computing device may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
Specifically:
The processor 502, the communication interface 504, and the memory 506 communicate with one another through the communication bus 508.
The communication interface 504 is used to communicate with network elements of other devices, such as clients or other servers.
The processor 502 is used to execute a program 510, and may specifically execute the relevant steps in the above embodiments of the data reading method implemented based on the distributed consistency protocol.
Specifically, the program 510 may include program code, and the program code includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present disclosure. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used to store the program 510. The memory 506 may include a high-speed RAM memory and may also include a non-volatile memory, such as at least one disk memory.
The program 510 may specifically be used to cause the processor 502 to execute the data reading method implemented based on the distributed consistency protocol in any of the above method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding descriptions of the corresponding steps and units in the above embodiments of the data reading method implemented based on the distributed consistency protocol, which are not repeated here. Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the devices and modules described above, reference may be made to the corresponding process descriptions in the foregoing method embodiments, which are not repeated here.
The algorithms and displays provided here are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the above description. Moreover, the present disclosure is not directed to any particular programming language. It should be understood that the contents of the present disclosure described here may be implemented using various programming languages, and the above description of a specific language is intended to disclose the best mode of the present disclosure.
The specification provided here sets forth a large number of specific details. However, it can be understood that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and aid the understanding of one or more of the various disclosed aspects, in the above description of exemplary embodiments of the present disclosure the various features of the present disclosure are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting the intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, as the following claims reflect, the disclosed aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present disclosure.
Those skilled in the art can understand that the modules in the devices in the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules or units or components in the embodiments may be combined into one module or unit or component, and in addition they may be divided into multiple sub-modules or sub-units or sub-components. Except that at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art can understand that, although some embodiments described here include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present disclosure and form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the data reading device implemented based on a distributed consistency protocol according to the embodiments of the present disclosure. The present disclosure may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described here. Such a program implementing the present disclosure may be stored on a computer readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present disclosure, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of multiple such elements. The present disclosure may be implemented by means of hardware including several different elements and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.

Claims (17)

  1. A data reading method implemented based on a distributed consistency protocol, the method being applied in a distributed system containing multiple nodes, the method comprising:
    selecting one node from the multiple nodes as the master node, setting the lease time of the master node, and broadcasting the node information of the master node to the other nodes;
    when any node receives a data read request sent by a client, comparing the node's own node information with the node information of the master node to determine whether the node is the master node; and
    if the node is not the master node, forwarding the data read request to the master node so as to return the data stored by the master node to the client.
  2. The method according to claim 1, wherein the node is a service node.
  3. The method according to claim 2, wherein the method further comprises: if the service node is the primary service node, returning the data stored by the service node to the client.
  4. The method according to claim 2 or 3, wherein the method further comprises: if the selected primary service node goes down within its lease time, selecting one service node from the other service nodes as the primary service node and setting the lease time of the new primary service node, wherein the lease time of the new primary service node follows on from the lease time of the previous primary service node.
  5. The method according to claim 4, wherein, after the setting of the lease time of the new primary service node, the method further comprises:
    detecting whether the current time is within the lease time of the new primary service node;
    if so, returning the data stored by the new primary service node to the client; and
    if not, the new primary service node not providing the data processing service.
  6. The method according to any one of claims 2-5, wherein the selecting one service node from the multiple service nodes as the primary service node further comprises:
    selecting the service node with the largest amount of log data among the multiple service nodes as the primary service node.
  7. The method according to any one of claims 2-6, wherein the node information comprises: a node identifier, the node's IP address, and a port number.
  8. A data reading apparatus implemented based on a distributed consistency protocol, the apparatus being applied in a distributed system containing multiple nodes, the apparatus comprising:
    a master node processing module adapted to select one node from the multiple nodes as the master node, set the lease time of the master node, and broadcast the node information of the master node to the other nodes;
    a comparison module adapted to, when any node receives a data read request sent by a client, compare the node's own node information with the node information of the master node to determine whether the node is the master node;
    a forwarding module adapted to forward the data read request to the master node if the node is not the master node; and
    a sending module adapted to return the data stored by the master node to the client.
  9. The apparatus according to claim 8, wherein the node is a service node.
  10. The apparatus according to claim 9, wherein the sending module is further adapted to: if the service node is the primary service node, return the data stored by the service node to the client.
  11. The apparatus according to claim 9 or 10, wherein the primary service node processing module is further adapted to: if the selected primary service node goes down within its lease time, re-select one service node from the other service nodes as the primary service node and set the lease time of the new primary service node, wherein the lease time of the new primary service node follows on from the lease time of the previous primary service node.
  12. The apparatus according to claim 11, wherein the apparatus further comprises:
    a detection module adapted to detect whether the current time is within the lease time of the new primary service node;
    wherein the sending module is further adapted to: if the current time is within the lease time of the new primary service node, return the data stored by the new primary service node to the client; and
    an exit module adapted to exit the data processing service of the new primary service node if the current time is not within the lease time of the new primary service node.
  13. The apparatus according to any one of claims 9-12, wherein the primary service node processing module is further adapted to: select the service node with the largest amount of log data among the multiple service nodes as the primary service node.
  14. The apparatus according to any one of claims 9-13, wherein the node information comprises: a node identifier, the node's IP address, and a port number.
  15. A computing device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;
    the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the data reading method implemented based on a distributed consistency protocol according to any one of claims 1-7.
  16. A computer program, comprising computer readable code that, when run on a computing device, causes the computing device to perform the operations corresponding to the data reading method implemented based on a distributed consistency protocol according to any one of claims 1-7.
  17. A non-transitory computer readable storage medium, storing at least one executable instruction, wherein the executable instruction causes a processor to perform the operations corresponding to the data reading method implemented based on a distributed consistency protocol according to any one of claims 1-7.
PCT/CN2018/079028 2017-12-29 2018-03-14 Data reading method and apparatus implemented based on a distributed consistency protocol WO2019127915A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711478287.9 2017-12-29
CN201711478287.9A CN108234630B (zh) 2017-12-29 Data reading method and apparatus implemented based on a distributed consistency protocol

Publications (1)

Publication Number Publication Date
WO2019127915A1 (zh)

Family

ID=62646894

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079028 WO2019127915A1 (zh) Data reading method and apparatus implemented based on a distributed consistency protocol 2017-12-29 2018-03-14

Country Status (2)

Country Link
CN (1) CN108234630B (zh)
WO (1) WO2019127915A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109088937B (zh) * 2018-08-28 2021-10-26 郑州云海信息技术有限公司 Cluster authorization method and apparatus based on unified management
CN111352943A (zh) * 2018-12-24 2020-06-30 华为技术有限公司 Method and apparatus for implementing data consistency, server, and terminal
CN110138863B (zh) * 2019-05-16 2021-11-02 哈尔滨工业大学(深圳) Adaptive consistency protocol optimization method based on Multi-Paxos grouping
CN114448781B (zh) * 2021-12-22 2024-06-07 天翼云科技有限公司 Data processing system
CN114244859B (zh) * 2022-02-23 2022-08-16 阿里云计算有限公司 Data processing method and apparatus, and electronic device
CN114629806B (zh) * 2022-04-13 2023-12-12 腾讯科技(成都)有限公司 Data processing method and apparatus, electronic device, storage medium, and program product
CN116340431B (zh) * 2023-05-24 2023-09-01 阿里云计算有限公司 Distributed system, data synchronization method, electronic device, and storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9367410B2 (en) * 2014-09-12 2016-06-14 Facebook, Inc. Failover mechanism in a distributed computing system
CN105592139B (zh) * 2015-10-28 2019-03-15 新华三技术有限公司 HA implementation method and apparatus for a distributed file system management platform

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070047466A1 (en) * 2005-09-01 2007-03-01 Fujitsu Limited Network management system
CN104598615A (zh) * 2015-01-31 2015-05-06 广州亦云信息技术有限公司 Memory access method and apparatus supporting data persistence
CN105426439A (zh) * 2015-11-05 2016-03-23 腾讯科技(深圳)有限公司 Metadata processing method and apparatus
CN106911728A (zh) * 2015-12-22 2017-06-30 华为技术服务有限公司 Method and apparatus for selecting a master node in a distributed system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111711526A (zh) * 2020-06-16 2020-09-25 深圳前海微众银行股份有限公司 Consensus method and system for blockchain nodes
CN111711526B (zh) * 2020-06-16 2024-03-26 深圳前海微众银行股份有限公司 Consensus method and system for blockchain nodes
CN112954008A (zh) * 2021-01-26 2021-06-11 网宿科技股份有限公司 Distributed task processing method and apparatus, electronic device, and storage medium
CN112954008B (zh) * 2021-01-26 2022-11-04 网宿科技股份有限公司 Distributed task processing method and apparatus, electronic device, and storage medium
CN112866406A (zh) * 2021-02-04 2021-05-28 建信金融科技有限责任公司 Data storage method, system, apparatus, device, and storage medium
CN115102972A (zh) * 2022-07-15 2022-09-23 济南浪潮数据技术有限公司 Method, apparatus, device, and medium for storing NFS files

Also Published As

Publication number Publication date
CN108234630A (zh) 2018-06-29
CN108234630B (zh) 2021-03-23

Similar Documents

Publication Publication Date Title
WO2019127915A1 (zh) Data reading method and apparatus implemented based on a distributed consistency protocol
WO2019127916A1 (zh) Data reading and writing method and apparatus implemented based on a distributed consistency protocol
WO2018177239A1 (zh) Service acceptance and consensus method and apparatus
JP6328596B2 (ja) PCI Express fabric routing for fully connected mesh topologies
JP2020509445A5 (zh)
CN106302595B (zh) Method and device for performing health checks on servers
US9940020B2 Memory management method, apparatus, and system
CN110730250B (zh) Information processing method and apparatus, service system, and storage medium
WO2017071087A1 (zh) Information transmission method, apparatus, and device
CN106936662A (zh) Method, apparatus, and system for implementing a heartbeat mechanism
US10721335B2 Remote procedure call using quorum state store
CN113360077B (zh) Data storage method, computing node, and storage system
CN113794764A (zh) Request processing method, medium, and electronic device for a server cluster
CN112835885B (zh) Processing method, apparatus, and system for distributed table storage
WO2018023937A1 (zh) Method and device for identifying a wireless access point
WO2016095644A1 (zh) High-availability solution method and apparatus for a database
WO2019119269A1 (zh) Network fault detection method and control center device
CN109783002B (zh) Data reading and writing method, management device, client, and storage system
WO2014190700A1 (zh) Memory access method, buffer scheduler, and memory module
CN112052091A (zh) Method and computing device for processing service invocation requests in a multi-machine-room deployment
CN113301173A (zh) Domain name update system and method, message forwarding method, and server
US9270756B2 Enhancing active link utilization in serial attached SCSI topologies
CN111367921A (zh) Data object refresh method and apparatus
WO2022194021A1 (zh) Concurrency control method, network interface card, computer device, and storage medium
US10367902B2 Media resource address resolution and acquisition method, system, server and client terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18896782

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.10.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18896782

Country of ref document: EP

Kind code of ref document: A1