CN106357449A - zedis distributed type buffer method - Google Patents

zedis distributed type buffer method Download PDF

Info

Publication number
CN106357449A
CN106357449A · Application CN201610854537.3A
Authority
CN
China
Prior art keywords
node
server
data
zedis
host node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610854537.3A
Other languages
Chinese (zh)
Inventor
黄灿圳
张华杰
王国彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Bincent Technology Co Ltd
Original Assignee
Shenzhen Bincent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Bincent Technology Co Ltd filed Critical Shenzhen Bincent Technology Co Ltd
Priority to CN201610854537.3A priority Critical patent/CN106357449A/en
Publication of CN106357449A publication Critical patent/CN106357449A/en
Pending legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 — Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 — Management of faults, events, alarms or notifications
    • H04L41/0654 — Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0663 — Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/01 — Protocols
    • H04L67/10 — Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 — Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 — Server selection for load balancing
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L67/50 — Network services
    • H04L67/56 — Provisioning of proxy services
    • H04L67/568 — Storing data temporarily at an intermediate stage, e.g. caching

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention provides a zedis distributed caching method comprising the following steps: a server judging module reads the complete information required by a redis server cluster from a server cluster monitoring module; the server judging module sends the read information to a client, and the client reads the complete information required by the redis server cluster and extracts, from the complete information of each physical node, the information that is unique with respect to the other physical nodes, including the ip (Internet protocol) address and port number; the client generates a fixed set of keys from the received information, generates hash codes with the load-balancing core class ConsistentHash algorithm, maps all hash codes generated from the same information to the same physical node, and fills the ConsistentHash core variable mapping table with the hash-code-to-physical-node mapping to form a hash ring. With minimal configuration, the zedis distributed caching method achieves cluster high availability and automatic management, and realizes data migration, load balancing, and dynamic stability.

Description

A zedis distributed caching method
Technical field
The present invention relates to the field of data caching technology, and more particularly to a distributed caching method.
Background art
Existing data caching schemes include the redis cluster scheme and the twitter scheme.
The redis cluster scheme was released relatively recently; when our scheme was proposed it was still immature and not yet in large-scale use. It provides a hash algorithm with pre-allocated buckets, which is less flexible than consistent hashing, and its failover must be assigned manually.
The twitter scheme proxies directly at the redis protocol layer. It can be configured flexibly and supports automatic algorithms such as consistent hashing, but it is a static distributed cluster: it does not support automatic failover of redis nodes, and failover of the proxy itself must also be developed separately.
Some industry schemes combine the redis cluster and twitter schemes; besides the caching service itself, they provide a set of management tools for tasks such as data migration and load balancing.
There are also complete, mature platform tools supporting large-scale cluster deployment, but their deployment is heavyweight, requiring a database and a web system, which is insufficiently flexible for application scenarios with smaller cluster scales such as ours.
The above schemes all provide distributed-caching features to some degree, but none is well suited to our application scenario. We require the distributed cache to meet the following:
a) node hashing must be dynamic and stable, so we want consistent hashing or a similar hash algorithm;
b) failover, including data migration, must complete automatically without manual operations intervention; schemes 1, 2 and 3 all fall short, to varying degrees, of fully automatic failover;
c) the cluster scheme must integrate well with our existing operations platform; scheme 4 is heavyweight, and integrating it into our internal platform would be rather difficult.
Therefore, a zedis distributed caching method is urgently needed that, with minimal configuration, achieves cluster high availability and automatic management, and supports data migration, load balancing, and dynamic stability.
Summary of the invention
The technical problem to be solved by the present invention is to provide a zedis distributed caching method that, with minimal configuration, achieves cluster high availability and automatic management, and supports data migration, load balancing, and dynamic stability.
To solve the above technical problem, the invention provides a zedis distributed caching method. A zookeeper core processor, a server cluster monitoring module, a node data processing module, a data recovery module, a client, a server side and a data storage server are provided; the zookeeper core processor includes a server judging module and a server display module. The zedis distributed caching method comprises the following steps:
s1: the server judging module reads the complete information required by the redis server cluster from the server cluster monitoring module;
s2: the server judging module sends the read information to the client; the client reads the complete information required by the redis server cluster and extracts, from the complete information of each physical node, the information that is unique with respect to the other physical nodes, where this information includes the ip address and port number;
s3: the client generates a fixed set of keys from the received information, generates the corresponding hash codes with the load-balancing core class consistenthash algorithm, maps all hash codes generated from the same information to the same physical node, and fills the consistenthash core variable mapping table with the hash-code-to-physical-node mapping to form a hash ring;
s4: the hash ring is connected with the data storage server to complete the read and write processes with data backup and fault avoidance;
s5: the fault transfer processing module, the server cluster monitoring module and the zookeeper core processor achieve failover through data backup, fault-node avoidance and data recovery;
the step "the write process with data backup and fault avoidance" is implemented as follows:
s401: the read-write proxy module finds the main node through the hash ring according to the key parameter, and thereby finds the standby node;
s402: judge whether the main node is available; if the main node is available, execute step s403; if the main node is unavailable but the standby node is available, execute step s404; if both the main node and the standby node are unavailable, execute step s405;
s403: write to the main database of the main node and to the backup database of the standby node;
s404: write to the backup database and the temporary database of the standby node;
s405: write to the database of another available node;
the step "the read process with data backup and fault avoidance" is implemented as follows:
s406: the read-write proxy module obtains the main node and the backup node according to the key parameter;
s407: judge whether the main node is available; if the main node is available, execute step s408; if the main node is unavailable but the standby node is available, execute step s409; if both the main node and the standby node are unavailable, execute step s410;
s408: read from the main node;
s409: read from the standby node;
s410: read from another node;
wherein the data storage layout of the data storage server is as shown in the table below:
wherein the main node is the nearest physical node found by the load-balancing core class consistenthash algorithm, and the standby node is the next node after the main node.
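The read and write rules of steps s401–s410 can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the node layout, the field names, and the modulo stand-in for the hash-ring lookup are all assumptions introduced for illustration.

```java
import java.util.*;

public class FailoverProxySketch {
    static class Node {
        boolean available = true;
        final Map<String, String> mainDb = new HashMap<>();   // main database space
        final Map<String, String> backupDb = new HashMap<>(); // backup database space
        final Map<String, String> tempDb = new HashMap<>();   // temporary database space
    }

    final List<Node> ring; // nodes in ring order; the standby of node n is node n+1

    FailoverProxySketch(List<Node> ring) { this.ring = ring; }

    // Stand-in for the consistent-hash lookup of the main node for a key.
    int mainIndex(String key) { return Math.abs(key.hashCode() % ring.size()); }

    // s401–s405: write to the main node's main DB plus the standby's backup DB;
    // fall back to the standby's backup and temporary DBs; last resort, any node.
    void write(String key, String value) {
        int n = mainIndex(key);
        Node main = ring.get(n), standby = ring.get((n + 1) % ring.size());
        if (main.available) {                          // s403
            main.mainDb.put(key, value);
            standby.backupDb.put(key, value);
        } else if (standby.available) {                // s404
            standby.backupDb.put(key, value);
            standby.tempDb.put(key, value);
        } else {                                       // s405
            for (Node other : ring)
                if (other.available) { other.mainDb.put(key, value); return; }
        }
    }

    // s406–s410: read from main, else from the standby's backup DB, else any node.
    String read(String key) {
        int n = mainIndex(key);
        Node main = ring.get(n), standby = ring.get((n + 1) % ring.size());
        if (main.available) return main.mainDb.get(key);         // s408
        if (standby.available) return standby.backupDb.get(key); // s409
        for (Node other : ring)                                  // s410
            if (other.available) return other.mainDb.get(key);
        return null;
    }
}
```

Because every successful write lands on two nodes, a read still succeeds immediately after the main node fails, which is the behaviour the fault test in the embodiment relies on.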
Preferably, the client includes a load-balancing processing module, and the "load-balancing core class consistenthash algorithm" in step s3 includes:
the load-balancing processing module receives the parameter key through the murmurhash2 algorithm, and generates and returns a hash code;
the load-balancing processing module receives a node s and an initial parameter key through a void_addnode algorithm, generates multiple hash codes from the parameter key, maps all of these hash codes to the node s, and stores the mapping in a treemap;
the load-balancing processing module receives the parameter key through an s_getclosestnode algorithm, generates a hash code from the key, and finds and returns the nearest node using the tailmap algorithm of the treemap.
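The three operations above map naturally onto Java's TreeMap, which the text's treemap and tailmap appear to refer to. The sketch below is a hedged reconstruction: the hash function is a simple stand-in for murmurhash2, and the virtual-node (replica) count is an arbitrary choice.

```java
import java.util.*;

public class HashRingSketch {
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    // Stand-in for murmurhash2: any reasonably well-mixing hash of the key
    // serves the sketch; the real module would use MurmurHash2 here.
    static int hash(String key) {
        int h = 0;
        for (char c : key.toCharArray()) h = h * 31 + c;
        return h * 0x9E3779B9; // extra bit mixing
    }

    // addnode: derive several hash codes from the node's unique "ip:port"
    // information and map them all to the same physical node; the virtual
    // replicas smooth the load around the ring.
    void addNode(String node, int replicas) {
        for (int i = 0; i < replicas; i++) ring.put(hash(node + "#" + i), node);
    }

    // getclosestnode: the first ring position at or after the key's hash code;
    // when tailMap is empty, wrap to the smallest entry — this closes the ring.
    String getClosestNode(String key) {
        Map.Entry<Integer, String> e = ring.tailMap(hash(key), true).firstEntry();
        return (e != null ? e : ring.firstEntry()).getValue();
    }
}
```

The lookup is deterministic, so the same key always resolves to the same physical node until the ring membership changes — the "dynamic and stable" hashing the background section asks for.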
Preferably, the server cluster monitoring module includes the information required to connect to the redis servers, a table variable showing node availability states, and a strategy for detecting whether a redis server is available.
Preferably, the strategy for detecting whether a redis server is available includes: initializing the server cluster monitoring module and establishing connections to the redis servers according to the information;
detecting whether a node is alive through the client's ping method, then checking through the client's set method whether data can be stored normally; if both checks pass, the server is returned as available, otherwise the server is returned as unavailable;
calling pingonce () n times in succession, recording the ratio of successful to failed calls, and returning it;
the server cluster monitoring module first calls pingonce () once; if the returned result is consistent with the state held by the server cluster monitoring module, the availability state of the redis server is unchanged and the detection result is returned; if the first detection result is inconsistent with the state held by the server cluster monitoring module, checkstateratio is called to judge, the result returned by checkstateratio prevails, and the detection result is returned.
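The detection strategy above — a cheap single ping, escalating to an n-probe majority vote only when the result disagrees with the last known state — can be sketched as follows. The probe is modelled as a caller-supplied predicate so the sketch stays self-contained; the real module would issue redis ping and trial set commands.

```java
import java.util.function.BooleanSupplier;

public class HealthCheckSketch {
    // pingonce: one probe of the server (ping plus a trial set in the text).
    static boolean pingOnce(BooleanSupplier probe) { return probe.getAsBoolean(); }

    // checkstateratio: probe n times and let the success ratio decide; the
    // majority vote damps jitter from a single dropped packet or slow reply.
    static boolean checkStateRatio(BooleanSupplier probe, int n) {
        int ok = 0;
        for (int i = 0; i < n; i++) if (pingOnce(probe)) ok++;
        return ok * 2 > n;
    }

    // detect: keep the last known state unless a single cheap ping disagrees,
    // and only then pay for the full ratio check.
    static boolean detect(BooleanSupplier probe, boolean lastKnownState, int n) {
        if (pingOnce(probe) == lastKnownState) return lastKnownState;
        return checkStateRatio(probe, n);
    }
}
```

This two-tier structure is presumably what the text means by a "probability-based" detection algorithm that is anti-jitter yet still real-time: the steady state costs one probe, and only a suspected transition costs n.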
Preferably, the hash ring includes the redis nodes, the mapping table from redis node labels to the redis nodes themselves, and the maximum and minimum labels of the cluster.
Preferably, the method further comprises the step of: the server cluster monitoring module monitors the cluster servers and sends the monitoring result to the zookeeper core processor; this step is implemented as follows:
reading the zedis cluster configuration, establishing the detection task corresponding to each physical node, and sending any detected availability change to the client;
initializing the cluster: the constructor of the server cluster monitoring module receives the cluster information of the client, and the server cluster monitoring module reads the cluster information according to a fixed configuration specification and initializes;
the server cluster monitoring module constructs a thread inner-class redisping task for each distinct physical node it reads, and calls the ping method of the "detecting whether a redis server is available" strategy to detect the availability of the physical node;
judging from the monitoring result whether the availability of a physical node has changed: if a physical node changes from available to unavailable, the zookeeper core processor configuration is changed and the client is notified; if a physical node changes from unavailable to available, data recovery is performed first, and only then is the zookeeper core processor configuration changed and the client notified.
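The transition rules above can be condensed into a small sketch. The two interfaces standing in for the zookeeper core processor and the data recovery module are hypothetical names introduced only for illustration; the point the code makes is the ordering constraint — a recovering node has its data restored before clients are told it is usable again.

```java
public class MonitorTransitionSketch {
    interface Coordinator { void updateConfigAndNotify(String node, boolean up); }
    interface Recovery { void recover(String node); }

    static void onDetection(String node, boolean wasUp, boolean isUp,
                            Coordinator zk, Recovery recovery) {
        if (wasUp == isUp) return;                  // no availability change
        if (!isUp) {                                // available -> unavailable
            zk.updateConfigAndNotify(node, false);  // fence it off immediately
        } else {                                    // unavailable -> available
            recovery.recover(node);                 // data recovery comes first
            zk.updateConfigAndNotify(node, true);   // only then re-admit the node
        }
    }
}
```

Going down needs no recovery, so the notification is immediate; coming back up reverses the order, which prevents clients from reading a node whose databases have not yet been refilled.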
Preferably, step s5 is implemented as follows:
the fault transfer processing module generates a hash code from the key parameter to find the main node, then finds the standby node of the main node, and performs the same write operation on both: if the main node is n, the standby node is n+1, and the write is performed in the main database space of the main node together with the backup database space of the standby node.
Preferably, the "fault-node avoidance" step of step s5 is implemented as follows:
the fault transfer processing module intercepts the data of the interface call through a java agent and finds the main node by the key parameter; it judges whether the main node is available: if available, data is exchanged with the data storage server on the main node; if the main node is unavailable, data is exchanged between the standby node and the data storage server; if both the main node and the standby node are unavailable, an available node is found among the remaining physical nodes to exchange data with the data storage server.
Preferably, the "data recovery" of step s5 is implemented as follows:
judging whether fault-node avoidance has completed; if it has, data recovery is carried out, otherwise fault-node avoidance continues;
finding the main node and standby node relative to the recovered node: assuming the main database is n, the recovery node is n+2 and is the target node, the main node is n+1, and the standby node is n+3; data is recovered from the temporary database space of the standby node to the main database space of the target node, the temporary database space of the standby node is then emptied, and data is recovered from the main database space of the main node to the backup database space of the target node.
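A hedged sketch of the recovery data movement just described. The source's index arithmetic is hard to read literally, so mapping the "main node" and "standby node" to the ring predecessor and successor of the recovering node is a simplifying assumption; the three copy/clear operations themselves follow the text.

```java
import java.util.*;

public class RecoverySketch {
    static class Node {
        final Map<String, String> mainDb = new HashMap<>();
        final Map<String, String> backupDb = new HashMap<>();
        final Map<String, String> tempDb = new HashMap<>();
    }

    // Recover the node at ring index target (assumption: its "main node" is the
    // ring predecessor and its "standby node" is the ring successor).
    static void recover(List<Node> ring, int target) {
        int size = ring.size();
        Node t = ring.get(target);
        Node main = ring.get((target - 1 + size) % size);
        Node standby = ring.get((target + 1) % size);
        // 1. Refill the target's main DB from the standby's temporary DB, which
        //    buffered the writes made while the target was down (step s404).
        t.mainDb.putAll(standby.tempDb);
        // 2. Empty the standby's temporary DB.
        standby.tempDb.clear();
        // 3. Rebuild the target's backup DB from its main node's main DB, since
        //    the target acts as the backup for that node.
        t.backupDb.putAll(main.mainDb);
    }
}
```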
With the above method, the server judging module reads the complete information required by the redis server cluster from the server cluster monitoring module; the server judging module sends the read information to the client; the client reads the complete information required by the redis server cluster and extracts, from the complete information of each physical node, the information that is unique with respect to the other physical nodes, including the ip address and port number; the client generates a fixed set of keys from the received information, generates the corresponding hash codes with the load-balancing core class consistenthash algorithm, maps all hash codes generated from the same information to the same physical node, and fills the consistenthash core variable mapping table with the hash-code-to-physical-node mapping to form a hash ring; the hash ring is connected with the data storage server to complete the read and write processes with data backup and fault avoidance; and the fault transfer processing module, the server cluster monitoring module and the zookeeper core processor achieve failover through data backup, fault-node avoidance and data recovery. With minimal configuration, this zedis distributed caching method achieves cluster high availability and automatic management, and realizes data migration, load balancing, and dynamic stability. Its automatic failover mechanism is based on monitoring: using the distributed coordination mechanism of the zookeeper core processor, the cluster is monitored, so that automatic real-time failover and data recovery are realized. The server cluster monitoring module uses a probability-based fault detection algorithm, which damps jitter while guaranteeing real-time behaviour, and the consistent hashing algorithm is extended and optimized to realize automatic data replication and automatic recovery of failed nodes.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall model of a zedis distributed caching method according to the present invention;
Fig. 2 is a flowchart of the implementation of the zedis distributed caching method;
Fig. 3 is a schematic diagram of the fault-scenario log in the zedis distributed caching method;
Fig. 4 is a schematic diagram of the server-cluster-monitoring-scenario log in the zedis distributed caching method;
Fig. 5 is a schematic diagram of the client-usage-scenario log in the zedis distributed caching method.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
Embodiment 1
Referring to Fig. 1 and Fig. 2: Fig. 1 is a schematic diagram of the overall model of a zedis distributed caching method according to the present invention; Fig. 2 is the flowchart of the implementation of the zedis distributed caching method of Fig. 1.
The invention discloses a zedis distributed caching method. A zookeeper core processor, a server cluster monitoring module, a node data processing module, a data recovery module, a client, a server side and a data storage server are provided; the zookeeper core processor includes a server judging module and a server display module. The zedis distributed caching method comprises the following steps:
s1: the server judging module reads the complete information required by the redis server cluster from the server cluster monitoring module;
s2: the server judging module sends the read information to the client; the client reads the complete information required by the redis server cluster and extracts, from the complete information of each physical node, the information that is unique with respect to the other physical nodes, where this information includes the ip address and port number;
s3: the client generates a fixed set of keys from the received information, generates the corresponding hash codes with the load-balancing core class consistenthash algorithm, maps all hash codes generated from the same information to the same physical node, and fills the consistenthash core variable mapping table with the hash-code-to-physical-node mapping to form a hash ring;
s4: the hash ring is connected with the data storage server to complete the read and write processes with data backup and fault avoidance;
s5: the fault transfer processing module, the server cluster monitoring module and the zookeeper core processor achieve failover through data backup, fault-node avoidance and data recovery;
the step "the write process with data backup and fault avoidance" is implemented as follows:
s401: the read-write proxy module finds the main node through the hash ring according to the key parameter, and thereby finds the standby node;
s402: judge whether the main node is available; if the main node is available, execute step s403; if the main node is unavailable but the standby node is available, execute step s404; if both the main node and the standby node are unavailable, execute step s405;
s403: write to the main database of the main node and to the backup database of the standby node;
s404: write to the backup database and the temporary database of the standby node;
s405: write to the database of another available node;
the step "the read process with data backup and fault avoidance" is implemented as follows:
s406: the read-write proxy module obtains the main node and the backup node according to the key parameter;
s407: judge whether the main node is available; if the main node is available, execute step s408; if the main node is unavailable but the standby node is available, execute step s409; if both the main node and the standby node are unavailable, execute step s410;
s408: read from the main node;
s409: read from the standby node;
s410: read from another node;
wherein the data storage layout of the data storage server is as shown in the table below:
wherein the main node is the nearest physical node found by the load-balancing core class consistenthash algorithm, and the standby node is the next node after the main node.
The client includes a load-balancing processing module, and the "load-balancing core class consistenthash algorithm" in step s3 includes:
the load-balancing processing module receives the parameter key through the murmurhash2 algorithm, and generates and returns a hash code;
the load-balancing processing module receives a node s and an initial parameter key through a void_addnode algorithm, generates multiple hash codes from the parameter key, maps all of these hash codes to the node s, and stores the mapping in a treemap;
the load-balancing processing module receives the parameter key through an s_getclosestnode algorithm, generates a hash code from the parameter key, and finds and returns the nearest node using the tailmap algorithm of the treemap.
The server cluster monitoring module includes the information required to connect to the redis servers, a table variable showing node availability states, and a strategy for detecting whether a redis server is available.
The strategy for detecting whether a redis server is available includes: initializing the server cluster monitoring module and establishing connections to the redis servers according to the information;
detecting whether a node is alive through the client's ping method, then checking through the client's set method whether data can be stored normally; if both checks pass, the server is returned as available, otherwise the server is returned as unavailable;
calling pingonce () n times in succession, recording the ratio of successful to failed calls, and returning it;
the server cluster monitoring module first calls pingonce () once; if the returned result is consistent with the state held by the server cluster monitoring module, the availability state of the redis server is unchanged and the detection result is returned; if the first detection result is inconsistent with the state held by the server cluster monitoring module, checkstateratio is called to judge, the result returned by checkstateratio prevails, and the detection result is returned.
In the present embodiment, the hash ring includes the redis nodes, the mapping table from redis node labels to the redis nodes themselves, and the maximum and minimum labels of the cluster.
In the present embodiment, the zedis distributed caching method further comprises the step of: the server cluster monitoring module monitors the cluster servers and sends the monitoring result to the zookeeper core processor; this step is implemented as follows:
reading the zedis cluster configuration, establishing the detection task corresponding to each physical node, and sending any detected availability change to the client;
initializing the cluster: the constructor of the server cluster monitoring module receives the cluster information of the client, and the server cluster monitoring module reads the cluster information according to a fixed configuration specification and initializes;
the server cluster monitoring module constructs a thread inner-class redisping task for each distinct physical node it reads, and calls the ping method of the "detecting whether a redis server is available" strategy to detect the availability of the physical node;
judging from the monitoring result whether the availability of a physical node has changed: if a physical node changes from available to unavailable, the zookeeper core processor configuration is changed and the client is notified; if a physical node changes from unavailable to available, data recovery is performed first, and only then is the zookeeper core processor configuration changed and the client notified.
Step s5 is implemented as follows:
the fault transfer processing module generates a hash code from the key parameter to find the main node, then finds the standby node of the main node, and performs the same write operation on both: if the main node is n, the standby node is n+1, and the write is performed in the main database space of the main node together with the backup database space of the standby node.
The "fault-node avoidance" step of step s5 is implemented as follows:
the fault transfer processing module intercepts the data of the interface call through a java agent and finds the main node by the key parameter; it judges whether the main node is available: if available, data is exchanged with the data storage server on the main node; if the main node is unavailable, data is exchanged between the standby node and the data storage server; if both the main node and the standby node are unavailable, an available node is found among the remaining physical nodes to exchange data with the data storage server.
The "data recovery" of step s5 is implemented as follows:
judging whether fault-node avoidance has completed; if it has, data recovery is carried out, otherwise fault-node avoidance continues;
finding the main node and standby node relative to the recovered node: assuming the main database is n, the recovery node is n+2 and is the target node, the main node is n+1, and the standby node is n+3; data is recovered from the temporary database space of the standby node to the main database space of the target node, the temporary database space of the standby node is then emptied, and data is recovered from the main database space of the main node to the backup database space of the target node.
The simulation test process and some test results of this embodiment are shown below.
(1) Fault scenario
The fault test simulates a fault scenario: several redis servers are started locally by a program, and according to a certain configuration, server processes are continually terminated and restarted, with only one server terminated at a time. The log printed by the program for this server fault scenario is shown in Fig. 3.
(2) Server cluster monitoring
After the fault scenario is set up, the server cluster monitoring module is started; it detects redis server availability, rewrites the zookeeper core processor configuration and notifies the client, and is responsible for data recovery. The log of the server cluster monitoring module handling servers going offline is shown in Fig. 4.
(3) Client usage scenario
As shown in the log of Fig. 5, under the fault scenario and with the assistance of the server cluster monitoring module, a program is started to simulate a client using zedis, continually reading and writing redis cluster data in a certain proportion and printing in real time the percentage of successful reads and writes. Under this fault scenario the write success rate can reach 100%, which is precisely the effect of the client read-write simulation.
With the above method, the server judging module reads the complete information required by the redis server cluster from the server cluster monitoring module; the server judging module sends the read information to the client; the client reads the complete information required by the redis server cluster and extracts, from the complete information of each physical node, the information that is unique with respect to the other physical nodes, including the ip address and port number; the client generates a fixed set of keys from the received information, generates the corresponding hash codes with the load-balancing core class consistenthash algorithm, maps all hash codes generated from the same information to the same physical node, and fills the consistenthash core variable mapping table with the hash-code-to-physical-node mapping to form a hash ring; the hash ring is connected with the data storage server to complete the read and write processes with data backup and fault avoidance; and the fault transfer processing module, the server cluster monitoring module and the zookeeper core processor achieve failover through data backup, fault-node avoidance and data recovery. With minimal configuration, this zedis distributed caching method achieves cluster high availability and automatic management, and realizes data migration, load balancing, and dynamic stability. Its automatic failover mechanism is based on monitoring: using the distributed coordination mechanism of the zookeeper core processor, the cluster is monitored, so that automatic real-time failover and data recovery are realized. The server cluster monitoring module uses a probability-based fault detection algorithm, which damps jitter while guaranteeing real-time behaviour, and the consistent hashing algorithm is extended and optimized to realize automatic data replication and automatic recovery of failed nodes.
It should be noted that the above are only preferred embodiments of the present invention and do not thereby limit the patent scope of the present invention; any equivalent structure or equivalent process made using the contents of the description and drawings of the present invention, whether used directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.

Claims (9)

1. a kind of zedis distributed caching method it is characterised in that: provide zookeeper core processor, server cluster prison Control module, node data processing module, data recovery module, client, service end and data storage server, described Zookeeper core processor includes server judge module and server display module, described zedis distributed caching method Comprise the following steps:
S1: the server judgment module reads the complete information required by the redis server cluster from the server cluster monitoring module;
S2: the server judgment module sends the read information to the client, and the client extracts, from the complete information of each physical node, the information that is unique relative to the other physical nodes, wherein said information includes the IP address and port number;
S3: the client generates a fixed set of keys from the received information and generates the corresponding hash codes through the consistentHash algorithm of the load-balancing core class; the hash codes generated from the same information are all mapped to the same physical node, and the mappings of hash codes to physical nodes fill the kernel variable mapping table of consistentHash to form the hash ring;
S4: the hash ring is connected with the data storage server to complete the read and write processes with data backup and fault avoidance;
S5: the failover processing module, the server cluster monitoring module and the zookeeper core processor realize failover through data backup, avoidance of failed nodes and data recovery;
the implementation of said step "the write process with data backup and fault avoidance" includes:
S401: the read-write proxy module finds the host node through the hash ring according to the key parameter, and thereby finds the backup node;
S402: judge whether the host node is available: if the host node is available, execute step S403; if the host node is unavailable but the backup node is available, execute step S404; if both the host node and the backup node are unavailable, execute step S405;
S403: write to the main database of the host node and to the standby database of the backup node;
S404: write to the standby database and the temporary database of the backup node;
S405: write to the database of another available node;
the implementation of said step "the read process with data backup and fault avoidance" includes:
S406: the read-write proxy module obtains the host node and the backup node according to the key parameter;
S407: judge whether the host node is available: if the host node is available, execute step S408; if the host node is unavailable but the backup node is available, execute step S409; if both the host node and the backup node are unavailable, execute step S410;
S408: read from the host node;
S409: read from the backup node;
S410: read from another available node;
wherein the data storage scheme of said data storage server is as shown in the table below:
wherein said host node is the nearest physical node found through the consistentHash algorithm of the load-balancing core class, and the backup node is the next node after the host node.
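The three-way branching of steps S402 and S407 can be sketched as one decision function. This is an illustrative reconstruction, not code from the patent; the class, enum and method names are invented for the example.

```java
// Sketch of the claim-1 fallback decision (steps S402/S407): given the
// availability of the host node and the backup node, choose where the read
// or write goes. Names are illustrative, not from the patent.
public class FallbackRoute {
    enum Target { HOST, BACKUP, OTHER_AVAILABLE }

    static Target choose(boolean hostUp, boolean backupUp) {
        if (hostUp) return Target.HOST;          // S403 / S408
        if (backupUp) return Target.BACKUP;      // S404 / S409
        return Target.OTHER_AVAILABLE;           // S405 / S410
    }

    public static void main(String[] args) {
        System.out.println(choose(true, true));   // HOST
        System.out.println(choose(false, true));  // BACKUP
        System.out.println(choose(false, false)); // OTHER_AVAILABLE
    }
}
```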
2. The zedis distributed caching method according to claim 1, characterized in that: the client includes a load-balancing processing module, and "the consistentHash algorithm of the load-balancing core class" in said step S3 includes:
the load-balancing processing module receives the parameter key, generates a hash code through the murmurhash2 algorithm, and returns it;
the load-balancing processing module receives a node s and an initial parameter key through the void_addnode algorithm, generates multiple hash codes from the parameter key, maps all of the hash codes to said node s, and stores the mapping in a treemap;
the load-balancing processing module receives the parameter key through the s_getclosestnode algorithm, generates a hash code from the key and, using the tailmap algorithm of the treemap, finds the nearest node and returns it.
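The three routines of this claim — hashing, addNode-style registration of multiple hash codes per node, and nearest-node lookup via tailMap — might be sketched in Java as follows. The class and method names loosely mirror the claim but are assumptions, the replica count is arbitrary, and a simple FNV-1a hash stands in for MurmurHash2.

```java
import java.util.*;

// Sketch of the claim-2 routing class: several hash codes per physical node
// stored in a TreeMap, lookup via tailMap. A 64-bit FNV-1a hash stands in
// for the MurmurHash2 algorithm named in the claim.
public class ConsistentHashSketch {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private final int replicas;

    public ConsistentHashSketch(int replicas) { this.replicas = replicas; }

    // Stand-in hash (the patent uses MurmurHash2).
    static long hash(String key) {
        long h = 0xcbf29ce484222325L;            // FNV-1a offset basis
        for (byte b : key.getBytes()) { h ^= b & 0xff; h *= 0x100000001b3L; }
        return h;
    }

    // Map several hash codes generated from the node's unique information
    // (e.g. ip:port) onto the same physical node.
    public void addNode(String node) {
        for (int i = 0; i < replicas; i++)
            ring.put(hash(node + "#" + i), node);
    }

    // Find the nearest node clockwise from the key's hash code, wrapping
    // around to the first entry when the tail map is empty.
    public String getClosestNode(String key) {
        if (ring.isEmpty()) return null;
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ConsistentHashSketch ch = new ConsistentHashSketch(100);
        ch.addNode("10.0.0.1:6379");
        ch.addNode("10.0.0.2:6379");
        String n = ch.getClosestNode("user:42");
        System.out.println("user:42 -> " + n);
        // The same key always routes to the same physical node:
        System.out.println(n.equals(ch.getClosestNode("user:42"))); // true
    }
}
```

Because each node is registered under many hash codes, removing or adding one node only remaps the keys that fell on its ring positions, which is the property the claim relies on for data migration.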
3. The zedis distributed caching method according to claim 1, characterized in that: the server cluster monitoring module includes the information required to connect to the redis server, a table showing the availability state of each node, and a strategy for detecting whether the redis server is available.
4. The zedis distributed caching method according to claim 3, characterized in that: the strategy for detecting whether the redis server is available includes: initializing the server cluster monitoring module and establishing a connection with the redis server according to the information;
detecting whether the node is alive through the ping method of the client, then verifying through the set method of the client whether data can be stored normally: if all checks pass, the server is returned as available, otherwise the server is returned as unavailable;
calling pingonce() n consecutive times, recording the ratio of successful to failed calls, and returning it;
the server cluster monitoring module first calls pingonce() once: if the returned result is consistent with the state held by the server cluster monitoring module, the availability state of the redis server is unchanged and the detection result is returned; if the first detection result is inconsistent with the server cluster monitoring module, checkstateratio is called to make the judgment, the result returned by checkstateratio prevails, and the detection result is returned.
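The anti-jitter detection of this claim — a single pingOnce() confirms an unchanged state, while a disagreeing result escalates to a ratio check over n pings — might be sketched as follows. The probe function, sample count and 0.5 threshold are assumptions; a real implementation would issue PING and a test SET against redis rather than call a supplied function.

```java
import java.util.function.BooleanSupplier;

// Sketch of the claim-4 probabilistic detector. A single ping that agrees
// with the cached state is accepted; a disagreeing ping triggers
// checkStateRatio(), which pings n times and lets the success ratio decide.
public class AvailabilityDetector {
    private final BooleanSupplier probe;  // stands in for ping + set check
    private final int n;                  // assumed sample count
    private boolean lastState = true;     // cached availability state

    public AvailabilityDetector(BooleanSupplier probe, int n) {
        this.probe = probe; this.n = n;
    }

    boolean pingOnce() { return probe.getAsBoolean(); }

    // Ping n consecutive times and return the observed success ratio.
    double checkStateRatio() {
        int ok = 0;
        for (int i = 0; i < n; i++) if (pingOnce()) ok++;
        return ok / (double) n;
    }

    // Anti-jitter detection: one ping confirms, a ratio check overrides.
    public boolean detect() {
        boolean once = pingOnce();
        if (once != lastState)                    // disagreement: escalate
            lastState = checkStateRatio() > 0.5;  // assumed threshold
        return lastState;
    }

    public static void main(String[] args) {
        AvailabilityDetector d = new AvailabilityDetector(() -> true, 5);
        System.out.println(d.detect());  // healthy probe -> true
    }
}
```

A single dropped packet therefore cannot flip the cached state, while a genuinely failed node flips it after one round of n pings, which is how the claim balances anti-jitter against real-time detection.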
5. The zedis distributed caching method according to claim 3, characterized in that: the hash ring includes the redis nodes, a mapping table from redis node labels to the redis nodes themselves, and the maximum and minimum labels of the cluster.
6. The zedis distributed caching method according to claim 3, characterized by further comprising the steps of: the server cluster monitoring module monitors the cluster servers and sends the monitoring result to the zookeeper core processor, the implementation of this step including:
reading the zedis cluster configuration, establishing a detection task corresponding to each physical node, and sending any detected change in availability to the client;
initializing the cluster: the constructor of the server cluster monitoring module receives the cluster information of the client, and the server cluster monitoring module reads said cluster information according to a fixed configuration specification and initializes;
the server cluster monitoring module constructs a redisping task as a thread inner class for each physical node it reads, and calls the ping method of the strategy of "detecting whether the redis server is available" to detect the availability of the physical node;
according to the monitoring result, judging whether the availability of a physical node has changed: if a physical node changes from available to unavailable, the zookeeper core processor configuration is changed and the client is notified; if a physical node changes from unavailable to available, data recovery is performed first, and only then is the zookeeper core processor configuration changed and said client notified.
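A per-node detection task of the kind this claim describes might look like the following sketch. The listener interface and the decision to report only transitions are assumptions; the real module would update the zookeeper core processor configuration (and trigger data recovery on an unavailable-to-available transition) rather than invoke a callback.

```java
import java.util.function.BooleanSupplier;

// Sketch of the claim-6 per-node detection task: one runnable per physical
// node that reports availability transitions. The Listener stands in for
// the zookeeper configuration update and the client notification.
public class RedisPingTask implements Runnable {
    interface Listener { void onChange(String node, boolean available); }

    private final String node;
    private final BooleanSupplier ping;   // stands in for the claim-4 strategy
    private final Listener listener;
    private volatile boolean lastState = true;

    RedisPingTask(String node, BooleanSupplier ping, Listener listener) {
        this.node = node; this.ping = ping; this.listener = listener;
    }

    @Override public void run() {
        boolean now = ping.getAsBoolean();
        if (now != lastState) {            // availability transition detected
            lastState = now;
            listener.onChange(node, now);  // notify zookeeper + clients
        }
    }

    public static void main(String[] args) {
        RedisPingTask t = new RedisPingTask("10.0.0.1:6379", () -> false,
            (n, up) -> System.out.println(n + " -> " + (up ? "UP" : "DOWN")));
        t.run();  // prints "10.0.0.1:6379 -> DOWN"
    }
}
```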
7. The zedis distributed caching method according to claim 1, characterized in that the implementation of said step S5 includes:
the failover processing module generates a hash code from the key parameter and finds the host node, then finds the backup node of the host node and performs the same write operation on both: with host node = n, the backup node = n + 1, and the write operation is carried out in the main database space of the host node together with the standby database space of the backup node.
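The replicated write of this claim (host node = n, backup node = n + 1) can be sketched with in-memory maps standing in for the per-node database spaces. The Node class and field names are invented for the example.

```java
import java.util.*;

// Sketch of the claim-7 replicated write: the same value is written to the
// host node's main database space and the backup node's standby database
// space, with backup = host + 1 (mod cluster size).
public class DualWrite {
    static class Node {
        final Map<String, String> mainDb = new HashMap<>();
        final Map<String, String> standbyDb = new HashMap<>();
    }

    static void write(List<Node> cluster, int n, String key, String value) {
        Node host = cluster.get(n);
        Node backup = cluster.get((n + 1) % cluster.size()); // backup = n + 1
        host.mainDb.put(key, value);       // host's main database space
        backup.standbyDb.put(key, value);  // backup's standby database space
    }

    public static void main(String[] args) {
        List<Node> cluster = Arrays.asList(new Node(), new Node(), new Node());
        write(cluster, 0, "user:42", "alice");
        System.out.println(cluster.get(0).mainDb.get("user:42"));    // alice
        System.out.println(cluster.get(1).standbyDb.get("user:42")); // alice
    }
}
```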
8. The zedis distributed caching method according to claim 1, characterized in that the implementation of the step "avoiding the failed node" of said step S5 includes:
the failover processing module intercepts the data of the interface call through a java agent and finds the host node by the key parameter, then judges whether the host node is available: if available, the data exchange with the data storage server is carried out on the host node; if the host node is unavailable, the data exchange with the data storage server is carried out on the backup node; if both the host node and the backup node are unavailable, an available node is found among the remaining physical nodes to carry out the data exchange with the data storage server.
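The node selection this claim describes — host first, then its backup, then a scan of the remaining physical nodes — might be sketched as follows. The availability predicate stands in for the state held by the monitoring module; returning -1 when nothing is available is an assumption.

```java
import java.util.function.IntPredicate;

// Sketch of the claim-8 avoidance: try the host node found by the key, then
// its backup (host + 1), then any other available physical node.
public class NodeSelector {
    static int select(int host, int nodeCount, IntPredicate available) {
        if (available.test(host)) return host;       // host available
        int backup = (host + 1) % nodeCount;
        if (available.test(backup)) return backup;   // fall back to backup
        for (int i = 0; i < nodeCount; i++)          // scan remaining nodes
            if (i != host && i != backup && available.test(i)) return i;
        return -1;                                   // no node available
    }

    public static void main(String[] args) {
        // Nodes 0 and 1 down, node 2 up:
        System.out.println(select(0, 4, n -> n == 2)); // 2
    }
}
```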
9. The zedis distributed caching method according to claim 1, characterized in that the implementation of the step "data recovery" of said step S5 includes:
judging whether the avoidance of the failed node has been completed: if completed, data recovery is carried out; otherwise the avoidance of the failed node continues;
finding the host node and the backup node relative to the node that has recovered to normal: assuming the main database node = n, the recovered node = n + 2 and is the target node, the host node = n + 1 and the backup node = n + 3; data is recovered from the temporary database space of the backup node to the main database space of the target node, after which the temporary database space of the backup node is emptied, and data is recovered from the main database space of the host node to the standby database space of the target node.
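The index arithmetic and the two copy steps of this claim can be sketched with in-memory maps standing in for the database spaces. The Node class and the modular indexing are illustrative assumptions.

```java
import java.util.*;

// Sketch of the claim-9 recovery: with main database node = n, the recovered
// (target) node = n + 2, host = n + 1, backup = n + 3. Data moves from the
// backup's temporary space to the target's main space (the temporary space
// is then cleared), and from the host's main space to the target's standby
// space.
public class Recovery {
    static class Node {
        final Map<String, String> mainDb = new HashMap<>();
        final Map<String, String> standbyDb = new HashMap<>();
        final Map<String, String> tempDb = new HashMap<>();
    }

    static void recover(List<Node> cluster, int n) {
        int size = cluster.size();
        Node target = cluster.get((n + 2) % size); // recovered node
        Node host = cluster.get((n + 1) % size);
        Node backup = cluster.get((n + 3) % size);

        target.mainDb.putAll(backup.tempDb);   // temp space -> target main
        backup.tempDb.clear();                 // empty the temporary space
        target.standbyDb.putAll(host.mainDb);  // host main -> target standby
    }

    public static void main(String[] args) {
        List<Node> cluster = new ArrayList<>();
        for (int i = 0; i < 4; i++) cluster.add(new Node());
        cluster.get(3).tempDb.put("k", "v");   // writes absorbed during outage
        cluster.get(1).mainDb.put("k2", "v2");
        recover(cluster, 0);
        System.out.println(cluster.get(2).mainDb.get("k"));     // v
        System.out.println(cluster.get(2).standbyDb.get("k2")); // v2
        System.out.println(cluster.get(3).tempDb.isEmpty());    // true
    }
}
```

Restoring the temporary space first and the standby space second means the recovered node becomes a valid write target before it resumes its role as a backup for its neighbor.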
CN201610854537.3A 2016-09-27 2016-09-27 zedis distributed type buffer method Pending CN106357449A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610854537.3A CN106357449A (en) 2016-09-27 2016-09-27 zedis distributed type buffer method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610854537.3A CN106357449A (en) 2016-09-27 2016-09-27 zedis distributed type buffer method

Publications (1)

Publication Number Publication Date
CN106357449A true CN106357449A (en) 2017-01-25

Family

ID=57859969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610854537.3A Pending CN106357449A (en) 2016-09-27 2016-09-27 zedis distributed type buffer method

Country Status (1)

Country Link
CN (1) CN106357449A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107517266A (en) * 2017-09-05 2017-12-26 江苏电力信息技术有限公司 A kind of instant communication method based on distributed caching
CN108011744A (en) * 2017-08-17 2018-05-08 北京车和家信息技术有限责任公司 Obtain the method and device of key
CN109213792A (en) * 2018-07-06 2019-01-15 武汉斗鱼网络科技有限公司 Method, server-side, client, device and the readable storage medium storing program for executing of data processing
CN109766222A (en) * 2019-01-22 2019-05-17 郑州云海信息技术有限公司 A kind of method and system for realizing web browser two-node cluster hot backup
CN110109620A (en) * 2019-04-25 2019-08-09 上海淇毓信息科技有限公司 Mix storage method, device and electronic equipment
CN110113406A (en) * 2019-04-29 2019-08-09 成都网阔信息技术股份有限公司 Based on distributed calculating service cluster frame
CN110351313A (en) * 2018-04-02 2019-10-18 武汉斗鱼网络科技有限公司 Data cache method, device, equipment and storage medium
CN111107120A (en) * 2018-10-29 2020-05-05 亿阳信通股份有限公司 Redis cluster construction method and system
CN111628899A (en) * 2019-02-27 2020-09-04 北京京东尚科信息技术有限公司 Method, device and system for drawing network interconnection and intercommunication condition between servers
CN111639061A (en) * 2020-05-26 2020-09-08 深圳壹账通智能科技有限公司 Data management method, device, medium and electronic equipment in Redis cluster
CN112861185A (en) * 2021-03-31 2021-05-28 中国工商银行股份有限公司 Data automatic deformation transmission method based on Hive data warehouse
CN112866035A (en) * 2021-02-24 2021-05-28 紫光云技术有限公司 Method for switching specified slave node into master node of redis service on cloud platform
CN114125059A (en) * 2021-10-11 2022-03-01 国电南瑞科技股份有限公司 Monitoring real-time data caching system and method based on container
CN114448850B (en) * 2021-12-21 2023-11-03 天翼云科技有限公司 Dialing control method, electronic equipment and dialing control system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8244678B1 (en) * 2008-08-27 2012-08-14 Spearstone Management, LLC Method and apparatus for managing backup data
CN104199957A (en) * 2014-09-17 2014-12-10 合一网络技术(北京)有限公司 Redis universal agent implementation method
CN104461783A (en) * 2014-12-10 2015-03-25 上海爱数软件有限公司 Virtual machine backup method by tracking sector data change

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曾泉匀 (Zeng Quanyun): "一种Redis集群管理的设计方案" [A Design Scheme for Redis Cluster Management], 《中国科技论文在线》 (Sciencepaper Online) *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108011744A (en) * 2017-08-17 2018-05-08 北京车和家信息技术有限责任公司 Obtain the method and device of key
CN107517266A (en) * 2017-09-05 2017-12-26 江苏电力信息技术有限公司 A kind of instant communication method based on distributed caching
CN110351313A (en) * 2018-04-02 2019-10-18 武汉斗鱼网络科技有限公司 Data cache method, device, equipment and storage medium
CN110351313B (en) * 2018-04-02 2022-02-22 武汉斗鱼网络科技有限公司 Data caching method, device, equipment and storage medium
CN109213792A (en) * 2018-07-06 2019-01-15 武汉斗鱼网络科技有限公司 Method, server-side, client, device and the readable storage medium storing program for executing of data processing
CN109213792B (en) * 2018-07-06 2021-11-09 武汉斗鱼网络科技有限公司 Data processing method, server, client, device and readable storage medium
CN111107120B (en) * 2018-10-29 2022-09-02 亿阳信通股份有限公司 Redis cluster construction method and system
CN111107120A (en) * 2018-10-29 2020-05-05 亿阳信通股份有限公司 Redis cluster construction method and system
CN109766222A (en) * 2019-01-22 2019-05-17 郑州云海信息技术有限公司 A kind of method and system for realizing web browser two-node cluster hot backup
CN111628899A (en) * 2019-02-27 2020-09-04 北京京东尚科信息技术有限公司 Method, device and system for drawing network interconnection and intercommunication condition between servers
CN111628899B (en) * 2019-02-27 2022-07-05 北京京东尚科信息技术有限公司 Method, device and system for drawing network interconnection and intercommunication condition between servers
CN110109620B (en) * 2019-04-25 2023-08-04 上海淇毓信息科技有限公司 Hybrid storage method and device and electronic equipment
CN110109620A (en) * 2019-04-25 2019-08-09 上海淇毓信息科技有限公司 Mix storage method, device and electronic equipment
CN110113406B (en) * 2019-04-29 2022-04-08 成都网阔信息技术股份有限公司 Distributed computing service cluster system
CN110113406A (en) * 2019-04-29 2019-08-09 成都网阔信息技术股份有限公司 Based on distributed calculating service cluster frame
CN111639061A (en) * 2020-05-26 2020-09-08 深圳壹账通智能科技有限公司 Data management method, device, medium and electronic equipment in Redis cluster
CN111639061B (en) * 2020-05-26 2023-03-17 深圳壹账通智能科技有限公司 Data management method, device, medium and electronic equipment in Redis cluster
CN112866035A (en) * 2021-02-24 2021-05-28 紫光云技术有限公司 Method for switching specified slave node into master node of redis service on cloud platform
CN112861185A (en) * 2021-03-31 2021-05-28 中国工商银行股份有限公司 Data automatic deformation transmission method based on Hive data warehouse
CN114125059A (en) * 2021-10-11 2022-03-01 国电南瑞科技股份有限公司 Monitoring real-time data caching system and method based on container
CN114125059B (en) * 2021-10-11 2023-08-25 国电南瑞科技股份有限公司 Container-based monitoring real-time data caching system and method
CN114448850B (en) * 2021-12-21 2023-11-03 天翼云科技有限公司 Dialing control method, electronic equipment and dialing control system

Similar Documents

Publication Publication Date Title
CN106357449A (en) zedis distributed type buffer method
CN106210151A (en) A kind of zedis distributed caching and server cluster monitoring method
CN105933137B (en) A kind of method for managing resource, apparatus and system
CN107590072B (en) Application development and test method and device
CN103718535B (en) The alleviation of hardware fault
CN108270726B (en) Application instance deployment method and device
CN108023967B (en) Data balancing method and device and management equipment in distributed storage system
CN108959385B (en) Database deployment method, device, computer equipment and storage medium
CN108351806A (en) Database trigger of the distribution based on stream
CN109313564A (en) For supporting the server computer management system of the highly usable virtual desktop of multiple and different tenants
CN106446168B (en) A kind of load client realization method of Based on Distributed data warehouse
CN104035836A (en) Automatic disaster tolerance recovery method and system in cluster retrieval platform
CN109587258A (en) Activating method and device are visited in a kind of service
CN109151028A (en) A kind of distributed memory system disaster recovery method and device
CN113946276B (en) Disk management method, device and server in cluster
WO2021112908A1 (en) Barriers for dependent operations among sharded data stores
CN107707644A (en) Processing method, device, storage medium, processor and the terminal of request message
CN115080436B (en) Test index determining method and device, electronic equipment and storage medium
CN104657240B (en) The Failure Control method and device of more kernel operating systems
CN110471767B (en) Equipment scheduling method
CN106210101B (en) Message management system and information management method
CN115134424B (en) Load balancing method, load balancing device, computer equipment, storage medium and program product
US11474794B2 (en) Generating mock services based on log entries
US10481963B1 (en) Load-balancing for achieving transaction fault tolerance
CN108551484B (en) User information synchronization method, device, computer device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170125