CN105933391B - Node expansion method, apparatus and system - Google Patents

Node expansion method, apparatus and system

Info

Publication number
CN105933391B
CN105933391B (application CN201610222111.6A)
Authority
CN
China
Prior art keywords
node
cache system
distributed cache
new service node
master node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610222111.6A
Other languages
Chinese (zh)
Other versions
CN105933391A (en)
Inventor
吴连朋
于芝涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Poly Polytron Technologies Inc
Original Assignee
Poly Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Poly Polytron Technologies Inc filed Critical Poly Polytron Technologies Inc
Priority to CN201610222111.6A priority Critical patent/CN105933391B/en
Publication of CN105933391A publication Critical patent/CN105933391A/en
Application granted granted Critical
Publication of CN105933391B publication Critical patent/CN105933391B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/50 - Network services
    • H04L67/56 - Provisioning of proxy services
    • H04L67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 - Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/06 - Management of faults, events, alarms or notifications
    • H04L41/0654 - Management of faults, events, alarms or notifications using network fault recovery
    • H04L41/0663 - Performing the actions predefined by failover planning, e.g. switching to standby network elements
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present invention provide a node expansion method, apparatus and system, relating to the field of wireless communication technology and applied in a distributed cache system comprising at least one service node and N physical servers, where each service node includes a master node and at least one slave node. Based on received first indication information, a new service node is added to the distributed cache system and a target service node is selected from the distributed cache system; the new service node includes a master node and at least one slave node, the master node and the slave node being deployed on different physical servers. The in-memory data on the target service node is sent to the master node of the new service node. A first request message is sent to a cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit. Cache invalidation and data loss are thereby avoided.

Description

Node expansion method, apparatus and system
Technical field
Embodiments of the present invention relate to the field of wireless communication technology, and in particular to a node expansion method, apparatus and system.
Background technique
In the internet industry, data access has strict real-time requirements, data volumes are large, and concurrency pressure is high, so many systems deploy distributed cache clusters to meet business needs. A cache cluster distributes data accesses across its cache nodes by applying a preset algorithm (for example, a hash strategy) to the identifiers of all cache nodes.
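The identifier-hashing distribution described above can be sketched as follows. This is a minimal illustration, not code from the patent; the node names and the choice of MD5 are hypothetical, since the patent only names a hash strategy as one example of the preset algorithm.

```python
import hashlib

def pick_node(key, nodes):
    """Map a cache key to one of the cache nodes by hashing its identifier."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["node-1", "node-2", "node-3"]
# The same key always routes to the same node while cluster membership is stable.
assert pick_node("user:42", nodes) == pick_node("user:42", nodes)
```

The modulo step is what ties the mapping to the current node count, which is exactly what makes naive expansion disruptive, as the next paragraph explains.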
When the business needs the support of more cache nodes, the existing cache cluster must be expanded. In the prior art, a new cache node is added directly to the existing cluster, but doing so changes the cluster's hash strategy: part of the data access that was originally mapped to existing nodes may now be mapped to the new cache node, which holds none of the original data. If a storage system is deployed behind the cache cluster, the data accesses assigned to the new cache node fall through to the storage system, producing an access peak that can easily bring the storage system down. If no storage system is deployed behind the cache cluster, the data is simply lost, causing business losses.
Summary of the invention
Embodiments of the present invention provide a node expansion method, apparatus and system, at least to solve the problems of cache invalidation and data loss that occur when a new service node is added to a distributed cache system.
To achieve the above objectives, embodiments of the present invention adopt the following technical solutions.
In a first aspect, an embodiment of the present invention provides a node expansion method applied in a distributed cache system. The distributed cache system includes at least one service node and N physical servers, where N ≥ 2. Each service node includes one master node and at least one slave node; the master node and the slave nodes of the same service node are deployed on different physical servers, and the slave nodes of the same service node are themselves deployed on different physical servers. The method includes:
S101: based on received first indication information, adding a new service node to the distributed cache system and selecting a target service node from the distributed cache system, where the new service node includes a master node and at least one slave node, the master node and the slave node being deployed on different physical servers;
S102: sending the in-memory data on the target service node to the master node of the new service node;
S103: sending a first request message to a cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit.
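Steps S101 to S103 can be sketched as a single flow. All type and variable names here are hypothetical illustrations of the claim language; a real apparatus would move the in-memory data over the network rather than in process.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceNode:
    name: str
    master_server: str     # physical server hosting the master node
    slave_servers: list    # distinct physical servers hosting the slave nodes
    data: dict = field(default_factory=dict)

def expand(cluster, proxy_config, new_node, target_name):
    # S101: add the new service node and select the target service node.
    assert new_node.master_server not in new_node.slave_servers
    target = next(n for n in cluster if n.name == target_name)
    cluster.append(new_node)
    # S102: send the target's in-memory data to the new master node.
    new_node.data.update(target.data)
    # S103: first request message, so the proxy refreshes its routing table.
    proxy_config[new_node.name] = new_node.master_server

cluster = [ServiceNode("node-1", "server-1", ["server-2"], {"k": "v"})]
proxy = {"node-1": "server-1"}
expand(cluster, proxy, ServiceNode("node-2", "server-3", ["server-4"]), "node-1")
assert proxy["node-2"] == "server-3"
```

The assertion inside `expand` mirrors the deployment constraint that the master and slaves of one service node never share a physical server.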
With reference to the first aspect, in a first possible implementation of the first aspect, before adding the new service node to the distributed cache system based on the received first indication information, the method further includes:
S104: if it is determined that the distributed cache system meets a node expansion condition, sending first prompt information, the first prompt information prompting that a new service node should be added to the distributed cache system. In this way, it can be learned in time whether the distributed cache system needs a new service node, improving the overall performance of the system and avoiding the loss and invalidation of in-memory data.
With reference to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are faulty nodes, and the new service node includes M slave nodes, where M ≤ N. Correspondingly, adding the new service node to the distributed cache system includes:
deploying a first master node on a first physical server, and choosing M physical servers from the N−1 physical servers other than the first physical server;
deploying the M slave nodes on the M physical servers respectively, one slave node per physical server. In this way, when there is a faulty node in the distributed cache system, a new service node can be added in time, avoiding the invalidation and loss of in-memory data; meanwhile, because the master node and slave nodes of the new service node are deployed on different physical servers, a slave node can take over as master and serve user access requests when the new service node's master fails or when the physical server hosting it fails.
With reference to the first possible implementation of the first aspect, in a third possible implementation of the first aspect, the first request message instructs the cache proxy unit to delete the configuration of the target service node from the cache proxy unit, and to change the identifier of the new service node in the cache proxy unit to the identifier of the target service node. In this way, when the target service node fails, the configuration of the new service node is updated in the cache proxy unit in time, so that when the cache proxy unit receives a new access request, the in-memory data it requests can be fetched promptly from the service node.
With reference to the first possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the node expansion condition is that the user request volume in the distributed cache system is greater than a first preset threshold, or that the memory usage of the distributed cache system is greater than a second preset threshold;
sending the in-memory data on the target service node to the master node of the new service node includes:
according to the number of new service nodes the user has added to the distributed cache system, the memory of the newly added node, and the memory of each service node in the distributed cache system, selecting at least one service node from the distributed cache system as the target service node;
migrating the master node of the target service node from a second physical server to a third physical server, and migrating all slave nodes of the target service node to the physical servers where the slave nodes of the newly added node are located, where the second physical server is the physical server where the master node of the target service node is located and the third physical server is the physical server where the master node of the new service node is located;
migrating the in-memory data of the target service node to the master node of the new service node, so that after the migration the user request volume in the distributed cache system is less than the first preset threshold or the memory usage of the distributed cache system is less than the second preset threshold. In this way, when the memory usage or the user request volume of the distributed cache system exceeds its threshold, a new service node can be added in time to share the load of the existing service nodes and improve the performance of the distributed cache system; and when the new service node's master fails, or the physical server hosting it fails, a slave node can take over as master and serve user access requests.
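The selection of target service nodes under this expansion condition might look like the following greedy sketch. The sizing rule is an assumption for illustration: the patent says only that the choice considers the newly added node's memory and the memory of each service node, without fixing an algorithm.

```python
def choose_targets(node_memory, new_capacity):
    """Pick the most loaded service nodes whose data fits on the new node.

    node_memory maps a service node name to its in-memory data size;
    new_capacity is the memory of the newly added node. Greedy, largest first.
    """
    picked, used = [], 0
    for name, size in sorted(node_memory.items(), key=lambda kv: kv[1], reverse=True):
        if used + size <= new_capacity:
            picked.append(name)
            used += size
    return picked

# "a" (4 units) fits, "b" (2) would overflow, "c" (1) still fits.
assert choose_targets({"a": 4, "b": 2, "c": 1}, 5) == ["a", "c"]
```

Relieving the largest nodes first is one plausible way to bring request volume and memory usage back under the thresholds, but any policy meeting the claim's constraints would do.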
With reference to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, after migrating the in-memory data of the target service node to the master node of the new service node, the method further includes:
judging whether the user request volume in the distributed cache system is less than a third preset threshold, or whether the memory usage of the distributed cache system is less than a fourth preset threshold;
if it is determined that the user request volume in the distributed cache system is less than the third preset threshold, or that the memory usage of the distributed cache system is less than the fourth preset threshold, executing step S104. In this way, the performance of the distributed cache system can be further improved.
With reference to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the first request message carries a second port and a second IP address;
the first request message specifically instructs the cache proxy unit to change the first IP address and first port of the target service node to the second IP address and second port, where the first IP address and first port are the address and port recorded in the cache proxy unit for the target service node's in-memory data before migration, and the second IP address and second port are the address and port of that in-memory data on the master node of the new service node after migration. When the distributed cache system meets this expansion condition, the target service node's in-memory data corresponds to a new IP address and access port after migration. If the cache proxy is not updated in time, then when the cache proxy unit receives a new access request for that in-memory data, routing it by the pre-migration first IP address and first port would fail to retrieve the data from the distributed cache system. Changing the first IP address and first port of the target service node to the second IP address and second port ensures that when a new access request targets the in-memory data of the target service node, the requested data can be returned to it in time.
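The configuration update carried by the first request message amounts to rewriting one routing entry in the proxy's table. A minimal sketch follows; the addresses and the dict layout are hypothetical, not the format of any real proxy.

```python
def apply_first_request(config, node_name, second_ip, second_port):
    """Replace the node's first IP/port with the second IP/port after migration."""
    config[node_name] = {"ip": second_ip, "port": second_port}

# Before migration the proxy routes target-node traffic to the first address.
config = {"target-node": {"ip": "10.0.0.1", "port": 6379}}
apply_first_request(config, "target-node", "10.0.0.9", 6380)
assert config["target-node"] == {"ip": "10.0.0.9", "port": 6380}
```

Any request arriving after this update is routed to the new master directly, which is what prevents the stale-address cache misses described above.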
In a second aspect, an embodiment of the present invention provides a node expansion apparatus applied in a distributed cache system. The distributed cache system includes at least one service node and N physical servers, where N ≥ 2. Each service node includes one master node and at least one slave node; the master node and the slave nodes of the same service node are deployed on different physical servers, and the slave nodes of the same service node are themselves deployed on different physical servers. The apparatus comprises:
an execution unit, configured to add, based on received first indication information, a new service node to the distributed cache system and select a target service node from the distributed cache system, where the new service node includes a master node and at least one slave node deployed on different physical servers;
a first sending unit, configured to send the in-memory data on the target service node to the master node of the new service node;
a second sending unit, configured to send a first request message to a cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit.
With reference to the second aspect, in a first possible implementation of the second aspect, the apparatus further includes:
a first judging unit, configured to judge whether the distributed cache system meets a node expansion condition;
a third sending unit, configured to send first prompt information after the first judging unit determines that the distributed cache system meets the node expansion condition, the first prompt information prompting the user to add a new service node to the distributed cache system.
With reference to the second aspect, in a second possible implementation of the second aspect, the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are faulty nodes, and the new service node includes M slave nodes, where M ≤ N. The execution unit includes at least a deployment module, the deployment module being specifically configured to:
deploy a first master node on a first physical server, and choose M physical servers from the N−1 physical servers other than the first physical server;
deploy the M slave nodes on the M physical servers respectively, one slave node per physical server.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the first request message instructs the cache proxy unit to delete the configuration of the target service node from the cache proxy unit, and to change the identifier of the new service node in the cache proxy unit to the identifier of the target service node.
With reference to the second aspect, in a fourth possible implementation of the second aspect, the node expansion condition is that the user request volume in the distributed cache system is greater than a first preset threshold, or that the memory usage of the distributed cache system is greater than a second preset threshold;
the first sending unit includes:
a selection module, configured to select at least one service node from the distributed cache system as the target service node according to the number of new service nodes the user has added to the distributed cache system, the memory of the newly added node, and the memory of each service node in the distributed cache system;
a node migration module, configured to migrate the master node of the target service node from a second physical server to a third physical server, and to migrate all slave nodes of the target service node to the physical servers where the slave nodes of the newly added node are located, where the second physical server is the physical server where the master node of the target service node is located and the third physical server is the physical server where the master node of the new service node is located;
a memory migration module, configured to migrate the in-memory data of the target service node to the master node of the new service node, so that after the migration the user request volume in the distributed cache system is less than the first preset threshold or the memory usage of the distributed cache system is less than the second preset threshold.
With reference to the fourth possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the apparatus further includes a second judging unit,
the second judging unit being specifically configured to judge whether the user request volume in the distributed cache system is less than a third preset threshold, or whether the memory usage of the distributed cache system is less than a fourth preset threshold, and to invoke the third sending unit when the user request volume in the distributed cache system is less than the third preset threshold or the memory usage of the distributed cache system is less than the fourth preset threshold.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the first request message carries a second port and a second IP address;
the first request message specifically instructs the cache proxy unit to change the first IP address and first port of the target service node to the second IP address and second port, where the first IP address and first port are the address and port recorded in the cache proxy unit for the target service node's in-memory data before migration, and the second IP address and second port are the address and port of that in-memory data on the master node of the new service node after migration.
In a third aspect, an embodiment of the present invention provides a distributed cache system, including a Linux virtual server, at least one cache proxy unit, and at least one node expansion apparatus as described in any of the above.
An embodiment of the present invention provides a node expansion method: based on received first indication information, a new service node is added to the distributed cache system and a target service node is selected from the distributed cache system. The new service node includes a master node and at least one slave node. Because a slave node synchronizes the in-memory data on the master node in real time, the node expansion apparatus can automatically switch a slave node to master when the master node fails and continue serving access requests. The master node and the slave node are deployed on different physical servers, so that when the physical server hosting the master node fails, a slave node can serve the access requests of the distributed cache system. The in-memory data on the target service node is sent to the master node of the new service node, so that when a master node fails or system memory runs low, a slave node can replace the master and the loss of in-memory data is avoided in time. A first request message is sent to the cache proxy unit, at least instructing the cache proxy unit to update its configuration information. In this way, after a new service node is added to the distributed cache system, an access request sent by the administrator can promptly retrieve the in-memory data it indicates, according to the configuration information in the cache proxy unit.
Brief description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1a is a first application architecture diagram of a node expansion method provided by an embodiment of the present invention;
Fig. 1b is a second application architecture diagram of a node expansion method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a node expansion method provided by an embodiment of the present invention;
Fig. 3 is a third application architecture diagram of a node expansion method provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a node expansion apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1a and Fig. 1b show an application architecture of a node expansion method according to an embodiment of the present invention. As shown in Fig. 1a and Fig. 1b, the application architecture includes a Linux virtual server, at least one cache proxy unit and at least one node expansion apparatus, where the node expansion apparatus manages at least one service node and N physical servers, N ≥ 2. Each service node includes one master node and at least one slave node; the master node and slave nodes of the same service node are deployed on different physical servers, and multiple slave nodes of the same service node are deployed on different physical servers. An administrator sends an access request to the LVS (Linux Virtual Server). Using IP load-balancing techniques and content-based request distribution, the LVS transfers the administrator's access requests evenly to different physical servers; after load balancing, it forwards each access request to a back-end cache proxy unit (for example, Twemproxy). After receiving the access request forwarded by the LVS, the cache proxy unit selects, according to its configuration strategy, a service node from the back-end distributed cache system to serve the administrator's access request.
The cache proxy unit of this embodiment of the present invention is connected to the node expansion apparatus and has at least the following functions:
receiving access requests sent by the administrator;
selecting, according to the configuration strategy for the access request, a master service node from the distributed cache system to provide the service;
monitoring in real time the request messages sent by the node expansion apparatus;
upon receiving a first request message sent by the node expansion apparatus, which contains the change message of the master service node and the change message of the service node, updating in time, according to the first request message, the configuration information of the failed service node and of the new service node in the cache proxy unit, updating the distributed cache system in real time, and sending the updated configuration to the new master node;
automatically enabling read/write splitting when one or more slave nodes of a service node are configured: write requests are sent to the master node, and read requests are sent to the slave nodes by polling.
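The read/write splitting rule just described (writes to the master, reads polled across the slaves) can be sketched as a small router. The class and node names are hypothetical; this is not the Twemproxy implementation.

```python
import itertools

class ReadWriteSplitter:
    """Route writes to the master node; poll reads round-robin over the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # endless round-robin iterator

    def route(self, op):
        return self.master if op == "write" else next(self._slaves)

r = ReadWriteSplitter("master-1", ["slave-1", "slave-2"])
assert r.route("write") == "master-1"
assert [r.route("read") for _ in range(3)] == ["slave-1", "slave-2", "slave-1"]
```

Because every slave synchronizes the master's in-memory data in real time, spreading reads this way adds capacity without risking stale routing for writes.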
The service nodes store in-memory data and respond to the access requests sent by the administrator. A service node includes a master node and at least one slave node. The master node receives the read or write data requests sent by the cache proxy unit: when the access request sent by the cache proxy unit is a write request, the master node writes the in-memory data carried in the access request into its memory store; if the access request sent by the cache proxy unit is a read request, the master node reads, from its memory store, the in-memory data pointed to by the target data address carried in the access request and returns that data to the cache proxy unit, so that the cache proxy unit can return it to the administrator.
A slave node synchronizes in real time the in-memory data on the master node of the same service node. When the access request sent by the cache proxy unit is a read request, a slave node can also handle it: the slave node reads from memory the in-memory data pointed to by the access request and returns that data in its response to the cache proxy unit.
The physical servers provide the hardware environment for deploying service nodes. One or more different service nodes can be deployed on one physical server, but the master and slave nodes of the same service node must be deployed on different physical servers. Physical servers can be divided into master servers and slave servers: a master server is a physical server on which a master node is deployed, and a slave server is a physical server on which slave nodes belonging to the same service node as that master node are deployed.
The node flash chamber is connect with caching agent unit, and the node flash chamber is at least with the following functions:
For being interacted by API or application program with user, for example, can receive the first instruction of user's transmission Information, at the same to user show or the first prompt information of application program;
Whether operated normally for constantly checking primary server, from server, host node and from node;
For being sent to administrator or application program when some monitored primary server, host node break down First prompt information.For example, sending the first prompt information to user or other applications by API.
For when a primary server cisco unity malfunction, by the primary server that fails one of them from server updating For new primary server, and other of failure primary server is allowed to be changed to replicate new primary server from server;That one When the host node of service node breaks down, upgrade to host node from service node for one of the service node;Simultaneously when one The primary server of a service node and when breaking down from server, can be from the distributed cache system according to user The instruction information of input chooses destination service node, and the internal storage data on the service node of failure is sent to target clothes It is engaged on node.
The node expansion apparatus may be implemented as a Sentinel unit in the distributed cache system.
A single node expansion apparatus may be deployed in a distributed cache system to monitor all service nodes; alternatively, multiple node expansion apparatuses may be deployed to monitor all service nodes jointly, or to monitor one or more particular service nodes respectively.
As shown in Fig. 1b, in this embodiment of the present invention each service node includes one master node and at least one slave node. The master node stores in-memory data and deploys instances; the slave node synchronizes the master node's in-memory data in real time. The master node and slave nodes of each service node are deployed on different physical servers. For example, service node 1 shown in Fig. 1b includes master node 1 and slave node 1, where master node 1 is deployed on physical server 1 and slave node 1 on physical server 2. In this way, when the server hosting the master node fails, the slave node can still serve the access requests issued by users.
The purpose of giving each service node one master node and at least one slave node is to avoid the memory failure and data loss that would otherwise occur when the master node fails. Further, because the master node and slave nodes of the same service node are deployed on different physical servers, even when the master node fails, or when the physical server hosting the master node fails, the slave node, which synchronizes the master node's in-memory data in real time, can serve the user's access requests.
Referring to Fig. 2, Fig. 2 shows a node expansion method provided by an embodiment of the present invention, applied to the distributed cache system shown in Fig. 1a and Fig. 1b. The method includes:
S101: the node expansion apparatus, based on received first indication information, adds a new service node to the distributed cache system and chooses a target service node from the distributed cache system; the new service node includes one master node and at least one slave node, the master node and the slave nodes being deployed on different physical servers;
S102: the node expansion apparatus sends the in-memory data on the target service node to the master node of the new service node;
S103: the node expansion apparatus sends a first request message to the cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit.
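Merely as an illustrative sketch, steps S101 to S103 can be expressed as the following control flow. All names (`add_service_node`, `choose_target_node`, `update_config` and so on) are hypothetical stand-ins, not an API disclosed by the patent:

```python
def expand(cache_system, proxy, indication):
    """Sketch of S101-S103: add a new service node, copy the target
    node's in-memory data, then have the cache proxy refresh its
    configuration (duck-typed; all names are illustrative)."""
    # S101: create the new service node (one master, >=1 slaves, each
    # on a distinct physical server) and pick the node to offload.
    new_node = cache_system.add_service_node(indication)
    target = cache_system.choose_target_node()
    # S102: ship the target's in-memory data to the new master node.
    new_node.master.load(target.dump_memory())
    # S103: first request message - proxy updates its routing config.
    proxy.update_config(old=target, new=new_node)
    return new_node
```

The ordering matters: the proxy configuration is only updated after the data copy, so requests routed by the old configuration still find their data during the copy.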
This embodiment of the present invention provides a node expansion method: based on received first indication information, a new service node is added to the distributed cache system and a target service node is chosen from it. The new service node includes one master node and at least one slave node. Because the slave nodes synchronize the master node's in-memory data in real time, they can serve the access requests of the distributed cache system when the master node fails; and because the master node and slave nodes are deployed on different physical servers, they can likewise serve access requests when the physical server hosting the master node fails. The in-memory data on the target service node is sent to the master node of the new service node, so that when a master node fails or system memory runs short, a slave node can take over as master node and the loss of in-memory data is avoided in time. A first request message is sent to the cache proxy unit, at least instructing it to update its configuration information. Thus, after a new service node is added to the distributed cache system, an access request received from the administrator can promptly obtain, according to the configuration information in the cache proxy unit, the in-memory data the request indicates.
In this embodiment each service node uses a master-slave deployment: each service node deploys one master node and at least one slave node. The slave nodes synchronize the master node's in-memory data in real time; the master node turns persistence off, and persistence is performed by the slave nodes.
For scenarios with very high reliability requirements, in order to improve the reliability of master-slave synchronization of in-memory data, each service node in this embodiment is configured with two or more slave nodes, and when a configured slave node fails, the master node refuses the administrator's write operations.
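This write-refusal safeguard resembles one that Redis itself exposes. Assuming a Redis-based deployment (an assumption, not stated by the patent), comparable behavior can be approximated with the following configuration; the directive names are Redis's own, while the values are illustrative:

```conf
# Refuse writes on the master unless at least 2 replicas are connected
# and none lags by more than 10 seconds (values are illustrative).
min-replicas-to-write 2
min-replicas-max-lag 10
```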
This embodiment does not limit the way in which the node expansion apparatus receives the first indication information. For example, the administrator or an application program may send the first indication information to the node expansion apparatus after the apparatus has sent them the first prompt information; alternatively, the administrator or application program may itself detect that the distributed cache system meets a node expansion condition and then send the first indication information to the node expansion apparatus.
Illustratively, so that the distributed cache system can learn in time whether a new service node needs to be added, improving overall system performance and avoiding loss and failure of in-memory data, this embodiment further includes, before step S101 is executed:
S104: if it is determined that the distributed cache system meets a node expansion condition, sending a first prompt message, the first prompt message prompting that a new service node should be added to the distributed cache system.
Preferably, the node expansion apparatus sends the first prompt information to the administrator or to an application program.
This embodiment does not limit the concrete form of the first prompt information. It may be a voice alert, for example presented to the administrator through an alerting device connected to the distributed cache system. It may also be written text shown to the administrator through a display window and selected by the administrator, so that when the administrator agrees to establish a new service node in the distributed cache system, the node expansion apparatus establishes the new service node. The first prompt information may equally be sent to an application program, for example a monitoring program running in the distributed cache system that monitors the load, memory resources and so on of the whole distributed cache system in real time and, when a configured threshold is reached, raises an alert to an alarm page through an alarm interface. After viewing the alarm page the administrator can confirm that system expansion is needed.
This embodiment does not limit the threshold, which can be selected as needed.
The administrator or application program is connected to the node expansion apparatus of the distributed cluster system through an API interface.
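Merely as an illustrative sketch, the monitoring check behind the first prompt message reduces to a threshold comparison of this shape (field and parameter names are hypothetical; the thresholds are deployment-specific, as the embodiment states):

```python
def expansion_needed(stats, first_threshold, second_threshold):
    """Return True when either the user request volume or the memory
    occupancy crosses its configured threshold (illustrative check)."""
    return (stats["user_requests"] > first_threshold
            or stats["memory_usage"] > second_threshold)
```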
Further and optionally, because there are multiple node expansion conditions in the distributed cache system, different expansion conditions correspond to different ways of adding a new service node, to different contents and functions of the first request message sent to the cache proxy unit, and to different ways of sending in-memory data to the new service node; this embodiment does not limit these. Illustratively, the node expansion condition in this embodiment is either that the master node and all slave nodes of a first service node in the distributed cache system are failed nodes, or that the user request volume of the distributed cache system exceeds a first preset threshold, or that the memory occupancy of the distributed cache system exceeds a second preset threshold. These cases are described separately below.
In one implementation, when the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are failed nodes, the new service node includes M slave nodes, where M is less than or equal to N, and the target service node is the failed service node. In this way, when a failed node exists in the distributed cache system, a new service node can be added in time, avoiding failure and loss of in-memory data. Meanwhile, by deploying the master node and slave nodes of the new service node on different physical servers, when the master node of the new service node fails, or when the physical server hosting it fails, a slave node can take over from the master node and provide the corresponding service for the user's access requests.
Merely as an example, step S101 can be implemented as follows:
S1011A: deploy a first master node on a first physical server, and choose M physical servers from the N-1 physical servers other than the first physical server among the N physical servers;
S1012A: deploy the M slave nodes on the M physical servers respectively, one slave node per physical server.
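Steps S1011A and S1012A can be sketched as the following placement routine. It is illustrative only: `servers` stands for the list of N physical server identifiers, and the selection of the M servers is shown as a simple prefix choice, whereas the embodiment leaves the choice open:

```python
def place_new_service_node(servers, m):
    """S1011A/S1012A sketch: the first server hosts the new master;
    the M slaves go on M distinct servers chosen from the remaining
    N-1 (one slave per physical server)."""
    if m > len(servers) - 1:
        raise ValueError("need M <= N-1 spare physical servers")
    master_host, spare = servers[0], servers[1:]
    slave_hosts = spare[:m]            # one slave per physical server
    return master_host, slave_hosts
```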
Illustratively, if the node expansion condition is that the master node and slave nodes corresponding to a first service node in the distributed cache system are failed nodes, then, so that the in-memory data stored on the first service node is not lost, the node expansion apparatus sends the first prompt information to the administrator or application program. If, after receiving it, the administrator or application program agrees to establish a new service node in the distributed cache system, it sends the first indication information to the node expansion apparatus. According to the first indication information, the apparatus selects one physical server in the distributed cache system and deploys the first master node on it, chooses M physical servers from the N-1 physical servers other than the first one, and deploys the M slave nodes on those M physical servers respectively. The administrator can of course decide, according to the importance of the in-memory data stored on the target service node, how many slave nodes to deploy in the distributed storage system; this embodiment does not limit this, and the user can choose as needed.
Further and optionally, when the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are failed nodes, the first request message instructs the cache proxy unit to delete the configuration of the target service node from the cache proxy unit and to change the identifier of the new service node in the cache proxy unit to the identifier of the target service node. In this way, when the target service node fails, the configuration information of the new service node in the cache proxy unit can be updated in time, so that when the cache proxy unit receives a new access request, the in-memory data the request accesses can be obtained from the service node promptly.
Further and optionally, after step S1012A the method further includes:
S1013A: sending the in-memory data on the target service node to the master node deployed on the first physical server.
To guarantee that in-memory data is not lost, to guarantee the reliability of master-slave replication, and to improve reliability when both master and slaves fail, the master node periodically, according to a predetermined period, synchronizes the in-memory data stored on the first master node to the other slave nodes of the newly added service node.
In another implementation, the node expansion condition is that the user request volume in the distributed cache system exceeds the first preset threshold or the memory occupancy of the distributed cache system exceeds the second preset threshold. Illustratively, step S102 can be implemented as follows:
S1021: according to the number of new service nodes the user has added to the distributed cache system, the memory of the newly added nodes, and the memory of each service node in the distributed cache system, choosing at least one service node from the distributed cache system as the target service node;
S1022: migrating the master node of the target service node from a second physical server to a third physical server, and migrating all slave nodes of the target service node, respectively, to the physical servers hosting the slave nodes of the newly added node; where the second physical server is the physical server hosting the master node of the target service node and the third physical server is the physical server hosting the master node of the new service node;
S1023: migrating the in-memory data of the target service node to the master node of the new service node, so that after the migration the user request volume in the distributed cache system is below the first preset threshold or the memory occupancy of the distributed cache system is below the second preset threshold.
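The embodiment leaves the selection policy of step S1021 open. One plausible heuristic, sketched below with hypothetical field names, is to offload the most memory-loaded service nodes first, bounded by the total memory the newly added nodes can absorb; this is an assumption for illustration, not a policy the patent prescribes:

```python
def choose_target_nodes(service_nodes, new_node_memory, new_node_count):
    """S1021 sketch: pick migration targets, most-loaded first, within
    the memory budget the newly added service nodes bring."""
    budget = new_node_memory * new_node_count
    ranked = sorted(service_nodes, key=lambda n: n["mem_used"], reverse=True)
    targets = []
    for node in ranked:
        if node["mem_used"] <= budget:
            targets.append(node)
            budget -= node["mem_used"]
    return targets or ranked[:1]       # always offload at least one node
```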
In the scenario shown in Fig. 1b, when the user request volume in the distributed cache system exceeds the first preset threshold or the memory occupancy of the distributed cache system exceeds the second preset threshold, the method provided by this embodiment can migrate some of the master or slave nodes on one physical server to another physical server, for example migrating master node 2 to physical server 2 and slave node 4 to physical server 2. It should be noted that master and slave nodes located on the same physical server but belonging to different service nodes may, after migration, be located either on the same physical server or on different physical servers; this embodiment does not limit this. However, the master node and slave nodes belonging to the same service node must still be located on different physical servers after migration. Take master node 1 and slave node 1 in Fig. 3, which belong to the same service node: before migration, as shown in Fig. 1b, master node 1 and slave node 1 are located on physical server 1 and physical server 2; after migration, master node 1 is located on physical server 1 and slave node 1 on physical server 3.
This embodiment does not limit the concrete values of the first and second preset thresholds. They may be configured by the administrator in the application program, which sends the configured first and second preset thresholds to the node expansion apparatus. The node expansion apparatus obtains in time the user request volume and memory occupancy in the distributed cache system and compares them with the first and second preset thresholds; when a threshold is exceeded, the node expansion apparatus sends the first prompt information to the application program or administrator.
Further and optionally, after step S1023 the method further includes:
S1024: judging whether the user request volume in the distributed cache system is below a third preset threshold, or whether the memory occupancy of the distributed cache system is below a fourth preset threshold;
S1025: if it is determined that the user request volume in the distributed cache system is below the third preset threshold or the memory occupancy of the distributed cache system is below the fourth preset threshold, executing step S104. This guarantees that the distributed cache system migrates its in-memory data losslessly to the new service node in time, avoiding cache invalidation and data loss.
Further and optionally, when the node expansion condition is that the user request volume in the distributed cache system exceeds the first preset threshold or the memory occupancy of the distributed cache system exceeds the second preset threshold, the first request message carries a second port and a second IP address. The first request message specifically instructs the cache proxy unit to change the first IP address and first port of the target service node to the second IP address and second port. Here, the first IP address and first port are the address and port recorded in the cache proxy unit for the target service node before its in-memory data is migrated, and the second IP address and second port are the address and port of the in-memory data on the master node of the new service node after the target service node's in-memory data has been migrated there.
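From the cache proxy's side, applying this first request message amounts to swapping one routing entry. A minimal sketch, with hypothetical names and a plain dict standing in for the proxy's configuration information:

```python
def apply_first_request_message(routing, node_id, second_ip, second_port):
    """Migration-case sketch: replace the target node's first
    (IP, port) with the new master's second (IP, port)."""
    first_ip, first_port = routing[node_id]
    routing[node_id] = (second_ip, second_port)
    return first_ip, first_port        # previous address, e.g. for logging
```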
As shown in Fig. 4, an embodiment of the present invention provides a node expansion apparatus applied in a distributed cache system. The distributed cache system includes at least one service node and N physical servers; each service node includes one master node and at least one slave node, where the master node and slave nodes of the same service node are deployed on different physical servers, and the at least one slave node of the same service node is deployed on different physical servers; N >= 2. The node expansion apparatus 40 includes:
an execution unit 401, configured to add, based on received first indication information, a new service node to the distributed cache system and choose a target service node from the distributed cache system, the new service node including one master node and at least one slave node, the master node and slave nodes being deployed on different physical servers;
a first sending unit 402, configured to send the in-memory data on the target service node to the master node of the new service node;
a second sending unit 403, configured to send a first request message to the cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit.
This embodiment of the present invention provides a node expansion apparatus that, based on received first indication information, adds a new service node to the distributed cache system and chooses a target service node from it. The new service node includes one master node and at least one slave node: because the slave nodes synchronize the master node's in-memory data in real time, they can serve the access requests of the distributed cache system when the master node fails, and because the master node and slave nodes are deployed on different physical servers, they can likewise serve access requests when the physical server hosting the master node fails. The in-memory data on the target service node is sent to the master node of the new service node, so that when a master node fails or system memory runs short, a slave node can take over as master node and the loss of in-memory data is avoided in time. A first request message is sent to the cache proxy unit, at least instructing it to update its configuration information, so that after a new service node is added to the distributed cache system, an access request sent by the administrator can promptly obtain the indicated in-memory data according to the configuration information in the cache proxy unit.
Further and optionally, the apparatus further includes:
a first judging unit, configured to judge whether the distributed cache system meets a node expansion condition;
a third sending unit, configured to send, after the first judging unit determines that the distributed cache system meets the node expansion condition, a first prompt message prompting the user to add a new service node to the distributed cache system.
Further and optionally, when the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are failed nodes, the new service node includes M slave nodes, where M is less than or equal to N, and the execution unit includes at least a deployment module specifically configured to:
deploy a first master node on a first physical server, and choose M physical servers from the N-1 physical servers other than the first physical server among the N physical servers;
deploy the M slave nodes on the M physical servers respectively, one slave node per physical server.
Further and optionally, the first request message instructs the cache proxy unit to delete the configuration of the target service node from the cache proxy unit and to change the identifier of the new service node in the cache proxy unit to the identifier of the target service node.
Further and optionally, the node expansion condition is that the user request volume in the distributed cache system exceeds the first preset threshold or the memory occupancy of the distributed cache system exceeds the second preset threshold;
the first sending unit includes:
a choosing module, configured to choose at least one service node from the distributed cache system as the target service node according to the number of new service nodes the user has added to the distributed cache system, the memory of the newly added nodes, and the memory of each service node in the distributed cache system;
a node migration module, configured to migrate the master node of the target service node from a second physical server to a third physical server and to migrate all slave nodes of the target service node, respectively, to the physical servers hosting the slave nodes of the newly added node, where the second physical server is the physical server hosting the master node of the target service node and the third physical server is the physical server hosting the master node of the new service node;
a memory migration module, configured to migrate the in-memory data of the target service node to the master node of the new service node, so that after the migration the user request volume in the distributed cache system is below the first preset threshold or the memory occupancy of the distributed cache system is below the second preset threshold.
Further and optionally, the apparatus further includes a second judging unit,
the second judging unit being specifically configured to judge whether the user request volume in the distributed cache system is below the third preset threshold or the memory occupancy of the distributed cache system is below the fourth preset threshold, and, when it is, to invoke the third sending unit.
With reference to the fifth possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the first request message carries a second port and a second IP address;
the first request message specifically instructs the cache proxy unit to change the first IP address and first port of the target service node to the second IP address and second port, where the first IP address and first port are the address and port recorded in the cache proxy unit for the target service node before its in-memory data is migrated, and the second IP address and second port are the address and port of the in-memory data on the master node of the new service node after the target service node's in-memory data has been migrated there.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM for short), a random access memory (Random Access Memory, RAM for short), a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A node expansion method, applied in a distributed cache system, the distributed cache system including at least one service node and N physical servers, each service node including one master node and at least one slave node, wherein the master node and slave nodes of the same service node are deployed on different physical servers, the at least one slave node of the same service node is deployed on different physical servers, and N >= 2; the method comprising:
S101: based on received first indication information, adding a new service node to the distributed cache system and choosing a target service node from the distributed cache system, the new service node including one master node and at least one slave node, the master node and slave nodes being deployed on different physical servers;
S102: sending the in-memory data on the target service node to the master node of the new service node;
S103: sending a first request message to a cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information in the cache proxy unit;
before the adding, based on the received first indication information, of a new service node to the distributed cache system, the method further comprising:
S104: if it is determined that the distributed cache system meets a node expansion condition, sending a first prompt message, the first prompt message prompting that a new service node should be added to the distributed cache system.
2. The method according to claim 1, wherein the node expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are failed nodes, and the new service node includes M slave nodes, where M is less than or equal to N; correspondingly, the adding of a new service node to the distributed cache system comprising:
deploying a first master node on a first physical server, and choosing M physical servers from the N-1 physical servers other than the first physical server among the N physical servers;
deploying the M slave nodes on the M physical servers respectively, one slave node per physical server.
3. The method according to claim 1, wherein the node capacity-expansion condition is that the user request volume in the distributed cache system exceeds a first preset threshold, or the memory usage of the distributed cache system exceeds a second preset threshold;
sending the in-memory data on the target service node to the master node of the new service node includes:
selecting at least one service node from the distributed cache system as the target service node according to the number of new service nodes the user has added in the distributed cache system, the memory of the newly added nodes, and the memory of each service node in the distributed cache system;
migrating the master node of the target service node from a second physical server to a third physical server, and migrating all slave nodes of the target service node to the physical servers where the slave nodes of the newly added node are located, respectively; wherein the second physical server is the physical server where the master node of the target service node is located, and the third physical server is the physical server where the master node of the new service node is located;
migrating the in-memory data of the target service node to the master node of the new service node, so that the in-memory data of the target service node is migrated to the master node of the newly added node, and the user request volume in the distributed cache system falls below the first preset threshold, or the memory usage of the distributed cache system falls below the second preset threshold.
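A hedged sketch of the claim-3 trigger and target selection: expansion fires when the user request volume or the memory usage crosses its threshold, and the most heavily loaded service node is chosen as the migration target. The concrete threshold values and the tuple-based node records are assumptions, not values from the patent:

```python
# Claim-3 sketch: trigger condition (first/second preset thresholds) and a
# simple "pick the most loaded node" target selection. Thresholds are assumed.

def needs_expansion(requests, memory_used, memory_total,
                    request_threshold=10_000, memory_threshold=0.7):
    # True when user request volume or memory usage exceeds its threshold
    return (requests > request_threshold
            or memory_used / memory_total > memory_threshold)

def pick_target(nodes):
    # nodes: list of (name, memory_used_bytes); take the most loaded one
    return max(nodes, key=lambda n: n[1])[0]

print(needs_expansion(requests=12_000, memory_used=3, memory_total=10))  # → True
print(pick_target([("node-a", 512), ("node-b", 2048), ("node-c", 1024)]))  # → node-b
```

After the target is chosen, the claim's migration steps move its master and slaves onto the new node's physical servers before copying the in-memory data across.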
4. The method according to claim 3, wherein after migrating the in-memory data of the target service node to the master node of the new service node, the method further includes:
judging whether the user request volume in the distributed cache system is less than a third preset threshold, or whether the memory usage of the distributed cache system is less than a fourth preset threshold;
if it is determined that the user request volume in the distributed cache system is less than the third preset threshold, or the memory usage of the distributed cache system is less than the fourth preset threshold, executing step S104.
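The post-migration check of claim 4 can be sketched as a simple predicate; the third and fourth threshold values below are assumed example values:

```python
# Claim-4 sketch: after migration, re-check load against the third (request)
# and fourth (memory-ratio) thresholds; True means step S104 fires again.

def post_migration_check(requests, memory_ratio,
                         third_threshold=5_000, fourth_threshold=0.4):
    # True when either metric has dropped below its alarm level
    return requests < third_threshold or memory_ratio < fourth_threshold

print(post_migration_check(requests=4_000, memory_ratio=0.8))  # → True
print(post_migration_check(requests=9_000, memory_ratio=0.8))  # → False
```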
5. A node capacity-expansion apparatus, applied in a distributed cache system, the distributed cache system including at least one service node and N physical servers, each service node including one master node and at least one slave node, wherein the master node and the slave nodes of the same service node are deployed on different physical servers respectively, and the at least one slave node of the same service node is deployed on different physical servers; wherein N ≥ 2; the apparatus includes:
an execution unit, configured to add a new service node in the distributed cache system based on received first instruction information, and to select a target service node from the distributed cache system; the new service node includes one master node and at least one slave node, the master node and the slave nodes being deployed on different physical servers respectively;
a first sending unit, configured to send the in-memory data on the target service node to the master node of the new service node;
a second sending unit, configured to send a first request message to a cache proxy unit, the first request message at least instructing the cache proxy unit to update the configuration information held by the cache proxy unit;
the apparatus further includes:
a first judging unit, configured to judge whether the distributed cache system satisfies a node capacity-expansion condition;
a third sending unit, configured to send first prompt information after the first judging unit determines that the distributed cache system satisfies the node capacity-expansion condition, the first prompt information prompting the user to add a new service node to the distributed cache system.
6. The apparatus according to claim 5, wherein the node capacity-expansion condition is that the master node and all slave nodes of a first service node in the distributed cache system are faulty nodes, the new service node includes M slave nodes, where M is less than or equal to N, and the execution unit includes at least a deployment module, the deployment module being specifically configured to:
deploy a first master node on a first physical server, and select M physical servers from the N-1 physical servers among the N physical servers other than the first physical server;
deploy the M slave nodes on the M physical servers respectively, each slave node corresponding to one physical server.
7. The apparatus according to claim 5, wherein the node capacity-expansion condition is that the user request volume in the distributed cache system exceeds a first preset threshold, or the memory usage of the distributed cache system exceeds a second preset threshold;
the first sending unit includes:
a selection module, configured to select at least one service node from the distributed cache system as the target service node according to the number of new service nodes the user has added in the distributed cache system, the memory of the newly added nodes, and the memory of each service node in the distributed cache system;
a node migration module, configured to migrate the master node of the target service node from a second physical server to a third physical server, and to migrate all slave nodes of the target service node to the physical servers where the slave nodes of the newly added node are located, respectively; wherein the second physical server is the physical server where the master node of the target service node is located, and the third physical server is the physical server where the master node of the new service node is located;
a memory migration module, configured to migrate the in-memory data of the target service node to the master node of the new service node, so that the in-memory data of the target service node is migrated to the master node of the newly added node, and the user request volume in the distributed cache system falls below the first preset threshold, or the memory usage of the distributed cache system falls below the second preset threshold.
8. A distributed cache system, wherein the distributed cache system includes a Linux virtual server, at least one cache proxy unit, and at least one node capacity-expansion apparatus according to any one of claims 6-7.
CN201610222111.6A 2016-04-11 2016-04-11 A kind of node expansion method, apparatus and system Active CN105933391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610222111.6A CN105933391B (en) 2016-04-11 2016-04-11 A kind of node expansion method, apparatus and system


Publications (2)

Publication Number Publication Date
CN105933391A CN105933391A (en) 2016-09-07
CN105933391B true CN105933391B (en) 2019-06-21

Family

ID=56840253

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610222111.6A Active CN105933391B (en) 2016-04-11 2016-04-11 A kind of node expansion method, apparatus and system

Country Status (1)

Country Link
CN (1) CN105933391B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107919977B (en) * 2016-10-11 2021-09-03 阿里巴巴集团控股有限公司 Online capacity expansion and online capacity reduction method and device based on Paxos protocol
CN107104820B (en) * 2017-03-23 2020-02-07 国网江苏省电力公司信息通信分公司 Dynamic capacity-expansion daily operation and maintenance method based on F5 server node
CN107357532A (en) * 2017-07-14 2017-11-17 长沙开雅电子科技有限公司 A kind of new cache pre-reading implementation method of new cluster-based storage
CN107547635B (en) * 2017-08-04 2020-05-12 新华三大数据技术有限公司 Method and device for modifying IP address of large data cluster host
CN107886328B (en) * 2017-11-23 2021-01-26 深圳壹账通智能科技有限公司 Transaction processing method and device, computer equipment and storage medium
CN108121507B (en) * 2017-12-06 2021-04-02 北京奇艺世纪科技有限公司 Data processing method and device and electronic equipment
CN110309156A (en) * 2018-03-01 2019-10-08 阿里巴巴集团控股有限公司 Database Systems, database update, expansion method and equipment
CN108520025B (en) * 2018-03-26 2020-12-18 腾讯科技(深圳)有限公司 Service node determination method, device, equipment and medium
CN108717379B (en) * 2018-05-08 2023-07-25 平安证券股份有限公司 Electronic device, distributed task scheduling method and storage medium
CN108829787B (en) * 2018-05-31 2022-06-17 郑州云海信息技术有限公司 Metadata distributed system
CN109375966B (en) * 2018-08-03 2020-07-03 北京三快在线科技有限公司 Method, device and equipment for initializing node and storage medium
CN110019148B (en) * 2018-09-07 2021-05-25 网联清算有限公司 Database capacity management method and device, storage medium and computer equipment
CN111106947B (en) * 2018-10-29 2023-02-07 北京金山云网络技术有限公司 Node downtime repairing method and device, electronic equipment and readable storage medium
CN111338647B (en) * 2018-12-18 2023-09-12 杭州海康威视数字技术股份有限公司 Big data cluster management method and device
CN111435320B (en) * 2019-01-14 2023-04-11 阿里巴巴集团控股有限公司 Data processing method and device
CN110221916B (en) * 2019-05-23 2021-07-20 北京奇艺世纪科技有限公司 Memory capacity expansion method and device, configuration center system and electronic equipment
CN111010448B (en) * 2019-12-23 2022-06-03 北京奇艺世纪科技有限公司 Distributed message system and data center DC
CN111290838B (en) * 2020-05-09 2020-08-18 支付宝(杭州)信息技术有限公司 Application access request processing method and device based on container cluster
CN111556167A (en) * 2020-05-19 2020-08-18 湖南快乐阳光互动娱乐传媒有限公司 Video CDN node instant capacity expansion method, capacity expansion virtual machine room and CND system
CN111338806B (en) * 2020-05-20 2020-09-04 腾讯科技(深圳)有限公司 Service control method and device
CN112491995A (en) * 2020-11-18 2021-03-12 浪潮云信息技术股份公司 High-availability Redis service architecture and method
CN113407493A (en) * 2021-06-18 2021-09-17 北京金山云网络技术有限公司 Operation method, data read-write method, device, electronic equipment and medium
CN113806068B (en) * 2021-07-30 2023-12-12 上海晶赞融宣科技有限公司 Capacity expansion method and device for service system, readable storage medium and terminal
CN114138825A (en) * 2021-11-24 2022-03-04 聚好看科技股份有限公司 Server and method for providing data query service for application program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102025550A (en) * 2010-12-20 2011-04-20 中兴通讯股份有限公司 System and method for managing data in distributed cluster
CN103034664A (en) * 2011-10-10 2013-04-10 上海盛霄云计算技术有限公司 Method, system and device for controlling data migration of database
CN103747073A (en) * 2013-12-30 2014-04-23 乐视网信息技术(北京)股份有限公司 Distributed caching method and system



Similar Documents

Publication Publication Date Title
CN105933391B (en) A kind of node expansion method, apparatus and system
US20200310660A1 (en) Identifying sub-health object storage devices in a data storage system
CN103078927B (en) Key-value data distributed caching system and method thereof
CN107846358B (en) Data transmission method, device and network system
CN105095317B (en) Distributed data base service management system
CN104935654A (en) Caching method, write point client and read client in server cluster system
CN106059791B (en) Link switching method of service in storage system and storage device
CN104243527A (en) Data synchronization method and device and distributed system
CN106062717A (en) Distributed storage replication system and method
CN106844399A (en) Distributed data base system and its adaptive approach
CN108572976A (en) Data reconstruction method, relevant device and system in a kind of distributed data base
CN103207841A (en) Method and device for data reading and writing on basis of key-value buffer
CN102355369A (en) Virtual clustered system as well as processing method and processing device thereof
CN107404509B (en) Distributed service configuration system and information management method
CN106302607A (en) It is applied to block storage system and the method for cloud computing
CN112153133B (en) Data sharing method, device and medium
CN109918021B (en) Data processing method and device
CN110096220B (en) Distributed storage system, data processing method and storage node
CN109845192B (en) Computer system and method for dynamically adapting a network and computer readable medium
EP3232609A1 (en) Locking request processing method and server
CN105516263A (en) Data distribution method, device in storage system, calculation nodes and storage system
CN105407117A (en) Distributed data backup method, device and system
WO2016082078A1 (en) Path management system, device and method
CN107682411A (en) A kind of extensive SDN controllers cluster and network system
CN103563304A (en) Switch configuration method and cluster management device based on virtual networking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170209

Address after: No. 399 Songling Road, Laoshan District, Qingdao, Shandong Province, 266000

Applicant after: Poly Polytron Technologies Inc

Address before: Room 131, No. 248 Hong Kong East Road, Laoshan District, Qingdao, Shandong Province, 266071

Applicant before: Qingdao Hisense Media Networks Co., Ltd.

GR01 Patent grant