CN107943589A - Data cache management method and device - Google Patents

Data cache management method and device

Info

Publication number
CN107943589A
Authority
CN
China
Prior art keywords
redis
data
class
machine groups
machines
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711226257.9A
Other languages
Chinese (zh)
Inventor
孙迁
叶国华
周毅
殷剑锋
邱进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suning Commerce Group Co Ltd
Original Assignee
Suning Commerce Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suning Commerce Group Co Ltd filed Critical Suning Commerce Group Co Ltd
Priority to CN201711226257.9A
Publication of CN107943589A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F 9/5016 Allocation of resources to service a request, the resources being hardware resources other than CPUs, Servers and Terminals, the resource being the memory

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a data cache management method and device, which relate to the field of big data technology and enable users to access the system normally during the maintenance phase. The method includes: allocating, according to capacity adjustment information, a second-type redis machine group for a target business system; pausing the data delivery queue at the same time, and refreshing the data into the second-type redis machine group; after the refresh succeeds, starting the data delivery queue and pointing the data delivery queue to the second-type redis machine group, where, after the data refresh is complete, the access requests received by the target business system are handled by the second-type redis machine group; and reclaiming the first-type redis machine group and returning it to the resource pool. The invention is suitable for online cache expansion under high-concurrency scenarios.

Description

Data cache management method and device
Technical field
The present invention relates to the field of big data technology, and in particular to a data cache management method and device.
Background technology
At present, online transaction platforms, ticketing systems, game servers and the like have to carry ever-increasing access pressure, with concurrency frequently reaching astronomical figures. For example, a merchandise display system may need to handle a massive number of price query accesses per second.
To cope with the pressure of highly concurrent access, large volumes of data usually need to be loaded into a cache. At present, many systems adopt a hot-spot caching approach, whose pressure-resisting capability is limited; considerable pressure is passed on to the database, which reduces stability. Deploying multiple caching systems for load balancing can improve caching stability, but setting up an additional caching system also increases the operating cost of the system. Moreover, in practice a platform usually needs to host multiple online business systems, and configuring a separate, extra cache for every system makes the added operating cost very high.
Therefore, the cache optimization approach commonly used today is mainly carried out during system-downtime maintenance: the load of the coming period is estimated from historical operating data, part of the cache is expanded for the system, and the system is re-initialized and brought back online after maintenance. This inevitably makes the system inaccessible during the maintenance phase, for example games cannot be logged in to, and some web pages return a 404 page after being opened, so that users cannot access the system normally, causing operating losses during the maintenance phase.
The content of the invention
Embodiments of the present invention provide a data cache management method and device that enable users to access the system normally during the maintenance phase.
To achieve the above objective, embodiments of the present invention adopt the following technical solution:
Allocating, according to capacity adjustment information, a second-type redis machine group for a target business system; pausing the data delivery queue at the same time, and refreshing the data into the second-type redis machine group; after the refresh succeeds, starting the data delivery queue and pointing the data delivery queue to the second-type redis machine group, where, after the data refresh is complete, the access requests received by the target business system are handled by the second-type redis machine group; and reclaiming the first-type redis machine group and returning it to the resource pool.
This ensures that, during capacity expansion, the full cache of the system (stored in the first-type redis machine group) can still be accessed and prices can still be queried normally, so that the full cache refresh and expansion is carried out while the price service keeps running normally. Moreover, since the data inside the redis machine group is not affected before the expansion succeeds, the first-type redis machine group can still provide service normally even if the expansion fails. Capacity adjustment is thus performed while the business system stays online, without interrupting its normal queries and accesses, which improves the scalability of the system. In particular, when multiple business systems compete for redis resources, this helps the management server improve the allocation efficiency of cache resources.
Further, since access requests can still be served from the cache under highly concurrent queries, the number of query accesses to the database is reduced, which maintains the performance of the query service while ensuring the stability of the database.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments are briefly described below. Apparently, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a specific example according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a device according to an embodiment of the present invention.
Detailed description of the embodiments
To enable those skilled in the art to better understand the technical solutions of the present invention, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. Embodiments of the present invention are described in more detail below, and examples of these embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting the present invention. Those skilled in the art will understand that, unless otherwise stated, the singular forms "a", "an", "the" and "said" used herein may also include plural forms. It should be further understood that the word "comprising" used in the specification of the present invention indicates the presence of the stated features, integers, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. In addition, "connected" or "coupled" as used herein may include wirelessly connected or coupled. The word "and/or" as used herein includes any and all combinations of one or more of the associated listed items. Those skilled in the art will understand that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by a person of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless defined as herein, will not be interpreted in an idealized or overly formal sense.
The method flow in this embodiment can be executed in a system as shown in Fig. 1, which includes: a business system, a Redis (key-value storage system) machine cluster, a management server, a background database and user terminals. The devices in the system can establish channels with one another over the Internet and exchange data through their respective data transmission ports. Specifically:
The business system and the management server disclosed in this embodiment may specifically be, at the hardware level, devices such as workstations or supercomputers, or a server cluster for data processing composed of multiple servers. The management server refers to the server device used to manage the Redis machine cluster and to monitor the business systems in real time; the management server is usually deployed in the operation and maintenance center that maintains the business systems in the background.
The Redis machine cluster disclosed in this embodiment may specifically be, at the hardware level, a device cluster for data processing composed of multiple servers or cache machines. The cache resources in the Redis machine cluster are used to store data from the background database, so that when the business system receives an access request sent by a user terminal, it extracts the data from the Redis machine cluster and returns it to the user terminal; compared with accessing the database directly, extracting data from the cache improves the speed of data feedback.
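By way of non-limiting illustration, the following minimal Java sketch shows how a business system might first look up data in the Redis cache and fall back to the background database on a miss; the class and method names (PriceQueryService, PriceDao) are hypothetical, the database fallback is an assumption not stated above, and the Jedis client is used only as one possible way of connecting to redis.

    import redis.clients.jedis.Jedis;

    // Illustrative read path: try the Redis cache first, fall back to the background database on a miss.
    public class PriceQueryService {
        private final Jedis cache;        // connection to the currently used redis machine group
        private final PriceDao priceDao;  // hypothetical DAO for the background database (e.g. MySQL)

        public PriceQueryService(Jedis cache, PriceDao priceDao) {
            this.cache = cache;
            this.priceDao = priceDao;
        }

        public String queryPrice(String skuId) {
            String key = "price:" + skuId;
            String cached = cache.get(key);            // cache hit: return immediately
            if (cached != null) {
                return cached;
            }
            String fromDb = priceDao.loadPrice(skuId); // cache miss: read the background database
            if (fromDb != null) {
                cache.set(key, fromDb);                // warm the cache for subsequent requests
            }
            return fromDb;
        }
    }

    // Hypothetical DAO interface for the background database.
    interface PriceDao {
        String loadPrice(String skuId);
    }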
The background database stores the data required for the operation of the business systems, such as price data, logistics data and inventory data. The background database may use any database architecture and type that is common at present.
The business system may specifically be composed of multiple servers at the hardware level and is a system used to operate an online business, such as an online shopping platform, an order placement system or a notification system.
The user terminal may be implemented as an independent device or may be integrated into various media data playback systems, for example a smartphone, a tablet personal computer, a laptop computer or a personal digital assistant (PDA). For example, the user terminal accessing the online shopping platform is usually used by the user for operations such as price queries, placing orders and browsing merchandise information.
An embodiment of the present invention provides a data cache management method, as shown in Fig. 2, including:
S1. Allocate, according to capacity adjustment information, a second-type redis machine group for a target business system.
The capacity adjustment information at least indicates the number of redis machines to be adjusted, and a first-type redis machine group is the redis machine group currently used by the data delivery queue of the target business system.
The target business system in this embodiment can be understood as: among the multiple business systems running online, the business system that requires online maintenance such as cache expansion, cache reduction or equal-capacity replacement, and that serves as the execution target of the logic flow of this embodiment.
The first-type redis machine group can be understood as the redis machine group currently used by the target business system. The second-type redis machine group can be understood as the redis machine group that is determined, according to the capacity adjustment information, to be allocated to the target business system.
The data delivery queue is specifically used by the background system to refresh data to the business system. Typically, each project in the business system needs to provide a corresponding redis read interface in order to read the data it requires from the Redis machine cluster. For example, price data is refreshed to the merchandise display system so that commodity prices are updated in real time.
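As a minimal sketch of the pause/resume behaviour of the data delivery queue (the DataDeliveryQueue, CacheClient and RefreshTask names are hypothetical and only illustrate the mechanism described above), the queue can be modelled as a worker that drains refresh tasks towards whichever redis machine group it currently points to, and that simply lets tasks accumulate while it is paused:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Minimal pausable data delivery queue; refresh tasks accumulate while the queue is paused.
    public class DataDeliveryQueue {
        private final BlockingQueue<RefreshTask> tasks = new LinkedBlockingQueue<>();
        private volatile boolean paused = false;
        private volatile CacheClient target; // the redis machine group the queue currently points to

        public DataDeliveryQueue(CacheClient initialTarget) {
            this.target = initialTarget;
        }

        public void submit(RefreshTask task) {
            tasks.offer(task);
        }

        public void pause() {
            paused = true;
        }

        // Restart the queue and point it at a (possibly new) redis machine group.
        public void start(CacheClient newTarget) {
            this.target = newTarget;
            this.paused = false;
        }

        // Worker loop: deliver queued refreshes to the current target unless paused.
        public void runWorker() throws InterruptedException {
            while (true) {
                if (paused) {
                    TimeUnit.MILLISECONDS.sleep(100);
                    continue;
                }
                RefreshTask task = tasks.poll(100, TimeUnit.MILLISECONDS);
                if (task != null) {
                    target.write(task.key(), task.value());
                }
            }
        }
    }

    interface CacheClient {
        void write(String key, String value);
    }

    record RefreshTask(String key, String value) {}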
The capacity adjustment information includes the number of redis machines that the target business system should have after online maintenance. For example, as shown in Fig. 3, when the number of redis machines in the second-type redis machine group is greater than that of the first-type redis machine group, this is a capacity expansion of the target business system; when the number of redis machines in the second-type redis machine group is less than that of the first-type redis machine group, this is a capacity reduction of the target business system; when the number of redis machines in the second-type redis machine group equals that of the first-type redis machine group, this is an equal-capacity replacement for the target business system. (MySQL in Fig. 3 refers to a relational database management system.)
Specifically, the capacity adjustment information may be sent to the management server from the operation and maintenance center of the target business system. For example, the management server receives the capacity adjustment information sent by an O&M terminal, the O&M terminal being located at the operation and maintenance center responsible for the target business system.
Alternatively, the capacity adjustment information may be generated automatically by the management server according to the operating conditions of the target business system, so that cache expansion, cache reduction or equal-capacity replacement can be performed on the target business system automatically. For example, within each monitoring cycle, the log data of the target business system is collected; when it is detected from the log data that a capacity adjustment rule is met, the capacity adjustment information is generated. The log data specifically records the load of the target business system over a period of time, and a capacity adjustment rule may be a series of trigger thresholds set according to the load parameters of the target business system: when the load in a period of time reaches the trigger threshold of a level, the number of redis machines corresponding to that level is read and written into the capacity adjustment information.
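A minimal sketch of how the management server might turn per-cycle log statistics into capacity adjustment information is given below; the CapacityRuleEngine, CapacityRule and CapacityAdjustment names are hypothetical and only illustrate the trigger-threshold idea described above.

    import java.util.List;
    import java.util.Optional;

    // Illustrative threshold rules: each level maps a load threshold to a target redis machine count.
    public class CapacityRuleEngine {
        public record CapacityRule(double loadThreshold, int targetRedisMachines) {}
        public record CapacityAdjustment(String targetSystem, int targetRedisMachines) {}

        private final List<CapacityRule> rules; // assumed sorted by ascending loadThreshold

        public CapacityRuleEngine(List<CapacityRule> rules) {
            this.rules = rules;
        }

        // Called once per monitoring cycle with the load extracted from the collected log data.
        public Optional<CapacityAdjustment> evaluate(String targetSystem, double observedLoad) {
            CapacityRule matched = null;
            for (CapacityRule rule : rules) {
                if (observedLoad >= rule.loadThreshold()) {
                    matched = rule; // keep the highest level whose threshold has been reached
                }
            }
            return matched == null
                    ? Optional.empty()
                    : Optional.of(new CapacityAdjustment(targetSystem, matched.targetRedisMachines()));
        }
    }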
S2. Pause the data delivery queue, and refresh the data into the second-type redis machine group.
During the data refresh, the access requests received by the target business system are handled by the first-type redis machine group. Since the data delivery queue is paused, the data refreshed into the second-type redis machine group specifically includes, in this embodiment, the data currently refreshed from the background database to the first-type redis machine group. The Redis machine cluster provides cache resources for the business systems; specifically, the Redis machine cluster serves as the cache of the business systems and stores the data retrieved from the background database. When the business system receives an access request from a user terminal, it needs to fetch the corresponding data from the cache and return it to the user terminal.
S3. Start the data delivery queue, and point the data delivery queue to the second-type redis machine group.
After the data refresh is complete, the access requests received by the target business system are handled by the second-type redis machine group.
S4. Reclaim the first-type redis machine group and return it to the resource pool.
Specifically, a resource pool may be established based on the cache resources in the Redis machine cluster; the cache resources extracted from the resource pool by the management server form redis machine groups, and one redis machine group includes one or more hardware devices in the Redis machine cluster.
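The following sketch illustrates, under assumed names (RedisResourcePool, RedisMachineGroup), one way the management server could carve a second-type redis machine group out of the resource pool and later take a released group back; it is not the disclosed implementation, only an illustration of the allocation described above.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // Minimal resource pool over the cache machines of the Redis machine cluster.
    public class RedisResourcePool {
        private final Deque<String> freeMachines = new ArrayDeque<>(); // e.g. host:port entries

        public RedisResourcePool(List<String> machines) {
            freeMachines.addAll(machines);
        }

        // Allocate a second-type redis machine group of the requested size.
        public synchronized RedisMachineGroup allocate(String shardName, int machineCount) {
            if (freeMachines.size() < machineCount) {
                throw new IllegalStateException("not enough free redis machines in the pool");
            }
            List<String> members = new ArrayList<>();
            for (int i = 0; i < machineCount; i++) {
                members.add(freeMachines.poll());
            }
            return new RedisMachineGroup(shardName, members);
        }

        // Re-register a released group (e.g. the first-type group) in the resource list.
        public synchronized void release(RedisMachineGroup group) {
            freeMachines.addAll(group.members());
        }
    }

    record RedisMachineGroup(String shardName, List<String> members) {}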
In existing solutions, whenever a large-scale promotional campaign takes place, the online shopping platform needs to expand the redis machine group so that the price system can carry the highly concurrent access pressure, and the background database needs to load massive price data into the redis machine group. Many systems currently adopt hot-spot caching, which cannot make full use of the pressure-resisting capability of redis; considerable pressure is passed on to the database, and the stability of the system cannot be guaranteed. Moreover, during expansion and maintenance it cannot be guaranteed that users can still access and query prices normally, unless a separate standby system is configured. During a large-scale promotional campaign, more than just the price system is exposed to highly concurrent access: almost all business systems, such as the merchandise information system, the order placement system and the logistics query system, experience access pressure to some extent, and because of cost it is hardly feasible to set up a dedicated standby system for every business system.
As a result, with existing solutions shoppers may easily fail to query a price, or query a wrong price, during the expansion process, which seriously affects the business (situations in which prices cannot be queried for 2-3 hours often occur, interrupting sales).
The solution of this embodiment, by contrast, ensures that during capacity expansion the full cache of the system (stored in the first-type redis machine group) can still be accessed and prices can still be queried normally, so that the full cache refresh and expansion is carried out while the price service keeps running normally. Moreover, since the data inside the redis machine group is not affected before the expansion succeeds, the first-type redis machine group can still provide service normally even if the expansion fails. Capacity adjustment is thus performed while the business system stays online, without interrupting its normal queries and accesses, which improves the scalability of the system. In particular, when multiple business systems compete for redis resources, this helps the management server improve the allocation efficiency of cache resources.
Further, since access requests can still be served from the cache under highly concurrent queries, the number of query accesses to the database is reduced, which maintains the performance of the query service while ensuring the stability of the database.
In this embodiment, the specific manner of allocating, in step S1, the second-type redis machine group for the target business system according to the capacity adjustment information includes:
Dividing the second-type redis machine group out of the resource pool according to the number of redis machines to be adjusted.
Adding a read configuration to the scm (Software Configuration Management) configuration.
The newly added read configuration includes the cache client code corresponding to the second-type redis machine group. A cache client can be understood as the component on the SCM platform that connects to redis.
Further, the specific manner of refreshing the data into the second-type redis machine group in step S2 includes: according to the read configuration newly added to the scm configuration, running a new cache client to perform a full cache refresh, so that the data from before the pause of the data delivery queue is refreshed into the second-type redis machine group.
The new cache client runs according to the cache client code corresponding to the second-type redis machine group. Specifically, after the data-receiving queues in the business system have been stopped, the background performs a full cache refresh in the project through the new cache client, refreshing the data into the new redis servers. For example, when the number of redis machines to be expanded is applied for according to the capacity adjustment information, a new redis read configuration is added to the scm configuration (also called the scm cache configuration; for example, the two circled .xml files are the scm cache configurations), and a new cache read client code is added to the project (that is, two redis client codes in the project that each read the scm configuration, where CacheClientUtil.java is the client code file corresponding to the first-type redis machine group, and CacheRefreshClientUtil.java is the client code file corresponding to the second-type redis machine group, i.e. the code file of the new cache client). By adding a read configuration to the scm configuration, the original read configuration in the scm configuration is unaffected, i.e. the redis configuration of the existing service and its normal reads are not affected.
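As a hedged sketch of the dual-client arrangement above (the disclosure only names the files CacheClientUtil.java and CacheRefreshClientUtil.java; the method bodies, the way the shard list is built from the new scm .xml configuration, and the use of the older sharded Jedis API are all assumptions), the new cache client could connect to the second-type redis machine group and perform the full cache refresh while the original client is left untouched:

    import java.util.List;
    import java.util.Map;
    import redis.clients.jedis.JedisShardInfo;
    import redis.clients.jedis.ShardedJedis;

    // New cache client: connects to the second-type redis machine group via the newly added scm read configuration.
    public class CacheRefreshClientUtil {
        private final ShardedJedis shardedJedis;

        public CacheRefreshClientUtil(List<JedisShardInfo> newGroupShards) {
            // newGroupShards would be built from the newly added .xml read configuration in scm
            this.shardedJedis = new ShardedJedis(newGroupShards);
        }

        // Full cache refresh: write every entry captured before the data delivery queue was paused.
        public void fullRefresh(Map<String, String> snapshot) {
            for (Map.Entry<String, String> entry : snapshot.entrySet()) {
                // ShardedJedis hashes each key according to the sharding rules of the new group
                shardedJedis.set(entry.getKey(), entry.getValue());
            }
        }
    }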
In this embodiment, two redis read interfaces may be provided, one of which is used during maintenance. For example, on the basis of the above flow, the method further includes:
During the data refresh, when the target business system receives an access request sent by a user terminal, calling a first redis read interface, where the first redis read interface is used to read data from the first-type redis machine group, and a second redis read interface is used to read data from the second-type redis machine group;
Querying data from the first-type redis machine group through the first redis read interface according to the access request, and returning the queried data to the user terminal.
The first redis read interface is used to read data from the first-type redis machine group before the data delivery queue is paused; that is, during the expansion process the first-type redis machine group still provides the query function through the first redis read interface. After the second-type redis machine group takes over the work of the first-type redis machine group, the second-type redis machine group starts to provide the query function through the second redis read interface. For example, after all the data-receiving queues in the project have been cut off, the background performs a full cache refresh in the project through the new cache client (the specific data refresh rule follows the sharding rules of the second-type redis machine group), refreshing the data into the new redis servers. Since a new redis read configuration is added to the scm configuration and a new cache read client is added to the project, two redis read interfaces are realized without affecting the redis configuration of the existing service and its normal reads.
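Purely for illustration (the router class and its fields below are hypothetical and are not part of the disclosure), the two redis read interfaces can be wrapped in a small router that keeps queries on the first-type group during the refresh and switches to the second-type group once it has taken over, as described above:

    // Illustrative router over the two redis read interfaces described above.
    public class RedisReadRouter {
        private final RedisReadInterface firstGroupReader;  // first redis read interface
        private final RedisReadInterface secondGroupReader; // second redis read interface
        private volatile boolean secondGroupActive = false; // flipped after the refresh completes

        public RedisReadRouter(RedisReadInterface firstGroupReader, RedisReadInterface secondGroupReader) {
            this.firstGroupReader = firstGroupReader;
            this.secondGroupReader = secondGroupReader;
        }

        // During the data refresh, queries keep going to the first-type redis machine group.
        public String read(String key) {
            return secondGroupActive ? secondGroupReader.get(key) : firstGroupReader.get(key);
        }

        // Called once the second-type group has been verified and the data delivery queue restarted.
        public void switchToSecondGroup() {
            this.secondGroupActive = true;
        }
    }

    interface RedisReadInterface {
        String get(String key);
    }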
Further, after step S2 and before step S3 is performed, a verification process is included:
Before starting the data delivery queue, importing the redis shardName of the second-type redis machine group into the configuration of the query service.
When it is verified that the second-type redis machine group can be accessed normally, starting the data delivery queue.
A redis shardName is the name corresponding to each group of redis. The configuration of the query service can be understood as the redis configuration file used during queries.
In this embodiment, the specific manner of reclaiming the first-type redis machine group and returning it to the resource pool in step S4 includes:
Deleting the redis shardName of the first-type redis machine group from the configuration of the query service, and re-registering the redis shardName of the first-type redis machine group in the resource list of the resource pool. For example, after the refresh has completed, the newly applied-for redis shardName is copied into the configuration of the query service and verified; if there is no problem, all delivery queues are started up, the data backed up inside the queues is allowed to flow into the second-type redis machine group and the background database, and the first-type redis machine group is then returned to the management server, which returns its cache resources to the resource pool.
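The overall cutover order of this embodiment (pause and refresh, import the new shardName, verify, restart the delivery queues, reclaim the old group) can be summarised in the sketch below, which reuses the hypothetical DataDeliveryQueue, RedisResourcePool, RedisMachineGroup and CacheRefreshClientUtil types sketched earlier; the QueryServiceConfig helper is likewise an assumption and not the disclosed implementation.

    import java.util.Map;

    // Sketch of the S2-S4 cutover order described above; all collaborator types are hypothetical.
    public class CacheExpansionOrchestrator {
        private final DataDeliveryQueue deliveryQueue;
        private final QueryServiceConfig queryServiceConfig;
        private final RedisResourcePool resourcePool;

        public CacheExpansionOrchestrator(DataDeliveryQueue deliveryQueue,
                                          QueryServiceConfig queryServiceConfig,
                                          RedisResourcePool resourcePool) {
            this.deliveryQueue = deliveryQueue;
            this.queryServiceConfig = queryServiceConfig;
            this.resourcePool = resourcePool;
        }

        public void switchOver(RedisMachineGroup oldGroup, RedisMachineGroup newGroup,
                               CacheRefreshClientUtil refreshClient, Map<String, String> snapshot) {
            deliveryQueue.pause();                                // S2: stop deliveries
            refreshClient.fullRefresh(snapshot);                  // S2: full cache refresh into the new group

            queryServiceConfig.addShard(newGroup.shardName());    // import the new redis shardName
            if (!queryServiceConfig.verifyAccessible(newGroup)) { // verify normal access before switching
                throw new IllegalStateException("second-type redis machine group is not reachable");
            }
            deliveryQueue.start((key, value) ->
                    refreshClient.fullRefresh(Map.of(key, value))); // S3: deliveries now target the new group

            queryServiceConfig.removeShard(oldGroup.shardName()); // S4: drop the old shardName
            resourcePool.release(oldGroup);                       // S4: return the old group to the pool
        }
    }

    interface QueryServiceConfig {
        void addShard(String shardName);
        void removeShard(String shardName);
        boolean verifyAccessible(RedisMachineGroup group);
    }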
An embodiment of the present invention further provides a data cache management device, which may specifically run in the management server shown in Fig. 1. As shown in Fig. 4, the device includes:
an allocation module, configured to allocate, according to capacity adjustment information, a second-type redis machine group for a target business device, where the capacity adjustment information at least indicates the number of redis machines to be adjusted, and a first-type redis machine group is the redis machine group currently used by the data delivery queue of the target business device;
a data management module, configured to pause the data delivery queue and refresh the data into the second-type redis machine group, where during the data refresh the access requests received by the target business device are handled by the first-type redis machine group;
a switching module, configured to start the data delivery queue and point the data delivery queue to the second-type redis machine group, where after the data refresh is complete the access requests received by the target business device are handled by the second-type redis machine group;
a resource management module, configured to reclaim the first-type redis machine group and return it to the resource pool.
The allocation module is specifically configured to divide the second-type redis machine group out of the resource pool according to the number of redis machines to be adjusted, and to add a read configuration to the scm configuration, where the newly added read configuration includes the cache client code corresponding to the second-type redis machine group.
The data management module is specifically configured to run a new cache client to perform a full cache refresh, refreshing the data from before the pause of the data delivery queue into the second-type redis machine group, where the new cache client runs according to the cache client code corresponding to the second-type redis machine group.
In existing solutions, whenever a large-scale promotional campaign takes place, the online shopping platform needs to expand the redis machine group so that the price system can carry the highly concurrent access pressure, and the background database needs to load massive price data into the redis machine group. Many systems currently adopt hot-spot caching, which cannot make full use of the pressure-resisting capability of redis; considerable pressure is passed on to the database, and the stability of the system cannot be guaranteed. Moreover, during expansion and maintenance it cannot be guaranteed that users can still access and query prices normally, unless a separate standby system is configured. During a large-scale promotional campaign, more than just the price system is exposed to highly concurrent access: almost all business systems, such as the merchandise information system, the order placement system and the logistics query system, experience access pressure to some extent, and because of cost it is hardly feasible to set up a dedicated standby system for every business system.
As a result, with existing solutions shoppers may easily fail to query a price, or query a wrong price, during the expansion process, which seriously affects the business (situations in which prices cannot be queried for 2-3 hours often occur, interrupting sales).
The solution of this embodiment, by contrast, ensures that during capacity expansion the full cache of the system (stored in the first-type redis machine group) can still be accessed and prices can still be queried normally, so that the full cache refresh and expansion is carried out while the price service keeps running normally. Moreover, since the data inside the redis machine group is not affected before the expansion succeeds, the first-type redis machine group can still provide service normally even if the expansion fails. Capacity adjustment is thus performed while the business system stays online, without interrupting its normal queries and accesses, which improves the scalability of the system. In particular, when multiple business systems compete for redis resources, this helps the management server improve the allocation efficiency of cache resources.
Further, since access requests can still be served from the cache under highly concurrent queries, the number of query accesses to the database is reduced, which maintains the performance of the query service while ensuring the stability of the database.
The embodiments in this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the device embodiment is substantially similar to the method embodiment, its description is relatively brief, and the relevant parts can be found in the description of the method embodiment. The above is only a specific implementation of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

  1. A data cache management method, characterized by comprising:
    allocating, according to capacity adjustment information, a second-type redis machine group for a target business system, wherein the capacity adjustment information at least indicates the number of redis machines to be adjusted, and a first-type redis machine group is the redis machine group currently used by the data delivery queue of the target business system;
    pausing the data delivery queue, and refreshing the data into the second-type redis machine group, wherein during the data refresh the access requests received by the target business system are handled by the first-type redis machine group;
    starting the data delivery queue, and pointing the data delivery queue to the second-type redis machine group, wherein after the data refresh is complete the access requests received by the target business system are handled by the second-type redis machine group; and
    reclaiming the first-type redis machine group and returning it to the resource pool.
  2. The method according to claim 1, characterized in that allocating, according to the capacity adjustment information, the second-type redis machine group for the target business system comprises:
    dividing the second-type redis machine group out of the resource pool according to the number of redis machines to be adjusted;
    adding a read configuration to the scm configuration, wherein the newly added read configuration includes the cache client code corresponding to the second-type redis machine group.
  3. The method according to claim 2, characterized in that refreshing the data into the second-type redis machine group comprises:
    running, according to the read configuration newly added to the scm configuration, a new cache client to perform a full cache refresh, refreshing the data from before the pause of the data delivery queue into the second-type redis machine group, wherein the new cache client runs according to the cache client code corresponding to the second-type redis machine group.
  4. The method according to claim 1, characterized by further comprising:
    during the data refresh, when the target business system receives an access request sent by a user terminal, calling a first redis read interface, wherein the first redis read interface is used to read data from the first-type redis machine group, and a second redis read interface is used to read data from the second-type redis machine group;
    querying data from the first-type redis machine group through the first redis read interface according to the access request, and returning the queried data to the user terminal.
  5. The method according to claim 1, characterized by further comprising:
    before starting the data delivery queue, importing the redis shardName of the second-type redis machine group into the configuration of the query service;
    when it is verified that the second-type redis machine group can be accessed normally, starting the data delivery queue.
  6. The method according to claim 1, characterized in that reclaiming the first-type redis machine group and returning it to the resource pool comprises:
    deleting the redis shardName of the first-type redis machine group from the configuration of the query service, and re-registering the redis shardName of the first-type redis machine group in the resource list of the resource pool.
  7. The method according to claim 1, characterized by further comprising:
    receiving the capacity adjustment information sent by an O&M terminal, the O&M terminal being located at the operation and maintenance center responsible for the target business system;
    or, within each monitoring cycle, collecting the log data of the target business system, and generating the capacity adjustment information when it is detected from the log data that a capacity adjustment rule is met.
  8. A data cache management device, characterized by comprising:
    an allocation module, configured to allocate, according to capacity adjustment information, a second-type redis machine group for a target business device, wherein the capacity adjustment information at least indicates the number of redis machines to be adjusted, and a first-type redis machine group is the redis machine group currently used by the data delivery queue of the target business device;
    a data management module, configured to pause the data delivery queue and refresh the data into the second-type redis machine group, wherein during the data refresh the access requests received by the target business device are handled by the first-type redis machine group;
    a switching module, configured to start the data delivery queue and point the data delivery queue to the second-type redis machine group, wherein after the data refresh is complete the access requests received by the target business device are handled by the second-type redis machine group;
    a resource management module, configured to reclaim the first-type redis machine group and return it to the resource pool.
  9. The device according to claim 8, characterized in that the allocation module is specifically configured to divide the second-type redis machine group out of the resource pool according to the number of redis machines to be adjusted, and to add a read configuration to the scm configuration, wherein the newly added read configuration includes the cache client code corresponding to the second-type redis machine group.
  10. The device according to claim 8, characterized in that the data management module is specifically configured to run a new cache client to perform a full cache refresh, refreshing the data from before the pause of the data delivery queue into the second-type redis machine group, wherein the new cache client runs according to the cache client code corresponding to the second-type redis machine group.
CN201711226257.9A 2017-11-29 2017-11-29 The management method and device of a kind of data buffer storage Pending CN107943589A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711226257.9A CN107943589A (en) 2017-11-29 2017-11-29 The management method and device of a kind of data buffer storage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711226257.9A CN107943589A (en) 2017-11-29 2017-11-29 The management method and device of a kind of data buffer storage

Publications (1)

Publication Number Publication Date
CN107943589A true CN107943589A (en) 2018-04-20

Family

ID=61947608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711226257.9A Pending CN107943589A (en) 2017-11-29 2017-11-29 The management method and device of a kind of data buffer storage

Country Status (1)

Country Link
CN (1) CN107943589A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170285997A1 (en) * 2014-09-16 2017-10-05 Kove Ip, Llc Local primary memory as cpu cache extension
CN105260376A (en) * 2015-08-17 2016-01-20 北京京东尚科信息技术有限公司 Method, equipment and system used for cluster node contraction and expansion
CN105677468A (en) * 2016-01-06 2016-06-15 北京京东尚科信息技术有限公司 Cache and designing method thereof and scheduling method and scheduling device using cache
CN105760552A (en) * 2016-03-25 2016-07-13 北京奇虎科技有限公司 Data management method and device
CN107357896A (en) * 2017-07-13 2017-11-17 北京小度信息科技有限公司 Expansion method, device, system and the data base cluster system of data-base cluster

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109151762A (en) * 2018-10-19 2019-01-04 海南易乐物联科技有限公司 A kind of the asynchronous process system and processing method of high concurrent acquisition data
CN109871394A (en) * 2019-01-17 2019-06-11 苏宁易购集团股份有限公司 A kind of full dose distribution high concurrent calculation method and device
CN109871394B (en) * 2019-01-17 2022-11-11 苏宁易购集团股份有限公司 Full-distributed high-concurrency calculation method and device
CN111240737A (en) * 2020-01-20 2020-06-05 杭州海兴电力科技股份有限公司 Dynamic service parameter configuration method based on Redis
CN111240737B (en) * 2020-01-20 2023-05-05 杭州海兴电力科技股份有限公司 Redis-based dynamic service parameter configuration method
CN112882658A (en) * 2021-02-10 2021-06-01 深圳市云网万店科技有限公司 Data cache capacity expansion method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107943589A (en) The management method and device of a kind of data buffer storage
KR101956236B1 (en) Data replication technique in database management system
CN103365929B (en) The management method of a kind of data base connection and system
US9386117B2 (en) Server side data cache system
CN103390041B (en) A kind of method and system that data, services is provided based on middleware
CN101557427A (en) Method for providing diffluent information and realizing the diffluence of clients, system and server thereof
CN107315776A (en) A kind of data management system based on cloud computing
CN102033912A (en) Distributed-type database access method and system
CN108885582A (en) Multi-tenant memory services for memory pool architecture
CN110110006A (en) Data managing method and Related product
CN101133397A (en) Oplogging for online recovery in direct connection client server systems
CN110019125A (en) The method and apparatus of data base administration
CN102236835A (en) Integration framework for enterprise content management systems
US20200019474A1 (en) Consistency recovery method for seamless database duplication
CN106713391A (en) Session information sharing method and sharing system
US20140172888A1 (en) Systems and Methods for Processing Hybrid Co-Tenancy in a Multi-Database Cloud
CN108363764A (en) A kind of distributed caching management system and method
CN107180113A (en) A kind of big data searching platform
CN109783258A (en) A kind of message treatment method, device and server
US11609910B1 (en) Automatically refreshing materialized views according to performance benefit
CN101727496A (en) Method for realizing load balancing cluster of MICROSOFT SQL SERVER database
CN100485629C (en) Assembling type computer system high speed cache data backup processing method and system
CN110334145A (en) The method and apparatus of data processing
CN102820998B (en) Realize the dual computer fault-tolerant service system towards office application and date storage method thereof
KR20190022600A (en) Data replication technique in database management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180420