CN104202424B - A method of expanding a cache through software architecture - Google Patents

A method of expanding a cache through software architecture

Info

Publication number
CN104202424B
CN104202424B (application CN201410482639.8A)
Authority
CN
China
Prior art keywords
data
cache
client
expansion
buffer memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410482639.8A
Other languages
Chinese (zh)
Other versions
CN104202424A (en)
Inventor
王和
邵利铎
何栋
王吉玲
安然
潘曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PICC PROPERTY AND CASUALTY Co Ltd
Original Assignee
PICC PROPERTY AND CASUALTY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PICC PROPERTY AND CASUALTY Co Ltd filed Critical PICC PROPERTY AND CASUALTY Co Ltd
Priority to CN201410482639.8A priority Critical patent/CN104202424B/en
Publication of CN104202424A publication Critical patent/CN104202424A/en
Application granted granted Critical
Publication of CN104202424B publication Critical patent/CN104202424B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a method of expanding a cache through software architecture, the method comprising the following steps: S100, designing and implementing the usage rules of the cache, the cache comprising a client cache and an expanded cache formed by deploying multiple groups of servers between the client and the server side as expansion cache servers, each group of servers consisting of one master server and several standby servers, the usage rules being the method for deciding whether the client cache and/or the expanded cache is used for storage; S200, designing and implementing database tables on the cache servers for storing data, the cache comprising the client cache and the expanded cache; S300, designing and implementing the data read-write rules after cache expansion; S400, designing and implementing a cache management strategy; S500, designing and implementing a cache optimization strategy. The invention provides a method of introducing distributed caching into an enterprise application architecture, and the method can improve the performance and stability of the original system.

Description

A method of expanding a cache through software architecture
Technical field
The present invention relates to the field of computers, and in particular to a method of expanding a cache through software architecture.
Background technology
In recent years, with the rapid development of business, the volume of data carried, processed, and exchanged between the systems and subsystems used by enterprises has also grown quickly. Although the systems used by enterprises are developed on first-class enterprise-grade frameworks and have many advantages, the extremely rapid growth of traffic brings entirely new big-data and high-concurrency challenges to business systems and puts heavy pressure on the performance and stability of the whole core system. Although this pressure can currently be resisted by adding hardware resources, unlimited growth of hardware resources is impractical, and a solution through the transformation of the software architecture must be considered.
Summary of the invention
To address the above problems, the present invention provides a method of expanding a cache through software architecture. Besides solving the heavy system load and slow response of the current underwriting system, the invention can be further generalized to other business systems whose load has become excessive through rapid growth of business data and which therefore need distributed caching of that data.
A method of expanding a cache through software architecture, characterized in that the method comprises the following steps:
S100: designing and implementing the usage rules of the cache;
The cache comprises a client cache and an expanded cache formed by deploying multiple groups of servers between the client and the server side as expansion cache servers; each group of servers consists of one master server and several standby servers; the usage rules are the method for deciding whether the client cache and/or the expanded cache is used for storage;
S200: designing and implementing database tables on the cache servers for storing data; the cache comprises the client cache and the expanded cache;
S300: designing and implementing the data read-write rules after cache expansion;
S400: designing and implementing a cache management strategy;
S500: designing and implementing a cache optimization strategy.
Preferably, step S100 is as follows:
A cache control module is provided, the cache control module containing at least one control parameter; the cache usage rules are:
(1) if the first bit of the control parameter has the value V0, meaning closed, data is stored only in the client cache;
(2) if the first bit of the control parameter has the value V1, meaning open, data is stored in both the client cache and the expanded cache;
(3) if the first bit of the control parameter has the value V2, meaning open, data is stored only in the expanded cache;
(4) if the second bit of the control parameter has the value V3, meaning closed, cached data is read from the client cache, the client-cached data being the data stored on the client;
(5) if the second bit of the control parameter has the value V4, meaning open, cached data is read from the expanded cache, the expanded-cache data being the data stored on the expansion cache servers;
Wherein V0, V1, V2, V3, V4 may be of any data type.
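The two-bit control parameter above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names `WriteMode`, `ReadMode`, and `parse_control_parameter`, and the concrete values chosen for V0-V4, are all assumptions — the patent only requires that the values be distinguishable, whatever their type.

```python
from enum import Enum

# Hypothetical flag values; the patent leaves V0..V4 as arbitrary data types.
class WriteMode(Enum):
    V0 = "client_only"          # store only in the client cache
    V1 = "client_and_expanded"  # store in both caches
    V2 = "expanded_only"        # store only in the expanded cache

class ReadMode(Enum):
    V3 = "client"    # read from the client cache
    V4 = "expanded"  # read from the expanded cache

def parse_control_parameter(param: tuple) -> tuple:
    """Split a two-position control parameter into its write and read rules."""
    return WriteMode(param[0]), ReadMode(param[1])

w, r = parse_control_parameter(("client_and_expanded", "client"))
assert w is WriteMode.V1 and r is ReadMode.V3
```

Keeping the write bit and the read bit independent is what lets the system fall back to the legacy configuration by resetting a single parameter, as the description later notes.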
Preferably, the database table in step S200 is as follows:
The data to be cached is stored in the expanded cache in a two-dimensional structure divided into an information-cache dimension of the data interface and a name-cache dimension of the data interface. The key of the information-cache dimension of the data interface is the input parameters of the data interface, and its value is the return information of the data interface; the key of the name-cache dimension of the data interface is the data interface itself, and its value is the input parameters of the data interface.
Preferably:
The generation rule for the key of the information-cache dimension of the data interface is: a several-digit institution code + the system code + the method name of the data interface + the input parameters of the data interface + the serial number among identical data-interface method names;
The generation rule for the key of the name-cache dimension of the data interface is: a several-digit institution code + the system code + the method name of the data interface + the serial number among identical data-interface method names;
The institution code is the code of the user of the system;
The system code is the sequential code of the subsystem of the system;
The number of digits of the serial number among identical data-interface method names is greater than or equal to the number of digits of the count of identical method names; the value of the serial number is an integer incremented as a natural-number sequence, and if the serial number has more digits than the integer, zeros are padded before the most significant digit of the integer; if no data-interface method names are identical, the serial number consists entirely of zeros, the number of zeros equalling the digit width of the serial number.
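The key-generation rules can be sketched as simple string concatenation with a zero-padded serial number. All concrete values below (the codes, the method name `queryPolicy`, the two-digit width) are illustrative assumptions, not values taken from the patent:

```python
def build_info_cache_key(org_code: str, sys_code: str, method: str,
                         params: str, seq: int, seq_width: int = 2) -> str:
    """Information-cache-dimension key: institution code + system code +
    method name + input parameters + zero-padded serial number."""
    return f"{org_code}{sys_code}{method}{params}{seq:0{seq_width}d}"

def build_name_cache_key(org_code: str, sys_code: str, method: str,
                         seq: int, seq_width: int = 2) -> str:
    """Name-cache-dimension key: same layout without the input parameters."""
    return f"{org_code}{sys_code}{method}{seq:0{seq_width}d}"

# seq=0 (all zeros) covers the case where no method names are identical.
key = build_info_cache_key("0001", "02", "queryPolicy", "P12345", 0)
assert key == "000102queryPolicyP1234500"
```

Fixed-width zero padding keeps keys for the same interface lexicographically adjacent, which is convenient when scanning or clearing related cache entries.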
Preferably, the data read-write rules in step S300 are as follows:
Before the client sends a request, it first calls the data interface used to obtain the requested data, obtains the cache tag of that interface, and from the tag makes a preliminary judgment on whether the requested data is stored in the client cache or in the expanded cache; read-write operations then follow these principles:
S301: if the requested data is preliminarily judged to be stored only in the client cache, the client first queries its own cache; if the client cache does not hold the requested data, or the data has been marked invalid, the client sends a data request to the server side;
The server side returns a data response and the requested data to the client;
The client receives the data response and the requested data and stores and/or updates the requested data in the client cache;
S302: if the requested data is preliminarily judged to be in both the client cache and the expanded cache, the client first queries its own cache; if the client cache does not hold the requested data, or the data has been marked invalid, the client sends a data request to the expansion cache server holding the expanded cache;
If the expansion cache server has the requested data, it returns the data to the client, and the client receives the data and stores and/or updates it in the client cache;
If the expansion cache server does not have the requested data, it sends a response indicating so to the client; after receiving that response, the client sends the data request to the server side; the server side returns a data response and the requested data; the client receives them and stores and/or updates the requested data in both the client cache and the expanded cache;
S303: if the requested data is preliminarily judged to be in the expanded cache, the client sends the data request directly to the expansion cache server holding the expanded cache;
If the expansion cache server has the requested data, it returns the data to the client;
If it does not, it sends a response indicating so; after receiving that response, the client sends the data request to the server side; the server side returns a data response and the requested data; the client receives them and stores the data in the expanded cache;
Wherein, when the client sends a data request to an expansion cache server and communication with that server fails, the client sends the request directly to the server side; the server side returns a data response and the requested data, and the client receives them;
If the requested data returned by the server side needs to be stored and/or updated in the client cache, it is stored and/or updated there;
If the requested data returned by the server side needs to be stored on a designated expansion cache server: if the communication failure has not yet recovered, the client abandons the store operation; if the communication failure has recovered, the client stores the returned data on the designated expansion cache server.
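The S301-S303 read paths can be sketched as follows, under stated assumptions: the caches are plain dicts, the server side is a function, communication never fails, and the tags "V0V3"/"V1V3"/"V2V4" follow the control-parameter rules above. Every name here is illustrative, not from the patent:

```python
client_cache: dict = {}
expanded_cache: dict = {}

def server_fetch(key: str) -> str:
    return f"data-for-{key}"  # stands in for the real server side

def expanded_fetch(key: str):
    return expanded_cache.get(key)  # None plays the "no such data" response

def request(key: str, tag: str) -> str:
    if tag == "V0V3":                       # S301: client cache only
        if key in client_cache:
            return client_cache[key]
        value = server_fetch(key)
        client_cache[key] = value
        return value
    if tag == "V1V3":                       # S302: client first, then expanded
        if key in client_cache:
            return client_cache[key]
        value = expanded_fetch(key)
        if value is None:
            value = server_fetch(key)
            expanded_cache[key] = value     # miss everywhere: fill both caches
        client_cache[key] = value
        return value
    # S303 (tag "V2V4"): expanded cache only
    value = expanded_fetch(key)
    if value is None:
        value = server_fetch(key)
        expanded_cache[key] = value
    return value

assert request("k1", "V2V4") == "data-for-k1"
assert "k1" in expanded_cache and "k1" not in client_cache
```

The three branches differ only in which cache is consulted first and which caches are back-filled on a miss, which is exactly the degree of freedom the two-bit tag encodes.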
Preferably: before the client sends a data request to the server side, or before the client sends a data request to the expansion cache server, the client first constructs the key value of the requested data from the data being requested.
Preferably: the cache management strategy of step S400 comprises providing a visual operation interface. The interface can display the address information of the expansion cache servers currently communicating with the client without failure and the cached-data information stored on those expansion cache servers, wherein:
The cached-data information is displayed sorted by frequency of use;
The cached-data information comprises the name of the data interface used to obtain the cached data, the description of the data interface, and the operations that may be performed on the cached data;
The address information of the expansion cache servers comprises the IP and port of the write expansion cache server currently used for writing cached data, the IP and port of the read expansion cache server currently used for reading cached data, and the connection-state information of the write and read expansion cache servers, the connection state being either connectable or not connectable.
Preferably: the cached-data information is displayed as a list, the display comprising data-dictionary cache information and user-system cache information;
The data-dictionary cache information and the user-system cache information are each sorted by frequency of use from high to low; by default, the data-dictionary cache information shows the first several data-dictionary records in the ordering and hides the rest, and the hidden part can be shown or hidden through user interaction;
While a data-dictionary record is displayed, the system provides the ability to clear its cache;
For hidden data-dictionary records, the system also provides the ability to clear one or more specific data-dictionary records;
The user-system cache records are hidden by default and can be shown or hidden through user interaction; for hidden user-system cache records, the system provides the ability to clear one or more specific user cache records.
Preferably: the clear operation marks the cached data to be cleared as invalid.
Preferably, the optimization strategy in step S500 comprises a cache pre-warming device, the cache pre-warming device comprising a data-interface call-frequency statistics module and a periodic system-maintenance notification module. The call-frequency statistics module counts and sorts the data interfaces by frequency of use and, at system start-up, notifies the client and the server side of the statistics and ordering. The periodic-maintenance notification module allows the length of the maintenance period to be configured and, at the end of each period, announces to the system maintainers the call-frequency statistics and ordering collected during that period.
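The call-frequency statistics module reduces to counting interface calls and reporting a descending ordering. A minimal sketch, with class and method names that are assumptions rather than the patent's:

```python
from collections import Counter

class CallFrequencyStats:
    """Counts data-interface calls and ranks interfaces by frequency of use."""

    def __init__(self):
        self._counts = Counter()

    def record_call(self, interface_name: str) -> None:
        self._counts[interface_name] += 1

    def ranking(self):
        """Interfaces ordered from most to least frequently used."""
        return [name for name, _ in self._counts.most_common()]

stats = CallFrequencyStats()
for name in ["queryPolicy", "queryPolicy", "queryClaim",
             "queryPolicy", "queryClaim", "queryUser"]:
    stats.record_call(name)
assert stats.ranking() == ["queryPolicy", "queryClaim", "queryUser"]
```

The ranking is what gets announced to the client and server side at start-up and what the data push module later uses to decide which data to preload.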
Preferably:
After designing and implementing the cache pre-warming device, the method further comprises providing a data push module on the server side. The data push module can notify the client to mark as invalid the data stored in the client cache that has been updated on the server side; when data stored in the expanded cache is updated on the server side, the data push module can actively initiate an update operation towards the expansion cache server holding the expanded cache; at system start-up, the data push module also preloads data onto the expansion cache servers according to the statistics of the data-interface call-frequency statistics module.
Preferably: before the data push module actively initiates an update operation towards the expansion cache server, or before it preloads data onto the expansion cache servers at system start-up according to the call-frequency statistics, the server side first builds the keys of the data to be pushed.
Preferably, after designing and implementing the cache pre-warming device, the method further comprises designing and implementing a rule for allocating data between the client cache and the expanded cache, as follows:
S601: sort the data interfaces by frequency of use in descending order;
S602: tag the cached data according to its data interface by the following principles:
S6021: cached data whose data interface falls within the first 10% of the ordering is tagged V0V3, meaning the data is cached and read only in the client cache;
S6022: cached data whose data interface falls beyond the first 10% but within the first 20% of the ordering is tagged V1V3, meaning the data is cached in both the client cache and the expanded cache, and the client cache is read first when a data request is sent;
S6023: cached data whose data interface falls beyond the first 20% of the ordering is tagged V2V4, meaning the data is cached and read in the expanded cache;
Wherein V0, V1, V2, V3, V4 may be of any data type;
S603: announce the current ordering of the data-interface frequencies of use to the client and the server side.
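The S601-S602 allocation can be sketched as a rank-based tiering function. The 10%/20% boundaries follow the reading of the rules given above; the function name and the frequency map are illustrative assumptions:

```python
def allocate_tags(freq_by_interface: dict) -> dict:
    """Sort interfaces by frequency (descending) and tag the first 10%
    V0V3, the next 10% V1V3, and the remainder V2V4."""
    ordered = sorted(freq_by_interface, key=freq_by_interface.get, reverse=True)
    n = len(ordered)
    tags = {}
    for rank, name in enumerate(ordered, start=1):
        if rank <= 0.10 * n:
            tags[name] = "V0V3"   # client cache only
        elif rank <= 0.20 * n:
            tags[name] = "V1V3"   # both caches, client read first
        else:
            tags[name] = "V2V4"   # expanded cache only
    return tags

freqs = {f"iface{i}": 100 - i for i in range(10)}  # iface0 most frequent
tags = allocate_tags(freqs)
assert tags["iface0"] == "V0V3"
assert tags["iface1"] == "V1V3"
assert tags["iface9"] == "V2V4"
```

Because the tags reuse the V0-V4 values of step S100, re-running this allocation periodically amounts to re-tuning the control parameters from observed usage.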
Preferably:
Step S500 further comprises, after cached data has been cleared from an expansion cache server, sending a notice to the server side, the notice containing the size of the cleared cached data and the information on the data interfaces to which the cleared data belonged;
After receiving the notice, the server side obtains the current data through the data push module according to the data-interface information in the notice and, according to the cached-data size in the notice, determines the amount of data to send to the expanded cache.
Preferably: the method further comprises designing and implementing a data transmission strategy in which data is serialized with JSON before transmission, stored in binary form on the designated expansion cache server, and deserialized by the client after receipt before use.
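A minimal sketch of this transmission strategy, assuming the "binary form" is simply the UTF-8 encoding of the JSON text (the patent does not specify the binary encoding, and the payload below is invented for illustration):

```python
import json

def serialize(data) -> bytes:
    """JSON-serialize, then encode to bytes for transmission/storage."""
    return json.dumps(data).encode("utf-8")

def deserialize(raw: bytes):
    """Decode bytes and parse JSON back into Python objects on the client."""
    return json.loads(raw.decode("utf-8"))

payload = {"policyNo": "P12345", "status": "active"}
raw = serialize(payload)          # what the expanded cache would store
assert isinstance(raw, bytes)
assert deserialize(raw) == payload
```

Compared with the XML serialization the features section mentions, a compact JSON encoding carries far less markup overhead per record, which is the source of the claimed reduction in transmitted data volume.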
Preferably: the method adopts the Redis storage architecture.
Preferably: the method further uses the Redis sentinel program.
Preferably: the method starts the sentinel program by writing and executing a Linux shell script.
Preferably: step S400 further comprises adding, in the database of the server side, an address table for the expansion cache servers and inserting the address information of the expansion cache servers into that table; the client builds a connection pool to maintain the address information of the expansion cache servers.
Preferably: step S400 further comprises designing and implementing a consistent-hashing unit; by calling the consistent-hashing unit, the server side can determine the unique expansion cache server on which cache information is deposited, and the client can determine the unique expansion cache server from which the cache information is accessed.
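A minimal consistent-hash ring in the spirit of the consistent-hashing unit: client and server side that share the same server list deterministically map any cache key to the same unique expansion cache server. The class name, the use of MD5, the virtual-node count, and the server addresses are all assumptions for illustration:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps keys to servers; virtual nodes smooth the key distribution."""

    def __init__(self, servers, vnodes: int = 100):
        self._ring = sorted(
            (self._hash(f"{s}#{i}"), s) for s in servers for i in range(vnodes)
        )
        self._points = [h for h, _ in self._ring]

    @staticmethod
    def _hash(text: str) -> int:
        return int(hashlib.md5(text.encode()).hexdigest(), 16)

    def server_for(self, key: str) -> str:
        # First ring point at or after the key's hash, wrapping around.
        idx = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379"])
# Any party holding the same server list resolves the same key identically.
key = "000102queryPolicyP1234500"
assert ring.server_for(key) == ring.server_for(key)
```

The practical benefit over a plain modulo scheme is that adding or removing one expansion cache server remaps only the keys adjacent to it on the ring, rather than reshuffling the whole cache.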
Preferably: step S500 further comprises designing and implementing a cache log module; the cache log module writes the operations performed on the cache into a cache log file, the cache comprising the client cache and the expanded cache.
The present invention has the following features:
(1) the invention provides a method of introducing distributed caching into an enterprise application architecture, improving the performance and stability of the original system through software architecture;
(2) after the cache is expanded by this method, the whole system can be scaled horizontally as needed and can make full use of existing software and hardware resources;
(3) the invention provides visual management of the distributed cache, which is convenient to operate;
(4) after the cache is expanded, a high-performance serialization scheme replaces traditional XML serialization, markedly reducing the amount of data transmitted while raising overall system throughput; storing serialized data in the expanded cache also speeds up data processing;
(5) the invention adopts the Redis distributed caching framework, which can realize master-slave replication and read-write splitting and has high reliability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the steps of the method of expanding a cache through software architecture;
Fig. 2 is a schematic flow chart of cache use;
Fig. 3 is the system architecture diagram after the distributed expanded cache is adopted;
Fig. 4 is the system architecture diagram when the sentinel program is used.
Detailed description of the embodiments
In a basic embodiment, as shown in Fig. 1, a method of expanding a cache through software architecture comprises the following steps:
S100: designing and implementing the usage rules of the cache;
The cache comprises a client cache and an expanded cache formed by deploying multiple groups of servers between the client and the server side as expansion cache servers; each group of servers consists of one master server and several standby servers; the usage rules are the method for deciding whether the client cache and/or the expanded cache is used for storage;
S200: designing and implementing database tables on the cache servers for storing data; the cache comprises the client cache and the expanded cache;
S300: designing and implementing the data read-write rules after cache expansion;
S400: designing and implementing a cache management strategy;
S500: designing and implementing a cache optimization strategy.
In this embodiment, the expansion of the system cache can be achieved through the above steps, without concern for the implementation details of each step or for the hardware performance of the expanded cache.
Step S100 first deploys multiple groups of servers between the client and the server side as expansion cache servers. These servers act as an expanded cache on top of the original system and store most of the data that clients frequently request from the server side, so that a client can preferentially send data requests to the expansion cache servers. This reduces the number of requests made to the server side, lightens the server side's data-response burden, and speeds up the overall data-request responsiveness of the system.
The usage rules in step S100 are the method for judging whether the client cache or the server-side cache is used for reading and/or storing data. Preferably, a control parameter can be set in the system, and the cache location for reading and/or storing is decided by this parameter. One benefit of parameter control is that after the expanded cache goes into operation, the system can be switched back to the original configuration simply by resetting the parameter if it proves unstable.
After the cache usage rules are formulated, database tables must be designed for the data to be cached so that the data can be conveniently accessed and managed; the cache here comprises the client cache and the expanded cache of the expansion servers. Step S200 thus enables effective management of the stored cached data and targeted operations such as updating and/or deleting data.
For the cache usage rules of step S100, the data read-write rules must be designed and implemented, i.e. step S300. The design and implementation of step S300 effectively raises the processing speed and response capability of the system and, in particular, offloads part of the burden of responding to data requests from the server side.
Furthermore, designing and implementing the cache management and optimization strategies, i.e. steps S400 and S500, makes the system easy to administer and maintain and further optimizes its performance.
Through the above steps, the system is improved at the software level: the system's cache is enlarged, and by designing and implementing the corresponding cache usage rules, database tables, data read rules, and cache management and optimization strategies, resources can be used effectively, the performance of the original system is improved, and the system gains better extensibility.
In one embodiment, step S100 is implemented by providing a cache control module. The cache control module contains at least one control parameter, and the cache usage rules are then:
(1) if the first bit of the control parameter has the value V0, meaning closed, cached data is stored only in the client cache;
(2) if the first bit of the control parameter has the value V1, meaning open, cached data is stored in both the client cache and the expansion cache servers;
(3) if the first bit of the control parameter has the value V2, meaning open, cached data is stored only in the expansion cache servers;
(4) if the second bit of the control parameter has the value V3, meaning closed, cached data is read from the client cache;
(5) if the second bit of the control parameter has the value V4, meaning open, data is read from the expansion cache servers.
In this embodiment, neither the concrete values of V0-V4 nor their data types matter: they may be integers or strings, and when they represent the same meaning their values may be equal or different, as long as the system performs the corresponding operation in the implementation. Such cache rules are formulated for three reasons. First, when the new system malfunctions, it can be switched back to the legacy system quickly. Second, simultaneous use of the new and legacy systems is supported. Third, they lay the foundation for optimizing the allocation of data storage and reading once the usage of data in the system has later been investigated.
More excellent, for the buffer memory rule of described system, the present invention devises the rule that reads and writes data.First described client obtains the data-interface of institute's request msg when request msg, and tentatively judge that the buffer memory that institute's request msg uses is client-cache or expansion buffer memory by the cache tag of described data-interface, then carry out read-write operation according to following rule:
Before described client sends request msg, first by calling the data-interface of the request msg for obtaining and then obtaining the cache tag of described data-interface, and tentatively judge that asked data store at client-cache or in expansion buffer memory, then carry out read-write operation according to following principle by the cache tag obtained:
S301: if tentatively judge that the data of asking only store at client-cache, then first at described client-cache data query, if it is invalid that described client-cache does not have asked data or the data of asking to be set to, then described client sends request of data to service end;
Described service end is to described client return data response and the data of asking;
Described client receives described data and responds with the data of asking and asked data stored in described client-cache and/or upgrade;
S302: if tentatively judge that the data of asking are client-cache and expansion buffer memory, then first at described client-cache data query, if it is invalid that described client-cache does not have asked data or the data of asking to be set to, then described client sends request of data to the expansion caching server at described expansion buffer memory place;
If the expansion cache server holds the requested data, it returns them to the client; the client receives the requested data and stores and/or updates them in the client cache;
If the expansion cache server does not hold the requested data, it sends a no-data response to the client; after receiving that response, the client sends the data request to the service end; the service end returns a data response and the requested data to the client; the client receives the data response and the requested data, and stores and/or updates the requested data in the client cache and the expansion cache;
S303: if the preliminary judgment is that the requested data reside in the expansion cache, the client sends the data request directly to the expansion cache server holding that cache;
If the expansion cache server holds the requested data, it returns them to the client;
If the expansion cache server does not hold the requested data, it sends a no-data response to the client; after receiving that response, the client sends the data request to the service end; the service end returns a data response and the requested data; the client receives them and stores the data in the expansion cache;
Wherein, when the client sends a data request to an expansion cache server and communication with that server fails, the client sends the data request directly to the service end; the service end returns a data response and the requested data, which the client receives;
If the data returned by the service end need to be stored and/or updated in the client cache, the client stores and/or updates them in the client cache;
If the data returned by the service end need to be stored on a designated expansion cache server: while the communication failure persists, the client abandons the store operation; once communication recovers, the client stores the returned data on the designated expansion cache server.
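The failover rule above can be sketched in Java. This is an illustrative sketch only, not part of the patent text: the class and field names are our own, the expansion cache is modelled as an in-memory map, and the service end as a lookup function.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class FailoverRead {
    final Map<String, String> expansionCache = new HashMap<>();
    boolean expansionReachable = true;        // flips on a communication failure
    final Function<String, String> serviceEnd; // stands in for a service-end query

    FailoverRead(Function<String, String> serviceEnd) { this.serviceEnd = serviceEnd; }

    String get(String key) {
        if (expansionReachable) {
            String v = expansionCache.get(key);
            if (v != null) return v;           // cache hit on the expansion server
        }
        String v = serviceEnd.apply(key);      // failure or miss: go directly to the service end
        if (expansionReachable) {
            expansionCache.put(key, v);        // store once communication is available
        }                                      // else: abandon the store operation
        return v;
    }
}
```

The sketch collapses "abandon while failed, store after recovery" into a reachability flag checked at store time; the patent leaves the exact retry mechanics open.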
In a specific embodiment, the client data-request flow of Figure 2 is implemented; it embodies the data read/write rules of step S300.
Before sending a data request, the client of the system builds the keyword (key) of the requested data. The system handles a data request on the following principles:
1. If the second bit of the control parameter is V4, the client sends the data request to the expansion cache server;
1.1 After obtaining the value corresponding to the key, the expansion cache server returns that value to the client;
1.2 The client receives the value corresponding to the key and judges whether it is empty; if empty, it concludes that the expansion cache does not hold the requested data, and sends the data request to the service end;
1.3 The service end queries the data according to the request and returns them to the client. According to the state of the first bit of the control parameter, the client judges where to store the received data: if the first bit is V2, the key and the value of the received data are associated and the result is put into the expansion cache; if V1, the associated key-value result is put into the client cache and the expansion cache simultaneously; if V0, the associated key-value result is put into the client cache only;
2. If the second bit of the control parameter is V3, the client judges whether the value corresponding to the key is empty; if not empty, it obtains the requested data directly from the client cache and uses them; if empty, it sends the data request to the service end, and the subsequent processing is identical to 1.3. The meanings of V0, V1, V2, V3 and V4 are as above and are not repeated here.
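The control-parameter read path of points 1 and 2 can likewise be sketched in Java. The sketch assumes the first-bit semantics of claim 1 (V0 client cache only, V1 both caches, V2 expansion cache only); all names are illustrative and the two caches are modelled as in-memory maps.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CacheReadPath {
    final Map<String, String> clientCache = new HashMap<>();
    final Map<String, String> expansionCache = new HashMap<>();
    final Function<String, String> serviceEnd; // stands in for a service-end query

    CacheReadPath(Function<String, String> serviceEnd) { this.serviceEnd = serviceEnd; }

    /**
     * firstBit: "V0" store on client cache only, "V1" store on both caches,
     * "V2" store on expansion cache only (claim 1 semantics);
     * secondBit: "V3" read from client cache, "V4" read from expansion cache.
     */
    String get(String key, String firstBit, String secondBit) {
        Map<String, String> readFrom =
                secondBit.equals("V4") ? expansionCache : clientCache;
        String value = readFrom.get(key);
        if (value != null) return value;   // cache hit: use directly
        value = serviceEnd.apply(key);     // miss: query the service end (point 1.3)
        if (firstBit.equals("V0") || firstBit.equals("V1")) clientCache.put(key, value);
        if (firstBit.equals("V1") || firstBit.equals("V2")) expansionCache.put(key, value);
        return value;
    }
}
```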
Referring to Fig. 3, the overwhelming majority of client data requests are answered by the cache servers; only when the requested data miss in the cache is a request sent to the service end. This significantly reduces the response burden on the service end and improves the performance of the system as a whole.
In one embodiment, in order to manage the data to be cached and the cached data effectively, a two-dimensional storage structure is designed for the cached data. It is divided into an interface-information cache dimension and an interface-name cache dimension: the keyword of the interface-information cache dimension is the input parameters of the interface, and its value is the return information of the interface; the keyword of the interface-name cache dimension is the interface itself, and its value is the input parameters of the interface. The cached data comprise the data cached on the client and the data cached on the expansion cache servers.
More preferably, the keyword of the interface-information cache dimension is generated as: multi-digit institution code + system code + interface method name + all input-parameter names of the interface + same-method-name number;
The keyword of the interface-name cache dimension is generated as: multi-digit institution code + system code + interface method name + same-method-name number;
The institution code is the code of the user of the system;
The system code is the sequential code of the subsystem within the system;
The number of digits of the same-method-name number is greater than or equal to the number of digits in the count of data-interface methods sharing the same name; the value of the number is an integer increasing as a natural-number sequence, and if the number has more digits than the integer, the integer is left-padded with zeros up to its highest digit. If no two data-interface method names are identical, the same-method-name number is all zeros, the count of zeros equalling the digit width of the number.
In one embodiment, a motor-insurance underwriting subsystem is used by a company in Xi'an. The Xi'an institution code has 4 digits, say 1234; the underwriting subsystem is coded 0101. A certain interface method is Service.getInfo(String systemCode, PrpDplan prpDplan), and the system contains two functions of this name; to distinguish the identical interface names, the same-method-name number of this one is set to 1. The keyword of the interface-name cache dimension for Service.getInfo is then 1234-0101-ServicegetInfo-1, and its value is 1234-0101-ServicegetInfo-StringsystemCode-PrpDplanprpDplan-1; this value serves in turn as the keyword of the interface-information cache dimension, through which a concrete data object can be obtained.
In another embodiment, the whole system contains only one interface method of a given name, i.e. the interface method name is unique across the system; the same-method-name number is then a single 0.
In one embodiment, for ease of management, the same-method-name number is given 4 digits. If three interface methods share one name, they are numbered 0001, 0002 and 0003 in turn; if a name occurs only once, its number is 0000.
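A minimal key builder following the generation rules above might look as follows. The hyphen separators and helper names are assumptions suggested by the worked example (1234-0101-ServicegetInfo-1); they are not prescribed by the patent.

```java
public class CacheKeyBuilder {
    /** Zero-pads the same-method-name number to the chosen width, e.g. 1 -> "0001". */
    static String pad(int number, int width) {
        return String.format("%0" + width + "d", number);
    }

    /** Interface-name cache dimension key: institution + system + method name + number. */
    static String nameKey(String inst, String sys, String method, String number) {
        return inst + "-" + sys + "-" + method + "-" + number;
    }

    /** Interface-information cache dimension key: adds all input-parameter names. */
    static String infoKey(String inst, String sys, String method,
                          String[] params, String number) {
        return inst + "-" + sys + "-" + method + "-"
                + String.join("-", params) + "-" + number;
    }
}
```

With the embodiment's values, `nameKey("1234", "0101", "ServicegetInfo", "1")` reproduces the name-dimension key of the example, and `pad` reproduces the 4-digit numbering 0001/0000.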
Usually, before the client sends a data request to the service end, or before the client sends a data request to the expansion cache server, the client first constructs the key value of the requested data from the data being requested.
This completes the basic steps of expanding the cache by means of the software architecture. To further improve the data-read hit rate and the system performance, however, management and optimization strategies need to be designed and implemented on top of the expanded cache.
In order to manage the cache servers better, in another embodiment a cache-management strategy is designed and implemented. The strategy includes providing a visual operation interface, which displays the address information of the expansion cache servers currently communicating with the client without fault, and the cached-data information stored on those servers, wherein:
The cached-data information is displayed sorted by usage frequency;
The cached-data information comprises the name of the data interface used to obtain the cached data, the description of the data interface, and the operations that may be performed on the cached data;
The address information of the expansion cache servers comprises the IP and port of the write expansion cache server currently used for writing cached data, the IP and port of the read expansion cache server currently used for reading cached data, and the connection status of the write and read expansion cache servers, the status being connectable or not connectable.
Preferably, the cached-data information is displayed as a list, the display comprising data-dictionary cache information and user-system cache information;
The data-dictionary cache information and the user-system cache information are each sorted from high to low usage frequency. By default the data-dictionary cache information displays the first few data-dictionary records in the sort order and hides the remainder; the hidden part can be shown or hidden through user interaction;
While a data-dictionary record is displayed, the system provides the ability to clear its cache;
For hidden data-dictionary records, the system also provides the ability to clear one specific record or several records;
The user-system cache information records are hidden by default and can be shown or hidden through user interaction; for hidden user-system cache records, the system provides the ability to clear one specific record or certain records.
Preferably, the clear operation marks the cached data to be cleared as invalid.
To further improve the cache hit rate, after clearing expansion-cache data the cache-management module sends the service end a notice of the size of the cleared data and of the corresponding data-interface information; on receiving the notice, the service end obtains the data through the data interface and, according to the size information, determines the amount of data to send to the expansion cache through the data-push module.
More preferably, in order to distribute cached data evenly across the expansion cache servers, and so that the client knows which server stores which data and where data are stored when pushed, and can actively refresh data after deletion, a consistent-hash algorithm unit is added to both the client and the service end; the unit belongs to the cache-management module. In the system so improved, the service end determines the unique expansion cache server on which cache information is deposited by calling the consistent-hash algorithm unit, and the client determines the unique expansion cache server from which cache information is accessed by the same call.
Whenever the client judges that the requested data reside in the expansion cache, or the data-push module of the service end pushes data, the consistent-hash algorithm unit is called to determine the unique expansion cache server. Consistent hashing distributes the stored data evenly across all expansion servers and balances their load, which benefits system stability.
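A consistent-hash unit of the kind described can be sketched as a sorted ring of virtual nodes: a key is owned by the first server node clockwise from its hash, so the client and the data-push module resolve the same unique expansion cache server for the same key. The hash function (FNV-1a here) and the virtual-node count are our assumptions; the patent does not specify them.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHash {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    ConsistentHash(int virtualNodes) { this.virtualNodes = virtualNodes; }

    void addServer(String server) {
        for (int i = 0; i < virtualNodes; i++)
            ring.put(hash(server + "#" + i), server);   // place virtual nodes on the ring
    }

    void removeServer(String server) {
        for (int i = 0; i < virtualNodes; i++)
            ring.remove(hash(server + "#" + i));        // only this server's keys remap
    }

    /** Returns the unique expansion cache server owning the key. */
    String serverFor(String key) {
        if (ring.isEmpty()) return null;
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private int hash(String s) {
        int h = 0x811c9dc5;                             // FNV-1a, a stand-in hash
        for (char c : s.toCharArray()) { h ^= c; h *= 0x01000193; }
        return h & 0x7fffffff;
    }
}
```

The virtual nodes give the even distribution the patent aims for; removing one master/standby group remaps only the keys that group owned, which is what keeps the other expansion servers' caches warm.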
In another embodiment a cache-optimization strategy is also designed and implemented. The strategy includes a cache pre-heating device; refer to Figure 3. The pre-heating device comprises a data-interface call-frequency statistics module and a periodic system-maintenance notification module. The call-frequency statistics module counts and sorts the data interfaces by usage frequency and, at system start-up, reports the statistics and ordering to the client and the service end of the system. The periodic-maintenance module allows the length of the maintenance interval to be set; at the end of each interval it reports the usage frequencies and ordering of the data interfaces over that interval to the system maintenance staff.
The call-frequency statistics module helps in understanding how the various kinds of data in the system are used, and supplies supporting data for later system optimization. The periodic-maintenance module reports the usage of the various kinds of data to the maintenance staff by mail or SMS, helping them generate further optimization strategies for the system at a later stage.
In this embodiment, through the pre-heating device, data are preloaded onto the expansion cache servers according to the statistics of the call-frequency module, which raises the hit rate for most data and improves the processing capacity of the system. Preferably, the call-frequency statistics function is put to use before the expansion cache itself, so that a good effect is obtained as soon as the expansion cache comes into use. Even if the pre-heating device is only brought into use together with the expansion cache, the call-frequency statistics module will, after the improved system has run for a while, still deliver a high hit rate when the system is restarted.
Preferably, after the cache pre-heating device is in place, a data-push module is set at the service end. When data stored in the client cache are updated at the service end, the module notifies the client to mark them invalid; when data in the expansion cache are updated at the service end, as shown in Figure 3, the service end actively initiates the update operation on the expansion cache server holding the data. The data-push module is also used, at system start-up, to preload data onto the expansion cache servers according to the statistics of the call-frequency module. Before the data-push module initiates an active update to an expansion cache server, or before it preloads data at start-up, the service end first builds the key value of the data to be pushed.
The data-push module relieves the request and response pressure on the service end. Distinguishing between actively updating the expansion cache and merely marking the client cache invalid improves the hit rate of the expansion cache and reduces the data transferred to the client, which benefits the response performance of the system.
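The push-versus-invalidate distinction can be sketched as follows. The names are ours, and the expansion cache and the clients' invalidation notices are modelled as in-memory collections.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DataPushModule {
    final Map<String, String> expansionCache = new HashMap<>();
    final Set<String> clientInvalidKeys = new HashSet<>(); // keys clients must mark invalid

    /** Called at the service end when a datum held in the caches is updated. */
    void onServiceUpdate(String key, String newValue) {
        expansionCache.put(key, newValue);  // actively push the new value to the expansion cache
        clientInvalidKeys.add(key);         // clients only receive an invalidation notice
    }
}
```

The design choice mirrors the text: full values travel only to the expansion servers, while clients get a small invalidation signal and refetch on demand.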
More preferably, to further improve the read speed of cached data, on the basis of the cache pre-heating device a rule for allocating data between the client cache and the expansion cache is designed and implemented, as follows:
S601: sort the usage frequencies of the data interfaces in descending order;
S602: tag the cached data according to the data interface they belong to, by the following principle:
S6021: cached data whose data interface's usage frequency falls within the top 10% of the order are tagged V0V3, denoting that cached data are stored and read on the client cache only;
S6022: cached data whose data interface's usage frequency falls beyond the top 10% but within the top 20% of the order are tagged V1V3, denoting storage in both the client cache and the expansion cache, the client cache being read first when a data request is sent;
S6023: cached data whose data interface's usage frequency falls beyond the top 20% of the order are tagged V2V4, denoting that cached data are stored and read on the expansion cache only;
S603: announce the current ordering of data-interface usage frequencies to the client and the service end. This step mainly ensures that the client and the service end allocate data on the same basis.
The meanings of V0 to V4 are as above. Applying the Pareto principle to the usage frequency of the system's business data, the designer specifies which data stay in the client cache for convenient reading; which data live in both the client cache and the expansion cache, sharing the request and response burden on the service end; and which data live only in the expansion cache, sharing the service end's response pressure without unduly increasing the load on the client. All of this benefits the response performance of the system.
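Steps S601 and S602 can be sketched as a tagging routine. The percentile boundaries follow S6021 to S6023; the class and method names are ours.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CacheTagger {
    /**
     * Tags each data interface by its position in the descending-frequency
     * order of S601: top 10% -> V0V3 (client cache only), next 10% -> V1V3
     * (both caches, client read first), the rest -> V2V4 (expansion cache only).
     */
    static Map<String, String> tag(Map<String, Integer> usage) {
        List<String> sorted = new ArrayList<>(usage.keySet());
        sorted.sort((a, b) -> usage.get(b) - usage.get(a)); // S601: descending frequency
        Map<String, String> tags = new LinkedHashMap<>();
        int n = sorted.size();
        for (int i = 0; i < n; i++) {
            double pct = (i + 1) / (double) n;  // percentile position of this rank
            String t = pct <= 0.10 ? "V0V3"
                     : pct <= 0.20 ? "V1V3"
                     : "V2V4";
            tags.put(sorted.get(i), t);
        }
        return tags;
    }
}
```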
More preferably, step S500 also includes designing and implementing a log-cache module, which writes the operations performed on the cache (the client cache and the expansion cache) into a log-cache file. The logged operations include writing data to the cache, marking cached data invalid, and updating or deleting cached data.
In another embodiment, in order to raise system data throughput, reduce the amount of data transmitted, and speed up parsing, the method also includes designing and implementing a data-transmission strategy: data are serialized as JSON before transmission and stored in binary form on the designated expansion cache server, and the client deserializes them after receipt. If, however, the requested data are already in the client cache, no serialization or deserialization is needed.
Preferably, the method adopts the storage architecture of Redis. In another embodiment, to improve the reliability of the system, the sentinel program of Redis is used: a Linux script is written, Linux commands run the script, and the script in turn starts the sentinel program.
Controlling the sentinel program through scripts is convenient and simple to operate, making configuration and maintenance easier for the system staff and raising their working efficiency.
To use the sentinel program, the relevant parameters are configured in the sentinel configuration file on every server that runs a sentinel. In addition, the address of the sentinel's server is configured in the supporting Redis property file (Redis.properties) on the client servers, so that the clients can connect to the server where the sentinel runs; the sentinel maintains a message queue of client addresses, and a client registers in that queue after connecting to the sentinel's server.
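For illustration only, the sentinel configuration and the client-side property file might look like the fragment below. The sentinel.conf directives are standard Redis sentinel directives; the host addresses, the master name mymaster, and the property names in Redis.properties are assumptions, since the patent does not fix them.

```
# sentinel.conf on each server running a sentinel program (addresses illustrative)
port 26379
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

# Redis.properties on the client servers (property names assumed)
redis.sentinel.hosts=192.168.1.20:26379,192.168.1.21:26379
redis.master.name=mymaster
```

The quorum value 2 in `sentinel monitor` means two sentinels must agree before a master is declared down and a standby server is promoted.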
In order to administer and maintain the expansion cache servers, an address table for the expansion cache servers is added to the database of the service end, and the address information of the expansion cache servers is entered into it; on the client, a connection pool is built to maintain the address information of the expansion cache servers.
In another embodiment, after connecting to the sentinel's server, the client registers in the message queue and subscribes to the "server switch" message. When the sentinel sends the client the address information of a master expansion cache server, the client updates its stored address information. At system start-up the sentinel sends the client the addresses of all master servers; when a master server fails at run time, the sentinel promotes one of its standby servers to be the new master and informs the client of the new master's address. As shown in Figure 4, the system deploys several groups of Redis master/standby servers, monitored by a sentinel cluster containing several sentinel programs; if a master server fails, the cluster actively promotes one of that master's slaves to master and actively informs the client of the new cache-server status. Figure 4 also shows that, across the groups of master/standby servers, the client sends data requests to the designated expansion server through the consistent-hash algorithm; if that expansion server holds the requested data, it returns them to the client. Caching data on several master/standby groups reduces the client's requests to the service end, lowers the service end's response count, and raises the response speed of the whole system; monitoring the groups with the sentinel cluster improves the reliability of the system.
In one embodiment, in order to administer and maintain the expansion cache servers, step S400 also includes adding an address table for the expansion cache servers to the database of the service end and entering their address information into it, and building a connection pool on the client to maintain the address information of the expansion cache servers.
When the sentinel sends the client the address information of a master expansion cache server, the client updates its stored address information. At system start-up the sentinel sends the client the addresses of all master servers; when a master server fails at run time, the sentinel promotes one of its standby servers to be the new master and informs the client of the new master's address.
The embodiments in this specification are described progressively; each stresses its differences from the others, and the identical or similar parts of the embodiments can be cross-referenced. The system embodiments are described relatively briefly because they are basically similar to the method embodiments; for the relevant parts, refer to the description of the method embodiments.
The method of expanding cache through a software architecture provided by the present invention has been described in detail above. Specific cases are applied herein to set forth the principle and implementation of the invention; the description of the above embodiments is only meant to help in understanding the method of the invention and its core idea. At the same time, those of ordinary skill in the art will, according to the idea of the invention, make changes in specific implementation and scope of application. In summary, the contents of this description should not be construed as limiting the invention.

Claims (20)

1. A method of expanding cache using a software architecture, characterized in that the method comprises the following steps:
S100: designing and implementing the usage rules of the cache;
The cache comprising a client cache and, deployed between the client and the service end, several groups of servers serving as expansion cache servers that form the expansion cache; wherein each group of servers consists of one master server and several standby servers; the usage rules being the method of judging whether the client cache and/or the expansion cache performs storage;
S200: designing and implementing, on the cache servers, database tables for storing data; the cache comprising the client cache and the expansion cache;
S300: designing and implementing the data read/write rules of the expanded cache;
S400: designing and implementing a cache-management strategy;
S500: designing and implementing a cache-optimization strategy;
Wherein step S100 is specifically as follows:
A cache-control module is provided, holding at least one control parameter; the cache usage rules are:
(1) if the first-bit value of the control parameter is V0, denoting closed, data are stored on the client cache only;
(2) if the first-bit value of the control parameter is V1, denoting open, data are stored on the client cache and the expansion cache simultaneously;
(3) if the first-bit value of the control parameter is V2, denoting open, data are stored on the expansion cache only;
(4) if the second-bit value of the control parameter is V3, denoting closed, client-cached data are read from the client cache, the client-cached data being the data stored on the client;
(5) if the second-bit value of the control parameter is V4, denoting open, expansion-cached data are read from the expansion cache, the expansion-cached data being the data stored on the expansion cache servers;
Wherein V0, V1, V2, V3 and V4 are of arbitrary data types.
2. The method according to claim 1, characterized in that the database tables in step S200 are as follows:
The data to be cached are stored on the expansion cache in a two-dimensional structure, divided into a data-interface information cache dimension and a data-interface name cache dimension; the keyword of the information cache dimension of the data interface is the input parameters of the data interface, and its value is the return information of the data interface; the keyword of the name cache dimension of the data interface is the data interface itself, and its value is the input parameters of the data interface.
3. The method according to claim 2, characterized in that:
The keyword of the data-interface information cache dimension is generated as: multi-digit institution code + system code + method name of the data interface + all input-parameter names of the data interface + same-method-name number;
The keyword of the data-interface name cache dimension is generated as: multi-digit institution code + system code + data-interface method name + same-method-name number;
The institution code is the code of the user of the system;
The system code is the sequential code of the subsystem within the system;
The number of digits of the same-method-name number is greater than or equal to the number of digits in the count of data-interface methods sharing the same name; the value of the number is an integer increasing as a natural-number sequence, and if the number has more digits than the integer, the integer is left-padded with zeros up to its highest digit. If no two data-interface method names are identical, the same-method-name number is all zeros, the count of zeros equalling the digit width of the number.
4. method according to claim 1, is characterized in that, described in step S300, the read-write rule of data is as follows:
Before described client sends request msg, first by calling the data-interface of the request msg for obtaining and then obtaining the cache tag of described data-interface, and tentatively judge that asked data store at client-cache or in expansion buffer memory, then carry out read-write operation according to following principle by the cache tag obtained:
S301: if tentatively judge that the data of asking only store at client-cache, then first at described client-cache data query, if it is invalid that described client-cache does not have asked data or the data of asking to be set to, then described client sends request of data to service end;
Described service end is to described client return data response and the data of asking;
Described client receives described data and responds with the data of asking and asked data stored in described client-cache and/or upgrade;
S302: if tentatively judge that the data of asking are client-cache and expansion buffer memory, then first at described client-cache data query, if it is invalid that described client-cache does not have asked data or the data of asking to be set to, then described client sends request of data to the expansion caching server at described expansion buffer memory place;
If the expansion caching server holds the requested data, it returns the requested data to the client, and the client receives the data and stores and/or updates them in the client cache;
If the expansion caching server does not hold the requested data, it returns a response indicating that the requested data are absent; upon receiving this response, the client sends the data request to the service end; the service end returns a data response together with the requested data to the client; the client receives the data response and the requested data, and stores and/or updates the requested data in the client cache and the expansion cache;
S303: if the requested data are preliminarily judged to belong to the expansion cache, the client sends the data request directly to the expansion caching server on which the expansion cache resides;
If the expansion caching server holds the requested data, it returns the requested data to the client;
If the expansion caching server does not hold the requested data, it returns a response indicating that the requested data are absent; upon receiving this response, the client sends the data request to the service end; the service end returns a data response together with the requested data; the client receives the data response and the requested data and stores the data in the expansion cache;
Wherein, if communication with the expansion caching server fails while the client is sending a data request to it, the client sends the data request directly to the service end; the service end returns a data response together with the requested data, and the client receives the data response and the requested data;
If the requested data returned by the service end need to be stored and/or updated in the client cache, the client stores and/or updates them in the client cache;
If the requested data returned by the service end need to be stored on a designated expansion caching server: the client abandons the storage operation if the communication failure has not yet recovered, and stores the returned data on the designated expansion caching server once the communication failure has recovered.
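The read path and failure fallback described above can be sketched in a few lines. This is a minimal in-memory illustration only: the dictionaries standing in for the client cache, the expansion caching server, and the service end, and the `read` function name, are assumptions for the sketch, not part of the claimed method.

```python
# Minimal sketch of the read path: try the expansion caching server first,
# fall back to the service end on a miss or on a communication failure,
# then write back to the caches that need the data.
client_cache = {}                                       # stand-in for the client cache
extension_cache = {"policy:42": "P42"}                  # stand-in for an expansion caching server
service_end = {"policy:42": "P42", "policy:99": "P99"}  # authoritative data store

def read(key, extension_up=True):
    """Return the value for `key`, following the fallback order of claim 4."""
    if extension_up and key in extension_cache:
        value = extension_cache[key]
        client_cache[key] = value        # store/update in the client cache
        return value
    # Miss, or communication failure: ask the service end directly.
    value = service_end[key]
    client_cache[key] = value
    if extension_up:                     # write back only while the link is alive;
        extension_cache[key] = value     # otherwise the storage operation is abandoned
    return value
```

For example, a read of a key absent from the expansion cache falls through to the service end and then populates both caches, while `extension_up=False` models the abandon-on-failure rule.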
5. The method according to claim 4, characterized in that: before the client sends a data request to the service end, or before the client sends a data request to the expansion caching server, the client first constructs the key value of the requested data from the request.
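The key construction of claim 5 can be sketched as follows. The claim only requires that a key value be built before the request is sent; the particular scheme here (interface name plus an MD5 digest of the sorted parameters) is an illustrative assumption.

```python
import hashlib

def build_key(interface, params):
    """Build a deterministic cache key from the data-interface name and the
    request parameters. Sorting the parameters makes the key independent of
    argument order; hashing keeps the key short. The exact scheme is an
    illustrative assumption, not mandated by the claim."""
    canonical = "&".join(f"{k}={params[k]}" for k in sorted(params))
    digest = hashlib.md5(canonical.encode("utf-8")).hexdigest()
    return f"{interface}:{digest}"
```

The same key is produced regardless of the order in which the caller supplies parameters, so client and service end agree on it without coordination.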
6. The method according to claim 1, characterized in that the cache management strategy of S400 comprises providing a visual operation interface, the interface displaying the address information of the expansion caching servers currently communicating with the client without failure and the cached-data information stored on those servers, wherein:
the cached-data information is displayed in order of usage frequency;
the cached-data information comprises, for each cached datum, the name of the data interface used to obtain it, the data interface description, and the operations that may be performed on the cached data;
the address information of the expansion caching servers comprises the IP and port of the write expansion caching server currently used for writing cached data, the IP and port of the read expansion caching server currently used for reading cached data, and the connection state of the write and read expansion caching servers, the connection state being either connectable or not connectable.
7. The method according to claim 6, characterized in that the cached-data information is shown as a list comprising data dictionary cache information and user system cache information;
the data dictionary cache information and the user system cache information are each sorted by usage frequency from high to low, the data dictionary cache information by default displaying the first several data dictionary records in the sort order and hiding the remainder, the hidden part being shown or hidden through user interaction;
when a data dictionary record is displayed, the system provides the ability to clear its cache;
for hidden data dictionary records, the system also provides the ability to clear one or more specific data dictionary records;
the user system cache information records are hidden by default and are shown or hidden through user interaction; for hidden user system cache information records, the system provides the ability to clear one or more specific user cache information records.
8. The method according to claim 7, characterized in that the clear operation marks the cached data to be cleared as invalid.
9. The method according to claim 8, characterized in that the optimization strategy of step S500 comprises a cache preheating device, the cache preheating device comprising a data interface call frequency statistics module and a periodic system maintenance notification module; the data interface call frequency statistics module counts and sorts the usage frequencies of the data interfaces and, at system startup, notifies the client and the service end of the usage frequency statistics and sort order; the periodic system maintenance notification module allows the length of the maintenance period to be set and, at the end of each period, reports the usage frequency statistics and sort order gathered during that period to the system maintenance personnel.
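The call frequency statistics module can be sketched with a simple counter. The class and method names are assumptions for illustration; the claim specifies only that calls are counted and sorted by frequency.

```python
from collections import Counter

class CallFrequencyStats:
    """Sketch of the data-interface call frequency statistics module:
    count each call, then report interfaces sorted by usage frequency,
    most frequently used first."""
    def __init__(self):
        self.counts = Counter()

    def record_call(self, interface):
        self.counts[interface] += 1

    def ranking(self):
        # Counter.most_common returns (name, count) pairs sorted by count.
        return [name for name, _ in self.counts.most_common()]

stats = CallFrequencyStats()
for call in ["getPolicy", "getUser", "getPolicy", "getPolicy", "getUser", "getRate"]:
    stats.record_call(call)
```

At system startup the resulting ranking would be pushed to the client and service end; at the end of each maintenance period the same ranking would be reported to maintenance personnel.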
10. The method according to claim 9, characterized in that:
the method further comprises, after the cache preheating device is designed and implemented, deploying a data push module at the service end; the data push module notifies the client to mark as invalid any data stored in the client cache that have been updated at the service end; when data stored in the expansion cache are updated at the service end, the data push module actively initiates an update operation on the expansion caching server on which the expansion cache resides; the data push module also preloads data onto the expansion caching servers at system startup according to the statistics of the data interface call frequency statistics module.
11. The method according to claim 10, characterized in that: before the data push module actively initiates an update operation on the expansion caching server, or before the data push module preloads data onto the expansion caching server at system startup according to the statistics of the data interface call frequency statistics module, the service end first constructs the key value of the data to be pushed.
12. The method according to claim 11, characterized in that the method further comprises, after the cache preheating device is designed and implemented, designing and implementing rules for allocating data between the client cache and the expansion cache, as follows:
S601: sort the usage frequencies of the data interfaces in descending order;
S602: tag cached data according to their data interface, by the following principle:
S6021: cached data whose data interface falls within the first 10% of the sort order are tagged V0V3, indicating that the cached data are stored and read only in the client cache;
S6022: cached data whose data interface falls beyond the first 10% but within the first 20% of the sort order are tagged V1V3, indicating that the cached data are stored in both the client cache and the expansion cache, and that a data request reads the client cache first;
S6023: cached data whose data interface falls beyond the first 20% of the sort order are tagged V2V4, indicating that the cached data are stored and read in the expansion cache;
wherein V0, V1, V2, V3, V4 are arbitrary data types;
S603: notify the client and the service end of the current sort order of the usage frequencies of the data interfaces.
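Steps S601–S602 can be sketched as a tagging function. The percentile boundaries and tag strings follow the claim; reading the "10%/20%" thresholds as measured from the head of the descending sort (most frequently used first) is an interpretive assumption, as is the function name.

```python
def allocate_tags(frequencies):
    """Sketch of S601-S602: sort data interfaces by usage frequency in
    descending order (S601), then tag each interface's cached data by its
    position in the sort order (S602):
      first 10%  -> V0V3 (client cache only)
      next 10%   -> V1V3 (both caches, client cache read first)
      remainder  -> V2V4 (expansion cache only)"""
    ordered = sorted(frequencies, key=frequencies.get, reverse=True)  # S601
    n = len(ordered)
    tags = {}
    for rank, interface in enumerate(ordered, start=1):
        position = rank / n                  # fraction of the sort order covered
        if position <= 0.10:
            tags[interface] = "V0V3"         # client cache only
        elif position <= 0.20:
            tags[interface] = "V1V3"         # both caches, client first
        else:
            tags[interface] = "V2V4"         # expansion cache only
    return tags
```

With ten interfaces, the single hottest one is kept client-side only, the second lands in both caches, and the remaining eight live in the expansion cache.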
13. The method according to claim 12, characterized in that:
step S500 further comprises, after cached data are cleared from an expansion caching server, sending a notice to the service end, the notice containing the size of the cleared cached data and the data interface information of the cleared cached data;
upon receiving the notice, the service end obtains the current data through the data push module according to the data interface information in the notice, and determines the volume of data to send to the expansion cache according to the size of the cleared cached data.
14. The method according to claim 1, characterized in that: the method further comprises designing and implementing a data transmission strategy whereby data are serialized as JSON before transmission and stored in binary form on the designated expansion caching server, and the client deserializes the data after receiving them.
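The serialization strategy of claim 14 amounts to a JSON round trip with a binary wire form. A minimal sketch, with illustrative function names:

```python
import json

def serialize(value):
    """Serialize a value to JSON and encode it to UTF-8 bytes -- the binary
    form in which the value is stored on the expansion caching server."""
    return json.dumps(value, ensure_ascii=False).encode("utf-8")

def deserialize(payload):
    """Reverse step performed by the client after receiving the bytes."""
    return json.loads(payload.decode("utf-8"))
```

With a Redis-backed expansion cache this pairs naturally with byte-valued `SET`/`GET` operations, since Redis string values are binary-safe.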
15. The method according to claim 1, characterized in that: the method adopts the storage architecture of Redis.
16. The method according to claim 15, characterized in that: the method further uses the Redis sentinel program.
17. The method according to claim 16, characterized in that: the method starts the sentinel program by writing and executing a Linux script.
18. The method according to claim 1, characterized in that: step S400 further comprises adding, to the database of the service end, an address storage table for the expansion caching servers and inserting the address information of the expansion caching servers into the storage table; and building, in the client, a connection pool that maintains the address information of the expansion caching servers.
19. The method according to claim 1, characterized in that: step S400 further comprises designing and implementing a consistent hashing unit; by calling the consistent hashing unit, the service end determines the unique expansion caching server on which cache information is stored, and the client determines the unique expansion caching server from which cache information is accessed.
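The consistent hashing unit of claim 19 can be sketched as a hash ring. Because both client and service end call the same deterministic ring, a given key always resolves to the same unique expansion caching server. The class name, MD5 as the hash function, and 100 virtual nodes per server are illustrative assumptions.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hashing unit: map a cache key to the unique
    expansion caching server responsible for it. Virtual nodes (replicas)
    smooth the key distribution across servers."""
    def __init__(self, servers, replicas=100):
        self._ring = sorted(
            (self._hash(f"{server}#{i}"), server)
            for server in servers
            for i in range(replicas)
        )
        self._points = [point for point, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode("utf-8")).hexdigest(), 16)

    def server_for(self, key):
        # First ring point clockwise from the key's hash, wrapping around.
        index = bisect.bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[index][1]
```

Adding or removing one server only remaps the keys adjacent to its virtual nodes, which is what makes this scheme attractive when the pool of expansion caching servers changes.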
20. The method according to claim 1, characterized in that: step S500 further comprises designing and implementing a log cache module, the log cache module writing the operations performed on the cache into a log cache file, the cache comprising the client cache and the expansion cache.
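The log cache module of claim 20 can be sketched with the standard `logging` library. The record layout and function names are assumptions; the claim only requires that cache operations be appended to a log cache file.

```python
import logging

def format_cache_record(cache, op, key):
    """One log line per cache operation; the field layout is an
    illustrative assumption."""
    return f"cache={cache} op={op} key={key}"

def make_cache_logger(path):
    """Logger that appends each cache-operation record to the log cache
    file at `path`, covering both the client cache and the expansion cache."""
    logger = logging.getLogger("cache")
    logger.setLevel(logging.INFO)
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        handler = logging.FileHandler(path, encoding="utf-8")
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
    return logger
```

A caller would log, for example, `make_cache_logger("cache.log").info(format_cache_record("client", "read", "policy:42"))` after each cache hit, miss, store, or clear.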
CN201410482639.8A 2014-09-19 2014-09-19 A kind of method using software architecture to expand buffer memory Active CN104202424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410482639.8A CN104202424B (en) 2014-09-19 2014-09-19 A kind of method using software architecture to expand buffer memory

Publications (2)

Publication Number Publication Date
CN104202424A (en) 2014-12-10
CN104202424B true CN104202424B (en) 2016-01-27

Family

ID=52087649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410482639.8A Active CN104202424B (en) 2014-09-19 2014-09-19 A kind of method using software architecture to expand buffer memory

Country Status (1)

Country Link
CN (1) CN104202424B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559497A (en) * 2016-12-06 2017-04-05 郑州云海信息技术有限公司 A kind of distributed caching method of WEB server based on daemon thread
WO2019090780A1 (en) * 2017-11-13 2019-05-16 深圳市华阅文化传媒有限公司 High-availability id generator, and id generation method and device thereof
CN108874903A (en) * 2018-05-24 2018-11-23 中国平安人寿保险股份有限公司 Method for reading data, device, computer equipment and computer readable storage medium
CN108897495B (en) * 2018-06-28 2023-10-03 北京五八信息技术有限公司 Cache updating method, device, cache equipment and storage medium
CN109614404B (en) * 2018-11-01 2023-08-01 创新先进技术有限公司 Data caching system and method
CN109739516B (en) * 2018-12-29 2023-06-20 深圳供电局有限公司 Cloud cache operation method and system
CN110825986B (en) * 2019-11-05 2023-03-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system
CN102333108A (en) * 2011-03-18 2012-01-25 北京神州数码思特奇信息技术股份有限公司 Distributed cache synchronization system and method


Also Published As

Publication number Publication date
CN104202424A (en) 2014-12-10

Similar Documents

Publication Publication Date Title
CN104202423B (en) A kind of system by software architecture expansion buffer memory
CN104202424B (en) A kind of method using software architecture to expand buffer memory
CN110225074B (en) Communication message distribution system and method based on equipment address domain
AU2013347807B2 (en) Scaling computing clusters
CN103067433B (en) A kind of data migration method of distributed memory system, equipment and system
CN103905537A (en) System for managing industry real-time data storage in distributed environment
CN105025053A (en) Distributed file upload method based on cloud storage technology and system
EP3575968A1 (en) Method and device for synchronizing active transaction lists
US11188229B2 (en) Adaptive storage reclamation
CN101937474A (en) Mass data query method and device
CN107800808A (en) A kind of data-storage system based on Hadoop framework
US10810054B1 (en) Capacity balancing for data storage system
CN108848132A (en) A kind of distribution scheduling station system based on cloud
CN109739435A (en) File storage and update method and device
CN105975614A (en) Cluster configuration device and data updating method and device
CN107908713A (en) A kind of distributed dynamic cuckoo filtration system and its filter method based on Redis clusters
US20220391411A1 (en) Dynamic adaptive partition splitting
CN102982033A (en) Small documents storage method and system thereof
CN105187489A (en) File transfer method and system capable of clustering and supporting multiple users to upload simultaneously
EP3709173B1 (en) Distributed information memory system, method, and program
CN117131080A (en) Data processing platform based on stream processing and message queue
CN103685359A (en) Data processing method and device
EP4323881A1 (en) Geographically dispersed hybrid cloud cluster
CN102541759B (en) Cache control device and cache control method
CN112800066A (en) Index management method, related device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant