CN104202424A - Method for extending cache by software architecture - Google Patents

Info

Publication number: CN104202424A (application CN201410482639.8A; granted as CN104202424B)
Authority: CN (China)
Prior art keywords: data, cache, client, expansion, interface
Legal status: Granted; active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 王和, 邵利铎, 何栋, 王吉玲, 安然, 潘曦
Original and current assignee: PICC PROPERTY AND CASUALTY Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by PICC PROPERTY AND CASUALTY Co Ltd; priority to CN201410482639.8A

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a method for extending a cache through software architecture. The method comprises the steps: S100, designing and implementing the usage rules of the cache, where the cache comprises a client cache and extended cache servers — multiple groups of servers deployed between the client side and the server side as the extended cache, each group consisting of one master server and several standby servers — and the usage rules are the method for deciding whether data is stored in the client cache, the extended cache, or both; S200, designing and implementing the database tables used to store data on the cache servers, the cache comprising the client cache and the extended cache; S300, designing and implementing the data read/write rules that apply after the cache is extended; S400, designing and implementing cache management strategies; S500, designing and implementing cache optimization strategies. The invention provides a method for introducing distributed caching into an enterprise application architecture, by which the performance and stability of the original system can be improved.

Description

A method for extending a cache through software architecture
Technical field
The present invention relates to the field of computers, and in particular to a method for extending a cache through software architecture.
Background art
In recent years, with the rapid development of business, the volume of data carried, processed, and exchanged by the systems and subsystems that enterprises use has been growing quickly. Although these systems are developed on first-class enterprise-grade frameworks and have many merits, the extremely fast growth of traffic confronts them with the new challenges of big data and high concurrency, and puts considerable pressure on the performance and stability of the whole core system. This pressure can currently be resisted by adding hardware resources, but adding hardware without limit is impractical; the problem must instead be addressed by transforming the software architecture.
Summary of the invention
In view of the above problems, the invention provides a method for extending a cache through software architecture. Besides solving the heavy load and slow response of the current underwriting system, the invention can also relieve other business systems whose load has become excessive through the rapid growth of business data and which therefore need distributed caching of that data.
A method for extending a cache through software architecture, characterized in that the method comprises the following steps:
S100: design and implement the usage rules of the cache.
The cache comprises a client cache and extended cache servers, i.e. multiple groups of servers deployed between the client side and the server side as the extended cache; each group of servers consists of one master server and several standby servers; the usage rules are the method for deciding whether data is stored in the client cache, the extended cache, or both.
S200: design and implement the database tables used to store data on the cache servers; the cache comprises the client cache and the extended cache.
S300: design and implement the data read/write rules that apply after the cache is extended.
S400: design and implement the cache management strategies.
S500: design and implement the cache optimization strategies.
Preferably, step S100 is as follows:
A cache control module is provided, containing at least one control parameter. The cache usage rules are:
(1) if the first position of the control parameter has value V0, it means "off": data is stored only in the client cache;
(2) if the first position has value V1, it means "on": data is stored in both the client cache and the extended cache;
(3) if the first position has value V2, it means "on": data is stored only in the extended cache;
(4) if the second position has value V3, it means "off": cached data is read from the client cache, the client-cached data being the data stored on the client;
(5) if the second position has value V4, it means "on": cached data is read from the extended cache, the extended cached data being the data stored on the extended cache server;
where V0, V1, V2, V3, and V4 may be of any data type.
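As a minimal illustration of the two-position control parameter above, the following Python sketch maps the first position to the cache tiers written and the second to the tier read first. The concrete values ("0" through "4") and all names are assumptions — the patent only requires that V0–V4 be distinguishable values of any type:

```python
# Assumed single-character encodings for V0..V4; any distinguishable
# values of any type would satisfy the rule.
WRITE_CLIENT_ONLY, WRITE_BOTH, WRITE_EXTENDED_ONLY = "0", "1", "2"
READ_CLIENT, READ_EXTENDED = "3", "4"

def write_targets(control: str) -> set:
    """Return the cache tiers a value should be written to (rules 1-3)."""
    return {
        WRITE_CLIENT_ONLY: {"client"},
        WRITE_BOTH: {"client", "extended"},
        WRITE_EXTENDED_ONLY: {"extended"},
    }[control[0]]

def read_source(control: str) -> str:
    """Return the cache tier a read should go to first (rules 4-5)."""
    return "client" if control[1] == READ_CLIENT else "extended"
```

One benefit of keeping the parameter this simple is the fallback the description mentions later: switching the whole system back to client-only caching is a one-parameter change.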
Preferably, the database table of step S200 is as follows:
The data to be cached is stored on the extended cache in a two-dimensional structure, divided into a data-interface information cache dimension and a data-interface name cache dimension. The key of the information cache dimension is the input parameters of a data interface, and its value is the return value of that data interface; the key of the name cache dimension is the data interface itself, and its value is the interface's input parameters.
Preferably:
The generation rule for the key of the information cache dimension is: fixed-width institution code + system code + method name of the data interface + all input parameters of the data interface + sequence number among identical method names.
The generation rule for the key of the name cache dimension is: fixed-width institution code + system code + method name of the data interface + sequence number among identical method names.
The institution code is the code of the system's user; the system code is the sequential code of the subsystem of the system.
The width of the sequence number among identical method names is greater than or equal to the number of digits of the count of interfaces sharing that method name; its value is an integer increasing as the natural numbers, left-padded with zeros up to the sequence-number width. If no two data interfaces share a method name, the sequence number is zero, padded with zeros to the full width.
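The key-generation rule above can be sketched as follows. The field names, example codes, and the two-digit sequence width are illustrative assumptions, not values given by the patent:

```python
def make_info_cache_key(inst_code, sys_code, method, params, seq=0, width=2):
    """Information-cache-dimension key: institution code + system code +
    interface method name + all input parameters + zero-padded sequence
    number among identically named methods."""
    joined_params = "".join(str(p) for p in params)
    return f"{inst_code}{sys_code}{method}{joined_params}{str(seq).zfill(width)}"

def make_name_cache_key(inst_code, sys_code, method, seq=0, width=2):
    """Name-cache-dimension key: the same prefix without the input
    parameters."""
    return f"{inst_code}{sys_code}{method}{str(seq).zfill(width)}"
```

When no other interface shares the method name, `seq` stays 0 and the suffix is all zeros, matching the rule's default case.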
Preferably, the data read/write rules of step S300 are as follows:
Before sending a data request, the client first obtains the cache mark of the data interface through which the requested data is obtained, and from that mark makes a preliminary judgment of whether the requested data is stored in the client cache or the extended cache; it then performs reads and writes by the following principles:
S301: if the preliminary judgment is that the requested data is stored only in the client cache, the client first queries the client cache; if the client cache does not hold the requested data, or the data has been marked invalid, the client sends the data request to the server side.
The server side returns a data response and the requested data to the client.
The client receives the data response and the requested data, and stores and/or updates the requested data in the client cache.
S302: if the preliminary judgment is the client cache and the extended cache, the client first queries the client cache; if the client cache does not hold the requested data, or the data has been marked invalid, the client sends the data request to the extended cache server holding the extended cache.
If the extended cache server holds the requested data, it returns the data to the client, which also stores and/or updates it in the client cache.
If the extended cache server does not hold the requested data, it sends the client a response saying so; on receiving that response, the client sends the data request to the server side. The server side returns a data response and the requested data to the client; the client receives them and stores and/or updates the requested data in both the client cache and the extended cache.
S303: if the preliminary judgment is the extended cache, the client sends the data request directly to the extended cache server holding the extended cache.
If the extended cache server holds the requested data, it returns the data to the client.
If the extended cache server does not hold the requested data, it sends the client a response saying so; on receiving that response, the client sends the data request to the server side. The server side returns a data response and the requested data; the client receives them and stores the data in the extended cache.
In all cases, if communication between the client and the extended cache server fails while a data request is being sent, the client sends the request directly to the server side; the server side returns a data response and the requested data, and the client receives them.
If the returned data needs to be stored and/or updated in the client cache, it is stored and/or updated there.
If the returned data needs to be stored on a designated extended cache server, then: if the communication failure has not yet recovered, the client abandons the store operation; if it has recovered, the client stores the returned data on the designated extended cache server.
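The tiered read path of S301–S303, including the fall-through to the origin server and the abandoned store on an unrecovered communication failure, can be sketched as follows. The function signature and the dict-based stand-ins for the caches are assumptions for illustration:

```python
def fetch(key, use_client, use_extended, client_cache, extended_cache, fetch_from_server):
    """Sketch of the S301-S303 read path with tier backfill."""
    # S301/S302: try the client cache first when it is enabled
    if use_client and key in client_cache:
        return client_cache[key]
    # S302/S303: then try the extended cache server
    if use_extended:
        try:
            if key in extended_cache:
                value = extended_cache[key]
                if use_client:
                    client_cache[key] = value  # backfill the client tier
                return value
        except ConnectionError:
            pass  # communication failure: fall straight through to the server
    # miss everywhere: ask the origin server and backfill the caches
    value = fetch_from_server(key)
    if use_client:
        client_cache[key] = value
    if use_extended:
        try:
            extended_cache[key] = value
        except ConnectionError:
            pass  # failure not recovered: abandon the store, as the rule requires
    return value
```

A second request for the same key is then served from the nearest enabled tier without touching the server side.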
Preferably, before the client sends a data request to the server side, or to the extended cache server, the client first constructs the key of the requested data from the requested data.
Preferably, regarding the cache management strategy of S400: the strategy includes a visual operation interface displaying the address information of the extended cache servers currently communicating with the client without fault, together with the cached-data information stored on those servers, where:
the cached-data information is displayed sorted by usage frequency;
the cached-data information comprises the name and description of the data interface used to obtain the cached data, and the operations that may be performed on the cached data;
the address information of an extended cache server comprises the IP and port of the server currently used for writing cached data, the IP and port of the server currently used for reading cached data, and the connection state of both the write server and the read server, the connection state being connectable or not connectable.
Preferably, the cached-data information is displayed as a list comprising data-dictionary cache records and user-system cache records.
Both kinds of records are sorted by usage frequency from high to low. By default, only the first few data-dictionary records are shown and the rest are hidden; the hidden part can be shown or hidden again through user interaction.
For the shown data-dictionary records, the system provides the ability to clear the cache.
For the hidden data-dictionary records, the system also provides the ability to clear a particular record or a set of records.
The user-system cache records are hidden by default and are shown or hidden through user interaction; for hidden user-system records, the system provides the ability to clear a particular user's cache record or records.
Preferably, the clear operation marks the cached data to be cleared as invalid.
Preferably, the optimization strategy of step S500 includes a cache pre-warming device comprising a data-interface call-frequency statistics module and a system periodic-maintenance notification module. The statistics module counts and sorts the usage frequency of the data interfaces, and notifies the client and server sides of the system of the statistics and ranking when the system starts. The notification module allows the length of the maintenance period to be set; when a period ends, it reports the frequency statistics and ranking accumulated during that period to the system maintenance personnel.
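The call-frequency statistics module described above might look like the following sketch; the class and method names are assumptions:

```python
from collections import Counter

class InterfaceCallStats:
    """Counts data-interface calls and ranks them by usage frequency,
    as the pre-warming device's statistics module requires."""
    def __init__(self):
        self.calls = Counter()

    def record(self, interface_name: str) -> None:
        """Record one call to the named data interface."""
        self.calls[interface_name] += 1

    def ranking(self):
        """Interface names sorted by call frequency, highest first."""
        return [name for name, _ in self.calls.most_common()]
```

The ranking is what gets broadcast to clients and the server side at start-up, and later drives both pre-loading and the tier-allocation rules of S601–S603.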
Preferably:
After the cache pre-warming device has been designed and implemented, the method further provides a data-push module on the server side. When data stored in the client cache is updated on the server side, the module can notify the client to mark that data invalid; when data in the extended cache is updated on the server side, the module can actively push the update to the extended cache server holding the extended cache; and at system start-up the module preloads data onto the extended cache server according to the statistics of the call-frequency module.
Preferably, before the data-push module pushes an update to the extended cache server, or before it preloads data onto the extended cache server at system start-up according to the call-frequency statistics, the server side first builds the keys of the data to be pushed.
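A minimal sketch of the server-side data-push module — invalidating client copies, pushing updates to the extended cache, and preloading hot data at start-up. The class shape, callback, and dict-based cache stand-in are all assumptions:

```python
class DataPushModule:
    """Server-side push module: invalidate, push, preload."""
    def __init__(self, notify_client_invalid, extended_cache):
        self.notify_client_invalid = notify_client_invalid  # callback to clients
        self.extended_cache = extended_cache                # stand-in for the cache server

    def on_server_update(self, key, value, in_client, in_extended):
        # data cached on clients: tell clients to mark their copy invalid
        if in_client:
            self.notify_client_invalid(key)
        # data held in the extended cache: push the fresh value directly
        if in_extended:
            self.extended_cache[key] = value

    def preload(self, ranking, loader, top_n):
        # at start-up, preload the hottest keys per the call-frequency ranking
        for key in ranking[:top_n]:
            self.extended_cache[key] = loader(key)
```

The invalidate-on-client / push-to-extended split mirrors the two tiers: clients refetch lazily, while the extended cache is kept warm eagerly.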
Preferably, after the cache pre-warming device has been designed and implemented, the method further designs and implements rules for allocating data between the client cache and the extended cache, as follows:
S601: sort the data interfaces by usage frequency in descending order;
S602: mark the cached data, according to the data interface it belongs to, by the following principles:
S6021: cached data whose data interface falls within the top 10% of the ranking is marked V0V3, meaning it is stored in and read from the client cache only;
S6022: cached data whose data interface falls above 10% and within 20% of the ranking is marked V1V3, meaning it is stored in both the client cache and the extended cache, and read first from the client cache when a data request is sent;
S6023: cached data whose data interface falls below the top 20% of the ranking is marked V2V4, meaning it is stored in and read from the extended cache;
where V0, V1, V2, V3, V4 may be of any data type;
S603: notify the client and server sides of the current frequency ranking of the data interfaces.
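The allocation rules S601–S603 can be sketched as follows, using the literal strings "V0V3"/"V1V3"/"V2V4" as stand-ins for the marks (the patent allows any data type):

```python
def assign_cache_marks(freq_by_interface):
    """S601-S602: rank interfaces by call frequency (descending) and mark
    the top 10% V0V3 (client only), the next 10% V1V3 (both tiers),
    and the remaining 80% V2V4 (extended cache only)."""
    ranked = sorted(freq_by_interface, key=freq_by_interface.get, reverse=True)
    n = len(ranked)
    marks = {}
    for pos, name in enumerate(ranked, start=1):
        if pos <= 0.10 * n:
            marks[name] = "V0V3"   # hottest data: client cache only
        elif pos <= 0.20 * n:
            marks[name] = "V1V3"   # both tiers, read client first
        else:
            marks[name] = "V2V4"   # extended cache only
    return marks
```

Placing the hottest tenth in the client cache keeps the most frequent reads local, while the long tail lives on the horizontally scalable extended cache.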
Preferably:
Step S500 also includes, after cached data on an extended cache server has been cleared, sending the server side a notice containing the size of the cleared cached data and the data-interface information of the cleared cached data;
upon receiving the notice, the server side has the data-push module obtain the current data according to the data-interface information in the notice, and determines from the cached-data size in the notice the amount of data to send to the extended cache.
Preferably, the method further comprises designing and implementing a data transfer policy: data is serialized and transferred as JSON, stored in binary form on the designated extended cache server, and deserialized by the client after it receives the data, before use.
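The transfer policy — JSON serialization on the wire, binary storage on the cache server — can be sketched with the standard library; the function names and UTF-8 encoding choice are assumptions:

```python
import json

def serialize(payload) -> bytes:
    """JSON-serialize a response and encode it to bytes, the binary
    form stored on the extended cache server."""
    return json.dumps(payload, ensure_ascii=False).encode("utf-8")

def deserialize(raw: bytes):
    """Decode the stored bytes and parse the JSON back into objects
    before the client uses the data."""
    return json.loads(raw.decode("utf-8"))
```

Compared with the XML serialization the features section mentions replacing, JSON payloads are smaller and cheaper to parse, which is the throughput gain the patent claims.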
Preferably, the method adopts the storage architecture of Redis.
Preferably, the method also uses the Sentinel program of Redis.
Preferably, the Sentinel program is started by writing and executing a Linux script.
Preferably, step S400 also includes adding an address table for the extended cache servers to the server-side database and inserting the address information of the extended cache servers into that table, and building a connection pool on the client to maintain the address information of the extended cache servers.
Preferably, step S400 also comprises designing and implementing a consistent-hashing unit: by calling it, the server side can determine the unique extended cache server on which a piece of cache information is stored, and the client can determine the unique extended cache server from which to access that cache information.
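A minimal consistent-hash ring illustrating how both sides can independently map a cache key to the single extended cache server that owns it. The virtual-node count, MD5 hash, and node naming are assumptions, not choices stated in the patent:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Maps cache keys to the unique owning node; any process building
    the ring from the same node list gets the same mapping."""
    def __init__(self, nodes, replicas=100):
        self.ring = []  # sorted list of (hash, node) virtual nodes
        for node in nodes:
            for i in range(replicas):
                h = self._hash(f"{node}#{i}")
                bisect.insort(self.ring, (h, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Return the first virtual node clockwise of the key's hash."""
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, ""))
        if idx == len(self.ring):
            idx = 0  # wrap around the ring
        return self.ring[idx][1]
```

Because the mapping depends only on the node list, the server side and every client resolve a given key to the same extended cache server without coordinating.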
Preferably, step S500 also comprises designing and implementing a log cache module that writes the operations performed on the cache into a log cache file, the cache comprising the client cache and the extended cache.
The present invention has the following features:
(1) it provides a method for introducing distributed caching into an enterprise application architecture, improving the performance and stability of the original system through software architecture;
(2) after the cache has been extended by this method, the whole system can be scaled horizontally as required, making full use of existing software and hardware resources;
(3) it provides visual cache management for the distributed cache, which is convenient to operate;
(4) after the cache is extended, a high-performance serialization scheme replaces traditional XML serialization, significantly reducing the volume of transferred data while raising overall system throughput; serializing data before storing it on the extended cache also speeds up data processing;
(5) the Redis distributed caching framework is adopted, enabling master-slave replication and read/write separation, with high reliability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the steps of the method for extending a cache through software architecture;
Fig. 2 is a schematic flow chart of cache usage;
Fig. 3 is the system architecture diagram after the distributed extended cache is adopted;
Fig. 4 is the system architecture diagram using the Sentinel program.
Detailed description
In a basic embodiment, as shown in Fig. 1, a method for extending a cache through software architecture comprises the following steps:
S100: design and implement the usage rules of the cache. The cache comprises a client cache and extended cache servers, i.e. multiple groups of servers deployed between the client side and the server side as the extended cache; each group of servers consists of one master server and several standby servers; the usage rules are the method for deciding whether data is stored in the client cache, the extended cache, or both.
S200: design and implement the database tables used to store data on the cache servers; the cache comprises the client cache and the extended cache.
S300: design and implement the data read/write rules that apply after the cache is extended.
S400: design and implement the cache management strategies.
S500: design and implement the cache optimization strategies.
In this embodiment, the extension of the system cache is achieved through the above steps; the implementation details of each step need not concern the user, nor does the hardware performance of the extended cache.
First, through step S100, multiple groups of servers are deployed between the client and the server side as extended cache servers. These servers act as an extension of the original system's cache and store most of the data the client frequently requests from the server side, so that the client can preferentially send data requests to the extended cache servers when requesting data, thereby reducing the number of requests to the server side, relieving its data-response load, and speeding up the overall request-handling capacity of the system.
The usage rules of step S100 are the method for deciding whether data is read and/or stored using the client cache or the server-side cache. Preferably, a control parameter can be provided in the system to decide the cache location where data is read and/or stored. One benefit of control by parameter is that, if the system is unstable after the extended cache goes into operation, the system can be switched back to the original system simply by setting the parameter.
After the cache usage rules have been formulated, to ease access to and management of the data, the database tables in which the cached data is stored need to be designed; the cache here comprises the client cache and the extended cache of the extension servers. Step S200 thus makes it possible to manage the stored cached data effectively and to update and/or delete data in a targeted way.
For the cache usage rules of step S100, the data read/write rules must be designed and implemented, i.e. step S300; their design and implementation effectively raise the system's processing speed and responsiveness, and in particular offload the burden of responding to data requests from the server side.
Further, by designing and implementing the cache management and optimization strategies, i.e. steps S400 and S500, the system becomes convenient to administer and maintain and its performance is further optimized.
Through the above steps the system is improved from the software side: the system's cache is enlarged, and by designing and implementing the corresponding cache usage rules, database tables, data read rules, and cache management and optimization strategies, resources can be fully and effectively used, the performance of the original system improves, and the system gains better extensibility.
In one embodiment, step S100 is implemented by providing a cache control module containing at least one control parameter. The cache usage rules are:
(1) if the first position of the control parameter has value V0, it means "off": cached data is stored only in the client cache;
(2) if the first position has value V1, it means "on": cached data is stored in both the client cache and the extended cache server;
(3) if the first position has value V2, it means "on": cached data is stored only in the extended cache server;
(4) if the second position has value V3, it means "off": cached data is read from the client cache;
(5) if the second position has value V4, it means "on": data is read from the extended cache server.
In this embodiment, the concrete values of V0–V4 do not matter, nor do their data types — they may be integers or strings, and values denoting the same meaning may be the same or different, as long as the system performs the corresponding operation during implementation. Formulating the cache rules this way serves three purposes: first, when the new system malfunctions, it can be switched back to the legacy system quickly; second, the new and legacy systems are supported simultaneously; third, it lays the foundation for optimizing data storage allocation and reads after the usage of data in the system has subsequently been surveyed.
More excellent, for the buffer memory rule of described system, the present invention has designed the rule that reads and writes data.First described client obtains the data-interface of institute's request msg when request msg, and tentatively judge that by the buffer memory mark of described data-interface the buffer memory that institute's request msg is used is client-cache or expansion buffer memory, then according to following rule, carry out read-write operation:
Before described client is sent request msg, first by calling the data-interface of the request msg for obtaining and then obtaining the buffer memory mark of described data-interface, and the data of asking by the preliminary judgement of buffer memory mark obtaining are store or in expansion buffer memory, then according to following principle, carry out read-write operation at client-cache:
S301: if the data that preliminary judgement is asked are only stored at client-cache, first at described client-cache data query, if described client-cache does not have asked data or the data of asking are set to invalidly, described client sends request of data to service end;
Described service end is to described client return data response and the data of asking;
Described in described client, data respond with the data of asking and asked data are stored in described client-cache and/or upgrade;
S302: if the data that preliminary judgement is asked are client-cache and expansion buffer memory, first at described client-cache data query, if described client-cache does not have asked data or the data of asking are set to invalidly, described client sends request of data to the expansion caching server at described expansion buffer memory place;
If the expansion caching server has the requested data, it returns the requested data to the client, and the client stores and/or updates the requested data in the client cache;
If the expansion caching server does not have the requested data, it sends the client a response indicating that the requested data is unavailable; after receiving this response, the client sends the data request to the service end; the service end returns a data response and the requested data to the client; the client receives the data response and the requested data and stores and/or updates the requested data in the client cache and the expansion cache;
S303: if the preliminary judgment is that the requested data is in the expansion cache, the client sends the data request directly to the expansion caching server where the expansion cache resides;
If the expansion caching server has the requested data, it returns the requested data to the client;
If the expansion caching server does not have the requested data, it sends the client a response indicating that the requested data is unavailable; after receiving this response, the client sends a data request to the service end; the service end returns a data response and the requested data to the client; the client receives the data response and the requested data and stores the data in the expansion cache;
Wherein, when the client sends a data request to the expansion caching server and communication with the expansion caching server fails, the client sends the data request directly to the service end; the service end returns a data response and the requested data to the client, and the client receives them;
If the requested data returned from the service end needs to be stored and/or updated in the client cache, the requested data is stored and/or updated in the client cache;
If the requested data returned from the service end needs to be stored on a designated expansion caching server: if the communication failure has not yet recovered, the client abandons the storage operation; if the communication failure has recovered, the client stores the returned requested data on the designated expansion caching server.
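The read path of steps S301–S303, including the fallback to the service end when communication with the expansion caching server fails and the deferred expansion-cache write once the failure recovers, can be sketched as follows. This is a minimal illustration under assumed names; the class `CacheClient`, its methods, and the surrounding objects are not from the patent:

```python
class CacheClient:
    """Illustrative client implementing the S302 read path (client cache
    plus expansion cache) with fallback to the service end on failure."""

    def __init__(self, local_cache, expansion, service):
        self.local = local_cache          # dict acting as the client cache
        self.expansion = expansion        # has get/put; may raise ConnectionError
        self.service = service            # callable: key -> requested data
        self.pending_writes = []          # writes deferred during a comm failure

    def get_both(self, key):
        """Data marked as cached on both the client cache and the expansion cache."""
        if key in self.local:             # first query the client cache
            return self.local[key]
        try:
            value = self.expansion.get(key)
        except ConnectionError:
            # Communication failure: go straight to the service end and
            # defer the expansion-cache write until the link recovers.
            value = self.service(key)
            self.local[key] = value
            self.pending_writes.append((key, value))
            return value
        if value is None:                 # miss on the expansion cache
            value = self.service(key)
            self.expansion.put(key, value)
        self.local[key] = value           # store/update in the client cache
        return value
```

A usage note: on recovery of the link, the entries in `pending_writes` would be flushed to the designated expansion caching server, matching the rule that the storage operation is abandoned only while the failure persists.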
In a specific embodiment, the process by which a client requests data is implemented as shown in Fig. 2; this process embodies the data read-write rules of step S300.
Before sending a data request, the client of the system builds the keyword (key) of the requested data. The handling principles the system adopts when requesting data are:
1. If the second bit of the control parameter is V4, the client sends the data request to the expansion caching server;
1.1 After obtaining the value corresponding to the key, the expansion caching server returns that value to the client;
1.2 The client receives the value corresponding to the key and judges whether it is empty; if it is empty, the client concludes that the expansion cache does not contain the requested data and sends a data request to the service end;
1.3 The service end queries the data according to the request and returns the data to the client. According to the state of the first bit of the control parameter, the client judges where to store the received data: if the first bit is V2, the key of the received data is associated with its value and the result is put into the expansion cache; if V1, the result of associating the key with its value is put into both the expansion cache and the client cache; if V0, the result is put into the client cache only;
2. If the second bit of the control parameter is V3, the client judges whether the value corresponding to the key in the client cache is empty; if not empty, the requested data is obtained directly from the client cache and used; if empty, a data request is sent to the service end, and the subsequent processing is the same as in 1.3. The meanings of V0, V1, V2, V3 and V4 are as above and are not repeated here.
Referring to Fig. 3, in most cases a client requesting data deals with the caching servers; only when the requested data misses in the cache is a request sent to the service end, which greatly relieves the response burden on the service end and improves the performance of the system as a whole.
In one embodiment, in order to manage the cached data effectively, a two-dimensional storage structure is designed for the cached data. The two-dimensional storage structure is divided into an interface-information cache dimension and an interface-name cache dimension: the keyword of the interface-information cache dimension is the input parameters of an interface, and its value is the return information of that interface; the keyword of the interface-name cache dimension is the interface, and its value is the interface's input parameters. The cached data includes the cached data stored in the client and the cached data stored on the expansion caching servers.
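A minimal sketch of the two-dimensional structure, using plain dictionaries; the function and variable names are illustrative assumptions, not from the patent:

```python
# Two cache dimensions, sketched with plain dicts:
#   name dimension : interface name   -> its input parameters (the info-dimension key)
#   info dimension : input parameters -> the interface's return information
name_dim = {}   # interface-name cache dimension
info_dim = {}   # interface-information cache dimension

def cache_result(interface, params, result):
    """Record one cached call in both dimensions."""
    name_dim[interface] = params
    info_dim[params] = result

def lookup_by_interface(interface):
    """Follow the two dimensions: interface -> params -> cached return value."""
    params = name_dim.get(interface)
    return None if params is None else info_dim.get(params)
```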
Preferably, the generation rule for the keyword of the interface-information cache dimension is: multi-digit institution code + system code + interface method name + all input-parameter names of the interface + number distinguishing identical interface method names;
The generation rule for the keyword of the interface-name cache dimension is: multi-digit institution code + system code + interface method name + number distinguishing identical interface method names;
The institution code is the code of the system's user;
The system code is the sequential code of the subsystem of the system;
The number of digits of the number distinguishing identical interface method names is greater than or equal to the number of digits of the count of identical method names; the number takes integer values increasing in natural order. If the number has more digits than the integer, zeros are padded before the highest digit of the integer; if no data-interface method name occurs more than once, the number for that method name is all zeros, with as many zeros as the number has digits.
In one embodiment, a company in Xi'an uses the vehicle-insurance underwriting subsystem. The institution code of Xi'an has 4 digits, assumed here to be 1234; the vehicle-insurance underwriting subsystem is coded 0101; a certain interface method is Service.getInfo(String systemCode, PrpDplan prpDplan), and there are 2 such functions in the system. To distinguish the identical interface names, the number of this interface method name is set to 1. The keyword of the interface-name cache dimension for the interface name Service.getInfo is then 1234-0101-ServicegetInfo-1, and its value is 1234-0101-ServicegetInfo-String systemCode-PrpDplan prpDplan-1; this value serves as the keyword of the interface-information cache dimension, through which a concrete data object can be obtained.
In another embodiment, the whole system has only one interface method with a given name, i.e. the name of the interface method is unique in the whole system, and the number for identical interface method names is then a single 0.
In one embodiment, for convenience of management, the number for identical data-interface method names is set to 4 digits. If a given interface method name occurs 3 times, the numbers are 0001, 0002 and 0003 in turn; if it occurs only once, the number is 0000.
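The key-generation rules of this embodiment (4-digit numbering, an all-zero number for unique method names) can be sketched as follows. The function names are illustrative assumptions; the "-" separator follows the worked example above:

```python
def method_number(index, total_with_same_name, width=4):
    """Number distinguishing identical method names, zero-padded to `width`
    digits; a method name that occurs only once gets all zeros (0000)."""
    if total_with_same_name <= 1:
        return "0" * width
    return str(index).zfill(width)

def name_dim_key(org_code, sys_code, method, number):
    # institution code + system code + interface method name + numbering
    return "-".join([org_code, sys_code, method, number])

def info_dim_key(org_code, sys_code, method, param_names, number):
    # institution code + system code + interface method name
    # + all input-parameter names + numbering
    return "-".join([org_code, sys_code, method] + list(param_names) + [number])
```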
Usually, before the client sends a data request to the service end, or before the client sends a data request to the expansion caching server, the client first constructs the key value of the requested data according to the requested data.
This completes the basic steps of extending the cache by means of software architecture; however, in order to further improve the hit rate of data reads and improve system performance, management and optimization strategies for the extended cache also need to be designed and implemented.
In order to manage the caching servers better, in another embodiment a cache management strategy is designed and implemented. The cache management strategy includes providing a visual operation interface, which can display the address information of the expansion caching servers currently communicating with the client without faults and the cached-data information stored on those expansion caching servers, wherein:
The cached-data information is displayed sorted by frequency of use;
The cached-data information includes the name of the data interface used to obtain the cached data, a description of the data interface, and the operations that can be performed on the cached data;
The address information of an expansion caching server includes the IP and port of the expansion caching server currently used for writing cached data, the IP and port of the expansion caching server currently used for reading cached data, and the connection-state information of the write and read expansion caching servers; the connection-state information includes connectable and not connectable.
Preferably, the cached-data information is displayed as a list, and the display includes data-dictionary cache information and user-system cache information;
The data-dictionary cache information and the user-system cache information are each sorted from high to low by frequency of use. By default, the data-dictionary cache information shows the first several data-dictionary records in the sort order and hides the rest; the hidden part can be shown or hidden through user interaction;
When data-dictionary records are shown, the system provides the ability to clear their cache;
For hidden data-dictionary records, the system also provides the ability to clear a specific data-dictionary record or a set of data-dictionary records;
The user-system cache information records are hidden by default and can be shown or hidden through user interaction; for hidden user-system cache information records, the system provides the ability to clear a specific user cache record or a set of user cache records.
Preferably, the clear operation marks the cached data to be cleared as invalid.
To further improve the cache hit rate, after the cache management module clears expansion cached data, it sends the service end a notification containing the size of the cleared cached data and its data-interface information; after receiving this notification, the service end obtains data according to the data-interface information through the data-pushing module and determines, according to the size information, the amount of data to send to the expansion cache.
More preferably, in order to distribute cached-data storage evenly across the servers of the expansion cache, to let the client know which data a server is to store, to push the data to be stored, and to actively update data after deletion, a consistent-hashing unit is added on both the client and the server; the consistent-hashing unit belongs to the cache management module. In the system thus improved, the service end can determine the unique expansion caching server on which cache information is deposited by calling the consistent-hashing unit, and the client can likewise determine the unique expansion caching server from which cache information is to be accessed.
When the client judges that the requested data is in the expansion cache, and when the data-pushing module of the service end pushes data, the consistent-hashing unit is called to determine the unique expansion caching server. Consistent hashing distributes stored data evenly across all expansion servers and balances their load, which is conducive to system stability.
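As an illustration of the consistent-hashing unit, the following is a minimal hash ring that both client and service end could call to agree on the unique expansion caching server. The class name, virtual-node count and MD5 hashing are assumptions for the sketch, not choices specified in the patent:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: maps each key to exactly one expansion
    caching server; virtual nodes even out the distribution across servers."""

    def __init__(self, servers, vnodes=100):
        self._ring = []                       # sorted list of (hash, server)
        for s in servers:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{s}#{i}"), s))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def server_for(self, key):
        """Client and service end both call this to agree on the unique server."""
        i = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[i][1]
```

Because both sides build the ring from the same server list, they deterministically pick the same server for a given cache key, which is what lets the service end push data to exactly the server a client will later read from.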
In another embodiment, a cache optimization strategy is also designed and implemented. The cache optimization strategy includes a cache pre-heating device; refer to Fig. 3. The cache pre-heating device includes a data-interface call-frequency statistics module and a system periodic-maintenance notification module. The call-frequency statistics module counts and sorts the data interfaces by frequency of use and, when the system starts, notifies the client and the service end of the statistics and ordering. The periodic-maintenance notification module allows a maintenance period to be set; at the end of each period, it circulates the frequency statistics and ordering of the data interfaces over that period to system maintenance personnel.
The call-frequency statistics module helps in understanding the usage of the various kinds of data in the system and provides data support for later system optimization. The periodic-maintenance notification module can report the usage of the various kinds of data to system maintenance personnel by mail or SMS, helping them further optimize the system and generate optimization strategies for the later stage.
In this embodiment, using the pre-heating device to preload data onto the expansion caching servers according to the statistics of the call-frequency statistics module improves the hit rate of most data and raises the processing capacity of the system. Preferably, the call-frequency statistics module is put to use before the expansion cache itself, so that a good effect is obtained as soon as the expansion cache is used. Even if the pre-heating device is brought into use only together with the expansion cache, the statistics module can still yield a high hit rate on restart after the improved system has run for a period of time.
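A minimal sketch of the call-frequency statistics module described above; the class and method names are illustrative assumptions. It records calls, reports the ordering circulated at system start and to maintenance staff, and yields the list of interfaces whose data the pushing module could preload:

```python
from collections import Counter

class CallFrequencyStats:
    """Illustrative data-interface call-frequency statistics module."""

    def __init__(self):
        self.counts = Counter()

    def record(self, interface_name):
        """Count one call to a data interface."""
        self.counts[interface_name] += 1

    def ranking(self):
        """Frequency statistics and ordering, descending by use."""
        return self.counts.most_common()

    def preheat_list(self, top_n):
        """Interfaces whose data the pushing module preloads onto the
        expansion caching servers at system start."""
        return [name for name, _ in self.counts.most_common(top_n)]
```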
Preferably, after the cache pre-heating device is designed and implemented, a data-pushing module is set up at the service end. The data-pushing module can notify the client that data stored in the client cache has been updated at the service end and is therefore invalid. When data in the expansion cache is updated at the service end, as shown in Fig. 3, the service end actively initiates an update operation on the expansion caching server where that expansion cache resides. The data-pushing module is also used, when the system starts, to preload data onto the expansion caching servers according to the statistics of the call-frequency statistics module. Before the data-pushing module actively initiates an update operation on an expansion caching server, or before it preloads data at system start according to those statistics, the service end first builds the key values of the data to be pushed.
The use of the data-pushing module relieves the request and response pressure on the service end. Distinguishing between performing data-update operations on the expansion cache and marking client data invalid improves the cache hit rate of the expansion cache while reducing the volume of data transmitted to clients, which improves the response performance of the system.
More preferably, to further improve the reading speed of cached data, on the basis of the cache pre-heating device a rule for allocating data between the client cache and the expansion cache is designed and implemented, as follows:
S601: sort the data interfaces by frequency of use in descending order;
S602: mark the cached data, according to the data interface it belongs to, following these principles:
S6021: cached data whose data interface falls within the top 10% of the ordering is marked V0V3, indicating that the cached data is stored and read on the client cache only;
S6022: cached data whose data interface falls beyond the top 10% but within the top 20% of the ordering is marked V1V3, indicating that the cached data is stored on both the client cache and the expansion cache and, when a data request is sent, is read first from the client cache;
S6023: cached data whose data interface falls beyond the top 20% of the ordering is marked V2V4, indicating that the cached data is stored and read on the expansion cache;
S603: circulate the current ordering of the data interfaces by frequency of use to the client and the service end. This step mainly ensures that the client and the service end allocate data on the same basis.
Here the meanings of V0–V4 are as above. By applying the Pareto principle to the frequency of use of the system's business data, it is specified which data is deposited in the client cache for convenient reading; which data is kept both in the client cache and in the expansion cache, sharing the request-response burden of the service end; and which data is kept only in the expansion cache, sharing the request-response pressure of the service end without unduly increasing the pressure on the client. This improves the response performance of the system.
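The allocation rule S601–S603 can be sketched as follows, using the 10%/20% thresholds above; positions are taken in the descending frequency ordering, and the function name is an illustrative assumption:

```python
from collections import Counter

def assign_cache_marks(call_counts):
    """Tier data interfaces by call frequency (S601-S602):
       top 10% of the ordering -> 'V0V3' (client cache only)
       top 10%-20%             -> 'V1V3' (client cache and expansion cache)
       the rest                -> 'V2V4' (expansion cache only)"""
    ranked = [name for name, _ in Counter(call_counts).most_common()]  # S601
    n = len(ranked)
    marks = {}
    for pos, name in enumerate(ranked, start=1):                       # S602
        if pos <= 0.10 * n:
            marks[name] = "V0V3"
        elif pos <= 0.20 * n:
            marks[name] = "V1V3"
        else:
            marks[name] = "V2V4"
    return marks
```

The resulting mapping is what would be circulated to both client and service end in S603, so that both sides mark and allocate data identically.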
More preferably, step S500 also includes designing and implementing a log-cache module, which writes the operations performed on the cache into a log cache file; the cache comprises the client cache and the expansion cache. The operations on the cache here include writing data to the cache, marking data on the cache invalid, and updating or deleting data on the cache.
In another embodiment, in order to improve data transmission, raise system data throughput, reduce the volume of transmitted data and speed up parsing, the method also includes designing and implementing a data-transmission strategy: data is serialized and transmitted as JSON, stored in binary form on the designated expansion caching server, and deserialized by the client after reception before use. If the requested data is already in the client cache, however, no serialization or deserialization is needed.
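A minimal sketch of this transmission strategy: serialize to JSON, encode to bytes for binary storage on the expansion caching server, and deserialize on the client before use. The function names are illustrative assumptions:

```python
import json

def serialize(data):
    """Serialize to compact JSON and encode to bytes, the binary form
    stored on the designated expansion caching server."""
    return json.dumps(data, separators=(",", ":")).encode("utf-8")

def deserialize(raw):
    """Deserialize the binary JSON payload on the client before use."""
    return json.loads(raw.decode("utf-8"))
```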
Preferably, the method adopts the storage architecture of Redis. In another embodiment, in order to improve the reliability of the system, the sentinel program of Redis is used; a Linux script is written, and the script is run with a Linux command to start the sentinel program.
Controlling the sentinel program through scripts is convenient and simple to operate; it makes it easier for system maintenance personnel to configure and maintain the system, and improves working efficiency.
In order to use the sentinel program, the relevant parameters need to be configured in the sentinel program's configuration file on the server where the sentinel program is deployed. In addition, on the client server, the server address of the sentinel program is configured in Redis.properties, the property file supporting Redis configuration, so that the client can connect to the server where the sentinel program resides; the sentinel program maintains a message queue of client addresses, and after connecting to the sentinel server the client registers in that message queue.
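As a hedged illustration, a Redis sentinel configuration file typically contains entries of the following form; the addresses, master name and quorum here are placeholder values, and the Redis.properties key shown is an assumed name, not one given in the patent:

```
# sentinel.conf on the sentinel server (illustrative values)
port 26379
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

# Redis.properties on the client server (assumed key name)
redis.sentinel.address=192.168.1.20:26379
```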
In order to administer and maintain the expansion caching servers, an address storage table for the expansion caching servers is added to the database of the service end, and the address information of the expansion caching servers is added to that table; on the client, a connection pool is built to maintain the address information of the expansion caching servers.
In another embodiment, after the client connects to the server where the sentinel program resides, it registers in the message queue and subscribes to the "server switch" message. When the sentinel program sends the client the address information of the master servers of the expansion caching servers, the client updates that address information. When the system starts, the sentinel program sends the client the address information of all master servers; when a master server fails during operation, the sentinel program promotes one of its standby servers to be the new master server and at the same time notifies the client of the new master's address information. As shown in Fig. 4, the system deploys multiple groups of Redis master/standby servers, monitored by a sentinel cluster containing several sentinel programs; if a master server fails, the sentinel cluster actively promotes a slave of the failed master to master and actively informs the client of the new caching-server state. Fig. 4 also shows that, for the groups of Redis master/standby servers, the client sends data requests to the designated expansion server through the consistent-hashing algorithm; if the expansion server has the data the client requests, it returns the requested data to the client. Caching data on multiple groups of master/standby servers reduces the client's requests to the service end, reduces the number of responses by the service end, and improves the response speed of the whole system; monitoring the groups of master/standby servers with the sentinel cluster at the same time improves the reliability of the system.
In one embodiment, in order to administer and maintain the expansion caching servers, step S400 also includes adding an address storage table for the expansion caching servers to the database of the service end and adding the address information of the expansion caching servers to that table; on the client, a connection pool is built to maintain the address information of the expansion caching servers.
When the sentinel program sends the client the address information of the master server of an expansion caching server, the client updates that address information. When the system starts, the sentinel program sends the client the address information of all master servers; when a master server fails during operation, the sentinel program promotes one of its standby servers to be the new master server and at the same time notifies the client of the new master's address information.
The embodiments in this specification are described progressively; each stresses its differences from the others, and for identical or similar parts the embodiments may refer to one another. Since the system embodiments are substantially similar to the method embodiments, their description is relatively simple; for relevant parts, refer to the description of the method embodiments.
The method for extending a cache by software architecture provided by the present invention has been described in detail above. Specific examples have been applied herein to set forth the principle and implementation of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, changes may be made in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this description should not be construed as limiting the present invention.

Claims (10)

1. A method for extending a cache using software architecture, characterized in that the method comprises the following steps:
S100: designing and implementing usage rules for the cache;
The cache comprises a client cache and expansion caching servers, namely multiple groups of servers deployed between the client and the service end to serve as the expansion cache, wherein each group of servers consists of a master server and multiple standby servers; the usage rules are a method for judging whether storage is performed on the client cache and/or the expansion cache;
S200: designing and implementing a database table for storing data on a caching server; the cache comprises the client cache and the expansion cache;
S300: designing and implementing data read-write rules after the cache is extended;
S400: designing and implementing a cache management strategy;
S500: designing and implementing a cache optimization strategy.
2. The method according to claim 1, characterized in that step S100 is specifically as follows:
A cache control module is provided, and the cache control module has at least one control parameter; the cache usage rules are:
(1) if the first value of the control parameter is V0, it represents off, and data is stored only on the client cache;
(2) if the first value of the control parameter is V1, it represents on, and data is stored on the client cache and the expansion cache simultaneously;
(3) if the first value of the control parameter is V2, it represents on, and data is stored only on the expansion cache;
(4) if the second value of the control parameter is V3, it represents off, and client cached data is read from the client cache, the client cached data being the data stored in the client;
(5) if the second value of the control parameter is V4, it represents on, and expansion cached data is read from the expansion cache, the expansion cached data being the data stored on the expansion caching server;
Wherein V0, V1, V2, V3, V4 are of any data type.
3. The method according to claim 2, characterized in that the database table described in step S200 is as follows:
The data to be cached is stored on the expansion cache in a two-dimensional structure, divided into an information cache dimension of the data interface and a name cache dimension of the data interface; the keyword of the information cache dimension of the data interface is the input parameters of the data interface, and its value is the return information of the data interface; the keyword of the name cache dimension of the data interface is the data interface, and its value is the input parameters of the data interface.
4. The method according to claim 3, characterized in that:
The generation rule of the keyword of the information cache dimension of the data interface is: multi-digit institution code + system code + method name of the data interface + all input-parameter names of the data interface + number distinguishing identical data-interface method names;
The generation rule of the keyword of the name cache dimension of the data interface is: multi-digit institution code + system code + data-interface method name + number distinguishing identical data-interface method names;
The institution code is the code of the system's user;
The system code is the sequential code of the subsystem of the system;
The number of digits of the number distinguishing identical data-interface method names is greater than or equal to the number of digits of the count of identical method names; the number takes integer values increasing in natural order. If the number has more digits than the integer, zeros are padded before the highest digit of the integer; if no data-interface method name occurs more than once, the number for that method name is all zeros, with as many zeros as the number has digits.
5. method according to claim 1, is characterized in that, as follows in the read-write rule of data described in step S300:
Before described client is sent request msg, first by calling the data-interface of the request msg for obtaining and then obtaining the buffer memory mark of described data-interface, and the data of asking by the preliminary judgement of buffer memory mark obtaining are store or in expansion buffer memory, then according to following principle, carry out read-write operation at client-cache:
S301: if the data that preliminary judgement is asked are only stored at client-cache, first at described client-cache data query, if described client-cache does not have asked data or the data of asking are set to invalidly, described client sends request of data to service end;
Described service end is to described client return data response and the data of asking;
Described in described client, data respond with the data of asking and asked data are stored in described client-cache and/or upgrade;
S302: if the data that preliminary judgement is asked are client-cache and expansion buffer memory, first at described client-cache data query, if described client-cache does not have asked data or the data of asking are set to invalidly, described client sends request of data to the expansion caching server at described expansion buffer memory place;
If the extended cache server has the requested data, it returns the requested data to the client, and the client stores and/or updates the requested data in the client cache;
If the extended cache server does not have the requested data, it sends the client a response indicating that the requested data is absent; on receiving this response, the client sends the data request to the service end; the service end returns a data response and the requested data to the client; the client receives the data response and the requested data, and stores and/or updates the requested data in the client cache and the extended cache;
S303: if the requested data is preliminarily judged to be in the extended cache, the client sends the data request directly to the extended cache server holding that extended cache;
If the extended cache server has the requested data, it returns the requested data to the client;
If the extended cache server does not have the requested data, it sends the client a response indicating that the requested data is absent; on receiving this response, the client sends the data request to the service end; the service end returns a data response and the requested data to the client; the client receives the data response and the requested data and stores the data in the extended cache;
Wherein, when the client sends a data request to the extended cache server and communication between the client and the extended cache server fails, the client sends the data request directly to the service end; the service end returns a data response and the requested data to the client, and the client receives them;
If the requested data returned from the service end needs to be stored and/or updated in the client cache, it is stored and/or updated in the client cache;
If the requested data returned from the service end needs to be stored on the designated extended cache server: if the communication failure has not yet recovered, the client abandons the storage operation; if the communication failure has recovered, the client stores the returned requested data on the designated extended cache server.
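The read path of claim 5 can be sketched as follows: check the client cache, then the extended cache server, then fall back to the service end, and on a communication failure go straight to the service end and abandon the write-back unless the link has recovered. This is a minimal illustrative model, not the patent's implementation; all class and function names are assumptions.

```python
class ExtensionCache(dict):
    """Toy in-memory stand-in for a remote extended cache server."""
    def __init__(self):
        super().__init__()
        self.reachable = True  # models the client <-> server link state

    def get_remote(self, key):
        if not self.reachable:
            raise ConnectionError("extended cache server unreachable")
        return dict.get(self, key)

    def put_remote(self, key, value):
        if not self.reachable:
            raise ConnectionError("extended cache server unreachable")
        self[key] = value


def read_data(client_cache, ext_cache, service_end, key):
    """Claim-5 read path: client cache, then extended cache, then the
    service end, with the failure fallback of the final 'wherein' clause."""
    if key in client_cache:                   # client-cache hit: done
        return client_cache[key]
    try:
        value = ext_cache.get_remote(key)     # ask the extended cache server
        if value is None:                     # miss: fetch from the service end
            value = service_end[key]
            ext_cache.put_remote(key, value)  # store in the extended cache
    except ConnectionError:                   # communication failure:
        value = service_end[key]              # go straight to the service end
        if ext_cache.reachable:               # store only if the link recovered,
            ext_cache.put_remote(key, value)  # otherwise abandon the store
    client_cache[key] = value                 # store/update in the client cache
    return value
```

The abandoned store is harmless because a later successful read simply repopulates the extended cache.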
6. The method according to claim 5, characterized in that, before the client sends a data request to the service end, or before the client sends a data request to the extended cache server, the client first constructs a key value for the requested data from the requested data.
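The key construction of claim 6 could look like the sketch below: a deterministic key derived from the data-interface name and the request parameters, so that identical requests always map to the same cache slot. The `cache:<interface>:<digest>` layout and parameter canonicalization are assumptions for illustration.

```python
import hashlib

def build_key(interface_name, params):
    """Derive a stable key value from the data-interface name and the
    request parameters; sorting makes the key order-independent."""
    canonical = interface_name + "?" + "&".join(
        f"{k}={params[k]}" for k in sorted(params))
    digest = hashlib.md5(canonical.encode("utf-8")).hexdigest()
    return f"cache:{interface_name}:{digest}"
```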
7. The method according to claim 1, characterized in that, in the cache management strategy of S400: the cache management strategy comprises providing a visual operation interface which displays the address information of the extended cache servers currently communicating with the client without failure and the cached-data information stored on those extended cache servers, wherein:
The cached-data information is displayed sorted by usage frequency;
The cached-data information comprises the name of the data interface used to obtain the cached data, a description of the data interface, and the operations that may be performed on the cached data;
The address information of the extended cache servers comprises the IP address and port of the extended cache server currently used for writing cached data, the IP address and port of the extended cache server currently used for reading cached data, and the connection state of the write and read extended cache servers, the connection state being either connectable or not connectable.
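A plausible data model for the claim-7 management view is sketched below: server address records with role and connection state, plus per-interface cache entries sorted by usage frequency. All field and function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ServerInfo:
    ip: str
    port: int
    role: str           # "read" or "write" extended cache server
    connectable: bool   # connection state shown on the interface

@dataclass
class CacheEntry:
    interface_name: str  # data interface used to obtain the cached data
    description: str
    operations: tuple    # operations permitted on the cached data
    use_count: int = 0

def management_view(servers, entries):
    """Data the visual operation interface would render: server addresses
    with connection state, and entries sorted by usage frequency."""
    return {
        "servers": [(s.ip, s.port, s.role, s.connectable) for s in servers],
        "entries": sorted(entries, key=lambda e: e.use_count, reverse=True),
    }
```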
8. The method according to claim 7, characterized in that the cached-data information is displayed as a list comprising data dictionary cache information and user system cache information;
The data dictionary cache information and the user system cache information are each sorted from high to low by usage frequency; by default the data dictionary cache information shows the first several data dictionary records in sort order and hides the rest, and the hidden part can be shown or hidden through user interaction;
For displayed data dictionary records, the system provides the ability to clear the cache;
For hidden data dictionary records, the system also provides the ability to clear a specific data dictionary record or several data dictionary records;
The user system cache information records are hidden by default and can be shown or hidden through user interaction; for hidden user system cache information records, the system provides the ability to clear a specific user cache information record or a given user's cache information records.
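The sorted, partially hidden display of claim 8 reduces to a simple split: sort records by usage frequency and show only the first N by default. The record shape and the default of five visible records are assumptions for illustration.

```python
def split_visible(records, shown_by_default=5):
    """Sort records high-to-low by use count, then split them into the
    part shown by default and the part hidden until the user toggles it."""
    ordered = sorted(records, key=lambda r: r["uses"], reverse=True)
    return ordered[:shown_by_default], ordered[shown_by_default:]
```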
9. The method according to claim 8, characterized in that the clear operation marks the cached data to be cleared as invalid.
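Marking rather than deleting, as in claim 9, might be modeled as below: the entry stays in place but carries an invalid flag, so later reads treat it as a miss. The entry layout is an assumption.

```python
def clear_entry(cache, key):
    """Claim-9 clear: mark the entry invalid rather than deleting it."""
    if key in cache:
        cache[key]["valid"] = False

def cached_lookup(cache, key):
    """Invalidated entries read as misses even though they still exist."""
    entry = cache.get(key)
    if entry is None or not entry["valid"]:
        return None
    return entry["value"]
```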
10. The method according to claim 9, characterized in that the optimization strategy of step S500 comprises a cache pre-heating device, the cache pre-heating device comprising a data-interface call frequency statistics module and a system periodic maintenance notification module; the data-interface call frequency statistics module counts and sorts data interfaces by usage frequency, and notifies the client and the service end of the system of these statistics and the resulting order when the system starts; the system periodic maintenance notification module allows the time span of periodic maintenance to be set, and when a time span ends, circulates the usage-frequency statistics and ordering of the data interfaces for that time span to system maintenance personnel.
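The two modules of claim 10 can be sketched as follows: a statistics module that counts calls per data interface and yields a frequency ranking, and a maintenance module that packages that ranking into an end-of-span report. Class and method names are assumptions, not the patent's.

```python
from collections import Counter

class CallFrequencyStats:
    """Counts data-interface calls and sorts them by usage frequency."""
    def __init__(self):
        self.counts = Counter()

    def record_call(self, interface_name):
        self.counts[interface_name] += 1

    def ranking(self):
        """Frequency-ordered list; pushed to the client and service end
        when the system starts."""
        return self.counts.most_common()

class PeriodicMaintenance:
    """Reports the interface ranking at the end of each configured span."""
    def __init__(self, stats, span_seconds):
        self.stats = stats
        self.span_seconds = span_seconds   # configurable maintenance span

    def end_of_span_report(self):
        """Report circulated to system maintenance personnel."""
        return {"span_seconds": self.span_seconds,
                "ranking": self.stats.ranking()}
```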
CN201410482639.8A 2014-09-19 2014-09-19 Method for extending a cache by software architecture Active CN104202424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410482639.8A CN104202424B (en) 2014-09-19 2014-09-19 Method for extending a cache by software architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410482639.8A CN104202424B (en) 2014-09-19 2014-09-19 Method for extending a cache by software architecture

Publications (2)

Publication Number Publication Date
CN104202424A true CN104202424A (en) 2014-12-10
CN104202424B CN104202424B (en) 2016-01-27

Family

ID=52087649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410482639.8A Active CN104202424B (en) 2014-09-19 2014-09-19 Method for extending a cache by software architecture

Country Status (1)

Country Link
CN (1) CN104202424B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559497A (en) * 2016-12-06 2017-04-05 郑州云海信息技术有限公司 A kind of distributed caching method of WEB server based on daemon thread
CN108874903A (en) * 2018-05-24 2018-11-23 中国平安人寿保险股份有限公司 Method for reading data, device, computer equipment and computer readable storage medium
CN108897495A (en) * 2018-06-28 2018-11-27 北京五八信息技术有限公司 Buffering updating method, device, buffer memory device and storage medium
CN109614404A (en) * 2018-11-01 2019-04-12 阿里巴巴集团控股有限公司 A kind of data buffering system and method
CN109739516A (en) * 2018-12-29 2019-05-10 深圳供电局有限公司 A kind of operation method and system of cloud caching
WO2019090780A1 (en) * 2017-11-13 2019-05-16 深圳市华阅文化传媒有限公司 High-availability id generator, and id generation method and device thereof
CN110825986A (en) * 2019-11-05 2020-02-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system
CN102333108A (en) * 2011-03-18 2012-01-25 北京神州数码思特奇信息技术股份有限公司 Distributed cache synchronization system and method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101764824A (en) * 2010-01-28 2010-06-30 深圳市同洲电子股份有限公司 Distributed cache control method, device and system
CN102333108A (en) * 2011-03-18 2012-01-25 北京神州数码思特奇信息技术股份有限公司 Distributed cache synchronization system and method

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106559497A (en) * 2016-12-06 2017-04-05 郑州云海信息技术有限公司 A kind of distributed caching method of WEB server based on daemon thread
WO2019090780A1 (en) * 2017-11-13 2019-05-16 深圳市华阅文化传媒有限公司 High-availability id generator, and id generation method and device thereof
CN108874903A (en) * 2018-05-24 2018-11-23 中国平安人寿保险股份有限公司 Method for reading data, device, computer equipment and computer readable storage medium
CN108897495A (en) * 2018-06-28 2018-11-27 北京五八信息技术有限公司 Buffering updating method, device, buffer memory device and storage medium
CN108897495B (en) * 2018-06-28 2023-10-03 北京五八信息技术有限公司 Cache updating method, device, cache equipment and storage medium
CN109614404A (en) * 2018-11-01 2019-04-12 阿里巴巴集团控股有限公司 A kind of data buffering system and method
CN109739516A (en) * 2018-12-29 2019-05-10 深圳供电局有限公司 A kind of operation method and system of cloud caching
CN109739516B (en) * 2018-12-29 2023-06-20 深圳供电局有限公司 Cloud cache operation method and system
CN110825986A (en) * 2019-11-05 2020-02-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data
CN110825986B (en) * 2019-11-05 2023-03-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data

Also Published As

Publication number Publication date
CN104202424B (en) 2016-01-27

Similar Documents

Publication Publication Date Title
CN104202423B (en) System for extending a cache by software architecture
CN104202424B (en) Method for extending a cache by software architecture
AU2013347807B2 (en) Scaling computing clusters
CN103116661B (en) A kind of data processing method of database
CN101562543A (en) Cache data processing method and processing system and device thereof
CN105025053A (en) Distributed file upload method based on cloud storage technology and system
CN102541990A (en) Database redistribution method and system utilizing virtual partitions
US11188229B2 (en) Adaptive storage reclamation
CN104102693A (en) Object processing method and device
JP6582445B2 (en) Thin client system, connection management device, virtual machine operating device, method, and program
CN107179878A (en) The method and apparatus of data storage based on optimizing application
US20150112934A1 (en) Parallel scanners for log based replication
CN101916289A (en) Method for establishing digital library storage system supporting mass small files and dynamic backup number
CN103581332A (en) HDFS framework and pressure decomposition method for NameNodes in HDFS framework
CN109739435A (en) File storage and update method and device
CN105760391A (en) Data dynamic redistribution method and system, data node and name node
CN102982033A (en) Small documents storage method and system thereof
CN116760661A (en) Data storage method, apparatus, computer device, storage medium, and program product
CN111670560A (en) Electronic device, system and method
EP3709173B1 (en) Distributed information memory system, method, and program
CN104699720A (en) Merging and storing method and system for massive data
CN115238006A (en) Retrieval data synchronization method, device, equipment and computer storage medium
CN112162886B (en) Back-end storage device switching method, device, equipment and medium
CN103685359A (en) Data processing method and device
CN112800066A (en) Index management method, related device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant