CN106254417A - Data cache method, Apparatus and system - Google Patents
Data cache method, Apparatus and system
- Publication number
- CN106254417A (application CN201610547674.2A)
- Authority
- CN
- China
- Prior art keywords
- data
- server
- cache
- new data
- buffer storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
- H04L67/5682—Policies or rules for updating, deleting or replacing the stored data
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
This application provides a data caching method and relates to the field of caching technology. The method includes: a master server updates data in a level-1 cache to generate update data; the master server generates a synchronization instruction and sends the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers. This application also provides a corresponding data caching apparatus and system. Compared with the prior art, the methods, apparatuses and systems shown in the embodiments of this application satisfy the need for data synchronization between different servers, reduce long query times and losses caused by errors due to inconsistent cached data across multiple servers, shorten the time servers spend processing data, speed up the handling of user requests, and improve the user experience.
Description
Technical field
This application relates to the field of computer technology, and in particular to a data caching method, apparatus and system.
Background
With the development of computer and Internet technology, and in particular with the advance of globalization, the coverage of the Internet keeps growing and has penetrated deeply into everyday life. As the number of clients served by a server grows, the processing capacity of the server is challenged. In the prior art, in order to speed up the processing of requests, especially when a large number of user requests are being handled, the server does not query and write the database directly; it first performs caching, maintaining a copy of the data in memory for its own access, and later synchronizes the data in batches and writes it to the database, thereby reducing database queries and writes. Accordingly, by adopting a distributed server system, the caching pressure of a single server can be shared with other servers, which effectively reduces the caching pressure of a single server and improves its processing capacity.
In the course of implementing this application, the inventors found that the prior art has at least the following problems: with a distributed server architecture, because the cache is distributed across different servers, cached data has to be looked up across multiple servers, which increases query time; moreover, if two or more servers cache the same data, the problem of synchronizing that data has to be faced, which considerably affects the speed of request processing.
Summary of the invention
The embodiments of this application provide a data caching method, apparatus and system to solve at least one of the problems in the prior art set out above.
A first aspect of the embodiments of this application provides a data caching method, the method including:
a master server updates data in a level-1 cache to generate update data;
the master server generates a synchronization instruction and sends the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
A second aspect of the embodiments of this application provides a data caching method, the method including:
a slave server listens for a synchronization instruction generated and sent by a master server after the master server updates data in a level-1 cache to generate update data;
the slave server writes the update data into a level-2 cache according to the received synchronization instruction.
A third aspect of the embodiments of this application provides a data caching method, the method including:
a slave server responds to a request from a client and updates data in a level-2 cache to generate update data;
the slave server sends an update request based on the update data to a master server, so that the master server modifies data in a level-1 cache according to the update data.
The first aspect of the embodiments of this application further provides a data caching apparatus, the apparatus including:
a data updating unit configured to update data in a level-1 cache to generate update data;
an instruction generating unit configured to generate a synchronization instruction and send the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
The second aspect of the embodiments of this application further provides a data caching apparatus, including:
an instruction listening unit configured to listen for a synchronization instruction generated and sent by a master server after the master server updates data in a level-1 cache to generate update data;
a data synchronization unit configured to write the update data into a level-2 cache according to the received synchronization instruction.
The third aspect of the embodiments of this application further provides a data caching apparatus, including:
a request response unit configured to respond to a request from a client and update data in a level-2 cache to generate update data;
an update request sending unit configured to send an update request based on the update data to a master server, so that the master server modifies data in a level-1 cache according to the update data.
Finally, the embodiments of this application provide a data caching system, the system including:
a first data caching apparatus, which is the data caching apparatus provided by the first aspect of the above embodiments; and
a second data caching apparatus, which is the data caching apparatus provided by the second or third aspect of the above embodiments.
With the data caching method, apparatus and system provided by the embodiments of this application, either the master server directly modifies the data in its level-1 cache to obtain update data, generates a synchronization instruction based on the update data and sends it to a plurality of slave servers, so that each slave server writes the update data into its own level-2 cache; or a slave server responds to a client request and updates the corresponding data in its level-2 cache to obtain update data, then generates an update request based on the update data and sends it to the master server, and after the master server updates the corresponding data in its level-1 cache according to the update request sent by the slave server, the master server generates a synchronization instruction for the update data and sends it to the other slave servers, so that the same data in the level-2 caches of the other slave servers is also updated synchronously. Compared with the prior art, the embodiments of this application satisfy the need for data synchronization between different servers, reduce long query times and losses caused by errors due to inconsistent cached data across multiple servers, shorten the time servers spend processing data, speed up the handling of user requests, and improve the user experience.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of this application more clearly, the accompanying drawings required for the description of the embodiments are briefly introduced below. Obviously, the accompanying drawings described below show only some embodiments of this application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flow chart of the data caching method of a first embodiment of this application;
Fig. 2 is a flow chart of the data caching method of a second embodiment of this application;
Fig. 3 is a flow chart of the data caching method of a third embodiment of this application;
Fig. 4 is a flow chart of the data caching method of a fourth embodiment of this application;
Fig. 5 is a schematic structural diagram of the data caching apparatus of a first embodiment of this application;
Fig. 6 is a schematic structural diagram of the data caching apparatus of a third embodiment of this application;
Fig. 7 is a schematic structural diagram of the data caching apparatus of a fourth embodiment of this application;
Fig. 8 is a schematic structural diagram of a data caching device of an embodiment of this application;
Fig. 9 is a schematic structural diagram of a data caching system of an embodiment of this application.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings of the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
It should be noted that, where there is no conflict, the embodiments of this application and the features in the embodiments may be combined with one another.
This application can be used in numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
This application may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. This application may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including storage devices.
Finally, it should also be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include" and "comprise" cover not only the listed elements but also other elements not expressly listed, or elements inherent to the process, method, article or device concerned. In the absence of further limitation, an element qualified by the statement "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
Fig. 1 is a flow chart of the data caching method of a first embodiment of this application. As shown in Fig. 1, the method includes:
S11: a master server updates data in a level-1 cache to generate update data;
S12: the master server generates a synchronization instruction and sends the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
In this embodiment, the master server can obtain update data by modifying the data in its level-1 cache, generate a synchronization instruction based on the update data, and send it to a plurality of slave servers, so that each slave server writes the update data into its own level-2 cache according to the synchronization instruction. In this way, the data in the level-1 cache of the master server and the data in the level-2 caches of all slave servers are updated synchronously.
For example, when the prices of some goods on a shopping website change because of a promotion, because the goods are out of season, or for other reasons, the master server of the shopping website can change the price of those goods, write the changed price into its level-1 cache, and send a synchronization instruction, for example in the form of a broadcast, to all slave servers, so that the price of those goods stored in the level-2 cache of every slave server also becomes the changed price. In this way, when a user visits the shopping website, no matter which server the user's access request is directed to, the price the user sees is the changed price; the price of the goods is synchronized in time on all servers, and losses to the shopping website caused by untimely data synchronization are avoided. As another example, when a user is watching video content and the video content is updated, the master server updates the corresponding video content in its level-1 cache and synchronizes the updated video content to all slave servers, avoiding the problem that the video content watched by users through different slave servers is inconsistent.
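The flow of steps S11 and S12 can be illustrated with a minimal Python sketch. It is only an illustration of the idea described above, not an implementation disclosed by this application: the class and method names, the key-value layout of the level-1 cache, the JSON shape of the synchronization instruction and the use of plain TCP sockets are all assumptions.

```python
import json
import socket

class MasterServer:
    """Illustrative master server for steps S11/S12 (assumed names and formats)."""

    def __init__(self, slave_addresses):
        self.l1_cache = {}                      # level-1 cache, modelled as key-value pairs
        self.slave_addresses = slave_addresses  # [(host, port), ...] of the slave servers

    def update(self, key, value):
        # S11: update the data in the level-1 cache to generate the update data
        self.l1_cache[key] = value
        update_data = {"key": key, "value": value}
        # S12: generate a synchronization instruction and send it to the slave servers
        self.broadcast_sync(update_data)

    def broadcast_sync(self, update_data):
        instruction = json.dumps({"type": "sync", "data": update_data}).encode()
        for host, port in self.slave_addresses:
            # each slave writes the update data into its own level-2 cache on receipt
            with socket.create_connection((host, port)) as conn:
                conn.sendall(instruction)

# Example (hypothetical addresses): changing a promotional price and pushing it to two slaves
# master = MasterServer([("slave1.example", 9000), ("slave2.example", 9000)])
# master.update("product:1001:price", "19.99")
```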
Compared with the prior art, the embodiments of this application satisfy the need for data synchronization between different servers, reduce long query times and losses caused by errors due to inconsistent cached data across multiple servers, shorten the time servers spend processing data, speed up the handling of user requests, and improve the user experience.
Fig. 2 is a flow chart of the data caching method of a second embodiment of this application. As shown in Fig. 2, the method includes:
S21: a master server receives an update request sent by a slave server in response to a client request, and modifies data in a level-1 cache to generate update data;
S22: the master server generates a synchronization instruction and sends the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
In this embodiment, a client sends a request to a particular slave server. While responding to the client request, the slave server updates the corresponding data in its level-2 cache according to the request to obtain update data, then generates an update request based on the update data and sends it to the master server. After the master server updates the corresponding data in its level-1 cache according to the update request sent by the slave server, it generates a synchronization instruction for the update data and sends it to the other slave servers, so that the same data in the level-2 caches of the other slave servers is also updated synchronously.
Unlike the embodiment shown in Fig. 1, in which the master server can directly update the data in its level-1 cache, in the embodiment shown in Fig. 2 the data in the level-1 cache of the master server is updated in response to a data update on the slave server accessed by a client, and the other slave servers then update the data in their own level-2 caches following the update of the level-1 cache of the master server. This method can be used for operations in which a user writes data to the servers through a client. For example, when a user pays through a client for goods or services purchased on a shopping website, the payment information is uploaded to a particular slave server; that slave server displays the payment information on the page the user is viewing and uploads it to the master server, and the master server then synchronizes the payment information to all the other slave servers. This avoids the situation in which, after the payment page fails to refresh because of an unstable network connection or a similar reason, the payment information on the other slave servers the user then accesses has not been updated in time, which could cause a loss to the user. As another example, when a user watching video A on a bullet-screen (danmaku) video website sends a bullet-screen comment on video A through a client to a particular slave server, that slave server responds to the user's request so that the user can see the comment just sent on the client; at the same time, the slave server uploads the comment to the master server, and the master server synchronizes the comment to all the other slave servers, so that other users watching video A through the other slave servers can also see the comment in real time, which adds to the enjoyment of watching the video and improves the user experience.
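The master server's side of this flow (steps S21 and S22) can be sketched under the same assumed message format and socket transport as the earlier sketch; the function name, parameters and cache layout are again illustrative assumptions rather than disclosed details.

```python
import json
import socket

def handle_update_request(l1_cache, slave_addresses, update_data, origin):
    """Illustrative master-side handling of an update request from a slave (S21/S22)."""
    # S21: modify the corresponding data in the level-1 cache according to the update request
    l1_cache[update_data["key"]] = update_data["value"]
    # S22: generate a synchronization instruction and send it to the other slave servers
    instruction = json.dumps({"type": "sync", "data": update_data}).encode()
    for address in slave_addresses:
        if address == origin:
            continue  # the originating slave already holds the updated data in its level-2 cache
        with socket.create_connection(address) as conn:
            conn.sendall(instruction)
```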
Fig. 3 is a flow chart of the data caching method of a third embodiment of this application. As shown in Fig. 3, the method includes:
S31: a slave server listens for the synchronization instruction generated and sent by a master server after the master server updates data in a level-1 cache to generate update data;
S32: the slave server writes the update data into a level-2 cache according to the received synchronization instruction.
In this embodiment, after the master server modifies the data in its level-1 cache to obtain update data, the slave server listens for the synchronization instruction that the master server generates and sends based on the update data, and writes the update data into its own level-2 cache according to the synchronization instruction, so that the data in the level-1 cache of the master server and the data in the level-2 caches of all slave servers are updated synchronously.
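A minimal slave-side counterpart of steps S31 and S32 could look as follows; the TCP listener, the message format and the in-memory dictionary standing in for the level-2 cache are illustrative assumptions rather than details disclosed by this application.

```python
import json
import socketserver

L2_CACHE = {}  # the slave server's level-2 cache, modelled as a key-value store

class SyncInstructionHandler(socketserver.StreamRequestHandler):
    """Illustrative handler for steps S31/S32 on a slave server."""

    def handle(self):
        # S31: listen for the synchronization instruction sent by the master server
        message = json.loads(self.rfile.read().decode())
        if message.get("type") == "sync":
            data = message["data"]
            # S32: write the update data into the level-2 cache
            L2_CACHE[data["key"]] = data["value"]
            # Feedback on whether the synchronization succeeded could be returned to
            # the master server here, as described later in this specification.

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9000), SyncInstructionHandler) as server:
        server.serve_forever()
```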
Fig. 4 is a flow chart of the data caching method of a fourth embodiment of this application. As shown in Fig. 4, the method includes:
S41: a slave server responds to a request from a client and updates data in a level-2 cache to generate update data;
S42: the slave server sends an update request based on the update data to a master server, so that the master server modifies data in a level-1 cache according to the update data.
In this embodiment, a client sends a request to a particular slave server. While responding to the client request, the slave server updates the corresponding data in its level-2 cache according to the request to obtain update data, then generates an update request based on the update data and sends it to the master server, so that the master server can update the corresponding data in its level-1 cache according to the update request sent by the slave server. In the subsequent process the master server generates a synchronization instruction for the update data and sends it to the other slave servers, so that the same data in the level-2 caches of the other slave servers is also updated synchronously. In this way, the data on all servers is updated synchronously.
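The slave-side flow of steps S41 and S42 can be sketched in the same illustrative style; the function signature, the cache layout and the message format are assumptions, not disclosed details.

```python
import json
import socket

L2_CACHE = {}  # illustrative level-2 cache on the slave server

def handle_client_request(key, value, master_address):
    """Illustrative slave-side handling of a client write request (S41/S42)."""
    # S41: respond to the client request by updating the level-2 cache,
    # which produces the update data
    L2_CACHE[key] = value
    update_data = {"key": key, "value": value}
    # S42: send an update request based on the update data to the master server;
    # the master then modifies its level-1 cache and synchronizes the other slaves
    request = json.dumps({"type": "update_request", "data": update_data}).encode()
    with socket.create_connection(master_address) as conn:
        conn.sendall(request)
    return update_data

# Example (hypothetical key and address): a user posts a bullet-screen comment on video A
# handle_client_request("video:A:danmaku:42", "Nice scene!", ("master.example", 9100))
```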
In the embodiments of this application, the storage structure of the data cached on the master and slave servers can be any one or more of sequential storage, linked storage, indexed storage and hash storage. For example, the data can be stored on the servers in the form of key-value pairs.
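For instance, cached entries stored as key-value pairs might look like the following; the key naming scheme is purely an illustrative assumption.

```python
# Illustrative key-value entries as they might appear in the level-1 or level-2 cache
cache = {
    "product:1001:price": "19.99",    # commodity price keyed by product id and field
    "video:2002:title": "Episode 3",  # video metadata keyed by video id and field
}
```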
In some implementations of the embodiments of this application, after receiving the synchronization instruction sent by the master server, the slave server can send feedback information to the master server indicating whether the update data has been synchronized successfully.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of combined actions. Those skilled in the art should understand, however, that this application is not limited by the described order of actions, because according to this application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the above embodiments, each embodiment is described with its own emphasis; for parts not described in detail in a particular embodiment, reference may be made to the relevant descriptions of other embodiments.
Fig. 5 is a schematic structural diagram of the data caching apparatus of a first embodiment of this application. The data caching methods described in the embodiments of this application can be implemented based on the data caching apparatuses in the embodiments of this application. As shown in Fig. 5, the apparatus includes a data updating unit 51 and an instruction generating unit 52.
The data updating unit 51 is configured to update data in a level-1 cache to generate update data;
the instruction generating unit 52 is configured to generate a synchronization instruction and send the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
In this embodiment, the data updating unit can obtain update data by modifying the data in the level-1 cache, and the instruction generating unit generates a synchronization instruction based on the update data and sends it to a plurality of slave servers, so that each slave server writes the update data into its own level-2 cache according to the synchronization instruction. In this way, the data in the level-1 cache of the master server and the data in the level-2 caches of all slave servers are updated synchronously.
Compared with the prior art, the embodiments of this application satisfy the need for data synchronization between different servers, reduce long query times and losses caused by errors due to inconsistent cached data across multiple servers, shorten the time servers spend processing data, speed up the handling of user requests, and improve the user experience.
In the data caching apparatus of a second embodiment of this application, the data updating unit is configured to receive an update request sent by a slave server in response to a client request and to modify the data in the level-1 cache to generate update data.
In this embodiment, a client sends a request to a particular slave server. While responding to the client request, the slave server updates the corresponding data in its level-2 cache according to the request to obtain update data, then generates an update request based on the update data and sends it to the data updating unit. After the data updating unit updates the corresponding data in the level-1 cache according to the update request sent by the slave server, the instruction generating unit generates a synchronization instruction for the update data and sends it to the other slave servers, so that the same data in the level-2 caches of the other slave servers is also updated synchronously.
Fig. 6 is a schematic structural diagram of the data caching apparatus of a third embodiment of this application. As shown in Fig. 6, the apparatus includes an instruction listening unit 61 and a data synchronization unit 62.
The instruction listening unit 61 is configured to listen for the synchronization instruction generated and sent by the master server after the master server updates data in the level-1 cache to generate update data;
the data synchronization unit 62 is configured to write the update data into a level-2 cache according to the received synchronization instruction.
In this embodiment, after the master server modifies the data in its level-1 cache to obtain update data, the instruction listening unit listens for the synchronization instruction that the master server generates and sends based on the update data, and the data synchronization unit writes the update data into the level-2 cache according to the synchronization instruction, so that the data in the level-1 cache of the master server and the data in the level-2 caches of all slave servers are updated synchronously.
Fig. 7 is a schematic structural diagram of the data caching apparatus of a fourth embodiment of this application. As shown in Fig. 7, the apparatus includes a request response unit 71 and an update request sending unit 72.
The request response unit 71 is configured to respond to a request from a client and update data in a level-2 cache to generate update data;
the update request sending unit 72 is configured to send an update request based on the update data to the master server, so that the master server modifies data in the level-1 cache according to the update data.
In this embodiment, a client sends a request to a particular slave server. While responding to the client request, the request response unit of the slave server updates the corresponding data in the level-2 cache according to the request to obtain update data, then generates an update request based on the update data, which the update request sending unit sends to the master server, so that the master server can update the corresponding data in its level-1 cache according to the update request sent by the slave server. In the subsequent process the master server generates a synchronization instruction for the update data and sends it to the other slave servers, so that the same data in the level-2 caches of the other slave servers is also updated synchronously. In this way, the data on all servers is updated synchronously.
In the embodiments of this application, the relevant functional modules can be implemented by a hardware processor.
Fig. 8 is a schematic structural diagram of a data caching device 800 of an embodiment of this application. The specific embodiments of this application do not limit the specific implementation of the data caching device 800. As shown in Fig. 8, the device may include:
a processor 810, a communications interface 820, a memory 830 and a communication bus 840, where:
the processor 810, the communications interface 820 and the memory 830 communicate with one another through the communication bus 840;
the communications interface 820 is used to communicate with network elements such as clients;
the processor 810 is used to execute a program 832 and may specifically perform the relevant steps in the above method embodiments.
Specifically, the program 832 may include program code, and the program code includes computer operating instructions.
The processor 810 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application.
Fig. 9 is a schematic structural diagram of a data caching system of an embodiment of this application. As shown in Fig. 9, the system includes a first data caching apparatus 91 and a second data caching apparatus 92.
The first data caching apparatus 91 is the data caching apparatus described in the first or second embodiment above;
the second data caching apparatus 92 is the data caching apparatus described in the third or fourth embodiment above.
The embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a particular embodiment; this can be understood and implemented by those of ordinary skill in the art without creative effort.
From the description of the above embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the above technical solutions, or the part of them that contributes to the prior art, can essentially be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the method described in each embodiment or in some parts of an embodiment.
Those skilled in the art should understand that the embodiments of this application may be provided as a method, a system or a computer program product. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
This application is described with reference to flow charts and/or block diagrams of methods, devices (systems) and computer program products according to the embodiments of this application. It should be understood that each flow and/or block in the flow charts and/or block diagrams, and combinations of flows and/or blocks in the flow charts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flow charts and/or one or more blocks of the block diagrams.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, and some of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.
Claims (9)
1. A data caching method, the method comprising:
a master server updating data in a level-1 cache to generate update data;
the master server generating a synchronization instruction and sending the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
2. The method according to claim 1, wherein the master server updating the data in the level-1 cache to generate the update data comprises:
the master server receiving an update request sent by a slave server in response to a client request, and modifying the data in the level-1 cache to generate the update data.
3. A data caching method, comprising:
a slave server listening for a synchronization instruction generated and sent by a master server after the master server updates data in a level-1 cache to generate update data;
the slave server writing the update data into a level-2 cache according to the received synchronization instruction.
4. A data caching method, the method comprising:
a slave server responding to a request from a client and updating data in a level-2 cache to generate update data;
the slave server sending an update request based on the update data to a master server, so that the master server modifies data in a level-1 cache according to the update data.
5. A data caching apparatus, the apparatus comprising:
a data updating unit configured to update data in a level-1 cache to generate update data;
an instruction generating unit configured to generate a synchronization instruction and send the synchronization instruction to a plurality of slave servers so that the update data is written into the level-2 caches of the plurality of slave servers.
6. The apparatus according to claim 5, wherein the data updating unit is configured to receive an update request sent by a slave server in response to a client request and to modify the data in the level-1 cache to generate the update data.
7. A data caching apparatus, comprising:
an instruction listening unit configured to listen for a synchronization instruction generated and sent by a master server after the master server updates data in a level-1 cache to generate update data;
a data synchronization unit configured to write the update data into a level-2 cache according to the received synchronization instruction.
8. A data caching apparatus, comprising:
a request response unit configured to respond to a request from a client and update data in a level-2 cache to generate update data;
an update request sending unit configured to send an update request based on the update data to a master server, so that the master server modifies data in a level-1 cache according to the update data.
9. A data caching system, comprising:
a first data caching apparatus, the first data caching apparatus being the data caching apparatus according to claim 5 or 6;
a second data caching apparatus, the second data caching apparatus being the data caching apparatus according to claim 7 or 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610547674.2A CN106254417A (en) | 2016-07-12 | 2016-07-12 | Data cache method, Apparatus and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106254417A true CN106254417A (en) | 2016-12-21 |
Family
ID=57613683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610547674.2A Pending CN106254417A (en) | 2016-07-12 | 2016-07-12 | Data cache method, Apparatus and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106254417A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103369051A (en) * | 2013-07-22 | 2013-10-23 | 中安消技术有限公司 | Data server cluster system and data synchronization method |
CN103634385A (en) * | 2013-11-22 | 2014-03-12 | 乐视网信息技术(北京)股份有限公司 | System, method and server for data synchronizing |
CN104954474A (en) * | 2015-06-19 | 2015-09-30 | 北京奇虎科技有限公司 | Method and device for data updating in load balancing |
CN105049263A (en) * | 2015-08-24 | 2015-11-11 | 浪潮(北京)电子信息产业有限公司 | Data processing method and data processing system |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106657360A (en) * | 2016-12-30 | 2017-05-10 | 曙光信息产业(北京)有限公司 | Synchronization method and system for NIS servers under Linux system |
CN106790629A (en) * | 2017-01-03 | 2017-05-31 | 努比亚技术有限公司 | Data synchronization unit and its realize the method for data syn-chronization, client access system |
CN107204861A (en) * | 2017-07-27 | 2017-09-26 | 郑州云海信息技术有限公司 | A kind of server cluster event-handling method |
CN107204861B (en) * | 2017-07-27 | 2021-04-09 | 郑州云海信息技术有限公司 | Server cluster event processing method |
CN108200219A (en) * | 2018-03-13 | 2018-06-22 | 广东欧珀移动通信有限公司 | Method of data synchronization, device, server and storage medium |
CN108200219B (en) * | 2018-03-13 | 2020-04-14 | Oppo广东移动通信有限公司 | Data synchronization method, device, server and storage medium |
CN109218447A (en) * | 2018-10-29 | 2019-01-15 | 中国建设银行股份有限公司 | Media file distribution method and file distributing platform |
CN112286952A (en) * | 2020-12-23 | 2021-01-29 | 智道网联科技(北京)有限公司 | Method, device and system for processing real-time traffic information |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106254417A (en) | 2016-12-21 | Data cache method, Apparatus and system |
US20160323367A1 (en) | Massively-scalable, asynchronous backend cloud computing architecture | |
CN201682522U (en) | Conversation information storage system and application server | |
US9208189B2 (en) | Distributed request processing | |
US9811577B2 (en) | Asynchronous data replication using an external buffer table | |
CN103312761B (en) | System and method for optimizing downloadable content transmission | |
CN107357896A (en) | Expansion method, device, system and the data base cluster system of data-base cluster | |
CN108139958A (en) | Event batch processing, output sequence in continuous query processing and the state storage based on daily record | |
US8924348B2 (en) | System and method for sharing data between occasionally connected devices and remote global database | |
CN109791471A (en) | Virtualize the non-volatile memory device at peripheral unit | |
US8874587B2 (en) | Tenant placement in multitenant cloud databases with one-to-many data sharing | |
US11113244B1 (en) | Integrated data pipeline | |
CN109150929B (en) | Data request processing method and device under high concurrency scene | |
CN111338571B (en) | Task processing method, device, equipment and storage medium | |
CN109241033A (en) | The method and apparatus for creating real-time data warehouse | |
US10394781B2 (en) | Synchronization of offline data | |
CN104346426A (en) | Shared data de-duplication method and system | |
CN109271367A (en) | Distributed file system multinode snapshot rollback method and system | |
US10635672B2 (en) | Method and system for merging data | |
US20240098151A1 (en) | ENHANCED PROCESSING OF USER PROFILES USING DATA STRUCTURES SPECIALIZED FOR GRAPHICAL PROCESSING UNITS (GPUs) | |
Way | Transforming monograph collections with a model of collections as a service | |
CN109725913A (en) | The method and apparatus that data update | |
CN111767495A (en) | Method and system for synthesizing webpage | |
CN113535740B (en) | Inventory management method and device | |
CN106156277A (en) | For third-party data sharing update method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20161221 |