CN104346345A - Data storage method and device - Google Patents

Data storage method and device

Info

Publication number
CN104346345A
CN104346345A · CN201310315361.0A · CN104346345B
Authority
CN
China
Prior art keywords
entity object
network data
data
buffer
buffer entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310315361.0A
Other languages
Chinese (zh)
Other versions
CN104346345B (en)
Inventor
吴新玉
丁岩
吴亮
陈小强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhongxing Software Co Ltd
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201310315361.0A priority Critical patent/CN104346345B/en
Priority to US14/907,199 priority patent/US20160191652A1/en
Priority to PCT/CN2013/082003 priority patent/WO2014161261A1/en
Priority to EP13881423.1A priority patent/EP3026573A4/en
Publication of CN104346345A publication Critical patent/CN104346345A/en
Application granted granted Critical
Publication of CN104346345B publication Critical patent/CN104346345B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/957Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a data storage method and device. The method comprises: sending a request message to a network side device to obtain network data to be cached; selecting one or more cache entity objects for the network data from a cache entity object set; and directly storing obtained first-type network data into the one or more cache entity objects, or storing serialized second-type network data into the one or more cache entity objects. The data storage method can reduce dependency on the network and save network traffic as well as battery power for mobile terminals.

Description

Data storage method and device
Technical field
The present invention relates to the field of communications, and in particular to a data storage method and device.
Background technology
In the related art, for large-scale and small applications alike, flexible use of caching not only significantly reduces the load on the server, but also improves responsiveness and thus the user experience. Mobile terminal applications are usually small applications, and most of them (about 99%) do not need real-time updates. Given mobile network speeds that users deride as slow as a snail, data interaction with the server should be kept to a minimum for the user experience to be good.
Using a cache can greatly relieve the pressure of data interaction. Common environments suitable for cache management include:
(1) applications that provide services over the Internet;
(2) applications whose data does not need real-time updates, where even a delay of a few minutes is acceptable, so a caching mechanism can be adopted;
(3) applications for which an acceptable cache expiration time can be chosen, so that data updated too late does not damage the image of the product.
Accordingly, caching can bring the following benefits:
(1) the load on the server can be greatly reduced;
(2) the response speed of the client is greatly accelerated;
(3) the probability of client data loading errors is greatly reduced, which considerably improves the stability of the application;
(4) offline browsing can be supported to some extent, or rather caching provides technical support for offline browsing.
At present, two commonly used cache management methods are the database method and the file method. In the database method, after a data file has been downloaded, information about the file, such as its uniform resource locator (URL), path, download time and expiration time, is stored in a database. The next time the data is needed, the database is first queried by URL; if a record is found and the current time shows it has not yet expired, the local file can be read from the stored path, thereby achieving the effect of caching. In the file method, the last modification time of the file is obtained with the File.lastModified() method and compared with the current time to judge whether the file has expired, likewise achieving the effect of caching.
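The file method described in the paragraph above can be sketched in a few lines of Java. This is a minimal illustration, not the patent's implementation; only `File.lastModified()` comes from the text, while the class and helper names (`FileCacheChecker`, `isExpired`, `maxAgeMillis`) are our own.

```java
import java.io.File;

// Minimal sketch of the "file method": decide whether a cached file is
// stale by comparing its last-modified timestamp with the current time.
public class FileCacheChecker {

    // Pure helper so the expiry rule is easy to test: a file is expired
    // when more than maxAgeMillis have elapsed since lastModifiedMillis.
    public static boolean isExpired(long lastModifiedMillis, long maxAgeMillis, long nowMillis) {
        return nowMillis - lastModifiedMillis > maxAgeMillis;
    }

    // Convenience wrapper over java.io.File for real cached files.
    public static boolean isExpired(File cached, long maxAgeMillis) {
        return isExpired(cached.lastModified(), maxAgeMillis, System.currentTimeMillis());
    }

    public static void main(String[] args) {
        long tenMinutes = 10 * 60 * 1000L;
        long fiveMinutes = 5 * 60 * 1000L;
        long now = System.currentTimeMillis();
        // A file written 10 minutes ago with a 5-minute TTL is expired.
        System.out.println(isExpired(now - tenMinutes, fiveMinutes, now));     // true
        // With a TTL longer than its age, it is still fresh.
        System.out.println(isExpired(now - tenMinutes, tenMinutes + 1, now));  // false
    }
}
```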
However, the data caching solutions provided in the related art depend excessively on remote network services, and at the same time consume a large amount of network traffic and mobile terminal battery power.
Summary of the invention
The present invention provides a data storage method and device, so as at least to solve the problem in the related art that data caching solutions depend excessively on remote network services while also consuming a large amount of network traffic and mobile terminal battery power.
According to an aspect of the present invention, a kind of storage means of data is provided.
The data storage method according to the present invention comprises: sending a request message to a network side device to obtain network data to be cached; selecting one or more cache entity objects for the network data from a cache entity object set; and directly storing the obtained first-type network data into the one or more cache entity objects, or storing serialized second-type network data into the one or more cache entity objects.
Preferably, before the first-type network data is stored into the one or more cache entity objects, the method further comprises: obtaining the size of the first-type network data; judging whether the size of the first-type network data is less than or equal to the size of the remaining storage space in a first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects; and if not, deleting part or all of the data currently stored in the first cache entity object according to a preset rule, or transferring part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule comprises one of the following: a least recently used (LRU) rule, or the time at which data was stored into the cache entity object.
Preferably, before the second-type network data is stored into the one or more cache entity objects, the method further comprises: obtaining the size of the second-type network data; judging whether the size of the second-type network data is less than or equal to the size of the remaining storage space in the first cache entity object; and if not, deleting part or all of the data currently stored in the first cache entity object according to the preset rule, or transferring part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule comprises one of the following: the LRU rule, or the time at which data was stored into the cache entity object.
Preferably, before the first-type network data or the second-type network data is stored into the one or more cache entity objects, the method further comprises: setting a storage identifier for the first-type or second-type network data, wherein the storage identifier is used for looking up that network data after it has been stored into the one or more cache entity objects.
Preferably, storing the first-type or second-type network data into the one or more cache entity objects comprises: judging whether the storage identifier already exists in the one or more cache entity objects; and if it exists, either directly overwriting the data currently stored under that storage identifier with the first-type or second-type network data, or first performing callback processing on the data corresponding to the storage identifier and then overwriting it with the first-type or second-type network data.
Preferably, setting the storage identifier for the first-type or second-type network data comprises: traversing all storage identifiers existing in the one or more cache entity objects; and determining, according to the traversal result, a storage identifier to be set for the first-type or second-type network data, wherein the storage identifier so set differs from all existing storage identifiers.
Preferably, the cache entity object set comprises at least one of the following: a pre-configured memory cache entity object; a pre-configured file cache entity object; a pre-configured database cache entity object; and a user-defined extended cache entity object.
According to a further aspect in the invention, a kind of memory storage of data is provided.
The data storage device according to the present invention comprises: a first obtaining module, configured to send a request message to a network side device and obtain network data to be cached; and a storage module, configured to select one or more cache entity objects for the network data from a cache entity object set, and either directly store the obtained first-type network data into the one or more cache entity objects, or store serialized second-type network data into the one or more cache entity objects.
Preferably, the device further comprises: a second obtaining module, configured to obtain the size of the first-type network data; a first judging module, configured to judge whether the size of the first-type network data is less than or equal to the size of the remaining storage space in a first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects; and a first processing module, configured to, when the output of the first judging module is no, delete part or all of the data currently stored in the first cache entity object according to a preset rule, or transfer part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule comprises one of the following: the LRU rule, or the time at which data was stored into the cache entity object.
Preferably, the device further comprises: a third obtaining module, configured to obtain the size of the second-type network data; a second judging module, configured to judge whether the size of the second-type network data is less than or equal to the size of the remaining storage space in the first cache entity object; and a second processing module, configured to, when the output of the second judging module is no, delete part or all of the data currently stored in the first cache entity object according to the preset rule, or transfer part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule comprises one of the following: the LRU rule, or the time at which data was stored into the cache entity object.
Preferably, the device further comprises: a setting module, configured to set a storage identifier for the first-type or second-type network data, wherein the storage identifier is used for looking up that network data after it has been stored into the one or more cache entity objects.
Preferably, the storage module comprises: a judging unit, configured to judge whether the storage identifier already exists in the one or more cache entity objects; and a processing unit, configured to, when the output of the judging unit is yes, either directly overwrite the data currently stored under that storage identifier with the first-type or second-type network data, or first perform callback processing on the data corresponding to the storage identifier and then overwrite it with the first-type or second-type network data.
Preferably, the setting module comprises: a traversal unit, configured to traverse all storage identifiers existing in the one or more cache entity objects; and a determining unit, configured to determine, according to the traversal result, a storage identifier to be set for the first-type or second-type network data, wherein the storage identifier so set differs from all existing storage identifiers.
Through the present invention, a request message is sent to a network side device to obtain network data to be cached, one or more cache entity objects are selected for the network data from a cache entity object set, and the obtained first-type network data is stored directly into the one or more cache entity objects, or serialized second-type network data is stored into them. In other words, by building a cache entity object set to store the different types of network data received from the network side device, repeated requests to the network side device for the same network data are reduced, and the frequency of information interaction with the network side device is lowered. This solves the problem in the related art that data caching solutions depend excessively on remote network services while also consuming a large amount of network traffic and mobile terminal battery power, and thereby reduces dependence on the network and saves network traffic and mobile terminal battery power.
Accompanying drawing explanation
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of the present application. The exemplary embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of it. In the drawings:
Fig. 1 is a flowchart of a data storage method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of Android platform cache management according to a preferred embodiment of the present invention;
Fig. 3 is a structural block diagram of a data storage device according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a data storage device according to a preferred embodiment of the present invention.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, provided there is no conflict, the embodiments in the present application and the features in the embodiments can be combined with each other.
Fig. 1 is a flowchart of a data storage method according to an embodiment of the present invention. As shown in Fig. 1, the method can comprise the following processing steps:
Step S102: sending a request message to a network side device to obtain network data to be cached;
Step S104: selecting one or more cache entity objects for the network data from a cache entity object set, and either directly storing the obtained first-type network data into the one or more cache entity objects, or storing serialized second-type network data into the one or more cache entity objects.
The data caching solutions provided in the related art depend excessively on remote network services and also consume a large amount of network traffic and mobile terminal battery power. With the method shown in Fig. 1, a request message is sent to a network side device to obtain the network data to be cached (for example, image data or string data), one or more cache entity objects are selected for the network data from a cache entity object set, and the obtained first-type network data is stored directly into the one or more cache entity objects, or serialized second-type network data is stored into them. By building a cache entity object set to store the different types of network data received from the network side device, repeated requests to the network side device for the same network data are reduced, and the frequency of information interaction with the network side device is lowered. This solves the above problem, and thereby reduces dependence on the network and saves network traffic and mobile terminal battery power.
It should be noted that the network data stored in the one or more selected cache entity objects can be divided into two types:
The first type: basic data types and data types that already implement serialization themselves, such as int, float and string data; such network data can be stored directly, without further serialization processing.
The second type: structure types or picture types; such network data can be stored only after serialization processing.
In a preferred implementation, the above cache entity object set can include, but is not limited to, at least one of the following:
(1) a pre-configured memory cache entity object;
(2) a pre-configured file cache entity object;
(3) a pre-configured database cache entity object;
(4) a user-defined extended cache entity object.
In a preferred embodiment, the above cache entity object set is realized as a caching component: a framework that unifies the storage of different types of network data. Its initial configuration provides, on the basis of a cache abstract class, three basic cache classes: data caching with files as the storage carrier, data caching with memory as the storage carrier, and data caching with a database as the storage carrier. Users can also implement the interfaces defined by the cache abstract class to define their own caches, or extend the three cache modes already implemented, so as to meet the diversity of practical applications. On top of these functions, users can use the caching facility through an encapsulated cache management class object.
The implementation of Android platform cache management is described in further detail below with reference to Fig. 2, taking the cache management component of the Android platform as an example. Fig. 2 is a schematic diagram of Android platform cache management according to a preferred embodiment of the present invention. As shown in Fig. 2, Android platform cache management is as follows:
(1) The cache management class supports generic-type data and can evict data according to the LRU principle.
The cache management class can provide the following functions:
Function one: clearing all data currently cached for the current type;
Function two: obtaining the V value by K among all the data cached for the current type, where K is the data type of the key and V is the data type of the value; the Java syntax of the Android platform supports generic type declarations for both;
Function three: storing the corresponding (K-V) data into the cache;
Function four: removing the corresponding (K-V) data from the cache;
Function five: obtaining the size of the cache;
Function six: obtaining the maximum limit of the cache.
In the preferred embodiment, the cache management class supports generic key-value types, which can be set according to actual conditions. The file cache and database cache implemented by this cache management component are both of type <String, Externalizable>, so anything that can be serialized as a file or as data can be cached.
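As a rough sketch, the six functions listed above can be realized on top of an access-ordered `LinkedHashMap`, which yields LRU eviction almost for free. This is an illustration under our own naming (`CacheManager` and its method names are not the patent's identifiers), not the component's actual code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the cache management class: a generic K-V cache with a
// maximum entry count, evicting least-recently-used entries.
public class CacheManager<K, V> {
    private final int maxEntries;
    private final LinkedHashMap<K, V> map;

    public CacheManager(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder=true makes iteration order track recency of use,
        // so the eldest entry is always the LRU one.
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > CacheManager.this.maxEntries;
            }
        };
    }

    public void clear() { map.clear(); }                     // function one
    public V get(K key) { return map.get(key); }             // function two
    public void put(K key, V value) { map.put(key, value); } // function three
    public V remove(K key) { return map.remove(key); }       // function four
    public int size() { return map.size(); }                 // function five
    public int maxSize() { return maxEntries; }              // function six
}
```

Touching an entry with `get` refreshes it, so inserting past the limit evicts the entry that has gone unused longest rather than the oldest by insertion.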
(2) The cache entity interface supports generic-type data and realizes data access.
The abstract class of a cache entity provides the following classes of interface:
First class: obtaining the least recently accessed (K-V) data in the cache, to be deleted when the cache is about to overflow;
Second class: obtaining the corresponding VALUE in the cache according to a KEY;
Third class: storing data into the cache under a KEY; if data corresponding to the KEY already exists in the cache, the existing corresponding V value is returned;
Fourth class: deleting the corresponding data in the cache according to a KEY;
Fifth class: obtaining the capacity limit of the cache;
Sixth class: obtaining the size/quantity of cached data;
Seventh class: traversing and obtaining the SET object of KEYs.
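The seven interface classes above might be expressed as an abstract generic type roughly as follows, together with a toy in-memory implementation. All identifiers here are our own reading of the description, not the patent's.

```java
import java.util.LinkedHashMap;
import java.util.Set;

// Sketch of the cache entity abstract class: one abstract method per
// interface class described above.
public abstract class CacheEntity<K, V> {
    public abstract K eldestKey();          // class 1: LRU entry, eviction candidate
    public abstract V get(K key);           // class 2: look up VALUE by KEY
    public abstract V put(K key, V value);  // class 3: store by KEY, returning any replaced value
    public abstract V remove(K key);        // class 4: delete the entry for KEY
    public abstract long capacityBytes();   // class 5: capacity limit of the cache
    public abstract int size();             // class 6: number of cached entries
    public abstract Set<K> keySet();        // class 7: traverse all KEYs
}

// Toy memory-backed implementation; an access-ordered LinkedHashMap makes
// the first key in iteration order the least recently used one.
class MemoryCacheEntity<K, V> extends CacheEntity<K, V> {
    private final LinkedHashMap<K, V> store = new LinkedHashMap<>(16, 0.75f, true);
    private final long capacity;

    public MemoryCacheEntity(long capacity) { this.capacity = capacity; }

    @Override public K eldestKey() {
        return store.isEmpty() ? null : store.keySet().iterator().next();
    }
    @Override public V get(K key) { return store.get(key); }
    @Override public V put(K key, V value) { return store.put(key, value); }
    @Override public V remove(K key) { return store.remove(key); }
    @Override public long capacityBytes() { return capacity; }
    @Override public int size() { return store.size(); }
    @Override public Set<K> keySet() { return store.keySet(); }
}
```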
(3) The memory cache class supports generic-type data and realizes access to data objects in memory.
(4) The file cache class supports only <String, Externalizable> data and realizes data object access in file mode.
(5) The database cache class supports only <String, Externalizable> data and realizes data object access in database mode.
(6) Externalizable is the serializable value object data type; under the Android platform this function is accomplished through the Externalizable interface.
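A value type made cacheable through `java.io.Externalizable` could look like the following sketch. The `UserProfile` class and its fields are invented for illustration; the byte-array round trip stands in for what a file or database cache would persist.

```java
import java.io.*;

// Sketch of a second-type value made serializable via Externalizable,
// matching the <String, Externalizable> requirement of the file and
// database caches.
public class UserProfile implements Externalizable {
    private String name;
    private int score;

    public UserProfile() {}  // Externalizable requires a public no-arg constructor

    public UserProfile(String name, int score) { this.name = name; this.score = score; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(score);
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();
        score = in.readInt();
    }

    public String getName() { return name; }
    public int getScore() { return score; }

    // Serialize to bytes, as a file or database cache would store them.
    public static byte[] toBytes(UserProfile p) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(p); }
            return bos.toByteArray();
        } catch (IOException e) { throw new UncheckedIOException(e); }
    }

    // Restore the value object from its serialized form.
    public static UserProfile fromBytes(byte[] data) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (UserProfile) ois.readObject();
        } catch (IOException | ClassNotFoundException e) { throw new RuntimeException(e); }
    }
}
```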
Preferably, in step S104, before the first-type network data is stored into the one or more cache entity objects, the following operations can also be included:
Step S1: obtaining the size of the first-type network data;
Step S2: judging whether the size of the first-type network data is less than or equal to the size of the remaining storage space in the first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects;
Step S3: if not, deleting part or all of the data currently stored in the first cache entity object according to a preset rule, or transferring part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule can include, but is not limited to, one of the following: the LRU rule, or the time at which data was stored into the cache entity object.
In a preferred embodiment, when network data (for example, string data) is received from the network side device, it is judged that network data of this type does not need serialization processing and can be stored directly as a VALUE. Next, one or more cache entity objects are selected for the network data from the cache entity object set, and a maximum capacity limit can be specified for each cache entity object. The cache entity object here can be the configured memory cache entity object, file cache entity object or database cache entity object, or a user-defined extended cache entity object. In the process of storing the network data, a storage policy (for example, the priority of the cache entity objects) can be preset. In the preferred embodiment, the priority of the memory cache entity object can be set highest, that of the file cache entity object second, and that of the database cache entity object last. It is then judged whether the current remaining capacity of the highest-priority memory cache entity object can hold the network data just received; if it can, the received network data is stored directly into the memory cache entity object. If the current capacity of the memory cache entity object cannot hold the network data just received, recently unused aging data can be moved into the file cache entity object or database cache entity object according to a preset rule (for example, evicting recently unused aging data from the memory cache entity object), and the network data just received is then stored into the memory cache entity object; thus data caching can proceed flexibly without affecting the performance and experience of the application. Of course, in order not to affect the processing capacity of the terminal side device, excessive use of the memory cache entity object should be avoided. Therefore, even if the current capacity of the memory cache entity object can hold the network data just received, if after storing the network data the utilization of the memory cache entity object would exceed a preset ratio (for example, 80%), recently unused aging data must likewise be moved into the file cache entity object or database cache entity object according to the preset rule.
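The tiered storage policy just described (memory tier first, with least-recently-used entries demoted to a lower tier when it fills) can be sketched as follows. Capacities are counted in entries rather than bytes for brevity, and all names are illustrative rather than the patent's.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the tiered store: try the highest-priority (memory) tier
// first; when it is full, demote its least-recently-used entries into
// the next tier (standing in for the file/database caches) instead of
// dropping them.
public class TieredCache<K, V> {
    private final int primaryCapacity;
    // accessOrder=true: the first key in iteration order is the LRU entry.
    private final LinkedHashMap<K, V> primary = new LinkedHashMap<>(16, 0.75f, true);
    private final Map<K, V> secondary = new LinkedHashMap<>();

    public TieredCache(int primaryCapacity) { this.primaryCapacity = primaryCapacity; }

    public void store(K key, V value) {
        // Steps S1/S2: judge whether the primary tier still has room.
        while (primary.size() >= primaryCapacity) {
            // Step S3: demote the least-recently-used entry to the next tier.
            K eldest = primary.keySet().iterator().next();
            secondary.put(eldest, primary.remove(eldest));
        }
        primary.put(key, value);
    }

    public V get(K key) {
        V v = primary.get(key);
        return v != null ? v : secondary.get(key);
    }

    public boolean inPrimary(K key) { return primary.containsKey(key); }
    public boolean inSecondary(K key) { return secondary.containsKey(key); }
}
```

A real component would also apply the preset-ratio check (e.g. 80% utilization) before admitting new data; that refinement is omitted here.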
After the above caching process, if the same web page is visited again and the same string data needs to be displayed, there is no need to initiate another request to the network for data interaction; the corresponding string data can be obtained directly from the memory cache entity object and displayed, which reduces network traffic, speeds up page presentation and improves the user experience.
Preferably, in step S104, before the second-type network data is stored into the one or more cache entity objects, the method can further comprise the following steps:
Step S4: obtaining the size of the second-type network data;
Step S5: judging whether the size of the second-type network data is less than or equal to the size of the remaining storage space in the first cache entity object;
Step S6: if not, deleting part or all of the data currently stored in the first cache entity object according to a preset rule, or transferring part or all of the data stored in the first cache entity object into cache entity objects other than the first cache entity object, wherein the preset rule can include, but is not limited to, one of the following: the least recently used (LRU) rule, or the time at which data was stored into the cache entity object.
In a preferred embodiment, when network data (for example, image data) is received from the network side device, it is judged that network data of this type needs serialization processing; the serialization of this data type must first be implemented before it can be stored as a VALUE. After this serialization preparation is completed, the cache management component can be used for caching. Next, one or more cache entity objects are selected for the network data from the cache entity object set, and a maximum capacity limit can be specified for each cache entity object. The cache entity object here can be the configured memory cache entity object, file cache entity object or database cache entity object, or a user-defined extended cache entity object. In the process of storing the network data, a storage policy (for example, the priority of the cache entity objects) can be preset: the priority of the memory cache entity object can be set highest, that of the file cache entity object second, and that of the database cache entity object last. It is then judged whether the current remaining capacity of the highest-priority memory cache entity object can hold the network data just received; if it can, the received network data is stored directly into the memory cache entity object. If it cannot, recently unused aging data can be moved into the file cache entity object or database cache entity object according to a preset rule (for example, evicting recently unused aging data from the memory cache entity object), and the network data just received is then stored into the memory cache entity object; thus data caching can proceed flexibly without affecting the performance and experience of the application. Of course, in order not to affect the processing capacity of the terminal side device, excessive use of the memory cache entity object should be avoided. Therefore, even if the current capacity of the memory cache entity object can hold the network data just received, if after storing the network data the utilization of the memory cache entity object would exceed a preset ratio (for example, 80%), recently unused aging data must likewise be moved into the file cache entity object or database cache entity object according to the preset rule.
After the above caching process, if the same web page is visited again and the same image data needs to be displayed, there is no need to initiate another request to the network for data interaction; the corresponding image data can be obtained directly from the memory cache entity object and displayed, which reduces network traffic, speeds up page presentation and improves the user experience.
Preferably, in step S104, before the network data of the first type is stored into the one or more cache entity objects, or before the network data of the second type is stored into the one or more cache entity objects, the following processing may also be included:

Step S7: setting a storage identifier for the network data of the first type or the network data of the second type, wherein the storage identifier is used to look up the network data of the first type or the network data of the second type after that network data has been stored into the one or more cache entity objects.
In a preferred embodiment, a storage identifier KEY can be set for each piece of network data received, and the network data itself is used as the VALUE; the correspondence between KEY and VALUE is established and stored into the one or more cache entity objects, so that the stored network data can subsequently be looked up by its KEY. If previously received network data needs to be retrieved later and its KEY is known, the cached data can be obtained directly through the cache retrieval function using the KEY value. If the KEY is not known, the KEY-set retrieval function of the cache management class can be used to traverse all existing KEYs; once the required KEY value is found, the query can be performed.
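The KEY/VALUE access pattern just described — direct lookup by a known KEY, or enumeration of all stored KEYs when the KEY is unknown — amounts to the following minimal sketch (class and method names are illustrative, not from the patent):

```java
import java.util.*;

// Minimal KEY/VALUE store mirroring the lookup pattern described above.
class KeyValueCache {
    private final Map<String, byte[]> store = new HashMap<>();

    void put(String key, byte[] value) { store.put(key, value); }

    // Known KEY: fetch the cached VALUE directly.
    byte[] getByKey(String key) { return store.get(key); }

    // Unknown KEY: enumerate all KEYs, pick the required one, then query.
    Set<String> allKeys() { return store.keySet(); }
}
```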
Preferably, in step S104, storing the network data of the first type into the one or more cache entity objects, or storing the network data of the second type into the one or more cache entity objects, can comprise the following operations:

Step S8: judging whether the storage identifier already exists in the one or more cache entity objects;

Step S9: if it exists, either directly overwriting the data currently stored in the one or more cache entity objects under that storage identifier with the network data of the first type or the network data of the second type, or first performing callback processing on the data corresponding to the storage identifier and then overwriting the data corresponding to the storage identifier with the network data of the first type or the network data of the second type.
In a preferred embodiment, the processing depends on the type of the network data: string data does not need serialization and can be stored directly as the VALUE, whereas image data requires serialization of its data type before it can be stored as the VALUE. During storage, pieces of network data are distinguished by their storage identifiers (KEYs). When network data is stored, the storage identifier KEY allocated to it is not necessarily unique; that is, a cache entity object may already contain an identifier identical to the KEY allocated to the newly received network data. In that case, if data corresponding to the KEY already exists in a cache entity object, the new data can simply overwrite the old data during storage; alternatively, the overwritten old data can be returned to the user through a callback interface. Whether to perform callback processing on the old data can be configured according to the user's individual needs.
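The overwrite-with-callback behaviour described above can be sketched as follows: when a KEY already exists, the new value replaces the old one, and an optional callback hands the displaced old value back to the caller. The callback interface here is an illustrative choice (`BiConsumer`), not the patent's actual API.

```java
import java.util.*;
import java.util.function.BiConsumer;

// Cache that overwrites on KEY collision and optionally reports the old value.
class OverwritingCache {
    private final Map<String, String> store = new HashMap<>();

    // Stores the new value; if the KEY already existed, the displaced old
    // K-V value is handed to the callback (when one is supplied).
    void put(String key, String value, BiConsumer<String, String> onOverwrite) {
        String old = store.put(key, value);
        if (old != null && onOverwrite != null) {
            onOverwrite.accept(key, old);
        }
    }

    String get(String key) { return store.get(key); }
}
```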
Preferably, in step S7, setting the storage identifier for the network data of the first type or the network data of the second type can comprise the following steps:

Step S10: traversing all storage identifiers that exist in the one or more cache entity objects;

Step S11: determining, according to the traversal result, the storage identifier to be set for the network data of the first type or the network data of the second type, wherein the storage identifier so set differs from all of the existing storage identifiers.
In a preferred embodiment, to avoid increasing the complexity and tedium of searching for network data in the one or more cache entity objects, and to avoid data loss caused by mistakenly overwriting data whose storage identifier happens to be identical, the storage identifiers already present in each cache entity object can first be traversed before a new identifier is set, and a storage identifier that differs from all currently existing identifiers is then chosen.
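The unique-identifier step above can be sketched as: gather the identifiers already present in every cache entity object, then derive a KEY that collides with none of them. The suffix-counter scheme below is one illustrative way to derive a fresh KEY; it is not prescribed by the patent.

```java
import java.util.*;

// Allocates a storage identifier distinct from all identifiers already
// present across the cache entity objects.
class KeyAllocator {
    @SafeVarargs
    static String uniqueKey(String base, Set<String>... cacheKeySets) {
        Set<String> existing = new HashSet<>();
        for (Set<String> keys : cacheKeySets) {
            existing.addAll(keys);            // traverse every cache entity's KEYs
        }
        if (!existing.contains(base)) return base;
        int i = 1;
        while (existing.contains(base + "#" + i)) i++;  // find a free suffix
        return base + "#" + i;
    }
}
```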
As a preferred embodiment of the present invention, the processing flow when the Android platform cache tool is used to perform caching can comprise the following steps:
Step 1: serialize the data type to be cached by implementing the Externalizable interface;

Step 2: instantiate the cache management class and specify the cache policy, i.e. memory, file, or database; a maximum cache limit can optionally be specified at the same time;

Step 3: specify the storage identifier KEY and the serialized data, and invoke the corresponding caching function of the cache management class;

Step 4: check the validity of KEY and VALUE; neither may be empty;

Step 5: calculate the size of the data to be stored and ensure that it does not exceed the maximum cache limit;

Step 6: judge whether the identifier KEY already exists in the cache; if it does, the newly generated VALUE will overwrite the original value when stored. Also judge, according to the specified cache policy, whether the storage space is sufficient for the data to be cached; if not, aging data must first be deleted;
In this preferred embodiment, aging data is identified as follows. In the memory cache mechanism, the LinkedHashMap stores entries in time order, so the entry at the front is the aging data. In the file cache mechanism, the database stores, besides the KEY and file name, the creation time of the corresponding file, so the judgment can be made from that time. The database cache mechanism is similar to the file cache: a time field is stored with each piece of data, and querying this time field reveals the aging data.
Step 7: write the K-V value into the cache. In the memory cache mechanism, a LinkedHashMap ordered by access has been constructed, so storing data simply adds a map entry. The file cache mechanism uses a database to keep the file-cache metadata: when data needs to be stored, a file name is first generated from the KEY, the data is written into that file, and the database is then updated. The database cache mechanism adds a new entry to the database;
Step 8: if cached data already existed for the KEY, the old K-V value can be returned through a callback.
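The aging-data judgment in step 6 relies on LinkedHashMap keeping entries in order, so that the entry at the front of the iteration is the least recently used ("aging") one. A minimal demonstration, using an access-ordered LinkedHashMap as described for the memory cache mechanism:

```java
import java.util.*;

class AgingDemo {
    // Returns the key of the aging (least recently used) entry, which an
    // access-ordered LinkedHashMap keeps at the front of its iteration order.
    static String agingKey(LinkedHashMap<String, ?> accessOrdered) {
        return accessOrdered.keySet().iterator().next();
    }

    public static void main(String[] args) {
        // accessOrder = true: iteration runs from least to most recently used
        LinkedHashMap<String, String> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.put("c", "3");
        lru.get("a");                       // touching "a" refreshes it
        System.out.println(agingKey(lru));  // prints "b": now the oldest entry
    }
}
```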
When the above Android platform cache tool is used to retrieve data and the KEY is known, the cached data can be obtained directly through the cache retrieval function using the KEY value. If the KEY is not known, the KEY-set retrieval function of the cache management class can be used to traverse all existing KEYs; once the required KEY value is found, the query can be performed.
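Step 1 of the flow above requires each cached data type to implement the Externalizable interface so it can be serialized into the VALUE bytes that the cache stores. A minimal sketch of such a type follows; the class name, fields, and helper methods are illustrative, not from the patent, and the helpers call writeExternal/readExternal directly to keep the example self-contained.

```java
import java.io.*;

// Example cached data type implementing Externalizable, as step 1 requires.
class CachedImage implements Externalizable {
    String url;
    byte[] pixels;

    public CachedImage() {}  // Externalizable requires a public no-arg constructor
    CachedImage(String url, byte[] pixels) { this.url = url; this.pixels = pixels; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(url);
        out.writeInt(pixels.length);
        out.write(pixels);               // the type controls its own wire format
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        url = in.readUTF();
        pixels = new byte[in.readInt()];
        in.readFully(pixels);
    }

    // Serialize to the VALUE bytes the cache would store.
    static byte[] toBytes(CachedImage img) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            img.writeExternal(out);
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Deserialize the VALUE bytes back into the typed object.
    static CachedImage fromBytes(byte[] data) {
        try {
            CachedImage img = new CachedImage();
            img.readExternal(new ObjectInputStream(new ByteArrayInputStream(data)));
            return img;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```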
Fig. 3 is a structural block diagram of the data storage apparatus according to an embodiment of the present invention. As shown in Fig. 3, the data storage apparatus can comprise: a first acquisition module 100, configured to initiate a request message to a network device and obtain the network data to be cached; and a storage module 102, configured to select one or more cache entity objects for the network data from the set of cache entity objects, and to store the acquired network data of the first type directly into the one or more cache entity objects, or to store the network data of the second type into the one or more cache entity objects after serialization processing.
The apparatus shown in Fig. 3 solves the problem in the related art that data caching solutions rely excessively on remote network services and consume large amounts of network traffic and mobile terminal battery power; it thereby reduces the dependence on the network and saves network traffic and the battery power of the mobile terminal.
Preferably, as shown in Fig. 4, the above apparatus can further comprise: a second acquisition module 104, configured to obtain the size of the network data of the first type; a first judgment module 106, configured to judge whether the size of the network data of the first type is less than or equal to the size of the remaining storage space in a first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects; and a first processing module 108, configured to, when the output of the first judgment module is no, delete part or all of the data currently stored in the first cache entity object according to a preset rule, or transfer part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule can include, but is not limited to, one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
Preferably, as shown in Fig. 4, the above apparatus can further comprise: a third acquisition module 110, configured to obtain the size of the network data of the second type; a second judgment module 112, configured to judge whether the size of the network data of the second type is less than or equal to the size of the remaining storage space in the first cache entity object; and a second processing module 114, configured to, when the output of the second judgment module is no, delete part or all of the data currently stored in the first cache entity object according to a preset rule, or transfer part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule can include, but is not limited to, one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
Preferably, as shown in Fig. 4, the above apparatus can further comprise: a setting module 116, configured to set a storage identifier for the network data of the first type or the network data of the second type, wherein the storage identifier is used to look up the network data of the first type or the network data of the second type after that network data has been stored into the one or more cache entity objects.
Preferably, the storage module 102 can comprise: a judging unit (not shown), configured to judge whether the storage identifier already exists in the one or more cache entity objects; and a processing unit (not shown), configured to, when the output of the judging unit is yes, either directly overwrite the data currently stored in the one or more cache entity objects under that storage identifier with the network data of the first type or the network data of the second type, or first perform callback processing on the data corresponding to the storage identifier and then overwrite the data corresponding to the storage identifier with the network data of the first type or the network data of the second type.
Preferably, the setting module 116 can comprise: a traversal unit (not shown), configured to traverse all storage identifiers that exist in the one or more cache entity objects; and a determining unit (not shown), configured to determine, according to the traversal result, the storage identifier to be set for the network data of the first type or the network data of the second type, wherein the storage identifier so set differs from all of the existing storage identifiers.
From the above description, it can be seen that the above embodiments achieve the following technical effects (it should be noted that these are effects that certain preferred embodiments can achieve): with the technical solution provided by the present invention, network data is cached locally; when a local application requests network data frequently and demands many kinds of resources, using this caching component can greatly improve the processing performance of the mobile terminal while also reducing the number of requests initiated to the network. On the basis of the three basic cache classes of memory, file, and database, the present invention also reserves extension points for other caching schemes; it supports caching of network graphics, places no restriction on the content backed up, and can back up arbitrary data, files, pictures, and other information downloaded from the web.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they can be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described can be performed in an order different from that described herein, or they can each be made into individual integrated circuit modules, or multiple modules or steps among them can be made into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing are merely preferred embodiments of the present invention and are not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (13)

1. A data storage method, characterized in that it comprises:
initiating a request message to a network device, and obtaining network data to be cached;
selecting one or more cache entity objects for the network data from a set of cache entity objects, and storing acquired network data of a first type directly into the one or more cache entity objects, or storing network data of a second type into the one or more cache entity objects after serialization processing.
2. The method according to claim 1, characterized in that, before the network data of the first type is stored into the one or more cache entity objects, the method further comprises:
obtaining the size of the network data of the first type;
judging whether the size of the network data of the first type is less than or equal to the size of the remaining storage space in a first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects;
if not, deleting part or all of the data currently stored in the first cache entity object according to a preset rule, or transferring the part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule comprises one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
3. The method according to claim 1, characterized in that, before the network data of the second type is stored into the one or more cache entity objects, the method further comprises:
obtaining the size of the network data of the second type;
judging whether the size of the network data of the second type is less than or equal to the size of the remaining storage space in the first cache entity object;
if not, deleting part or all of the data currently stored in the first cache entity object according to a preset rule, or transferring the part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule comprises one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
4. The method according to claim 1, characterized in that, before the network data of the first type is stored into the one or more cache entity objects, or before the network data of the second type is stored into the one or more cache entity objects, the method further comprises:
setting a storage identifier for the network data of the first type or the network data of the second type, wherein the storage identifier is used to look up the network data of the first type or the network data of the second type after that network data has been stored into the one or more cache entity objects.
5. The method according to claim 4, characterized in that storing the network data of the first type into the one or more cache entity objects, or storing the network data of the second type into the one or more cache entity objects, comprises:
judging whether the storage identifier already exists in the one or more cache entity objects;
if it exists, either directly overwriting the data currently stored in the one or more cache entity objects under the storage identifier with the network data of the first type or the network data of the second type, or first performing callback processing on the data corresponding to the storage identifier and then overwriting the data corresponding to the storage identifier with the network data of the first type or the network data of the second type.
6. The method according to claim 4, characterized in that setting the storage identifier for the network data of the first type or the network data of the second type comprises:
traversing all storage identifiers that exist in the one or more cache entity objects;
determining, according to the traversal result, the storage identifier to be set for the network data of the first type or the network data of the second type, wherein the storage identifier so set differs from all of the existing storage identifiers.
7. The method according to any one of claims 1 to 6, characterized in that the set of cache entity objects comprises at least one of the following:
a pre-configured memory cache entity object;
a pre-configured file cache entity object;
a pre-configured database cache entity object;
a user-defined extended cache entity object.
8. A data storage apparatus, characterized in that it comprises:
a first acquisition module, configured to initiate a request message to a network device and obtain network data to be cached;
a storage module, configured to select one or more cache entity objects for the network data from a set of cache entity objects, and to store acquired network data of a first type directly into the one or more cache entity objects, or to store network data of a second type into the one or more cache entity objects after serialization processing.
9. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a second acquisition module, configured to obtain the size of the network data of the first type;
a first judgment module, configured to judge whether the size of the network data of the first type is less than or equal to the size of the remaining storage space in a first cache entity object, wherein the first cache entity object has the highest storage priority among the one or more cache entity objects;
a first processing module, configured to, when the output of the first judgment module is no, delete part or all of the data currently stored in the first cache entity object according to a preset rule, or transfer the part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule comprises one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
10. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a third acquisition module, configured to obtain the size of the network data of the second type;
a second judgment module, configured to judge whether the size of the network data of the second type is less than or equal to the size of the remaining storage space in the first cache entity object;
a second processing module, configured to, when the output of the second judgment module is no, delete part or all of the data currently stored in the first cache entity object according to a preset rule, or transfer the part or all of the data stored in the first cache entity object to other cache entity objects than the first cache entity object, wherein the preset rule comprises one of the following: a least recently used (LRU) rule, or the time at which the data was stored into the cache entity object.
11. The apparatus according to claim 8, characterized in that the apparatus further comprises:
a setting module, configured to set a storage identifier for the network data of the first type or the network data of the second type, wherein the storage identifier is used to look up the network data of the first type or the network data of the second type after that network data has been stored into the one or more cache entity objects.
12. The apparatus according to claim 11, characterized in that the storage module comprises:
a judging unit, configured to judge whether the storage identifier already exists in the one or more cache entity objects;
a processing unit, configured to, when the output of the judging unit is yes, either directly overwrite the data currently stored in the one or more cache entity objects under the storage identifier with the network data of the first type or the network data of the second type, or first perform callback processing on the data corresponding to the storage identifier and then overwrite the data corresponding to the storage identifier with the network data of the first type or the network data of the second type.
13. The apparatus according to claim 11, characterized in that the setting module comprises:
a traversal unit, configured to traverse all storage identifiers that exist in the one or more cache entity objects;
a determining unit, configured to determine, according to the traversal result, the storage identifier to be set for the network data of the first type or the network data of the second type, wherein the storage identifier so set differs from all of the existing storage identifiers.
CN201310315361.0A 2013-07-24 2013-07-24 The storage method and device of data Active CN104346345B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201310315361.0A CN104346345B (en) 2013-07-24 2013-07-24 The storage method and device of data
US14/907,199 US20160191652A1 (en) 2013-07-24 2013-08-21 Data storage method and apparatus
PCT/CN2013/082003 WO2014161261A1 (en) 2013-07-24 2013-08-21 Data storage method and apparatus
EP13881423.1A EP3026573A4 (en) 2013-07-24 2013-08-21 Data storage method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310315361.0A CN104346345B (en) 2013-07-24 2013-07-24 The storage method and device of data

Publications (2)

Publication Number Publication Date
CN104346345A true CN104346345A (en) 2015-02-11
CN104346345B CN104346345B (en) 2019-03-26

Family

ID=51657459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310315361.0A Active CN104346345B (en) 2013-07-24 2013-07-24 The storage method and device of data

Country Status (4)

Country Link
US (1) US20160191652A1 (en)
EP (1) EP3026573A4 (en)
CN (1) CN104346345B (en)
WO (1) WO2014161261A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704473A (en) * 2016-08-09 2018-02-16 中国移动通信集团四川有限公司 A kind of data processing method and device
CN112118283B (en) * 2020-07-30 2023-04-18 爱普(福建)科技有限公司 Data processing method and system based on multi-level cache
CN115878505B (en) * 2023-03-01 2023-05-12 中诚华隆计算机技术有限公司 Data caching method and system based on chip implementation

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1804831A (en) * 2005-01-13 2006-07-19 陈翌 Network cache management system and method
CN102521252A (en) * 2011-11-17 2012-06-27 四川长虹电器股份有限公司 Access method of remote data
US20130018875A1 (en) * 2011-07-11 2013-01-17 Lexxe Pty Ltd System and method for ordering semantic sub-keys utilizing superlative adjectives
CN103034650A (en) * 2011-09-29 2013-04-10 北京新媒传信科技有限公司 System and method for processing data
US20130179451A1 (en) * 2011-06-24 2013-07-11 International Business Machines Corporation Dynamically scalable modes

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6249844B1 (en) * 1998-11-13 2001-06-19 International Business Machines Corporation Identifying, processing and caching object fragments in a web environment
US6233606B1 (en) * 1998-12-01 2001-05-15 Microsoft Corporation Automatic cache synchronization
US6757708B1 (en) * 2000-03-03 2004-06-29 International Business Machines Corporation Caching dynamic content
US7409389B2 (en) * 2003-04-29 2008-08-05 International Business Machines Corporation Managing access to objects of a computing environment
CN1615041A (en) * 2004-08-10 2005-05-11 谢成火 Memory space providing method for mobile terminal
EP1770954A1 (en) * 2005-10-03 2007-04-04 Amadeus S.A.S. System and method to maintain coherence of cache contents in a multi-tier software system aimed at interfacing large databases
US8621075B2 (en) * 2011-04-27 2013-12-31 Seven Metworks, Inc. Detecting and preserving state for satisfying application requests in a distributed proxy and cache system
JP5204333B2 (en) * 2011-07-04 2013-06-05 本田技研工業株式会社 Metal oxygen battery
CN102306166A (en) * 2011-08-22 2012-01-04 河南理工大学 Mobile geographic information spatial index method
CN102332030A (en) * 2011-10-17 2012-01-25 中国科学院计算技术研究所 Data storing, managing and inquiring method and system for distributed key-value storage system


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106681995A (en) * 2015-11-05 2017-05-17 阿里巴巴集团控股有限公司 Data caching method and data query method and device
CN106055706A (en) * 2016-06-23 2016-10-26 杭州迪普科技有限公司 Cache resource storage method and device
CN106055706B (en) * 2016-06-23 2019-08-06 杭州迪普科技股份有限公司 A kind of cache resources storage method and device
CN106341447A (en) * 2016-08-12 2017-01-18 中国南方电网有限责任公司 Database service intelligent exchange method based on mobile terminal
CN108664597A (en) * 2018-05-08 2018-10-16 深圳市创梦天地科技有限公司 Data buffer storage device, method and storage medium on a kind of Mobile operating system
WO2024026592A1 (en) * 2022-07-30 2024-02-08 华为技术有限公司 Data storage method and related apparatus
CN117292550A (en) * 2023-11-24 2023-12-26 天津市普迅电力信息技术有限公司 Speed limiting early warning function detection method for Internet of vehicles application
CN117292550B (en) * 2023-11-24 2024-02-13 天津市普迅电力信息技术有限公司 Speed limiting early warning function detection method for Internet of vehicles application

Also Published As

Publication number Publication date
EP3026573A1 (en) 2016-06-01
EP3026573A4 (en) 2016-07-27
US20160191652A1 (en) 2016-06-30
WO2014161261A1 (en) 2014-10-09
CN104346345B (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN104346345A (en) Data storage method and device
US8539338B2 (en) Cooperative rendering cache for mobile browser
CN103200212B (en) A kind of method and system realizing distributed conversation under cloud computing environment
US7698411B2 (en) Selectively delivering cached content or processed content to clients based upon a result completed percentage
CN111970315A (en) Method, device and system for pushing message
US20160335243A1 (en) Webpage template generating method and server
CN104731516A (en) Method and device for accessing files and distributed storage system
CN106209948A (en) A kind of data push method and device
CN105868231A (en) Cache data updating method and device
CN110928904A (en) Data query method and device and related components
CN105721538A (en) Data access method and apparatus
CN112100541B (en) Method and device for loading website page element, electronic device and storage medium
CN102567339A (en) Method, device and system for acquiring start page
CN102402613A (en) Webpage text information filtering system and method
CN110737857A (en) back-end paging acceleration method, system, terminal and storage medium
CN105608159A (en) Data caching method and device
CN107526762A (en) Service end, multi-data source searching method and system
CN108319634B (en) Directory access method and device for distributed file system
CN109947718A (en) A kind of date storage method, storage platform and storage device
CN111339057A (en) Method, apparatus and computer readable storage medium for reducing back-to-source requests
CN107798106A (en) A kind of URL De-weight methods in distributed reptile system
CN103561083A (en) Internet of things data processing method
CN108304555A (en) Distributed maps data processing method
CN106294417A (en) A kind of data reordering method, device and electronic equipment
CN108108400B (en) API (application program interface) local data increment-based method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20181031

Address after: 201203 Shanghai Zhangjiang hi tech park, 889 B, Bi Po Road 205

Applicant after: Shanghai Zhongxing Software Co., Ltd.

Address before: No. 55, Nanshan District science and technology road, Nanshan District, Shenzhen, Guangdong

Applicant before: ZTE Corporation

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant