CN103019956A - Method and device for operating cache data - Google Patents

Info

Publication number
CN103019956A
CN103019956A CN2012104074575A CN201210407457A
Authority
CN
China
Prior art keywords
memory space
data
memory
sub-device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104074575A
Other languages
Chinese (zh)
Other versions
CN103019956B (en)
Inventor
杨帆 (Yang Fan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201210407457.5A
Publication of CN103019956A
Application granted
Publication of CN103019956B
Legal status: Active
Anticipated expiration

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method and device for operating on cached data. The method comprises the following steps: if a first memory space has been filled with data, judging whether the first memory space is empty; if not, taking the data located at the starting position out of the first memory space and writing it into a second memory space of larger capacity, until all the data in the first memory space has been taken out; and if the first memory space is empty, releasing the first memory space. The invention also provides a method for migrating the cached data of the first memory space into a second memory space of smaller capacity, as well as devices corresponding to these methods. With the method and device, the need to enlarge or reduce the cache capacity is satisfied while users can still access the cached data normally during the migration; at the same time, during peak data traffic, the pressure on the database or disk is relieved and the stability of the overall service is improved.

Description

Method and device for operating on cached data
Technical field
The present invention relates to the field of data communication technology, and in particular to a method and device for operating on cached data.
Background art
Caching technology is now used more and more widely in large-scale Internet products and services. Website engineers usually place the hot data accessed by users into memory as a cache. When other users access the same data, it can be read directly from memory and returned to them, avoiding a lookup on slow devices such as a database or disk.
Although many kinds of cache devices are in wide use, when memory runs short most of them have to request a larger block of memory from the operating system and release the old memory region at the same time, and then migrate or repopulate data in the new memory space. This approach easily causes a certain amount of "thrashing" for upper-layer applications: at the moment of memory expansion, all cached data becomes invalid at once, every user access at that moment has to go to the database or disk for the data, the response time of user accesses increases, and the access pressure on the disk momentarily becomes excessive.
The present invention addresses the above problems that existing cache devices encounter during cache expansion and proposes a solution, namely a strategy of moving data in a time-shared manner, smoothly migrating the data in the current memory space into another memory space. This satisfies the demand for expanding the cache capacity while still allowing users to access the cached data normally during the memory expansion; at the same time, it relieves the pressure on the database or disk during peak data traffic and thereby improves the stability of the overall service.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a method and device for operating on cached data that overcome, or at least partly solve, the above problems. Each time the cache is expanded, the present invention does not release the current memory space immediately; instead, at a certain time interval (for example, every 100 milliseconds), it slowly migrates the data of the current memory space into another memory space, and releases the current memory space only after all the data has been migrated.
According to one aspect of the present invention, a method for operating on cached data is provided, comprising:
if the first memory space has been filled with data, judging whether the first memory space is empty; if not, taking the data located at the starting position out of the first memory space, until all the data in the first memory space has been taken out; if so, releasing the first memory space; and
performing a hash operation on the data taken out of the first memory space to obtain the storage address of the data in a second memory space, and writing the data to that storage address of the second memory space.
Optionally, the capacity of the second memory space is greater than the capacity of the first memory space.
Optionally, after the first memory space has been filled, a flag bit is set for the first memory space to indicate that the first memory space is being expanded.
Optionally, when the data in the first memory space has been filled, whether the first memory space is empty is judged once every time interval.
Optionally, after a piece of data is stored into the second memory space, that data is deleted from the first memory space.
Optionally, during the migration of the cached data between memory spaces, if a user accesses data cached in the memory spaces, a read request is first sent to the first memory space, and a read request is sent to the second memory space only when that read fails.
According to another aspect of the present invention, a device for operating on cached data is provided, comprising:
a judging sub-device, adapted to judge, when the data in the first memory space has been filled, whether the first memory space is empty;
a reading sub-device, adapted to take the data located at the starting position out of the first memory space when the first memory space is not empty, until all the data in the first memory space has been taken out;
a releasing sub-device, adapted to release the first memory space when the first memory space is empty;
a processing sub-device, adapted to perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in a second memory space; and
a writing sub-device, adapted to write the data taken out of the first memory space to that storage address of the second memory space.
Optionally, the capacity of the second memory space is greater than the capacity of the first memory space.
Optionally, the device further comprises a marking sub-device, adapted to set a flag bit for the first memory space after the first memory space has been filled, to indicate that the first memory space is being expanded.
Optionally, when the data in the first memory space has been filled, the judging sub-device judges once every time interval whether the first memory space is empty.
Optionally, the device further comprises a deleting sub-device, adapted to delete a piece of data from the first memory space after that data has been stored into the second memory space.
Optionally, the device further comprises a control sub-device, adapted to control, during the migration of the cached data between memory spaces and when a user accesses data cached in the memory spaces, a read request to be sent first to the first memory space and a read request to be sent to the second memory space only when that read fails.
According to the above technical solution of the present invention, system stability and reliability can be greatly improved: smooth cache expansion is guaranteed even during peak data traffic, user data accesses can still be answered quickly while the cache is being expanded, and the cache hit rate is greatly improved. The above problems of the prior art are thereby solved, with the beneficial effects that, for business systems that are sensitive to changes in cached data, the peak load of the system drops by 30%-50% and the average response speed of user requests improves by 2-3 seconds.
The above technical solution of the present invention can also be flexibly configured according to the needs of a specific application; for example, the size of the cache expansion and the time interval between migrations can be chosen according to the business system, and when a large amount of stale data appears, the cache space can also be smoothly shrunk in the reverse direction.
The above description is only an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features, and advantages of the present invention may become more apparent, specific embodiments of the present invention are set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be regarded as limiting the present invention. Throughout the drawings, the same reference symbols denote the same parts. In the drawings:
Fig. 1 shows a flow chart of a method for operating on cached data according to an embodiment of the present invention;
Fig. 2 shows a schematic diagram of migrating the data of the first memory space into the expanded memory space according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of traversing each address in the first memory space according to an embodiment of the present invention; and
Fig. 4 shows a flow chart of a method for operating on cached data according to another embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that the scope of the present disclosure can be fully conveyed to those skilled in the art.
According to one aspect of the present invention, a method for operating on cached data is proposed. As shown in Fig. 1, the method comprises the following steps:
Step S1: if the first memory space has been filled with data, judge whether the first memory space is empty; if not, take the data located at the starting position out of the first memory space, until all the data in the first memory space has been taken out; if so, release the first memory space.
Step S2: perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in the second memory space, and write the data to that storage address of the second memory space.
Here, the capacity of the second memory space is greater than the capacity of the first memory space.
Optionally, after the first memory space has been filled, a flag bit is set for the first memory space to indicate that the first memory space is being expanded. Fig. 2 shows a schematic diagram of traversing each address in the first memory space according to an embodiment of the present invention; as shown in Fig. 2, at each migration the data located at the starting position is taken out of the first memory space, until all the data in the first memory space has been taken out.
Optionally, when the data in the first memory space has been filled, whether the first memory space is empty is judged once every time interval, the time interval being, for example, 100 milliseconds. Optionally, the hash operation may be performed each time data is taken out of the first memory space.
Optionally, after a piece of data is stored into the second memory space, that data is deleted from the first memory space; for example, each time a piece of data is stored into the second memory space, that data may be deleted from the first memory space.
For the key of an address in the first memory space, where a key is a user-defined unique identifier of a piece of stored data, the storage location corresponding to that key in the second memory space can be obtained according to a hash algorithm. For example, if the key of an address in the first memory space is bd919769e9, the corresponding storage address in the second memory space obtained through the hash algorithm is hash(bd919769e9) = 08; that is to say, when the cached data is migrated, the data stored in the first memory space under the key bd919769e9 is migrated to the location numbered 08 in the second memory space.
How the hash operation is realized may be defined by the developer. For example, in a file system, grids corresponding to storage locations may be set up for pieces of text, and the stored text is kept in those grids: a name, say, is converted into its stroke count and then stored in the grid corresponding to that number, so that a name with 10 strokes in total is placed in grid No. 10. In this way the hash operation plans where the data is kept among the grids.
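As a minimal illustration of such a developer-defined hash, the Python sketch below maps a key to a slot number in the second memory space. CRC32 modulo the capacity is only an assumed stand-in: the patent leaves the concrete hash function to the implementer, and its example value 08 for the key bd919769e9 would come from whatever function is actually chosen.

```python
import zlib

def storage_address(key: str, capacity: int) -> int:
    """Map a key to a storage address (slot number) in a memory space of the
    given capacity. The hash function itself is developer-defined; CRC32 is
    used here only as an illustrative stand-in."""
    return zlib.crc32(key.encode("utf-8")) % capacity

# For a second memory space holding 24 entries, the key from the example
# above lands in some slot in the range 0-23:
print(storage_address("bd919769e9", 24))
```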
By traversing each address in the first memory space in turn in this way, all the cached data in the first memory space can be moved, in a time-shared manner, into the second memory space.
For example, as shown in Fig. 3, if the current memory space can store 12 pieces of data and, as the data keeps growing, 12 storage slots can no longer satisfy the users' demand, then once the data in the current memory space has been filled, a memory space that can hold 24 pieces of data can be requested automatically, the data in the current memory space is slowly migrated into the memory space that can hold 24 pieces of data, and the current memory space is released only after all the data has been migrated. In Fig. 3, the upper diagram represents the current memory space that can no longer satisfy the users' demand, and the lower diagram represents the newly requested memory space whose capacity is greater than that of the current memory space.
During the migration of the cached data between memory spaces, if a user accesses data cached in the memory spaces, a read request is first sent to the first memory space, and a read request is sent to the second memory space only when that read fails, as sketched below.
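A minimal sketch of this read path, assuming each memory space is modeled as a Python list of slots; how entries are located within a space is left open in the patent, so a simple scan is used here.

```python
def read_during_migration(key, old_space, new_space):
    """Serve a user read while cached data is being migrated: send the read
    to the first memory space first and fall back to the second memory space
    only if that read fails. Each space is a list whose slots are either
    None or a (key, value) pair."""
    def lookup(space):
        for slot in space:
            if slot is not None and slot[0] == key:
                return slot[1]
        return None                       # read failed in this space

    value = lookup(old_space)             # read request to the first memory space
    if value is None:                     # only on failure, try the second one
        value = lookup(new_space)
    return value
```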
A flow chart of a method for operating on cached data according to another embodiment of the present invention is shown in Fig. 4. In Fig. 4, it is first judged whether the first memory space has been filled with data; if not, the flow ends. If so, then once every time interval it is judged whether the first memory space is empty. If it is not empty, the data located at the starting position is taken out of the first memory space, a hash operation is performed on it to obtain its storage address in the second memory space, and the data is written to that storage address of the second memory space, until all the data in the first memory space has been taken out; meanwhile, after each piece of data is stored into the second memory space, that data is deleted from the first memory space. If the first memory space is empty, the first memory space is released and the flow ends.
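Purely as a sketch of the Fig. 4 flow under stated assumptions (each memory space modeled as a Python list of slots, CRC32 modulo capacity standing in for the unspecified hash, a default interval of 100 milliseconds, and no collision handling), the migration could look like this:

```python
import time
import zlib

def expand_cache(old_space, new_space, interval=0.1):
    """Fig. 4 flow: once the first memory space is full, move its entries one
    at a time into the larger second memory space, deleting each entry from
    the first space after it has been stored, and release the first space
    when it is empty. A slot is either None or a (key, value) pair."""
    if any(slot is None for slot in old_space):
        return                                    # first space not yet full: nothing to do
    while any(slot is not None for slot in old_space):
        time.sleep(interval)                      # re-check once every time interval
        idx, (key, value) = next(
            (i, s) for i, s in enumerate(old_space) if s is not None
        )                                         # data at the starting position
        addr = zlib.crc32(key.encode("utf-8")) % len(new_space)
        new_space[addr] = (key, value)            # write to the hashed storage address
        old_space[idx] = None                     # then delete it from the first space
    old_space.clear()                             # all data moved: release the first space

# Illustrative usage (cf. Fig. 3): a full 12-slot space drained into a 24-slot space.
old = [(f"key{i}", f"value{i}") for i in range(12)]
new = [None] * 24
expand_cache(old, new, interval=0.0)
```

Releasing the first memory space is modeled here by simply clearing the list; a real cache would return the memory to the allocator and clear the expansion flag bit described above.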
According to another aspect of the present invention, a device for operating on cached data is proposed, the device comprising:
a judging sub-device, adapted to judge, when the data in the first memory space has been filled, whether the first memory space is empty;
a reading sub-device, adapted to take the data located at the starting position out of the first memory space when the first memory space is not empty, until all the data in the first memory space has been taken out;
a releasing sub-device, adapted to release the first memory space when the first memory space is empty;
a processing sub-device, adapted to perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in the second memory space; and
a writing sub-device, adapted to write the data taken out of the first memory space to that storage address of the second memory space.
Here, the capacity of the second memory space is greater than the capacity of the first memory space.
Optionally, the device further comprises a marking sub-device, adapted to set a flag bit for the first memory space after the first memory space has been filled, to indicate that the first memory space is being expanded.
Optionally, when the data in the first memory space has been filled, the judging sub-device judges once every time interval whether the first memory space is empty, the time interval being, for example, 100 milliseconds.
Optionally, the device further comprises a deleting sub-device, adapted to delete a piece of data from the first memory space after that data has been stored into the second memory space; for example, each time a piece of data is stored into the second memory space, that data may be deleted from the first memory space.
Optionally, the device further comprises a control sub-device, adapted to control, during the migration of the cached data between memory spaces and when a user accesses data cached in the memory spaces, a read request to be sent first to the first memory space and a read request to be sent to the second memory space only when that read fails.
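Read as software, the sub-devices above suggest one small component per responsibility. The skeleton below is only one possible arrangement under the same list-of-slots assumption; none of the names or the composition are prescribed by the patent.

```python
import zlib


class CacheExpansionDevice:
    """Device for operating on cached data, with one method per sub-device."""

    def __init__(self, old_space, new_space, interval=0.1):
        self.old_space = old_space   # first memory space (list of slots)
        self.new_space = new_space   # second, larger memory space
        self.interval = interval     # e.g. 100 ms between checks
        self.expanding = False       # flag bit set by the marking sub-device

    def judge(self):
        """Judging sub-device: is the first memory space empty?"""
        return all(slot is None for slot in self.old_space)

    def read(self):
        """Reading sub-device: the entry at the starting position, or None."""
        return next(
            ((i, slot) for i, slot in enumerate(self.old_space) if slot is not None),
            None,
        )

    def process(self, key):
        """Processing sub-device: hash the key to an address in the second space."""
        return zlib.crc32(key.encode("utf-8")) % len(self.new_space)

    def write(self, address, entry):
        """Writing sub-device: store the entry at that address of the second space."""
        self.new_space[address] = entry

    def delete(self, index):
        """Deleting sub-device: remove the migrated entry from the first space."""
        self.old_space[index] = None

    def release(self):
        """Releasing sub-device: give the first memory space back."""
        self.old_space.clear()
```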
In addition, when a large amount of stale data appears in the first memory space, that memory space can also be smoothly shrunk in the reverse direction by a method similar to the above technical solution, namely:
According to another aspect of the present invention, another method for operating on cached data is proposed, the method comprising the following steps:
Step S1: if the stale data in the first memory space has reached a certain threshold, judge whether any valid data remains in the first memory space; if so, take the valid data located at the starting position out of the first memory space, until all the valid data in the first memory space has been taken out; if not, release the first memory space.
Step S2: perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in the second memory space, and write the data to that storage address of the second memory space.
Here, the capacity of the second memory space is less than the capacity of the first memory space.
Optionally, after the stale data in the first memory space has reached a certain threshold, a flag bit is set for the first memory space to indicate that the first memory space is being shrunk.
Optionally, when the stale data in the first memory space has reached a certain threshold, whether any valid data remains in the first memory space is judged once every time interval, the time interval being, for example, 100 milliseconds.
Optionally, the hash operation may be performed each time data is taken out of the first memory space.
Optionally, after a piece of data is stored into the second memory space, that data is deleted from the first memory space; for example, each time a piece of data is stored into the second memory space, that data may be deleted from the first memory space.
By traversing each address in the first memory space in turn in this way, all the cached data in the first memory space can be moved, in a time-shared manner, into the second memory space.
For example, if 24 pieces of data are stored in the current memory space and 12 of them are stale, then in order to save the storage resources of the system, a space that can hold 12 pieces of data can be requested automatically, the valid data in the current memory space is slowly migrated into the memory space that can hold 12 pieces of data, and the current memory space is released only after all the data has been migrated; a sketch of this shrinking flow is given below.
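Under the same illustrative assumptions (list of slots, CRC32 placement, no collision handling), and with each entry extended to a (key, value, valid) triple so that stale data can be recognized, the shrinking direction might be sketched as follows:

```python
import time
import zlib

def shrink_cache(old_space, new_space, interval=0.1):
    """Move only the valid (non-stale) entries of the first memory space into
    a smaller second memory space, one per time interval, then release the
    first space. Entries are (key, value, valid) triples; stale entries are
    simply left behind and discarded with the old space."""
    while any(s is not None and s[2] for s in old_space):   # any valid data left?
        time.sleep(interval)
        idx, (key, value, _) = next(
            (i, s) for i, s in enumerate(old_space) if s is not None and s[2]
        )                                                    # valid entry at the starting position
        addr = zlib.crc32(key.encode("utf-8")) % len(new_space)
        new_space[addr] = (key, value, True)                 # write to the hashed address
        old_space[idx] = None                                # delete it from the first space
    old_space.clear()                                        # release the larger first space

# Illustrative usage: 24 entries, half of them stale, shrunk into a 12-slot space.
old = [(f"k{i}", f"v{i}", i % 2 == 0) for i in range(24)]
new = [None] * 12
shrink_cache(old, new, interval=0.0)
```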
During the migration of the cached data between memory spaces, if a user accesses data cached in the memory spaces, a read request is first sent to the first memory space, and a read request is sent to the second memory space only when that read fails.
According to another aspect of the present invention, a device for operating on cached data is proposed, the device comprising:
a judging sub-device, adapted to judge, when the stale data in the first memory space has reached a certain threshold, whether any valid data remains in the first memory space;
a reading sub-device, adapted to take the valid data located at the starting position out of the first memory space when valid data remains in the first memory space, until all the valid data in the first memory space has been taken out;
a releasing sub-device, adapted to release the first memory space when no valid data remains in the first memory space;
a processing sub-device, adapted to perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in the second memory space; and
a writing sub-device, adapted to write the data taken out of the first memory space to that storage address of the second memory space.
Here, the capacity of the second memory space is less than the capacity of the first memory space.
Optionally, the device further comprises a marking sub-device, adapted to set a flag bit for the first memory space after the stale data in the first memory space has reached a certain threshold, to indicate that the first memory space is being shrunk.
Optionally, when the stale data in the first memory space has reached a certain threshold, the judging sub-device judges once every time interval whether any valid data remains in the first memory space, the time interval being, for example, 100 milliseconds.
Optionally, the device further comprises a deleting sub-device, adapted to delete a piece of data from the first memory space after that data has been stored into the second memory space; for example, each time a piece of data is stored into the second memory space, that data may be deleted from the first memory space.
Optionally, the device further comprises a control sub-device, adapted to control, during the migration of the cached data between memory spaces and when a user accesses data cached in the memory spaces, a read request to be sent first to the first memory space and a read request to be sent to the second memory space only when that read fails.
The algorithms and displays provided here are not inherently related to any particular computer, virtual system, or other apparatus. Various general-purpose systems may also be used with the teaching given here. From the description above, the structure required to construct such systems is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the contents of the present invention described here may be implemented using various programming languages, and that the description given above for a specific language is made in order to disclose the best mode of the present invention.
Numerous specific details are described in the specification provided here. It will be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and to aid understanding of one or more of the various inventive aspects, various features of the present invention are sometimes grouped together, in the above description of exemplary embodiments of the present invention, into a single embodiment, figure, or description thereof. The disclosed method, however, should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will appreciate that the modules in the devices of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components of an embodiment may be combined into one module, unit, or component, and they may additionally be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, an equivalent, or a similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features that are included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to realize some or all of the functions of some or all of the components of the device for operating on cached data according to the embodiments of the present invention. The present invention may also be implemented as equipment or device programs (for example, computer programs and computer program products) for carrying out part or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media, or may take the form of one or more signals. Such signals may be downloaded from Internet websites, provided on carrier signals, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names.

Claims (12)

1. A method for operating on cached data, comprising:
if the first memory space has been filled with data, judging whether the first memory space is empty; if not, taking the data located at the starting position out of the first memory space, until all the data in the first memory space has been taken out; if so, releasing the first memory space; and
performing a hash operation on the data taken out of the first memory space to obtain the storage address of the data in a second memory space, and writing the data to that storage address of the second memory space.
2. The method according to claim 1, characterized in that the capacity of the second memory space is greater than the capacity of the first memory space.
3. The method according to claim 2, characterized in that, after the first memory space has been filled, a flag bit is set for the first memory space to indicate that the first memory space is being expanded.
4. The method according to claim 1, characterized in that, when the data in the first memory space has been filled, whether the first memory space is empty is judged once every time interval.
5. The method according to claim 1, characterized in that, after a piece of data is stored into the second memory space, that data is deleted from the first memory space.
6. The method according to claim 1, characterized in that, during the migration of the cached data between memory spaces, if a user accesses data cached in the memory spaces, a read request is first sent to the first memory space, and a read request is sent to the second memory space only when that read fails.
7. A device for operating on cached data, comprising:
a judging sub-device, adapted to judge, when the data in the first memory space has been filled, whether the first memory space is empty;
a reading sub-device, adapted to take the data located at the starting position out of the first memory space when the first memory space is not empty, until all the data in the first memory space has been taken out;
a releasing sub-device, adapted to release the first memory space when the first memory space is empty;
a processing sub-device, adapted to perform a hash operation on the data taken out of the first memory space to obtain the storage address of the data in a second memory space; and
a writing sub-device, adapted to write the data taken out of the first memory space to that storage address of the second memory space.
8. The device according to claim 7, characterized in that the capacity of the second memory space is greater than the capacity of the first memory space.
9. The device according to claim 7, characterized in that it further comprises a marking sub-device, adapted to set a flag bit for the first memory space after the first memory space has been filled, to indicate that the first memory space is being expanded.
10. The device according to claim 7, characterized in that, when the data in the first memory space has been filled, the judging sub-device judges once every time interval whether the first memory space is empty.
11. The device according to claim 7, characterized in that it further comprises a deleting sub-device, adapted to delete a piece of data from the first memory space after that data has been stored into the second memory space.
12. The device according to claim 7, characterized in that it further comprises a control sub-device, adapted to control, during the migration of the cached data between memory spaces and when a user accesses data cached in the memory spaces, a read request to be sent first to the first memory space and a read request to be sent to the second memory space only when that read fails.
CN201210407457.5A 2012-10-23 2012-10-23 Method and device for operating on cached data Active CN103019956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210407457.5A 2012-10-23 2012-10-23 CN103019956B (en) Method and device for operating on cached data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210407457.5A 2012-10-23 2012-10-23 CN103019956B (en) Method and device for operating on cached data

Publications (2)

Publication Number Publication Date
CN103019956A (en) 2013-04-03
CN103019956B CN103019956B (en) 2015-11-25

Family

ID=47968581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210407457.5A Active CN103019956B (en) 2012-10-23 2012-10-23 Method and device for operating on cached data

Country Status (1)

Country Link
CN (1) CN103019956B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014198161A1 (en) * 2013-06-13 2014-12-18 中兴通讯股份有限公司 Direct table storage method and device
CN106570068A (en) * 2016-10-13 2017-04-19 腾讯科技(北京)有限公司 Information recommendation method and device
CN108446077A (en) * 2018-03-21 2018-08-24 华立科技股份有限公司 Electric energy meter communication data storage method, device and electric energy meter
CN109933293A (en) * 2019-03-25 2019-06-25 深圳忆联信息系统有限公司 Method for writing data, device and computer equipment based on SpiFlash
CN110704174A (en) * 2018-07-09 2020-01-17 中国移动通信有限公司研究院 Memory release method and device, electronic equipment and storage medium
CN113535085A (en) * 2021-07-05 2021-10-22 歌尔科技有限公司 Method, terminal device, system and storage medium for transmitting product identification

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1442790A (en) * 2002-07-15 2003-09-17 尹启凤 Extending method for extending ROM of computer and interface chip thereof
JP2004129401A (en) * 2002-10-03 2004-04-22 Hitachi Ltd Power system distributed monitoring controller
CN1991730A (en) * 2005-12-28 2007-07-04 英业达股份有限公司 Expanding system and method for redundant array of self-contained disk
CN101162461A (en) * 2006-10-09 2008-04-16 中兴通讯股份有限公司 EMS memory data-base capacity-enlarging method
CN101226457A (en) * 2008-01-25 2008-07-23 中兴通讯股份有限公司 On-line capacity-enlarging system and method for magnetic disc array

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1442790A (en) * 2002-07-15 2003-09-17 尹启凤 Extending method for extending ROM of computer and interface chip thereof
JP2004129401A (en) * 2002-10-03 2004-04-22 Hitachi Ltd Power system distributed monitoring controller
CN1991730A (en) * 2005-12-28 2007-07-04 英业达股份有限公司 Expanding system and method for redundant array of self-contained disk
CN101162461A (en) * 2006-10-09 2008-04-16 中兴通讯股份有限公司 EMS memory data-base capacity-enlarging method
CN101226457A (en) * 2008-01-25 2008-07-23 中兴通讯股份有限公司 On-line capacity-enlarging system and method for magnetic disc array

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014198161A1 (en) * 2013-06-13 2014-12-18 中兴通讯股份有限公司 Direct table storage method and device
CN106570068A (en) * 2016-10-13 2017-04-19 腾讯科技(北京)有限公司 Information recommendation method and device
CN108446077A (en) * 2018-03-21 2018-08-24 华立科技股份有限公司 Electric energy meter communication data storage method, device and electric energy meter
CN110704174A (en) * 2018-07-09 2020-01-17 中国移动通信有限公司研究院 Memory release method and device, electronic equipment and storage medium
CN109933293A (en) * 2019-03-25 2019-06-25 深圳忆联信息系统有限公司 Method for writing data, device and computer equipment based on SpiFlash
CN109933293B (en) * 2019-03-25 2022-06-07 深圳忆联信息系统有限公司 Data writing method and device based on SpiFlash and computer equipment
CN113535085A (en) * 2021-07-05 2021-10-22 歌尔科技有限公司 Method, terminal device, system and storage medium for transmitting product identification

Also Published As

Publication number Publication date
CN103019956B (en) 2015-11-25

Similar Documents

Publication Publication Date Title
CN103049393A (en) Method and device for managing memory space
CN103019956A (en) Method and device for operating cache data
CN100481028C (en) Method and device for implementing data storage using cache
CN106055489B (en) Memory device and its operating method
CN104881333B (en) A kind of storage system and its method used
CN102349055B (en) To the access time optimization of the file stored on a memory
CN103246696A (en) High-concurrency database access method and method applied to multi-server system
CN102043686A (en) Disaster tolerance method, backup server and system of memory database
CN101122888A (en) Method and system for writing and reading application data
CN103647850A (en) Data processing method, device and system of distributed version control system
CN103049224A (en) Method, device and system for importing data into physical tape
CN104035925A (en) Data storage method and device and storage system
CN107480074A (en) A kind of caching method, device and electronic equipment
CN106055274A (en) Data storage method, data reading method and electronic device
CN104156322A (en) Cache management method and device
CN102073594A (en) Method for attenuating thermal data
CN105573673A (en) Database based data cache system
CN102768672B (en) A kind of disk space management method and apparatus
CN104391947B (en) Magnanimity GIS data real-time processing method and system
CN107783732A (en) A kind of data read-write method, system, equipment and computer-readable storage medium
CN104375955A (en) Cache device and control method thereof
CN103176753A (en) Storage device and data management method of storage device
CN110209600B (en) CACHE space distribution method and system based on simplified LUN
CN107967306B (en) Method for rapidly mining association blocks in storage system
CN102103545A (en) Method, device and system for caching data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220725

Address after: Room 801, 8th floor, No. 104, floors 1-19, building 2, yard 6, Jiuxianqiao Road, Chaoyang District, Beijing 100015

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.