CN105677483A - Data caching method and device - Google Patents

Data caching method and device Download PDF

Info

Publication number
CN105677483A
CN105677483A CN201511033840.9A CN201511033840A CN105677483A CN 105677483 A CN105677483 A CN 105677483A CN 201511033840 A CN201511033840 A CN 201511033840A CN 105677483 A CN105677483 A CN 105677483A
Authority
CN
China
Prior art keywords
pond
memory capacity
buffer memory
hard disk
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201511033840.9A
Other languages
Chinese (zh)
Other versions
CN105677483B (en
Inventor
赵智宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL Corp
Original Assignee
TCL Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TCL Corp filed Critical TCL Corp
Priority to CN201511033840.9A priority Critical patent/CN105677483B/en
Publication of CN105677483A publication Critical patent/CN105677483A/en
Application granted granted Critical
Publication of CN105677483B publication Critical patent/CN105677483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory

Abstract

The invention belongs to the technical field of intelligent equipment, and provides a data caching method and device. The method comprises the steps that a dynamic cache pool required by the network and the default cache capacity of the cache pool are set, and the dynamic cache pool has a multilayer cache structure; the optimal cache capacity of the dynamic cache pool is acquired according to the operating condition of a system; when the optimal cache capacity is smaller than the default cache capacity, the cache capacity of each layer of cache pool in the dynamic cache pool is regulated to optimize the use efficiency of the system. The problems that when too much resource information acquired through the network request exists, the system is low in use efficiency and response speed are solved, and the response speed of the system to user operation is effectively increased.

Description

A kind of method of data buffer storage and device
Technical field
The invention belongs to technical field of intelligent equipment, particularly relate to method and the device of a kind of data buffer storage.
Background technology
Along with the arriving in mobile Internet epoch, get more and more as the exchange method between smart machine and the server of client, interaction times. For video display class APP, such as like strange skill video, having the resource information of thousands of films under each classified catalogue, these resource informations include picture resource (such as film poster), protocol data (such as film succinct) etc. These resource informations all cannot be downloaded under embedded platform and buffer memory. The main buffer memory picture resource of prior art, and not caching protocol data. When obtaining less than resource information, user can not scroll through pages forward or backward. If user frequently triggers network request, when the resource information of system cache is too much, then the service efficiency of system and response speed are substantially reduced.
Summary of the invention
In consideration of it, the embodiment of the present invention provides a kind of method of data buffer storage and device, it is cached the problem that service efficiency is low and response speed is slow of system of many times in the resource information obtained by network request solving prior art.
First aspect, it is provided that a kind of method of data buffer storage, described method includes:
Arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure;
Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond;
When described best buffer memory capacity is less than described acquiescence buffer memory capacity, adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system.
Second aspect, it is provided that the device of a kind of data buffer storage, described device includes:
Arranging module, for arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure;
Acquisition module, obtains the best buffer memory capacity in described dynamic buffering pond for the operation conditions according to system;
Adjusting module, for when described best buffer memory capacity is less than described acquiescence buffer memory capacity, adjusting the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system.
Compared with prior art, the present invention by arranging special dynamic buffering pond and conventional memory capacity thereof for network request, and described dynamic buffering pond has multilamellar buffer structure; Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond; When described best buffer memory capacity is less than described acquiescence buffer memory capacity, then adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system; Thus solving the problem that service efficiency is low and response speed is slow being cached system of many times in the resource information obtained by network request, it is effectively improved the response speed of system of users operation.
Accompanying drawing explanation
In order to be illustrated more clearly that the embodiment of the present invention or technical scheme of the prior art, the accompanying drawing used required in embodiment or description of the prior art will be briefly described below, apparently, accompanying drawing in the following describes is only some embodiments of the present invention, for those of ordinary skill in the art, under the premise not paying creative work, it is also possible to obtain other accompanying drawings according to these accompanying drawings.
Fig. 1 is the flowchart of the method for the data buffer storage that the embodiment of the present invention provides;
Fig. 2 be the embodiment of the present invention provide data buffer storage method in step S102 implement flow chart;
Fig. 3 is the composition structure chart of the data buffer storage device that the embodiment of the present invention provides.
Detailed description of the invention
In order to make the purpose of the present invention, technical scheme and advantage clearly understand, below in conjunction with drawings and Examples, the present invention is further elaborated. Should be appreciated that specific embodiment described herein is only in order to explain the present invention, is not intended to limit the present invention.
The present invention by arranging special dynamic buffering pond and conventional memory capacity thereof for network request, and described dynamic buffering pond has multilamellar buffer structure; Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond; When described best buffer memory capacity is less than described acquiescence buffer memory capacity, then adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system; Thus solving the problem that service efficiency is low and response speed is slow being cached system of many times in the resource information obtained by network request, it is effectively improved the response speed of system of users operation. The embodiment of the present invention additionally provides corresponding device, is described in detail individually below.
What Fig. 1 illustrated the method for the data buffer storage that the embodiment of the present invention provides realizes flow process.
In embodiments of the present invention, the method for described data buffer storage is applied to smart machine, and described smart machine includes but not limited to smart mobile phone, panel computer, intelligent TV set, computer and learning machine etc.
Consulting Fig. 1, the method for described data buffer storage includes:
In step S101, arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure.
In embodiments of the present invention, it is provided with special dynamic buffering pond for network request, and configures the acquiescence buffer memory capacity in described dynamic buffering pond. Here, described dynamic buffering pond for buffer memory by the downloaded data message of network request, as by the downloaded the resources of movie & TV information of network request, theme bag resource information, books resource information etc. Described acquiescence buffer memory capacity is the buffer memory capacity in the described dynamic buffering pond that developer pre-sets, when the memory headroom of system and hard drive space are enough, and the maximum storage capacity in described dynamic buffering pond.
Further, in embodiments of the present invention, described dynamic buffering pond has multilamellar buffer structure, and including memory cache pond, hard disk cache pond, accordingly, described acquiescence buffer memory capacity includes memory cache pond default capability and hard disk cache pond default capability. Wherein, described memory cache pond is the memory headroom in system, including the first memory cache pond and the second memory cache pond; Described hard disk cache pond is the hard drive space in system.
Further, described first memory cache pond is for interactive information required during switching between cache user interface A ctivity.
Due in android system, need when switching between user interface Activity and Activity to carry a lot of jump information, but the size of the Intent biography value of Activity is fixing, and need striding course to carry out repeatedly having copied transmission, thus adding the memory consumption of system. The embodiment of the present invention is by arranging the first memory cache pond, for the interactive information between Activity and the Activity of temporal cache, the transmission of striding course can be reduced, and regain described first memory cache pond immediately after Activity takes out described interactive information, to reuse, effectively reduce the memory consumption of system.
The analysis result of the downloaded protocol data of network request is passed through for buffer memory in described second memory cache pond.
The downloaded protocol data of network request and the triggered time information of network request, effective duration information are passed through in described hard disk cache pond for buffer memory.
Here, the analysis result (i.e. object) of protocol data downloaded in the default historical time section currently risen by described second memory cache pond buffer memory so that user is in the process that quickly rolling refreshes, and the page can upgrade in time. And the downloaded protocol data (i.e. text message) of network request and the triggered time information of network request, effective duration information etc. are passed through by hard disk cache pond buffer memory, when smart machine is again started up after shut down, still it is not lost in described hard disk cache pond the protocol data downloaded by network request of buffer memory so that user can also view the data downloaded accordingly on the page; And when start, additionally it is possible to judge that according to the triggered time of described network request and effective duration information whether the protocol data of correspondence is effective, invalid, delete. Wherein, described effective duration information is configured according to practical situation by user.
In step s 102, the best buffer memory capacity in described dynamic buffering pond is obtained according to the operation conditions of system.
Here, the best buffer memory capacity in described dynamic buffering pond includes the best buffer memory capacity of internal memory and the best buffer memory capacity of hard disk, wherein, the best buffer memory capacity of described internal memory determines the memory headroom that described memory cache pond reality under current system running status is available, and the best buffer memory capacity of described hard disk determines the hard drive space that described hard disk cache pond reality under current system running status is available.
Alternatively, Fig. 2 illustrates that the step S102's that the embodiment of the present invention provides implements flow process. Consulting Fig. 2, described step S102 includes:
In step s 201, residue available memory space in acquisition system and the first preset ratio, calculate the product of described residue available memory space and the first preset ratio, obtain the best buffer memory capacity of internal memory.
In step S202, residue available hard disk space in acquisition system and the second preset ratio, calculate the product of described residue available hard disk space and the second preset ratio, obtain the best buffer memory capacity of hard disk.
Here, described first preset ratio and the second preset ratio are pre-set by developer. Exemplarily, when the residue available memory space in system is 100M, and described first preset ratio is 0.5, the best buffer memory capacity of described internal memory is 100M*0.5=50M. When the residue available hard disk space in system is 500M, and described first preset ratio is 0.7, the best buffer memory capacity of described internal memory is 500M*0.7=350M.
In step s 103, when described best buffer memory capacity is less than described acquiescence buffer memory capacity, the buffer memory capacity of each layer of cache pool in described dynamic buffering pond is adjusted respectively, to optimize the service efficiency of system.
In embodiments of the present invention, respectively the best buffer memory capacity of internal memory is compared with memory cache pond default capability and the best buffer memory capacity of hard disk and hard disk cache pond default capability are compared.
When memory headroom deficiency in system, namely when the best buffer memory capacity of internal memory is less than memory cache pond default capability, adjust the size in memory cache pond. Described step S103 specifically includes:
When the best buffer memory capacity of described internal memory is less than described memory cache pond default capability, delete the interactive information in described first memory cache pond, and/or
Delete the analysis result of protocol data in described second memory cache pond, to reduce the memory capacity in described memory cache pond.
When hard drive space deficiency in system, namely when the best buffer memory capacity of hard disk is less than hard disk cache pond default capability, adjust the size in hard disk cache pond. Described step S103 specifically also includes:
When the best buffer memory capacity of described hard disk is less than described hard disk cache pond default capability, delete the protocol data in described hard disk cache pond, the triggered time information of network request and effective duration information, to reduce the memory capacity in described hard disk cache pond.
Here, when memory headroom deficiency in system, preferentially stored interactive information in described first memory cache pond is partially or fully deleted, to reduce the memory capacity in described first memory cache pond so that system regains memory headroom vacant in the slow pond of described first internal memory. Alternatively, when deleting, determine the interactive information needing to delete in conjunction with lru algorithm, effective duration information etc. In like manner, it is possible to again the analysis result local of the protocol data in described second memory cache pond is deleted, to reduce the memory capacity in described second memory cache pond so that system regains memory headroom vacant in described second memory cache pond. When hard drive space deficiency in system, then by the protocol data in described hard disk cache pond, the triggered time information of network request and effectively duration information local deletion, the protocol data that the network request that the cache request time is nearer is corresponding, the triggered time information of network request and effective duration information etc., to reduce the memory capacity in described hard disk cache pond, system is made to regain hard drive space vacant in described hard disk cache pond. Thus avoiding user frequently triggering network request, when the resource information of system cache is too much, the service efficiency of system and the low problem of response speed, efficiently solve the problem that service efficiency is low and response speed is slow being cached system of many times in resource information, improve the response speed of system of users operation.
The present invention by arranging special dynamic buffering pond and conventional memory capacity thereof for network request, and described dynamic buffering pond has multilamellar buffer structure; Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond; When described best buffer memory capacity is less than described acquiescence buffer memory capacity, then adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system; Thus solving the problem that service efficiency is low and response speed is slow being cached system of many times in the resource information obtained by network request, it is effectively improved the response speed of system of users operation.
Fig. 3 illustrates the composition structure of the device of the data buffer storage that the embodiment of the present invention provides, and for the ease of illustrating, illustrate only the part relevant to the embodiment of the present invention.
In embodiments of the present invention, the described device method for realizing the data buffer storage described in above-mentioned Fig. 1 or Fig. 2 embodiment, it is possible to be the unit being built in the software unit of smart machine, hardware cell or software and hardware combining. Described smart machine includes but not limited to smart mobile phone, panel computer, intelligent TV set, intelligent watch, learning machine etc.
Consulting Fig. 3, the device of described data buffer storage includes:
Arranging module 31, for arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure;
Acquisition module 32, obtains the best buffer memory capacity in described dynamic buffering pond for the operation conditions according to system;
Adjusting module 33, for when described best buffer memory capacity is less than described acquiescence buffer memory capacity, adjusting the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system.
Further, described dynamic buffering pond includes memory cache pond, hard disk cache pond; Described acquiescence buffer memory capacity includes memory cache pond default capability and hard disk cache pond default capability;
Wherein, described memory cache pond is the memory headroom in system, including the first memory cache pond and the second memory cache pond; Described hard disk cache pond is the hard drive space in system;
Described first memory cache pond is for interactive information required during switching between cache user interface A ctivity;
The analysis result of the downloaded protocol data of network request is passed through for buffer memory in described second memory cache pond;
The downloaded protocol data of network request and the triggered time information of network request, effective duration information are passed through in described hard disk cache pond for buffer memory.
Further, described best buffer memory capacity includes the best buffer memory capacity of internal memory and the best buffer memory capacity of hard disk.
Described acquisition module 32 includes:
First acquiring unit 321, for the residue available memory space in acquisition system and the first preset ratio, calculates the product of described residue available memory space and the first preset ratio, obtains the best buffer memory capacity of internal memory;
Second acquisition unit 322, for the residue available hard disk space in acquisition system and the second preset ratio, calculates the product of described residue available hard disk space and the second preset ratio, obtains the best buffer memory capacity of hard disk.
Further, described adjusting module 33 includes:
First adjustment unit 331, for when the best buffer memory capacity of described internal memory is less than described memory cache pond default capability, deleting the interactive information in described first memory cache pond, and/or
Delete the analysis result of protocol data in described second memory cache pond, to reduce the memory capacity in described memory cache pond.
Further, described adjusting module 33 also includes:
Second adjustment unit 332, for when the best buffer memory capacity of described hard disk is less than described hard disk cache pond default capability, delete the protocol data in described hard disk cache pond, the triggered time information of network request and effective duration information, to reduce the memory capacity in described hard disk cache pond.
It should be noted that, device in the embodiment of the present invention may be used for the whole technical schemes realizing in said method embodiment, the function of its each functional module can implement according to the method in said method embodiment, it implements the associated description that process can refer in examples detailed above, repeats no more herein.
The present invention by arranging special dynamic buffering pond and conventional memory capacity thereof for network request, and described dynamic buffering pond has multilamellar buffer structure; Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond; When described best buffer memory capacity is less than described acquiescence buffer memory capacity, then adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system; Thus solving the problem that service efficiency is low and response speed is slow being cached system of many times in the resource information obtained by network request, it is effectively improved the response speed of system of users operation.
Those of ordinary skill in the art are it is to be appreciated that the unit of each example that describes in conjunction with the embodiments described herein and algorithm steps, it is possible to being implemented in combination in of electronic hardware or computer software and electronic hardware. These functions perform with hardware or software mode actually, depend on application-specific and the design constraint of technical scheme. Professional and technical personnel specifically can should be used for using different methods to realize described function to each, but this realization is it is not considered that beyond the scope of this invention.
Those skilled in the art is it can be understood that arrive, for convenience and simplicity of description, and the device of foregoing description and the specific works process of unit, it is possible to reference to the corresponding process in preceding method embodiment, do not repeat them here.
In several embodiments provided herein, it should be understood that the method for disclosed data buffer storage and device, it is possible to realize by another way. Such as, device embodiment described above is merely schematic, such as, described module, unit division, being only a kind of logic function to divide, actual can have other dividing mode when realizing, for instance multiple unit or assembly can in conjunction with or be desirably integrated into another system, or some features can ignore, or do not perform. Another point, shown or discussed coupling each other or direct-coupling or communication connection can be through INDIRECT COUPLING or the communication connection of some interfaces, device or unit, it is possible to be electrical, machinery or other form.
The described unit illustrated as separating component can be or may not be physically separate, and the parts shown as unit can be or may not be physical location, namely may be located at a place, or can also be distributed on multiple NE. Some or all of unit therein can be selected according to the actual needs to realize the purpose of the present embodiment scheme.
It addition, each functional unit, module in each embodiment of the present invention can be integrated in a processing unit, it is also possible to be that unit, module are individually physically present, it is also possible to two or more unit, module are integrated in a unit.
If described function is using the form realization of SFU software functional unit and as independent production marketing or use, it is possible to be stored in a computer read/write memory medium. Based on such understanding, part or the part of this technical scheme that prior art is contributed by technical scheme substantially in other words can embody with the form of software product, this computer software product is stored in a storage medium, including some instructions with so that a computer equipment (can be personal computer, server, or the network equipment etc.) perform all or part of step of method described in each embodiment of the present invention. And aforesaid storage medium includes: USB flash disk, portable hard drive, read only memory (ROM, Read-OnlyMemory), the various media that can store program code such as random access memory (RAM, RandomAccessMemory), magnetic disc or CD.
The above; being only the specific embodiment of the present invention, but protection scope of the present invention is not limited thereto, any those familiar with the art is in the technical scope that the invention discloses; change can be readily occurred in or replace, all should be encompassed within protection scope of the present invention. Therefore, protection scope of the present invention should described be as the criterion with scope of the claims.

Claims (10)

1. the method for a data buffer storage, it is characterised in that described caching method includes:
Arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure;
Operation conditions according to system obtains the best buffer memory capacity in described dynamic buffering pond;
When described best buffer memory capacity is less than described acquiescence buffer memory capacity, adjust the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system.
2. the method for data buffer storage as claimed in claim 1, it is characterised in that described dynamic buffering pond includes memory cache pond, hard disk cache pond; Described acquiescence buffer memory capacity includes memory cache pond default capability and hard disk cache pond default capability;
Wherein, described memory cache pond is the memory headroom in system, including the first memory cache pond and the second memory cache pond; Described hard disk cache pond is the hard drive space in system;
Described first memory cache pond is for interactive information required during switching between cache user interface A ctivity;
The analysis result of the downloaded protocol data of network request is passed through for buffer memory in described second memory cache pond;
The downloaded protocol data of network request and the triggered time information of network request, effective duration information are passed through in described hard disk cache pond for buffer memory.
3. the method for data buffer storage as claimed in claim 2, it is characterised in that described best buffer memory capacity includes the best buffer memory capacity of internal memory and the best buffer memory capacity of hard disk;
The described running status according to system obtains the best buffer memory capacity in described dynamic buffering pond and includes:
Residue available memory space in acquisition system and the first preset ratio, calculate the product of described residue available memory space and the first preset ratio, obtain the best buffer memory capacity of internal memory;
Residue available hard disk space in acquisition system and the second preset ratio, calculate the product of described residue available hard disk space and the second preset ratio, obtain the best buffer memory capacity of hard disk.
4. the method for data buffer storage as claimed in claim 3, it is characterized in that, described adjusting the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively when described best buffer memory capacity is less than described acquiescence buffer memory capacity, the service efficiency to optimize system includes:
When the best buffer memory capacity of described internal memory is less than described memory cache pond default capability, delete the interactive information in described first memory cache pond, and/or
Delete the analysis result of protocol data in described second memory cache pond, to reduce the memory capacity in described memory cache pond.
5. the method for data buffer storage as claimed in claim 3, it is characterized in that, described adjusting the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively when described best buffer memory capacity is less than described acquiescence buffer memory capacity, the service efficiency to optimize system includes:
When the best buffer memory capacity of described hard disk is less than described hard disk cache pond default capability, delete the protocol data in described hard disk cache pond, the triggered time information of network request and effective duration information, to reduce the memory capacity in described hard disk cache pond.
6. the device of a data buffer storage, it is characterised in that described device includes:
Arranging module, for arranging dynamic buffering pond and the acquiescence buffer memory capacity thereof of network request, described dynamic buffering pond has multilamellar buffer structure;
Acquisition module, obtains the best buffer memory capacity in described dynamic buffering pond for the operation conditions according to system;
Adjusting module, for when described best buffer memory capacity is less than described acquiescence buffer memory capacity, adjusting the buffer memory capacity of each layer of cache pool in described dynamic buffering pond respectively, to optimize the service efficiency of system.
7. the device of data buffer storage as claimed in claim 6, it is characterised in that described dynamic buffering pond includes memory cache pond, hard disk cache pond; Described acquiescence buffer memory capacity includes memory cache pond default capability and hard disk cache pond default capability;
Wherein, described memory cache pond is the memory headroom in system, including the first memory cache pond and the second memory cache pond; Described hard disk cache pond is the hard drive space in system;
Described first memory cache pond is for interactive information required during switching between cache user interface A ctivity;
The analysis result of the downloaded protocol data of network request is passed through for buffer memory in described second memory cache pond;
The downloaded protocol data of network request and the triggered time information of network request, effective duration information are passed through in described hard disk cache pond for buffer memory.
8. the device of data buffer storage as claimed in claim 7, it is characterised in that described best buffer memory capacity includes the best buffer memory capacity of internal memory and the best buffer memory capacity of hard disk;
Described acquisition module includes:
First acquiring unit, for the residue available memory space in acquisition system and the first preset ratio, calculates the product of described residue available memory space and the first preset ratio, obtains the best buffer memory capacity of internal memory;
Second acquisition unit, for the residue available hard disk space in acquisition system and the second preset ratio, calculates the product of described residue available hard disk space and the second preset ratio, obtains the best buffer memory capacity of hard disk.
9. the device of data buffer storage as claimed in claim 8, it is characterised in that described adjusting module includes:
First adjustment unit, for when the best buffer memory capacity of described internal memory is less than described memory cache pond default capability, deleting the interactive information in described first memory cache pond, and/or
Delete the analysis result of protocol data in described second memory cache pond, to reduce the memory capacity in described memory cache pond.
10. the device of data buffer storage as claimed in claim 8, it is characterised in that described adjusting module includes:
Second adjustment unit, for when the best buffer memory capacity of described hard disk is less than described hard disk cache pond default capability, delete the protocol data in described hard disk cache pond, the triggered time information of network request and effective duration information, to reduce the memory capacity in described hard disk cache pond.
CN201511033840.9A 2015-12-31 2015-12-31 Data caching method and device Active CN105677483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511033840.9A CN105677483B (en) 2015-12-31 2015-12-31 Data caching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511033840.9A CN105677483B (en) 2015-12-31 2015-12-31 Data caching method and device

Publications (2)

Publication Number Publication Date
CN105677483A true CN105677483A (en) 2016-06-15
CN105677483B CN105677483B (en) 2020-01-24

Family

ID=56190045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511033840.9A Active CN105677483B (en) 2015-12-31 2015-12-31 Data caching method and device

Country Status (1)

Country Link
CN (1) CN105677483B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108681469A (en) * 2018-05-03 2018-10-19 武汉斗鱼网络科技有限公司 Page cache method, device, equipment based on android system and storage medium
CN110533176A (en) * 2018-05-25 2019-12-03 北京深鉴智能科技有限公司 Buffer storage and its associated computing platform for neural computing
CN111107438A (en) * 2019-12-30 2020-05-05 北京奇艺世纪科技有限公司 Video loading method and device and electronic equipment
CN112667588A (en) * 2019-10-16 2021-04-16 青岛海信移动通信技术股份有限公司 Intelligent terminal device and method for writing file system data
WO2023065915A1 (en) * 2021-10-22 2023-04-27 华为技术有限公司 Storage method and apparatus, device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784698A (en) * 1995-12-05 1998-07-21 International Business Machines Corporation Dynamic memory allocation that enalbes efficient use of buffer pool memory segments
KR20010000208A (en) * 2000-08-17 2001-01-05 음용기 Method and system for large image display
CN102640472A (en) * 2009-12-14 2012-08-15 瑞典爱立信有限公司 Dynamic cache selection method and system
CN103246613A (en) * 2012-02-08 2013-08-14 联发科技(新加坡)私人有限公司 Cache device and cache data acquiring method therefor
CN103279429A (en) * 2013-05-24 2013-09-04 浪潮电子信息产业股份有限公司 Application-aware distributed global shared cache partition method
CN103907097A (en) * 2011-09-30 2014-07-02 美国网域存储技术有限公司 Intelligence for controlling virtual storage appliance storage allocation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5784698A (en) * 1995-12-05 1998-07-21 International Business Machines Corporation Dynamic memory allocation that enalbes efficient use of buffer pool memory segments
KR20010000208A (en) * 2000-08-17 2001-01-05 음용기 Method and system for large image display
CN102640472A (en) * 2009-12-14 2012-08-15 瑞典爱立信有限公司 Dynamic cache selection method and system
CN103907097A (en) * 2011-09-30 2014-07-02 美国网域存储技术有限公司 Intelligence for controlling virtual storage appliance storage allocation
CN103246613A (en) * 2012-02-08 2013-08-14 联发科技(新加坡)私人有限公司 Cache device and cache data acquiring method therefor
CN103279429A (en) * 2013-05-24 2013-09-04 浪潮电子信息产业股份有限公司 Application-aware distributed global shared cache partition method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108681469A (en) * 2018-05-03 2018-10-19 武汉斗鱼网络科技有限公司 Page cache method, device, equipment based on android system and storage medium
CN108681469B (en) * 2018-05-03 2021-07-30 武汉斗鱼网络科技有限公司 Page caching method, device, equipment and storage medium based on Android system
CN110533176A (en) * 2018-05-25 2019-12-03 北京深鉴智能科技有限公司 Buffer storage and its associated computing platform for neural computing
CN112667588A (en) * 2019-10-16 2021-04-16 青岛海信移动通信技术股份有限公司 Intelligent terminal device and method for writing file system data
CN112667588B (en) * 2019-10-16 2022-12-02 青岛海信移动通信技术股份有限公司 Intelligent terminal device and method for writing file system data
CN111107438A (en) * 2019-12-30 2020-05-05 北京奇艺世纪科技有限公司 Video loading method and device and electronic equipment
CN111107438B (en) * 2019-12-30 2022-04-22 北京奇艺世纪科技有限公司 Video loading method and device and electronic equipment
WO2023065915A1 (en) * 2021-10-22 2023-04-27 华为技术有限公司 Storage method and apparatus, device, and storage medium

Also Published As

Publication number Publication date
CN105677483B (en) 2020-01-24

Similar Documents

Publication Publication Date Title
CN105677483A (en) Data caching method and device
US9058324B2 (en) Predictive precaching of data based on context
US10862992B2 (en) Resource cache management method and system and apparatus
US8850075B2 (en) Predictive, multi-layer caching architectures
US20120246390A1 (en) Information processing apparatus, program product, and data writing method
CN105512251B (en) A kind of page cache method and device
CN103428251B (en) A kind of downloading task distribution method and device
CN102541538A (en) Picture displaying method and device based on mobile terminal
CN112051970A (en) Dirty data management for hybrid drives
US10657678B2 (en) Method, apparatus and device for creating a texture atlas to render images
KR102402780B1 (en) Apparatus and method for managing memory
CN103902575A (en) Pictorial information loading method and related device
CN104113567A (en) Content distribution network data processing method, device and system
CN104778049A (en) Implementation method used for human-computer interaction APP (application) on the basis of Android system and interaction system
US9584619B2 (en) Business web applications lifecycle management with multi-tasking ability
JP6181291B2 (en) Information transmission based on reading speed
CN104199729A (en) Resource management method and system
US8281091B2 (en) Automatic selection of storage volumes in a data storage system
US9787755B2 (en) Method and device for browsing network data, and storage medium
CN105630967A (en) Caching method and device based on GIS display data
CN105338097A (en) Terminal screen size-based flow control method, terminal and business server
US11068207B2 (en) Method, device, and computer program product for managing storage system
US11928755B2 (en) Integrating predetermined virtual tours for real-time delivery on third-party resources
Kumar et al. Improve Client performance in Client Server Mobile Computing System using Cache Replacement Technique
CN110795461A (en) iOS application-based data caching method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant