WO2019206260A1 - Method and apparatus for reading a file cache


Info

Publication number: WO2019206260A1 (application PCT/CN2019/084476)
Authority: WIPO (PCT)
Prior art keywords: file block, read, hotspot, cache, user
Other languages: English (en), French (fr)
Inventors: 李涛 (Li Tao), 周帅 (Zhou Shuai), 李渴 (Li Ke)
Original assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2019206260A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10: File systems; File servers
    • G06F 16/17: Details of further file system functions
    • G06F 16/172: Caching, prefetching or hoarding of files
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/957: Browsing optimisation, e.g. caching or content distillation

Definitions

  • the present application relates to the field of computer application technologies, and more particularly, to a method and apparatus for reading a file cache.
  • the conventional solution generally adopts the least recently used (LRU) management method to manage the file cache.
  • Under the LRU scheme, the most recently accessed file pages are kept in the cache as far as possible and the least recently used pages are released first, so that when the electronic device launches an application it can retrieve the application's related files from the cache and thereby speed up the application startup process.
  • However, the LRU management mode is divorced from the user's real usage scenarios, and its effect in actual use is unsatisfactory. For example, when the file cache recently touched by the user contains a lot of useless data, useful cached files are evicted from memory, which lowers the file cache hit rate, causes memory churn, and degrades the user experience.
  • the present application provides a method and apparatus for reading a file cache to speed up the startup of an application scenario of an electronic device.
  • In a first aspect, a method for reading a file cache is provided, comprising: predicting at least one operation to be performed by a user; determining the hotspot file block corresponding to the at least one operation; and, before the at least one operation occurs, reading the hotspot file block corresponding to the at least one operation into the cache.
  • the at least one operation described above is an operation for starting an application scenario of the electronic device.
  • Different operations of the user may correspond to different application scenarios, and the user opens the corresponding application scenario by performing those operations. Predicting the at least one operation the user will perform is therefore equivalent to predicting the application scenario that is likely to occur: the prediction can first target the application scenario and, from it, the user's next operation, or it can directly predict the operation the user is about to perform.
  • For example, the at least one operation includes pulling down the notification bar, sliding the notification bar until a WeChat message appears, and clicking the WeChat message; through these operations the user launches WeChat.
  • the application scenario may be an application or a specific scenario of the application.
  • For example, the application scenario may be WeChat itself, or a specific scenario within WeChat such as opening Moments (the friend circle).
  • the determining the hotspot file block corresponding to the at least one operation includes: determining address information of the hotspot file block corresponding to the at least one operation.
  • Specifically, the address information takes the form [file, offset, page], where file identifies the file containing the hotspot file block, offset is the offset of the block's address relative to the file's base (starting) address, and page denotes the file pages contained in the hotspot file block.
  • reading the hotspot file block corresponding to the at least one operation into the cache includes: reading, according to the address information of the hotspot file block corresponding to the at least one operation, the hotspot file block corresponding to the at least one operation into the cache.
  • the hotspot file block corresponding to the at least one operation may be read into the cache from the disk according to the address information of the hotspot file block corresponding to the at least one operation.
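As a minimal sketch of the two ideas above, the [file, offset, page] address form and the disk-to-cache read can be modeled as follows. The Python names, the 4 KB page size, and the use of `os.pread` are illustrative assumptions, not details taken from the patent:

```python
import os
from dataclasses import dataclass

PAGE_SIZE = 4096  # assumed page size; the patent does not fix one


@dataclass
class HotspotBlock:
    """Address of a hotspot file block in [file, offset, page] form."""
    path: str    # file in which the hotspot file block lives
    offset: int  # page offset relative to the file's starting address
    pages: int   # number of file pages contained in the block


def prefetch_block(block: HotspotBlock) -> bytes:
    """Read the block from disk so its pages land in the page cache."""
    fd = os.open(block.path, os.O_RDONLY)
    try:
        return os.pread(fd, block.pages * PAGE_SIZE, block.offset * PAGE_SIZE)
    finally:
        os.close(fd)
```

Reading the bytes once is enough to populate the OS page cache, so a later application start that touches the same pages hits memory instead of disk.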
  • the startup speed of the application scenario corresponding to the user operation can be accelerated.
  • Optionally, the at least one operation is all or part of the operations for starting the application scenario.
  • That is, the hotspot file blocks corresponding to some of the operations involved in starting an application scenario may be pre-read into the cache, or the hotspot file blocks corresponding to all of the operations involved in the application scenario may be pre-read into the cache.
  • predicting at least one operation to be performed by the user includes: acquiring a trigger message; determining, according to the preset first correspondence, an operation corresponding to the trigger message as the at least one operation.
  • the first correspondence is used to indicate the operation of the user corresponding to the different trigger message.
  • Specifically, the first correspondence may indicate a mapping between trigger messages and user operations, for example in the form of a table listing different trigger messages and the user operations corresponding to each of them.
  • the trigger message is used to prompt the user to start the application scenario.
  • the trigger message may be a message received by an application of the electronic device, for example, the trigger message is a news push message, and the news push message is used to prompt the user to open the news client.
  • For example, the trigger message is a WeChat message, and the WeChat message prompts the user to open WeChat.
  • the operation to be performed by the user can be determined, so that the hotspot file block corresponding to the operation to be performed by the user can be read into the cache in advance, and the startup speed of the application scenario is accelerated.
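A hypothetical sketch of this trigger-message lookup follows; the table contents and function name are invented for illustration, since the patent leaves the concrete data structure open:

```python
# Hypothetical first correspondence: trigger message -> predicted operations.
FIRST_CORRESPONDENCE = {
    "wechat_message": [
        "swipe screen until notification bar appears",
        "slide notification bar until the WeChat message appears",
        "click the WeChat message",
    ],
    "weibo_message": [
        "light up the screen",
        "unlock the screen",
        "click the Weibo message",
    ],
}


def predict_operations(trigger: str) -> list:
    """Return the operations the user is expected to perform for a trigger."""
    return FIRST_CORRESPONDENCE.get(trigger, [])
```

When a trigger message arrives, the predicted operations can immediately drive the prefetch of their hotspot file blocks, ahead of the user actually acting.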
  • predicting at least one operation to be performed by the user includes predicting at least one operation to be performed by the user according to a user's operating habits.
  • predicting at least one operation to be performed by the user according to the operation habit of the user includes: acquiring operation habit information of the user; and predicting at least one operation to be performed by the user according to the operation habit information of the user.
  • the user's operation habit information can be used to indicate the habit of the user to open the application scenario. For example, the user opens an application scenario by using a certain operation mode at a fixed time every day. Then, when the fixed time is to be reached, the user's operating habits can be used to predict which operation mode the user will use to open the application scenario.
  • Optionally, the hotspot file blocks corresponding to the at least one operation include a plurality of hotspot file blocks.
  • In that case, reading the hotspot file blocks corresponding to the at least one operation into the cache includes performing read operations on the plurality of hotspot file blocks simultaneously, where each read operation comprises a plurality of sub-phases and each of the plurality of hotspot file blocks is in a different sub-phase of its read operation.
  • Because the read operation includes multiple sub-phases, read operations on different hotspot file blocks can proceed in different sub-phases at the same time (equivalent to reading different hotspot file blocks in parallel), which increases the read speed of the hotspot file blocks and further shortens the startup time of the application scenario.
  • the plurality of sub-phases described above include opening, positioning, read initiation, and read response.
  • Each of the plurality of sub-phases may correspond to one hot file block.
  • the first hotspot file block, the second hotspot file block, the third hotspot file block, and the fourth hotspot file block are respectively in an open, locate, read initiate, and read response phase of the read operation.
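The staggered reading just described resembles a classic software pipeline: while one block is in the read-response stage, the next is in read initiation, and so on. The sketch below models the schedule abstractly; the stage names follow the text, but the scheduling function itself is an illustrative assumption:

```python
# Sub-phases of one read operation, in order, as named in the text.
STAGES = ["open", "position", "read_initiate", "read_response"]


def pipeline_schedule(blocks: list) -> list:
    """Return (tick, block, stage) triples: each block advances one stage per
    tick, so in the steady state four blocks occupy the four stages at once."""
    schedule = []
    ticks = len(blocks) + len(STAGES) - 1
    for t in range(ticks):
        for s, stage in enumerate(STAGES):
            i = t - s  # index of the block occupying this stage at tick t
            if 0 <= i < len(blocks):
                schedule.append((t, blocks[i], stage))
    return schedule
```

With four blocks, tick 3 has the first block in read response, the second in read initiation, the third in positioning, and the fourth being opened, exactly the situation described above.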
  • Optionally, the method further includes: determining a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation. Reading the hotspot file blocks corresponding to the at least one operation into the cache then includes reading them into the cache according to the prefetch policy.
  • the foregoing prefetching strategy may specifically include the number, size, and the like of hotspot file blocks that need to be read in each sub-phase.
  • Taking into account the system load of the electronic device and the size of the hotspot file blocks makes it possible to formulate a reasonable prefetch policy, which improves the reading efficiency of the hotspot file blocks.
  • Optionally, the system load of the electronic device includes at least one of the CPU occupancy of the electronic device, the available memory size, and the amount of process waiting caused by I/O.
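One hedged way such a load-aware prefetch policy could look is sketched below. All thresholds and the function name are invented for illustration; the patent does not specify concrete values:

```python
def choose_prefetch_quota(cpu_usage: float, free_mem_mb: int,
                          io_wait: float, block_total_mb: int) -> int:
    """Pick how many MB of hotspot file blocks to prefetch given system load.

    cpu_usage and io_wait are fractions in [0, 1]; thresholds are assumed.
    """
    if cpu_usage > 0.9 or io_wait > 0.5:
        return 0  # system is busy: skip prefetching rather than compete for I/O
    # Never let the prefetch consume more than a quarter of free memory.
    return min(block_total_mb, free_mem_mb // 4)
```

A policy like this degrades gracefully: under heavy load it backs off entirely, and under memory pressure it trims the prefetch to fit.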
  • Optionally, the method further includes: acquiring historical data of the electronic device, where the historical data includes the hotspot file blocks that the electronic device read into the cache under the user's different operations; determining, from the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generating second correspondence information for that second correspondence. Determining the hotspot file block corresponding to the at least one operation then includes determining it according to the second correspondence information.
  • The second correspondence may indicate a mapping between the user's operations and hotspot file blocks.
  • Specifically, the second correspondence may take the form of a table showing different operations and the hotspot file blocks corresponding to each operation.
  • From the historical data, the correspondence between the user's operations and hotspot file blocks can be determined accurately, and the hotspot file block corresponding to the at least one operation can then be determined conveniently from that correspondence.
  • In a second aspect, a file cache reading apparatus is provided, comprising means for performing the method of reading a file cache in the first aspect or any implementation of the first aspect.
  • In a third aspect, a file cache reading apparatus is provided, comprising: a memory for storing a program; and a processor and a transceiver, where the processor is configured to execute the program stored in the memory, and when the program is executed, the processor performs the method of the first aspect or any of its implementations.
  • The file cache reading apparatus of the third aspect may specifically be an electronic device such as a smart phone, a tablet, a smart watch, or the like.
  • a computer readable medium storing program code for device execution, the program code comprising instructions for performing the method of the first aspect or various implementations thereof.
  • a computer program code comprising instructions for performing the method of the first aspect or various implementations thereof.
  • the computer program code in the above fifth aspect may be program code located inside the electronic device, and when the program code is executed, the startup speed of the application scenario of the electronic device may be accelerated.
  • FIG. 1 is a schematic flowchart of a method for reading a file cache according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of determining a hotspot file block in a WeChat startup scenario
  • FIG. 3 is a schematic diagram of comparison between a normal start WeChat and a prefetch start WeChat;
  • FIG. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application;
  • FIG. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic block diagram of a file cache reading system according to an embodiment of the present application.
  • The method for reading a file cache in the embodiments of the present application may be performed by an electronic device. The electronic device may specifically be a smart terminal with rich display content, such as a smart phone (especially an Android phone), a personal digital assistant (PDA), a tablet computer, or a car computer.
  • The LRU management mode is divorced from the user's real scenarios, which leads to a low file cache hit rate and a poor user experience. To read the file cache better, one alternative is to first learn which files are opened during an application's startup, store that file information, and fetch the pre-recorded files when the corresponding application is started again, so as to speed up the startup process.
  • However, the above method performs file-granularity cache prefetching only once a cold start of the corresponding application is already under way, so the prefetch happens relatively late.
  • Prefetching while the application is booting may also compete for disk reads with the foreground process, lengthening the startup time.
  • In addition, the prefetch granularity (whole files) is too coarse, which can load a large number of useless file pages and degrade performance.
  • In the embodiments of the present application, the operation the user is likely to perform next is predicted first, and the hotspot file blocks corresponding to that operation are then pre-read into the cache, so that when the user performs the corresponding operation the application scenario starts faster and the user experience improves.
  • FIG. 1 is a schematic flowchart of reading a file cache in an embodiment of the present application.
  • the method shown in FIG. 1 can be performed by an electronic device.
  • The method shown in FIG. 1 specifically includes steps 101 to 103, which are described in detail below.
  • the at least one operation may be an operation that the user may perform on the electronic device.
  • the at least one operation may be a sliding electronic device screen, clicking a screen of the electronic device, clicking certain icons of the electronic device operation interface, and the like.
  • each operation may include only a single action or multiple actions.
  • The number of actions included in a single operation is not limited in this application. For example, one operation includes only a swipe-screen action, while another operation includes two actions: clicking the notification bar and sliding the notification bar until the WeChat message appears.
  • For example, after receiving a WeChat message, the user performed the following operations: swiping the screen until the notification bar appeared, sliding the notification bar until the WeChat message appeared, and clicking the WeChat message. Then, the next time a WeChat message is received, it can be predicted that the user will again swipe the screen until the notification bar appears, slide the notification bar until the WeChat message appears, and click the WeChat message.
  • Likewise, after receiving a microblog (Weibo) message, the user performed the following operations: sliding the screen until the notification bar appeared, sliding the notification bar until the microblog message appeared, and clicking the microblog message. Then, once the electronic device receives a microblog message, it can predict that the user is likely to perform those same operations: sliding the screen until the notification bar appears, sliding the notification bar until the microblog message appears, and clicking the microblog message.
  • The at least one operation may be all or some of the operations the user uses to open an application scenario. Specifically, the at least one operation may be all or part of the operations used to open WeChat.
  • For example, the at least one operation may be swiping the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and clicking the WeChat message; or it may be only swiping the screen until the notification bar appears and sliding the notification bar until the WeChat message appears.
  • the above application scenario may be an application or a specific scenario of the application.
  • For example, the application scenario may be WeChat itself, or a specific scenario within WeChat such as opening Moments (the friend circle).
  • predicting the at least one operation of the user specifically includes: predicting at least one operation of the user according to an operation habit of the user.
  • the operation habit information of the user may be acquired first, and then at least one operation to be performed by the user is predicted according to the operation habit information of the user.
  • the operation habit information of the user may be used to indicate the habit of the user to open the application scenario.
  • the operation habit information may include the time when the user opens the application scenario, the action taken to open the application scenario, and the like.
  • For example, suppose the user opens a particular application scenario in a fixed operation mode at a fixed time every day: the user opens the calling software punctually at 5:30 pm each day by tapping the screen and then tapping the calling software (specifically, its icon). Then, whenever it is around 5:30 pm, it can be predicted that the user's next operations are to tap the screen and tap the calling software, realizing prediction of the user's next likely operation.
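The fixed-time habit described above can be sketched as a simple nearest-time lookup. The habit records, the 10-minute window, and all names below are illustrative assumptions, not details from the patent:

```python
import datetime

# Hypothetical habit records: (hour, minute, operations the user performs then).
HABITS = [
    (17, 30, ["tap the screen", "tap the calling app icon"]),
]


def predict_from_habits(now: datetime.time, window_min: int = 10) -> list:
    """Return the operations the user habitually performs near the given time."""
    cur = now.hour * 60 + now.minute
    for hour, minute, ops in HABITS:
        if abs(cur - (hour * 60 + minute)) <= window_min:
            return ops
    return []
```

Shortly before the habitual time, the predicted operations can trigger prefetching of their hotspot file blocks so the cache is warm when the user acts.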
  • Optionally, predicting the at least one operation of the user includes: acquiring a trigger message; and determining, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence indicates the correspondence between different trigger messages and the user's operations.
  • the first correspondence is used to indicate an operation of a user corresponding to different trigger messages.
  • The foregoing first correspondence may indicate a mapping between trigger messages and user operations, and may specifically take the form of a table listing different trigger messages and the user operation corresponding to each trigger message.
  • Table 1 shows the correspondence between different trigger messages and the operation of the user.
  • The information represented by Table 1 can be referred to as the first correspondence; the user operations corresponding to different trigger messages can be obtained through Table 1.
  | Trigger message | User operation |
  | --- | --- |
  | WeChat message | Swipe the screen until the notification bar appears; slide the notification bar until the WeChat message appears; click the WeChat message |
  | Weibo message | Light up the screen; unlock the screen; click the Weibo message |
  | ... | ... |
  • the user operation corresponding to the different trigger message can be obtained according to the correspondence shown in Table 1.
  • the trigger message may be a message pushed by an application of the electronic device.
  • the trigger message may be a message pushed by a news client, a message pushed by a shopping application, or the like.
  • the trigger message may also be a message received by an application of the electronic device.
  • the trigger message may be a received WeChat message or a Weibo message or the like.
  • the hotspot file block corresponding to each operation may be a file block that needs to be retrieved from the disk into the memory when such an operation occurs.
  • the hotspot block corresponding to the operation may be a main file block that needs to be retrieved from the disk into the cache when the operation is performed.
  • A file block that is retrieved into memory every time an operation is performed is a hotspot file block corresponding to that operation.
  • For example, suppose the first operation is performed 10 times in total, and each time it is performed certain file blocks are retrieved from the disk into the cache. Then the intersection of the file blocks retrieved from disk into the cache across those 10 executions is the set of hotspot file blocks corresponding to the first operation.
  • Specifically, historical data on the file blocks retrieved from disk into the cache when each operation is performed may be acquired first; the hotspot file blocks corresponding to each operation (that is, the correspondence between each operation and its hotspot file blocks) are then determined from the historical data, after which the hotspot file blocks corresponding to the at least one operation can be determined.
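The intersection rule described above can be sketched in a few lines. The function name is an invented illustration; block identifiers here stand in for the [file, offset, page] addresses:

```python
def learn_hotspot_blocks(runs: list) -> set:
    """Hotspot blocks for an operation = the intersection of the file blocks
    fetched from disk on every recorded execution of that operation."""
    if not runs:
        return set()
    hot = set(runs[0])
    for r in runs[1:]:
        hot &= r  # keep only blocks fetched in every run
    return hot
```

Blocks fetched only occasionally (for instance due to one-off content) fall out of the intersection, leaving the stable working set worth prefetching.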
  • Optionally, the method shown in FIG. 1 further includes: acquiring historical data of the electronic device, where the historical data includes the hotspot file blocks that the electronic device read into the cache under the user's different operations; determining, from the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generating second correspondence information for that second correspondence. Determining the hotspot file block corresponding to the at least one operation then includes determining it according to the second correspondence information.
  • The second correspondence may indicate a mapping between the user's operations and hotspot file blocks.
  • Specifically, the second correspondence may take the form of a table showing different operations and the hotspot file blocks corresponding to each operation.
  • From the historical data, the correspondence between the user's operations and hotspot file blocks can be determined accurately, and the hotspot file block corresponding to the at least one operation can then be determined conveniently from that correspondence.
  • Table 2 shows the correspondence between different operations and hotspot file blocks. It should be understood that hotspot file block 1, hotspot file block 2, and so on in Table 2 may specifically be the address information of the corresponding hotspot file blocks.
  • the information represented in Table 2 can be referred to as a second correspondence relationship, and the hotspot file blocks corresponding to different operations can be obtained through Table 2.
  | Operation | Hotspot file blocks |
  | --- | --- |
  | Swipe the screen until the notification bar appears | Hotspot file block 1, hotspot file block 2, ... |
  | Slide the notification bar until the WeChat message appears | Hotspot file block 3, hotspot file block 4, ... |
  | Click the WeChat message | Hotspot file block 5, hotspot file block 6, hotspot file block 7, ... |
  | ... | ... |
  • Through Table 2, the hotspot file blocks corresponding to different operations can be obtained directly.
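A table like Table 2 maps naturally onto a dictionary lookup that collects the blocks for a whole predicted operation sequence. The table contents and names below are hypothetical placeholders mirroring Table 2, not real addresses:

```python
# Hypothetical second correspondence (cf. Table 2): operation -> hotspot blocks.
SECOND_CORRESPONDENCE = {
    "swipe to notification bar": ["block1", "block2"],
    "slide until WeChat message appears": ["block3", "block4"],
    "click the WeChat message": ["block5", "block6", "block7"],
}


def blocks_for_operations(operations: list) -> list:
    """Collect, in order and without duplicates, the hotspot blocks needed by
    a predicted sequence of operations."""
    seen, result = set(), []
    for op in operations:
        for blk in SECOND_CORRESPONDENCE.get(op, []):
            if blk not in seen:
                seen.add(blk)
                result.append(blk)
    return result
```

Deduplicating while preserving order matters because consecutive operations may share blocks, and each block should be prefetched only once.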
  • the startup speed of the application scenario corresponding to the user operation can be accelerated.
  • When hotspot file blocks are read from disk into memory, they can be read in a parallel manner to speed up the process. Specifically, the reading of a file block can be divided into several sub-phases, and reads of different hotspot file blocks can proceed in different sub-phases at the same time, which speeds up the overall reading of the hotspot file blocks.
  • the above-mentioned mechanism for dividing the reading of a file block into a plurality of sub-phases so that different hotspot file blocks can be pre-fetched in different sub-stages may be referred to as a hierarchical multi-fetch prefetch mechanism.
  • Specifically, reading the hotspot file blocks corresponding to the at least one operation into the cache includes performing read operations on the plurality of hotspot file blocks simultaneously, where each read operation comprises a plurality of sub-phases and each of the plurality of hotspot file blocks is in a different sub-phase of its read operation.
  • Because the read operation includes multiple sub-phases, read operations on different hotspot file blocks can proceed in different sub-phases at the same time (equivalent to reading different hotspot file blocks in parallel), which increases the read speed of the hotspot file blocks and further shortens the startup time of the application scenario.
  • the foregoing multiple sub-phases specifically include: opening, positioning, reading initiation, and reading response.
  • For example, if the at least one operation corresponds to 100 hotspot file blocks, four of them can be in flight at once, one in each of the opening, positioning, read-initiation, and read-response stages, which greatly speeds up the reading of the hotspot file blocks.
  • In the embodiments of the present application, the read operation on a hotspot file block is divided into multiple stages (open, position, read initiation, read response, and so on), so that multiple hotspot file blocks can be processed concurrently. Compared with reading the next hotspot file block only after the previous one has been read completely, this speeds up the read operation, lets the hotspot file blocks move from disk to cache more quickly, and thus accelerates the startup of the subsequent application scenario.
  • the prefetching policy may be determined according to the system load of the electronic device and the size of the hotspot file block corresponding to the at least one operation, and then, The hot file block read operation is performed according to the prefetch policy.
  • Specifically, the method shown in FIG. 1 further includes: determining a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation. Reading the hotspot file blocks corresponding to the at least one operation into the cache then specifically includes reading them into the cache according to the prefetch policy.
  • the foregoing prefetching strategy may specifically include the number, size, and the like of hotspot file blocks that need to be read in each sub-phase.
  • the system load of the electronic device includes at least one of a CPU occupancy of the electronic device, an available memory size, and a process wait caused by the I/O.
  • In this way, a reasonable prefetch policy can be determined according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation; because the policy takes the electronic device's system resources into account, it improves the reading efficiency of the hotspot file blocks.
  • Embodiment 1: reading hotspot file blocks in a WeChat startup scenario.
  • The file cache reading method of the embodiments of the present application can be used to retrieve the files involved in starting WeChat.
  • starting WeChat mainly includes the following main processes:
  • First, a learning module is used to learn the hotspot file blocks corresponding to each operation.
  • The user behavior in the WeChat startup scenario is divided into sub-phases: receiving the WeChat message, pulling down the notification bar, sliding the notification bar until the WeChat message appears, and clicking the WeChat message.
  • In each sub-phase, the file read operations performed are recorded; after the scenario has occurred several times, the scene's pre-read file information can be identified and stored in a database.
  • the hotspot file blocks of each stage of the WeChat startup are read according to the learned information.
  • The hotspot file blocks corresponding to each sub-phase (different sub-phases corresponding to different operations) of the WeChat startup can be identified; then, in a WeChat cold-start scenario, the scene's pre-read file information can be fetched from the previously built pre-read file information database.
  • In the early sub-phases of the WeChat cold start, the pre-read quota is raised to increase read-ahead strength and protect the effect; with the quota increased, the large burst of I/O reads that would otherwise occur when WeChat is clicked is reduced.
  • The file-operation sequence of each stage, that is, the prefetch policy, is compiled according to indicators such as hotspot size, loading probability, and operation time consumption.
  • Finally, the hotspot file blocks are prefetched according to the hierarchical multi-fetch prefetch mechanism.
  • the process of determining the hotspot file blocks of each sub-phase and the pre-fetching strategy of the hotspot file blocks of each sub-phase in the first embodiment is shown in FIG. 2 .
  • The first row in FIG. 2 shows the operations or sub-phases: receiving the WeChat message, swiping the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and clicking the WeChat message.
  • the pre-read file data set corresponding to each operation or sub-phase is as shown in the second line of FIG. 2, and the third line of FIG. 2 can be obtained by learning the pre-read file data set shown in the second line of FIG. 2 .
  • The pre-read file data set may include, for each operation or sub-phase, the pre-read files recorded over multiple executions. Through learning, the common pre-read files can be extracted from the data set, yielding the pre-read files corresponding to each operation or sub-phase.
  • the pre-reading strategy can be formulated for each operation or the pre-read file corresponding to each sub-phase.
  • the prefetch policy may include the size of the prefetched file block, the address of the file block, and the like.
  • The prefetch strategy of each sub-phase formulated in Embodiment 1 is shown in the fourth line of FIG. 2: the total amount of files to be prefetched is 20 MB each for the WeChat-message, swipe-screen, and click-WeChat-message sub-phases, and 30 MB for the slide-notification-bar sub-phase.
  • the fourth line of Figure 2 shows the files that need to be read in the pre-read strategy of each sub-phase and need to be read. The address of the file.
  • Specifically, when the WeChat message arrives, the size of the file blocks to be read is 20 MB; the files to be opened are f1 and f2, etc., and the files to be mapped are f1 and f3, etc. For sliding the screen until the notification bar appears, the size of the file blocks to be read is 20 MB; the files to be opened are f3 and f4, etc., and read initiation is performed on f2(10, 100) and f3(0, 200), etc. For sliding the notification bar until the WeChat message appears, the size of the file blocks to be read is 30 MB; the files to be opened are f5 and f7, and read initiation is performed on f3(30, 700) and f4(0, 600), etc. For clicking the WeChat message, the size of the file blocks to be read is 20 MB, and unmapping/closing operations are performed on f2(10, 100) and f3(0, 200), etc. Here, performing read initiation on f2(10, 100) means reading 100 file pages starting from address 10 of file f2.
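  • The read-initiation notation above can be made concrete with a short sketch. This is an illustrative Python fragment, not part of the patent; the 4 KiB file-page size is an assumption.

```python
import os

PAGE_SIZE = 4096  # assumed file-page size


def read_initiate(path: str, start_page: int, num_pages: int) -> bytes:
    """Read `num_pages` file pages starting at page `start_page` of `path`.

    Mirrors the notation f2(10, 100): 100 pages starting from
    address 10 of file f2.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.pread(fd, num_pages * PAGE_SIZE, start_page * PAGE_SIZE)
    finally:
        os.close(fd)
```

A real implementation would more likely issue an asynchronous read-ahead hint (so the pages land in the page cache without blocking) rather than a synchronous read into user space.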
  • In Embodiment 1, the file cache reading method of the embodiment of the present application can accurately identify the hotspot file blocks required for starting WeChat. The next time WeChat is started, its hotspot file pages can be prefetched and prepared in advance, improving the cache hit rate, thereby reducing the startup time and improving the user experience.
  • FIG. 3 shows the effect of applying the file cache reading method of the embodiment of the present application to the WeChat startup scenario.
  • Prefetch startup reads the hotspot file blocks of each operation of the WeChat startup into the cache in advance (specifically, the required hotspot file blocks are prepared ahead of time and accessed sequentially, corresponding to the prefetch process in FIG. 3), so that, compared with the normal startup process of the application, the application can be started in less time.
  • The method for reading the file cache of the embodiment of the present application has been described in detail with reference to FIG. 1 to FIG. 3; the file cache reading apparatus of the embodiment of the present application is described below with reference to FIG. 4 and FIG. 5. It should be understood that the file cache reading apparatus described in FIG. 4 and FIG. 5 can execute the steps of the file cache reading method shown in FIG. 1 to FIG. 3; for brevity, repeated descriptions are appropriately omitted below.
  • FIG. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application.
  • the device 200 shown in FIG. 4 specifically includes:
  • the prediction module 201 is configured to predict at least one operation that the user is to perform, where the at least one operation is used to start an application scenario of the electronic device;
  • a determining module 202, configured to determine a hotspot file block corresponding to the at least one operation; and
  • the reading module 203 is configured to read the hotspot file block corresponding to the at least one operation into the cache before the at least one operation occurs.
  • By predicting the operation the user is about to perform and reading the corresponding hotspot file blocks into the cache in advance, the file cache reading apparatus can speed up the startup of the application scenario corresponding to the user's operation.
  • Optionally, in an embodiment, the prediction module 201 is specifically configured to: acquire a trigger message; and determine, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence is used to indicate the correspondence between different trigger messages and operations of the user.
  • Optionally, in an embodiment, the hotspot file blocks corresponding to the at least one operation are multiple hotspot file blocks, and the reading module 203 is specifically configured to perform read operations on the multiple hotspot file blocks simultaneously, where a read operation includes multiple sub-phases, and each of the multiple hotspot file blocks is in a different sub-phase of the read operation.
  • the plurality of sub-phases include opening, positioning, read initiation, and read response.
  • Optionally, in an embodiment, the determining module 202 is further configured to determine a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation, where the prefetch policy indicates the size or number of hotspot file blocks to be prefetched for the at least one operation; the reading module 203 is specifically configured to read the hotspot file blocks corresponding to the at least one operation into the cache according to the prefetch policy.
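  • How a prefetch quota might be derived from system load can be sketched as follows. This is a minimal illustration; the thresholds and scaling rules are assumptions, not values specified by the patent.

```python
def prefetch_quota(base_quota_mb: int, cpu_pct: float,
                   free_mem_mb: int, io_wait_pct: float) -> int:
    """Scale a sub-phase's prefetch quota by system load.

    Thresholds are illustrative: heavy CPU or I/O pressure shrinks
    the quota; scarce free memory caps it.
    """
    quota = base_quota_mb
    if cpu_pct > 80 or io_wait_pct > 30:
        quota //= 2                  # heavy load: prefetch less
    if free_mem_mb < 4 * quota:
        quota = free_mem_mb // 4     # use at most 1/4 of free memory
    return max(quota, 0)
```

A policy like this would run once per sub-phase, so a lightly loaded device prefetches the full learned quota while a busy one backs off.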
  • Optionally, in an embodiment, the system load of the electronic device includes at least one of the CPU occupancy of the electronic device, the available memory size, and process waits caused by I/O.
  • Optionally, in an embodiment, the apparatus 200 further includes a learning module 204, which is specifically configured to: acquire historical data of the electronic device, where the historical data includes the hotspot file blocks read into the cache by the electronic device during different user operations; determine, according to the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generate second correspondence information corresponding to the second correspondence. The determining module 202 is then specifically configured to determine the hotspot file blocks corresponding to the at least one operation according to the second correspondence information.
  • FIG. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application.
  • the device 300 shown in FIG. 5 specifically includes:
  • a memory 301, configured to store a program; and
  • a processor 302, configured to execute the program stored in the memory 301. When the program is executed, the processor 302 is specifically configured to: predict at least one operation to be performed by the user, where the at least one operation is used to start an application scenario of the electronic device; determine the hotspot file blocks corresponding to the at least one operation; and, before the at least one operation occurs, read the hotspot file blocks corresponding to the at least one operation into the cache.
  • By predicting the operation the user is about to perform and reading the corresponding hotspot file blocks into the cache in advance, the file cache reading apparatus can speed up the startup of the application scenario corresponding to the user's operation.
  • the processor 302 in the device 300 corresponds to the prediction module 201, the determining module 202, and the reading module 203 in the device 200.
  • the above device 200 and device 300 may specifically be electronic devices.
  • the processor 302 may specifically be a central processing unit (CPU) of the electronic device.
  • The file cache reading system of the embodiment of the present application is described in detail below with reference to FIG. 6. It should be understood that the file cache reading system shown in FIG. 6 can execute the file cache reading method of the embodiment of the present application. In addition, the file cache reading system of the embodiment of the present application may specifically run as intelligent operating system platform software inside a smart device, a server, or the like.
  • FIG. 6 is a schematic block diagram of a file cache reading system of an embodiment of the present application.
  • In terms of hierarchy, the file cache reading system shown in FIG. 6 resides in the operating system of the smart device or server. When an application running on the operating system starts, the operating system needs to fetch the corresponding files from the disk into the cache to start the application.
  • the file cache reading system shown in FIG. 6 is divided into an event collection subsystem, a scene recognition subsystem, a prefetch policy formulation subsystem, and a prefetch execution subsystem according to functions. The following describes each subsystem in the file cache reading system in detail.
  • The events collected by the event collection subsystem may be various user operations, changes in device state (such as the screen turning on or off), switching between foreground and background applications, and the like.
  • The event collection subsystem internally includes a scene event instrumentation module. Specifically, a user scene instrumentation module can be added to the operating system's original graphics, windowing and events subsystem (GWES).
  • The user scene instrumentation module handles the user-scene-related instrumentation issued by the original GWES event management module (equivalent to marking certain events collected by the event subsystem) and transparently passes the scene events obtained by instrumentation to the scene recognition subsystem.
  • The scene recognition subsystem completes the identification of the specific scene; after the corresponding scene is identified, the operation the user will perform next can be predicted.
  • The scene recognition subsystem can be divided by function into the following functional modules, whose specific functions are as follows:
  • The scene pre-identification module is configured to pre-identify the sub-phases related to a certain scene according to the user events from the scene event instrumentation module, combining factors such as time, foreground application, and user habits.
  • A user scene may be refined into multiple phases according to the time sequence of the user's operations, and needs to include the phase before the scene is entered, so that the scene can be predicted.
  • The scene result predicted by the scene pre-identification module is first passed to the scene hotspot file block collection module for generating the data set. Then, after the scene hotspot file blocks have been successfully identified, the scene pre-identification result can be sent to the policy formulation subsystem for formulating the policy.
  • The pre-identification of the sub-phases related to a scene by the scene pre-identification module is equivalent to the prediction of the user's operations described above.
  • The scene hotspot file block collection module is configured to receive the scene sub-phase sent by the scene pre-identification module, and to collect and record the hotspot file block data of this phase.
  • In addition, a hotspot file block can be refined to file-page granularity, including information such as [file, offset, pages]. After enough scene file hotspot data has been collected over multiple runs, the data set is passed to the scene hotspot file block identification module for identifying the scene hotspots.
  • The scene hotspot file block identification module is configured to identify the scene hotspot file blocks from the scene file block hotspot data set provided by the scene hotspot file block collection module, according to a recognition algorithm (sorting by file block size, loading ratio, etc.), and to send the identification result to the scene hotspot file block storage module.
  • The scene hotspot file block information is the hotspot file block information queue of each sub-phase of the scene.
  • The scene hotspot file block storage module is responsible for accepting the hotspot block information from the scene hotspot file block identification module and storing it persistently in a database.
  • Databases include, but are not limited to, local disks, in-memory databases, and cloud databases.
  • The prefetch policy formulation subsystem is responsible for accepting the key scene events from scene pre-identification after the scene hotspot file blocks have been stored, and, combining the system resource load situation, formulating the prefetch strategy corresponding to the current scenario.
  • Scene events include, but are not limited to, [application + interface + behavior], and the prefetch strategy includes [scene sub-phase] + [hotspot file block] + [prefetch sub-phase].
  • the prefetch policy formulation subsystem includes a system resource load identification module and a prefetch policy formulation module.
  • The system resource load identification module is configured to collect the system resource load of the operating system (CPU/MEM/IO, etc., where MEM denotes memory and IO denotes input/output), evaluate the system load pressure level, and notify the prefetch strategy formulation module of that level.
  • The prefetch strategy formulation module is configured to read the original scene hotspot file block information from the database, adjust the hotspot file block loading quota of each sub-phase according to the system resource load pressure level, and formulate the current file cache prefetch strategy.
  • The prefetch execution subsystem performs multi-level, concurrent prefetching of hotspot blocks after receiving the determined policy.
  • So-called multi-level prefetching means that the prefetch of one hotspot file block comprises multiple levels: open, locate, read initiation, read response, and so on.
  • So-called concurrent prefetching means that, at any given moment, prefetch operations at different levels can be performed on multiple hotspot file blocks simultaneously.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • The apparatus embodiments described above are merely illustrative. The division into units is only a division by logical function; in actual implementation there may be other divisions. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

The present application provides a file cache reading method and apparatus. The method includes: predicting at least one operation to be performed by a user, where the at least one operation is used to start any application scenario of an electronic device; determining the hotspot file blocks corresponding to the at least one operation; and, before the at least one operation occurs, reading the hotspot file blocks corresponding to the at least one operation into a cache. The present application can speed up the startup of application scenarios of an electronic device.

Description

File cache reading method and apparatus
This application claims priority to Chinese Patent Application No. 201810399282.5, filed with the China National Intellectual Property Administration on April 28, 2018 and entitled "File cache reading method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of computer application technologies, and more particularly, to a file cache reading method and apparatus.
Background
With the rapid development of electronic devices, electronic devices (for example, smartphones and smart watches) carry more and more applications, so that the operating systems of electronic devices need to support more and more service scenarios. As a result, the problem that an electronic device becomes slower and more sluggish the longer it is used is increasingly prominent. The main cause of this problem is that the limited physical memory of an electronic device cannot carry an unlimited number of applications.
To solve this problem, conventional solutions generally manage the file cache with least recently used (LRU) management: the most recently accessed file pages are cached as far as possible, and the least recently used file pages are released first, so that when the electronic device starts an application, the files related to that application can be fetched from the cache to speed up the startup process. However, LRU management is divorced from the user's real application scenarios, and its effect in practice is not ideal. For example, when the file cache of the user's recent accesses contains a lot of useless data, useful file cache entries are evicted from memory, resulting in a low file cache hit rate and memory thrashing, which degrades the user experience.
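The LRU management described here can be sketched with Python's `OrderedDict`. This is a minimal illustration of the eviction policy, not the actual page cache of any operating system:

```python
from collections import OrderedDict


class LRUFileCache:
    """Minimal LRU cache of file pages: recently used pages stay,
    least recently used pages are evicted first."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # (file, page_no) -> page data

    def get(self, key):
        if key not in self.pages:
            return None              # miss: would fall back to disk
        self.pages.move_to_end(key)  # mark as most recently used
        return self.pages[key]

    def put(self, key, data):
        self.pages[key] = data
        self.pages.move_to_end(key)
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the LRU entry
```

The weakness the paragraph above points at is visible in this sketch: a burst of one-off pages inserted via `put` evicts genuinely hot pages regardless of whether they will be needed again, collapsing the hit rate.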
Summary
The present application provides a file cache reading method and apparatus to speed up the startup of application scenarios of an electronic device.
According to a first aspect, a file cache reading method is provided, including: predicting at least one operation to be performed by a user; determining the hotspot file blocks corresponding to the at least one operation; and, before the at least one operation occurs, reading the hotspot file blocks corresponding to the at least one operation into a cache.
The at least one operation is an operation used to start an application scenario of the electronic device.
Specifically, different user operations may correspond to different application scenarios, and the user opens the corresponding application scenario through these operations. Therefore, predicting the at least one operation to be performed by the user is in effect equivalent to predicting the application scenario that may occur: once the corresponding application scenario has been predicted, the operations the user may perform next can be predicted, or the operations the user may perform can be predicted directly.
For example, the at least one operation includes pulling down the notification bar, sliding the notification bar until a WeChat message appears, and clicking WeChat; through these operations the user can start WeChat.
It should be understood that the application scenario may be an application, or a specific scene within an application. For example, the application scenario may be WeChat, or clicking Moments within WeChat.
Optionally, determining the hotspot file blocks corresponding to the at least one operation includes determining the address information of the hotspot file blocks corresponding to the at least one operation.
Optionally, the address information takes the form [file, offset, pages].
When the address information of a hotspot file block is [file, offset, pages], the file indicates the file in which the hotspot file block is located, the offset is the offset of the hotspot file block's address relative to the base or start address of the file, and the pages indicate the file pages included in the hotspot file block.
Optionally, reading the hotspot file blocks corresponding to the at least one operation into the cache includes reading them into the cache according to their address information.
Specifically, the hotspot file blocks corresponding to the at least one operation may be read from the disk into the cache according to their address information.
In the present application, by predicting the operations the user is about to perform and reading the hotspot file blocks corresponding to those operations into the cache in advance, the startup of the application scenario corresponding to the user's operations can be sped up.
Optionally, the at least one operation is all or part of the operations for starting the application scenario.
In the present application, the hotspot file blocks corresponding to part of the operations related to starting a certain application scenario may be pre-read into the cache, or the hotspot file blocks corresponding to all of the operations related to a certain application scenario may be pre-read into the cache.
In some implementations, predicting the at least one operation to be performed by the user includes: acquiring a trigger message; and determining, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation.
The first correspondence is used to indicate the user operations corresponding to different trigger messages. Specifically, the first correspondence may indicate a mapping between trigger messages and user operations, and may take the form of a table showing different trigger messages and the user operations corresponding to each.
Optionally, the trigger message is used to prompt the user to start an application scenario.
Specifically, the trigger message may be a message received by an application of the electronic device. For example, the trigger message is a news push message prompting the user to open a news client; or the trigger message is a WeChat message prompting the user to open WeChat.
According to the correspondence between trigger messages and different operations, the operations the user is about to perform can be determined, so that the hotspot file blocks corresponding to those operations can subsequently be pre-read into the cache, speeding up the startup of the application scenario.
In some implementations, predicting the at least one operation to be performed by the user includes predicting it according to the user's operating habits.
Further, predicting the at least one operation according to the user's operating habits includes: acquiring the user's operating habit information; and predicting the at least one operation to be performed by the user according to that information.
The user's operating habit information may indicate the user's habits in opening application scenarios; for example, the user opens a certain application scenario through a certain operation at a fixed time every day. Then, when that fixed time is approaching, the operation the user will use to open that application scenario can be predicted from the user's operating habits.
In some implementations, the hotspot file blocks corresponding to the at least one operation include multiple hotspot file blocks, and reading them into the cache includes performing read operations on the multiple hotspot file blocks simultaneously, where a read operation includes multiple sub-phases and each of the multiple hotspot file blocks is in a different sub-phase of the read operation.
Since a read operation includes multiple sub-phases, read operations can be performed simultaneously on different hotspot file blocks in different sub-phases (equivalent to reading different hotspot file blocks in parallel), which increases the reading speed of the hotspot file blocks and further reduces the startup time of the application scenario.
Optionally, the multiple sub-phases include open, locate, read initiation, and read response.
Each of the multiple sub-phases may correspond to one hotspot file block. For example, a first, second, third, and fourth hotspot file block are respectively in the open, locate, read initiation, and read response phases of the read operation.
In some implementations, the method further includes determining a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation; and reading the hotspot file blocks corresponding to the at least one operation into the cache includes reading them into the cache according to the prefetch policy.
The prefetch policy may specifically include the number and size of hotspot file blocks to be read in each sub-phase, and so on.
A reasonable prefetch policy can be specified based on the system load of the electronic device and the size of the hotspot file blocks, improving the reading efficiency of the hotspot file blocks.
The probability of each operation occurring may also be considered when formulating the prefetch policy. For example, if an operation has a high probability of occurring subsequently, more hotspot file blocks can be prefetched; if an operation has a low probability of occurring subsequently, fewer hotspot file blocks can be prefetched.
In some implementations, the system load of the electronic device includes at least one of the CPU occupancy of the electronic device, the available memory size, and process waits caused by I/O.
In some implementations, the method further includes: acquiring historical data of the electronic device, where the historical data includes the hotspot file blocks read into the cache by the electronic device during different user operations; determining, according to the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generating second correspondence information corresponding to the second correspondence. Determining the hotspot file blocks corresponding to the at least one operation then includes determining them according to the second correspondence information.
The second correspondence may indicate a correspondence or mapping between the user's operations and the hotspot file blocks, and may take the form of a table showing different operations and the hotspot file blocks corresponding to each.
The correspondence between the user's operations and the hotspot file blocks can be determined fairly accurately from the historical data, and the hotspot file blocks corresponding to the at least one operation can then be determined conveniently from that correspondence.
According to a second aspect, a file cache reading apparatus is provided, including modules for executing the file cache reading method in the first aspect or any possible implementation of the first aspect.
According to a third aspect, a file cache reading apparatus is provided, including: a memory configured to store a program; and a processor and a transceiver, where the processor is configured to execute the program stored in the memory and, when the program is executed, to perform the method in the first aspect or its various implementations.
The file cache reading apparatus in the third aspect may specifically be an electronic device, for example, a smartphone, a PAD, or a smart watch.
According to a fourth aspect, a computer-readable medium is provided, storing program code for execution by a device, the program code including instructions for executing the method in the first aspect or its various implementations.
According to a fifth aspect, computer program code is provided, including instructions for executing the method in the first aspect or its various implementations.
The computer program code in the fifth aspect may be program code inside an electronic device; when executed, it can speed up the startup of application scenarios of the electronic device.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a file cache reading method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of determining hotspot file blocks in a WeChat startup scenario;
FIG. 3 is a schematic comparison of starting WeChat normally and starting WeChat with prefetching;
FIG. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application;
FIG. 6 is a schematic block diagram of a file cache reading system according to an embodiment of the present application.
Detailed Description
The technical solutions of the present application are described below with reference to the accompanying drawings.
The file cache reading method of the embodiments of the present application may be executed by an electronic device, where the electronic device may specifically be a smart terminal device with rich display content, such as a smartphone (especially an Android phone), a personal digital assistant (PDA), a tablet computer, or an in-vehicle computer.
Because LRU management is divorced from the user's real scenarios, the file cache hit rate is low and the user experience is poor. Therefore, to read the file cache better, the files opened during a warm start of an application can first be learned and stored, and then fetched out the next time the application is started again, to speed up the startup process of the application.
However, in the above cache-reading approach, file-granularity cache prefetching is performed only when the cold start of the corresponding application has already begun, so the file cache reading happens relatively late. In some complex situations, the startup of the application may compete for reads with foreground processes, so the startup time is still long. In addition, the cache prefetch granularity is too coarse, which may cause a large number of useless file pages to be loaded and degrade performance.
Therefore, when reading the file cache, the operations the user may perform next can first be predicted, and the hotspot file blocks corresponding to those operations pre-read into the cache, so that when the user performs the corresponding operations, the application scenario starts faster and the user experience improves.
The file cache reading method of the embodiments of the present application is described in detail below with reference to FIG. 1 to FIG. 3.
FIG. 1 is a schematic flowchart of file cache reading according to an embodiment of the present application.
The method shown in FIG. 1 may be executed by an electronic device and specifically includes steps 101 to 103, each of which is described in detail below.
101. Predict at least one operation to be performed by the user.
The at least one operation may be an operation the user may perform on the electronic device, for example, sliding the screen of the electronic device, tapping the screen of the electronic device, or tapping certain icons on the operating interface of the electronic device.
It should be understood that, in the embodiments of the present application, each operation may include a single action or multiple actions; the present application does not limit the number of actions included in a single operation. For example, one operation includes only the action of sliding the screen, while another operation includes the two actions of tapping the notification bar and sliding the notification bar until a WeChat message appears.
For example, taking a smartphone as an example, after receiving a WeChat message, the user performs the following operations: sliding the screen until the notification bar appears, sliding until the WeChat message appears in the notification bar, and clicking the WeChat message. Therefore, the next time a WeChat message is received, it can be predicted that the operations of sliding the screen until the notification bar appears, sliding until the WeChat message appears in the notification bar, and clicking the WeChat message will be performed next.
Similarly, after receiving a Weibo message, the user performs the following operations: sliding the screen to the notification bar, sliding the notification bar until the Weibo message appears, and clicking the Weibo message. Then, once the electronic device receives a Weibo message, it can be predicted that the user is very likely to perform the operations of sliding the screen to the notification bar, sliding the notification bar until the Weibo message appears, and clicking the Weibo message.
The at least one operation may be all or part of the operations the user uses to open a certain application scenario. Specifically, the at least one operation may be all or part of the operations for opening WeChat. For example, the at least one operation may be sliding the screen until the notification bar appears, sliding until the WeChat message appears in the notification bar, and clicking the WeChat message; or the at least one operation may be sliding the screen until the notification bar appears and sliding until the WeChat message appears in the notification bar.
The application scenario may be an application or a specific scene within an application. For example, it may be WeChat, or clicking Moments within WeChat.
It should be understood that there are multiple ways of predicting the at least one operation of the user.
Optionally, predicting the at least one operation of the user specifically includes predicting it according to the user's operating habits.
Specifically, the user's operating habit information may be acquired first, and the at least one operation to be performed by the user then predicted from it.
The user's operating habit information may indicate the user's habits in opening application scenarios, and may specifically include the time at which the user opens an application scenario, the actions used to open it, and so on.
Specifically, the user opens a certain application scenario through a certain operation at a fixed time every day; when that fixed time is approaching, the operation the user will use to open that application scenario can be predicted from the user's habits. For example, the user opens a ride-hailing app punctually at 5:30 p.m. every day by tapping the screen and tapping the ride-hailing app (specifically, tapping its icon). Then, whenever 5:30 p.m. is approaching, it can be predicted that the user's next operations are tapping the screen and tapping the ride-hailing app, achieving the prediction of the operations the user may perform next.
Optionally, predicting the at least one operation of the user includes: acquiring a trigger message; and determining, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence is used to indicate the correspondence between different trigger messages and the user's operations.
The first correspondence is used to indicate the user operations corresponding to different trigger messages. Specifically, the first correspondence may indicate a mapping between trigger messages and user operations, and may take the form of a table showing different trigger messages and the user operations corresponding to each.
For example, Table 1 shows the correspondence between different trigger messages and the user's operations; the information represented by Table 1 can be called the first correspondence, and the user operations corresponding to different trigger messages can be obtained from Table 1.
Table 1
Trigger message | User's operations
WeChat message | Slide the screen until the notification bar appears; slide until the WeChat message appears in the notification bar; click the WeChat message
Weibo message | Light up the screen; unlock the screen; click the Weibo message
As shown in Table 1, after a trigger message is acquired, the user operations corresponding to different trigger messages can be obtained directly from the correspondence shown in Table 1.
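A first correspondence of this kind is naturally represented as a lookup table. The sketch below is illustrative only; the message-type keys and operation strings are hypothetical placeholders mirroring Table 1:

```python
# Hypothetical first correspondence, mirroring Table 1:
# trigger message type -> predicted sequence of user operations.
FIRST_CORRESPONDENCE = {
    "wechat_message": [
        "slide screen until notification bar appears",
        "slide until WeChat message appears",
        "click WeChat message",
    ],
    "weibo_message": [
        "light up screen",
        "unlock screen",
        "click Weibo message",
    ],
}


def predict_operations(trigger: str) -> list:
    """Return the operations the user is predicted to perform next,
    or an empty list for an unknown trigger message."""
    return FIRST_CORRESPONDENCE.get(trigger, [])
```

Each predicted operation would then be looked up in the second correspondence (operation to hotspot file blocks) to decide what to prefetch.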
The trigger message may be a message pushed by an application of the electronic device, for example, a message pushed by a news client or by a shopping application. Alternatively, the trigger message may be a message received by an application of the electronic device, for example, a received WeChat message or Weibo message.
102. Determine the hotspot file blocks corresponding to the at least one operation.
The hotspot file blocks corresponding to an operation may be the file blocks that need to be fetched from the disk into memory when the operation occurs.
For one operation, its hotspot blocks may be the main file blocks that need to be fetched from the disk into the cache while the operation is performed. Specifically, for one operation, the file blocks that are fetched into memory every time the operation is performed are the hotspot file blocks corresponding to that operation.
For example, suppose a first operation has been performed 10 times in total, and each time it was performed certain file blocks were fetched from the disk into the cache; then the intersection of the file blocks fetched from the disk into the cache across these 10 runs is the hotspot file blocks corresponding to the first operation.
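The intersection-based identification just described can be sketched in a few lines. This is a minimal illustration; real identification would also weigh block size and loading ratio, as the system description notes:

```python
def hotspot_blocks(runs):
    """Hotspot file blocks of an operation: the blocks fetched into
    the cache on *every* recorded run, i.e. the intersection of the
    per-run block sets."""
    if not runs:
        return set()
    hot = set(runs[0])
    for run in runs[1:]:
        hot &= run          # keep only blocks seen in this run too
    return hot
```

Each block would in practice be a [file, offset, pages] tuple; here any hashable value works.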
When determining the hotspot file blocks corresponding to the at least one operation, the historical data of the file blocks fetched from the disk into the cache during each operation can first be acquired; the hotspot file blocks corresponding to each operation (the correspondence between each operation and its hotspot file blocks) can then be determined from the historical data; and the hotspot file blocks corresponding to the at least one operation can then be determined.
Specifically, the method shown in FIG. 1 further includes: acquiring historical data of the electronic device, where the historical data includes the hotspot file blocks read into the cache by the electronic device during different user operations; determining, according to the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generating second correspondence information corresponding to the second correspondence. Determining the hotspot file blocks corresponding to the at least one operation then includes determining them according to the second correspondence information.
The second correspondence may indicate a correspondence or mapping between the user's operations and the hotspot file blocks, and may take the form of a table showing different operations and the hotspot file blocks corresponding to each.
The correspondence between the user's operations and the hotspot file blocks can be determined fairly accurately from the historical data, and the hotspot file blocks corresponding to the at least one operation can then be determined conveniently from that correspondence.
For example, Table 2 shows the correspondence between different operations and hotspot file blocks. It should be understood that hotspot file block 1, hotspot file block 2, and so on in Table 2 may specifically represent the address information of the hotspot file blocks. The information represented by Table 2 can be called the second correspondence, and the hotspot file blocks corresponding to different operations can be obtained from Table 2.
Table 2
User's operation | Hotspot file blocks
Slide the screen until the notification bar appears | Hotspot file block 1, hotspot file block 2, ...
Slide until the WeChat message appears in the notification bar | Hotspot file block 3, hotspot file block 4, ...
Click the WeChat message | Hotspot file block 5, hotspot file block 6, hotspot file block 7, ...
As shown in Table 2, the hotspot file blocks corresponding to different operations can be obtained directly from the correspondence shown in Table 2.
103. Before the at least one operation occurs, read the hotspot file blocks corresponding to the at least one operation into the cache.
In the present application, by predicting the operations the user is about to perform and reading the hotspot file blocks corresponding to those operations into the cache in advance, the startup of the application scenario corresponding to the user's operations can be sped up.
When reading the hotspot file blocks from the disk into memory, the hotspot file blocks can be read in parallel to increase the reading speed. Specifically, the reading of a file block can be divided into several sub-processes, and the reading of different hotspot file blocks can proceed in each sub-process, so that different hotspot file blocks are read in different sub-phases at the same time, which speeds up the reading of the hotspot file blocks. This mechanism of dividing the reading of a file block into multiple sub-phases so that different hotspot file blocks can be prefetched in different sub-phases may be called a hierarchical multi-issue prefetch mechanism.
Specifically, when the hotspot file blocks corresponding to the at least one operation include multiple hotspot file blocks, reading them into the cache includes performing read operations on the multiple hotspot file blocks simultaneously, where a read operation includes multiple sub-phases and each of the multiple hotspot file blocks is in a different sub-phase of the read operation.
Since a read operation includes multiple sub-phases, read operations can be performed simultaneously on different hotspot file blocks in different sub-phases (equivalent to reading different hotspot file blocks in parallel), which increases the reading speed of the hotspot file blocks and further reduces the startup time of the application scenario.
Optionally, the multiple sub-phases specifically include: open, locate, read initiation, and read response.
For example, if the at least one operation corresponds to 100 hotspot file blocks in total, read operations on four hotspot file blocks can proceed in the open, locate, read initiation, and read response phases respectively, which greatly speeds up the reading of the hotspot file blocks.
In the present application, the read operation of hotspot file blocks is divided into multiple phases (open, locate, read initiation, read response, etc.). At a given moment during prefetch execution for a user scene, multiple files are in the reading state but in different reading phases; they occupy the file-reading pipeline reasonably and fully. Compared with the conventional approach of starting the next hotspot file block only after the read of one hotspot file block has fully completed, this speeds up the read operations so that the hotspot file blocks reach the cache from the disk faster, thereby speeding up the subsequent startup of the application scenario.
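The pipelined behavior described here can be sketched as a staged schedule: at every tick each in-flight block advances one stage, so different blocks occupy the open/locate/read-initiate/read-response stages simultaneously. This is an illustrative scheduling sketch only; a real implementation would issue asynchronous I/O rather than tick through stages in lockstep:

```python
STAGES = ["open", "locate", "read_initiate", "read_response"]


def pipeline_schedule(blocks):
    """Return, per tick, the (block, stage) pairs executed concurrently.

    Block i enters the pipeline at tick i, so up to len(STAGES)
    blocks are in flight at once, each in a different sub-phase.
    """
    ticks = []
    n = len(STAGES)
    for t in range(len(blocks) + n - 1):
        active = []
        for i, block in enumerate(blocks):
            stage_idx = t - i
            if 0 <= stage_idx < n:
                active.append((block, STAGES[stage_idx]))
        ticks.append(active)
    return ticks
```

For four blocks, at tick 3 the first block is already in its read-response phase while the fourth is just being opened, which is exactly the "different blocks in different sub-phases at the same moment" property.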
In addition, to better read the hotspot file blocks of the at least one operation from the disk, a prefetch policy can also be determined according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation, and the hotspot file blocks then read according to that policy.
Specifically, the method shown in FIG. 1 further includes: determining a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation; and reading the hotspot file blocks corresponding to the at least one operation into the cache specifically includes reading them into the cache according to the prefetch policy.
The prefetch policy may specifically include the number and size of hotspot file blocks to be read in each sub-phase, and so on.
Optionally, the system load of the electronic device includes at least one of the CPU occupancy of the electronic device, the available memory size, and process waits caused by I/O.
In the present application, a reasonable prefetch policy can be determined according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation; a reasonable policy determined with the system resources of the electronic device taken into account improves the reading efficiency of the hotspot file blocks.
The file cache reading method of the embodiments of the present application is described in detail below, taking the WeChat startup scenario as an example.
Embodiment 1: reading the hotspot file blocks of the WeChat startup scenario.
Take a user scenario on an Android smartphone as an example: while the user is listening to music and playing a game, an important WeChat message suddenly arrives, and the user immediately clicks the message to open WeChat. There is a fairly high probability that opening the interface will take 3-4 seconds, which severely affects the user experience.
To speed up the startup of WeChat, the file cache reading method of the embodiments of the present application can be used to fetch the files related to starting WeChat. Specifically, starting WeChat mainly includes the following main processes:
First, a learning-capable recognition module learns the hotspot file blocks corresponding to each operation.
Specifically, the user behavior in the WeChat startup scenario is divided, in time order, into sub-phases such as the WeChat message arriving, pulling down the notification bar, sliding the notification bar until the WeChat message appears, and clicking WeChat. Each time the user enters the file block reading of each sub-phase, the file read operations performed in that phase are recorded, and after multiple runs the pre-read file information of the scene can be identified and stored in a database, so that in the next stage the hotspot file blocks corresponding to different operations can be identified from the stored information.
Second, during a cold start of WeChat, the hotspot file blocks of each phase of the WeChat startup are read according to the learned information.
Specifically, the hotspot file blocks corresponding to each sub-phase of the WeChat startup (each sub-phase corresponding to a different operation) can be identified from the scene pre-read file information stored in the previous step; then, in the WeChat cold-start scenario, the scene pre-read file information can be read from the previously stored pre-read file information database.
Because the user's background load in the WeChat startup scenario is very heavy, I/O performance is poor and there is a high probability that the I/O of a WeChat cold start takes a long time. Therefore, the read-ahead quota is adjusted upward in the early part of the WeChat cold-start sub-phases to increase the read-ahead strength and guarantee the effect. The quotas rise in the phases of the WeChat message arriving, pulling down the notification bar, and sliding until WeChat appears, to relieve the burst of I/O reads when WeChat is clicked. Based on the adjusted per-phase quotas, the file operation sequence of each phase, that is, the prefetch strategy, is compiled according to indicators such as hotspot block size, loading probability, and operation time consumption.
Finally, the hotspot file blocks are prefetched according to the hierarchical multi-issue mechanism.
When performing file prefetching, a stable prefetch amount can be guaranteed in each phase, and execution is smooth in each prefetch phase. For example, when a WeChat message is received, open, fstat, and mmap are initiated for some of the pre-read files, while read and the subsequent operations are initiated directly for the fully identified key files, so that file reads and writes are completed ahead of time, precisely, and at high speed, greatly improving performance in this user scenario.
The process of determining the hotspot file blocks of each sub-phase and formulating the prefetch strategy of each sub-phase in Embodiment 1 is shown in FIG. 2.
The first row in FIG. 2 shows multiple operations or sub-phases, specifically including: the WeChat message arriving, sliding the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and clicking the WeChat message. The pre-read file data set corresponding to each operation or sub-phase is shown in the second row of FIG. 2; by learning from this data set, the pre-read files corresponding to each sub-phase shown in the third row of FIG. 2 can be obtained.
The pre-read file data set may include the pre-read files recorded over multiple runs of each operation or sub-phase. Through learning, the common pre-read files can be extracted from the data set, thereby obtaining the pre-read files corresponding to each operation or sub-phase.
After the pre-read files corresponding to each operation or sub-phase have been learned, a pre-read strategy can be formulated for them. The prefetch strategy may include the size of the prefetched file blocks, the addresses of the file blocks, and so on. The prefetch strategy of each sub-phase formulated in Embodiment 1 is shown in the fourth row of FIG. 2: the total amount of files to be prefetched is 20 MB each for the WeChat message arriving, sliding the screen until the notification bar appears, and clicking the WeChat message, and 30 MB for sliding the notification bar until the WeChat message appears. The fourth row of FIG. 2 also lists, for each sub-phase, the files that need to be read and their addresses.
Specifically, when the WeChat message arrives, the size of the file blocks to be read is 20 MB; the files to be opened are f1 and f2, etc., and the files to be mapped are f1 and f3, etc. For sliding the screen until the notification bar appears, the size of the file blocks to be read is 20 MB; the files to be opened are f3 and f4, etc., and read initiation is performed on f2(10, 100) and f3(0, 200), etc. For sliding the notification bar until the WeChat message appears, the size of the file blocks to be read is 30 MB; the files to be opened are f5 and f7, and read initiation is performed on f3(30, 700) and f4(0, 600), etc. For clicking the WeChat message, the size of the file blocks to be read is 20 MB, and unmapping/closing operations are performed on f2(10, 100) and f3(0, 200), etc. Here, performing read initiation on f2(10, 100) means reading 100 file pages starting from address 10 of file f2.
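A per-phase operation sequence of this kind can be represented as a small data-driven plan and replayed. The file names f1-f4 and the handler table are illustrative placeholders mirroring FIG. 2, not actual WeChat files:

```python
# Illustrative prefetch plans for two sub-phases, mirroring FIG. 2.
# Each entry is (operation, file, args).
PLAN_ON_WECHAT_MESSAGE = [
    ("open", "f1", ()),
    ("open", "f2", ()),
    ("mmap", "f1", ()),
    ("mmap", "f3", ()),
]

PLAN_SLIDE_TO_NOTIFICATION_BAR = [
    ("open", "f3", ()),
    ("open", "f4", ()),
    ("read_initiate", "f2", (10, 100)),  # 100 pages from page 10 of f2
    ("read_initiate", "f3", (0, 200)),
]


def execute_plan(plan, handlers):
    """Replay a sub-phase plan through a table of operation handlers;
    returns a log of the operations performed, in order."""
    log = []
    for op, fname, args in plan:
        handlers[op](fname, *args)
        log.append((op, fname) + args)
    return log
```

The scene pre-identification described earlier decides *which* plan to replay and *when*; the executor itself only walks the recorded sequence.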
In Embodiment 1, the file cache reading method of the embodiments of the present application can accurately identify the hotspot file blocks required for starting WeChat. The next time WeChat is started, its hotspot file pages can be prefetched and prepared in advance, improving the cache hit rate, thereby reducing the startup time and improving the user experience.
FIG. 3 shows the effect of applying the file cache reading method of the embodiments of the present application to the WeChat startup scenario. As shown in FIG. 3, prefetch startup reads the hotspot file blocks of each operation of the WeChat startup into the cache in advance (specifically, the hotspot file blocks required for startup are prepared ahead of time and accessed sequentially, corresponding to the prefetch process in FIG. 3), so that, compared with the normal startup process of the application, the application can be started in less time.
The file cache reading method of the embodiments of the present application has been described in detail above with reference to FIG. 1 to FIG. 3; the file cache reading apparatus of the embodiments of the present application is described below with reference to FIG. 4 and FIG. 5. It should be understood that the file cache reading apparatus described in FIG. 4 and FIG. 5 can execute the steps of the file cache reading method shown in FIG. 1 to FIG. 3; for brevity, repeated descriptions are appropriately omitted when describing the apparatuses shown in FIG. 4 and FIG. 5 below.
FIG. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application. The apparatus 200 shown in FIG. 4 specifically includes:
a prediction module 201, configured to predict at least one operation to be performed by the user, where the at least one operation is used to start an application scenario of an electronic device;
a determining module 202, configured to determine the hotspot file blocks corresponding to the at least one operation; and
a reading module 203, configured to read, before the at least one operation occurs, the hotspot file blocks corresponding to the at least one operation into the cache.
In the present application, by predicting the operation the user is about to perform and reading the corresponding hotspot file blocks into the cache in advance, the file cache reading apparatus can speed up the startup of the application scenario corresponding to the user's operation.
Optionally, in an embodiment, the prediction module 201 is specifically configured to: acquire a trigger message; and determine, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence is used to indicate the correspondence between different trigger messages and the user's operations.
Optionally, in an embodiment, the hotspot file blocks corresponding to the at least one operation are multiple hotspot file blocks, and the reading module 203 is specifically configured to perform read operations on the multiple hotspot file blocks simultaneously, where a read operation includes multiple sub-phases, and each of the multiple hotspot file blocks is in a different sub-phase of the read operation.
Optionally, in an embodiment, the multiple sub-phases include open, locate, read initiation, and read response.
Optionally, in an embodiment, the determining module 202 is further configured to determine a prefetch policy according to the system load of the electronic device and the size of the hotspot file blocks corresponding to the at least one operation, where the prefetch policy is used to indicate the size or number of hotspot file blocks to be prefetched for the at least one operation; the reading module 203 is specifically configured to read the hotspot file blocks corresponding to the at least one operation into the cache according to the prefetch policy.
Optionally, in an embodiment, the system load of the electronic device includes at least one of the CPU occupancy of the electronic device, the available memory size, and process waits caused by I/O.
Optionally, in an embodiment, the apparatus 200 further includes a learning module 204, which is specifically configured to: acquire historical data of the electronic device, where the historical data includes the hotspot file blocks read into the cache by the electronic device during different user operations; determine, according to the historical data, a second correspondence between the user's operations and the hotspot file blocks read into the cache; and generate second correspondence information corresponding to the second correspondence. The determining module 202 is specifically configured to determine the hotspot file blocks corresponding to the at least one operation according to the second correspondence information.
FIG. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of the present application. The apparatus 300 shown in FIG. 5 specifically includes:
a memory 301, configured to store a program; and
a processor 302, configured to execute the program stored in the memory 301. When the program in the memory 301 is executed, the processor 302 is specifically configured to: predict at least one operation to be performed by the user, where the at least one operation is used to start an application scenario of an electronic device; determine the hotspot file blocks corresponding to the at least one operation; and, before the at least one operation occurs, read the hotspot file blocks corresponding to the at least one operation into the cache.
In the present application, by predicting the operation the user is about to perform and reading the corresponding hotspot file blocks into the cache in advance, the file cache reading apparatus can speed up the startup of the application scenario corresponding to the user's operation.
The processor 302 in the apparatus 300 corresponds to the prediction module 201, the determining module 202, and the reading module 203 in the apparatus 200.
The apparatus 200 and the apparatus 300 may specifically be electronic devices. When the apparatus 300 is an electronic device, the processor 302 may specifically be the central processing unit (CPU) of the electronic device.
The file cache reading system of the embodiments of the present application is described in detail below with reference to FIG. 6. It should be understood that the file cache reading system shown in FIG. 6 can execute the file cache reading method of the embodiments of the present application. In addition, the file cache reading system of the embodiments of the present application may specifically run as intelligent operating system platform software inside a smart device, a server, or the like.
FIG. 6 is a schematic block diagram of a file cache reading system according to an embodiment of the present application. In terms of hierarchy, the file cache reading system shown in FIG. 6 resides in the operating system of the smart device or server. When an application running on the operating system starts, the operating system needs to fetch the corresponding files from the disk into the cache to start the application. The file cache reading system shown in FIG. 6 is divided by function into an event collection subsystem, a scene recognition subsystem, a prefetch policy formulation subsystem, and a prefetch execution subsystem. Each subsystem of the file cache reading system is described in detail below.
The events collected by the event collection subsystem may be various user operations, changes in device state (such as the screen turning on or off), switching between foreground and background applications, and the like.
The event collection subsystem internally includes a scene event instrumentation module. Specifically, a user scene instrumentation module can be added to the operating system's original graphics, windowing and events subsystem (GWES). The user scene instrumentation module handles the user-scene-related instrumentation issued by the original GWES event management module (equivalent to marking certain events collected by the event subsystem) and transparently passes the scene events obtained by instrumentation to the scene recognition subsystem. The scene recognition subsystem completes the identification of the specific scene; after the corresponding scene is identified, the operation the user will perform next can be predicted.
The scene recognition subsystem can be divided by function into the following functional modules, whose specific functions are as follows:
The scene pre-identification module is configured to pre-identify the sub-phases related to a certain scene according to the user events from the scene event instrumentation module, combining factors such as time, foreground application, and user habits.
A user scene may be refined into multiple phases according to the time sequence of the user's operations, and needs to include the phase before the scene is entered, so that the scene can be predicted. The scene result predicted by the scene pre-identification module is first passed to the scene hotspot file block collection module for generating the data set. Then, after the scene hotspot file blocks have been successfully identified, the scene pre-identification result can be sent to the policy formulation subsystem for formulating the policy.
The pre-identification of the sub-phases related to a scene by the scene pre-identification module is equivalent to the prediction of the user's operations described above.
The scene hotspot file block collection module is configured to receive the scene sub-phase sent by the scene pre-identification module, and to collect and record the hotspot file block data of this phase.
In addition, a hotspot file block can be refined to file-page granularity, including information such as [file, offset, pages]. After enough scene file hotspot data has been collected over multiple runs, the data set is passed to the scene hotspot file block identification module for identifying the scene hotspots.
The scene hotspot file block identification module is configured to identify the scene hotspot file blocks from the scene file block hotspot data set provided by the scene hotspot file block collection module, according to a recognition algorithm (sorting by file block size, loading ratio, etc.), and to send the identification result to the scene hotspot file block storage module. The scene hotspot file block information is the hotspot file block information queue of each sub-phase of the scene.
The scene hotspot file block storage module is responsible for accepting the hotspot block information from the scene hotspot file block identification module and storing it persistently in a database. Databases include, but are not limited to, local disks, in-memory databases, and cloud databases.
The prefetch policy formulation subsystem is responsible for accepting the key scene events from scene pre-identification after the scene hotspot file blocks have been stored, and, combining the system resource load situation, formulating the prefetch strategy corresponding to the current scenario. Scene events include, but are not limited to, [application + interface + behavior], and the prefetch strategy includes [scene sub-phase] + [hotspot file block] + [prefetch sub-phase].
The prefetch policy formulation subsystem includes a system resource load identification module and a prefetch strategy formulation module. The system resource load identification module is configured to collect the system resource load of the operating system (CPU/MEM/IO, etc., where MEM denotes memory and IO denotes input/output), evaluate the system load pressure level, and notify the prefetch strategy formulation module of that level.
The prefetch strategy formulation module is configured to read the original scene hotspot file block information from the database, adjust the hotspot file block loading quota of each sub-phase according to the system resource load pressure level, and formulate the current file cache prefetch strategy.
The prefetch execution subsystem performs multi-level, concurrent prefetching of hotspot blocks after receiving the determined policy. So-called multi-level prefetching means that the prefetch of one hotspot file block comprises multiple levels: open, locate, read initiation, read response, and so on. So-called concurrent prefetching means that, at any given moment, prefetch operations at different levels can be performed on multiple hotspot file blocks simultaneously.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
A person skilled in the art may clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and in actual implementation there may be other divisions; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

  1. 一种文件缓存的读取方法,其特征在于,包括:
    预测用户将要进行的至少一个操作,其中,所述至少一个操作用于启动电子设备的应用场景;
    确定所述至少一个操作对应的热点文件块;
    在所述至少一个操作发生之前,将所述至少一个操作对应的热点文件块读取到缓存中。
  2. 如权利要求1所述的方法,其特征在于,所述预测用户的至少一个操作,包括:
    获取触发消息;
    根据预设的第一对应关系,将所述触发消息对应的操作确定为所述至少一个操作,其中,所述第一对应关系用于指示不同的触发消息与所述用户的操作的对应关系。
  3. 如权利要求1或2所述的方法,其特征在于,所述至少一个操作对应的热点文件块为多个热点文件块,所述将所述至少一个操作对应的热点文件块读取到缓存中,包括:
    对所述多个热点文件块同时执行读操作,其中,所述读操作包括多个子阶段,所述多个热点文件块中的每个热点文件块分别处于所述读操作的不同子阶段。
  4. 如权利要求3所述的方法,其特征在于,所述多个子阶段包括打开、定位、读发起和读响应。
  5. 如权利要求1-4中任一项所述的方法,其特征在于,所述方法还包括:
    根据所述电子设备的系统负载和所述至少一个操作对应的热点文件块的大小确定预取策略,所述预取策略用于指示所述至少一个操作需要预取的热点文件块的大小或者数量;
    所述将所述至少一个操作对应的热点文件块读取到缓存中,包括:根据所述预取策略将所述至少一个操作对应的热点文件块读取到缓存中。
  6. 如权利要求5所述的方法,其特征在于,所述电子设备的系统负载包括所述电子设备的CPU占有率、可用内存大小以及I/O导致的进程等待中的至少一种。
  7. 如权利要求1-6中任一项所述的方法,其特征在于,所述方法还包括:
    获取所述电子设备的历史数据,所述历史数据包括用户在不同的操作时所述电子设备读取到缓存中的热点文件块;
    根据所述历史数据确定用户的操作与读取到缓存中的热点文件块的第二对应关系;
    生成与所述第二应关系对应的第二对应关系信息;
    所述确定所述至少一个操作对应的热点文件块,包括:
    根据所述第二对应关系信息确定所述至少一个操作对应的热点文件块。
  8. A file cache reading apparatus, comprising:
    a prediction module, configured to predict at least one operation to be performed by a user, wherein the at least one operation is used to start an application scenario of an electronic device;
    a determining module, configured to determine a hotspot file block corresponding to the at least one operation; and
    a reading module, configured to read the hotspot file block corresponding to the at least one operation into a cache before the at least one operation occurs.
  9. The apparatus according to claim 8, wherein the prediction module is specifically configured to:
    obtain a trigger message; and
    determine, according to a preset first correspondence, an operation corresponding to the trigger message as the at least one operation, wherein the first correspondence indicates correspondences between different trigger messages and operations of the user.
  10. The apparatus according to claim 8 or 9, wherein the hotspot file block corresponding to the at least one operation comprises a plurality of hotspot file blocks, and the reading module is specifically configured to:
    perform read operations on the plurality of hotspot file blocks simultaneously, wherein a read operation comprises a plurality of sub-stages, and each of the plurality of hotspot file blocks is in a different sub-stage of the read operation.
  11. The apparatus according to claim 10, wherein the plurality of sub-stages comprise opening, locating, read initiation, and read response.
  12. The apparatus according to any one of claims 8 to 11, wherein the determining module is further configured to determine a prefetch policy according to a system load of the electronic device and a size of the hotspot file block corresponding to the at least one operation, wherein the prefetch policy indicates a size or a quantity of hotspot file blocks to be prefetched for the at least one operation;
    and the reading module is specifically configured to read the hotspot file block corresponding to the at least one operation into the cache according to the prefetch policy.
  13. The apparatus according to claim 12, wherein the system load of the electronic device comprises at least one of a CPU occupancy of the electronic device, an available memory size, and process waiting caused by I/O.
  14. The apparatus according to any one of claims 8 to 13, wherein the apparatus further comprises:
    a learning module, specifically configured to:
    obtain historical data of the electronic device, wherein the historical data comprises hotspot file blocks read into the cache by the electronic device during different operations of the user;
    determine, according to the historical data, a second correspondence between operations of the user and the hotspot file blocks read into the cache; and
    generate second correspondence information corresponding to the second correspondence;
    and the determining module is specifically configured to:
    determine the hotspot file block corresponding to the at least one operation according to the second correspondence information.
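The flow claimed above can be illustrated with a minimal, hypothetical sketch. All names here (`PrefetchCache`, `first_map`, `second_map`, the load thresholds, and the trigger/operation strings) are illustrative assumptions, not identifiers from the patent: a trigger message is mapped to predicted operations (the first correspondence), each operation to its hotspot file blocks (the second correspondence, learned from history), and the blocks are read into the cache before the operation occurs, with the prefetch amount throttled by system load. Threads stand in for the staged overlap of the open/locate/read-initiation/read-response sub-stages of claims 3 and 4.

```python
import threading
from collections import defaultdict

class PrefetchCache:
    """Illustrative sketch of predict -> determine hotspot blocks -> prefetch."""

    def __init__(self, read_block):
        self.read_block = read_block          # backing-store reader (blocking call)
        self.first_map = {}                   # trigger message -> predicted operations
        self.second_map = defaultdict(list)   # operation -> hotspot file blocks
        self.cache = {}

    def learn(self, history):
        # Build the second correspondence from (operation, block) history records.
        for op, block in history:
            if block not in self.second_map[op]:
                self.second_map[op].append(block)

    @staticmethod
    def prefetch_budget(load, n_blocks):
        # Prefetch policy (thresholds are invented for illustration): skip or
        # shrink prefetch under high CPU occupancy, low available memory, or
        # heavy I/O-induced process waiting.
        if load["cpu"] > 0.9 or load["free_mem_mb"] < 64 or load["io_wait"] > 0.5:
            return 0
        if load["cpu"] > 0.6:
            return max(1, n_blocks // 2)
        return n_blocks

    def on_trigger(self, message, load):
        # Predict the upcoming operations, then prefetch their hotspot blocks
        # before any of those operations actually happens.
        ops = self.first_map.get(message, [])
        blocks = [b for op in ops for b in self.second_map[op]]
        self._prefetch(blocks[: self.prefetch_budget(load, len(blocks))])
        return ops

    def _prefetch(self, blocks):
        # Issue all reads concurrently so the sub-stages of different blocks
        # overlap in time, approximating the claimed pipelined read.
        threads = [threading.Thread(target=self._fill, args=(b,)) for b in blocks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def _fill(self, block):
        self.cache[block] = self.read_block(block)
```

For example, an "unlock" trigger could predict a camera launch and warm its hotspot blocks while the launch animation is still playing; a real implementation would sit in the OS cache layer rather than user code.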
PCT/CN2019/084476 2018-04-28 2019-04-26 File cache reading method and apparatus WO2019206260A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810399282.5 2018-04-28
CN201810399282.5A CN110427582A (zh) File cache reading method and apparatus

Publications (1)

Publication Number Publication Date
WO2019206260A1 true WO2019206260A1 (zh) 2019-10-31

Family

ID=68294816

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084476 WO2019206260A1 (zh) File cache reading method and apparatus

Country Status (2)

Country Link
CN (1) CN110427582A (zh)
WO (1) WO2019206260A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077588B (zh) * 2020-08-20 2023-03-28 Honor Device Co., Ltd. Read-ahead method and apparatus
CN115079959B (zh) * 2022-07-26 2023-06-09 Honor Device Co., Ltd. File management method and apparatus, and electronic device
CN115883910A (zh) * 2022-12-27 2023-03-31 Tianyi Cloud Technology Co., Ltd. Progressive elastic caching method and apparatus for segmented video acceleration

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102770841A (zh) * 2010-02-26 2012-11-07 Samsung Electronics Co., Ltd. Method and apparatus for generating a minimum boot image
CN103885901A (zh) * 2012-12-21 2014-06-25 Lenovo (Beijing) Co., Ltd. File reading method, storage device and electronic device
US20170075627A1 (en) * 2015-09-15 2017-03-16 Salesforce.Com, Inc. System having in-memory buffer service, temporary events file storage system and backup events file uploader service
CN107861886A (zh) * 2017-11-28 2018-03-30 Qingdao Hisense Electronics Co., Ltd. Cache data processing method and apparatus, and terminal

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN1204102A (zh) * 1998-06-04 1999-01-06 China Cartographic Publishing House Image processing method in electronic maps
CN102255963A (zh) * 2011-07-01 2011-11-23 Tsinghua University Method for providing an on-demand push service for a remote file system
CN105824820A (zh) * 2015-01-04 2016-08-03 Huawei Technologies Co., Ltd. Media file caching method and apparatus
CN106572381A (zh) * 2016-11-07 2017-04-19 Qingdao Hisense Electronics Co., Ltd. Picture thumbnail processing method and smart television
CN107450860B (zh) * 2017-08-15 2020-05-08 Hunan Ancun Technology Co., Ltd. Map file read-ahead method based on distributed storage

Also Published As

Publication number Publication date
CN110427582A (zh) 2019-11-08

Similar Documents

Publication Publication Date Title
US10191856B2 (en) Method of managing web browser cache size using logical relationships and clustering
WO2019206260A1 (zh) File cache reading method and apparatus
US9367211B1 (en) Interface tab generation
KR101719500B1 (ko) Acceleration based on cached flows
US11520588B2 (en) Prefetch filter table for storing moderately-confident entries evicted from a history table
US20160306655A1 (en) Resource management and allocation using history information stored in application's commit signature log
US11366757B2 (en) File pre-fetch scheduling for cache memory to reduce latency
US20160077673A1 (en) Intelligent Canvas
EP2851792A1 (en) Solid state drives that cache boot data
CN107223240B (zh) Computing method and apparatus associated with context-aware management of a file cache
CN110837480A (zh) Cache data processing method and apparatus, computer storage medium, and electronic device
US9870400B2 (en) Managed runtime cache analysis
CN109558187A (zh) User interface rendering method and apparatus
CN110858238A (zh) Data processing method and apparatus
CN112379945B (zh) Method, apparatus, device, and storage medium for running an application
Ren et al. Memory-Centric Data Storage for Mobile Systems
CN113485642A (zh) Data caching method and apparatus
CN112732542A (zh) Information processing method, information processing apparatus, and terminal device
Lim et al. Applications IO profiling and analysis for smart devices
US20100217941A1 (en) Improving the efficiency of files sever requests in a computing device
CN117235088B (zh) Cache update method, apparatus, device, medium, and platform for a storage system
WO2012050416A1 (en) A method of caching application
US10678699B2 (en) Cascading pre-filter to improve caching efficiency
CN108255918B (zh) Method for obtaining a read-ahead keyword set, web page access device, and electronic device
Chang et al. A New Readahead Framework for SSD-based Caching Storage in IoT Systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19792679

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19792679

Country of ref document: EP

Kind code of ref document: A1