CN110427582A - File cache reading method and apparatus - Google Patents

File cache reading method and apparatus

Info

Publication number
CN110427582A
Authority
CN
China
Prior art keywords
files
hot spot blocks
read
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810399282.5A
Other languages
Chinese (zh)
Inventor
李涛
周帅
李渴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810399282.5A priority Critical patent/CN110427582A/en
Priority to PCT/CN2019/084476 priority patent/WO2019206260A1/en
Publication of CN110427582A publication Critical patent/CN110427582A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/172 - Caching, prefetching or hoarding of files
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application provides a file cache reading method and apparatus. The method includes: predicting at least one operation that a user is about to perform, where the at least one operation is used to start any application scenario of an electronic device; determining a hot-spot file block corresponding to the at least one operation; and, before the at least one operation occurs, reading the hot-spot file block corresponding to the at least one operation into a cache. The application can accelerate the starting of application scenarios on the electronic device.

Description

File cache reading method and apparatus
Technical field
This application relates to the field of computer application technologies, and more particularly, to a file cache reading method and apparatus.
Background
With the rapid development of electronic devices, electronic devices (for example, smartphones and smartwatches) carry more and more applications, so the operating systems of electronic devices need to support more and more service scenarios. As a result, the problem of electronic devices becoming slower and more prone to stuttering the longer they are used has become increasingly prominent. The main cause of this problem is that the limited physical memory of an electronic device cannot accommodate an unlimited number of applications.
To resolve this problem, conventional solutions usually manage the file cache in a least recently used (LRU) manner. Specifically, recently accessed file pages are cached as far as possible, and the least recently used file pages are preferentially released, so that when starting an application, the electronic device can fetch the related files of the application from the cache, thereby accelerating the start-up of the application. However, the LRU manner is divorced from the user's real application scenarios, and its effect in practice is unsatisfactory. For example, when the file cache recently used by the user contains a large amount of miscellaneous data, useful cached files may be evicted from memory, resulting in a low file cache hit rate and memory thrashing, which degrades user experience.
Summary of the invention
This application provides a file cache reading method and apparatus, to accelerate the starting of application scenarios of an electronic device.
According to a first aspect, a file cache reading method is provided. The method includes: predicting at least one operation that a user is about to perform; determining a hot-spot file block corresponding to the at least one operation; and, before the at least one operation occurs, reading the hot-spot file block corresponding to the at least one operation into a cache.
The at least one operation is an operation used to start an application scenario of an electronic device.
Specifically, different operations of the user may correspond to different application scenarios, and the user opens the corresponding application scenario through these operations. Therefore, predicting the at least one operation that the user is about to perform is effectively equivalent to predicting an application scenario that may occur; after the corresponding application scenario has been predicted, the operation that the user may perform next can also be predicted, or the operation that the user may perform can be predicted directly.
For example, the at least one operation includes pulling down the notification bar, sliding the notification bar until a WeChat message appears, and tapping WeChat; the user can start WeChat through these operations.
It should be understood that the application scenario may specifically be an application, or may be a specific scene within an application. For example, the application scenario may be WeChat, or may be tapping Moments (the circle of friends) within WeChat.
Optionally, determining the hot-spot file block corresponding to the at least one operation includes: determining address information of the hot-spot file block corresponding to the at least one operation.
Optionally, the address information is specifically in the form [file, offset, page].
When the address information of a hot-spot file block is [file, offset, page], file indicates the file in which the hot-spot file block is located, offset is the offset of the hot-spot file block relative to the base address or start address of the file, and page indicates the file pages included in the hot-spot file block.
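As an illustration of this addressing scheme, the following is a minimal sketch (not taken from the patent) of how a [file, offset, page] record might be represented and turned into a byte range; the 4 KB page size, the field names, and the example path are assumptions.

```python
from dataclasses import dataclass

PAGE_SIZE = 4096  # assumed file page size in bytes


@dataclass(frozen=True)
class HotBlock:
    """One hot-spot file block in [file, offset, page] form."""
    file: str    # the file in which the hot-spot block is located
    offset: int  # byte offset of the block relative to the file's start address
    pages: int   # number of file pages the block comprises

    def byte_range(self):
        """Return the (start, length) byte range covered by the block."""
        return self.offset, self.pages * PAGE_SIZE


# Example: a block of 100 pages starting 10 pages into a hypothetical file f2.
block = HotBlock(file="/data/app/f2", offset=10 * PAGE_SIZE, pages=100)
print(block.byte_range())
```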
Optionally, reading the hot-spot file block corresponding to the at least one operation into the cache includes: reading the hot-spot file block corresponding to the at least one operation into the cache according to the address information of the hot-spot file block corresponding to the at least one operation.
Specifically, the hot-spot file block corresponding to the at least one operation may be read from disk into the cache according to the address information of the hot-spot file block corresponding to the at least one operation.
In this application, the operation that the user is about to perform is predicted, and the hot-spot file block corresponding to the operation is read into the cache in advance, which can accelerate the starting of the application scenario corresponding to the user operation.
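The following hedged sketch shows one way such address information could be used to pull a block into the page cache ahead of time on a Linux-based system; posix_fadvise(POSIX_FADV_WILLNEED) is used here only as an illustrative stand-in for whatever prefetch interface the actual implementation relies on, and the example path is hypothetical.

```python
import os

PAGE_SIZE = 4096  # assumed file page size


def prefetch_block(path, offset, pages):
    """Ask the kernel to read `pages` file pages at `offset` into the page cache."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Advise that this range will be needed soon; the kernel reads it in
        # asynchronously so it is already cached when the operation occurs.
        os.posix_fadvise(fd, offset, pages * PAGE_SIZE, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)


# Example: prefetch the hot-spot block [file=/data/app/f2, offset=10 pages, page=100].
if __name__ == "__main__":
    prefetch_block("/data/app/f2", 10 * PAGE_SIZE, 100)
```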
Optionally, the at least one operation is all or some of the operations used to start the application scenario.
In this application, the hot-spot file blocks corresponding to some of the operations related to starting an application scenario may be read into the cache in advance, or the hot-spot file blocks corresponding to all of the operations related to the application scenario may be read into the cache in advance.
In some implementations, predicting the at least one operation that the user is about to perform includes: obtaining a trigger message; and determining, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation.
The first correspondence is used to indicate the user operations corresponding to different trigger messages. Specifically, the first correspondence may indicate a mapping between trigger messages and user operations, and may specifically take the form of a table that lists different trigger messages and the user operations corresponding to each trigger message.
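A minimal sketch of such a first correspondence, assuming the trigger message carries a type string; the message types and operation names below are illustrative placeholders, not taken from the patent.

```python
# First correspondence: trigger message type -> the user operations it predicts,
# in the order the user usually performs them.
FIRST_CORRESPONDENCE = {
    "wechat_message": [
        "slide screen until the notification bar appears",
        "slide the notification bar until the WeChat message appears",
        "tap the WeChat message",
    ],
    "twitter_message": [
        "light up the screen",
        "unlock the screen",
        "tap the Twitter message",
    ],
}


def predict_operations(trigger_type):
    """Return the operations the user is expected to perform for this trigger."""
    return FIRST_CORRESPONDENCE.get(trigger_type, [])


print(predict_operations("wechat_message"))
```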
Optionally, the trigger message is used to prompt the user to start the application scenario.
Specifically, the trigger message may be a message received by an application of the electronic device. For example, the trigger message is a news push message, and the news push message prompts the user to open a news client. For another example, the trigger message is a WeChat message, and the WeChat message prompts the user to tap and open WeChat.
According to the correspondence between trigger messages and different operations, the operation that the user is about to perform can be determined, so that the hot-spot file blocks corresponding to the operations to be performed by the user can subsequently be read into the cache in advance, accelerating the starting of the application scenario.
In some implementations, predicting the at least one operation that the user is about to perform includes: predicting, according to the user's operating habits, the at least one operation that the user is about to perform.
Further, predicting the at least one operation according to the user's operating habits includes: obtaining operating habit information of the user; and predicting, according to the operating habit information of the user, the at least one operation that the user is about to perform.
The operating habit information of the user may indicate the user's habit of opening application scenarios; for example, the user opens a certain application scenario in a certain manner at a fixed time every day. Then, when the fixed time is about to arrive, the manner in which the user will open the application scenario can be predicted according to the user's operating habit.
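A hedged sketch of habit-based prediction: a habit record keyed by time of day, consulted shortly before the habitual time; the record layout, the five-minute lookahead window, and the example habit are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Operating-habit information: habitual start time -> operations the user performs.
HABITS = {
    "17:30": ["tap screen", "tap ride-hailing app icon"],  # assumed habit record
}

LOOKAHEAD = timedelta(minutes=5)  # predict shortly before the habitual time


def predict_from_habits(now=None):
    """Return operations whose habitual time falls within the lookahead window."""
    now = now or datetime.now()
    predicted = []
    for hhmm, ops in HABITS.items():
        habitual = now.replace(hour=int(hhmm[:2]), minute=int(hhmm[3:]),
                               second=0, microsecond=0)
        if timedelta(0) <= habitual - now <= LOOKAHEAD:
            predicted.extend(ops)
    return predicted


print(predict_from_habits(datetime(2018, 4, 27, 17, 27)))
```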
In some implementations, the hot-spot file blocks corresponding to the at least one operation include multiple hot-spot file blocks, and reading the hot-spot file blocks corresponding to the at least one operation into the cache includes: performing read operations on the multiple hot-spot file blocks simultaneously, where a read operation includes multiple sub-stages, and each of the multiple hot-spot file blocks is in a different sub-stage of the read operation.
Because a read operation includes multiple sub-stages, read operations can be performed on different hot-spot file blocks simultaneously, each in a different sub-stage (which amounts to reading different hot-spot file blocks in parallel), thereby improving the reading speed of the hot-spot file blocks and further reducing the starting time of the application scenario.
Optionally, the multiple sub-stages include opening, locating, read initiation, and read response.
Each of the multiple sub-stages may correspond to one hot-spot file block. For example, a first hot-spot file block, a second hot-spot file block, a third hot-spot file block, and a fourth hot-spot file block are respectively in the opening, locating, read initiation, and read response stages of the read operation.
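The sketch below illustrates this idea as a small four-stage pipeline in which, at any instant, different blocks can sit in different stages. The mapping of the sub-stages onto system calls (open via os.open, locate via lseek, read initiation via posix_fadvise, read response via pread) is one possible Linux interpretation and an assumption, not the patent's actual implementation.

```python
import os
import queue
import threading

PAGE_SIZE = 4096
DONE = object()  # sentinel marking the end of the block stream


def stage(in_q, out_q, work):
    """Run one pipeline sub-stage: apply `work` to each item and pass it on."""
    while True:
        item = in_q.get()
        if item is DONE:
            if out_q is not None:
                out_q.put(DONE)
            return
        result = work(item)
        if out_q is not None:
            out_q.put(result)


def do_open(blk):
    path, offset, pages = blk
    return os.open(path, os.O_RDONLY), offset, pages


def do_locate(item):
    fd, offset, pages = item
    os.lseek(fd, offset, os.SEEK_SET)  # position within the file
    return item


def do_initiate(item):
    fd, offset, pages = item
    os.posix_fadvise(fd, offset, pages * PAGE_SIZE, os.POSIX_FADV_WILLNEED)
    return item


def do_respond(item):
    fd, offset, pages = item
    os.pread(fd, pages * PAGE_SIZE, offset)  # data now largely comes from the page cache
    os.close(fd)


def prefetch_pipeline(blocks):
    """Push (path, offset, pages) blocks through open -> locate -> initiate -> respond."""
    qs = [queue.Queue() for _ in range(4)]
    works = [do_open, do_locate, do_initiate, do_respond]
    threads = []
    for i in range(4):
        out_q = qs[i + 1] if i < 3 else None
        t = threading.Thread(target=stage, args=(qs[i], out_q, works[i]))
        t.start()
        threads.append(t)
    for blk in blocks:
        qs[0].put(blk)
    qs[0].put(DONE)
    for t in threads:
        t.join()
```

The queues between stages are what keep the read pipeline full: a slow stage such as read response does not stall the opening and locating of later blocks, which is the point of letting different blocks occupy different sub-stages at the same moment.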
In some implementations, the method further includes: determining a prefetch policy according to the system load of the electronic device and the size of the hot-spot file blocks corresponding to the at least one operation; and reading the hot-spot file blocks corresponding to the at least one operation into the cache includes: reading the hot-spot file blocks corresponding to the at least one operation into the cache according to the prefetch policy.
The prefetch policy may specifically include the quantity and size of the hot-spot file blocks that need to be read in each sub-stage.
A reasonable prefetch policy can be formulated based on the system load of the electronic device and the size of the hot-spot file blocks, which can improve the reading efficiency of the hot-spot file blocks.
The probability of each operation occurring may also be considered when formulating the prefetch policy. For example, if the probability that a certain operation will occur subsequently is high, more hot-spot file blocks may be prefetched; if the probability that a certain operation will occur subsequently is low, fewer hot-spot file blocks may be prefetched.
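A hedged sketch of how the prefetch amount might be scaled by the probability that an operation will follow; the base quota and the scaling rule are illustrative assumptions, not values from the patent.

```python
def prefetch_quota(block_bytes, op_probability, base_quota_bytes=20 * 1024 * 1024):
    """
    Decide how many bytes of hot-spot file blocks to prefetch for one operation.

    block_bytes:      total size of the operation's hot-spot file blocks
    op_probability:   estimated probability that the operation will occur next
    base_quota_bytes: nominal per-operation quota (assumed 20 MB here)
    """
    # Likely operations get more prefetched, unlikely ones get less.
    scaled = int(base_quota_bytes * op_probability)
    return min(block_bytes, scaled)


# Example: 30 MB of hot blocks, but the operation follows only 40% of the time.
print(prefetch_quota(30 * 1024 * 1024, 0.4))  # -> 8 MB
```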
In some implementations, the system load of the electronic device includes at least one of the CPU usage of the electronic device, the free memory size, and processes waiting on I/O.
In some implementations, the method further includes: obtaining historical data of the electronic device, where the historical data includes the hot-spot file blocks that the electronic device reads into the cache under different operations of the user; determining, according to the historical data, a second correspondence between the user's operations and the hot-spot file blocks read into the cache; and generating second correspondence information corresponding to the second correspondence. Determining the hot-spot file blocks corresponding to the at least one operation includes: determining, according to the second correspondence information, the hot-spot file blocks corresponding to the at least one operation.
The second correspondence may indicate a correspondence or mapping between the user's operations and hot-spot file blocks, and may specifically take the form of a table that lists different operations and the hot-spot file blocks corresponding to each operation.
The correspondence between the user's operations and hot-spot file blocks can be determined relatively accurately from the historical data, and the hot-spot file blocks corresponding to the at least one operation can then be determined more conveniently according to the correspondence.
According to a second aspect, a file cache reading apparatus is provided. The apparatus includes modules configured to perform the file cache reading method in the first aspect or in any possible implementation of the first aspect.
According to a third aspect, a file cache reading apparatus is provided. The apparatus includes: a memory, configured to store a program; a processor; and a transceiver, where the processor is configured to execute the program stored in the memory, and when the program is executed, the processor is configured to perform the method in the first aspect or any of its implementations.
The file cache reading apparatus in the third aspect may specifically be an electronic device, for example, a smartphone, a tablet, or a smartwatch.
According to a fourth aspect, a computer-readable medium is provided. The computer-readable medium stores program code to be executed by a device, and the program code includes instructions for performing the method in the first aspect or any of its implementations.
According to a fifth aspect, computer program code is provided. The program code includes instructions for performing the method in the first aspect or any of its implementations.
The computer program code in the fifth aspect may be program code located inside an electronic device, and when the program code is executed, the starting of application scenarios of the electronic device can be accelerated.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a file cache reading method according to an embodiment of this application;
Fig. 2 is a schematic diagram of determining hot-spot file blocks in a WeChat starting scenario;
Fig. 3 is a schematic comparison of starting WeChat normally and starting WeChat with prefetching;
Fig. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of this application;
Fig. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of this application;
Fig. 6 is a schematic block diagram of a file cache reading system according to an embodiment of this application.
Description of embodiments
The technical solutions in this application are described below with reference to the accompanying drawings.
The file cache reading method in the embodiments of this application may be performed by an electronic device, where the electronic device may specifically be an intelligent terminal with rich display content, such as a smartphone (especially an Android phone), a personal digital assistant (PDA), a tablet computer, or an in-vehicle computer.
Because the LRU manner is divorced from the user's real scenarios, the file cache hit rate is low and user experience is poor. Therefore, to better read the file cache, the files opened during a warm start of an application can first be learned and stored, and these pre-stored files can be fetched the next time the corresponding application is started, thereby accelerating the start-up of the application.
However, in this manner of reading the cache, file-granularity cache prefetching is performed only after a cold start of the corresponding application has already occurred, so the file caching is performed relatively late. In some complex cases, the starting application may contend for reads with foreground processes, so the starting time is still long. In addition, the granularity of cache prefetching is too coarse, which may cause a large number of useless file pages to be loaded and degrade performance.
Therefore, when reading the file cache, the next operation that the user may perform can be predicted first, and the hot-spot file blocks corresponding to that next operation can then be read into the cache in advance, so that when the user performs the corresponding operation, the starting of the application scenario is accelerated and user experience is improved.
The file cache reading method in the embodiments of this application is described in detail below with reference to Fig. 1 to Fig. 3.
Fig. 1 is a schematic flowchart of the file cache reading method according to an embodiment of this application.
The method shown in Fig. 1 may be performed by an electronic device and specifically includes step 101 to step 103, which are described in detail below.
101. Predict at least one operation that a user is about to perform.
The at least one operation may be an operation that the user may perform on the electronic device. For example, the at least one operation may be sliding the screen of the electronic device, tapping the screen of the electronic device, or tapping an icon on the interface of the electronic device.
It should be understood that, in the embodiments of this application, each operation may include only a single action or may include multiple actions; the number of actions included in a single operation is not limited in this application. For example, one operation includes only the action of sliding the screen, while another operation includes the two actions of tapping the notification bar and sliding the notification bar until a WeChat message appears.
For example, using a smartphone as an example, after a WeChat message is received, the user may perform the following operations: sliding the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and tapping the WeChat message. Therefore, the next time a WeChat message is received, it can be predicted that operations such as sliding the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and tapping the WeChat message will be performed next.
Similarly, after a Twitter message is received, the user may perform the following operations: sliding the screen to the notification bar, sliding the notification bar until the Twitter message appears, and tapping the Twitter message. Then, once the electronic device receives a Twitter message, it can be predicted that the user will most likely perform operations such as sliding the screen to the notification bar, sliding the notification bar until the Twitter message appears, and tapping the Twitter message.
The at least one operation may be all or some of the operations that the user uses to open a certain application scenario. Specifically, the at least one operation may be all or some of the operations for opening WeChat. For example, the at least one operation may be sliding the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and tapping the WeChat message; the at least one operation may also be only sliding the screen until the notification bar appears and sliding the notification bar until the WeChat message appears.
The application scenario may be an application, or may be a specific scene within an application. For example, the application scenario may be WeChat, or may be tapping Moments within WeChat.
It should be understood that there are many ways of predicting the at least one operation of the user.
Optionally, predicting the at least one operation of the user specifically includes: predicting the at least one operation of the user according to the user's operating habits.
Specifically, the operating habit information of the user may be obtained first, and the at least one operation that the user is about to perform is then predicted according to the operating habit information of the user.
The operating habit information of the user may indicate the user's habit of opening application scenarios, and may specifically include the time at which the user opens an application scenario, the actions used to open the application scenario, and so on.
Specifically, the user opens a certain application scenario in a certain manner at a fixed time every day. Then, when the fixed time is about to arrive, the manner in which the user will open the application scenario can be predicted according to the user's operating habit. For example, the user opens a ride-hailing app on time at 5:30 every afternoon, and the user starts the ride-hailing app by tapping the screen and then tapping the ride-hailing app (specifically, tapping its icon). Then, whenever 5:30 p.m. is about to arrive, the user's next operations can be predicted as tapping the screen and tapping the ride-hailing app, thereby realizing prediction of the operations that the user may perform next.
Optionally, predicting the at least one operation of the user includes: obtaining a trigger message; and determining, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence is used to indicate the correspondence between different trigger messages and the user's operations.
The first correspondence is used to indicate the user operations corresponding to different trigger messages. Specifically, the first correspondence may indicate a mapping between trigger messages and user operations, and may specifically take the form of a table that lists different trigger messages and the user operations corresponding to each trigger message.
For example, Table 1 shows the correspondence between different trigger messages and user operations. The information represented by Table 1 may be referred to as the first correspondence, and the user operations corresponding to different trigger messages can be obtained from Table 1.
Table 1
Trigger message | User operations
WeChat message | Slide the screen until the notification bar appears; slide the notification bar until the WeChat message appears; tap the WeChat message
Twitter message | Light up the screen; unlock the screen; tap the Twitter message
As shown in Table 1, after a trigger message is obtained, the user operations corresponding to different trigger messages can be obtained directly according to the correspondence shown in Table 1.
The trigger message may be a message pushed by an application of the electronic device; for example, the trigger message may be a message pushed by a news client or a message pushed by a shopping application. In addition, the trigger message may also be a message received by an application of the electronic device; for example, the trigger message may be a received WeChat message or Twitter message.
102. Determine the hot-spot file blocks corresponding to the at least one operation.
The hot-spot file blocks corresponding to each operation may be the file blocks that need to be loaded from disk into memory when the operation occurs.
For an operation, the hot-spot file blocks corresponding to the operation may be the main file blocks that need to be loaded from disk into the cache when the operation is performed. Specifically, for an operation, the file blocks that are loaded into memory every time the operation is performed are the hot-spot file blocks corresponding to the operation.
For example, a first operation has been performed 10 times in total, and each time the first operation is performed, certain file blocks are loaded from disk into the cache. Then, the intersection of the file blocks loaded from disk into the cache over these 10 executions of the first operation is the set of hot-spot file blocks corresponding to the first operation.
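A minimal sketch of this intersection idea, assuming each recorded execution is stored as a set of (file, offset, pages) tuples; the record format and example values are assumptions.

```python
def hot_spot_blocks(executions):
    """Return the blocks loaded on every recorded execution (the intersection)."""
    if not executions:
        return set()
    hot = set(executions[0])
    for blocks in executions[1:]:
        hot &= set(blocks)
    return hot


runs = [
    {("f2", 10, 100), ("f3", 0, 200), ("f9", 0, 4)},   # 1st execution
    {("f2", 10, 100), ("f3", 0, 200)},                 # 2nd execution
    {("f2", 10, 100), ("f3", 0, 200), ("f5", 7, 12)},  # 3rd execution
]
print(hot_spot_blocks(runs))  # only the f2 and f3 blocks appear in every run
```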
When determining the hot-spot file blocks corresponding to the at least one operation, historical data about the file blocks loaded from disk into the cache when each operation is performed may be obtained first, and the hot-spot file blocks corresponding to each operation (that is, the correspondence between each operation and its hot-spot file blocks) are then determined according to the historical data, after which the hot-spot file blocks corresponding to the at least one operation can be determined.
Specifically, the method shown in Fig. 1 further includes: obtaining historical data of the electronic device, where the historical data includes the hot-spot file blocks that the electronic device reads into the cache under different operations of the user; determining, according to the historical data, a second correspondence between the user's operations and the hot-spot file blocks read into the cache; and generating second correspondence information corresponding to the second correspondence. Determining the hot-spot file blocks corresponding to the at least one operation includes: determining, according to the second correspondence information, the hot-spot file blocks corresponding to the at least one operation.
The second correspondence may indicate a correspondence or mapping between the user's operations and hot-spot file blocks, and may specifically take the form of a table that lists different operations and the hot-spot file blocks corresponding to each operation.
The correspondence between the user's operations and hot-spot file blocks can be determined relatively accurately from the historical data, and the hot-spot file blocks corresponding to the at least one operation can then be determined more conveniently according to the correspondence.
For example, Table 2 shows the correspondence between different operations and hot-spot file blocks. It should be understood that hot-spot file block 1, hot-spot file block 2, and so on in Table 2 may specifically be the address information of the hot-spot file blocks. The information represented by Table 2 may be referred to as the second correspondence, and the hot-spot file blocks corresponding to different operations can be obtained from Table 2.
Table 2
User operation | Hot-spot file blocks
Slide the screen until the notification bar appears | Hot-spot file block 1, hot-spot file block 2, ...
Slide the notification bar until the WeChat message appears | Hot-spot file block 3, hot-spot file block 4, ...
Tap the WeChat message | Hot-spot file block 5, hot-spot file block 6, hot-spot file block 7, ...
As shown in Table 2, the hot-spot file blocks corresponding to different operations can be obtained directly according to the correspondence shown in Table 2.
103. Before the at least one operation occurs, read the hot-spot file blocks corresponding to the at least one operation into the cache.
In this application, the operation that the user is about to perform is predicted, and the hot-spot file blocks corresponding to the operation are read into the cache in advance, which can accelerate the starting of the application scenario corresponding to the user operation.
When reading hot-spot file blocks from disk into memory, the hot-spot file blocks may be read in parallel in order to accelerate their reading. Specifically, the reading of a file block can be divided into several sub-processes, and the reading of a different hot-spot file block can proceed in each sub-process, so that different hot-spot file blocks are read in different sub-stages, which accelerates the reading of the hot-spot file blocks. This mechanism of dividing the reading of a file block into multiple sub-stages so that different hot-spot file blocks can be prefetched in different sub-stages may be referred to as a hierarchical multiple-prefetch mechanism.
Specifically, when the hot-spot file blocks corresponding to the at least one operation include multiple hot-spot file blocks, reading the hot-spot file blocks corresponding to the at least one operation into the cache includes: performing read operations on the multiple hot-spot file blocks simultaneously, where the read operation includes multiple sub-stages, and each of the multiple hot-spot file blocks is in a different sub-stage of the read operation.
Because the read operation includes multiple sub-stages, read operations can be performed on different hot-spot file blocks simultaneously in different sub-stages (which amounts to reading different hot-spot file blocks in parallel), thereby improving the reading speed of the hot-spot file blocks and further reducing the starting time of the application scenario.
Optionally, the multiple sub-stages specifically include: opening, locating, read initiation, and read response.
For example, the at least one operation corresponds to 100 hot-spot file blocks in total; read operations can then be performed on 4 hot-spot file blocks at a time, one in each of the four stages of opening, locating, read initiation, and read response, which can greatly accelerate the reading of the hot-spot file blocks.
In this application, the read operation of a hot-spot file block is divided into multiple stages (opening, locating, read initiation, read response, and so on). At any given moment during prefetching for a certain user scenario, multiple files (and their corresponding hot-spot file blocks) are in the file-reading state, but each is in a different reading stage, so they reasonably and fully occupy the file-reading pipeline. Compared with the conventional approach of starting the next hot-spot file block only after the reading of one hot-spot file block has fully completed, this accelerates the read operation of the hot-spot file blocks, so that the hot-spot file blocks can be read from disk into the cache more quickly, thereby accelerating the subsequent starting of the application scenario.
In addition, in order to better read the hot-spot file blocks of the at least one operation from disk, a prefetch policy may also be determined according to the system load of the electronic device and the size of the hot-spot file blocks corresponding to the at least one operation, and the read operations on the hot-spot file blocks are then performed according to the prefetch policy.
Specifically, the method shown in Fig. 1 further includes: determining a prefetch policy according to the system load of the electronic device and the size of the hot-spot file blocks corresponding to the at least one operation; and reading the hot-spot file blocks corresponding to the at least one operation into the cache specifically includes: reading the hot-spot file blocks corresponding to the at least one operation into the cache according to the prefetch policy.
The prefetch policy may specifically include the quantity and size of the hot-spot file blocks that need to be read in each sub-stage.
Optionally, the system load of the electronic device includes at least one of the CPU usage of the electronic device, the free memory size, and processes waiting on I/O.
In this application, a reasonable prefetch policy can be determined according to the system load of the electronic device and the size of the hot-spot file blocks corresponding to the at least one operation; that is, a reasonable prefetch policy is determined while taking the system resources of the electronic device into account, which can improve the reading efficiency of the hot-spot file blocks.
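A hedged sketch of turning CPU usage, free memory, and I/O waiting into a load-pressure level that shrinks the prefetch quota; all thresholds and scaling factors are illustrative assumptions rather than values given by the patent.

```python
def load_pressure_level(cpu_usage, free_mem_mb, iowait_procs):
    """Grade system load pressure as 0 (light), 1 (medium), or 2 (heavy)."""
    if cpu_usage > 0.8 or free_mem_mb < 200 or iowait_procs > 8:
        return 2
    if cpu_usage > 0.5 or free_mem_mb < 500 or iowait_procs > 3:
        return 1
    return 0


def adjust_quota(base_quota_bytes, level):
    """Reduce the per-stage prefetch quota as load pressure rises."""
    factor = {0: 1.0, 1: 0.6, 2: 0.3}[level]
    return int(base_quota_bytes * factor)


level = load_pressure_level(cpu_usage=0.72, free_mem_mb=350, iowait_procs=2)
print(adjust_quota(20 * 1024 * 1024, level))  # medium pressure -> 12 MB
```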
The file cache reading method in the embodiments of this application is described in detail below using the WeChat starting scenario as an example.
Embodiment 1: reading the hot-spot file blocks of the WeChat starting scenario.
Taking a user scenario on an Android smartphone as an example, when the user is playing a game while listening to music and an important WeChat message suddenly arrives, the user immediately taps the message to open WeChat. At this moment, there is a high probability that it will take 3-4 seconds before the interface can be opened, which greatly affects user experience.
In order to accelerate the starting of WeChat, the file cache reading method in the embodiments of this application can be used to fetch the files related to starting WeChat. Specifically, starting WeChat mainly includes the following main processes:
First, a recognition-and-learning module learns the hot-spot file blocks corresponding to each operation.
Specifically, the user behavior in the WeChat starting scenario is divided in chronological order into sub-stages such as a WeChat message arriving, pulling down the notification bar, sliding the notification bar until the WeChat message appears, and tapping WeChat. Each time the user enters a sub-stage and its file blocks are read, the scenario can be recognized, the file read operations performed in that stage are recorded, and the prefetch file information accumulated over multiple operations is stored into a database. This enables the hot-spot file blocks corresponding to different operations to be identified in the next stage according to the stored information.
Second, during a cold start of WeChat, the hot-spot file blocks of each stage of the WeChat start are read according to the learned information.
Specifically, the hot-spot file blocks corresponding to each sub-stage of the WeChat start (each sub-stage corresponding to a different operation) can be identified according to the prefetch file information of the scenario stored in the previous step. The next time WeChat is cold started, the prefetch file information of the scenario can be read from the previously built prefetch file information database.
Because the background load in the WeChat starting scenario is very heavy, I/O performance is poor, and a cold start of WeChat is highly likely to be I/O-bound. Therefore, the prefetch quota is shifted toward the earlier sub-stages of the WeChat cold start to increase the prefetching intensity and ensure the effect: the quotas of stages such as the WeChat message arriving, pulling down the notification bar, and sliding until WeChat appears are raised, to mitigate the burst of I/O reads when WeChat is tapped. According to the adjusted quota of each stage, the file operation sequence of each stage, that is, the prefetch policy, is arranged by indicators such as hot-spot block size, load probability, and operation duration.
Finally, the hot-spot file blocks are prefetched according to the hierarchical multiple-prefetch mechanism.
When executing file prefetching, a stable prefetch amount can be ensured in each stage, and the prefetching of each stage can be executed smoothly. For example, when a WeChat message is received, open, fstat, and mmap are initiated for some of the prefetch files, and read is initiated directly on the critical files that have been fully identified, so that the file reads are completed ahead of time, precisely, and at high speed, greatly improving the performance of this scenario for the user.
The process of determining the hot-spot file blocks of each sub-stage and formulating the prefetch policy for the hot-spot file blocks of each sub-stage in Embodiment 1 is shown in Fig. 2.
The first row in Fig. 2 shows multiple operations or sub-stages, which specifically include: a WeChat message arriving, sliding the screen until the notification bar appears, sliding the notification bar until the WeChat message appears, and tapping the WeChat message. The prefetch file data set corresponding to each operation or sub-stage is shown in the second row of Fig. 2; by learning from these prefetch file data sets, the prefetch files corresponding to each sub-stage, shown in the third row of Fig. 2, can be obtained.
The prefetch file data set may include the prefetch files corresponding to multiple executions of each operation or sub-stage; through learning, the common prefetch files can be extracted from the prefetch file data set, so as to obtain the prefetch files corresponding to each operation or sub-stage.
After the prefetch files corresponding to each operation or sub-stage have been learned, a prefetch policy can be formulated for the prefetch files corresponding to each operation or sub-stage. The prefetch policy may include the size of the file blocks to be prefetched, the addresses of the file blocks, and so on. The prefetch policy of each sub-stage formulated in Embodiment 1 is shown in the fourth row of Fig. 2, where the total amount of files to be prefetched is 20 MB each for the WeChat message arriving, sliding the screen until the notification bar appears, and tapping the WeChat message, and 30 MB for sliding the notification bar until the WeChat message appears. In addition, the fourth row of Fig. 2 also shows, for the prefetch policy of each sub-stage, the files that need to be read and the addresses of those files.
Specifically, the size of the file blocks that need to be read when the WeChat message arrives is 20 MB; the files that need to be opened are f1, f2, and so on, and the files on which a map operation needs to be performed are f1, f3, and so on. For sliding the screen until the notification bar appears, the size of the file blocks that need to be read is 20 MB; the files that need to be opened are f3, f4, and so on, and read initiation is performed on f2 (10, 100), f3 (0, 200), and so on. For sliding the notification bar until the WeChat message appears, the size of the file blocks that need to be read is 30 MB; the files that need to be opened are f5 and f7, and read initiation needs to be performed on f3 (30, 700), f4 (0, 600), and so on. For tapping the WeChat message, the size of the file blocks that need to be read is 20 MB, and unmap/close operations need to be performed on f2 (10, 100), f3 (0, 200), and so on. Here, performing read initiation on f2 (10, 100) means reading 100 file pages starting from address 10 of file f2.
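To make one such per-stage plan concrete, the following hedged sketch expresses a sub-stage's actions (files to open, and (file, start page, page count) ranges on which to initiate reads) as data and executes them. The paths, the 4 KB page size, and the use of posix_fadvise for "read initiation" are assumptions for illustration, not the patent's implementation.

```python
import os

PAGE_SIZE = 4096

# One sub-stage of the prefetch policy, loosely following the Fig. 2 example:
# which files to open, and which (file, start_page, page_count) ranges to
# initiate reads on.  Paths are illustrative placeholders.
STAGE_PLAN = {
    "open": ["/data/app/f3", "/data/app/f4"],
    "read_initiate": [("/data/app/f2", 10, 100), ("/data/app/f3", 0, 200)],
}


def run_stage(plan):
    """Execute one sub-stage's open and read-initiation actions."""
    fds = {}
    for path in plan.get("open", []):
        fds[path] = os.open(path, os.O_RDONLY)
    for path, start_page, pages in plan.get("read_initiate", []):
        if path not in fds:
            fds[path] = os.open(path, os.O_RDONLY)
        # "Read initiation": ask the kernel to start reading these pages now,
        # e.g. f2 (10, 100) = 100 file pages starting from page 10 of f2.
        os.posix_fadvise(fds[path], start_page * PAGE_SIZE, pages * PAGE_SIZE,
                         os.POSIX_FADV_WILLNEED)
    return fds  # kept open so a later stage can map, read, and close them


if __name__ == "__main__":
    run_stage(STAGE_PLAN)
```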
In Embodiment 1, the hot-spot file blocks required when WeChat starts can be accurately identified by the file cache reading method in the embodiments of this application, so that the hot-spot file pages of WeChat can be prefetched and prepared in advance the next time WeChat is started, improving the cache hit rate, thereby reducing the starting time and improving user experience.
Fig. 3 shows the effect of applying the file cache reading method of the embodiments of this application to the WeChat starting scenario. As shown in Fig. 3, in prefetched starting, the hot-spot file blocks of each operation in the process of starting WeChat are read into the cache in advance (specifically, the hot-spot file blocks required for starting are prepared in advance and accessed sequentially, corresponding to the prefetch process in Fig. 3). Compared with the normal starting process of the application, the application can be started in less time.
The file cache reading method in the embodiments of this application is described in detail above with reference to Fig. 1 to Fig. 3. The file cache reading apparatus in the embodiments of this application is described below with reference to Fig. 4 and Fig. 5. It should be understood that the file cache reading apparatus described in Fig. 4 and Fig. 5 can perform the steps of the file cache reading method shown in Fig. 1 to Fig. 3; for brevity, repeated descriptions are appropriately omitted when describing the apparatuses shown in Fig. 4 and Fig. 5 below.
Fig. 4 is a schematic block diagram of a file cache reading apparatus according to an embodiment of this application. The apparatus 200 shown in Fig. 4 specifically includes:
a prediction module 201, configured to predict at least one operation that a user is about to perform, where the at least one operation is used to start an application scenario of an electronic device;
a determining module 202, configured to determine the hot-spot file blocks corresponding to the at least one operation; and
a reading module 203, configured to read, before the at least one operation occurs, the hot-spot file blocks corresponding to the at least one operation into a cache.
In this application, the file cache reading apparatus predicts the operation that the user is about to perform and reads the hot-spot file blocks corresponding to the operation into the cache in advance, which can accelerate the starting of the application scenario corresponding to the user operation.
Optionally, in an embodiment, the prediction module 201 is specifically configured to: obtain a trigger message; and determine, according to a preset first correspondence, the operation corresponding to the trigger message as the at least one operation, where the first correspondence is used to indicate the correspondence between different trigger messages and the user's operations.
Optionally, in an embodiment, the hot-spot file blocks corresponding to the at least one operation are multiple hot-spot file blocks, and the reading module 203 is specifically configured to: perform read operations on the multiple hot-spot file blocks simultaneously, where the read operation includes multiple sub-stages, and each of the multiple hot-spot file blocks is in a different sub-stage of the read operation.
Optionally, in an embodiment, the multiple sub-stages include opening, locating, read initiation, and read response.
Optionally, in an embodiment, the determining module 202 is further configured to determine a prefetch policy according to the system load of the electronic device and the size of the hot-spot file blocks corresponding to the at least one operation, where the prefetch policy is used to indicate the size or quantity of the hot-spot file blocks that the at least one operation needs to prefetch; and the reading module 203 is specifically configured to read the hot-spot file blocks corresponding to the at least one operation into the cache according to the prefetch policy.
Optionally, in an embodiment, the system load of the electronic device includes at least one of the CPU usage of the electronic device, the free memory size, and processes waiting on I/O.
Optionally, in an embodiment, the apparatus 200 further includes a learning module 204, where the learning module 204 is specifically configured to: obtain historical data of the electronic device, where the historical data includes the hot-spot file blocks that the electronic device reads into the cache under different operations of the user; determine, according to the historical data, a second correspondence between the user's operations and the hot-spot file blocks read into the cache; and generate second correspondence information corresponding to the second correspondence. The determining module 202 is specifically configured to determine, according to the second correspondence information, the hot-spot file blocks corresponding to the at least one operation.
Fig. 5 is a schematic block diagram of a file cache reading apparatus according to an embodiment of this application. The apparatus 300 shown in Fig. 5 specifically includes:
a memory 301, configured to store a program; and
a processor 302, configured to execute the program stored in the memory 301. When the program in the memory 301 is executed, the processor 302 is specifically configured to: predict at least one operation that a user is about to perform, where the at least one operation is used to start an application scenario of an electronic device; determine the hot-spot file blocks corresponding to the at least one operation; and, before the at least one operation occurs, read the hot-spot file blocks corresponding to the at least one operation into a cache.
In this application, the file cache reading apparatus predicts the operation that the user is about to perform and reads the hot-spot file blocks corresponding to the operation into the cache in advance, which can accelerate the starting of the application scenario corresponding to the user operation.
The processor 302 in the apparatus 300 is equivalent to the prediction module 201, the determining module 202, and the reading module 203 in the apparatus 200.
The apparatus 200 and the apparatus 300 may specifically be electronic devices. When the apparatus 300 is an electronic device, the processor 302 may specifically be the central processing unit (CPU) of the electronic device.
The file cache reading system in the embodiments of this application is described in detail below with reference to Fig. 6. It should be understood that the file cache reading system shown in Fig. 6 can perform the file cache reading method in the embodiments of this application. In addition, the file cache reading system in the embodiments of this application may specifically run on the intelligent operating system platform software inside a smart device, a server, and the like.
Fig. 6 shows a schematic block diagram of the file cache reading system according to an embodiment of this application. In terms of layering, the file cache reading system shown in Fig. 6 resides in the operating system of a smart device or server. When an application running on the operating system starts, the operating system needs to fetch the corresponding files from disk into the cache so that the application can start. The file cache reading system shown in Fig. 6 is divided by function into an event collection subsystem, a scenario recognition subsystem, a prefetch policy formulation subsystem, and a prefetch execution subsystem. The subsystems of the file cache reading system are described in detail below.
The events collected by the event collection subsystem may be various user operations, changes in device state (such as the screen turning on and off), switching between foreground and background applications on the device, and so on.
The event collection subsystem internally includes a scenario-event instrumentation module. Specifically, a user-scenario instrumentation module can be added to the original Graphics, Windowing and Events Subsystem (GWES) of the operating system. The user-scenario instrumentation module is responsible for instrumenting the user-scenario-related events issued by the event manager module of the original GWES (which amounts to marking certain events collected in the event subsystem), and passes the scenario events obtained through instrumentation to the scenario recognition subsystem, which completes the recognition of the specific scenario; after the corresponding scenario has been recognized, the next operation that the user will perform can be predicted.
The scenario recognition subsystem is further divided by function into the following functional modules, whose specific functions are as follows:
A scenario pre-recognition module, configured to pre-recognize each sub-stage related to a scenario according to the user events from the scenario-event instrumentation module, in combination with factors such as the time, the foreground application, and user habits.
A user scenario can be refined into multiple stages according to the time sequence of the user's operations, and needs to include the stages before the scenario is entered, so that the scenario can be predicted. The scenario result predicted and recognized by the scenario pre-recognition module can first be passed to the scenario hot-spot file block collection module to generate a data set for it. Then, after the scenario hot-spot file blocks have been successfully identified, the scenario pre-recognition result can be delivered to the policy formulation subsystem to generate a policy for it.
The pre-recognition, by the scenario pre-recognition module, of each sub-stage related to a scenario is equivalent to the foregoing prediction of the operations that the user is about to perform.
A scenario hot-spot file block collection module, configured to receive the scenario sub-stage sent by the scenario pre-recognition module, and to collect and record the hot-spot file block data of this stage.
Furthermore, the hot-spot file blocks can be refined to the granularity of file pages, including information such as [file, offset, page]. After multiple rounds of collection, when a sufficient scenario-file hot-spot data set has been obtained, the data set is passed to the scenario hot-spot file block recognition module for it to identify the scenario hot spots.
A scenario hot-spot file block recognition module, configured to identify the scenario hot-spot file blocks from the scenario file-block hot-spot data set provided by the scenario hot-spot file block collection module according to a recognition algorithm (sorting by file block size, load ratio, and so on), and to send the recognition result to the scenario hot-spot file block storage module. The scenario hot-spot file block information is the hot-spot file block information queue of each sub-stage of the scenario.
A scenario hot-spot file block storage module, responsible for receiving the hot-spot block information from the scenario hot-spot file block recognition module and persistently storing it to a database. The database includes but is not limited to a local disk, an in-memory database, and a cloud database.
The prefetch policy formulation subsystem is responsible for, after the scenario hot-spot file blocks have been stored, receiving the scenario key events from scenario pre-recognition and, in combination with the system resource load, formulating the corresponding prefetch policy for the scenario. A scenario event includes but is not limited to [application + interface + behavior], and a prefetch policy includes [scenario sub-stage] + [hot-spot file blocks] + [prefetch sub-stage].
The prefetch policy formulation subsystem includes a system resource load assessment module and a prefetch policy formulation module. The resource load assessment module is configured to collect the system resource load of the operating system (CPU/MEM/IO, and so on), assess the system load pressure level, and notify the prefetch policy formulation module of the system load pressure level.
The prefetch policy formulation module is configured to read the original scenario hot-spot file block information from the database, adjust the hot-spot file block load quota of each sub-stage of the scenario according to the system resource load pressure level, and formulate the file cache prefetch policy for this occasion.
The prefetch execution subsystem performs multi-stage, concurrent prefetching of the hot-spot blocks after receiving the determined policy. Multi-stage prefetching means that one prefetch of a hot-spot file block includes multiple stages: opening, locating, read initiation, read response, and so on. Concurrent prefetching means that, at a given moment, different stages of multiple hot-spot file blocks may be executed simultaneously.
Those of ordinary skill in the art may be aware that list described in conjunction with the examples disclosed in the embodiments of the present disclosure Member and algorithm steps can be realized with the combination of electronic hardware or computer software and electronic hardware.These functions are actually It is implemented in hardware or software, the specific application and design constraint depending on technical solution.Professional technician Each specific application can be used different methods to achieve the described function, but this realization is it is not considered that exceed Scope of the present application.
It is apparent to those skilled in the art that for convenience and simplicity of description, the system of foregoing description, The specific work process of device and unit, can refer to corresponding processes in the foregoing method embodiment, and details are not described herein.
In several embodiments provided herein, it should be understood that disclosed systems, devices and methods, it can be with It realizes by another way.For example, the apparatus embodiments described above are merely exemplary, for example, the unit It divides, only a kind of logical function partition, there may be another division manner in actual implementation, such as multiple units or components It can be combined or can be integrated into another system, or some features can be ignored or not executed.Another point, it is shown or The mutual coupling, direct-coupling or communication connection discussed can be through some interfaces, the indirect coupling of device or unit It closes or communicates to connect, can be electrical property, mechanical or other forms.
The unit as illustrated by the separation member may or may not be physically separated, aobvious as unit The component shown may or may not be physical unit, it can and it is in one place, or may be distributed over multiple In network unit.It can select some or all of unit therein according to the actual needs to realize the mesh of this embodiment scheme 's.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk or an optical disc.
The foregoing descriptions are merely specific embodiments of this application, but the protection scope of this application is not limited thereto. Any change or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for reading a file cache, comprising:
predicting at least one operation to be performed by a user, wherein the at least one operation is used to start an application scenario of an electronic device;
determining a hot spot file block corresponding to the at least one operation; and
before the at least one operation occurs, reading the hot spot file block corresponding to the at least one operation into a cache.
2. The method according to claim 1, wherein the predicting at least one operation of the user comprises:
obtaining a trigger message; and
determining, according to a preset first correspondence, an operation corresponding to the trigger message as the at least one operation, wherein the first correspondence indicates a correspondence between different trigger messages and operations of the user.
3. The method according to claim 1 or 2, wherein the hot spot file block corresponding to the at least one operation comprises multiple hot spot file blocks, and the reading the hot spot file block corresponding to the at least one operation into the cache comprises:
performing read operations on the multiple hot spot file blocks simultaneously, wherein a read operation comprises multiple sub-stages, and the hot spot file blocks of the multiple hot spot file blocks are respectively in different sub-stages of the read operation.
4. The method according to claim 3, wherein the multiple sub-stages comprise opening, positioning, read initiation and read response.
5. The method according to any one of claims 1 to 4, further comprising:
determining a prefetch policy according to a system load of the electronic device and a size of the hot spot file block corresponding to the at least one operation, wherein the prefetch policy indicates a size or a quantity of hot spot file blocks that need to be prefetched for the at least one operation; and
the reading the hot spot file block corresponding to the at least one operation into the cache comprises: reading the hot spot file block corresponding to the at least one operation into the cache according to the prefetch policy.
6. The method according to claim 5, wherein the system load of the electronic device comprises at least one of a CPU occupancy rate of the electronic device, a free memory size, and processes waiting due to I/O.
7. The method according to any one of claims 1 to 6, further comprising:
obtaining historical data of the electronic device, wherein the historical data comprises hot spot file blocks read into the cache by the electronic device under different operations of the user;
determining, according to the historical data, a second correspondence between the operations of the user and the hot spot file blocks read into the cache; and
generating second correspondence information corresponding to the second correspondence;
wherein the determining the hot spot file block corresponding to the at least one operation comprises:
determining, according to the second correspondence information, the hot spot file block corresponding to the at least one operation.
8. An apparatus for reading a file cache, comprising:
a prediction module, configured to predict at least one operation to be performed by a user, wherein the at least one operation is used to start an application scenario of an electronic device;
a determining module, configured to determine a hot spot file block corresponding to the at least one operation; and
a reading module, configured to read the hot spot file block corresponding to the at least one operation into a cache before the at least one operation occurs.
9. The apparatus according to claim 8, wherein the prediction module is specifically configured to:
obtain a trigger message; and
determine, according to a preset first correspondence, an operation corresponding to the trigger message as the at least one operation, wherein the first correspondence indicates a correspondence between different trigger messages and operations of the user.
10. The apparatus according to claim 8 or 9, wherein the hot spot file block corresponding to the at least one operation comprises multiple hot spot file blocks, and the reading module is specifically configured to:
perform read operations on the multiple hot spot file blocks simultaneously, wherein a read operation comprises multiple sub-stages, and the hot spot file blocks of the multiple hot spot file blocks are respectively in different sub-stages of the read operation.
11. The apparatus according to claim 10, wherein the multiple sub-stages comprise opening, positioning, read initiation and read response.
12. The apparatus according to any one of claims 8 to 11, wherein the determining module is further configured to determine a prefetch policy according to a system load of the electronic device and a size of the hot spot file block corresponding to the at least one operation, wherein the prefetch policy indicates a size or a quantity of hot spot file blocks that need to be prefetched for the at least one operation; and
the reading module is specifically configured to read the hot spot file block corresponding to the at least one operation into the cache according to the prefetch policy.
13. The apparatus according to claim 12, wherein the system load of the electronic device comprises at least one of a CPU occupancy rate of the electronic device, a free memory size, and processes waiting due to I/O.
14. The apparatus according to any one of claims 8 to 13, wherein the apparatus further comprises:
a learning module, wherein the learning module is specifically configured to:
obtain historical data of the electronic device, wherein the historical data comprises hot spot file blocks read into the cache by the electronic device under different operations of the user;
determine, according to the historical data, a second correspondence between the operations of the user and the hot spot file blocks read into the cache; and
generate second correspondence information corresponding to the second correspondence;
and the determining module is specifically configured to:
determine, according to the second correspondence information, the hot spot file block corresponding to the at least one operation.
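Purely for illustration, the method recited in claims 1, 2 and 5 above could be exercised end-to-end roughly as follows; the trigger table, the learned operation-to-blocks table, the file paths and the reuse of the prefetch_concurrently() sketch given earlier are all assumptions, not the claimed implementation:

```python
# Hypothetical first correspondence: trigger message -> predicted user operation.
TRIGGER_TO_OPERATION = {
    "unlock_screen": "open_camera",
    "tap_launcher_icon:gallery": "open_gallery",
}

# Hypothetical second correspondence learned from history: operation -> hot spot blocks.
OPERATION_TO_HOT_BLOCKS = {
    "open_camera":  [("/data/app/camera/base.apk", 0, 1 << 20)],
    "open_gallery": [("/data/app/gallery/base.apk", 0, 512 << 10)],
}

def on_trigger(trigger_message: str) -> None:
    """Claim 1 flow: predict the operation, determine its hot spot file blocks,
    and read them into the cache before the operation occurs."""
    operation = TRIGGER_TO_OPERATION.get(trigger_message)     # predict the operation
    if operation is None:
        return
    blocks = OPERATION_TO_HOT_BLOCKS.get(operation, [])       # determine hot spot blocks
    if blocks:
        prefetch_concurrently(blocks)                         # read into cache in advance
```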
CN201810399282.5A 2018-04-28 2018-04-28 The read method and device of file cache Pending CN110427582A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810399282.5A CN110427582A (en) 2018-04-28 2018-04-28 The read method and device of file cache
PCT/CN2019/084476 WO2019206260A1 (en) 2018-04-28 2019-04-26 Method and apparatus for reading file cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810399282.5A CN110427582A (en) 2018-04-28 2018-04-28 The read method and device of file cache

Publications (1)

Publication Number Publication Date
CN110427582A true CN110427582A (en) 2019-11-08

Family

ID=68294816

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810399282.5A Pending CN110427582A (en) 2018-04-28 2018-04-28 The read method and device of file cache

Country Status (2)

Country Link
CN (1) CN110427582A (en)
WO (1) WO2019206260A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101636870B1 (en) * 2010-02-26 2016-07-06 삼성전자주식회사 Method and apparatus for generating minimal boot image
CN103885901B (en) * 2012-12-21 2019-06-25 联想(北京)有限公司 File reading, storage equipment and electronic equipment
US9632849B2 (en) * 2015-09-15 2017-04-25 Salesforce.Com, Inc. System having in-memory buffer service, temporary events file storage system and events file uploader service
CN107861886A (en) * 2017-11-28 2018-03-30 青岛海信电器股份有限公司 Data cached processing method, device and terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1204102A (en) * 1998-06-04 1999-01-06 中国地图出版社 Image processing method for electronic map
CN102255963A (en) * 2011-07-01 2011-11-23 清华大学 Method for providing push service for remote file system as required
CN105824820A (en) * 2015-01-04 2016-08-03 华为技术有限公司 Media file buffer memory method and device
CN106572381A (en) * 2016-11-07 2017-04-19 青岛海信电器股份有限公司 Processing method of photo thumbnail and intelligent television
CN107450860A (en) * 2017-08-15 2017-12-08 湖南安存科技有限公司 A kind of map file pre-head method based on distributed storage

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114077588A (en) * 2020-08-20 2022-02-22 荣耀终端有限公司 Pre-reading method and device
CN115079959A (en) * 2022-07-26 2022-09-20 荣耀终端有限公司 File management method and device and electronic equipment
CN115883910A (en) * 2022-12-27 2023-03-31 天翼云科技有限公司 Progressive elastic caching method and device for fragmented video acceleration

Also Published As

Publication number Publication date
WO2019206260A1 (en) 2019-10-31

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination