CN108549556B - Application program acceleration method, device, terminal and storage medium - Google Patents


Info

Publication number
CN108549556B
CN108549556B (application CN201810339291.5A)
Authority
CN
China
Prior art keywords
file
loading
target
usage
application program
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810339291.5A
Other languages
Chinese (zh)
Other versions
CN108549556A (en)
Inventor
段宽军
Current Assignee
Tencent Technology Beijing Co Ltd
Original Assignee
Tencent Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Beijing Co Ltd
Priority claimed from CN201810339291.5A
Publication of CN108549556A
Application granted
Publication of CN108549556B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/445: Program loading or initiating
    • G06F 9/44505: Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/4451: User profiles; Roaming
    • G06F 9/44521: Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide an application acceleration method, apparatus, terminal and storage medium. The method comprises: acquiring a disk history load record for each usage file of a target application; determining, from each usage file's disk history load record, the likelihood that the file will be loaded the next time the target application runs; determining target usage files according to these loading likelihoods, a target usage file's loading likelihood being higher than that of any non-target usage file; and caching the target usage files, so that when the target application next runs and loads a target usage file, the file is loaded from the cache. Embodiments of the invention improve the applicability and effect of application acceleration.

Description

Application program acceleration method, device, terminal and storage medium
Technical Field
The invention relates to the field of data reading and writing, and in particular to an application acceleration method, apparatus, terminal and storage medium.
Background
Application acceleration means caching, around application startup, the usage files to be loaded, so that when the application loads a usage file it can read the cached copy directly from the cache; this reduces disk reads and writes and thereby accelerates the application.
At present, application acceleration is typically implemented with a fixed set of cached usage files. Although this achieves some acceleration, it applies only to the fixed cached files and not to the other usage files the application may load; how to improve the applicability and effect of application acceleration has therefore become a problem for those skilled in the art.
Disclosure of Invention
In view of this, embodiments of the present invention provide an application acceleration method, apparatus, terminal and storage medium to improve the applicability and effect of application acceleration.
To achieve the above purpose, embodiments of the present invention provide the following technical solutions:
An application acceleration method, comprising:
acquiring a disk history load record for each usage file of a target application;
determining, according to each usage file's disk history load record, the loading likelihood of that file the next time the target application runs;
determining target usage files according to the loading likelihood of each usage file, wherein the loading likelihood of a target usage file is higher than that of a non-target usage file; and
caching the target usage files, so that when the target application next runs and loads a target usage file, the file is loaded from the cache.
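As a rough illustration of the four claimed steps, the sketch below scores each usage file by how often it was loaded and caches the highest-scoring files that fit a cache limit. All names (`LoadRecord`, `accelerate`) and the scoring shortcut are invented for illustration; they are not from the patent:

```python
from dataclasses import dataclass

@dataclass
class LoadRecord:
    """One disk load of a usage file (fields per the description:
    file, offset, data size, load time)."""
    path: str
    offset: int
    size: int
    time: float

def accelerate(history: dict, cache_limit: int) -> list:
    """Steps 2-4 of the claim, heavily simplified: loading likelihood is
    approximated by load count; the highest-likelihood files that fit
    within cache_limit are selected as target usage files."""
    # Step 2: likelihood ~ how often each usage file was loaded.
    likelihood = {path: len(recs) for path, recs in history.items()}
    targets, used = [], 0
    for path in sorted(likelihood, key=likelihood.get, reverse=True):
        # Approximate file size by the furthest byte ever loaded.
        file_size = max(r.offset + r.size for r in history[path])
        if used + file_size <= cache_limit:
            targets.append(path)   # step 3: select target usage files
            used += file_size      # step 4 would now cache the file bytes
    return targets
```

A real implementation would also weigh data usage coverage and the time saved per file, as the description explains later.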
An embodiment of the present invention further provides an application acceleration apparatus, comprising:
a load record acquisition module, configured to acquire the disk history load record of each usage file of a target application;
a loading likelihood determination module, configured to determine, according to each usage file's disk history load record, the loading likelihood of that file the next time the target application runs;
a target usage file determination module, configured to determine target usage files according to the loading likelihood of each usage file, the loading likelihood of a target usage file being higher than that of a non-target usage file; and
a cache module, configured to cache the target usage files, so that when the target application next runs and loads a target usage file, the file is loaded from the cache.
An embodiment of the present invention further provides a terminal, comprising at least one memory and at least one processing chip; the memory stores a program, and the processing chip calls the program to implement the above application acceleration method.
An embodiment of the present invention further provides a storage medium on which is recorded a program executable by a processing chip to implement the above application acceleration method.
Based on the above technical solution, the application acceleration method provided in the embodiments of the present invention acquires the disk history load record of each usage file of a target application and, from those records, determines the loading likelihood of each usage file the next time the target application runs. Target usage files are then determined so that their loading likelihood is higher than that of non-target usage files. When the target application next runs and loads a target usage file, the file can be read from the cache rather than from disk, accelerating that run.
Because the loading likelihood of each usage file is determined by analyzing a large number of disk history load records, the determined likelihoods are more accurate. Caching the usage files with high loading likelihood means the cached files are very likely to be loaded on the next run, improving the accuracy, applicability and effect of application acceleration.
Drawings
To illustrate the embodiments of the present invention more clearly, the drawings used in their description are briefly introduced below. The drawings described below depict only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is an implementation example of an application acceleration method provided by an embodiment of the present invention;
FIG. 2 is a flowchart of an application acceleration method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an example of determining a target usage file;
FIG. 4 is another flowchart of an application acceleration method according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for determining the loading likelihood of each usage file;
FIG. 6 is an exemplary diagram of determining the loading likelihood of a usage file;
FIG. 7 is a flowchart illustrating a target application acceleration method according to an embodiment of the present invention;
FIG. 8 is an exemplary diagram of loading a cache file across applications;
FIG. 9 is a flowchart of a method for matching file paths;
FIG. 10 is an exemplary diagram of read and write data using a read and write cache;
FIG. 11 is a flow chart of the write-read cache writing data to disk;
FIG. 12 is another flow chart of the write-read cache writing data to disk;
FIG. 13 is a flow chart of writing data directly to disk;
FIG. 14 is a diagram illustrating an application example of the application acceleration method according to an embodiment of the present invention;
FIG. 15 is a structural block diagram of an application acceleration apparatus according to an embodiment of the present invention;
FIG. 16 is another structural block diagram of an application acceleration apparatus according to an embodiment of the present invention;
FIG. 17 is a further structural block diagram of an application acceleration apparatus according to an embodiment of the present invention;
FIG. 18 is yet another structural block diagram of an application acceleration apparatus according to an embodiment of the present invention;
FIG. 19 is a hardware configuration diagram of a terminal.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present invention; all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
To solve the problems noted in the Background, the inventor first considered caching the usage files the application used in its previous run, so that the next run could load them directly from the cache. This idea has a defect, however: although the files used last time may be used again, which usage files an application loads in response to user operations is, given the randomness of those operations, essentially unpredictable at startup. Simply caching last run's files to accelerate the next run is therefore a crude solution of limited accuracy.
To overcome this defect, the inventor proposes a novel application acceleration concept: monitor the disk history load records of the application's usage files, determine from an analysis of those records the likelihood that each usage file will be loaded on the application's next run, and cache the files with high loading likelihood so that they can be loaded directly from the cache next time.
It should be noted that this concept differs essentially from caching the files used in the previous run. The latter relies only on the most recent run, lacks any data-analysis step, caches files indiscriminately, and so achieves low accuracy.
By contrast, by introducing analysis of disk history load records, the proposed concept determines from a big-data perspective a more accurate loading likelihood for each usage file on the next run, and by caching the high-likelihood files it raises the probability that the cached files are actually loaded, significantly improving the acceleration effect of the next run. The two ideas follow different technical directions and differ in essence.
Based on this novel application acceleration concept, the application acceleration method provided in the embodiments of the present invention pre-determines, from the disk history load records of each usage file of a target application, the likelihood that each file will be loaded the next time the target application runs (that is, during its next startup and/or during use after that startup). Files with high loading likelihood are cached preferentially, so that on the next run they can be read directly from the cache, reducing disk reads and writes and improving the applicability and effect of application acceleration.
Based on the above, FIG. 1 shows an optional example of a terminal implementing the application acceleration method provided by an embodiment of the present invention. As shown in FIG. 1:
A disk read-write monitoring process monitors the disk history load records of all usage files of the target application; these records include the disk read and write history.
After the terminal's operating system restarts or the target application closes, a resident service process (started along with the operating system; it may be a process registered as a system service for the target application, or another form of resident service process) determines, from the disk history load records gathered by the monitoring process, the loading likelihood of each usage file on the target application's next run. It then determines the target usage files, whose loading likelihood is higher than that of non-target usage files (that is, the target usage files are the high-likelihood files among all usage files), and caches them.
When the target application next runs and needs to load a target usage file, the file is loaded from the cache rather than from disk, reducing disk reads and writes and accelerating the target application.
The resident service process is a process service running on the terminal, implemented by the terminal's processing chip executing corresponding program code.
The example above uses an operating-system restart or closure of the target application as the precondition for determining the high-likelihood target usage files, but this is only an optional precondition; the determination may also be triggered while the target application is running but has been unused for a certain duration.
Based on the core idea above, FIG. 2 shows an optional flow of the application acceleration method provided in an embodiment of the present invention. The method is applicable to a terminal and may specifically be implemented by the terminal's resident service process. Referring to FIG. 2, the method may include:
and step S10, acquiring the disk history loading record of each use file of the target application program.
The target application program may be an application program to be accelerated in the embodiment of the present invention, and may be any specified application program in application programs installed in the terminal, such as a browser program, an instant messaging program, and the like;
in the embodiment of the invention, the disk read-write monitoring process can monitor the disk loading (including reading and writing) of each used file of the target application program in real time or at regular time, and monitors the disk loading record of each used file, so that the disk historical loading record of each used file can be monitored as time goes on; the history loading records of the disks of the used files can be continuously increased along with the continuous monitoring of the disk reading and writing monitoring process;
as an optional implementation, the embodiment of the present invention may implement updating of the usage file cached by the target application program after the operating system is started or the target application program is closed; based on the method, after the operating system is started or the target application program is closed, the terminal can respond to the starting of the operating system or the closing of the target application program and acquire the historical loading records of the magnetic disk of each use file of the target application program; at the moment, the target application program is in an unoperated state, so that the disk history loading record of each use file of the target application program can be acquired through a resident service process, such as the resident service process using the target application program;
further, since the recent usage can better reflect the possibility of loading the usage file when the target application program runs next time, the embodiment of the present invention may also obtain the disk history loading records of each usage file of the target application program within the current set time period, and not necessarily obtain all the disk history loading records of each usage file.
As an optional implementation, a certain time after the operating system starts or after the target application closes (the time value may be set according to the actual situation), the resident service process acquires the disk history load records of the target application's usage files within the currently set time period.
Of course, an embodiment of the present invention may also acquire the records while the target application is running but has been unused for a certain duration (for example, unused by the user for a set length of time), in order to update the cached usage files.
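The "currently set time period" filter described above can be sketched as follows. The record shape and the explicit `now` parameter are assumptions for illustration; the patent does not specify them:

```python
import time

def recent_records(records, window_seconds, now=None):
    """Keep only the disk load records whose load time falls within the
    currently set time period, since recent usage better predicts which
    files the next run will load."""
    if now is None:
        now = time.time()
    cutoff = now - window_seconds
    return [r for r in records if r["time"] >= cutoff]
```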
In the embodiments of the present invention, a usage file of the target application is any file the target application may use at startup and/or after startup, such as: program files of the target application (a startup file, dll files, and the like), user data files (user settings, user operation history, and the like), system files the target application uses, and third-party program files it uses (a third-party input method, fonts, plug-ins called by the target application, and the like). The form of the usage files depends on the target application's type and design and is not fixed.
Optionally, a usage file's disk history load record may include, for each read or write: the usage file involved, the offset within the file, the data size, and the load time.
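A record entry with the four fields enumerated above (usage file, offset, data size, load time) might be serialized and parsed as below. The line format is purely illustrative, since the patent does not specify one:

```python
import re

# Assumed textual form of one disk-load record; not specified by the patent.
RECORD_RE = re.compile(
    r"(?P<path>\S+) offset=(?P<offset>\d+) size=(?P<size>\d+) time=(?P<time>\d+)"
)

def parse_record(line: str) -> dict:
    """Parse one monitoring-log line into the four record fields."""
    m = RECORD_RE.fullmatch(line.strip())
    if m is None:
        raise ValueError(f"unrecognized record: {line!r}")
    return {"path": m["path"], "offset": int(m["offset"]),
            "size": int(m["size"]), "time": int(m["time"])}
```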
Step S11: determine, according to each usage file's disk history load record, the loading likelihood of each usage file the next time the target application runs.
Optionally, from the disk history load records, an embodiment of the present invention determines each usage file's usage degree and loading-acceleration efficiency. A file's usage degree indicates at least how many times the file was loaded, and may also reflect the file's data usage coverage; its loading-acceleration efficiency indicates the time saved by loading the file from the cache rather than from disk.
The loading likelihood of each usage file on the next run is then determined on the principle that usage degree and loading-acceleration efficiency are positively correlated with loading likelihood: the higher a file's usage degree and loading-acceleration efficiency, the higher its loading likelihood.
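One way to realize the stated positive correlation is a weighted linear score. The weights, the linear form, and the exact factor definitions below are assumptions for illustration only; the patent requires only that the score increase with both factors:

```python
def loading_possibility(load_count, coverage, time_saved_ms,
                        w_usage=0.5, w_accel=0.5):
    """Combine usage degree (here: load count weighted by data usage
    coverage) and loading-acceleration efficiency (time saved by serving
    from cache) into a single loading-likelihood score. Any combination
    that is monotonically increasing in both factors would satisfy the
    stated principle."""
    usage_degree = load_count * coverage
    return w_usage * usage_degree + w_accel * time_saved_ms
```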
Step S12: determine the target usage files according to the loading likelihood of each usage file, the loading likelihood of a target usage file being higher than that of a non-target usage file.
Optionally, at least one target usage file is determined, and preferably several.
Step S13: cache the target usage files, so that when the target application next runs and loads a target usage file, the file is loaded from the cache.
Optionally, as one implementation of determining the target usage files, the size of the cache space occupied by the target application may be set; for example, a memory area may be allocated to the target application in kernel memory, and the size of that area (that is, the size of the cache space occupied by the target application) may be set.
Part or all of the target application's cache space may be used to cache usage files. For convenience, the space used for caching usage files is called the first cache space size; it may be part or all of the target application's cache space. Usage files are then cached in descending order of loading likelihood until the total size of the cached files approaches the first cache space size.
Note that the cached files need not exactly reach the first cache space size; the termination condition is that their total size approaches, but does not exceed, the first cache space size.
Correspondingly, the usage files with the highest loading likelihood whose total size approaches the first cache space size are determined to be the target usage files.
As an example, as shown in FIG. 3, suppose the first cache space size is 30 MB and the usage files are sorted by loading likelihood from high to low. If the total size of files 1 to 15 is 29 MB and that of files 1 to 16 is 31 MB, then files 1 to 15, whose total size approaches 30 MB, are selected as the target usage files and cached.
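The FIG. 3 example can be sketched as a greedy pass over the likelihood-sorted files that stops once the next file would push the total past the first cache space size. The function and variable names are illustrative:

```python
def select_target_files(files, first_cache_space):
    """files: (name, size, loading_likelihood) tuples. Take files in
    descending likelihood order and stop when the next file would exceed
    the first cache space size, so the cached total approaches but never
    exceeds the limit (as in FIG. 3: files 1-15 at 29 MB fit a 30 MB
    space, while file 16 would reach 31 MB and is excluded)."""
    targets, used = [], 0
    for name, size, score in sorted(files, key=lambda f: f[2], reverse=True):
        if used + size > first_cache_space:
            break  # cached total now "approaches" the limit
        targets.append(name)
        used += size
    return targets
```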
As another optional implementation, a set number of target usage files may be specified, and that number of usage files with the highest loading likelihood determined as the target usage files and cached.
As yet another optional implementation, after sorting the usage files by loading likelihood from high to low, the N top-ranked files (the value of N may be set according to the actual situation; the higher a file's loading likelihood, the earlier it ranks) may be taken as the target usage files and cached.
The embodiments of the present invention do not limit how the target usage files are determined from the loading likelihoods, provided the determined target usage files have higher loading likelihood than the non-target usage files; the number of target usage files and the way they are determined may be chosen according to the actual situation.
Once the target usage files are cached, the target application's next run can load them from the cache instead of from disk, greatly reducing disk reads and writes.
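The resulting read path can be sketched as a simple cache-first lookup. The class and method names are invented for illustration:

```python
class UsageFileCache:
    """Serve a load from the cache when the target usage file was
    cached; otherwise fall back to a disk read."""

    def __init__(self):
        self._cache = {}

    def put(self, path, data):
        self._cache[path] = data  # cache a target usage file

    def load(self, path, read_from_disk):
        if path in self._cache:
            return self._cache[path], "cache"  # no disk read needed
        return read_from_disk(path), "disk"    # fall back to disk
```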
The application acceleration method provided by the embodiments of the present invention acquires the disk history load record of each usage file of the target application and determines from those records the loading likelihood of each file on the next run, so that the target usage files, whose loading likelihood is higher than that of non-target files, can be determined and cached. On its next run, the target application loads those files from the cache rather than from disk, and is thereby accelerated.
Because the loading likelihoods are determined by analyzing a large number of disk history load records, they are more accurate; caching the high-likelihood files means the cached files are very likely to be loaded on the next run, improving the accuracy, applicability and effect of application acceleration.
Further, the method may be executed after the terminal's operating system starts or after the target application closes, updating the cached usage files on the basis of the disk history load records so that the cache holds the files most likely to be loaded on the next run, reducing that run's disk reads and writes. Because the cache is refreshed after every system start or application closure, it is updated specifically for the next run, improving the applicability and effect of acceleration on every run.
It should be noted that there currently exists an acceleration mode that caches the usage files to be loaded only when the application is started. In that mode, the files required for startup (such as the application's startup file and dll files) must still be read from disk, and that disk read-write time still falls within the startup time, so the startup acceleration effect is low.
In the embodiments of the present invention, by contrast, the cached usage files are updated after the operating system starts or the application closes, so the next startup can proceed directly from the cache (for example, from cached startup and dll files loaded at high frequency historically), saving the disk read-write time at startup and significantly improving the startup acceleration effect.
As an optional implementation for determining the loading possibility of each use file, in the embodiment of the present invention, after obtaining the disk history loading record of each use file, the loading possibility of each use file may be determined based on the use degree and the loading acceleration efficiency of each use file; optionally, fig. 4 shows another optional flow of the application acceleration method provided in the embodiment of the present invention, and referring to fig. 4, the method may include:
and step S20, acquiring the disk history loading record of each use file of the target application program.
Optionally, in the embodiment of the present invention, a historical loading record of a disk of each use file of the target application program within a current set time period may be obtained; all disk history loading records for each usage file of the target application may also be obtained.
And step S21, determining the use degree and the loading acceleration efficiency of each use file according to the disk history loading record of each use file.
In the embodiment of the present invention, the usage degree of a usage file may at least indicate the number of times the usage file is loaded, and may of course also be combined with the data usage coverage rate of the usage file.
The loading times of a usage file can be determined from the disk history loading record of that usage file;
the data usage coverage rate of the usage file refers to the coverage rate of the usage file in which data is used; for a use file, in the historical loading process of the use file, only part of data may be used, and the other part of data is not used, and the data use coverage rate of the use file is measured by the proportion of the used data in the use file to the total data of the use file; for example, if a usage file with a size of 100 bytes has 60 bytes of data used in the historical loading process of the usage file, the data usage coverage of the usage file is 60%.
As an optional implementation, the embodiment of the present invention may determine the loading times and the data usage coverage of each usage file according to the disk history loading record of each usage file, so as to determine the usage degree of each usage file according to the loading times and the data usage coverage of each usage file, and enable the loading times and the data usage coverage to be in positive correlation with the usage degree of the usage file, where if the loading times and the data usage coverage are higher, the usage degree of the usage file is higher;
of course, the embodiment of the present invention may also determine the usage level of each usage file only according to the loading times of each usage file.
After determining the loading times and the data use coverage rate of each use file, as an optional implementation, the embodiment of the present invention may set a loading time weight and a data use coverage rate weight, and for any use file, add a combination result (e.g., a multiplication result) of the loading times and the loading time weight to a combination result (e.g., a multiplication result) of the data use coverage rate and the data use coverage rate weight to obtain the use degree of each use file;
since the data usage coverage rate takes the form of a ratio while the loading times takes the form of a count, the data usage coverage weight may be set higher than the loading times weight.
Of course, setting the weight of the loading times and the weight of the data usage coverage rate is only an optional way, and the embodiment of the present invention may also directly combine the loading times of the usage file with the data usage coverage rate (for example, add or multiply, or other combination operation ways), so as to obtain the usage level of the usage file.
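The weighted combination described above can be sketched as follows; this is a sketch, and the weight values 80 and 300 are the example weights given later in this description, not mandated values:

```python
# Illustrative sketch of the usage-degree computation. The weight values
# (80 for loading times, 300 for data usage coverage) are the example
# weights given later in this description.
LOADING_TIMES_WEIGHT = 80
DATA_COVERAGE_WEIGHT = 300  # set higher, since coverage is a ratio in [0, 1]

def usage_degree(loading_times: int, data_usage_coverage: float) -> float:
    """Usage degree = times-weight * loading times + coverage-weight * coverage.

    data_usage_coverage is the fraction of the file's data that was actually
    read during historical loads, e.g. 60 bytes used of 100 bytes -> 0.6.
    """
    return (LOADING_TIMES_WEIGHT * loading_times
            + DATA_COVERAGE_WEIGHT * data_usage_coverage)
```

Both terms grow with use, so the result is positively correlated with both the loading times and the data usage coverage, as required above.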
In the embodiment of the present invention, the loading acceleration efficiency of a usage file may indicate the time saved by loading the usage file from the cache compared with loading it from the disk; as an optional implementation, for any usage file, a first ratio between the memory occupation size of the usage file and the disk loading time saved by the usage file may be determined, so that a corresponding first ratio is obtained for each usage file, and the loading acceleration efficiency of each usage file may then be determined from its corresponding first ratio.
For example, the first ratio corresponding to a usage file may be expressed as: memory occupation size of the usage file (bytes) / disk loading time saved by the usage file (milliseconds);
the disk loading time saved by a usage file is calculated from its disk history loading record; for example, if loading a dll file from the disk is monitored to take 10 seconds, while loading the same dll file from the cache would take only 1 second, the disk loading time saved by caching the dll file is 10 - 1 = 9 seconds.
Optionally, in the embodiment of the present invention, a first ratio between the size of the memory occupied by the used file and the disk loading time saved by the used file may be used as the loading acceleration efficiency of the used file;
in another optional implementation, the embodiment of the present invention may also set a load acceleration efficiency weight, and for any one of the used files, combine (e.g., multiply) a first ratio between a memory occupied size of the used file and a disk loading time saved by the used file with the load acceleration efficiency weight to obtain the load acceleration efficiency of each used file.
Optionally, in the embodiment of the present invention, the load acceleration efficiency weight may be set to be lower than the load times weight, and the data usage coverage weight may be set to be higher than the load times weight.
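A minimal sketch of this weighted first ratio, using the example weight 60 given later in this description (the function name is illustrative):

```python
# The weight 60 is the example loading-acceleration-efficiency weight given
# later in this description; it is lower than the loading-times weight (80).
LOAD_ACCEL_EFFICIENCY_WEIGHT = 60

def load_acceleration_efficiency(memory_bytes: int, saved_disk_ms: float) -> float:
    """Weighted first ratio: memory footprint of the usage file (bytes)
    over the disk loading time saved by caching it (milliseconds)."""
    first_ratio = memory_bytes / saved_disk_ms
    return LOAD_ACCEL_EFFICIENCY_WEIGHT * first_ratio
```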
Step S22, determining the loading possibility of each use file when the target application program operates next time according to the use degree and the loading acceleration efficiency of each use file; the use degree and the loading acceleration efficiency of the used file are positively correlated with the loading possibility of the used file.
Optionally, for a use file, after obtaining the usage degree and the loading acceleration efficiency of the use file, the usage degree and the loading acceleration efficiency of the use file may be added to obtain a loading score of the use file, so as to obtain a loading score of each use file;
therefore, the loading possibility of each use file can be determined according to the loading score of each use file.
For example, the loading score of a usage file may be expressed as: loading times weight × loading times + data usage coverage weight × data usage coverage rate + loading acceleration efficiency weight × (first ratio of the memory occupation size of the usage file to the disk loading time saved by the usage file);
wherein, one optional expression of the usage degree is: loading times weight × loading times + data usage coverage weight × data usage coverage rate; an optional expression of the loading acceleration efficiency is: loading acceleration efficiency weight × (first ratio of the memory occupation size of the usage file to the disk loading time saved by the usage file).
As an alternative implementation, the loading score of each usage file may be directly used as the loading possibility of each usage file.
As another alternative implementation, consider that the more recent a usage file's latest loading time is, the more likely the usage file is to be loaded again; that is, according to the user's usage habits, recently used files are more likely to be used subsequently. In that case, for each usage file, after its usage degree and loading acceleration efficiency are obtained, the two are added to obtain the loading score of that usage file; time attenuation processing is then performed on each loading score so that usage files with earlier latest loading times receive lower scores, and the processed score is taken as the loading possibility. This ensures that a usage file with a higher loading possibility has a higher usage degree and loading acceleration efficiency and a more recent latest loading time (i.e., usage files with higher loading possibility are those loaded more recently).
Step S23, determining the target usage file according to the loading possibility of each usage file, wherein the loading possibility of the target usage file is higher than that of the non-target usage file.
And step S24, caching the target use file, and loading the target use file from the cache when the target application program runs next time and the target use file is loaded.
It can be seen that, in the embodiment of the present invention, after the operating system is started or the target application program is closed, the usage degree and the loading acceleration efficiency of each usage file are re-determined based on the disk history loading records of the usage files of the target application program, and the loading possibility of each usage file in the next run of the target application program is then re-determined from these values, with both the usage degree and the loading acceleration efficiency in positive correlation with the loading possibility;
therefore, after the target use files with high loading possibility are cached according to the loading possibility of each use file, the cached target use files can be used as the use files which are very likely to be loaded when the target application program operates next time, and the reading and writing of the disk are greatly reduced.
It should be noted that, especially when the target application program gains a new usage file (such as a new input method or font) or the data content of a usage file changes (such as an update of a program file like the startup file), the embodiment of the present invention can cache, after the operating system is started or the target application program is closed, the new usage files and the usage files with changed data content that were loaded from the disk during the last run of the target application program, ensuring that the loading of these files is also accelerated, which is an effect that the conventional way of fixedly setting the cached usage files cannot achieve.
Preferably, as an alternative implementation of the specific determination of the loading possibility of each usage file, fig. 5 shows an alternative method flow for determining the loading possibility of each usage file, and referring to fig. 5, the method may include:
and step S30, respectively determining the loading times and the data use coverage rate of each use file according to the disk history loading record of each use file.
Optionally, the data usage coverage rate of a usage file may be: the coverage rate of the data in the usage file being used; that is, in the history loading process corresponding to the history loading record of the magnetic disk, the used data in the use file accounts for the proportion of the total data of the use file.
Step S31, for any usage file, adding the combination result of the loading times and the loading times weight to the combination result of the data usage coverage rate and the data usage coverage rate weight to obtain the usage level of each usage file.
Step S32, determining a ratio of the memory occupation size of each usage file to the saved disk loading time according to the disk history loading record of each usage file, and obtaining a corresponding first ratio of each usage file.
For convenience of description, the embodiment of the present invention may refer to the first ratio corresponding to a usage file as: the ratio of the memory occupation size of the usage file to the saved disk loading time.
Step S33, for any usage file, determining a combination result of the corresponding first ratio and the loading acceleration efficiency weight to obtain the loading acceleration efficiency of each usage file.
It should be noted that steps S30 to S31 and steps S32 to S33 are the determination processes for the usage degree and the loading acceleration efficiency of the usage files, respectively; the two groups of steps are independent and have no required execution order.
Step S34, for any usage file, the usage degree and the loading acceleration efficiency are added to obtain the loading score of each usage file.
As an example, if the loading times weight is 80, the data usage coverage weight is 300, and the loading acceleration efficiency weight is 60, the loading score of a usage file may be: 80 × loading times + 300 × data usage coverage rate + 60 × (memory occupation size of the usage file (bytes) / saved disk loading time (milliseconds)).
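Under these example weights, the full loading score can be written out as a small function; a sketch, with illustrative function and parameter names:

```python
def loading_score(loading_times: int, data_usage_coverage: float,
                  memory_bytes: int, saved_disk_ms: float,
                  w_times: float = 80, w_coverage: float = 300,
                  w_accel: float = 60) -> float:
    """Loading score = usage degree + loading acceleration efficiency,
    using the example weights 80 / 300 / 60 from the description."""
    usage = w_times * loading_times + w_coverage * data_usage_coverage
    acceleration = w_accel * (memory_bytes / saved_disk_ms)
    return usage + acceleration
```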
Step S35, performing time attenuation processing on the loading score of each usage file so that the loading scores of usage files with earlier latest loading times are lower, obtaining the loading possibility of each usage file.
The loading possibility of a usage file can be the score resulting from time attenuation processing of its loading score; there are various time attenuation processing manners for the loading score, and the embodiment of the present invention is not limited to any one, as long as after the processing the loading score of a usage file with an earlier latest loading time is lower and the loading score of a usage file with a later latest loading time is higher.
Optionally, for any usage file, a time decay processing manner of using the loading score of the file may be: determining the monitoring time corresponding to the acquired historical loading record of the disk; for any use file, determining the interval time between the latest loading time and the current time, subtracting the interval time from the monitoring time to obtain a time difference value, and multiplying the ratio of the obtained time difference value to the monitoring time by the corresponding loading score to obtain the loading possibility of each use file.
Wherein, the monitoring time refers to the monitoring time covered by the acquired disk history loading records; the interval time is the time between the latest loading of the usage file and the current time; as one implementation, the interval time may be expressed in days, for example: the current date minus the date the usage file was last loaded;
as an example, it may be assumed that the monitoring time is 15 days (i.e. the acquired disk history loading records are the disk loading records monitored for 15 days), and an example expression of the loading possibility of a usage file may be: loading score × ((15 - (current date - date of last load)) / 15).
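This time attenuation can be sketched as follows, assuming a 15-day monitoring time (the function name is illustrative):

```python
def loading_possibility(loading_score: float, days_since_last_load: float,
                        monitoring_days: float = 15) -> float:
    """Time attenuation of the loading score:
    possibility = score * (monitoring time - interval time) / monitoring time,
    so an earlier latest loading time yields a lower result."""
    return loading_score * (monitoring_days - days_since_last_load) / monitoring_days
```

A file last loaded today keeps its full score, while a file last loaded at the edge of the monitoring window decays toward zero.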
Obviously, the determination manner of the loading possibility of each use file shown in fig. 5 is only optional, and the embodiment of the present invention may also determine the use degree of the use file without using the loading number weight and the data use coverage weight (for example, the use degree of the use file may be determined by directly combining the loading number of the use file and the data use coverage); of course, the loading acceleration efficiency of the used file may also be determined without using the loading acceleration efficiency weight (for example, the first ratio corresponding to the used file may be directly used as the loading acceleration efficiency of the used file);
of course, it is also possible to directly use the loading score of the usage file as the loading possibility of the usage file without performing the time attenuation processing on the loading score.
Optionally, in the foregoing, the obtained disk history loading record of each usage file of the target application may be: in the current set time period, loading records of the disk history of each use file of the target application program; after the set time period is appointed, the historical loading records of the disks of all the use files of the target application program within the current set time period can be obtained; correspondingly, the monitoring time corresponding to the acquired historical loading record of the disk can correspond to the set time period;
for example, if the set time period is 15 days (the specific numerical value can be specified according to the actual situation), the disk history loading records of each use file of the target application program within the current 15 days can be obtained; updating the cached use files based on the disk history loading records of each use file of the monitored target application program within 15 days;
of course, the embodiment of the present invention may also support acquiring all the historical loading records of the disks of each use file of the target application program, and is not limited to acquiring only the historical loading records of the disks within the current set time period.
As an implementation example of the method shown in fig. 5, in the case of obtaining a history loading record of a disk within 15 days from the current time, fig. 6 shows an example of determining a loading possibility for a usage file, as shown in fig. 6:
after the operating system is started or the target application program is closed, the terminal can acquire the historical loading records of the disks of all the use files of the target application program within 15 days from the current time; the disk history loading record of each usage file may include: within 15 days before the current time, the use file, the data size, the loading time and the like loaded by reading and writing data of each time in the history of the disk;
for any use file of the target application program, determining the loading times, the data use coverage rate and a first ratio of the occupied memory size of the use file to the saved disk loading time within 15 days from the current time according to the historical disk loading record of the use file within 15 days from the current time;
adding the product of the loading times and the loading times weight to the product of the data use coverage rate and the data use coverage rate weight to obtain the use degree of the use file;
multiplying the first ratio by the loading acceleration efficiency weight to obtain the loading acceleration efficiency of the use file;
adding the use degree of the use file and the loading acceleration efficiency to obtain a loading score of the use file;
the loading possibility of the usage file is obtained as: loading score of the usage file × ((15 days - (current date - date of last loading of the usage file)) / 15 days).
For example, with a set time period of 15 days, a current date of the 10th of this month, a date of the last loading of the usage file of the 4th of this month, and a loading score of 130; after the time attenuation processing is performed on the loading score, the obtained loading possibility is: 130 × (15 - (10 - 4))/15 = 130 × (9/15) = 78.
It should be noted that the application acceleration method provided in the embodiment of the present invention may be applied in a case where a new use file is added to an application program, or in a case where data content of a cached use file changes.
For example, taking an application as a browser as an example, when a user installs a new translation plug-in (which may also be a new input method, font, plug-in other forms, etc.) in the browser, since the new translation plug-in is very useful, the user may frequently use the new translation plug-in to perform translation; if the traditional mode of using files in the fixed cache is used, because the new translation plug-in is not the use files in the fixed cache, the user can only load the translation plug-in from the disk when starting the browser to call the translation plug-in each time, which undoubtedly increases the disk reading and writing time;
based on the scheme provided by the embodiment of the invention, when a user installs a new translation plug-in the browser, based on the frequent disk history loading record of the translation plug-in, the embodiment of the invention can cache the use file corresponding to the translation plug-in when the cached use file is updated after the operation system is restarted or the browser is closed, so that the use file of the translation plug-in can be directly loaded from the cache after the browser is started next time, the loading acceleration of the translation plug-in is realized, and the acceleration effect of the browser is improved.
For another example, when the program file of the browser is updated or the user adds a new bookmark file, the fixedly cached program file or bookmark file becomes invalid, so the traditional acceleration mode of fixedly caching usage files fails, and the terminal can only load the updated program file or the new bookmark file from the disk, which undoubtedly increases the disk read-write time;
based on the scheme provided by the embodiment of the invention, when the program file is updated or a new bookmark file is added, the embodiment of the invention can cache the updated program file or the new bookmark file after the operating system is restarted or the browser is closed and when the cached use file is updated, so that the updated program file or the new bookmark file can be directly loaded from the cache when the browser is started next time, and the acceleration effect of the browser is improved.
It can be understood that both the fixed caching of usage files and the direct caching of the most recently used files differ in approach from the application program acceleration method provided by the embodiment of the present invention, which can cache the usage files of the target application program more accurately; meanwhile, tests show that in a browser acceleration scenario the application program acceleration method provided by the embodiment of the invention can increase the browser's starting speed by more than 60%, an obvious improvement; tests also show clear improvements, after acceleration, in indexes such as the browser's retention rate, uninstall rate, and white-screen/stutter rate.
After the cache of the target use file is realized based on the application program acceleration method described above, the embodiment of the present invention may record the target use file in the cache list by using the file path of the target use file in the disk as the unique identifier; the cache list takes the file path of each target use file in the disk as an identifier, and records the target use file corresponding to each file path; therefore, the caching of the target use file is realized by caching the cache list;
when the target application program runs next time, whether the use file to be loaded is in the cache can be judged through the cache list, and the target use file existing in the cache is loaded;
optionally, fig. 7 shows another optional flow of the target application program acceleration method provided in the embodiment of the present invention; the method is applicable to a terminal, can be executed after the target usage files are cached and when the target application program next runs, and can specifically be implemented by a process of the target application program of the terminal; referring to fig. 7, the method may include:
step S40, a first file path of the usage file to be loaded is acquired.
When the target application program runs next time (for example, when the target application program is started next time and after the target application program is started next time), for the use file to be loaded by the target application program, the file path of the use file can be obtained in the embodiment of the present invention, and the file path of the use file to be loaded can be referred to as a first file path;
for example, in the starting process of the target application program, the starting file and the dll file to be loaded can be used for realizing the starting of the target application program, and at the moment, a first file path of the starting file and the dll file to be loaded can be obtained; for another example, after the target application program is started, based on the operation of the user, the bookmark file to be loaded and the first file path of the user data file indicated by the operation of the user may be obtained.
The first file path referred to here is the file path of the usage file in the disk; however, the embodiment of the present invention does not simply load the usage file from the disk based on this path. Instead, since the cache list uses the file path of each target usage file in the disk as the identifier and records the target usage file corresponding to each file path, whether the usage file to be loaded is cached is judged based on the first file path of the usage file to be loaded in the disk.
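The cache list described here can be sketched as a mapping keyed by the file's disk path; the helper name and example path below are illustrative, not from the original text:

```python
# Sketch of the cache list: each target usage file is identified uniquely
# by its file path on disk, which maps to the cached file content.
cache_list: dict[str, bytes] = {}

def cache_target_usage_file(disk_path: str, content: bytes) -> None:
    """Record a target usage file in the cache list under its disk path."""
    cache_list[disk_path] = content
```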
And step S41, judging whether the first file path is matched with the file path recorded in the cache list.
Step S42, if the first file path matches the file path recorded in the cache list, loading the target usage file corresponding to the first file path from the cache.
Optionally, if the first file path of the to-be-loaded usage file is not matched with the file path recorded in the cache list, the to-be-loaded usage file is not considered to be in the cache, and the usage file corresponding to the first file path may be read from the disk.
It can be seen that, in the embodiment of the present invention, when a target application program is to load a usage file, a first file path of each usage file to be loaded is filtered, whether the first file path of the usage file to be loaded is matched with a file path recorded in a cache list is determined, if yes, the usage file to be loaded is considered as a cached target usage file, the cache may be used to accelerate the loading of the usage file to be loaded, and a target usage file corresponding to the first file path is loaded from the cache (for example, the target usage file corresponding to the first file path may be loaded from the cache list of the cache); if not, the to-be-loaded use file is not in the cache, the cache cannot be used for loading acceleration of the to-be-loaded use file, and the corresponding use file can be loaded from the disk based on the first file path of the to-be-loaded use file.
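The judgment of steps S41 and S42 amounts to a path lookup in the cache list with a disk fallback; a minimal sketch, where the cache structure and function name are assumptions:

```python
# Cache list sketch: disk file path -> cached file content.
cached_usage_files: dict[str, bytes] = {}

def load_usage_file(first_file_path: str) -> bytes:
    """Step S41: match the first file path against the cache list;
    step S42: load from the cache on a match, otherwise fall back
    to reading the usage file from disk."""
    if first_file_path in cached_usage_files:
        return cached_usage_files[first_file_path]
    with open(first_file_path, "rb") as f:  # not cached: read from disk
        return f.read()
```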
Optionally, the embodiment of the present invention may complete the loading of the cached target usage file in the kernel layer; compared with the traditional approach of completing this loading in the application layer, moving the loading from the application layer to the kernel layer allows, in addition to the target application program loading the cached usage files, cross-process shared use of the cached target usage files;
for example, under the condition that the target use file of the browser is cached, the cached target use file is loaded based on the kernel layer, so that the loading access of the non-browser process to the cached target use file can be realized, the acceleration of the non-browser process when accessing the loaded target use file is realized, the utilization rate of the cached target use file is improved, and the overall data loading speed of the terminal is accelerated;
for example, based on the application program acceleration method provided by the embodiment of the present invention, a target usage file with a high possibility of loading when the browser runs next time may be cached in the kernel memory, as shown in fig. 8, a font file is cached in the kernel memory, and then when the browser runs next time, the font file cached in the kernel memory may be used to implement fast loading of the font file, so that a user may use the font file to perform font input in the browser quickly; meanwhile, when the instant messaging application program runs, if the instant messaging application program also needs to load the font file, the instant messaging application program can use the font file cached in the kernel memory based on the cross-process support of the kernel memory, so that the font file can be quickly loaded without loading the font file from a disk;
correspondingly, in the case that the kernel memory implements caching of the target usage file, in the method shown in fig. 7, the first file path for acquiring the usage file to be loaded may be: and acquiring a first file path of the use file to be loaded in response to the use file loading instruction of the target application program or in response to the use file loading instruction of the non-target application program.
Furthermore, in order to improve the matching efficiency between the first file path of the use file to be loaded and the file path recorded in the cache list, the embodiment of the invention provides an improved file path matching mode;
optionally, fig. 9 shows a flow of a file path matching method, and referring to fig. 9, the method may include:
step S50, the character string length of the first file path of the usage file to be loaded is acquired.
Step S51, judging whether the cache list records at least one candidate file path whose character string length matches the character string length of the first file path; if not, executing step S52, and if so, executing step S53.
Optionally, when the caching of the target usage files is implemented in the kernel memory, the string representations (UNICODE_STRING) of the file paths recorded in the cache list each carry their own string length; after the first file path of the usage file to be loaded is obtained, its string length is compared with the string length of each file path recorded in the cache list to judge whether candidate file paths with a matching string length exist; at least one candidate file path may be determined.
And step S52, loading the corresponding use file from the disk according to the first file path.
If the candidate file path corresponding to the character string length of the first file path is not recorded in the cache list, it indicates that the used file to be loaded is not cached in the kernel memory, and the used file corresponding to the first file path can be loaded from the disk.
Step S53: compare the first file path with each candidate file path in order from the low-level directory to the high-level directory of the path, and determine whether any candidate file path is consistent with the first file path; if not, execute step S52; if so, execute step S54.
After it is determined that the cache list records candidate file paths whose string length matches that of the first file path, the embodiment further compares whether any candidate file path is consistent with the first file path (i.e. performs a consistency comparison of file paths);
unlike the conventional approach of comparing two file paths from the high-level directory to the low-level directory, the embodiment of the invention performs the comparison of the first file path with each candidate file path in order from the low-level directory to the high-level directory;
it will be appreciated that, by its nature, a file path is generally defined in order from the high-level directory to the low-level directory, i.e. in the form of drive letter + subdirectories + … + file name; if two file paths are compared from the high-level directory to the low-level directory, and the paths share the same high-level directories (drive letter, subdirectories and so on) and differ only in the file name, all the identical drive letter and subdirectory characters must be compared before the genuinely different file names are reached, so the consistency comparison of the file paths is extremely inefficient;
based on this, the embodiment of the invention compares the first file path with each candidate file path in order from the low-level directory to the high-level directory, so that when the low-level parts (such as the file names) of the first file path and a candidate file path differ, the difference is found immediately; this greatly improves the efficiency of screening out different file paths and hence the efficiency of the consistency comparison;
for example, consider the comparison of the following two file paths: file path A is C:\Program Files (x86)\Tencent\QQBorowser\9.7.12954.400\icudtl.dat, and file path B is C:\Program Files (x86)\Tencent\QQBorowser\9.7.12954.400\libegl.dll; if the comparison proceeds from the high-level directory to the low-level directory, paths A and B are only found to differ at the 58th character comparison, where the differing letters i and l are encountered, so determining whether paths A and B are consistent is extremely inefficient; with the low-level-to-high-level comparison order provided by the embodiment of the invention, paths A and B are found to differ at the very first character comparison (the trailing characters t and l differ), which improves the efficiency of determining whether file paths are consistent.
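As an illustrative sketch only (a user-mode Python analogue of the kernel-memory UNICODE_STRING comparison, with hypothetical function names), the length pre-filter of step S51 and the low-level-to-high-level character comparison of step S53 could look like:

```python
def paths_match(path_a: str, path_b: str) -> bool:
    """Compare two file paths from the low-level directory (file name)
    toward the high-level directory (drive letter). Returns True only
    on an exact match."""
    # Length pre-filter, standing in for the UNICODE_STRING length
    # check against the cache list: unequal lengths can never match.
    if len(path_a) != len(path_b):
        return False
    # Walk both strings back-to-front so a differing file name is
    # detected within the first few comparisons instead of the last.
    for i in range(len(path_a) - 1, -1, -1):
        if path_a[i] != path_b[i]:
            return False
    return True


# The two example paths from the text: same directories, different
# file names, so the reverse order finds the mismatch immediately.
a = r"C:\Program Files (x86)\Tencent\QQBorowser\9.7.12954.400\icudtl.dat"
b = r"C:\Program Files (x86)\Tencent\QQBorowser\9.7.12954.400\libegl.dll"
```

Here `paths_match(a, b)` fails on its very first character comparison (`t` versus `l`), mirroring the behavior described above.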
Optionally, if no candidate file path is identical to the first file path, the usage file to be loaded is not cached in the kernel memory, and the usage file corresponding to the first file path may be loaded from the disk.
Step S54: load the target usage file corresponding to the first file path from the cache.
If a candidate file path consistent with the first file path exists, the first file path matches a file path recorded in the cache list, i.e. the usage file to be loaded has been cached; the target usage file corresponding to the first file path may then be loaded from the cache, accelerating the loading of the usage file to be loaded.
It should be noted that the method shown in fig. 9 is only an optional way of file path matching provided in the embodiment of the present invention; the embodiment of the present invention may also support performing the consistency comparison of file paths in order from the high-level directory to the low-level directory.
To further improve the acceleration effect for the application program, the embodiment of the invention may cache the target usage files in a read-write cache; the read-write cache is also applicable to the case where the usage files are cached in the kernel memory.
Conventional application acceleration schemes do cache usage files, but generally in a read-only cache, i.e. the usage files are placed into a cache that only serves reads; however, a read-only cache brings no obvious acceleration to scenarios in which the application writes data;
based on this, the embodiment of the invention caches the target usage files in a read-write cache, so that the target application program can still read a cached target usage file from the read-write cache when it needs to; when the target application program writes data, a data write instruction of the target application program can be detected, the write data corresponding to the instruction is written into the read-write cache, and a response indicating completion of the write instruction is immediately fed back to the target application program; a background system write thread of the read-write cache (which may be an asynchronous thread) then writes the data in the read-write cache to the disk once a certain condition is met (for example, a certain amount of data has accumulated, or a certain interval has elapsed), thereby accelerating the application program's data writes and avoiding the loss of external disk throughput caused by frequent disk reads and writes;
for example, as shown in fig. 10, taking a browser as an example, the browser may accelerate reads of the target usage files through the read-write cache, and all data write operations of the browser may be written directly into the read-write cache; the system write thread of the read-write cache then writes the data accumulated in the read-write cache to the disk.
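A minimal sketch of this idea, not the patent's kernel-level implementation: writes are acknowledged as soon as they land in the cache, and a background system write thread flushes them to disk later. All names here are illustrative, and an in-memory dict stands in for the real disk:

```python
import threading
import time
import queue


class ReadWriteCache:
    """Read-write cache sketch: reads are served from the cache,
    writes are acknowledged immediately and flushed to 'disk' by a
    background write thread."""

    def __init__(self, flush_interval: float = 0.05):
        self.store = {}               # path -> cached file data
        self.pending = queue.Queue()  # writes awaiting flush
        self.disk = {}                # stand-in for the real disk
        self.flush_interval = flush_interval
        threading.Thread(target=self._writer, daemon=True).start()

    def write(self, path, data):
        self.store[path] = data          # later reads hit the cache
        self.pending.put((path, data))   # flushed asynchronously
        return "ok"                      # respond before the disk write

    def read(self, path):
        return self.store.get(path, self.disk.get(path))

    def _writer(self):
        # Background system write thread: drain pending writes in
        # batches instead of touching the disk on every write.
        while True:
            time.sleep(self.flush_interval)
            while not self.pending.empty():
                path, data = self.pending.get()
                self.disk[path] = data
```

The key property is that `write()` returns before any disk I/O happens, which is the immediate "write complete" response described above.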
For the use of the read-write cache, the embodiment of the invention provides the following improvements to enhance its acceleration effect.
Optionally, fig. 11 shows an optional process by which the read-write cache writes data to the disk; referring to fig. 11, the process may include:
and step S60, marking the written data in the readable and writable cache.
When the target application program is in use, when the target use file has write data (such as modified data or newly added data of the cached target use file), the data can be written into the target use file of the read-write cache, and the write data is marked.
Step S61, if the same target usage file exists in the readable/writable cache, continuously writing the marked write data whose data size reaches the predetermined data size into the disk.
In the embodiment of the invention, the write data in each target usage file in the read-write cache (determined from the marked write data in that file) may be inspected in real time or periodically; if the write data in a certain target usage file has joined into contiguous pieces of a certain scale, a disk write operation may be triggered, and the contiguous write data of that scale in the target usage file is written to the disk;
specifically, the embodiment of the present invention may set a predetermined data amount; when a certain target usage file contains marked write data that is contiguous and whose amount reaches the predetermined data amount, the write data in that file is considered to have joined into pieces of sufficient scale, which may trigger a disk write operation, and the system write thread writes that contiguous marked write data to the disk.
It can be understood that a mechanical disk reads and writes by mechanically moving the head to the designated track, and the seek time of the moving head occupies most of the time of a single disk read or write; the embodiment of the invention therefore combines this characteristic of the disk, stores the write data of the target application program in the read-write cache, and writes adjacent write data to the disk together whenever possible, greatly improving the disk write speed.
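The "joined into contiguous pieces of a certain scale" condition of step S61 can be sketched as follows, assuming block-aligned dirty offsets; the helper name and parameters are illustrative, not from the patent:

```python
def contiguous_flush_ranges(marked_offsets, block, threshold):
    """Group marked (dirty) blocks into contiguous runs and return the
    runs whose total size reaches `threshold` - i.e. the write data
    that has 'joined into pieces' and is ready for one sequential
    disk write.

    marked_offsets: block-aligned byte offsets of marked write data
    block:          block size in bytes
    threshold:      predetermined data amount triggering a flush
    """
    runs, flush = [], []
    for off in sorted(marked_offsets):
        if runs and off == runs[-1][1]:
            runs[-1][1] = off + block        # extend the current run
        else:
            runs.append([off, off + block])  # start a new run
    for start, end in runs:
        if end - start >= threshold:
            flush.append((start, end))
    return flush
```

For example, with 4-byte blocks marked at offsets {0, 4, 8, 20} and a 12-byte threshold, only the run [0, 12) qualifies for a sequential flush; the isolated block at 20 stays in the cache.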
Optionally, fig. 12 shows another optional process by which the read-write cache writes data to the disk; referring to fig. 12, the process may include:
Step S70: mark the write data in the read-write cache.
Step S71: when the time interval since the last write of data to the disk reaches a predetermined time interval, write the marked write data in the read-write cache to the disk.
Optionally, the last write of data to the disk may have been triggered as shown in fig. 11, i.e. a target usage file accumulated contiguous write data reaching the predetermined data amount; or it may have been a forced write performed because, although the condition of fig. 11 was not met, the predetermined time interval since the preceding write had elapsed;
when the time interval since the last write of data to the disk reaches the predetermined time interval, the embodiment of the invention may force the marked write data in the read-write cache to be written to the disk, ensuring consistency between the data in the cache and the data on the disk within a bounded time.
Optionally, when executing the method shown in fig. 12, the predetermined time interval for writing data to the disk may further be differentiated according to the importance of the target usage files: for target usage files of higher importance (for example, usage files such as the browser's desktop file), a smaller predetermined time interval may be specified, so that their write data is synchronized to the disk promptly; for target usage files of lower importance, disk performance may be taken into account and a relatively longer predetermined time interval selected for synchronizing their write data to the disk, so that the synchronization of write data from the read-write cache to the disk is performed more precisely;
optionally, for convenience of description, the embodiment of the present invention refers to the target usage files of higher importance as first-type target usage files and those of lower importance as second-type target usage files, i.e. the importance of the first-type target usage files is higher than that of the second-type target usage files; the predetermined time interval corresponding to the first-type target usage files is set as a predetermined first time interval, and that corresponding to the second-type target usage files as a predetermined second time interval; thus, in the method shown in fig. 12, an optional implementation of step S71 may be as follows:
when the time interval since the last write of first-type target usage file data to the disk reaches the predetermined first time interval, write the marked write data of the first-type target usage files in the read-write cache to the disk;
when the time interval since the last write of second-type target usage file data to the disk reaches the predetermined second time interval, write the marked write data of the second-type target usage files in the read-write cache to the disk;
wherein the importance of the first-type target usage files is higher than that of the second-type target usage files, and the predetermined first time interval is shorter than the predetermined second time interval.
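A sketch of the two-tier interval policy above; the concrete interval values and field names are assumptions, with only the constraint taken from the text that the first (more important) class uses the shorter interval:

```python
# Hypothetical predetermined intervals, in seconds; the patent only
# requires first_class < second_class.
INTERVALS = {"first_class": 1.0, "second_class": 10.0}


def files_due_for_flush(files, now):
    """files: dicts with 'name', 'class' ('first_class' or
    'second_class') and 'last_flush' timestamp. Returns the names of
    files whose class-specific predetermined interval has elapsed
    since their last flush to disk."""
    due = []
    for f in files:
        if now - f["last_flush"] >= INTERVALS[f["class"]]:
            due.append(f["name"])
    return due
```

With this policy an important file flushed 2 seconds ago is already due, while an unimportant file flushed 5 seconds ago is not, matching the tiered behavior described above.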
As another optional implementation, because the space of the read-write cache is limited, when the data amount of a certain target usage file in the cache grows substantially, the embodiment of the present invention may write the write data of that file exceeding a certain growth ratio directly to the disk, so as to use the cache space effectively;
optionally, fig. 13 shows an optional process of writing data directly to the disk; referring to fig. 13, the process may include:
Step S80: detect the data growth ratio of each target usage file in the read-write cache.
Optionally, as data is written into the read-write cache, the data amount of each target usage file in the cache grows continuously; to use the space of the read-write cache effectively, the embodiment of the present invention may detect, in real time or periodically, the data growth ratio of each target usage file in the read-write cache;
the data growth ratio of a target usage file may be: (data amount of the write data written to the target usage file) / (original data amount of the target usage file).
Step S81: determine the target usage files whose data growth ratio reaches a predetermined ratio threshold, and write their subsequent write data to the disk, so that the data growth ratio of each such target usage file does not exceed the predetermined ratio threshold.
The embodiment of the invention may set a predetermined ratio threshold, such as 50%; the specific value may be set according to the actual situation. When the data growth ratio of a certain target usage file is detected to have reached the predetermined ratio threshold, a disk write operation may be triggered directly for data subsequently written to that file, writing the data to the disk and improving the effective use of the read-write cache.
For example, with a predetermined ratio threshold of 50%, suppose the original data amount of a certain target usage file is 10M; as data is written into the read-write cache, once the data growth ratio of the file reaches 50% (i.e. its data amount has expanded to 15M), subsequent write data for the file is no longer cached but written directly to the disk, maintaining the file's data amount at 15M; that is, once the data amount of the target usage file reaches 15M, subsequent write data for the file is written directly to the disk and no longer enters the read-write cache.
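The routing decision in this example can be sketched as follows; the function name is illustrative, and the 50% default mirrors the example threshold in the text:

```python
def route_write(original_size, cached_written, ratio=0.5):
    """Decide whether a new write for a cached target usage file goes
    into the read-write cache or straight to disk.

    original_size:  original data amount of the target usage file
    cached_written: amount of write data already cached for the file
    ratio:          predetermined growth-ratio threshold (50% here)
    """
    growth = cached_written / original_size
    if growth >= ratio:
        return "disk"   # growth cap reached: write through to disk
    return "cache"      # still room: absorb the write in the cache
```

For a 10M file, writes are cached until 5M of growth has accumulated (10M to 15M); after that every subsequent write bypasses the cache, keeping its footprint capped.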
Optionally, to improve the space utilization of the read-write cache, the embodiment of the present invention may release the target usage files whose loading frequency is lower than a predetermined frequency, so as to reclaim the cache space; that is, for cached target usage files that are used at a low frequency or not used at all within a period of time, the embodiment of the invention may release them and reclaim the cache. For example, a certain program file of the browser is cached after the operating system starts, but during the use of the browser the program file is upgraded and its content changes, so the cached old program file becomes invalid.
Further, the released target usage files may be recorded, and after the operating system starts or the target application program is closed, the disk history loading records of the released target usage files may serve as part of the input data for determining the loading probability of each usage file in the next run of the target application program.
According to the application program acceleration method provided by the embodiment of the invention, the usage files that are highly likely to be loaded in the next run of the target application program can be cached according to the disk history loading records of the usage files, so that the cached usage files are updated accurately and purposefully before the next run, improving the applicability, accuracy and effect of the acceleration. Further, each time the operating system of the terminal starts or the target application program is closed, the usage files highly likely to be loaded in the next run can be determined and cached.
Meanwhile, the consistency comparison between the file path of the usage file to be loaded and the file paths of the cached target usage files can be performed with the improved file path matching scheme, which raises the matching efficiency, identifies more efficiently whether the usage file to be loaded is a cached target usage file, and improves the acceleration effect when the target application program loads usage files.
Moreover, the read-write cache is adopted to cache the target usage files, which supports both accelerated loading of the cached target usage files and recording of the data written by the target application program: when the target application program writes data, the data is first written into the read-write cache, a response indicating completion of the write is fed back immediately, and the read-write cache later writes the data to the disk when the conditions are met; this reduces frequent disk reads and writes and further accelerates the target application program's data reads and writes.
As an application example, take the acceleration of a browser installed on a PC (personal computer). As shown in fig. 14, at some time after the user starts the operating system of the PC (for example, some time after powering on or restarting the PC), a resident browser service process may acquire the disk history read-write records of the browser's usage files (one form of the disk history loading records), including the usage file, data size, loading time and so on corresponding to each historical disk read or write. Here, a service process of the browser may be registered as a system service, so that the operating system automatically runs the process after startup and keeps it resident until shutdown; the resident browser service process is the browser service process that is registered as a system service and started and kept resident by the operating system;
the resident browser service process analyzes the disk history read-write records of the usage files and determines the loading possibility of each usage file in the next run of the browser (for the specific analysis, refer to the corresponding part above, which is not repeated here);
based on the analyzed loading possibilities, the resident browser service process determines the target usage files with high loading possibility (for the specific determination of the target usage files, refer to the corresponding part above, which is not repeated here), and records each target usage file in a cache list, identified by its file path;
the resident browser service process loads the cache list into the read-write cache;
when the browser reads a usage file (an optional form of needing to load a usage file), the file path of the usage file to be read is compared with the file paths recorded in the cache list (for the specific comparison, refer to the corresponding part above, which is not repeated here), and if the path is recorded in the cache list, the corresponding usage file may be read from the read-write cache;
meanwhile, when the browser writes data to a target file, the data may first be written into the read-write cache, a response indicating completion of the write is fed back to the browser immediately, and the system write thread of the read-write cache then writes the data to the disk when the conditions are met (for the conditions and manner of writing the data in the read-write cache to the disk, refer to the corresponding part above, which is not repeated here).
The application program acceleration method provided by the embodiment of the invention can accelerate the reading and writing of the application program from multiple aspects, and greatly improves the applicability and the effect of application program acceleration.
The application acceleration apparatus provided in the embodiment of the present invention is introduced below; the apparatus described below may be regarded as the program modules that the terminal needs to provide in order to implement the application acceleration method provided in the embodiment of the present invention; the contents of the apparatus described below may be cross-referenced with the contents of the method described above.
Fig. 15 is a block diagram of an application acceleration apparatus according to an embodiment of the present invention, where the apparatus is applicable to a terminal, and referring to fig. 15, the apparatus may include:
a loading record obtaining module 100, configured to obtain a disk history loading record of each usage file of the target application program;
a loading possibility determining module 200, configured to determine, according to a disk history loading record of each usage file, a loading possibility of each usage file when the target application program runs next time;
the target usage file determining module 300 is configured to determine a target usage file according to a loading possibility of each usage file, where the loading possibility of the target usage file is higher than that of a non-target usage file;
the cache module 400 is configured to cache the target usage file, so that the target usage file is loaded from the cache when the target application runs next time and the target usage file is loaded.
Optionally, the loading possibility determining module 200 is configured to determine, according to a disk history loading record of each usage file, a loading possibility of each usage file when the target application program runs next time, and specifically includes:
determining the use degree and the loading acceleration efficiency of each use file according to the historical loading record of the disk of each use file;
determining the loading possibility of each use file in the next operation of the target application program according to the use degree and the loading acceleration efficiency of each use file; the use degree and the loading acceleration efficiency of the used file are positively correlated with the loading possibility of the used file.
Optionally, the loading possibility determining module 200 is configured to determine the usage degree of each usage file according to a disk history loading record of each usage file, and specifically includes:
determining the loading count and the data usage coverage rate of each usage file according to the disk history loading records of the usage files; the data usage coverage rate of a usage file represents the proportion of the file's total data that is actually used;
determining the usage degree of each usage file according to its loading count and data usage coverage rate; both the loading count and the data usage coverage rate of a usage file are positively correlated with its usage degree.
Optionally, the loading possibility determining module 200 is configured to determine the usage degree of each usage file according to the loading frequency and the data usage coverage of each usage file, and specifically includes:
for any usage file, combining the loading count with the loading-count weight, combining the data usage coverage rate with the coverage-rate weight, and adding the two results to obtain the usage degree of the usage file.
Optionally, the loading possibility determining module 200 is configured to determine the loading acceleration efficiency of each usage file according to a disk history loading record of each usage file, and specifically includes:
for any used file, determining a first ratio of the memory occupation size of the used file to the disk loading time saved by the used file to obtain a corresponding first ratio of each used file;
and determining the loading acceleration efficiency of each use file according to the corresponding first ratio of each use file.
Optionally, the loading possibility determining module 200 is configured to determine the loading acceleration efficiency of each usage file according to the corresponding first ratio of each usage file, and specifically includes:
for any usage file, combining the corresponding first ratio with the loading acceleration efficiency weight to obtain the loading acceleration efficiency of each usage file.
Optionally, the loading possibility determining module 200 is configured to determine, according to the usage degree and the loading acceleration efficiency of each usage file, a loading possibility of each usage file when the target application program runs next time, and specifically includes:
for any use file, adding the use degree and the loading acceleration efficiency to obtain a loading score of each use file;
and determining the loading possibility of each use file according to the loading score of each use file.
Optionally, the loading possibility determining module 200 is configured to determine the loading possibility of each use file according to the loading score of each use file, and specifically includes:
performing time decay processing on the loading score of each usage file, so that usage files whose latest loading time is earlier receive lower scores after the decay, thereby obtaining the loading possibility of each usage file.
Optionally, the loading possibility determining module 200 is configured to perform time attenuation processing on the loading scores of the respective use files, and specifically includes:
determining the monitoring duration corresponding to the acquired disk history loading records;
for any usage file, determining the interval between its latest loading time and the current time, subtracting that interval from the monitoring duration to obtain a time difference, and multiplying the ratio of the time difference to the monitoring duration by the corresponding loading score to obtain the loading possibility of the usage file.
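The scoring chain described by these modules (usage degree from loading count and coverage, acceleration efficiency from the memory/time ratio, then time decay) might be sketched as below; the weights are hypothetical, and the direction of the efficiency ratio follows the text literally:

```python
def loading_possibility(loads, coverage, mem_kb, time_saved_ms,
                        last_load_age, monitor_window,
                        w_loads=1.0, w_cov=1.0, w_eff=1.0):
    """Illustrative scoring sketch.

    loads:          loading count of the usage file
    coverage:       data usage coverage rate (0..1)
    mem_kb:         memory occupied by caching the file
    time_saved_ms:  disk loading time saved by caching it
    last_load_age:  time since the file's latest load
    monitor_window: monitoring duration of the history records
    """
    usage = loads * w_loads + coverage * w_cov          # usage degree
    efficiency = (mem_kb / time_saved_ms) * w_eff       # first ratio * weight
    score = usage + efficiency                          # loading score
    # Time decay: files whose latest load is older get a lower result.
    decay = (monitor_window - last_load_age) / monitor_window
    return score * decay
```

A file loaded just now keeps its full score, while one last loaded halfway through the monitoring window keeps only half, so staler files are less likely to be selected as target usage files.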
Optionally, the target usage file determining module 300 is configured to determine the target usage file according to the loading possibility of each usage file, and specifically includes:
determining a plurality of use files with the total file size approaching to the size of the first cache space and the highest loading possibility as the target use file; the first cache space size is the cache space size occupied by part or all of the target application programs.
Optionally, the loading record obtaining module 100 is configured to obtain a disk history loading record of each use file of the target application program, and specifically includes:
and responding to the starting of the operating system or the closing of the target application program, and acquiring the disk history loading record of each use file of the target application program in the current set time period through the resident service process.
Optionally, the caching module 400 is configured to cache the target usage file, and specifically includes:
using the cache list to record the target use files corresponding to the file paths by taking the file paths of the target use files as identifiers;
and caching the cache list.
Optionally, fig. 16 shows another structural block diagram of the application acceleration apparatus according to the embodiment of the present invention, and in combination with fig. 15 and fig. 16, the apparatus may further include:
the file loading module 500 is configured to obtain a first file path of a usage file to be loaded; judging whether the first file path is matched with a file path recorded in a cache list or not; and if the first file path is matched with the file path recorded in the cache list, loading the target use file corresponding to the first file path from the cache.
Optionally, the file loading module 500 is configured to determine whether the first file path matches a file path recorded in the cache list, and specifically includes:
determining whether the cache list records at least one candidate file path whose string length matches the string length of the first file path;
if such candidate file paths are recorded in the cache list, comparing the first file path with each candidate file path in order from the low-level directory to the high-level directory of the path, and determining whether any candidate file path is consistent with the first file path.
Optionally, the caching module 400 is configured to cache the cache list, and specifically includes:
caching the cache list in a kernel memory, wherein the kernel memory supports cross-process access of non-target application programs;
correspondingly, the file loading module 500 is configured to obtain a first file path of a to-be-loaded usage file, and specifically includes:
upon detecting a usage-file loading instruction of the target application program, or a usage-file loading instruction of a non-target application program, acquiring the first file path of the usage file to be loaded.
Optionally, the cache may be a read-write cache; fig. 17 is a block diagram illustrating still another structure of an application acceleration apparatus according to an embodiment of the present invention, and in conjunction with fig. 15 and fig. 17, the apparatus may further include:
a write processing module 600, configured to detect a write data instruction of a target application; writing the write-in data corresponding to the write-in data instruction into the read-write cache, and feeding back a response of the completion of the execution of the write-in data instruction; and writing the write data in the read-write cache to the disk.
Optionally, the write processing module 600 is configured to write the write data in the readable and writable cache to a disk, and specifically includes:
marking the write data in the read-write cache;
when a target usage file in the read-write cache has contiguous marked write data whose amount reaches the predetermined data amount, writing that marked data to the disk.
Optionally, the write processing module 600 is configured to write the data in the read-write cache to the disk further by:
writing the marked write data in the read-write cache to the disk when the time elapsed since data was last written to the disk reaches a predetermined time interval.
Optionally, the write processing module 600 is configured to write the marked write data in the read-write cache to the disk, when the time elapsed since data was last written to the disk reaches the predetermined time interval, specifically by:
writing the marked write data of first-type target usage files in the read-write cache to the disk when the time elapsed since data of a first-type target usage file was last written to the disk reaches a predetermined first time interval;
and writing the marked write data of second-type target usage files in the read-write cache to the disk when the time elapsed since data of a second-type target usage file was last written to the disk reaches a predetermined second time interval;
wherein the first-type target usage files are more important than the second-type target usage files, and the predetermined first time interval is shorter than the predetermined second time interval.
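The two-tier interval policy can be sketched as a periodic task; the class labels and the 1 s / 5 s intervals are illustrative assumptions:

```python
import time

class TieredFlusher:
    """Sketch: first-type (more important) target usage files are flushed
    on a shorter interval than second-type files."""

    def __init__(self, flush_fn, first_interval=1.0, second_interval=5.0):
        self.flush_fn = flush_fn  # callback: flush all marked data of a class
        self.intervals = {"first": first_interval, "second": second_interval}
        now = time.monotonic()
        self.last_flush = {"first": now, "second": now}

    def tick(self, now=None):
        # Called periodically; flushes each file class whose predetermined
        # interval has elapsed since its data was last written to disk.
        now = time.monotonic() if now is None else now
        for file_class, interval in self.intervals.items():
            if now - self.last_flush[file_class] >= interval:
                self.flush_fn(file_class)
                self.last_flush[file_class] = now
```

The shorter interval means important files reach the disk sooner (less data at risk on a crash), while less important files are batched for longer.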
Optionally, fig. 18 is a block diagram of yet another structure of the application acceleration apparatus according to an embodiment of the present invention; with reference to fig. 17 and fig. 18, the apparatus may further include:
a data write-through disk module 700, configured to detect the data growth ratio of each target usage file in the read-write cache, determine any target usage file whose data growth ratio reaches a predetermined ratio threshold, and write subsequent write data of that target usage file to the disk, so that the data growth ratio of the target usage file does not exceed the predetermined ratio threshold;
and a release module 800, configured to release, from the read-write cache, target usage files whose loading frequency is lower than a predetermined frequency.
Optionally, the data write-through disk module 700 and the release module 800 may be used as alternatives to each other.
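Module 700's growth guard can be sketched as follows; the 50% growth threshold and the method names are illustrative assumptions:

```python
class CacheGrowthGuard:
    """Sketch: track each cached target usage file's data growth ratio and
    switch a file to write-through once the ratio reaches a threshold."""

    RATIO_THRESHOLD = 0.5  # predetermined growth ratio (assumed value)

    def __init__(self):
        self.baseline = {}          # path -> file size when first cached
        self.current = {}           # path -> current size in the cache
        self.write_through = set()  # files whose writes now bypass the cache

    def register(self, path: str, size: int) -> None:
        self.baseline[path] = size
        self.current[path] = size

    def on_write(self, path: str, nbytes: int) -> str:
        """Return where this write should go: 'cache' or 'disk'."""
        if path in self.write_through:
            return "disk"
        self.current[path] += nbytes
        growth = (self.current[path] - self.baseline[path]) / self.baseline[path]
        if growth >= self.RATIO_THRESHOLD:
            # Subsequent writes go straight to disk, so the file's growth
            # ratio in the cache stops climbing past the threshold.
            self.write_through.add(path)
        return "cache"
```

The release module 800 is the complementary lever: rather than capping a file's growth, it evicts files that are rarely loaded at all.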
The application acceleration apparatus provided by the embodiments of the present invention accelerates reading and writing by the application program in multiple respects, greatly improving the applicability and the effect of application acceleration.
An embodiment of the present invention further provides a terminal, which implements the functions of the above program modules by executing corresponding programs. The terminal may be a user device such as a PC, a smartphone, or a tablet computer. Fig. 19 shows an optional hardware structure of the terminal; referring to fig. 19, the terminal may include: at least one processing chip 1, at least one communication interface 2, at least one memory 3, and at least one communication bus 4.
In this embodiment of the present invention, there is at least one of each of the processing chip 1, the communication interface 2, the memory 3, and the communication bus 4, and the processing chip 1, the communication interface 2, and the memory 3 communicate with one another through the communication bus 4.
The processing chip 1 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
The memory 3 may include a high-speed RAM, and may further include a non-volatile memory, such as at least one disk memory.
The memory 3 stores a program, and the processing chip 1 calls the program stored in the memory 3 to implement the steps of the application acceleration method described above.
An embodiment of the present invention further provides a storage medium storing a program adapted to be called by a processing chip to implement the steps of the application acceleration method described above.
The program called by the processing chip and the program stored in the storage medium mainly implement the following functions:
acquiring a disk history loading record of each usage file of a target application program;
determining, according to the disk history loading record of each usage file, the loading possibility of each usage file during the next run of the target application program;
determining target usage files according to the loading possibility of each usage file, wherein the loading possibility of a target usage file is higher than that of a non-target usage file;
and caching the target usage files, so that when the target application program runs next time and a target usage file is to be loaded, the target usage file is loaded from the cache.
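The four functions above can be sketched end-to-end. The record fields, the weights, and the greedy selection under a fixed cache budget are illustrative assumptions, kept consistent with the scoring the claims describe (loading count and data coverage form the usage level, time saved per cached byte is one plausible reading of the acceleration efficiency, and a linear time decay yields the final loading possibility):

```python
from dataclasses import dataclass

@dataclass
class LoadRecord:
    """One usage file's disk history over the monitoring window (fields assumed)."""
    path: str
    load_count: int       # times the file was loaded from disk
    coverage: float       # fraction of the file's data actually used
    mem_size: int         # bytes the file would occupy in the cache
    time_saved_ms: float  # disk loading time avoided by caching the file
    idle_time: float      # time since the file was last loaded

def loading_possibility(rec, window, w_count=1.0, w_cov=10.0, w_eff=1.0):
    # Usage level: weighted loading count plus weighted data usage coverage.
    usage_level = rec.load_count * w_count + rec.coverage * w_cov
    # Acceleration efficiency, read here as time saved per cached byte.
    efficiency = w_eff * rec.time_saved_ms / rec.mem_size
    score = usage_level + efficiency
    # Linear time decay: files loaded longer ago score proportionally lower.
    return score * (window - rec.idle_time) / window

def pick_targets(records, cache_budget, window):
    # Greedily cache the most promising files until the budget is filled.
    ranked = sorted(records, key=lambda r: loading_possibility(r, window),
                    reverse=True)
    targets, used = [], 0
    for rec in ranked:
        if used + rec.mem_size <= cache_budget:
            targets.append(rec.path)
            used += rec.mem_size
    return targets
```

Files that are loaded often, used thoroughly, and cheap to hold in memory rank first, while files untouched for most of the monitoring window decay toward the bottom and fall out of the cache budget.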
For specific refinements and extensions of the program functions, reference may be made to the foregoing description; details are not repeated here.
The embodiments in this description are described progressively: each embodiment focuses on its differences from the other embodiments, and identical or similar parts of the embodiments may be understood with reference to one another. Because the apparatus disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant details, refer to the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processing chip, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (24)

1. An application acceleration method, comprising:
acquiring a disk history loading record of each usage file of a target application program;
determining a usage level and a loading acceleration efficiency of each usage file according to the disk history loading record of each usage file;
determining, according to the usage level and the loading acceleration efficiency of each usage file, a loading possibility of each usage file during the next run of the target application program;
determining target usage files according to the loading possibility of each usage file, wherein the loading possibility of a target usage file is higher than that of a non-target usage file;
and caching the target usage files, so that when the target application program runs next time and a target usage file is to be loaded, the target usage file is loaded from the cache.
2. The application acceleration method according to claim 1, wherein the usage level and the loading acceleration efficiency of a usage file are positively correlated with the loading possibility of that usage file.
3. The application acceleration method according to claim 2, wherein determining the usage level of each usage file according to the disk history loading record of each usage file comprises:
determining the loading count and the data usage coverage of each usage file according to the disk history loading record of each usage file, wherein the data usage coverage of a usage file is the proportion of the file's total data that is actually used;
and determining the usage level of each usage file according to its loading count and data usage coverage, wherein the loading count and the data usage coverage of a usage file are positively correlated with its usage level.
4. The application acceleration method according to claim 3, wherein determining the usage level of each usage file according to the loading count and the data usage coverage of each usage file comprises:
for any usage file, adding the combination of the loading count with a loading-count weight to the combination of the data usage coverage with a data-usage-coverage weight, to obtain the usage level of the usage file.
5. The application acceleration method according to claim 2, wherein determining the loading acceleration efficiency of each usage file according to the disk history loading record of each usage file comprises:
for any usage file, determining a first ratio of the memory occupation size of the usage file to the disk loading time saved by the usage file, to obtain the first ratio corresponding to each usage file;
and determining the loading acceleration efficiency of each usage file according to its corresponding first ratio.
6. The application acceleration method according to claim 5, wherein determining the loading acceleration efficiency of each usage file according to its corresponding first ratio comprises:
for any usage file, combining the corresponding first ratio with a loading-acceleration-efficiency weight to obtain the loading acceleration efficiency of the usage file.
7. The application acceleration method according to any one of claims 2 to 6, wherein determining, according to the usage level and the loading acceleration efficiency of each usage file, the loading possibility of each usage file during the next run of the target application program comprises:
for any usage file, adding the usage level and the loading acceleration efficiency to obtain a loading score of the usage file;
and determining the loading possibility of each usage file according to its loading score.
8. The application acceleration method according to claim 7, wherein determining the loading possibility of each usage file according to its loading score comprises:
performing time decay processing on the loading score of each usage file, such that usage files whose most recent load is earlier have lower loading scores after the time decay processing, to obtain the loading possibility of each usage file.
9. The application acceleration method according to claim 8, wherein performing time decay processing on the loading score of each usage file comprises:
determining the monitoring duration over which the acquired disk history loading records were collected;
for any usage file, determining the interval between its most recent loading time and the current time, subtracting the interval from the monitoring duration to obtain a time difference, and multiplying the ratio of the time difference to the monitoring duration by the corresponding loading score to obtain the loading possibility of the usage file.
10. The application acceleration method according to claim 1, wherein determining the target usage files according to the loading possibility of each usage file comprises:
determining, as the target usage files, the at least one usage file with the highest loading possibility whose total file size approaches a first cache space size, wherein the first cache space size is the size of the cache space occupied by part or all of the target application program.
11. The application acceleration method according to claim 1, wherein acquiring the disk history loading record of each usage file of the target application program comprises:
in response to start-up of the operating system or closing of the target application program, acquiring, through a resident service process, the disk history loading record of each usage file of the target application program within a currently set time period.
12. The application acceleration method according to claim 1, wherein caching the target usage files comprises:
recording, in a cache list, the target usage file corresponding to each file path, using the file path of each target usage file as an identifier;
and caching the cache list.
13. The application acceleration method according to claim 12, further comprising:
acquiring a first file path of a usage file to be loaded;
judging whether the first file path matches a file path recorded in the cache list;
and if the first file path matches a file path recorded in the cache list, loading the target usage file corresponding to the first file path from the cache.
14. The application acceleration method according to claim 13, wherein judging whether the first file path matches a file path recorded in the cache list comprises:
judging whether the cache list records, for the string length of the first file path, at least one candidate file path of that string length;
and if that string length is recorded in the cache list, comparing the first file path with each candidate file path in order from the low-level directory to the high-level directory of the file path, and judging whether any of the candidate file paths is consistent with the first file path.
15. The application acceleration method according to claim 13 or 14, wherein caching the cache list comprises:
caching the cache list in a kernel memory, wherein the kernel memory supports cross-process access by non-target application programs;
and acquiring the first file path of the usage file to be loaded comprises:
detecting a usage-file load instruction of the target application program, or detecting a usage-file load instruction of a non-target application program, and acquiring the first file path of the usage file to be loaded.
16. The application acceleration method according to claim 1, wherein the cache is a read-write cache, and the method further comprises:
detecting a data write instruction of the target application program;
writing the data corresponding to the data write instruction into the read-write cache, and feeding back a response indicating that the data write instruction has been executed;
and writing the data in the read-write cache to the disk.
17. The application acceleration method according to claim 16, wherein writing the data in the read-write cache to the disk comprises:
marking the write data in the read-write cache;
and when the marked write data belonging to the same target usage file in the read-write cache reaches a predetermined amount, writing that data to the disk contiguously.
18. The application acceleration method according to claim 17, wherein writing the data in the read-write cache to the disk further comprises:
writing the marked write data in the read-write cache to the disk when the time elapsed since data was last written to the disk reaches a predetermined time interval.
19. The application acceleration method according to claim 18, wherein writing the marked write data in the read-write cache to the disk when the time elapsed since data was last written to the disk reaches the predetermined time interval comprises:
writing the marked write data of first-type target usage files in the read-write cache to the disk when the time elapsed since data of a first-type target usage file was last written to the disk reaches a predetermined first time interval;
and writing the marked write data of second-type target usage files in the read-write cache to the disk when the time elapsed since data of a second-type target usage file was last written to the disk reaches a predetermined second time interval;
wherein the first-type target usage files are more important than the second-type target usage files, and the predetermined first time interval is shorter than the predetermined second time interval.
20. The application acceleration method according to claim 16, further comprising:
detecting the data growth ratio of each target usage file in the read-write cache;
and determining any target usage file whose data growth ratio reaches a predetermined ratio threshold, and writing subsequent write data of that target usage file to the disk, so that the data growth ratio of the target usage file does not exceed the predetermined ratio threshold.
21. The application acceleration method according to claim 16, further comprising:
releasing, from the read-write cache, target usage files whose loading frequency is lower than a predetermined frequency.
22. An application acceleration apparatus, comprising:
a loading record acquisition module, configured to acquire the disk history loading record of each usage file of a target application program;
a loading possibility determination module, configured to determine a usage level and a loading acceleration efficiency of each usage file according to the disk history loading record of each usage file, and to determine, according to the usage level and the loading acceleration efficiency of each usage file, a loading possibility of each usage file during the next run of the target application program;
a target usage file determination module, configured to determine target usage files according to the loading possibility of each usage file, wherein the loading possibility of a target usage file is higher than that of a non-target usage file;
and a caching module, configured to cache the target usage files, so that when the target application program runs next time and a target usage file is to be loaded, the target usage file is loaded from the cache.
23. A terminal, comprising: at least one memory and at least one processing chip; wherein the memory stores a program that the processing chip calls to implement the application acceleration method of any one of claims 1 to 21.
24. A storage medium recording a program adapted to be called by a processing chip to implement the application acceleration method of any one of claims 1 to 21.
CN201810339291.5A 2018-04-16 2018-04-16 Application program acceleration method, device, terminal and storage medium Active CN108549556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810339291.5A CN108549556B (en) 2018-04-16 2018-04-16 Application program acceleration method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810339291.5A CN108549556B (en) 2018-04-16 2018-04-16 Application program acceleration method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN108549556A CN108549556A (en) 2018-09-18
CN108549556B true CN108549556B (en) 2021-06-01

Family

ID=63515039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810339291.5A Active CN108549556B (en) 2018-04-16 2018-04-16 Application program acceleration method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN108549556B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109885408B (en) * 2019-03-13 2022-05-03 四川长虹电器股份有限公司 Lightweight browser resource optimization method based on Android system
CN110147258B (en) * 2019-04-19 2022-08-16 平安科技(深圳)有限公司 Method and device for improving program loading efficiency, computer equipment and storage medium
CN112306823B (en) * 2019-07-31 2022-05-10 上海哔哩哔哩科技有限公司 Disk management method, system, device and computer readable storage medium
CN110968508B (en) * 2019-11-21 2021-05-14 腾讯科技(深圳)有限公司 Method, device, terminal and storage medium for determining loading time of applet
CN111741053B (en) * 2020-04-22 2023-06-23 百度在线网络技术(北京)有限公司 Data pre-downloading method, device, server, terminal and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102736940A (en) * 2012-06-21 2012-10-17 北京像素软件科技股份有限公司 Resource loading method
CN104133691A (en) * 2014-05-05 2014-11-05 腾讯科技(深圳)有限公司 Startup acceleration method and device
CN104346194A (en) * 2014-04-18 2015-02-11 腾讯科技(深圳)有限公司 Method, device and electronic equipment for starting file loading
CN104572205A (en) * 2015-01-12 2015-04-29 安一恒通(北京)科技有限公司 Method and device for software acceleration

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US6385641B1 (en) * 1998-06-05 2002-05-07 The Regents Of The University Of California Adaptive prefetching for computer network and web browsing with a graphic user interface
US7334218B2 (en) * 2002-09-02 2008-02-19 International Business Machines Corporation Method for adaptively assigning of data management applications to data objects
CN102662690B (en) * 2012-03-14 2014-06-11 腾讯科技(深圳)有限公司 Method and apparatus for starting application program
CN103268219B (en) * 2013-05-28 2016-05-11 北京航空航天大学 Mass file based on pipelined architecture instructs the type parallel processing accelerated method of looking ahead
CN105094861A (en) * 2014-05-06 2015-11-25 腾讯科技(深圳)有限公司 Webpage application program loading method, device and system

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN102736940A (en) * 2012-06-21 2012-10-17 北京像素软件科技股份有限公司 Resource loading method
CN104346194A (en) * 2014-04-18 2015-02-11 腾讯科技(深圳)有限公司 Method, device and electronic equipment for starting file loading
CN104133691A (en) * 2014-05-05 2014-11-05 腾讯科技(深圳)有限公司 Startup acceleration method and device
CN104572205A (en) * 2015-01-12 2015-04-29 安一恒通(北京)科技有限公司 Method and device for software acceleration

Also Published As

Publication number Publication date
CN108549556A (en) 2018-09-18

Similar Documents

Publication Publication Date Title
CN108549556B (en) Application program acceleration method, device, terminal and storage medium
US8135904B2 (en) Method and apparatus for facilitating fast wake-up of a non-volatile memory system
CN107481762B (en) Trim processing method and device of solid state disk
JP5211751B2 (en) Calculator, dump program and dump method
CN108733306B (en) File merging method and device
CN111324303B (en) SSD garbage recycling method, SSD garbage recycling device, computer equipment and storage medium
KR970059941A (en) Resource management method and apparatus for information processing system with multitasking function
CN109656779A (en) Internal memory monitoring method, device, terminal and storage medium
US9727479B1 (en) Compressing portions of a buffer cache using an LRU queue
CN108228449B (en) Terminal device control method and device, terminal device and computer readable storage medium
CN112286459A (en) Data processing method, device, equipment and medium
CN112269534A (en) Data reading method, device and equipment and computer readable storage medium
CN115470157A (en) Prefetching method, electronic device, storage medium, and program product
US20170262328A1 (en) Information processing system, information processing apparatus, information processing method, and computer-readable non-transitory storage medium
CN101131649A (en) Updating speed improving method for read-only memory of device with flash memory
CN110134615B (en) Method and device for acquiring log data by application program
WO2019206260A1 (en) Method and apparatus for reading file cache
CN107450859B (en) Method and device for reading file data
CN110543463A (en) Log storage method, device and equipment and readable storage medium
JP2011165093A (en) Memory access examination device, memory access examination method and program
CN107291483B (en) Method for intelligently deleting application program and electronic equipment
CN110767258B (en) Data erasure command test method and related device
CN115079959B (en) File management method and device and electronic equipment
CN112732182A (en) NAND data writing method and related device
Kim et al. Comparison of hybrid and hierarchical swap architectures in Android by using NVM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20190130

Address after: Room 1601-1608, Floor 16, Yinke Building, 38 Haidian Street, Haidian District, Beijing

Applicant after: Tencent Technology (Beijing) Co., Ltd

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Applicant before: Tencent Technology (Shenzhen) Co., Ltd.

GR01 Patent grant