CN111859225B - Program file access method, apparatus, computing device and medium - Google Patents


Info

Publication number
CN111859225B
CN111859225B
Authority
CN
China
Prior art keywords
program file
cache
area
cache area
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010764737.6A
Other languages
Chinese (zh)
Other versions
CN111859225A (en)
Inventor
尹勇
李峰
罗涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202010764737.6A priority Critical patent/CN111859225B/en
Publication of CN111859225A publication Critical patent/CN111859225A/en
Application granted granted Critical
Publication of CN111859225B publication Critical patent/CN111859225B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 - Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 - Replacement control
    • G06F12/121 - Replacement control using replacement algorithms
    • G06F12/122 - Replacement control using replacement algorithms of the least frequently used [LFU] type, e.g. with individual count value
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 - Addressing or allocation; Relocation
    • G06F12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/12 - Replacement control
    • G06F12/121 - Replacement control using replacement algorithms
    • G06F12/123 - Replacement control using replacement algorithms with age lists, e.g. queue, most recently used [MRU] list or least recently used [LRU] list
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 - Providing a specific technical effect
    • G06F2212/1016 - Performance improvement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 - Providing a specific technical effect
    • G06F2212/1016 - Performance improvement
    • G06F2212/1021 - Hit rate improvement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/15 - Use in a specific computing environment
    • G06F2212/154 - Networked environment

Abstract

The present disclosure provides a program file access method, which can be used in the financial field. The method includes: determining a current program file to be accessed; moving the current program file from a first cache area to a second cache area in the case that the current program file is stored in the first cache area; storing the current program file in the first cache area when the current program file is not stored in at least the first cache area and the second cache area; and, in response to an access request from a client, sending the program file stored in the first cache area or the second cache area to the client. The times at which the program files stored in the first cache area are accessed satisfy a preset time condition, and the frequencies at which the program files stored in the second cache area are accessed satisfy a preset frequency condition. The disclosure also provides a program file access apparatus, a computing device, and a medium.

Description

Program file access method, apparatus, computing device and medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a program file access method, a program file access apparatus, a computing device, and a computer-readable storage medium.
Background
In recent years, to strengthen their reach into vertical domains and further improve their application ecosystems, large internet platforms have successively introduced the applet (mini-program) application model, formally opening an era of rapid applet growth. User attention and user counts differ greatly across applets, so applets with high attention and large user bases are accessed frequently. There is therefore a large amount of hot data on the network, and when this hot data is frequently accessed by users, the applets' program files must be repeatedly read from the server, resulting in long server access delays.
Disclosure of Invention
In view of this, the present disclosure provides an optimized program file access method, a program file access apparatus, a computing device, and a computer-readable storage medium.
One aspect of the present disclosure provides a program file access method, including: determining a current program file to be accessed; moving the current program file from a first cache area to a second cache area in the case that the current program file is stored in the first cache area; storing the current program file in the first cache area when the current program file is not stored in at least the first cache area and the second cache area; and, in response to an access request from a client, sending the program file stored in the first cache area or the second cache area to the client, wherein the times at which the program files stored in the first cache area are accessed satisfy a preset time condition, and the frequencies at which the program files stored in the second cache area are accessed satisfy a preset frequency condition.
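The flow summarized above can be sketched in Python. This is a minimal illustration, not the disclosure's implementation: the function and variable names are assumptions, and real cache areas would also hold file contents and enforce capacity limits.

```python
from collections import OrderedDict

def access(name, recent, frequent):
    """Record one access and return which cache area now holds the file.

    `recent` models the first cache area (files accessed once, ordered by
    recency); `frequent` models the second cache area (files accessed more
    than once, with their access counts). Both names are illustrative.
    """
    if name in recent:
        # Second access: promote from the first cache area to the second.
        recent.pop(name)
        frequent[name] = 2
        return "second"
    if name in frequent:
        frequent[name] += 1
        return "second"
    # First access: store in the first cache area.
    recent[name] = True
    return "first"
```

Serving a client request then amounts to looking the file up in whichever area `access` reported.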
According to an embodiment of the present disclosure, after moving the current program file from the first cache area to the second cache area, the method further includes: rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, after storing the current program file in the first cache area, the method further includes: rearranging the program files in the first cache area according to the accessed time.
According to an embodiment of the present disclosure, the method further includes: and updating the accessed frequency of the current program file in the second cache area under the condition that the current program file is stored in the second cache area, and rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, the method further includes: and when the data volume of the program files stored in the first cache area and the second cache area is larger than the preset data volume, moving part of the program files in the first cache area to a first additional storage area and/or moving part of the program files in the second cache area to a second additional storage area.
According to an embodiment of the present disclosure, when the current program file is not stored in the first cache area and the second cache area, storing the current program file to the first cache area includes: and storing the current program file into the first cache area when the current program file is not stored in the first cache area, the second cache area, the first additional storage area and the second additional storage area.
According to an embodiment of the present disclosure, the method further includes: and when the current program file is stored in the first additional storage area, moving the current program file to the second additional storage area, and storing the current program file in the first cache area.
According to an embodiment of the present disclosure, the method further includes: and when the current program file is stored in the second additional storage area, moving the current program file to the second cache area, and rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, moving the part of the program files in the first cache area to the first additional storage area and/or moving the part of the program files in the second cache area to the second additional storage area includes: moving part of the program files in the second cache area to the second additional storage area when the number of program files stored in the first cache area is less than or equal to the target number.
According to an embodiment of the present disclosure, the method further includes: the target number is increased by 1 when the current program file is stored in the first additional storage area, and the target number is decreased by 1 when the current program file is stored in the second additional storage area.
According to an embodiment of the present disclosure, when the data amount of the program files stored in the first cache area and the second cache area is less than or equal to the preset data amount, the method further includes: deleting part of the program files in the first additional storage area when the data amount of the program files stored in the first cache area and the first additional storage area is greater than the preset data amount; and, when the data amount of the program files stored in the first cache area and the first additional storage area is less than or equal to the preset data amount, deleting part of the program files in the second additional storage area when the data amount of the program files stored in the second cache area and the second additional storage area is greater than the preset data amount.
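The trimming of the additional storage areas described in this embodiment can be sketched as follows. This is a hedged sketch under the assumptions that each area is a Python list ordered oldest-first and that the preset data amount is counted in entries rather than bytes; the function name is illustrative.

```python
def trim_additional_areas(t1, b1, t2, b2, preset):
    """Delete the oldest entries of an additional area when a cache area
    plus its additional area exceed the preset amount (counted in entries)."""
    if len(t1) + len(b1) > preset:
        # First cache area plus first additional area over the limit:
        # delete the oldest program files from the first additional area.
        while b1 and len(t1) + len(b1) > preset:
            b1.pop(0)
    elif len(t2) + len(b2) > preset:
        # Otherwise, trim the second additional area the same way.
        while b2 and len(t2) + len(b2) > preset:
            b2.pop(0)
```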
According to an embodiment of the present disclosure, the preset time condition includes that a time when the program file is accessed is within a preset time range, and the preset frequency condition includes that a number of times when the program file is accessed within the preset time range is greater than 1 time.
Another aspect of the present disclosure provides a program file access apparatus, including a determining module, a moving module, a storage module, and a sending module. The determining module is configured to determine a current program file to be accessed; the moving module is configured to move the current program file from a first cache area to a second cache area in the case that the current program file is stored in the first cache area; the storage module is configured to store the current program file in the first cache area when the current program file is not stored in the first cache area and the second cache area; and the sending module is configured to send the program file stored in the first cache area or the second cache area to a client in response to an access request of the client. The times at which the program files stored in the first cache area are accessed satisfy a preset time condition, and the frequencies at which the program files stored in the second cache area are accessed satisfy a preset frequency condition.
According to an embodiment of the present disclosure, after moving the current program file from the first cache area to the second cache area, the apparatus further includes: and the first arrangement module is used for rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, after storing the current program file in the first cache area, the apparatus further includes: and the second arrangement module is used for rearranging the program files in the first cache area according to the accessed time.
According to an embodiment of the present disclosure, the above apparatus further includes: an updating module and a third arranging module. The updating module is used for updating the accessed frequency of the current program files in the second cache area under the condition that the current program files are stored in the second cache area, and the third arrangement module is used for rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, the above apparatus further includes: and the additional moving module is used for moving part of the program files in the first cache area to the first additional storage area and/or moving part of the program files in the second cache area to the second additional storage area when the data volume of the program files stored in the first cache area and the second cache area is larger than the preset data volume.
According to an embodiment of the present disclosure, when the current program file is not stored in the first cache area and the second cache area, storing the current program file to the first cache area includes: and storing the current program file into the first cache area when the current program file is not stored in the first cache area, the second cache area, the first additional storage area and the second additional storage area.
According to an embodiment of the present disclosure, the above apparatus further includes: a moving and storing module, configured to move the current program file to the second additional storage area and store the current program file in the first cache area when the current program file is stored in the first additional storage area.
According to an embodiment of the present disclosure, the above apparatus further includes: and the moving and arranging module is used for moving the current program file to the second cache area when the current program file is stored in the second additional storage area, and rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, moving the part of the program files in the first cache area to the first additional storage area and/or moving the part of the program files in the second cache area to the second additional storage area includes: moving part of the program files in the second cache area to the second additional storage area when the number of program files stored in the first cache area is less than or equal to the target number.
According to an embodiment of the present disclosure, the above apparatus further includes an increasing module and a decreasing module. The increasing module is configured to increase the target number by 1 when the current program file is stored in the first additional storage area, and the decreasing module is configured to decrease the target number by 1 when the current program file is stored in the second additional storage area.
According to an embodiment of the present disclosure, the above apparatus further includes a first deletion module and a second deletion module. The first deletion module is configured to delete part of the program files in the first additional storage area when the data amount of the program files stored in the first cache area and the first additional storage area is greater than the preset data amount. The second deletion module is configured to delete part of the program files in the second additional storage area when the data amount of the program files stored in the first cache area and the first additional storage area is less than or equal to the preset data amount and the data amount of the program files stored in the second cache area and the second additional storage area is greater than the preset data amount.
According to an embodiment of the present disclosure, the preset time condition includes that a time when the program file is accessed is within a preset time range, and the preset frequency condition includes that a number of times when the program file is accessed within the preset time range is greater than 1 time.
Another aspect of the present disclosure provides a computing device, comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described above.
Another aspect of the present disclosure provides a non-transitory readable storage medium storing computer executable instructions which, when executed, are configured to implement a method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which when executed are for implementing a method as described above.
According to the embodiments of the present disclosure, the program file access method described above can at least partially solve the technical problem in the related art that frequently reading program files from a server causes long server access delays. It can thereby improve program file access efficiency, reduce access time, and improve the cache hit rate.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically illustrates an application scenario of an access method of a program file and an access apparatus of the program file according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of accessing a program file according to an embodiment of the disclosure;
FIG. 3 schematically illustrates a flow chart of a method of accessing a program file according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of accessing a program file according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an access device for program files according to an embodiment of the disclosure; and
FIG. 6 schematically illustrates a block diagram of a computer system for enabling access to program files, according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions like "at least one of A, B and C" are used, they should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.).
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Thus, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable storage medium having instructions stored thereon, the computer program product being for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable storage medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a computer-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer-readable storage medium include the following: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The embodiment of the disclosure provides a program file access method, which includes the following steps. The current program file to be accessed is determined, and the current program file is moved from the first cache area to the second cache area in the case that it is stored in the first cache area. The current program file is stored in the first cache area when it is not stored in at least the first cache area and the second cache area. Next, in response to an access request from the client, the program file stored in the first cache area or the second cache area is sent to the client. The times at which the program files stored in the first cache area are accessed satisfy a preset time condition, and the frequencies at which the program files stored in the second cache area are accessed satisfy a preset frequency condition.
It should be noted that, the method and the device for accessing a program file according to the embodiments of the present disclosure may be used in the financial field, and may also be used in any field other than the financial field, and the application fields of the method and the device for accessing a program file according to the embodiments of the present disclosure are not limited.
Fig. 1 schematically illustrates an application scenario of an access method of a program file and an access apparatus of a program file according to an embodiment of the present disclosure. It should be noted that fig. 1 illustrates only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but it does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, an application scenario 100 according to this embodiment may include clients 101, 102, 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the clients 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 105 through the network 104 using clients 101, 102, 103 to receive or send messages, etc. Various communication client applications may be installed on clients 101, 102, 103, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, and the like (by way of example only).
The clients 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) that provides support for websites browsed by users using clients 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the client.
The server 105 may, for example, cache a plurality of program files, and the server 105 may receive accesses to the program files by the clients 101, 102, 103 and send the program files to the clients 101, 102, 103.
It should be noted that, the method for accessing a program file provided by the embodiment of the present disclosure may be generally performed by the server 105. Accordingly, the access device for the program files provided in the embodiments of the present disclosure may be generally disposed in the server 105. The method of accessing a program file provided by the embodiments of the present disclosure may also be performed by a server or cluster of servers other than the server 105 and capable of communicating with the clients 101, 102, 103 and/or the server 105. Accordingly, the access means for the program files provided by the embodiments of the present disclosure may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the clients 101, 102, 103 and/or the server 105.
It should be understood that the number of clients, networks, and servers in fig. 1 is merely illustrative. There may be any number of clients, networks, and servers, as desired for implementation.
Caching techniques store media files (program files) in memory that is limited in space but fast to access, and employ cache replacement algorithms to improve the efficiency of data access in the system. Caching algorithms are one of the core technologies in computer fields such as storage systems, databases, and web servers. Current applet open platform projects typically employ LRU (Least Recently Used), the least-recently-used replacement algorithm.
The LRU algorithm is a commonly used basic cache replacement algorithm. Its idea is that when data replacement is performed, the least recently used data block in the cache is replaced out of the cache. The implementation is also relatively simple and captures the "recency" characteristic of the workload: program files are ordered by last access time, the most recently accessed program file is placed at the head of the queue, and the program file at the tail of the queue is deleted on replacement. LRU can capture the "recency" characteristic of the workload but not the "frequency" characteristic; that is, recently accessed program files are cached by the LRU algorithm, but it is difficult to keep program files with a high access frequency in the cache.
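A minimal LRU cache in the spirit described above can be built on Python's `OrderedDict`. This is a sketch for illustration, not the platform's actual implementation; the class and method names are assumptions.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry on overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # oldest first, newest last

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            # Delete the entry at the "tail" of the recency order.
            self.entries.popitem(last=False)
        self.entries[key] = value
```

Note that once the cache is full, a burst of one-time accesses evicts everything, which is exactly the "frequency blindness" the text describes.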
In the disclosed embodiments, the program file may be cached by the ARC (Adaptive Replacement Cache) algorithm. ARC uses four LRU lists: T1, T2, B1 and B2. T1 can serve as the first cache area, T2 as the second cache area, B1 as the first additional storage area, and B2 as the second additional storage area. Together, T1, T2, B1 and B2 capture the recency and frequency characteristics. The data of T1 and T2 are stored in the cache, while B1 and B2 store no cached data. T1 is used to manage recently accessed program files, and T2 is used to manage program files accessed multiple times.
B1 and B2 receive the program files evicted from T1 and T2, respectively, and manage them subsequently. The ARC algorithm moves program files that have been accessed multiple times from T1 to T2, so that the cache manages both of the important characteristics, "recency" and "frequency". Because the ARC algorithm manages both frequently used and recently used program files, it achieves a markedly higher hit rate than the LRU caching algorithm.
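The four-list bookkeeping of classic ARC can be sketched as follows. This is a simplified illustration of the standard algorithm only: ghost-list size limits and byte-based accounting are omitted, and the disclosure's modified handling of additional-area hits differs from what is shown here. Class and attribute names are assumptions.

```python
from collections import OrderedDict

class ARCSketch:
    """Simplified sketch of ARC's four LRU lists (keys only, no payloads)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.t1 = OrderedDict()  # cached, seen once recently (first cache area)
        self.t2 = OrderedDict()  # cached, seen repeatedly (second cache area)
        self.b1 = OrderedDict()  # ghosts evicted from t1 (first additional area)
        self.b2 = OrderedDict()  # ghosts evicted from t2 (second additional area)
        self.p = 0               # adaptive target size for t1

    def request(self, key):
        if key in self.t1:                    # second hit: recency -> frequency
            self.t1.pop(key)
            self.t2[key] = True
        elif key in self.t2:                  # repeated hit: refresh LRU position
            self.t2.move_to_end(key)
        elif key in self.b1:                  # ghost hit: favour recency
            self.p = min(self.p + 1, self.capacity)
            self.b1.pop(key)
            self._insert_t2(key)
        elif key in self.b2:                  # ghost hit: favour frequency
            self.p = max(self.p - 1, 0)
            self.b2.pop(key)
            self._insert_t2(key)
        else:                                 # miss: new entry enters t1
            if len(self.t1) + len(self.t2) >= self.capacity:
                self._replace()
            self.t1[key] = True

    def _insert_t2(self, key):
        if len(self.t1) + len(self.t2) >= self.capacity:
            self._replace()
        self.t2[key] = True

    def _replace(self):
        # Evict from t1 while it exceeds its adaptive target, else from t2;
        # the evicted key becomes a ghost entry in the matching additional area.
        if self.t1 and len(self.t1) > self.p:
            evicted, _ = self.t1.popitem(last=False)
            self.b1[evicted] = True
        elif self.t2:
            evicted, _ = self.t2.popitem(last=False)
            self.b2[evicted] = True
```

The adaptive target `p` implements the behaviour claimed for the target number: a hit in the first additional area grows it by 1, a hit in the second shrinks it by 1.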
According to the embodiment of the disclosure, the ARC algorithm is improved, so that two important characteristics of 'recent' and 'frequency' can be considered simultaneously when the program file is cached, the hit rate of the cache is improved, frequent access to a server is reduced, and the network load is reduced.
The following describes a program file access method according to an exemplary embodiment of the present disclosure with reference to fig. 2 to 4 in conjunction with the application scenario of fig. 1. It should be noted that the above application scenario is only shown for the convenience of understanding the spirit and principles of the present disclosure, and the embodiments of the present disclosure are not limited in any way in this respect.
Fig. 2 schematically illustrates a flowchart of a method of accessing a program file according to an embodiment of the present disclosure.
As shown in fig. 2, the access method of the program file of the embodiment of the present disclosure may include, for example, operations S210 to S240. The method of accessing a program file of an embodiment of the present disclosure may be performed, for example, by the server 105 shown in fig. 1.
In operation S210, the accessed current program file is determined.
According to the embodiments of the present disclosure, the current program file is stored in the server. The program file may be, for example, a media file, and the media file may be stored in a media server.
The server may include a first cache area and a second cache area, where both the first cache area and the second cache area may be used to cache program files. By storing the program files in a cache manner, the speed of accessing the program files can be improved. The time when the program files stored in the first cache area are accessed meets the preset time condition, and the frequency when the program files stored in the second cache area are accessed meets the preset frequency condition.
The preset time condition includes that the time at which the program file is accessed falls within a preset time range, for example the most recent 1 month or 2 months. The program files stored in the first cache area are, for example, program files that have been accessed once recently. That is, the first cache area may be used to store program files that have recently been accessed exactly once.
The preset frequency condition comprises that the number of times that the program file is accessed in a preset time range is greater than 1. That is, the second cache region may be used to store program files that are accessed multiple times.
In operation S220, in the case where the current program file is stored in the first cache area, the current program file is moved from the first cache area to the second cache area.
If the first cache area already stores the current program file, the current program file has been accessed once in the past. Now that it is being accessed again, it has been accessed twice, so it may be moved from the first cache area to the second cache area for storage.
In operation S230, when the current program file is not stored in at least the first cache area and the second cache area, the current program file is stored to the first cache area.
In one embodiment, when neither the first cache region nor the second cache region stores the current program file, it may indicate that the current program file is accessed for the first time, and then the current program file may be stored in the first cache region.
Next, in operation S240, the program file stored in the first cache area or the second cache area is transmitted to the client in response to the access request of the client.
According to the embodiment of the disclosure, the server stores the recently accessed program files in the first cache area and stores the program files with higher access frequency in the second cache area, so that when the client accesses the program files, the server can read the program files from the first cache area or the second cache area, thereby improving the access efficiency of the program files, reducing access delay and improving the cache hit rate.
In another embodiment, the server of the embodiments of the present disclosure may further include a first additional storage area and a second additional storage area. Wherein the first additional storage area and the second additional storage area may be non-cache areas. The first additional storage area may be used to store the program files deleted from the first cache area, and the number of times the program files stored in the first additional storage area are accessed may be 1. The second additional storage area may be used to store the program files deleted from the second cache area, and the number of times the program files stored in the second additional storage area are accessed may be 2 times or more.
When the server further includes the first additional storage area and the second additional storage area, operation S230 of storing the current program file to the first cache area may include: when the current program file is stored in none of the first cache area, the second cache area, the first additional storage area, and the second additional storage area, storing the current program file to the first cache area.
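Restricted to this four-list case, the check can be written as a membership test against all four areas; a hedged sketch (the function name and the use of `OrderedDict` for each area are illustrative assumptions):

```python
from collections import OrderedDict

def handle_full_miss(t1, t2, b1, b2, key, data):
    """Store `key` in T1 only when it is absent from T1, T2, B1 and B2 (S230/S303)."""
    if key in t1 or key in t2 or key in b1 or key in b2:
        return False                     # a hit somewhere; other branches apply
    t1[key] = data                       # first access: cache in the first cache area
    t1.move_to_end(key, last=False)      # the newly accessed file goes to the queue head
    return True
```

A `True` return corresponds to a first-time access; any other case is handled by the hit branches described below.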
Fig. 3 schematically illustrates a flow chart of a method of accessing a program file according to another embodiment of the present disclosure.
As shown in fig. 3, the access method of the program file of the embodiment of the present disclosure may include, for example, operations S301 to S311. For example, operation S301 is the same as or similar to operation S210, operation S303 is the same as or similar to operation S230, and operation S305 is the same as or similar to operation S220.
In the embodiment of the disclosure, T1 represents a first cache area, T2 represents a second cache area, B1 represents a first additional storage area, and B2 represents a second additional storage area.
In operation S301, the accessed current program file is determined.
In operation S302, it is determined whether the current program file is stored in the first cache area T1, the second cache area T2, the first additional storage area B1, or the second additional storage area B2. If the current program file is stored in none of the four areas, operation S303 is performed. If the current program file is stored in any one of them, a corresponding operation is performed according to the specific storage location of the current program file. For example, operation S305 is performed when the current program file is stored in the first cache area T1, operation S307 is performed when it is stored in the second cache area T2, operation S308 is performed when it is stored in the first additional storage area B1, and operation S309 is performed when it is stored in the second additional storage area B2.
In operation S303, the current program file is stored in the first cache area T1.
According to an embodiment of the present disclosure, after storing the current program file in the first cache area T1, operation S304 may be performed.
In operation S304, the program files in the first cache area T1 are rearranged by access time.
For example, the first cache area T1 is a storage queue, the program files in the queue are arranged according to access time, and the program file with the most recent access time is placed at the head of the queue. After storing the current program file in the first cache area T1, the program files in T1 may be rearranged by access time. Alternatively, since the current program file has just been accessed, it may be placed directly at the head of the queue of the first cache area T1.
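The head-insertion shortcut just described maps naturally onto an ordered mapping; a minimal sketch (Python's `OrderedDict` stands in for the storage queue; this is not the patent's actual implementation):

```python
from collections import OrderedDict

def touch_recency(t1, key, data=None):
    """Operation S304: place `key` at the head of the recency queue T1."""
    if data is not None:
        t1[key] = data
    t1.move_to_end(key, last=False)   # head of the queue = most recently accessed

t1 = OrderedDict()
touch_recency(t1, "app.js", b"...")
touch_recency(t1, "lib.js", b"...")
touch_recency(t1, "app.js")           # re-access moves the file back to the head
```

After these calls the queue order is `["app.js", "lib.js"]`, with the most recently accessed file at the head.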
When the current program file is stored in the first cache area T1, operations S305 to S306 may be performed.
In operation S305, the current program file is moved from the first cache area T1 to the second cache area T2.
If the first cache area T1 already stores the current program file, the current program file has been accessed once in the past. Now that it is being accessed again, it has been accessed twice, and it may be moved from the first cache area T1 to the second cache area T2 for storage.
After the current program file is moved from the first cache area T1 to the second cache area T2, operation S306 may be performed.
In operation S306, the program files in the second cache area T2 are rearranged by access frequency.
For example, the second cache area T2 is a storage queue, the program files in the queue are arranged according to access frequency, and the program file with the highest access frequency is placed at the head of the queue. After storing the current program file in the second cache area T2, all program files in T2 may be rearranged according to access frequency.
In operation S307, in the case where the current program file is stored in the second cache area T2, the access frequency of the current program file in the second cache area T2 is updated, and the program files in the second cache area T2 are rearranged according to access frequency.
For ease of understanding, the embodiments of the present disclosure may characterize access frequency in terms of access counts. For example, suppose the second cache area T2 already stores the current program file, and the current program file has been accessed 2 times in the past. When it is accessed again, the access count of the current program file stored in the second cache area T2 is increased by 1, so the updated access count is 3. All program files in the second cache area T2 may then be rearranged according to their access counts.
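Operation S307 can be sketched as a count increment followed by a re-sort of the queue (the auxiliary `counts` mapping is an illustrative assumption; the text only states that the access frequency is updated):

```python
from collections import OrderedDict

def touch_frequency(t2, counts, key):
    """S307: update the access count of `key` and re-order T2 by frequency."""
    counts[key] = counts.get(key, 0) + 1
    # rebuild the queue so that higher-frequency files sit at the head
    reordered = sorted(t2.items(), key=lambda kv: counts[kv[0]], reverse=True)
    t2.clear()
    t2.update(reordered)

t2 = OrderedDict([("a.css", b"."), ("b.css", b".")])
counts = {"a.css": 2, "b.css": 2}
touch_frequency(t2, counts, "b.css")   # b.css is now at 3 accesses
```

Rebuilding the queue in full is the simplest way to keep the head ordered by frequency; a real implementation might instead splice only the single moved entry.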
In operation S308, when the current program file is stored in the first additional storage area B1, the current program file is moved to the second additional storage area B2 and stored in the first cache area T1.
Since the first additional storage area B1 stores the program files deleted from the first cache area T1, the program files in B1 were accessed earlier than those in T1, and each has been accessed once. If the current program file is already stored in B1, it was accessed once at some earlier time. Now that it is being accessed again, it has been accessed twice, so it may be moved from the first additional storage area B1 to the second additional storage area B2. And since the current program file has just been accessed and is therefore the most recently accessed program file, it may also be stored into the first cache area T1.
In operation S309, when the current program file is stored in the second additional storage area B2, the current program file is moved to the second cache area T2, and the program files in the second cache area T2 are rearranged according to access frequency.
Since the second additional storage area B2 stores the program files deleted from the second cache area T2, the program files in B2 are accessed less frequently than those in T2, and each has been accessed 2 or more times. If the current program file is already stored in B2, it has been accessed at least 2 times in the past. When it is accessed again, it may be moved from the second additional storage area B2 to the second cache area T2 for storage. All program files in T2 may then be rearranged by access frequency, for example with the most frequently accessed program files at the head of the queue.
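Operations S308 to S311 together handle hits in the two additional storage areas; a hedged sketch that returns the updated target number P (the dict-based areas are an assumption, as before):

```python
from collections import OrderedDict

def on_ghost_hit(t1, t2, b1, b2, key, data, p):
    """S308-S311: handle a hit in B1 or B2 and adapt the target number P."""
    if key in b1:
        del b1[key]
        b2[key] = None                    # S308: the file is now a multi-access ghost
        t1[key] = data                    # ...and is re-cached as most recently used
        t1.move_to_end(key, last=False)
        return p + 1                      # S310: grow the target size of T1
    if key in b2:
        del b2[key]
        t2[key] = data                    # S309: re-cache among the frequent files
        t2.move_to_end(key, last=False)   # (a full re-sort by frequency would follow)
        return p - 1                      # S311: shrink the target size of T1
    return p                              # not a ghost hit; other branches apply
```

A B1 hit signals that recently-seen files are being evicted too eagerly, so P grows; a B2 hit signals the opposite, so P shrinks.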
In the embodiments of the present disclosure, program files with higher access frequency are managed through the second cache area T2 and the second additional storage area B2. Because the stored program files differ in access frequency, their probability of being cleared also differs. For example, when cleaning up the program files in the second cache area T2, the program files with low access frequency may be cleared first, enabling more precise management of the program files.
The disclosed embodiments also define a target number P, which characterizes the target number of recently accessed program files held in the first cache area T1.
In operation S310, when the current program file is stored in the first additional storage area B1, the target number P is increased by 1.
In operation S311, when the current program file is stored in the second additional storage area B2, the target number P is decreased by 1.
The target number P serves as a reference when cleaning up the program files.
Fig. 4 schematically illustrates a flow chart of a method of accessing a program file according to another embodiment of the present disclosure.
As shown in fig. 4, the access method of the program file of the embodiment of the present disclosure may include operations S401 to S408, for example.
In operation S401, it is determined whether the total data amount of the program files stored in the first cache area T1 and the second cache area T2 is greater than the preset data amount C. If yes, operations S402 to S404 are performed; if not, operation S405 is performed.
According to an embodiment of the present disclosure, the preset data amount C may be, for example, 10 MB or 100 MB.
For example, when the total data amount of the program files stored in the first cache area T1 and the second cache area T2 is greater than the preset data amount C, part of the program files in the first cache area T1 are moved to the first additional storage area B1 and/or part of the program files in the second cache area T2 are moved to the second additional storage area B2.
According to the embodiment of the disclosure, in order to improve the cache hit rate, when the data amount of the program files stored in the first cache area T1 and the second cache area T2 is greater than the preset data amount C, the stored program files need to be cleaned.
In operation S402, it is determined whether the number of program files stored in the first cache area T1 is greater than the target number P. If yes, operation S403 is performed; if not, operation S404 is performed.
In operation S403, when the number of program files stored in the first cache area T1 is greater than the target number P, part of the program files in the first cache area T1 are moved to the first additional storage area B1.
For example, the program files are traversed from the tail of the queue of the first cache area T1, and the program files at the tail are moved to the first additional storage area B1 until the number of program files stored in the first cache area T1 is less than or equal to the target number P.
In operation S404, when the number of program files stored in the first cache area T1 is less than or equal to the target number P, part of the program files in the second cache area T2 are moved to the second additional storage area B2.
For example, the program files are traversed from the tail of the queue of the second cache area T2, and the program files at the tail are moved to the second additional storage area B2 until the number of program files stored in the second cache area T2 is less than or equal to the target number P.
In operation S405, it is determined whether the total data amount of the program files stored in the first cache area T1 and the first additional storage area B1 is greater than the preset data amount C. If yes, operation S406 is performed; if not, operation S407 is performed.
In operation S406, when the total data amount of the program files stored in the first cache area T1 and the first additional storage area B1 is greater than the preset data amount C, part of the program files in the first additional storage area B1 are deleted until the total data amount stored in T1 and B1 is less than or equal to the preset data amount C.
In operation S407, it is determined whether the total data amount of the program files stored in the second cache area T2 and the second additional storage area B2 is greater than the preset data amount C. If so, operation S408 is performed.
In operation S408, when the total data amount of the program files stored in the second cache area T2 and the second additional storage area B2 is greater than the preset data amount C, part of the program files in the second additional storage area B2 are deleted until the total data amount stored in T2 and B2 is less than or equal to the preset data amount C.
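The cleanup pass of operations S401 to S408 can be sketched as follows (sizes are counted in entries rather than bytes, a simplification of the preset data amount C; the function name `cleanup` is hypothetical):

```python
from collections import OrderedDict

def cleanup(t1, t2, b1, b2, p, c):
    """Cleanup sketched from operations S401-S408."""
    # S401-S404: while the cached areas exceed C, evict from the tail of T1
    # if T1 holds more than P files, otherwise from the tail of T2.
    while len(t1) + len(t2) > c:
        if len(t1) > p:
            key, _ = t1.popitem(last=True)   # S403: tail of T1 -> ghost list B1
            b1[key] = None
        else:
            key, _ = t2.popitem(last=True)   # S404: tail of T2 -> ghost list B2
            b2[key] = None
    # S405-S406: bound T1 + B1 by deleting the oldest ghosts in B1
    while len(t1) + len(b1) > c and b1:
        b1.popitem(last=True)
    # S407-S408: bound T2 + B2 likewise
    while len(t2) + len(b2) > c and b2:
        b2.popitem(last=True)
```

Evicted files land in the ghost lists first, which is what lets the earlier B1/B2 hit handling adapt the target number P on a later access.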
Fig. 5 schematically illustrates a block diagram of an access device for program files according to an embodiment of the present disclosure.
As shown in fig. 5, the access device 500 for program files may include, for example, a determination module 510, a moving module 520, a storage module 530, and a transmission module 540.
The determination module 510 may be used to determine the current program file being accessed. According to an embodiment of the present disclosure, the determining module 510 may perform, for example, operation S210 described above with reference to fig. 2, which is not described herein.
The moving module 520 may be configured to move the current program file from the first cache area to the second cache area in a case where the current program file is stored in the first cache area. According to an embodiment of the present disclosure, the moving module 520 may perform, for example, operation S220 described above with reference to fig. 2, which is not described herein.
The storage module 530 may be configured to store the current program file to the first cache area when the current program file is not stored in the first cache area and the second cache area. According to an embodiment of the present disclosure, the storage module 530 may perform, for example, operation S230 described above with reference to fig. 2, which is not described herein.
The transmitting module 540 may be configured to transmit the program file stored in the first cache area or the second cache area to the client in response to an access request of the client. The transmitting module 540 may, for example, perform operation S240 described above with reference to fig. 2 according to an embodiment of the present disclosure, which is not described herein.
According to the embodiments of the present disclosure, the time at which a program file stored in the first cache area is accessed satisfies a preset time condition, and the frequency at which a program file stored in the second cache area is accessed satisfies a preset frequency condition.
According to an embodiment of the present disclosure, after moving the current program file from the first cache area to the second cache area, the apparatus 500 may further include: and the first arrangement module is used for rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, after storing the current program file in the first cache area, the apparatus 500 may further include: and the second arrangement module is used for rearranging the program files in the first buffer area according to the accessed time.
According to an embodiment of the present disclosure, the apparatus 500 may further include: an updating module and a third arranging module. The updating module is used for updating the accessed frequency of the current program files in the second cache area under the condition that the current program files are stored in the second cache area, and the third arrangement module is used for rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, the apparatus 500 may further include: and the additional moving module is used for moving part of the program files in the first cache area to the first additional storage area and/or moving part of the program files in the second cache area to the second additional storage area when the data volume of the program files stored in the first cache area and the second cache area is larger than the preset data volume.
According to an embodiment of the present disclosure, the apparatus 500 may further include: and the moving and storing module is used for moving the current program file to the second additional storage area when the current program file is stored in the first additional storage area, and storing the current program file in the first buffer area.
According to an embodiment of the present disclosure, the apparatus 500 may further include: and the moving and arranging module is used for moving the current program file to the second cache area when the current program file is stored in the second additional storage area, and rearranging the program files in the second cache area according to the accessed frequency.
According to an embodiment of the present disclosure, the apparatus 500 may further include: adding modules and subtracting modules. Wherein the increasing module is used for increasing the target number by 1 when the current program file is stored in the first additional storage area, and the decreasing module is used for decreasing the target number by 1 when the current program file is stored in the second additional storage area.
According to an embodiment of the present disclosure, the apparatus 500 may further include a first deletion module and a second deletion module. The first deletion module is configured to delete part of the program files in the first additional storage area when the data amount of the program files stored in the first cache area and the first additional storage area is greater than the preset data amount. The second deletion module is configured to delete part of the program files in the second additional storage area when the data amount of the program files stored in the first cache area and the first additional storage area is less than or equal to the preset data amount and the data amount of the program files stored in the second cache area and the second additional storage area is greater than the preset data amount.
The present disclosure also provides a computing device that may include: one or more processors and a storage device. The storage device may be used to store one or more programs. Wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method as mentioned above.
Another aspect of the present disclosure provides a non-volatile readable storage medium storing computer executable instructions that, when executed, may be used to implement the above-mentioned method.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which, when executed, may be used to implement the above-mentioned method.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or packages a circuit, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the determination module 510, the moving module 520, the storage module 530, and the transmission module 540 may be combined and implemented in one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the determination module 510, the moving module 520, the storage module 530, and the transmission module 540 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable way of integrating or packaging a circuit, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, at least one of the determination module 510, the moving module 520, the storage module 530, and the transmission module 540 may be at least partially implemented as a computer program module which, when executed, may perform the corresponding functions.
FIG. 6 schematically illustrates a block diagram of a computer system for enabling access to program files, according to an embodiment of the disclosure. The computer system illustrated in fig. 6 is merely an example and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, computer system 600 includes a processor 601 and a computer-readable storage medium 602. The system 600 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 601 may include, for example, a general purpose microprocessor, an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 601 may also include on-board memory for caching purposes. The processor 601 may be a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
The computer-readable storage medium 602 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 602 may comprise a computer program 603, which computer program 603 may comprise code/computer-executable instructions which, when executed by the processor 601, cause the processor 601 to perform a method according to an embodiment of the present disclosure or any variant thereof.
The computer program 603 may be configured with computer program code comprising, for example, computer program modules. For example, in an example embodiment, the code in the computer program 603 may include one or more program modules, for example module 603A, module 603B, and so on. It should be noted that the division and number of the modules are not fixed, and a person skilled in the art may use suitable program modules or combinations of program modules according to the actual situation; when these program modules are executed by the processor 601, they enable the processor 601 to perform the methods according to embodiments of the present disclosure or any variations thereof.
According to an embodiment of the present disclosure, at least one of the determining module 510, the moving module 520, the storing module 530, and the transmitting module 540 may be implemented as computer program modules described with reference to fig. 6, which when executed by the processor 601, may implement the respective operations described above.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs that when executed implement the methods described above.
According to embodiments of the present disclosure, the computer-readable storage medium may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, fiber optic cable, radio frequency signals, or the like, or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be combined without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should therefore not be limited to the above-described embodiments, but should be determined by the appended claims and their equivalents.

Claims (10)

1. An access method for a program file, comprising:
determining the current program file to be accessed;
moving the current program file from a first cache area to a second cache area when the current program file is stored in the first cache area;
storing the current program file in the first cache area when the current program file is not stored in at least the first cache area and the second cache area; and
transmitting a program file stored in the first cache area or the second cache area to a client in response to an access request of the client,
wherein the time at which a program file stored in the first cache area was accessed satisfies a preset time condition, and the frequency with which a program file stored in the second cache area is accessed satisfies a preset frequency condition.
2. The method of claim 1, wherein after moving the current program file from the first cache area to the second cache area, the method further comprises:
rearranging the program files in the second cache area according to the accessed frequency.
3. The method of claim 1, wherein after storing the current program file in the first cache area, the method further comprises:
rearranging the program files in the first cache area according to the time at which they were accessed.
4. The method of claim 1, further comprising:
updating the accessed frequency of the current program file in the second cache area when the current program file is stored in the second cache area; and
rearranging the program files in the second cache area according to the accessed frequency.
5. The method of claim 1, further comprising:
moving part of the program files in the first cache area to a first additional storage area and/or part of the program files in the second cache area to a second additional storage area when the data volume of the program files stored in the first cache area and the second cache area is greater than a preset data volume.
6. The method of claim 5, wherein storing the current program file in the first cache area when the current program file is not stored in at least the first cache area and the second cache area comprises:
storing the current program file in the first cache area when the current program file is not stored in any of the first cache area, the second cache area, the first additional storage area and the second additional storage area.
7. The method of claim 6, further comprising:
moving the current program file to the second additional storage area and storing the current program file in the first cache area when the current program file is stored in the first additional storage area.
8. The method of claim 7, further comprising:
moving the current program file to the second cache area and rearranging the program files in the second cache area according to the accessed frequency when the current program file is stored in the second additional storage area.
9. The method of claim 8, wherein moving part of the program files in the first cache area to the first additional storage area and/or part of the program files in the second cache area to the second additional storage area comprises:
moving part of the program files in the first cache area to the first additional storage area when the number of program files stored in the first cache area is greater than a target number; and
moving part of the program files in the second cache area to the second additional storage area when the number of program files stored in the first cache area is less than or equal to the target number.
10. The method of claim 9, further comprising:
increasing the target number by 1 when the current program file is stored in the first additional storage area; and
decreasing the target number by 1 when the current program file is stored in the second additional storage area.
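Read as an algorithm, claims 1–10 describe a two-tier cache close in spirit to an adaptive replacement cache: a recency-ordered first cache area, a frequency-ordered second cache area, two additional storage areas that receive demoted files, and a target number for the first area that grows on hits in the first additional area and shrinks on hits in the second (claim 10). The following Python sketch is an illustrative interpretation only, not the patented implementation: the class and attribute names (`TwoTierCache`, `t1`, `t2`, `b1`, `b2`, `target`) are assumptions, and capacity is measured in file count rather than data volume for simplicity.

```python
from collections import OrderedDict

class TwoTierCache:
    """Illustrative sketch of the claimed access method (names assumed).

    t1: first cache area, kept in recency order (claim 3).
    t2: second cache area, ranked by access frequency (claims 2 and 4).
    b1/b2: first/second additional storage areas (claims 5-8).
    target: adaptive size target for t1 (claims 9-10).
    """

    def __init__(self, capacity):
        self.capacity = capacity        # stand-in for the preset data volume
        self.t1 = OrderedDict()         # file name -> content, most recent last
        self.t2 = {}                    # file name -> (content, access frequency)
        self.b1 = OrderedDict()         # files demoted from t1
        self.b2 = OrderedDict()         # files demoted from t2
        self.target = capacity // 2     # target number for t1

    def access(self, name, content=None):
        if name in self.t1:
            # claim 1: hit in first cache area -> move to second cache area
            content = self.t1.pop(name)
            self.t2[name] = (content, 1)
        elif name in self.t2:
            # claim 4: hit in second cache area -> bump accessed frequency
            c, f = self.t2[name]
            self.t2[name] = (c, f + 1)
            content = c
        elif name in self.b1:
            # claims 7 and 10: hit in first additional area -> grow target,
            # move the file to the second additional area AND back into t1
            # (the claim literally places it in both)
            self.target = min(self.capacity, self.target + 1)
            content = self.b1.pop(name)
            self.b2[name] = content
            self.t1[name] = content
        elif name in self.b2:
            # claims 8 and 10: hit in second additional area -> shrink target,
            # promote into the second cache area
            self.target = max(0, self.target - 1)
            content = self.b2.pop(name)
            self.t2[name] = (content, 1)
        else:
            # claim 1: not cached anywhere -> store in first cache area
            self.t1[name] = content
        self._evict()
        return content

    def _evict(self):
        # claims 5 and 9: over capacity -> demote from t1 while it exceeds
        # the target number, otherwise demote the least-frequent file in t2
        while len(self.t1) + len(self.t2) > self.capacity:
            if len(self.t1) > self.target and self.t1:
                name, c = self.t1.popitem(last=False)   # least recently stored
                self.b1[name] = c
            elif self.t2:
                name = min(self.t2, key=lambda k: self.t2[k][1])
                self.b2[name] = self.t2.pop(name)[0]
            else:
                break
```

With a capacity of two, accessing `a`, `b`, `a`, `c` leaves `a` in the second cache area (re-accessed), `c` in the first, and demotes `b` to the first additional area; a later access to `b` then raises the target by 1, matching claim 10's adaptation rule.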
CN202010764737.6A 2020-07-31 2020-07-31 Program file access method, apparatus, computing device and medium Active CN111859225B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010764737.6A CN111859225B (en) 2020-07-31 2020-07-31 Program file access method, apparatus, computing device and medium


Publications (2)

Publication Number Publication Date
CN111859225A CN111859225A (en) 2020-10-30
CN111859225B true CN111859225B (en) 2023-08-22

Family

ID=72954371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010764737.6A Active CN111859225B (en) 2020-07-31 2020-07-31 Program file access method, apparatus, computing device and medium

Country Status (1)

Country Link
CN (1) CN111859225B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113613063B (en) * 2021-07-16 2023-08-04 深圳市明源云科技有限公司 Application anomaly reduction method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101682621A (en) * 2007-03-12 2010-03-24 思杰系统有限公司 Systems and methods for cache operations
CN103620576A (en) * 2010-11-01 2014-03-05 七网络公司 Caching adapted for mobile application behavior and network conditions
CN107818026A (en) * 2016-09-14 2018-03-20 中兴通讯股份有限公司 A kind of method and apparatus of cache partitions reconstruct
CN108287836A (en) * 2017-01-09 2018-07-17 腾讯科技(深圳)有限公司 A kind of resource caching method and device
CN108536486A (en) * 2018-04-08 2018-09-14 苏州犀牛网络科技有限公司 The loading method and device of RN small routines

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9696943B2 (en) * 2015-11-09 2017-07-04 International Business Machines Corporation Accessing stored data



Similar Documents

Publication Publication Date Title
US9773011B2 (en) On-demand caching in a WAN separated distributed file system or clustered file system cache
US9678735B2 (en) Data caching among interconnected devices
US8566547B2 (en) Using a migration cache to cache tracks during migration
JP5450841B2 (en) Mechanisms for supporting user content feeds
US11017152B2 (en) Optimizing loading of web page based on aggregated user preferences for web page elements of web page
CN107197359B (en) Video file caching method and device
US9614925B2 (en) Intelligent file pre-fetch based on access patterns
CN103491152A (en) Metadata obtaining method, device and system in distributed file system
US20160315835A1 (en) Tracking content sharing across a variety of communications channels
CN109714229B (en) Performance bottleneck positioning method of distributed storage system
CN103607312A (en) Data request processing method and system for server system
US20210286730A1 (en) Method, electronic device and computer program product for managing cache
CN107506154A (en) A kind of read method of metadata, device and computer-readable recording medium
US10901631B2 (en) Efficient adaptive read-ahead in log structured storage
CN111859225B (en) Program file access method, apparatus, computing device and medium
JP5272428B2 (en) Predictive cache method for caching information with high access frequency in advance, system thereof and program thereof
US9959245B2 (en) Access frequency approximation for remote direct memory access
US20180089210A1 (en) Tracking access pattern of inodes and pre-fetching inodes
CN112948444A (en) Management method and device for cache data
CN110020373A (en) The method and apparatus that static page is stored, browsed
US20180089086A1 (en) Tracking access pattern of inodes and pre-fetching inodes
CN110413215B (en) Method, apparatus and computer program product for obtaining access rights
US9787564B2 (en) Algorithm for latency saving calculation in a piped message protocol on proxy caching engine
CN109992428B (en) Data processing method and system
CN113688160A (en) Data processing method, processing device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant