CN110955486A - File caching efficiency tracking method and device, storage medium and terminal - Google Patents
- Publication number
- CN110955486A (application CN201811126354.5A)
- Authority
- CN
- China
- Prior art keywords
- file
- preset
- file access
- access information
- program code
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/172—Caching, prefetching or hoarding of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Abstract
The embodiment of the application discloses a method and a device for tracking file caching efficiency, a storage medium and a terminal. The method comprises the following steps: detecting that a preset file access event is triggered; acquiring file access information corresponding to the preset file access event based on preset program code; and storing the file access information in a storage format corresponding to a preset virtual machine, wherein the file access information is used for instructing a user space to calculate the file caching efficiency. By adopting this technical scheme, the file caching efficiency can be dynamically tracked while the operating system runs, and the volume of data exchanged between kernel space and user space is effectively reduced, thereby lightening the burden that tracking file caching efficiency places on the operating system and improving system stability.
Description
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to a method and a device for tracking file caching efficiency, a storage medium and a terminal.
Background
Currently, much data is stored in the form of files. In the operating system of a terminal, the file cache (page cache) plays a very important role: caching file contents in memory improves file access efficiency.
However, since memory capacity is much smaller than that of external storage, entries must be continuously evicted from and added to the file cache as terminal usage time increases. Accurately evicting unneeded cached files, or prefetching files likely to be used later, is therefore an important subject of file cache optimization, and dynamically tracking file caching efficiency is the most direct and important criterion for judging and optimizing the cache's effect. However, the related-art schemes for dynamically tracking file caching efficiency are still imperfect and place a very large burden on the system.
Disclosure of Invention
The embodiment of the application provides a method and a device for tracking file caching efficiency, a storage medium and a terminal, offering a tracking scheme on which optimization of the file cache can be based.
In a first aspect, an embodiment of the present application provides a method for tracking file caching efficiency, including:
detecting that a preset file access event is triggered;
acquiring file access information corresponding to the preset access event based on a preset program code, wherein the preset program code comprises a program code compiled based on a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event;
and storing the file access information by adopting a storage format corresponding to the preset virtual machine, wherein the file access information is used for indicating a user space to calculate the file caching efficiency based on the file access information.
In a second aspect, an embodiment of the present application further provides a device for tracking file caching efficiency, where the device includes:
the event detection module is used for detecting that a preset file access event is triggered;
the information acquisition module is used for acquiring file access information corresponding to the preset access event based on a preset program code, wherein the preset program code comprises a program code compiled based on a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event;
and the information storage module is used for storing the file access information by adopting a storage format corresponding to the preset virtual machine, wherein the file access information is used for indicating a user space to calculate the file caching efficiency based on the file access information.
In a third aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for tracking file caching efficiency according to embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method for tracking file caching efficiency according to the embodiment of the present application.
The embodiment of the application provides a file caching efficiency tracking scheme in which, if a preset file access event is triggered, file access information corresponding to the event is obtained based on preset program code, and the file access information is stored in the storage format corresponding to a preset virtual machine, the file access information being used to instruct user space to calculate the file caching efficiency. With this technical scheme, the preset program code can be inserted in kernel space, based on the preset virtual machine, before the function to be called corresponding to the preset file access event; the preset program code acquires and stores the file access information so that user space can calculate the file caching efficiency from it. The file caching efficiency can thus be dynamically tracked while the operating system runs, interaction between kernel space and user space is effectively reduced, the burden that tracking file caching efficiency places on the operating system is lightened, and system stability is improved.
Drawings
Fig. 1 is a flowchart of a method for tracking file caching efficiency according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another method for tracking file caching efficiency according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another method for tracking file caching efficiency according to an embodiment of the present disclosure;
fig. 4 is a block diagram illustrating a structure of a device for tracking file caching efficiency according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a block diagram of a smart phone according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Fig. 1 is a flowchart of a method for tracking file caching efficiency according to an embodiment of the present application, which may be performed by a device for tracking file caching efficiency, where the device may be implemented by software and/or hardware, and may be generally integrated in a terminal. As shown in fig. 1, the method includes:
For example, the terminal in the embodiment of the present application may include a device provided with an operating system, such as a mobile phone, a tablet computer, a notebook computer, a computer, and an intelligent appliance.
The type of the operating system in the embodiment of the present application is not limited, and may include, for example, the Android operating system, the Symbian operating system, the Windows operating system, the Apple iOS operating system, and the like. For convenience of explanation, the Android operating system is taken as an example in the following description. In the Android operating system of a terminal, access to various files is very frequent. In Android file access, the file cache plays a very important role, and file access efficiency can be improved by caching file contents in memory. However, since memory capacity is limited and much smaller than that of external storage, caching a file in memory may require evicting other files that are already cached; that is, the file caching process is one of continuously deleting and adding files. Therefore, how to more accurately evict unneeded cached files, or to prefetch files likely to be used later, is an important subject of file cache optimization. At present, dynamically tracking file caching efficiency is the most direct and important standard for judging and optimizing the effect of the file cache. In the related art, a Linux kernel-level tracing framework such as ftrace is generally adopted to dynamically track file caching efficiency in real time. However, such a tracing framework places a great burden on the operating system and easily affects system stability, so improvement is needed.
For example, when kernel information needs to be traced, a Linux kernel-level tracing framework such as ftrace is used. ftrace began as a function tracer that could only record the kernel's function call flow; it has since developed into a framework that lets developers add more kinds of tracers in a plug-in manner, helping them understand the runtime behavior of the Linux kernel for fault debugging or performance analysis, and it can therefore be used to trace the kernel side of file access. However, when a tracing framework such as ftrace is used for file access tracing, the collected information is written into a ring buffer and cannot be filtered or aggregated during collection. A large number of interactions between kernel space and user space therefore occur, and the user-space program occupies a large share of processor (CPU) resources. In addition, if information other than the kernel's standard-format information is to be collected, corresponding code needs to be inserted into the kernel, and inserting code into the kernel on top of an architecture such as ftrace can cause exceptions such as kernel crashes, which reduces system stability, lengthens the development cycle, and makes the result difficult to integrate into an actual mass-production system.
In the embodiment of the application, file access tracking can be realized based on a preset virtual machine in kernel space, after which user space calculates the file caching efficiency based on the tracking result. Illustratively, the preset file access event may include at least one of file read (read), file write (write), file synchronization (fsync), and file data synchronization (fdatasync). read reads a file; write writes a file; fsync synchronizes all modified file data in memory to the storage device, flushing not only the file's modified content (dirty pages) but also its description information (metadata, including size, access times st_atime and st_mtime, etc.); fdatasync likewise flushes data to disk but synchronizes metadata only when necessary, thereby saving an I/O write operation. Of course, other file access events may be included; the type or invocation of a file access event may differ between operating systems, and those skilled in the art can make an adaptive selection according to the operating system actually used.
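As a user-space illustration of the synchronization events just described (not the kernel-side tracking itself), Python's os module exposes the same fsync/fdatasync distinction:

```python
import os
import tempfile

# Write through the kernel page cache, then force the data to storage.
fd, path = tempfile.mkstemp()
os.write(fd, b"cached file content")
os.fsync(fd)      # flush dirty pages AND metadata (size, st_atime/st_mtime)
os.fdatasync(fd)  # flush data; sync metadata only if needed to read it back
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
os.remove(path)
```

After either call, the written data is durable on the storage device; fdatasync simply skips the metadata-only write when it can.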
For many operating systems, the kernel is implemented based on Linux; that is, the system bottom layer is the Linux kernel. The system partitions memory into kernel space and user space, and different operating systems may partition differently or with different results. User space generally refers to the memory area in which user processes run: application programs run in user space, and user-process data is stored there. Kernel space is the memory area occupied by the operating system: the operating system and drivers run in kernel space, and system data is stored there. User data and system data are thereby isolated, ensuring system stability. Generally, user space and kernel space interact through system calls; the set of all system calls provided by an operating system can be understood as its Application Program Interface (API), the interface between application programs and the system. A main function of the operating system is to manage hardware resources and provide a good environment for application developers, so the kernel provides a series of kernel functions with predefined functionality, presented to applications through the set of interfaces called system calls. A system call passes the application's request to the kernel, which invokes the corresponding kernel function to complete the required processing and returns the result to the application.
Step 110, detecting that a preset file access event is triggered.
In the embodiment of the application, when an application accesses a file, kernel space needs to be entered through a system call; that is, the corresponding system call interface must be invoked to access kernel space. Whether a preset file access event is triggered can therefore be judged by whether the system call interface corresponding to that event is invoked: if it is invoked, the preset file access event can be considered triggered.
For example, after the corresponding system call interface is invoked, a corresponding function in kernel space needs to be called to carry out the file access; this function may be referred to as the function to be called. Taking read, write, fsync, and fdatasync as examples, each preset file access event corresponds to its own function to be called, implementing the functions of reading a file, writing a file, synchronizing a file, and synchronizing file data respectively. The specific implementation forms of these functions may differ between operating systems, and the embodiments of the present application are not limited in this respect.
And step 120, acquiring file access information corresponding to the preset access event based on a preset program code.
In the embodiment of the application, preset program code for acquiring file access information can be inserted in advance, before the function to be called in kernel space, by writing it in the programming model of the preset virtual machine. The preset program code is executed before the function to be called corresponding to the preset file access event, and because the file access information is acquired by code inserted through the preset virtual machine, system stability can be ensured.
For example, the preset program code may be executed first, and the function to be called executed afterwards; the preset program code obtains information related to the file access performed by the function to be called, referred to in this embodiment of the present application as file access information. The file access information may include detailed information reflecting the file access process, including the file's name, its storage path, and the subject accessing the file: what file was accessed, where the file is stored, which application accessed it, the particular access mode, the number of times the file has been accessed, and so forth. The number of times a file has been accessed can be obtained by counting the accesses made by application programs: application A reads the file once, and the count increases by 1; application B then writes the file once, and the count increases by 1; application A then synchronizes the file once, and the count increases again. In this way the count changes as application programs operate on the file.
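The counting walk-through above (application A reads, B writes, A syncs) can be modeled with a per-file counter; the application labels and file name are the hypothetical ones from the text:

```python
from collections import defaultdict

# access_count[file] increases by one for every read, write, or sync,
# regardless of which application performed the operation.
access_count = defaultdict(int)

def record_access(app, file_name):
    """Model of the preset program code bumping the file's access count."""
    access_count[file_name] += 1

record_access("A", "the_file")  # application A reads the file  -> count 1
record_access("B", "the_file")  # application B writes the file -> count 2
record_access("A", "the_file")  # application A syncs the file  -> count 3
```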
And step 130, storing the file access information by adopting a storage format corresponding to the preset virtual machine.
It should be noted that the file access information is used to instruct the user space to calculate the file caching efficiency based on the file access information.
In the embodiment of the application, the acquired file access information is stored in the storage format corresponding to the preset virtual machine rather than in the kernel's standard format, which saves storage space. User space does not need to read the file access information continuously; instead it can read all the required file access information at once, periodically or on demand, which effectively reduces the number of interactions between kernel space and user space. Moreover, because the preset virtual machine's storage format is used, less information is read and the transmission volume is small, further reducing the amount of interaction data; the burden that file access tracking places on the system is thus reduced, and system stability is improved.
After user space obtains the file access information, the number of accesses to each file and the number of times each file was read from external storage are extracted, and the file caching efficiency is calculated from these two quantities, for example by the formula:
file caching efficiency = (file access count − external storage read count) / file access count
it should be noted that, whether the content of the file needs to be read or the content needs to be written into the file, the file needs to be read first, and therefore, for operations of different access types for the file, the file needs to be read first from the file cache or the external storage.
By tracking file caching efficiency in real time, a strong basis can be provided for file cache optimization. For example, if the file caching efficiency of file a is 70%, it can also be said that the hit rate of file a is 70%; that is, the probability that file a is read from the cache is 70%, or in other words, the probability that it is read from external storage is 30%. Similarly, suppose the file caching efficiency of file b is calculated to be 10%. Even if the time difference between file b's last access and the current time is smaller than that of file a (that is, the file that has gone unaccessed the longest is file a), when the file cache is optimized based on file caching efficiency, file b can be cleared from the file cache while file a is retained.
It is to be understood that the manner of optimizing the file cache based on file caching efficiency is not limited to that of the above example; the files in the file cache may also be sorted by file caching efficiency and cleaned starting from the lowest-efficiency file, and so on. The embodiment of the present application imposes no particular limitation.
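A minimal sketch of the efficiency calculation and eviction ordering described above, assuming (as the hit-rate example implies) that a file's caching efficiency is the fraction of its accesses served from the file cache rather than from external storage; the numbers for files a and b are taken from the example:

```python
def cache_efficiency(total_accesses, external_reads):
    # Fraction of accesses served from the file cache:
    # (total accesses - reads from external storage) / total accesses
    return (total_accesses - external_reads) / total_accesses

# Hypothetical per-file statistics extracted from the file access information.
stats = {
    "file_a": {"accesses": 100, "external_reads": 30},  # 70% hit rate
    "file_b": {"accesses": 100, "external_reads": 90},  # 10% hit rate
}
efficiency = {name: cache_efficiency(s["accesses"], s["external_reads"])
              for name, s in stats.items()}

# When trimming the cache, evict the lowest-efficiency file first:
# file_b (10%) is cleared before file_a (70%).
eviction_order = sorted(efficiency, key=efficiency.get)
```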
According to the technical scheme of this embodiment, if a preset file access event is triggered, file access information corresponding to the event is obtained based on preset program code, and the file access information is stored in the storage format corresponding to the preset virtual machine, the file access information being used to instruct user space to calculate the file caching efficiency. With this technical scheme, preset program code can be inserted in kernel space, based on the preset virtual machine, before the function to be called corresponding to the preset file access event; the preset program code acquires and stores the file access information so that user space can calculate the file caching efficiency from it. The file caching efficiency can thus be dynamically tracked while the operating system runs, the volume of data exchanged between kernel space and user space is effectively reduced, the burden that tracking file caching efficiency places on the operating system is lightened, and system stability is improved.
In some embodiments, the preset virtual machine includes the extended Berkeley Packet Filter (eBPF), and the storage format corresponding to the preset virtual machine includes a hash table. eBPF is a virtual machine implemented in the kernel. It was originally designed to filter network packets, but nowadays it can insert and execute virtual machine code at almost any position in the kernel, and the inserted code is extensively verified beforehand so that it is guaranteed not to affect system stability. The storage formats currently provided in eBPF include the hash table, a data structure accessed directly by key value: it maps a key to a position in the table in order to speed up lookup of a record. The advantage of this arrangement is that system stability can be further ensured while interaction between kernel space and user space is reduced.
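The way an in-kernel hash table cuts down kernel/user-space interaction can be illustrated with a plain-Python simulation (this models only the aggregation pattern, not the eBPF API, and the IDs are hypothetical):

```python
# Simulated stream of preset file access events observed in kernel space,
# as (application ID, file ID) pairs -- hypothetical values.
events = [("MN", 123), ("MN", 123), ("PQ", 456), ("MN", 123)]

# Kernel side (simulated): each event only increments one hash-table slot,
# so no record is pushed to user space per event.
hash_table = {}
for app_id, file_id in events:
    key = (app_id, file_id)
    hash_table[key] = hash_table.get(key, 0) + 1

# User-space side: one periodic read of the whole table replaces reading
# every individual event from a ring buffer (here, 4 crossings become 1).
snapshot = dict(hash_table)
```

With a ring-buffer scheme every event would cross the kernel/user boundary; with the table, only the aggregated counts do.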
Fig. 2 is a flowchart of another method for tracking file caching efficiency according to an embodiment of the present application, taking a preset virtual machine as eBPF as an example, the method includes the following steps:
For example, whether the preset file access event is triggered may be determined by whether the system call interface corresponding to it is invoked; if that interface is invoked, the preset file access event may be considered triggered. The preset file access events may include read, write, fsync, and fdatasync.
For example, it is determined whether preset program code written based on the preset virtual machine exists, in the virtual file system layer in kernel space, at the initial position where the function to be called corresponding to the preset file access event is invoked.
Assume the current preset file access event is read; the corresponding function to be called may be the vfs_read function, whose prototype is ssize_t vfs_read(struct file *file, char __user *buf, size_t count, loff_t *pos). The preset program code written based on eBPF is inserted at the position before the vfs_read function.
In the embodiment of the application, if the preset file access event is triggered, then before the file access information is acquired based on the preset program code, it is judged whether preset program code written based on the preset virtual machine exists in kernel space before the function to be called corresponding to the preset file access event. The benefit of this is that existing code in kernel space is not disturbed. The virtual file system layer can be understood as a file abstraction layer; the functions to be called that implement file access are generally located in this layer, and a program based on the preset virtual machine can be placed at the initial position where the function to be called is invoked in order to acquire the file access information. The timing and position of the existence check are thereby made definite, and successful execution of the preset program code can be guaranteed.
For example, the predetermined program code may be executed first, and then the function to be called may be executed. For some functions to be called, the file access information exists in the form of function parameters, and the parameter content corresponding to the function to be called can be obtained through a preset program code so as to obtain the file access information. And the file access information exists in the form of elements of a kernel data structure, and the kernel data structure content corresponding to the function to be called can be obtained through a preset program code. And some functions to be called exist, some file access information can exist in the function parameters, and some files can exist in corresponding kernel data structures, so that the function parameter content and the kernel data structure content related in the execution process of the functions to be called can be obtained by utilizing the preset program codes. This has the advantage that the file access information can be successfully and accurately acquired. When a function to be called is executed, some file access information exists in function parameters, such as a file path, a file size or a file ID; there may be some corresponding kernel data structures, such as a file name, application name for accessing the file, or application ID for accessing the file. The existing form is not limited, and after the function parameter content or the kernel data structure content is obtained, conversion may be required, so as to obtain the finally required file access information.
And 204, determining file access information corresponding to the function to be called according to the parameter content or the kernel data structure content.
The file access information may include, among other things, a file name, a file ID, a file path, the name of the application accessing the file, the ID of the application accessing the file, and so on.
For example, in the vfs_read function, the parameter buf points to a memory address in user space, from which a file path in the file access information can be obtained. The file access information can be obtained comprehensively by acquiring both the function parameter content and the kernel data structure content. After the parameter content or kernel data structure content is obtained, a conversion may be required to obtain the file access information that is finally needed. For example, some kernel data structures use a file descriptor or number to represent a file name; because these descriptors or numbers correspond one-to-one with file names, they can be converted into the desired file name.
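The assembly-and-conversion step described above can be sketched as follows. This is an illustrative simulation only: the field names (`path`, `size`, `inode_no`) and the number-to-name table are hypothetical stand-ins for real kernel parameter and data-structure content.

```python
# Hypothetical table mapping a kernel file number (e.g. an inode number)
# to a file name; real kernel structures differ.
INODE_TO_NAME = {1001: "config.xml", 1002: "photo.jpg"}

def build_access_info(param_content: dict, kstruct_content: dict) -> dict:
    """Merge function-parameter content and kernel-data-structure content,
    converting the raw file number into the finally required file name."""
    info = {
        "file_path": param_content["path"],       # from function parameters
        "file_size": param_content["size"],       # from function parameters
        "app_name": kstruct_content["app_name"],  # from kernel data structure
    }
    # Conversion step: the kernel structure stores a number, not a name.
    info["file_name"] = INODE_TO_NAME[kstruct_content["inode_no"]]
    return info

info = build_access_info({"path": "/data/config.xml", "size": 4096},
                         {"app_name": "appA", "inode_no": 1001})
print(info["file_name"])  # config.xml
```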
It should be noted that the file identifier includes a file ID, and the subject accessing the file is identified by an application ID. For example, if the file ID of file A is 123 and the application ID of application A is MN, the key in the second hash table for the record of application A accessing file A may be MN123, representing that application A accesses file A. The content stored under the key MN123 may be the file name, the file path, the number of times application A has accessed file A, and so on, where the access count is accumulated from the time file A was created (or application A was installed). The advantage of this storage scheme is that the file access information can be stored simply and accurately and is convenient to query.
The application that is the subject accessing the file may be a third-party application or a system application, and may take the form of an application process, an application thread, or the like.
It should be noted that the file identifier may further include the device number to which the file belongs. The device number can be understood as the label of a hardware or software partition in the terminal, such as the data partition, the system partition, or the memory card partition. Files with the same file ID may exist under two different device numbers, so using only the file ID and application ID as the key may confuse different files. Using the device number, file ID, and application ID together as the key effectively avoids this problem.
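The key scheme above can be sketched with a plain dictionary standing in for the kernel hash map. This is an illustrative simulation only; the key format and record fields are assumptions, not the patent's exact layout.

```python
def make_key(device_no: str, file_id: int, app_id: str) -> str:
    # Including the device number distinguishes files that happen to share
    # a file ID on different partitions.
    return f"{device_no}:{app_id}{file_id}"

second_hash_table = {}

def record_access(device_no, file_id, app_id, file_name, file_path):
    key = make_key(device_no, file_id, app_id)
    entry = second_hash_table.setdefault(
        key, {"file_name": file_name, "file_path": file_path, "access_count": 0})
    entry["access_count"] += 1

# Application A (ID "MN") accesses file A (ID 123) on the data partition twice.
record_access("data", 123, "MN", "fileA", "/data/fileA")
record_access("data", 123, "MN", "fileA", "/data/fileA")
# A different file that happens to share file ID 123 on the memory card.
record_access("sdcard", 123, "MN", "other", "/sdcard/other")

print(second_hash_table[make_key("data", 123, "MN")]["access_count"])  # 2
```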
Step 206: when the kernel space receives a preset read request from the user space, query the second hash table according to the preset read request and feed the query result back to the user space, so that the user space can calculate the file caching efficiency based on the file access information.
Illustratively, when the kernel space receives a preset read request from the user space, it queries the second hash table according to the request and sends the key column and the column recording how many times each file was accessed by each application to the user space as the query result. The user space then counts, for each file, the total number of accesses and how many of those accesses read the file from external storage, and calculates each file's caching efficiency from these two numbers. The advantage of this design is that the statistical work is shifted to the user space, reducing the amount of data processed by processes in kernel space. The user-space process may further analyze the query result, for example: how many times each file is accessed by applications, how many of those accesses read from external storage, which files are read most often, which applications access those frequently read files, and the maximum number of such accesses, so as to analyze the user's file access habits.
Optionally, when the kernel space receives the preset read request from the user space, it queries the second hash table for the keys and per-application access counts, aggregates for each file the number of times it was accessed by applications and the number of times those accesses read it from external storage, and feeds only these aggregated counts back to the user space as the query result. The user space then calculates each file's caching efficiency from the two counts. The advantage of this design is that the amount of data exchanged between kernel space and user space is reduced, lightening the system burden.
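The user-space calculation can be sketched minimally. The formula below (fraction of accesses served from the cache rather than external storage) is an assumption; the description only states that efficiency is computed from the access count and the external-storage read count.

```python
def cache_efficiency(access_count: int, external_reads: int) -> float:
    """Fraction of file accesses served from the page cache rather than
    requiring a read from external storage (assumed definition)."""
    if access_count == 0:
        return 0.0
    hits = access_count - external_reads
    return hits / access_count

# A file accessed 10 times, 2 of which had to read from external storage:
print(cache_efficiency(10, 2))  # 0.8
```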
Step 207: execute the function to be called.
For example, if no preset program code exists at the starting position of the function to be called, the current file access event is not of interest: this file access does not need to be tracked, the corresponding file access information does not need to be acquired, and the function to be called can be executed directly.
The method for tracking file caching efficiency provided by this embodiment uses the eBPF framework to gather file access statistics in the kernel in real time: an eBPF program inserted at the starting position where the function to be called is invoked obtains the file access information and stores it in a hash table, and when the kernel space receives a preset read request from the user space, it queries the hash table according to the request and feeds the query result back to the user space. Practice shows that the amount transferred can be reduced to tens of KB per query. The scheme of this embodiment therefore effectively reduces the amount of data exchanged between kernel space and user space, reduces the burden that file access tracking places on the system, and improves system stability.
Fig. 3 is a flowchart of another method for tracking file caching efficiency according to an embodiment of the present application, where the method includes:
For example, a file tracking setting interface may be presented to the user on the terminal, through which the user inputs a filter condition setting operation, such as selecting an application of interest, files accessed by that application, or files stored under a specific path as the target application, target files, or target storage path. The user space generates a first hash table from the user's setting operation; the table instructs the kernel space to track, in real time, accesses to files of the target application, files accessed by the target application, or files under the target storage path. Optionally, the application name, file name, or storage path may be used as the key and the filtering manner (e.g., retained or filtered) as the corresponding stored content; alternatively, the filtering manner may be the key and the application name, file name, or storage path the corresponding stored content, and so on.
The kernel space acquires a first hash table in which the filter condition information is stored.
Step 305: acquire the parameter content or the kernel data structure content corresponding to the function to be called through the preset program code.
Step 306: determine the file access information corresponding to the function to be called according to the parameter content or the kernel data structure content.
Step 307: filter the file access information according to the filter condition information in the first hash table.
It should be noted that the filtering may be forward filtering (retain matches) or reverse filtering (discard matches). The advantage is that the file access information can be filtered selectively before being stored, further reducing the amount of storage. In addition, because the first hash table is transmitted from user space to kernel space, the user can set the filter conditions, for example selecting an application of interest or files under a certain path as the filter condition information, instructing the kernel space to keep only the corresponding file access information for storage.
For example, it may be determined whether the file corresponding to the current file access information belongs to the target application (or is under the target path, or is accessed by the target application). If so, the file needs to be tracked and the current file access information is retained; if not, this file access does not need to be tracked and the current file access information can be ignored, that is, filtered out.
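The retain-or-ignore decision above can be sketched as forward filtering against user-set conditions. This is an illustrative simulation only; the condition format (target application names and path prefixes) is an assumption.

```python
# Hypothetical filter condition information, as might be carried in the
# first hash table set by the user.
FILTER_CONDITIONS = {
    "target_apps": {"gallery"},
    "target_paths": ("/data/media/",),
}

def keep(access_info: dict, conditions: dict) -> bool:
    """Forward filtering: retain the record only if it matches a condition."""
    if access_info["app_name"] in conditions["target_apps"]:
        return True
    return access_info["file_path"].startswith(conditions["target_paths"])

records = [
    {"app_name": "gallery", "file_path": "/data/media/photo.jpg"},
    {"app_name": "browser", "file_path": "/data/cache/tmp.bin"},
]
kept = [r for r in records if keep(r, FILTER_CONDITIONS)]
print(len(kept))  # 1
```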
Step 308: store the filtered file access information using the file identifier contained in the filtered file access information and the subject accessing the file as the key of the hash table, to obtain the second hash table.
The advantage of this design is that the user space can send the preset read request to the kernel space at any time, or at regular intervals, and the kernel space completes the query; the user space does not read the entire hash table of file access information and query it itself. This further reduces the amount of data exchanged between user space and kernel space and lightens the load that tracking file caching efficiency places on the system.
Step 310: execute the function to be called.
According to the file access tracking method provided by this embodiment, the user can preset which files need their caching efficiency tracked. After the file access information is acquired in kernel space by the eBPF program, it is filtered according to the user's settings and then stored in the hash table. This further reduces the amount stored and makes the tracking of file caching efficiency more targeted and personalized; when the user space needs to read the file access information, the amount of data exchanged between kernel space and user space is further reduced. A lightweight scheme for tracking file caching efficiency is thus realized, which is favorable for deployment in mass-produced systems.
Fig. 4 is a block diagram of a device for tracking file caching efficiency according to an embodiment of the present application. The device may be implemented by software and/or hardware and is generally integrated in a terminal; by performing the method for tracking file caching efficiency, it can dynamically track the file caching efficiency while the system runs. As shown in fig. 4, the device includes:
an event detection module 410, configured to detect that a preset file access event is triggered;
an information obtaining module 420, configured to obtain file access information corresponding to a preset access event based on a preset program code, where the preset program code includes a program code written based on a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event;
an information storage module 430, configured to store the file access information in a storage format corresponding to the preset virtual machine, where the file access information is used to instruct a user space to calculate file caching efficiency based on the file access information.
The embodiment of the application provides a device for tracking file caching efficiency: if a preset file access event is triggered, file access information corresponding to the event is acquired based on preset program code, and the file access information is stored in the storage format corresponding to the preset virtual machine, so that the user space can read it and calculate the file caching efficiency from it. With this scheme, preset program code based on the preset virtual machine can be inserted in kernel space before the function to be called that corresponds to the preset file access event; the code acquires and stores the file access information so that the user space can calculate the caching efficiency. The file caching efficiency can thus be tracked dynamically while the operating system runs, the interaction between kernel space and user space is effectively reduced, the burden that tracking places on the operating system is reduced, and system stability is improved.
Optionally, the method further includes:
a judging module, configured to determine, before the file access information corresponding to the preset access event is acquired based on the preset program code, whether preset program code written based on the preset virtual machine exists at the starting position, in the virtual file system layer of the kernel space, where the function to be called corresponding to the preset file access event is invoked.
Optionally, the information obtaining module 420 is specifically configured to:
acquiring parameter content or kernel data structure content corresponding to the function to be called through the preset program code;
and determining file access information corresponding to the function to be called according to the parameter content or the kernel data structure content.
It should be noted that the preset virtual machine includes an extended Berkeley Packet Filter (eBPF) virtual machine, and the storage format corresponding to the preset virtual machine includes a hash table.
Optionally, the information storage module 430 is specifically configured to:
store the file access information using the file identifier contained in the file access information and the subject accessing the file as the key of a hash table, to obtain a second hash table.
Optionally, the method further includes:
and the result feedback module is used for inquiring the second hash table according to the preset reading request and feeding back the inquiry result to the user space when the kernel space receives the preset reading request of the user space after the file access information is stored in the storage format corresponding to the preset virtual machine.
Optionally, the method further includes:
the information filtering module is used for acquiring a first hash table stored with filtering condition information before the file access information is stored in a storage format corresponding to the preset virtual machine, wherein the first hash table is transmitted to a kernel space from the user space;
filtering the file access information according to the filtering condition information;
accordingly, the information storage module 430 is configured to:
and storing the filtered file access information by adopting a storage format corresponding to the preset virtual machine.
Optionally, the file access information includes the number of times of file access and the number of times of reading the file from an external storage;
correspondingly, the calculating, by the user space, the file caching efficiency based on the file access information includes:
and calculating the file caching efficiency by the user space based on the file access times and the times of reading the file from the external storage.
Embodiments of the present application also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for tracking file caching efficiency, the method comprising:
detecting that a preset file access event is triggered;
acquiring file access information corresponding to the preset access event based on a preset program code, wherein the preset program code comprises a program code compiled based on a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event;
and storing the file access information by adopting a storage format corresponding to the preset virtual machine, wherein the file access information is used for indicating a user space to calculate the file caching efficiency based on the file access information.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); and registers or other similar types of memory elements. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet). The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that reside in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium provided in the embodiments of the present application includes computer-executable instructions, where the computer-executable instructions are not limited to the above-mentioned tracking operation of file caching efficiency, and may also perform related operations in the tracking method of file caching efficiency provided in any embodiment of the present application.
The embodiment of the application provides a terminal in which an operating system runs, and the device for tracking file caching efficiency provided by the embodiments of the application can be integrated in the terminal. The terminal may be a smart phone, a tablet computer (PAD), a handheld game console, a smart wearable device, or the like. Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 5, the terminal includes a memory 510 and a processor 520. The memory 510 includes internal memory and external storage, and is used for storing computer programs, cache files, file access information, and the like; the processor 520 reads and executes the computer programs stored in the memory 510. When executing the computer program, the processor 520 performs the following steps: detecting that a preset file access event is triggered; acquiring file access information corresponding to the preset access event based on preset program code, wherein the preset program code includes program code written based on a preset virtual machine and is executed before the function to be called corresponding to the preset file access event; and storing the file access information in the storage format corresponding to the preset virtual machine, wherein the file access information is used for instructing a user space to calculate the file caching efficiency based on the file access information.
The memory and the processor listed in the above examples are part of the components of the terminal, and the terminal may further include other components. Taking a smart phone as an example, a possible structure of the terminal is described. Fig. 6 is a block diagram of a smart phone according to an embodiment of the present application. As shown in fig. 6, the smart phone may include: memory 601, a Central Processing Unit (CPU) 602 (also known as a processor, hereinafter CPU), a peripheral interface 603, a Radio Frequency (RF) circuit 605, an audio circuit 606, a speaker 611, a touch screen 612, a power management chip 608, an input/output (I/O) subsystem 609, other input/control devices 610, and an external port 604, which communicate via one or more communication buses or signal lines 607.
It should be understood that the illustrated smartphone 600 is merely one example of a terminal, and that the smartphone 600 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The following describes in detail a smartphone integrated with a tracking apparatus for file caching efficiency according to this embodiment.
A memory 601, the memory 601 being accessible by the CPU602, the peripheral interface 603, and the like, the memory 601 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other volatile solid state storage devices. The memory 601 stores a computer program, and may also store a preset file, a preset white list, and the like.
A peripheral interface 603, said peripheral interface 603 may connect input and output peripherals of the device to the CPU602 and the memory 601.
An I/O subsystem 609, the I/O subsystem 609 may connect input and output peripherals on the device, such as a touch screen 612 and other input/control devices 610, to the peripheral interface 603. The I/O subsystem 609 may include a display controller 6091 and one or more input controllers 6092 for controlling other input/control devices 610. Where one or more input controllers 6092 receive electrical signals from or transmit electrical signals to other input/control devices 610, the other input/control devices 610 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels. It is noted that the input controller 6092 may be connected to any one of: a keyboard, an infrared port, a USB interface, and a pointing device such as a mouse.
A touch screen 612, which touch screen 612 is an input interface and an output interface between the user terminal and the user, displays visual output to the user, which may include graphics, text, icons, video, and the like.
The display controller 6091 in the I/O subsystem 609 receives electrical signals from the touch screen 612 or transmits electrical signals to the touch screen 612. The touch screen 612 detects a contact on the touch screen, and the display controller 6091 converts the detected contact into an interaction with a user interface object displayed on the touch screen 612, that is, to implement a human-computer interaction, where the user interface object displayed on the touch screen 612 may be an icon for running a game, an icon networked to a corresponding network, or the like. It is worth mentioning that the device may also comprise a light mouse, which is a touch sensitive surface that does not show visual output, or an extension of the touch sensitive surface formed by the touch screen.
The RF circuit 605 is mainly used to establish communication between the mobile phone and the wireless network (i.e., network side), and implement data reception and transmission between the mobile phone and the wireless network. Such as sending and receiving short messages, e-mails, etc. In particular, RF circuitry 605 receives and transmits RF signals, also referred to as electromagnetic signals, through which RF circuitry 605 converts electrical signals to or from electromagnetic signals and communicates with a communication network and other devices. RF circuitry 605 may include known circuitry for performing these functions including, but not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC (CODEC) chipset, a Subscriber Identity Module (SIM), and so forth.
The audio circuit 606 is mainly used to receive audio data from the peripheral interface 603, convert the audio data into an electric signal, and transmit the electric signal to the speaker 611.
The speaker 611 is used to convert the voice signal received by the handset from the wireless network through the RF circuit 605 into sound and play the sound to the user.
And a power management chip 608 for supplying power and managing power to the hardware connected to the CPU602, the I/O subsystem, and the peripheral interface.
The terminal provided by the embodiment of the application can dynamically track the file caching efficiency when the operating system runs, and effectively reduces the interaction between the kernel space and the user space, so that the burden of tracking the file caching efficiency on the operating system is reduced, and the system stability is improved.
The device, the storage medium, and the terminal for tracking file caching efficiency provided in the above embodiments may execute the method for tracking file caching efficiency provided in any embodiment of the present application, and have corresponding functional modules and beneficial effects for executing the method. For the technical details not described in detail in the above embodiments, reference may be made to the method for tracking file caching efficiency provided in any embodiment of the present application.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.
Claims (11)
1. A method for tracking file caching efficiency is characterized by comprising the following steps:
detecting that a preset file access event is triggered;
acquiring file access information corresponding to the preset access event based on a preset program code, wherein the preset program code comprises a program code compiled based on a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event;
and storing the file access information by adopting a storage format corresponding to the preset virtual machine, wherein the file access information is used for indicating a user space to calculate the file caching efficiency based on the file access information.
2. The method according to claim 1, before obtaining the file access information corresponding to the preset access event based on a preset program code, further comprising:
and judging whether a preset program code compiled based on a preset virtual machine exists at the initial position of calling a function to be called corresponding to a preset file access event in a virtual file system layer in the kernel space.
3. The method according to claim 2, wherein obtaining file access information corresponding to the preset access event based on a preset program code comprises:
acquiring parameter content or kernel data structure content corresponding to the function to be called through the preset program code;
and determining file access information corresponding to the function to be called according to the parameter content or the kernel data structure content.
4. The method of claim 1, wherein the preset virtual machine comprises an extended Berkeley Packet Filter (eBPF) virtual machine, and wherein the storage format corresponding to the preset virtual machine comprises a hash table.
5. The method according to claim 4, before storing the file access information in the storage format corresponding to the preset virtual machine, further comprising:
acquiring a first hash table stored with filter condition information, wherein the first hash table is transmitted to a kernel space from the user space;
filtering the file access information according to the filtering condition information;
correspondingly, storing the file access information by adopting a storage format corresponding to the preset virtual machine includes:
and storing the filtered file access information by adopting a storage format corresponding to the preset virtual machine.
6. The method according to claim 4, wherein storing the file access information in a storage format corresponding to the preset virtual machine comprises:
and storing the file access information by using the file identifier contained in the file access information and the subject accessing the file as key values of a hash table, to obtain a second hash table.
7. The method according to claim 6, further comprising, after storing the file access information in a storage format corresponding to the preset virtual machine:
and when the kernel space receives a preset reading request of the user space, inquiring the second hash table according to the preset reading request, and feeding back an inquiry result to the user space.
8. The method according to any one of claims 1 to 7, wherein the file access information includes the number of file accesses and the number of times the file is read from an external storage;
correspondingly, the calculating, by the user space, the file caching efficiency based on the file access information includes:
and calculating the file caching efficiency by the user space based on the file access times and the times of reading the file from the external storage.
9. An apparatus for tracking file caching efficiency, comprising:
an event detection module, configured to detect that a preset file access event is triggered;
an information acquisition module, configured to collect file access information corresponding to the preset file access event based on preset program code, wherein the preset program code comprises program code compiled for a preset virtual machine, and the preset program code is executed before a function to be called corresponding to the preset file access event; and
an information storage module, configured to store the file access information in a storage format corresponding to the preset virtual machine, wherein the file access information is used to instruct the user space to calculate the file caching efficiency based on the file access information.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for tracking file caching efficiency according to any one of claims 1 to 8.
11. A terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method for tracking file caching efficiency according to any one of claims 1 to 8 when executing the computer program.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811126354.5A CN110955486B (en) | 2018-09-26 | 2018-09-26 | File caching efficiency tracking method and device, storage medium and terminal |
PCT/CN2019/093512 WO2020062981A1 (en) | 2018-09-26 | 2019-06-28 | Method and apparatus for tracking file caching efficiency, and storage medium and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811126354.5A CN110955486B (en) | 2018-09-26 | 2018-09-26 | File caching efficiency tracking method and device, storage medium and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110955486A true CN110955486A (en) | 2020-04-03 |
CN110955486B CN110955486B (en) | 2022-08-23 |
Family
ID=69950248
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811126354.5A Expired - Fee Related CN110955486B (en) | 2018-09-26 | 2018-09-26 | File caching efficiency tracking method and device, storage medium and terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110955486B (en) |
WO (1) | WO2020062981A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699021A (en) * | 2020-12-08 | 2021-04-23 | NetEase (Hangzhou) Network Co., Ltd. | Information processing method and device, terminal equipment and server |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116743903B (en) * | 2022-09-09 | 2024-05-14 | Honor Device Co., Ltd. | Chip identification method and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103514110A (en) * | 2012-06-20 | 2014-01-15 | Huawei Technologies Co., Ltd. | Cache management method and device for nonvolatile memory device |
CN103731396A (en) * | 2012-10-10 | 2014-04-16 | China Mobile Group Jiangxi Co., Ltd. | Resource access method and system and cache resource information pushing device |
US20150269067A1 (en) * | 2014-03-21 | 2015-09-24 | Symantec Corporation | Systems and methods for identifying access rate boundaries of workloads |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100571281C (en) * | 2007-06-29 | 2009-12-16 | Tsinghua University | Great magnitude of data hierarchical storage method |
US9910689B2 (en) * | 2013-11-26 | 2018-03-06 | Dynavisor, Inc. | Dynamic single root I/O virtualization (SR-IOV) processes system calls request to devices attached to host |
- 2018-09-26: CN application CN201811126354.5A filed; granted as CN110955486B (not active; Expired - Fee Related)
- 2019-06-28: PCT application PCT/CN2019/093512 filed; published as WO2020062981A1 (active; Application Filing)
Non-Patent Citations (1)
Title |
---|
Sasha Goldshtein: "Profiling JVM Applications in Production", SREcon18 Americas Conference * |
Also Published As
Publication number | Publication date |
---|---|
CN110955486B (en) | 2022-08-23 |
WO2020062981A1 (en) | 2020-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110955631B (en) | File access tracking method and device, storage medium and terminal | |
CN110955584B (en) | Block device access tracking method and device, storage medium and terminal | |
US11397590B2 (en) | Method for preloading application, storage medium, and terminal | |
WO2019120037A1 (en) | Model construction method, network resource preloading method and apparatus, medium, and terminal | |
CN108512695B (en) | Method and device for monitoring application blockage | |
CN108153647B (en) | Log processing method and device, terminal equipment and storage medium | |
CN107872523B (en) | Network data loading method and device, storage medium and mobile terminal | |
CN108038231B (en) | Log processing method and device, terminal equipment and storage medium | |
WO2019227994A1 (en) | Method and apparatus for updating application prediction model, storage medium, and terminal | |
CN110046497B (en) | Function hook realization method, device and storage medium | |
CN110888821B (en) | Memory management method and device | |
CN110222288B (en) | Page display method, device and storage medium | |
CN106896900B (en) | Display control method and device of mobile terminal and mobile terminal | |
CN109948090B (en) | Webpage loading method and device | |
US10901947B2 (en) | Method for recognizing infrequently-used data and terminal | |
CN112148579B (en) | User interface testing method and device | |
CN109033247B (en) | Application program management method and device, storage medium and terminal | |
CN106776259B (en) | Mobile terminal frame rate detection method and device and mobile terminal | |
CN110955486B (en) | File caching efficiency tracking method and device, storage medium and terminal | |
CN106980447B (en) | Information processing method and device and terminal | |
CN110955614B (en) | Method and device for recovering file cache, storage medium and terminal | |
CN108984374B (en) | Method and system for testing database performance | |
WO2021254200A1 (en) | Page thrashing protection method and apparatus for memory reclaim of operating system | |
CN108170576B (en) | Log processing method and device, terminal equipment and storage medium | |
CN108549695B (en) | Data interaction method and device, terminal equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20220823 |