CN112558866B - Data pre-reading method, mobile terminal and computer readable storage medium - Google Patents


Info

Publication number
CN112558866B
CN112558866B (application CN202011407098.4A)
Authority
CN
China
Prior art keywords
memory
reading
pressure
step length
reading step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011407098.4A
Other languages
Chinese (zh)
Other versions
CN112558866A
Inventor
李培锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd filed Critical Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN202011407098.4A priority Critical patent/CN112558866B/en
Publication of CN112558866A publication Critical patent/CN112558866A/en
Application granted granted Critical
Publication of CN112558866B publication Critical patent/CN112558866B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 — Interfaces specially adapted for storage systems
    • G06F3/0602 — Specifically adapted to achieve a particular effect
    • G06F3/061 — Improving I/O performance
    • G06F3/0628 — Making use of a particular technique
    • G06F3/0655 — Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 — Data buffering arrangements
    • G06F3/0658 — Controller construction arrangements
    • G06F3/0659 — Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0668 — Adopting a particular infrastructure
    • G06F3/0671 — In-line storage system
    • G06F3/0673 — Single storage device
    • G06F3/0674 — Disk device
    • G06F3/0676 — Magnetic disk device

Abstract

The application discloses a data pre-reading method, a mobile terminal, and a computer-readable storage medium. The method comprises the following steps: obtaining a pre-reading request for target data; acquiring memory information and IO information according to the pre-reading request; when the memory information and the IO information meet a preset requirement, adjusting the current pre-reading step length to the maximum pre-reading step length supported by the system; and pre-reading the target data into memory according to the maximum pre-reading step length. In this way, the response speed of data use can be improved when system resources are sufficient.

Description

Data pre-reading method, mobile terminal and computer readable storage medium
Technical Field
The present application relates to the field of caching, and in particular, to a method for pre-reading data, a mobile terminal, and a computer-readable storage medium.
Background
Generally, most disk I/O is sequential, and an ordinary file occupies contiguous sectors on disk. Reading and writing such a file reduces the number of head seeks and improves read/write performance. When a program reads a file, it typically accesses it sequentially from the first byte to the last. A reading process therefore usually accesses many adjacent sectors of the same file on disk, and the pre-reading mechanism arose from this pattern to improve disk performance and, in turn, system throughput.
Pre-reading means reading several consecutive file pages from an ordinary file or a block-device file into memory before the data is actually accessed. The read-ahead algorithm predicts which pages will be accessed and reads them into the cache in batches, ahead of time.
At present, the Linux pre-reading mechanism loads the accessed file into the kernel page cache in advance, so that a process need not wait out a long IO when it reads a page and the system reads more pages per request. However, the pre-read usually has to ramp up over several rounds from its starting size to the maximum pre-read value set by the system, which lowers the response speed of data use: the wait is somewhat long, and the system shows stutter and similar symptoms.
Disclosure of Invention
A first aspect of an embodiment of the present application provides a method for pre-reading data, the method including: obtaining a pre-reading request for target data; acquiring memory information and IO information according to the pre-reading request; when the memory information and the IO information meet a preset requirement, adjusting the current pre-reading step length to the maximum pre-reading step length supported by the system; and pre-reading the target data into memory according to the maximum pre-reading step length.
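The four steps above reduce to a small decision: jump the step length to the system maximum only when both resource checks pass. A minimal sketch follows; the function name, boolean inputs, and the 512-page maximum are illustrative assumptions, not the patent's implementation.

```python
MAX_READAHEAD_PAGES = 512  # assumed system maximum pre-reading step length


def choose_readahead_step(current_step: int,
                          mem_sufficient: bool,
                          io_idle: bool) -> int:
    """Return the pre-reading step length to use: jump to the maximum
    when both the memory information and the IO information meet the
    preset requirement, otherwise keep the current step length."""
    if mem_sufficient and io_idle:
        return MAX_READAHEAD_PAGES
    return current_step
```

A caller would feed this with the results of the memory and IO checks described in steps S12 and S13 below.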
A second aspect of an embodiment of the present application provides a mobile terminal, including: an acquisition module for obtaining a pre-reading request for target data, and further for obtaining memory information and IO information according to the pre-reading request; an adjusting module, connected to the acquisition module, for adjusting the current pre-reading step length to the maximum pre-reading step length supported by the system when the memory information and the IO information meet the preset requirement, where the current pre-reading step length is less than or equal to the system's maximum pre-reading step length; and a pre-reading module, connected to the adjusting module, for pre-reading the target data and caching it into memory according to the maximum pre-reading step length.
A third aspect of an embodiment of the present application provides another mobile terminal, including: a processor and a memory, the memory having stored therein a computer program, the processor being adapted to execute the computer program to perform the method of the first aspect of the embodiments of the present application.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the method provided by the first aspect of embodiments of the present application.
The beneficial effect of this application is as follows. Unlike the prior art, the present application targets the long wait caused by the pre-reading process usually having to ramp up over several rounds from its starting size to the maximum pre-read value set by the system. Memory information and IO information are acquired according to the pre-reading request, so the actual state of system memory and disk performance is monitored in time; when both meet the preset requirement, the current pre-reading step length is adjusted to the maximum pre-reading step length supported by the system. The number of pre-reading rounds is thus reduced to 1, and when system resources are sufficient the pre-read goes straight to the maximum value set by the system. In this way, the present application can effectively adjust the current pre-reading step length to the maximum supported by the system and improve the response speed of data use, thereby reducing the pre-reading waiting time and improving the system's pre-reading performance.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a schematic flow chart diagram of a first embodiment of a method for pre-reading data of the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S12 of FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 4 is a schematic flow chart of another embodiment of step S12 in FIG. 3;
FIG. 5 is a flowchart illustrating an embodiment of step S32 in FIG. 3 or step S44 in FIG. 4;
FIG. 6 is a flowchart illustrating an embodiment of step S42 in FIG. 4;
FIG. 7 is a schematic flow chart illustrating dynamic detection of memory loading according to the present application;
FIG. 8 is a flow chart of a second embodiment of the method for pre-reading data of the present application;
FIG. 9 is a schematic block diagram of an embodiment of a mobile terminal of the present application;
FIG. 10 is a schematic block diagram of another embodiment of a mobile terminal of the present application;
FIG. 11 is a schematic block diagram of one embodiment of a computer-readable storage medium of the present application;
fig. 12 is a schematic block diagram of a hardware architecture of a mobile terminal of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples. Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a first embodiment of a method for pre-reading data according to the present application. The method comprises the following specific steps:
s11: obtaining a pre-reading request of target data;
Generally, if several consecutive file pages are to be read from an ordinary file or a block-device file into memory before the data is actually accessed, the device issues a pre-reading request for that data. In most cases, this kernel read-ahead mechanism can significantly improve disk performance, because the disk controller processes fewer commands and each command reads multiple adjacent sectors, which improves system response time.
Generally, after the pre-reading request for the target data is obtained, the pre-reading execution path may call the function do_generic_file_read(), which calls page_cache_sync_readahead() or page_cache_async_readahead(). page_cache_sync_readahead() is the synchronous read interface: its caller must wait until the selected pages have been read before it can proceed. page_cache_async_readahead() is the asynchronous read interface: its caller does not need to wait; the read proceeds on a separate thread, and the two threads do not interfere with each other, which provides an important basis for asynchronous pre-reading.
To improve the efficiency and speed of pre-reading, asynchronous pre-reading is adopted. A file page is then allocated through the kernel's memory-mapping relation according to the call command; a file page can be understood as the page of a file held in memory, which can be mapped back to disk. The system call readahead() is then executed on the file pages to issue the pre-reading instruction, and after readahead() runs, the posix_fadvise() system call is executed. readahead() tells the system that the given pages need to be pre-read immediately, for a specific number of pages; posix_fadvise() tells the system that this block of memory is about to be accessed, allowing the system to optimize without pre-reading immediately as the readahead() interface does. Building on the posix_fadvise() call, the madvise() system call is executed with the MADV_WILLNEED command to notify the kernel that a specific region of the file's memory mapping will be accessed in the future.
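From user space, the same WILLNEED hint is reachable through Python's standard library on Linux: os.posix_fadvise wraps the posix_fadvise() call discussed above. The sketch below is only an illustration (the function name is an assumption) and degrades gracefully on platforms without the call.

```python
import os
import tempfile


def hint_willneed(path: str, offset: int = 0, length: int = 0) -> bool:
    """Advise the kernel that `path` will be read soon so it can start
    read-ahead asynchronously (POSIX_FADV_WILLNEED); length=0 means
    'to the end of the file'. Returns False where the call is absent."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # available on Linux/Unix only
            os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
            return True
        return False
    finally:
        os.close(fd)


# Usage: create a throwaway file and issue the hint against it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
    name = f.name
hinted = hint_willneed(name)
os.unlink(name)
```

Unlike readahead(), this hint does not block the caller, which matches the asynchronous pre-reading path the patent relies on.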
S12: acquiring memory information and IO information according to the pre-reading request;
The memory information reflects the state of the memory resource. Because file-cache pre-reading places file pages into memory in advance, the pre-reading process must allocate memory; it is therefore necessary to ensure that the system's memory resource is sufficient, and if it is not, file-cache pre-reading may block for lack of memory.
The IO information reflects the state of the IO resource. File-cache pre-reading means reading a file from disk into memory, and reading from disk requires IO operations; if IO is busy at that moment, pre-reading will likewise block, so the IO busyness of the current system must be judged.
The goal is to set the file-cache pre-read size in advance to the maximum pre-read size configured by the system, according to the system's memory resource and IO busyness. The memory information and the IO information can be obtained from the pre-reading request; judging them provides a reference, and the current pre-read is adjusted on that basis.
S13: when the memory information and the IO information meet the preset requirement, adjusting the current pre-reading step length to be the maximum pre-reading step length supported by the system;
Generally, the time of each pre-read is related to its size: the larger the size, the longer the pre-read takes. However, one large pre-read takes much less time than several small pre-reads of the same total size, because device addressing and similar per-request overheads are time-consuming.
On this basis, obtaining the memory information and the IO information from the pre-reading request allows the actual state of system memory and disk performance to be monitored in time; when both meet the preset requirement, adjusting the current pre-reading step length to the maximum supported by the system shortens the pre-reading time.
Specifically, when the memory information and the IO information meet the preset requirement, adjusting the current pre-reading step length to the maximum pre-reading step length supported by the system reduces the number of pre-reading rounds to 1: with sufficient system resources, the pre-read starts directly at the maximum value set by the system, which effectively improves data response speed.
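The reduction "to 1" can be made concrete by counting requests under a doubling ramp-up policy (a common policy assumed here for illustration; the 4-page start and 512-page maximum are example values):

```python
def count_readahead_requests(total_pages: int,
                             first_step: int,
                             max_step: int) -> int:
    """Count pre-read requests needed to cover `total_pages` when the
    step starts at `first_step` and doubles each round up to `max_step`
    (an illustrative ramp-up policy)."""
    done, step, requests = 0, first_step, 0
    while done < total_pages:
        done += step
        requests += 1
        step = min(step * 2, max_step)  # ramp up toward the maximum
    return requests
```

With a 4-page start, covering 512 pages takes 8 requests; starting directly at the 512-page maximum takes a single request, which is the effect the patent targets.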
S14: and pre-reading the target data to the memory according to the maximum pre-reading step length.
Typically, the growth of the pre-read count is governed by a forward (read-ahead) window: the current window covers the pages of the file currently being read, while the forward window covers the pages read in advance because the system predicts they will be accessed next.
Generally speaking, the pre-read count keeps growing, i.e. the pre-read count of the forward window keeps increasing. The optimal state is that just as the file pages of the current window have all been accessed, the pages read by the forward window are hit; new file pages are then read through the forward window, so that it continually becomes the new current window.
Therefore, target data are pre-read into the memory according to the maximum pre-reading step length, the response speed of data use can be improved, the pre-reading waiting time is reduced, and the pre-reading performance of the system is improved.
Therefore, targeting the long wait caused by the pre-reading process usually having to ramp up over several rounds from its starting size to the maximum pre-read value set by the system, the present application acquires memory information and IO information according to the pre-reading request and monitors the actual state of system memory and disk performance in time; when both meet the preset requirement, it adjusts the current pre-reading step length to the maximum pre-reading step length supported by the system. The number of pre-reading rounds can thus be reduced to 1, and when system resources are sufficient the pre-read goes straight to the maximum value set by the system. In this way, the current pre-reading step length is effectively adjusted to the maximum supported by the system and the response speed of data use is improved, reducing the pre-reading waiting time and improving the system's pre-reading performance.
Further, according to the read-ahead request, the memory information and the IO information of the system are obtained, please refer to fig. 2, and fig. 2 is a schematic flowchart of an embodiment of step S12 in fig. 1; the method comprises the following steps:
s21: reading used memory of memory resources corresponding to the memory information of the system;
Different memory information represents different memory-resource conditions. If the memory resource is ample, there is more headroom in memory use and more ways to speed up pre-reading. Usually, the used memory of the memory resource corresponding to the system's memory information is read first, from which the unused memory resource can be derived.
Note that the system's remaining free memory can also be read, which to some extent is comparable to the unused memory resource.
Of course, for acquiring the used memory and the remaining free memory, the usage of the memory resource may be monitored in real time by installing a monitoring unit on the disk, for example an interface monitoring module; those skilled in the art will readily appreciate that other acquisition approaches can also obtain the used memory corresponding to the system's memory information.
S22: judging whether the unused memory resource is larger than a preset unused memory resource threshold value or not according to the used memory;
After the used memory of the memory resource corresponding to the system's memory information is obtained, it can be subtracted from the total memory (when the total is known); the resulting difference represents the unused memory resource.
Generally speaking, a memory-resource threshold is preset in the system for judging the state of unused memory resources: according to the used memory, it is judged whether the unused memory resource is larger than the preset unused-memory threshold. If it is, step S23 is performed, i.e., the IO pressure corresponding to the IO information is read so as to adjust the current pre-reading step length to the maximum supported by the system; if it is less than or equal to the threshold, step S24 is performed, i.e., the available memory of the system is obtained.
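Steps S21 and S22 amount to a subtraction and a comparison; a minimal sketch (the function name, kB units, and threshold values are assumptions for illustration):

```python
def unused_memory_exceeds(total_kb: int,
                          used_kb: int,
                          threshold_kb: int) -> bool:
    """S22: unused memory = total - used; allow the maximum pre-reading
    step only if it exceeds the preset unused-memory threshold."""
    return (total_kb - used_kb) > threshold_kb
```

A True result corresponds to branching to step S23 (read the IO pressure); False corresponds to step S24 (obtain the available memory).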
Further, when the memory information and the IO information satisfy the preset requirement, the current pre-reading step length is adjusted to the maximum pre-reading step length supported by the system, specifically refer to fig. 3, where fig. 3 is a flowchart of an embodiment of step S13 in fig. 1, and the step specifically includes the following detailed steps:
s31: judging whether the IO pressure is smaller than a preset IO pressure threshold value or not;
Generally, an IO pressure threshold is preset in the system for judging the magnitude of the IO pressure. IO pressure is an important indicator of IO busyness, so whether IO is busy can be judged by whether the IO pressure is below the preset threshold.
If the IO pressure is below the threshold, IO is not busy and has spare capacity to handle memory pre-reading, so step S32 is performed, i.e., the current pre-reading step length is adjusted to the maximum supported by the system; if the IO pressure is greater than or equal to the threshold, IO is busy, so step S33 is performed, i.e., the file data of the target file is cached into memory at the current pre-reading step length.
Further, referring to fig. 4, specifically, according to the read-ahead request, the memory information and the IO information are obtained, and fig. 4 is a schematic flowchart of another specific embodiment of step S12 in fig. 3, where the step specifically includes:
s41: judging whether the available memory is larger than a preset available memory threshold value or not;
The remaining-free-memory judgment is intuitive: it shows the memory the system has not used. If it exceeds a certain threshold, i.e., the system is comparatively rich in free memory, memory resources are ample and the pre-read can go directly to the maximum.
Besides representing unused memory resources by the remaining free memory, the memory information can more specifically be represented by the available memory, which includes the file cache. The relation between available memory, remaining free memory, and file cache is:
available memory = remaining free memory + file cache
The system presets an available-memory threshold, which is used to judge the size of the available memory; if the available memory is greater than this threshold, step S42 is performed, i.e., the memory recovery pressure of the system is obtained.
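The relation above mirrors, in simplified form, what Linux exposes in /proc/meminfo. A sketch follows, with the pure relation separated from the Linux-only reader; note the kernel's own MemAvailable field uses a more refined estimate than this simple sum.

```python
def available_memory_kb(remaining_free_kb: int, file_cache_kb: int) -> int:
    """available memory = remaining free memory + file cache."""
    return remaining_free_kb + file_cache_kb


def read_meminfo() -> dict:
    """Parse /proc/meminfo into a {field: kB} dict (Linux only).
    Typical fields of interest here: MemFree and Cached."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])
    return info
```

On a Linux system one would compare `available_memory_kb(info["MemFree"], info["Cached"])` against the preset available-memory threshold of step S41.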
S43: judging whether a memory pressure value corresponding to the memory recovery pressure is smaller than a preset memory pressure threshold value or not;
Generally, a memory pressure threshold is preset in the system for judging the magnitude of the memory recovery pressure. To judge the memory resource more accurately and avoid situations where the file cache is difficult to reclaim, the invention adds a memory-recovery-pressure monitor: file-cache pre-reading optimization is performed only if both the available memory and the memory recovery pressure meet the standard, and the recovery pressure is obtained from this monitor.
As to whether the optimization condition is met: specifically, once the memory pressure corresponding to the memory recovery pressure is obtained, judging whether its value is below the preset memory pressure threshold determines whether the optimization condition is met.
If the memory pressure value is smaller than the memory pressure threshold, step S44 is performed, that is, the IO pressure corresponding to the IO information is obtained, so as to adjust the current pre-reading step length to be the maximum pre-reading step length supported by the system.
Obtaining the IO pressure corresponding to the IO information may specifically include: acquiring the system's total IO amount and the time period over which it was completed, and obtaining the IO load corresponding to the IO pressure as the quotient of the two, so that IO is monitored in real time.
The rationale for obtaining the IO pressure this way is that IO monitoring is added to the system, so the current system's IO load can be read accurately in real time during memory recovery. At present, the IO statistics are attached to the scheduling subsystem: Linux performs task scheduling at fixed intervals, and at each such point the total amount of IO generated by the system is read. The IO load is computed as:
IO load = IO total amount/time
Generally, the total amount of IO over a period of time divided by that time represents the IO load for the period. Adding this IO-load judgment makes the pre-reading adjustment more accurate.
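Sampling a cumulative IO counter at each scheduling tick and dividing the delta by the interval gives exactly this load; a sketch (the class and field names are illustrative assumptions):

```python
class IoLoadMonitor:
    """Periodic IO sampling: at each tick, record the cumulative IO
    counter and derive the load over the elapsed interval
    (IO load = IO delta / time delta)."""

    def __init__(self) -> None:
        self.last_total = 0
        self.last_time = 0.0

    def sample(self, total_io: float, now: float) -> float:
        """Return the IO load since the previous sample."""
        interval = now - self.last_time
        load = (total_io - self.last_total) / interval if interval > 0 else 0.0
        self.last_total, self.last_time = total_io, now
        return load
```

The resulting load would be compared against the preset IO pressure threshold of step S31.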
If the available memory is less than or equal to the available-memory threshold, or the memory pressure value is greater than or equal to the memory pressure threshold, step S45 is performed, i.e., the file data of the target file is cached into memory at the current pre-reading step length.
Further, if the IO pressure is less than the IO pressure threshold, the current pre-reading step length is adjusted to the maximum pre-reading step length supported by the system. Please refer to fig. 5, which is a schematic flowchart of an embodiment of step S32 in fig. 3 or step S44 in fig. 4. The step specifically includes:
S51: acquiring the current pre-reading step length as the increase reference for the system pre-reading step length;
When the system is judged to meet the optimization condition, the acquired current pre-reading step length can serve as the reference against which the system pre-reading step length is increased, and the current setting of the system pre-reading step length can be understood as a whole from this reference.
For example, when the acquired current pre-reading step length is 4 pages and the maximum pre-reading step length in the system is 512 pages, the current pre-reading is still conservative and there is large room for improvement; if the acquired current pre-reading step length is 256 pages, with the maximum pre-reading step length in the system unchanged at 512 pages, the original pre-reading is in fact already fairly aggressive, and the room for improvement is somewhat smaller.
S52: setting the increase amplitude of the current pre-reading step length based on the memory information;
Setting the increase amplitude of the current pre-reading step length based on the memory information makes better use of a sufficient system memory. Specifically, if the maximum pre-reading number set by the system is 512 pages, the size of the first pre-read may be set to 4 pages, 8 pages, 16 pages, and so on, as required.
S53: increasing the current pre-reading step length to the maximum pre-reading step length supported by the system according to the increase reference and the increase amplitude.
Given the increase reference and the increase amplitude, that is, knowing the relationship between the current pre-reading step length and the maximum pre-reading step length, the updated pre-reading step length can be obtained by adding the amplitude to the reference; in this way the current pre-reading step length can be increased to the maximum pre-reading step length supported by the system.
Specifically, if the maximum pre-reading number set by the system is 512 pages and the size of the first pre-read is 4 pages, the system maximum is reached only after 8 pre-reading processes, which takes relatively long; when the size of the first pre-read is 256 pages, the system maximum is reached in 2 pre-reading processes; and if the current pre-reading step length is directly increased to the maximum pre-reading step length supported by the system, the system maximum is reached in a single pre-reading process.
When the system memory is abundant, increasing the current pre-reading step length to the maximum pre-reading step length supported by the system effectively reduces the number of pre-reading processes: no long waiting process is needed, more pages can be pre-read early, the probability of page misses in the system file cache is reduced, and the fluency of system pre-reading is improved.
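A minimal sketch of steps S51 to S53 (names and units are hypothetical): the current step length is the increase reference, and an amplitude chosen from the memory information is added to it, capped at the system maximum:

```python
def increase_step(current_pages: int, amplitude_pages: int, max_pages: int) -> int:
    """S51-S53: add the increase amplitude to the increase reference (the current
    pre-reading step length), capped at the maximum step length the system supports."""
    return min(current_pages + amplitude_pages, max_pages)

def amplitude_for_jump(current_pages: int, max_pages: int) -> int:
    # With abundant memory the amplitude can cover the whole remaining gap,
    # so the maximum is reached in a single adjustment.
    return max_pages - current_pages
```

With a current step of 4 pages and a system maximum of 512 pages, an amplitude of 508 pages reaches the maximum in one adjustment.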
Further, referring specifically to fig. 6, fig. 6 is a schematic flowchart of an embodiment of step S42 in fig. 4, where the step includes:
S61: reading a first timestamp corresponding to the memory recovery pressure, and performing memory recovery on the system;
When memory allocation finds that memory is low, the kernel performs memory recovery. In this optimization, a first timestamp is added to record the moment at which the memory recovery pressure is read; reading the first timestamp corresponding to the memory recovery pressure marks the start of recovery by the kernel, after which the kernel system performs the memory recovery.
Specifically, during the memory recovery process the memory load can be detected dynamically; to monitor the real-time situation of the memory load, the first timestamp can be recorded through the memory load monitoring interface, so that the recovery under memory pressure is started.
S62: acquiring a second timestamp after the memory recovery;
S63: subtracting the first timestamp from the second timestamp to obtain a pressure value of the memory recovery as the current memory recovery pressure, and setting the memory recovery pressure to 0 after the memory recovery succeeds.
The time of a single memory recovery is calculated: if a single recovery takes too long to succeed, the current memory pressure is large. A time reference therefore needs to be recorded when the recovery starts, and while the recovery process judges whether enough memory has been recovered, the total time elapsed so far is continuously calculated. The duration of a single memory recovery is denoted STALL_TIME, where STALL_TIME = second timestamp - first timestamp; STALL_TIME represents the pressure of the memory recovery and varies dynamically. It should be noted that after the memory recovery succeeds, i.e., once the memory is sufficient, STALL_TIME must be set to 0 immediately, which avoids misjudgment during the pre-reading process: STALL_TIME of 0 indicates that the current memory is sufficient and there is no memory pressure.
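The timestamp bookkeeping of S61 to S63 can be sketched as follows (class and method names are illustrative, not from the kernel):

```python
import time

class MemoryRecoveryPressure:
    """Track the duration of a single memory recovery:
    STALL_TIME = second timestamp - first timestamp, reset to 0 on success."""

    def __init__(self) -> None:
        self.stall_time = 0.0  # no pressure before any recovery has run

    def start_recovery(self) -> None:
        self._first_timestamp = time.monotonic()  # S61: record the first timestamp

    def update(self) -> None:
        # S62/S63: second timestamp minus first timestamp is the current pressure.
        self.stall_time = time.monotonic() - self._first_timestamp

    def recovery_succeeded(self) -> None:
        self.stall_time = 0.0  # memory is sufficient again: clear the pressure immediately
```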
The application scenarios of the present application are diverse and wide, such as an application start-up scenario. A typical scenario, the start of the WeChat application, is taken as an example below. Please refer to fig. 7 and fig. 8: fig. 7 is a schematic flow diagram of dynamically detecting the memory load in the present application; fig. 8 is a schematic flowchart of a second embodiment of the data pre-reading method of the present application. The data pre-reading method of the present application is described in detail below with reference to this specific application scenario.
In most cases, the kernel pre-read mechanism significantly improves disk performance, because the number of commands processed by the disk controller is reduced and each command reads multiple adjacent sectors. In addition, the pre-read mechanism improves system response time. Of course, when most accesses by a process are random reads, pre-reading is detrimental to the system because it wastes kernel cache space; the kernel therefore reduces or turns off pre-reading when it determines that the most recent I/O accesses are not sequential. The pre-read algorithm predicts the pages about to be accessed and reads them into the cache in batches ahead of time.
The main functions and tasks of pre-reading include:
Batching: small I/Os are aggregated into large I/Os to improve disk utilization and system throughput.
Prefetching ahead of time: the disk I/O latency is hidden from the application program to speed up program execution.
Prediction: this is the core task of the pre-read algorithm, on whose accuracy the first two functions depend. Mainstream operating systems such as Linux, FreeBSD and Solaris all follow a simple and effective principle: read patterns are divided into random reads and sequential reads, and only sequential reads are pre-read. This principle is relatively conservative, but it ensures a high pre-read hit rate with good efficiency and coverage, since sequential reads are the simplest and most common case while random reads are genuinely difficult to predict in the kernel.
Pre-reading is performed in the following cases:
When the kernel processes a file-data read request from a user process; at this time page_cache_sync_readahead() or page_cache_async_readahead() is called, which, as we have seen, is invoked by the function do_generic_file_read();
When the kernel allocates a page for a file memory mapping (memory mapping);
When the user program executes the readahead() system call;
When the user program executes the posix_fadvise() system call;
When the user program executes the madvise() system call with the MADV_WILLNEED command, notifying the kernel that a specific area of the file memory mapping will be accessed in the future.
Pre-reading a file requires a fairly involved algorithm:
Data is read page by page, so only which pages of the file are accessed matters, not the offset within a page.
As long as the process keeps reading data sequentially, the pre-read window may be increased progressively.
When the current access is not sequential with respect to the previous one (i.e., random access), pre-reading must be reduced or stopped. Pre-reading must also be stopped when the process keeps accessing the same page of a file (only a small portion of the file is accessed), or when almost all the pages of the file are already in the page cache.
The kernel's criterion for judging whether two read accesses are sequential is: the first page of the current request is adjacent to the last page accessed previously. When accessing a given file, the pre-read algorithm uses two sets of pages: a current window and an ahead window. The current window contains the pages the process has requested or the pages the kernel has pre-read that are in the page cache (the pages in the current window are not necessarily up to date, as I/O data transfers may still be in progress). The ahead window contains the pages the kernel pre-reads immediately after the current window; these pages have not been requested by the process, but the kernel assumes the process will access them sooner or later.
When the kernel determines that an access is sequential and the initial page belongs to the current window, it checks whether the ahead window has already been established. If not, the kernel establishes a new ahead window and triggers read operations for the corresponding file pages. Ideally, the pages being accessed by the process are all in the current window while the file pages in the ahead window are being transferred; when a page accessed by the process falls in the ahead window, the ahead window becomes the current window.
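The sequentiality criterion above can be sketched in a few lines (a simplification; the real kernel logic also handles cases such as re-reads of the same offset):

```python
def is_sequential(first_page_of_request: int, last_page_accessed: int) -> bool:
    """A request counts as sequential when its first page is adjacent to the
    last page accessed previously (simplified form of the kernel's check)."""
    return first_page_of_request == last_page_accessed + 1
```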
From the above mechanism, the design flow of kernel pre-reading is to first determine whether a read is sequential, and then act accordingly:
If the read is sequential, the pre-read amount is increased gradually: generally 4 file pages are pre-read at first, and the number of pre-read pages is then doubled each time until the maximum number of pre-read pages set by the system is reached. If the read is random, pre-reading is reduced or turned off.
The pre-reading therefore goes through several rounds from its starting value to the maximum set by the system. For example, if the maximum pre-read set by the system is 512 pages and the first pre-read is 4 pages, the system maximum is reached only after 8 pre-reading processes, which takes relatively long. If the system memory is abundant at that moment, more pages could in theory be pre-read without going through such a long process, which would reduce the probability of page misses in the system file cache and improve the fluency of the system.
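The round count in the example can be reproduced with a short sketch (assuming, as described above, that the window doubles after each round):

```python
def readahead_rounds(first_pages: int, max_pages: int) -> int:
    """Number of pre-reading processes needed to grow from the first pre-read
    size to the system maximum when the window doubles after each round."""
    rounds, window = 1, first_pages
    while window < max_pages:
        window *= 2
        rounds += 1
    return rounds
```

Here readahead_rounds(4, 512) gives the 8 rounds from the example, readahead_rounds(256, 512) gives 2, and starting at the maximum needs only a single round.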
Linux memory pressure detection is mainly performed in two ways:
First, judging the remaining free memory: this value is very intuitive and shows the currently unused memory in the system. If it exceeds a certain threshold, i.e., the system's free memory is comparatively abundant, memory resources are sufficient at that moment and pre-reading can directly be performed at the maximum;
Second, judging the available memory: available memory = remaining free memory + file cache
After Linux reads file caches in, they are not reclaimed as long as memory is not tight; they are kept in the system so that the next reuse responds quickly, and recovering file caches is usually fast. Therefore, if the remaining free memory is insufficient but the available memory exceeds a certain threshold, the system can by this comparison still be considered to have abundant memory resources.
In addition, to judge memory resources more accurately and to avoid cases where recovering the file cache is difficult, the present invention adds one more check, memory recovery pressure monitoring: the file-cache pre-reading optimization is performed only if both the available memory and the memory recovery pressure meet the standard.
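The two-way memory judgment, extended with the recovery pressure check, can be sketched like this (thresholds and names are illustrative):

```python
def available_memory(free_pages: int, file_cache_pages: int) -> int:
    # available memory = remaining free memory + file cache
    return free_pages + file_cache_pages

def memory_abundant(free_pages: int, file_cache_pages: int,
                    free_threshold: int, available_threshold: int,
                    stall_time: float, stall_threshold: float) -> bool:
    """Memory is abundant if free memory alone exceeds its threshold, or if the
    available memory exceeds its threshold while recovery pressure stays low."""
    if free_pages > free_threshold:
        return True
    return (available_memory(free_pages, file_cache_pages) > available_threshold
            and stall_time < stall_threshold)
```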
The memory pressure can be monitored through memory recovery, and the judgment logic is simple. Specifically, a schematic flowchart of the supervision mechanism for memory recovery is shown in fig. 7, which includes the following steps:
S71: detecting the memory load;
S72: allocating memory;
S73: judging whether the system is low on memory;
S74: if the memory is low, recording a first timestamp, e.g., timestamp 1;
S75: performing memory recovery;
S76: acquiring a second timestamp, e.g., timestamp 2, after the memory recovery;
S77: judging whether the memory of the system is sufficient at this moment;
S78: if the memory of the system is sufficient, setting the memory recovery pressure to 0;
S79: ending the memory recovery.
When memory allocation finds that memory is low, the kernel performs memory recovery. In this optimization, timestamp 1 is recorded when the memory recovery starts, and timestamp 2 is continuously acquired during the recovery process. The duration of the memory recovery is denoted STALL_TIME, where STALL_TIME = timestamp 2 - timestamp 1; STALL_TIME indicates the pressure of the memory recovery.
It should be noted that after the memory recovery succeeds, i.e., once the memory is sufficient, STALL_TIME must be set to 0 immediately, which avoids misjudgment during the pre-reading process.
For the judgment of IO resources, this design adds IO monitoring to the system, so that the IO load of the current system can be read accurately in real time during memory recovery. At present, the IO statistics are placed in the scheduling subsystem: Linux performs task scheduling at fixed intervals, at which point the total IO amount generated by the system over that interval is read. The IO load is calculated as:
IO load = total IO amount / time
The total IO amount over a period of time divided by the length of that period is the IO load for the period. The IO load judgment is added here to make the adjustment of pre-reading more accurate.
As shown in fig. 8, pre-reading reaches the maximum number of pre-read pages in advance according to the memory and IO resources. The optimized Linux asynchronous-read process calls the page_cache_async_readahead function in the kernel for pre-reading. The design logic is simple: when the system performs pre-reading, the memory resources and IO resources of the system at that moment are judged to decide whether the pre-read size needs to be set in advance to the maximum number of pre-read pages set by the system.
The means adopted to judge whether to set the pre-read size in advance to the system maximum are as follows:
S81: entering asynchronous pre-reading;
S82: in the asynchronous pre-reading process, reading the remaining free memory of the system;
S83: judging whether the remaining free memory is greater than the free memory threshold;
S84: if the free memory is greater than the free memory threshold, reading the available memory of the system;
S85: judging whether the available memory is greater than the available memory threshold;
S86: if the available memory is greater than the available memory threshold, acquiring the system memory recovery pressure;
The acquisition of the system memory recovery pressure is determined by the memory pressure detection mechanism shown in fig. 7.
S87: judging whether the memory recovery pressure is less than the memory pressure threshold;
S88: if the memory recovery pressure is less than the memory pressure threshold, acquiring the IO pressure of the system for further judgment;
S89: judging whether the IO pressure is less than the IO pressure threshold;
S90: if the IO pressure is less than the IO pressure threshold, adjusting the pre-read size to the maximum pre-read set by the system;
S91: performing an ordinary pre-read if the free memory is less than or equal to the free memory threshold, if the available memory is less than or equal to the available memory threshold, if the memory recovery pressure is greater than or equal to the memory pressure threshold, or if the IO pressure is greater than or equal to the IO pressure threshold.
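The chained checks S81 to S91 reduce to one decision function (a sketch; thresholds and names are hypothetical):

```python
def choose_readahead_step(free_mem: int, free_threshold: int,
                          available_mem: int, available_threshold: int,
                          recovery_pressure: float, pressure_threshold: float,
                          io_pressure: float, io_threshold: float,
                          current_step: int, max_step: int) -> int:
    """S81-S91: raise the pre-read size to the system maximum only when every
    resource check passes; otherwise keep the current pre-reading step length."""
    if (free_mem > free_threshold                      # S83
            and available_mem > available_threshold    # S85
            and recovery_pressure < pressure_threshold # S87
            and io_pressure < io_threshold):           # S89
        return max_step                                # S90
    return current_step                                # S91: ordinary pre-read
```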
In this way, by monitoring the memory information and the IO information in real time, the step length of each pre-read is increased so that more data is pre-read each time and the number of pre-reads is reduced. Although the time of each pre-read is related to the amount of data pre-read, merging what would be several pre-read flows into a single pre-read still saves pre-reading time overall.
As can be seen from the above and from the flow shown in fig. 8, the condition for setting the pre-read size in advance to the maximum number of pre-read pages set by the system is one of the following: the remaining free pages of the system are above a certain threshold and the IO load is below a certain threshold; or the available memory of the system is above a certain threshold, the memory recovery pressure is below its threshold, and the IO load is below a certain threshold. That is, the pre-read size is set in advance to the maximum number of pre-read pages set by the system according to the adequacy of the system's memory and IO resources. After this optimization, the response speed of file use can be improved in scenarios with sufficient system resources, such as the start of the WeChat application.
Referring to fig. 9, fig. 9 is a schematic block diagram of an embodiment of a data pre-reading device of an electronic device according to the present application. The embodiment of the present application provides a data pre-reading device 6 for an electronic device, including:
an obtaining module 61, configured to obtain a read-ahead request of target data;
the obtaining module 61 is further configured to obtain memory information and IO information according to the read-ahead request;
the adjusting module 62 is connected to the obtaining module 61, and is configured to adjust the current pre-reading step length to a maximum pre-reading step length supported by the system when the memory information and the IO information meet a preset requirement, where the current pre-reading step length is less than or equal to the maximum pre-reading step length of the system;
and the pre-reading module 63 is connected with the adjusting module 62 and is used for caching the pre-read target data into the memory according to the maximum pre-read step length.
In this way, for the situation where pre-reading often requires a long wait across the several pre-read rounds between the start of pre-reading and the pre-read maximum set by the system, the obtaining module 61 obtains the memory information and the IO information according to the pre-read request, so that the actual state of the system memory and the disk performance can be monitored in time. Then, when the memory information and the IO information meet the preset requirement, the adjusting module 62 adjusts the current pre-reading step length to the maximum pre-reading step length supported by the system, which can reduce the number of pre-reads to 1, so that under sufficient system resources the pre-reading module 63 pre-reads from the start directly at the maximum set by the system. In this manner, the present application can effectively adjust the current pre-reading step length to the maximum pre-reading step length supported by the system and improve the response speed of data use, thereby reducing the pre-read waiting time and improving the pre-reading performance of the system.
Further, please refer to fig. 10, which is a schematic diagram of another embodiment of the mobile terminal of the present application. The embodiment of the present application provides another mobile terminal 7, including a processor 71 and a memory 72, where the memory 72 stores a computer program 721, and the processor 71 is configured to execute the computer program 721 to implement the method according to the first aspect of the embodiment of the present application, which is not described again here.
Referring to fig. 11, fig. 11 is a schematic block diagram of an embodiment of a computer-readable storage medium of the present application. If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in the computer-readable storage medium 80. Based on this understanding, the technical solutions of the present application, or the parts thereof that are essential or that contribute to the prior art, may be embodied in the form of a software product, which is stored in a storage device and includes several instructions (computer program 81) for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The foregoing storage device includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, as well as electronic devices having such a storage medium, such as a computer, a mobile phone, a notebook computer, a tablet computer, and a camera.
For a description of the execution process of the computer program in the computer-readable storage medium, reference may be made to the above description of the method embodiments of the mobile terminal of the present application, which is not repeated here.
Referring to fig. 12, fig. 12 is a schematic block diagram of a hardware architecture of a mobile terminal according to the present application, where the mobile terminal 900 may be an industrial computer, a tablet computer, a mobile phone, a notebook computer, and the like, and the mobile phone is taken as an example in the embodiment. The mobile terminal 900 may be configured to include a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a sensor 950, an audio circuit 960, a WiFi (wireless fidelity) module 970, a processor 980, a power supply 990, and the like. Wherein the RF circuit 910, the memory 920, the input unit 930, the display unit 940, the sensor 950, the audio circuit 960, and the WiFi module 970 are respectively connected to the processor 980; the power supply 990 is used to supply power to the entire mobile terminal 900.
Specifically, the RF circuit 910 is used for transmitting and receiving signals; the memory 920 is used for storing data and instructions; the input unit 930 is used for inputting information and may specifically include a touch panel 931 and other input devices 932 such as operation keys; the display unit 940 may include a display panel and the like; the sensor 950 includes an infrared sensor, a laser sensor, etc., for detecting user proximity signals, distance signals, and the like; the speaker 961 and the microphone 962 are coupled to the processor 980 via the audio circuit 960 for receiving and transmitting sound signals; the WiFi module 970 is configured to receive and transmit WiFi signals; and the processor 980 is configured to process the data information of the mobile terminal.
The above description is only a part of the embodiments of the present application, and not intended to limit the scope of the present application, and all equivalent devices or equivalent processes performed by the content of the present application and the attached drawings, or directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for pre-reading data, the method comprising:
obtaining a pre-reading request of target data;
according to the pre-reading request, obtaining IO information and the rest idle memory of the system;
when the IO information meets a preset requirement, judging whether the residual idle memory is larger than an idle memory threshold value; if the residual idle memory is larger than the idle memory threshold, adjusting the current pre-reading step length to be the maximum pre-reading step length supported by the system, and pre-reading the target data into the memory according to the maximum pre-reading step length;
if the residual free memory is less than or equal to the free memory threshold value, reading the available memory of the system; wherein the available memory comprises the residual idle memory and a file cache;
judging whether the available memory is larger than an available memory threshold value or not; if the available memory is smaller than or equal to the available memory threshold, caching the file data of the target file into the memory according to the current pre-reading step length;
if the available memory is larger than the available memory threshold value, acquiring the memory recovery pressure of the system; judging whether the memory pressure value corresponding to the memory recovery pressure is smaller than a preset memory pressure threshold value or not; if the memory pressure value is greater than or equal to the memory pressure threshold, caching the file data of the target file into the memory according to the current pre-reading step length;
if the memory pressure value is smaller than the memory pressure threshold, adjusting the current pre-reading step length to be the maximum pre-reading step length supported by the system, and pre-reading the target data to the memory according to the maximum pre-reading step length.
2. The method of claim 1,
reading a used memory of a memory resource corresponding to the memory information of the system according to the pre-reading request, and acquiring the residual idle memory according to the used memory;
and reading the IO pressure corresponding to the IO information when the residual idle memory is larger than the idle memory threshold value, and adjusting the current pre-reading step length according to the IO pressure.
3. The method of claim 2,
reading IO pressure corresponding to the IO information, and adjusting the current pre-reading step length according to the IO pressure, including:
judging whether the IO pressure is smaller than a preset IO pressure threshold value or not;
if the IO pressure is smaller than the IO pressure threshold, adjusting the current pre-reading step length to be the maximum pre-reading step length supported by the system;
if the IO pressure is larger than or equal to the IO pressure threshold, caching the file data of the target file into a memory according to the current pre-reading step length.
4. The method of claim 3,
reading the IO pressure corresponding to the IO information includes:
acquiring the IO total amount of the system and the time period used for completing the IO total amount;
and obtaining the IO load corresponding to the IO pressure based on the quotient of the IO total amount and the time period so as to monitor the IO in real time.
5. The method of claim 2,
and when the memory pressure value is smaller than the memory pressure threshold value, obtaining the IO pressure corresponding to the IO information and adjusting the current pre-reading step length according to the IO pressure.
6. The method of claim 2,
the adjusting the current pre-reading step length according to the IO pressure includes:
acquiring the current pre-reading step length as an increasing reference of the system pre-reading step length;
setting the increasing amplitude of the current pre-reading step length based on the memory information;
and increasing the current pre-reading step length to be the maximum pre-reading step length supported by the system according to the increase reference and the increase amplitude.
7. The method of claim 1,
the acquiring the memory recovery pressure of the system comprises:
reading a first timestamp corresponding to the memory recovery pressure, and performing memory recovery on the system;
acquiring a second timestamp after the memory is recovered;
and subtracting the first timestamp from the second timestamp to obtain a pressure value of memory recovery to serve as the current memory recovery pressure, so that the memory recovery pressure is set to be 0 after the memory recovery is successful.
8. A mobile terminal, comprising:
the acquisition module is used for acquiring a pre-reading request of target data;
the obtaining module is further configured to obtain memory information and IO information according to the read-ahead request;
the adjusting module is connected with the acquiring module and used for adjusting the current pre-reading step length to be the maximum pre-reading step length supported by a system when the memory information and the IO information meet the preset requirement, wherein the current pre-reading step length is smaller than or equal to the maximum pre-reading step length of the system;
the pre-reading module is connected with the adjusting module and used for pre-reading the target data and caching the target data into a memory according to the maximum pre-reading step length;
wherein the mobile terminal is configured to perform the method of any one of claims 1 to 7.
9. A mobile terminal, comprising: a processor and a memory, the memory having stored therein a computer program for executing the computer program to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202011407098.4A 2020-12-03 2020-12-03 Data pre-reading method, mobile terminal and computer readable storage medium Active CN112558866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011407098.4A CN112558866B (en) 2020-12-03 2020-12-03 Data pre-reading method, mobile terminal and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN112558866A CN112558866A (en) 2021-03-26
CN112558866B true CN112558866B (en) 2022-12-09

Family

ID=75048421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011407098.4A Active CN112558866B (en) 2020-12-03 2020-12-03 Data pre-reading method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112558866B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114461588B (en) * 2021-08-20 2023-01-24 荣耀终端有限公司 Method for adjusting pre-reading window and electronic equipment
CN114461589B (en) * 2021-08-24 2023-04-11 荣耀终端有限公司 Method for reading compressed file, file system and electronic equipment
CN113760192B (en) * 2021-08-31 2022-09-02 荣耀终端有限公司 Data reading method, data reading apparatus, storage medium, and program product
CN114327284B (en) * 2021-12-30 2023-02-03 河北建筑工程学院 Data processing method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508638A (en) * 2011-09-27 2012-06-20 华为技术有限公司 Data pre-fetching method and device for non-uniform memory access
CN105955821A (en) * 2016-04-21 2016-09-21 北京小米移动软件有限公司 Method and device for pre-reading
CN109542361A (en) * 2018-12-04 2019-03-29 郑州云海信息技术有限公司 A kind of distributed memory system file reading, system and relevant apparatus
CN110209502A (en) * 2019-06-05 2019-09-06 北京奇艺世纪科技有限公司 A kind of information storage means, device, electronic equipment and storage medium
CN110888746A (en) * 2019-12-10 2020-03-17 Oppo(重庆)智能科技有限公司 Memory management method and device, storage medium and electronic equipment
CN111930513A (en) * 2020-08-31 2020-11-13 Oppo(重庆)智能科技有限公司 File pre-reading adjusting method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7536529B1 (en) * 2005-06-10 2009-05-19 American Megatrends, Inc. Method, system, apparatus, and computer-readable medium for provisioning space in a data storage system
CN106970881B (en) * 2017-03-10 2020-04-28 浙江大学 Hot and cold page tracking and compression recovery method based on large page
CN111258967A (en) * 2020-02-11 2020-06-09 西安奥卡云数据科技有限公司 Data reading method and device in file system and computer readable storage medium
CN111930307B (en) * 2020-07-30 2022-06-17 北京浪潮数据技术有限公司 Data reading method, device and equipment and computer readable storage medium



Similar Documents

Publication Publication Date Title
CN112558866B (en) Data pre-reading method, mobile terminal and computer readable storage medium
KR102114388B1 (en) Method and apparatus for compressing memory of electronic device
CN111158910B (en) Memory management method and device, storage medium and electronic equipment
US8590001B2 (en) Network storage system with data prefetch and method of operation thereof
CN111352861B (en) Memory compression method and device and electronic equipment
CN110888746A (en) Memory management method and device, storage medium and electronic equipment
US11263139B2 (en) Hardware accelerators and access methods thereof
EP3531265A1 (en) Dram-based storage cache method and intelligent terminal
CN111930513B (en) File pre-reading adjusting method and device, electronic equipment and storage medium
CN111274039A (en) Memory recovery method and device, storage medium and electronic equipment
CN112711387A (en) Method and device for adjusting capacity of buffer area, electronic equipment and readable storage medium
CN115421651A (en) Data processing method of solid state disk, electronic device and medium
CN113316794A (en) Data management device for supporting high-speed artificial neural network operation by data cache based on data position of artificial neural network
CN115145735A (en) Memory allocation method and device and readable storage medium
WO2021047398A1 (en) Method and device for storage block reclaiming, storage medium, and electronic device
CN114168495A (en) Enhanced read-ahead capability for memory devices
US7512753B2 (en) Disk array control apparatus and method
CN114564315A (en) Memory allocation method and device, electronic equipment and medium
CN114416178A (en) Data access method, device and non-transitory computer readable storage medium
CN111078405B (en) Memory allocation method and device, storage medium and electronic equipment
CN112130766A (en) Data writing method, device, equipment and storage medium based on Flash memory
CN113138940A (en) Memory recovery method and device, electronic equipment and storage medium
CN114968546A (en) Load processing method and related device
JP5233541B2 (en) Memory control circuit, electronic device control device, and multifunction device
CN117492662B (en) Pre-reading determination method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant