CN113760192B - Data reading method, data reading apparatus, storage medium, and program product - Google Patents
- Publication number
- CN113760192B (application number CN202111017226.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- read
- reading
- hit rate
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses a data reading method, a data reading apparatus, a storage medium, and a program product, belonging to the technical field of computers. The method comprises the following steps. When a target service runs, the pre-read data hit rate of the target service's previous run is obtained, where the pre-read data hit rate is the rate at which data pre-read from a file system into memory is actually accessed. A first pre-read window threshold is then set according to the pre-read data hit rate; this threshold is the maximum amount of data that can be pre-read in a single operation when the target service pre-reads data from the file system into memory during the current run. Finally, data is pre-read from the file system into memory according to the first pre-read window threshold. Because the first pre-read window threshold depends on the pre-read data hit rate, the amount of data the target service pre-reads during the current run matches that hit rate, so excessive or insufficient pre-reading is avoided and reasonable use of system resources is ensured.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular to a data reading method, a data reading apparatus, a storage medium, and a program product.
Background
In scenarios such as starting an application or playing a game, many input/output (IO) requests are generated, each requesting a sequential read of data in a file system. To improve read efficiency, a data pre-reading (read-ahead) mechanism is provided: each time an IO request is received, in addition to accessing the data the request asks for, the data that follows it in the file system is also read sequentially into memory, i.e., read into memory in advance. When the next IO request arrives, the requested data can then be served directly from the pre-read data in memory rather than read from the file system, which accelerates data access. However, data pre-read from the file system into memory is not always accessed; when much of the pre-read data goes unused, memory and IO resources are wasted.
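The sequential read-ahead mechanism described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the class name, the 4 KiB page granularity, and the fixed window of 4 pages are all assumptions chosen for the example.

```python
PAGE_SIZE = 4096  # bytes per page; illustrative granularity


class ReadAheadCache:
    """Minimal sketch of sequential read-ahead: serving one request
    also pulls the next few pages of the file into memory."""

    def __init__(self, backing_file: bytes, window_pages: int = 4):
        self.backing = backing_file
        self.window_pages = window_pages
        self.cache = {}   # page index -> page bytes already in memory
        self.hits = 0
        self.misses = 0

    def _load(self, page: int) -> None:
        start = page * PAGE_SIZE
        if start < len(self.backing):
            self.cache[page] = self.backing[start:start + PAGE_SIZE]

    def read_page(self, page: int) -> bytes:
        if page in self.cache:
            self.hits += 1           # served from pre-read data
        else:
            self.misses += 1         # must go to the "file system"
            self._load(page)
        # Read ahead: pre-load the next `window_pages` pages.
        for p in range(page + 1, page + 1 + self.window_pages):
            if p not in self.cache:
                self._load(p)
        return self.cache.get(page, b"")
```

A sequential reader pays one miss for the first page and then hits on the pre-loaded pages, which is exactly the acceleration (and, for unused pages, the waste) the background section describes.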
Disclosure of Invention
The application provides a data reading method, a data reading device, a storage medium and a program product, which can avoid excessive or insufficient pre-reading of data, thereby ensuring reasonable use of system resources. The technical scheme is as follows:
In a first aspect, a data reading method is provided. In the method, when a target service runs, the pre-read data hit rate of the target service's previous run is obtained. A first pre-read window threshold is then set according to that hit rate. Finally, data is pre-read from the file system into memory according to the first pre-read window threshold.
The target service is a service that needs to sequentially read data stored in the file system while it runs. For example, the target service may be starting an application, or performing some operation within an application (such as displaying a game interface); the embodiments of this application do not limit this.
The pre-read data hit rate is the rate at which data pre-read from the file system into memory is actually accessed. That is: pre-read data hit rate = amount of pre-read data accessed / total amount of pre-read data.
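The ratio above can be written directly as a small helper; the function name and the byte-count units are illustrative assumptions, and the zero-division guard is an added safety check not spelled out in the text.

```python
def pre_read_hit_rate(accessed_bytes: int, total_preread_bytes: int) -> float:
    """Hit rate = amount of pre-read data actually accessed / total pre-read.
    Returns 0.0 when nothing was pre-read, to avoid division by zero."""
    if total_preread_bytes == 0:
        return 0.0
    return accessed_bytes / total_preread_bytes
```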
The pre-read window threshold is the maximum amount of data that can be pre-read in a single operation when data is pre-read from the file system into memory; it bounds the size of the pre-read window used for each pre-read. In other words, when an IO request is received and data needs to be pre-read from the file system into memory, the amount of data pre-read at that time cannot exceed the amount indicated by the threshold; that is, the pre-read window used for that pre-read cannot exceed the pre-read window threshold.
In this application, the first pre-read window threshold is the maximum amount of data that can be pre-read in a single operation while the target service pre-reads data from the file system into memory during its current run. Because this threshold depends on the pre-read data hit rate, the amount of data pre-read during the current run matches the hit rate, which avoids excessive or insufficient pre-reading, ensures reasonable use of system resources, and keeps other services running normally.
In one possible approach, the terminal determines the first pre-read window threshold from the previous run's pre-read data hit rate as soon as the target service starts running, so that the amount of pre-read data is limited by the first pre-read window threshold throughout the target service's entire run, with the same benefits as above.
In this case, obtaining the previous run's pre-read data hit rate when the target service runs may proceed as follows: an IO request generated during the target service's run is received, the request asking to read data stored in the file system; if the data requested by the IO request is not found in memory, the pre-read data hit rate of the target service's previous run is obtained. Accordingly, after data is pre-read from the file system into memory according to the first pre-read window threshold, the data requested by the IO request can be accessed in memory.
If the terminal does not find the requested data in memory, the IO request is the first one generated during the current run, i.e., the target service has just started. The previous run's pre-read data hit rate can then be obtained and used to set the pre-read window threshold for the current run, so that the amount of pre-read data is limited by that threshold throughout the run.
Optionally, setting the first pre-read window threshold according to the pre-read data hit rate may proceed as follows. If the hit rate is below a first hit-rate threshold, the pre-read window threshold used in the target service's previous run is reduced, and the reduced value is used as the first pre-read window threshold. If the hit rate is greater than or equal to the first hit-rate threshold and less than or equal to a second hit-rate threshold (the second being greater than the first), the previous run's pre-read window threshold is used unchanged as the first pre-read window threshold. If the hit rate exceeds the second hit-rate threshold, the previous run's pre-read window threshold is increased, and the increased value is used as the first pre-read window threshold.
A hit rate below the first hit-rate threshold means that during the previous run only a small fraction of the data pre-read from the file system into memory was accessed, i.e., too much unused data was pre-read, indicating that the previous run's pre-read window threshold was too large. To avoid unnecessary over-reading during the current run, that threshold is reduced before being used as the first pre-read window threshold, lowering the amount of pre-read data and avoiding wasted system resources.
A hit rate between the two thresholds means that a moderate fraction of the pre-read data was accessed during the previous run, i.e., the amount of pre-read data was about right and the previous threshold was appropriate. The previous run's pre-read window threshold is therefore kept and used directly as the first pre-read window threshold.
A hit rate above the second hit-rate threshold means that a large fraction of the data pre-read from the file system into memory was accessed during the previous run, i.e., pre-reading was clearly beneficial to system operation. To further improve pre-read efficiency during the current run, the previous run's pre-read window threshold is increased before being used as the first pre-read window threshold, raising the amount of pre-read data.
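The three-branch adjustment above can be sketched as a pure function. Note that the patent only fixes the structure (shrink / keep / grow around two hit-rate thresholds); the concrete threshold values, scale factors, and clamping bounds below are illustrative assumptions.

```python
def next_window_threshold(prev_threshold: int,
                          hit_rate: float,
                          low: float = 0.3,        # first hit-rate threshold (assumed)
                          high: float = 0.7,       # second hit-rate threshold (assumed)
                          shrink: float = 0.5,     # reduction factor (assumed)
                          grow: float = 2.0,       # increase factor (assumed)
                          min_t: int = 4096,
                          max_t: int = 512 * 1024) -> int:
    """Set this run's first pre-read window threshold from the previous
    run's pre-read data hit rate, then clamp to sane bounds."""
    if hit_rate < low:            # too much unused pre-read data: shrink
        t = int(prev_threshold * shrink)
    elif hit_rate <= high:        # moderate hit rate: keep as-is
        t = prev_threshold
    else:                         # pre-reading pays off: grow
        t = int(prev_threshold * grow)
    return max(min_t, min(t, max_t))
```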
Alternatively, the data stored in the file system is compressed data; for example, the file system may be an enhanced read-only file system (EROFS), in which data is stored in compressed form. In this case, before data is pre-read from the file system into memory according to the first pre-read window threshold, a data decompression rule for the target service's current run may also be set according to the pre-read data hit rate. The rule indicates whether decompression is required when compressed data is pre-read from the file system into memory. During the current run, whether decompression happens at pre-read time therefore depends on the pre-read data hit rate, which avoids wasting CPU resources as far as possible.
Optionally, if the hit rate is below the first hit-rate threshold, indicating excessive pre-reading, decompression of pre-read compressed data can be deferred until the data is actually accessed, to avoid wasting CPU resources. The data decompression rule for the current run is then set to: no decompression when compressed data is pre-read from the file system into memory.
If the hit rate is greater than or equal to the first hit-rate threshold and less than or equal to the second, indicating a moderate amount of pre-reading, compressed data can be decompressed normally as it is pre-read, so the rule is set to: decompression is required when compressed data is pre-read from the file system into memory.
If the hit rate exceeds the second hit-rate threshold, indicating that pre-reading is working well, compressed data can likewise be decompressed normally as it is pre-read, so the rule is again set to: decompression is required when compressed data is pre-read from the file system into memory.
If the data decompression rule indicates that no decompression is needed, then when data is pre-read according to the first pre-read window threshold, the compressed data is pre-read from the file system into memory as-is, without decompression, and it is the compressed data that is stored in memory. In this case, when the data requested by an IO request is accessed in memory, the requested compressed data is decompressed first and the decompressed data is then accessed. That is, when the pre-read data hit rate is low, compressed data is not decompressed at pre-read time, but only when specific compressed data in memory is later accessed. This avoids decompressing, at the pre-read stage, large amounts of compressed data that will never actually be accessed, reduces unnecessary decompression work, and saves CPU resources.
If the data decompression rule indicates that decompression is required, then when data is pre-read according to the first pre-read window threshold, the compressed data is pre-read from the file system into memory and decompressed at the same time, and it is the decompressed data that is stored in memory. In this case, the data requested by an IO request can be accessed in memory directly. That is, when the pre-read data hit rate is moderate or high, compressed data is decompressed while being pre-read, and the decompressed data in memory can be accessed directly afterwards, so the pre-read mechanism delivers its full benefit.
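The eager-versus-lazy decompression paths above can be sketched as follows. The class and function names are illustrative, zlib stands in for whatever compression the file system actually uses, and the 0.3 low-threshold default is an assumption; the patent only fixes the rule "defer decompression when the hit rate is below the first hit-rate threshold."

```python
import zlib


def decompress_eagerly(hit_rate: float, low: float = 0.3) -> bool:
    """Decompression rule for this run: below the first (low) hit-rate
    threshold, defer decompression until the data is actually accessed."""
    return hit_rate >= low


class CompressedPageCache:
    """Sketch: pages pre-read from a compressed file system are either
    decompressed on pre-read (eager) or on first access (lazy)."""

    def __init__(self, eager: bool):
        self.eager = eager
        self.pages = {}  # page id -> (is_compressed, payload)

    def preread(self, page_id: int, compressed: bytes) -> None:
        if self.eager:
            # Decompress while pre-reading; memory holds decompressed data.
            self.pages[page_id] = (False, zlib.decompress(compressed))
        else:
            # Defer the CPU work; memory holds the compressed data.
            self.pages[page_id] = (True, compressed)

    def access(self, page_id: int) -> bytes:
        is_compressed, payload = self.pages[page_id]
        if is_compressed:  # lazy path: decompress only on actual access
            payload = zlib.decompress(payload)
            self.pages[page_id] = (False, payload)
        return payload
```

On the lazy path, pages that are pre-read but never accessed are never decompressed, which is exactly the CPU saving the text describes.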
Further, besides adjusting the pre-read window threshold used when the target service runs, the method can also adjust the pre-read window threshold used by an application process, as follows:
After an application process switches from foreground to background, the pre-read window threshold the process used before the switch is kept as a second pre-read window threshold. While the process runs in the background, each time one of its IO requests is received, data is pre-read from the file system into memory according to a specified pre-read window threshold, and the data requested by the IO request is accessed in memory; the specified pre-read window threshold is smaller than the second pre-read window threshold.
The specified pre-read window threshold is a preset, small pre-read window threshold, smaller than the second pre-read window threshold. When the application process switches to the background, the pre-read window threshold is therefore reduced, lowering the amount of pre-read data and the system resources consumed, and thereby reducing the impact on foreground applications. In some cases, the specified threshold may be set to 0, so that no pre-reading is performed while the process runs in the background, which greatly reduces system resource consumption.
For pre-reading while the application process runs in the background, if the data stored in the file system is compressed, the pre-read compressed data is not decompressed each time it is pre-read from the file system into memory; it is decompressed only when specific compressed data in memory is later accessed, further reducing system resource consumption.
After the application process switches from background back to foreground, each time one of its IO requests is received during foreground operation, data is pre-read from the file system into memory according to the second pre-read window threshold, and the data requested by the IO request is accessed in memory. That is, when the process returns to the foreground, the pre-read window threshold is restored to normal, ensuring the pre-read effect and the process's normal operation.
For pre-reading while the application process runs in the foreground, if the data stored in the file system is compressed, the pre-read compressed data is decompressed each time it is pre-read from the file system into memory, so that the decompressed data in memory can be accessed directly afterwards, again ensuring the pre-read effect and the process's normal operation.
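The foreground/background window switching above can be sketched as a small per-process policy object. The names are illustrative; the background threshold of 0 matches the "no pre-reading in the background" option the text mentions, but any value smaller than the saved foreground threshold fits the scheme.

```python
BACKGROUND_THRESHOLD = 0  # bytes; 0 disables pre-reading entirely (one option in the text)


class ProcessReadAheadPolicy:
    """Sketch of per-process window switching: shrink (or zero) the
    pre-read window in the background, restore it in the foreground."""

    def __init__(self, foreground_threshold: int):
        self.saved_threshold = foreground_threshold  # "second pre-read window threshold"
        self.current = foreground_threshold
        self.in_foreground = True

    def to_background(self) -> None:
        self.saved_threshold = self.current      # remember the pre-switch value
        self.current = BACKGROUND_THRESHOLD      # use the specified small threshold
        self.in_foreground = False

    def to_foreground(self) -> None:
        self.current = self.saved_threshold      # restore the normal window
        self.in_foreground = True
```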
In a second aspect, a data reading apparatus is provided, which has a function of implementing the behavior of the data reading method in the first aspect described above. The data reading device comprises at least one module, and the at least one module is used for realizing the data reading method provided by the first aspect.
In a third aspect, a data reading apparatus is provided, comprising a processor and a memory. The memory stores a program that supports the apparatus in executing the data reading method provided in the first aspect, together with the data used to implement that method. The processor is configured to execute the program stored in the memory. The data reading apparatus may further comprise a communication bus establishing a connection between the processor and the memory.
In a fourth aspect, a computer-readable storage medium is provided, having instructions stored therein which, when run on a computer, cause the computer to execute the data reading method of the first aspect.
In a fifth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the data reading method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described herein again.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 2 is a block diagram of a software system of a terminal according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a data reading process provided by an embodiment of the present application;
fig. 4 is a flowchart of a data reading method provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of a data reading system according to an embodiment of the present application;
FIG. 6 is a flow chart of another data reading method provided by the embodiments of the present application;
FIG. 7 is a schematic interface diagram of a music application launched according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another data reading process provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of a data reading apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that "a plurality" in this application means two or more. In the descriptions of this application, "/" generally indicates an "or" relationship between the associated objects; for example, A/B may indicate A or B. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone. In addition, to describe the technical solutions of this application clearly, words such as "first" and "second" are used to distinguish between identical or similar items whose functions and purposes are substantially the same. Those skilled in the art will appreciate that such words do not limit quantity or execution order, nor do they imply any difference in importance.
Before explaining the data reading method provided by the embodiment of the present application in detail, the terminal according to the embodiment of the present application will be explained.
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application. Referring to fig. 1, the terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation to the terminal 100. In other embodiments of the present application, terminal 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The controller may be, among other things, a neural center and a command center of the terminal 100. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces, such as an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, among others.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the terminal 100, and may also be used to transmit data between the terminal 100 and peripheral devices. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The USB interface 130 may also be used to connect other terminals, such as AR devices, etc.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 140 may also supply power to the terminal 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in terminal 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. Such as: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication and the like applied to the terminal 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication applied to the terminal 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves via the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the terminal 100 is coupled with the mobile communication module 150 and the antenna 2 is coupled with the wireless communication module 160 so that the terminal 100 can communicate with a network and other devices through a wireless communication technology. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. GNSS may include Global Positioning System (GPS), global navigation satellite system (GLONASS), beidou satellite navigation system (BDS), quasi-zenith satellite system (QZSS), and/or Satellite Based Augmentation System (SBAS).
The terminal 100 implements a display function through the GPU, the display screen 194, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal 100 may include 1 or N display screens 194, where N is an integer greater than 1.
The terminal 100 can implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, and the application processor, etc.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter opens, light passes through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin color of the image. The ISP can also optimize parameters such as the exposure and color temperature of a shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving files of music, video, etc. in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications of the terminal 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, and the like) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The terminal 100 can implement audio functions, such as music playing, recording, etc., through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
Next, a software system of the terminal 100 will be explained.
The software system of the terminal 100 may adopt a hierarchical architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the present application, an Android (Android) system with a layered architecture is taken as an example to exemplarily describe a software system of the terminal 100.
Fig. 2 is a block diagram of a software system of the terminal 100 according to an embodiment of the present disclosure. Referring to fig. 2, the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, from top to bottom: an application layer, an application framework layer, an Android runtime and system library layer, and a kernel layer.
The application layer may include a series of application packages. As shown in fig. 2, the application packages may include camera, gallery, calendar, phone, map, navigation, WLAN, bluetooth, music, games, short messages, etc. applications.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 2, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like. The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like. The content provider is used to store and retrieve data and make the data accessible to applications; the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to construct the display interface of an application, and the display interface may consist of one or more views, such as a view for displaying a short message notification icon, a view for displaying text, and a view for displaying pictures. The phone manager is used to provide the communication functions of the terminal 100, such as management of call states (including connection, hang-up, etc.). The resource manager provides various resources, such as localized strings, icons, pictures, layout files, and video files, to applications. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a brief stay without requiring user interaction. For example, the notification manager is used to notify of download completion, message alerts, and the like.
The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing the Android system. The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, such as: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like. The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications. The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc. The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
The following describes exemplary workflow of software and hardware of the terminal 100 in connection with a game application startup scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including the touch coordinates, the timestamp of the touch operation, and the like). The raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the raw input event. Taking as an example that the touch operation is a click operation and the control corresponding to the click operation is the control of a game application icon, the game application calls the interface of the application framework layer to start the game application, then calls the kernel layer to start the display driver, and displays the application interface of the game application through the display screen 194.
Next, an application scenario of the data reading method provided in the embodiment of the present application is described.
In scenarios such as starting an application or playing a game, the amount of data to be read is large. In addition, a terminal generally runs multiple tasks, so concurrent IO requests are frequent, which further increases the amount of data read. In order to improve data reading efficiency, a data pre-reading (read-ahead) mechanism is provided in operating systems such as Android and Linux. Pre-reading means that when an IO request is received, more data than requested is read from the file system at a time and cached in the memory, so that the data requested by the next IO request can be accessed directly in the memory, thereby accelerating data access. However, the data pre-read from the file system into the memory is not always accessed, and when much of the pre-read data is not accessed, IO resources and memory resources are wasted.
Moreover, some terminals employ the enhanced read-only file system (EROFS). Data in the EROFS is stored in compressed form, which reduces the space occupied by the data and improves random read performance. Pre-reading in the EROFS is therefore also accompanied by a decompression operation on the compressed data, which occupies central processing unit (CPU) resources. On a terminal with a small memory capacity and low CPU performance, IO resources, memory resources, and CPU resources are all scarce; if pre-reading occupies too many of them, that is, too many system resources, applications run slowly and stuttering occurs.
Therefore, the embodiment of the present application provides a data reading method, which can set a pre-reading window threshold used by a service in the current operation according to a pre-reading data hit rate of the service in the last operation. Therefore, the threshold of the pre-reading window is limited, and the threshold of the pre-reading window is matched with the hit rate of the pre-reading data, so that excessive or insufficient pre-reading of the data can be avoided, reasonable use of system resources can be guaranteed, and normal operation of other services can be guaranteed.
The data pre-read mechanism is explained next.
Fig. 3 is a schematic diagram of a data reading process according to an embodiment of the present application. Referring to FIG. 3, assume that during the process of launching an application, three IO requests are generated to sequentially read data in the file system. The specific reading process is as follows:
the operating system receives a first IO request generated in the process of starting the application, where the first IO request is used to request to read data 1 in the file system. The operating system first looks up data 1 in the memory. Because this is the first read, data 1 cannot be found in the memory, and a synchronous pre-read is triggered. When the operating system performs the synchronous pre-read, a pre-read window is initialized, and the size of the pre-read window determines the amount of data pre-read this time. Assuming that the size of the initialized pre-read window is 4, 4 pieces of data (i.e., data 1 to data 4) are read from the file system into the memory. It can be seen that the first IO request requests data 1, while the operating system reads data 1 to data 4 from the file system; the last 3 pieces of data (i.e., data 2 to data 4) all belong to the pre-read data. The operating system then accesses data 1, requested by the first IO request, in the memory. In this case, the operating system marks a pre-read flag on the first piece (i.e., data 2) of the 3 pieces of data pre-read this time. When the data marked with the pre-read flag is subsequently accessed in the memory, the operating system performs an asynchronous pre-read, which is described in detail below.
The operating system receives a second IO request generated in the process of starting the application, where the second IO request is used to request to read data 2 and data 3 in the file system. The operating system first looks up data 2 and data 3 in the memory. Because data 2 and data 3 have previously been pre-read from the file system into the memory, they can be found in the memory, and data 2 and data 3 requested by the second IO request are accessed directly in the memory. Since data 2 is marked with the pre-read flag, an asynchronous pre-read is triggered when data 2 is accessed. When the operating system performs the asynchronous pre-read, the size of the pre-read window used in the last synchronous pre-read is increased and then used as the size of the pre-read window for this asynchronous pre-read. For example, 2 times the size of the pre-read window used in the last synchronous pre-read may be used, so that the size of the pre-read window for this asynchronous pre-read is 8. Then, continuing from where the last pre-read ended, that is, starting from the data (i.e., data 5) located after the last pre-read data (i.e., data 4) in the file system, 8 pieces of data (i.e., data 5 to data 12) are read from the file system into the memory. It can be seen that the second IO request requests data 2 and data 3, while the operating system reads data 5 to data 12 from the file system, all of which belong to the pre-read data. In this case, the operating system marks the pre-read flag on the first piece (i.e., data 5) of the 8 pieces of data pre-read this time. When the data marked with the pre-read flag is subsequently accessed in the memory, the operating system performs another asynchronous pre-read, which is described in detail below.
The operating system receives a third IO request generated in the process of starting the application, where the third IO request is used to request to read data 4 to data 7 in the file system. The operating system first looks up data 4 to data 7 in the memory. Since data 4 to data 7 have previously been pre-read from the file system into the memory, they can be found in the memory, and data 4 to data 7 requested by the third IO request are accessed directly in the memory. Since data 5 is marked with the pre-read flag, an asynchronous pre-read is triggered when data 5 is accessed. When the operating system performs the asynchronous pre-read, the size of the pre-read window used in the last asynchronous pre-read is increased and then used as the size of the pre-read window for this asynchronous pre-read. For example, 2 times the size of the pre-read window used in the last asynchronous pre-read may be used, so that the size of the pre-read window for this asynchronous pre-read is 16. Then, continuing from where the last asynchronous pre-read ended, that is, starting from the data (i.e., data 13) located after the last pre-read data (i.e., data 12) in the file system, 16 pieces of data (i.e., data 13 to data 28) are read from the file system into the memory. It can be seen that the third IO request requests data 4 to data 7, while the operating system reads data 13 to data 28 from the file system, all of which belong to the pre-read data. In this case, the operating system marks the pre-read flag on the first piece (i.e., data 13) of the 16 pieces of data pre-read this time. When the data marked with the pre-read flag is subsequently accessed in the memory, the operating system performs another asynchronous pre-read.
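The window growth across the three IO requests above can be sketched as follows. This is a minimal illustration only; the names `INITIAL_WINDOW` and `GROWTH_FACTOR` are assumptions for the example, not actual kernel identifiers, and the initial size of 4 and doubling factor of 2 are the example values from fig. 3.

```python
INITIAL_WINDOW = 4  # window used by the first synchronous pre-read (example value)
GROWTH_FACTOR = 2   # each later asynchronous pre-read doubles the window (example value)

def simulate_readahead(num_requests):
    """Return the pre-read window size used for each successive IO request."""
    windows = []
    window = INITIAL_WINDOW
    for _ in range(num_requests):
        windows.append(window)
        window *= GROWTH_FACTOR  # the next pre-read uses an enlarged window
    return windows

# The three requests in fig. 3 use windows of 4, 8 and 16 pieces of data:
print(simulate_readahead(3))  # [4, 8, 16]
```

Without an upper bound, the window keeps doubling with every request, which is exactly the unbounded growth that the method of this application limits.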
It should be noted that, when the data is pre-read from the file system to the memory, the data in the file system may be pre-read to a page cache (page cache) in the memory, and when an IO request is subsequently received, the data requested by the IO request may be accessed in the page cache.
According to the data reading process, in the process of starting the application, when the operating system receives an IO request, the subsequent data can be read in advance from the file system to the memory while accessing the data requested by the IO request. Moreover, the size of the pre-reading window used in each pre-reading process is larger than that used in the previous pre-reading process, for example, the size of the pre-reading window used in each pre-reading process can be continuously increased by 2 times. In this case, as the IO request is continuously received, the size of the read-ahead window used in each read-ahead operation becomes larger and larger, and the data read-ahead from the file system to the memory in each read-ahead operation becomes larger and larger. However, the data pre-read from the file system to the memory is not always accessed, and when more pre-read data is not accessed, the system resources are seriously wasted.
Therefore, the data reading method provided in the embodiment of the application can limit the size of the pre-reading window, and cannot lead the size of the pre-reading window to be increased without limit, so that unnecessary excessive pre-reading of data can be avoided, further, waste of system resources can be avoided, and normal operation of other services can be ensured.
Next, a data reading method provided in the embodiment of the present application will be described.
Fig. 4 is a flowchart of a data reading method provided in an embodiment of the present application, where the method is applied to a terminal, and in particular, may be applied to an operating system of the terminal. Referring to fig. 4, the method includes:
Step 401: the terminal determines that the target service needs to run.
The target service is a service that needs to sequentially read data stored in the file system during operation. That is, the target service needs to read data stored in the file system at runtime. For example, the target service may be to start an application, perform some operation (such as displaying a game interface) in the application, and the like, which is not limited in this embodiment of the application.
Step 402: the terminal acquires the pre-read data hit rate from the last run of the target service.
The pre-read data hit rate is the rate at which the data pre-read from the file system into the memory is accessed. That is, the pre-read data hit rate = amount of pre-read data accessed / total amount of pre-read data. For example, as shown in fig. 3, assuming that the target service is starting an application, three IO requests were generated to sequentially read data stored in the file system during the last start of the application. When the terminal received the first IO request, 3 pieces of data were pre-read from the file system into the memory, and 0 of them were accessed in the memory; when the terminal received the second IO request, 8 pieces of data were pre-read from the file system into the memory, and 2 of them were accessed in the memory; when the terminal received the third IO request, 16 pieces of data were pre-read from the file system into the memory, and 4 of them were accessed in the memory. The amount of pre-read data accessed is therefore 0 + 2 + 4 = 6, and the total amount of pre-read data is 3 + 8 + 16 = 27, so the pre-read data hit rate is 6/27 ≈ 22.22%.
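The hit-rate computation above can be expressed as a short sketch. The function name `readahead_hit_rate` is illustrative, not part of the method:

```python
def readahead_hit_rate(accessed_per_request, preread_per_request):
    """Pre-read data hit rate = accessed pre-read data / total pre-read data."""
    accessed = sum(accessed_per_request)  # pre-read pieces later accessed in memory
    total = sum(preread_per_request)      # all pieces pre-read into memory
    return accessed / total if total else 0.0

# Example from fig. 3: 0, 2 and 4 of the 3, 8 and 16 pre-read pieces were accessed.
rate = readahead_hit_rate([0, 2, 4], [3, 8, 16])
print(f"{rate:.2%}")  # 22.22%
```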
Step 403: the terminal judges whether the pre-read data hit rate is smaller than a first hit rate threshold value.
The first hit rate threshold may be set in advance, and may be set to a relatively small value. For example, the first hit rate threshold may be 30%.
If the hit rate of the pre-read data is less than the first hit rate threshold, continue to execute step 404; if the pre-read data hit rate is greater than or equal to the first hit rate threshold, proceed to step 406.
Step 404: if the pre-read data hit rate is smaller than the first hit rate threshold, the terminal reduces the pre-reading window threshold used in the last run of the target service and uses the reduced value as the first pre-reading window threshold.
The pre-reading window threshold is the maximum data volume that can be pre-read at a single time when data is pre-read from the file system to the memory, and is used for limiting the size of the pre-reading window used when data is pre-read from the file system to the memory, that is, the size of the pre-reading window cannot exceed the pre-reading window threshold. In other words, when an IO request is received, if data needs to be pre-read from the file system to the memory, the data amount pre-read from the file system to the memory cannot exceed the data amount indicated by the pre-read window threshold at most, that is, the size of the pre-read window used in the pre-read at this time cannot exceed the pre-read window threshold.
The first pre-reading window threshold is a pre-reading window threshold that needs to be used when the target service is running at this time, that is, the maximum data volume that can be pre-read at a single time when the data is pre-read from the file system to the memory in the running process of the target service at this time. In other words, each time an IO request generated in the current operation process of the target service is received, if data needs to be pre-read from the file system to the memory, the amount of data pre-read from the file system to the memory cannot exceed the amount of data indicated by the first pre-read window threshold at most, that is, the size of the pre-read window used in the current pre-read cannot exceed the first pre-read window threshold.
If the pre-read data hit rate is smaller than the first hit rate threshold, it means that the rate of pre-reading data from the file system to the memory being accessed is smaller in the last operation process of the target service, that is, more un-accessed data is pre-read in the last operation process of the target service, and it means that the pre-read window threshold used in the last operation process of the target service is larger. In order to avoid unnecessary and excessive pre-reading of data in the current running process of the target service, the pre-reading window threshold used in the last running process of the target service may be reduced and then used as the first pre-reading window threshold, so as to reduce the amount of pre-reading data and avoid wasting system resources. Optionally, the terminal may multiply the pre-read window threshold used when the target service is run last time by a% to obtain the first pre-read window threshold. a may be a preset positive value less than 100, e.g., a may be 50, 60, etc.
In some embodiments, the data stored in the file system is compressed data, for example, the file system may be an EROFS, and the data stored in the EROFS is compressed data. In this case, the pre-reading of the compressed data in the file system may also be accompanied by a decompression operation of the compressed data. Since the decompression operation occupies the CPU resource, in order to avoid the waste of the CPU resource, in the embodiment of the present application, whether to perform decompression while performing pre-reading may also be determined according to the hit rate of the pre-reading data.
In this case, if the pre-read data hit rate is smaller than the first hit rate threshold, the terminal may further perform step 405: the terminal sets the data decompression rule for the current run of the target service as follows: no decompression is performed when compressed data is pre-read from the file system into the memory.
If the pre-read data hit rate is less than the first hit rate threshold, indicating that the amount of pre-read data is excessive, decompression of the pre-read compressed data can be deferred until the data is actually accessed, so as to avoid wasting CPU resources.
Step 406: if the pre-reading data hit rate is greater than or equal to the first hit rate threshold, the terminal judges whether the pre-reading data hit rate is greater than a second hit rate threshold.
The second hit rate threshold may be set in advance, and may be set to a relatively large value; the second hit rate threshold is larger than the first hit rate threshold. For example, the second hit rate threshold may be 70%.
If the pre-read data hit rate is less than or equal to the second hit rate threshold, then proceed to step 407; if the pre-read data hit rate is greater than the second hit rate threshold, the following step 409 is performed.
Step 407: if the pre-read data hit rate is less than or equal to the second hit rate threshold, the terminal uses the pre-reading window threshold used in the last run of the target service as the first pre-reading window threshold.
If the pre-read data hit rate is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, it means that the rate of data pre-read from the file system to the memory in the last operation process of the target service is moderate, that is, the amount of pre-read data in the last operation process of the target service is relatively proper, which means that the pre-read window threshold used in the last operation process of the target service is moderate. In the current running process of the target service, the pre-reading window threshold used in the last running of the target service can be kept, that is, the pre-reading window threshold used in the last running of the target service is directly used as the first pre-reading window threshold.
In some embodiments, the data stored in the file system is compressed data, and at this time, if the hit rate of the pre-read data is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, the terminal may further perform step 408: the data decompression rule set by the terminal when the target service operates at this time is as follows: decompression is needed when the compressed data is pre-read from the file system to the memory.
If the hit rate of the pre-read data is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, which indicates that the pre-read data amount is moderate, the pre-read compressed data can be decompressed normally at the same time, that is, the terminal can decompress the pre-read compressed data from the file system to the memory in the current operation process of the target service.
Step 409: if the pre-read data hit rate is greater than the second hit rate threshold, the terminal increases the pre-reading window threshold used in the last run of the target service and uses the increased value as the first pre-reading window threshold.
If the pre-read data hit rate is greater than the second hit rate threshold, it indicates that during the last run of the target service the rate at which the data pre-read from the file system into the memory was accessed is high, that is, pre-reading during the last run of the target service was effective. In order to further improve the pre-reading efficiency in the current run of the target service, the pre-reading window threshold used in the last run of the target service may be increased and then used as the first pre-reading window threshold, so as to increase the amount of pre-read data. Optionally, the terminal may multiply the pre-reading window threshold used in the last run of the target service by b to obtain the first pre-reading window threshold. b may be a preset positive value greater than 1; for example, b may be 2, 3, etc.
In some embodiments, the data stored in the file system is compressed data, and at this time, if the hit rate of the pre-read data is greater than the second hit rate threshold, the terminal may further perform step 410: the data decompression rule set in the current operation of the target service is as follows: when the compressed data is read from the file system to the memory in advance, decompression is needed.
If the hit rate of the pre-read data is greater than the second hit rate threshold, which indicates that the pre-read effect is better, the compressed data can be pre-read and normally decompressed at the same time, that is, the terminal can decompress the compressed data from the file system to the memory when the compressed data is pre-read in the current operation process of the target service.
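Steps 403 to 410 amount to a three-branch adjustment of the pre-reading window threshold together with the decompression rule. The following Python sketch is illustrative only; the function name is hypothetical, and the concrete values (30%, 70%, a = 50, b = 2) are merely the example values mentioned in the text, not fixed by the method:

```python
LOW_HIT_RATE = 0.30   # example first hit rate threshold
HIGH_HIT_RATE = 0.70  # example second hit rate threshold
A_PERCENT = 50        # example shrink factor a (a% < 100)
B_FACTOR = 2          # example growth factor b (b > 1)

def next_window_threshold(hit_rate, last_threshold):
    """Return (first pre-reading window threshold, decompress while pre-reading?)."""
    if hit_rate < LOW_HIT_RATE:
        # Steps 404/405: too much unused pre-read data -> shrink the
        # threshold and defer decompression until the data is accessed.
        return last_threshold * A_PERCENT // 100, False
    if hit_rate <= HIGH_HIT_RATE:
        # Steps 407/408: moderate hit rate -> keep the threshold and
        # decompress normally while pre-reading.
        return last_threshold, True
    # Steps 409/410: high hit rate -> pre-reading pays off, enlarge the
    # threshold and decompress normally while pre-reading.
    return last_threshold * B_FACTOR, True

print(next_window_threshold(0.22, 16))  # (8, False)
print(next_window_threshold(0.50, 16))  # (16, True)
print(next_window_threshold(0.80, 16))  # (32, True)
```

With the 22.22% hit rate from the earlier example, the threshold would be halved and decompression deferred for the current run.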
After the terminal sets the first pre-reading window threshold and the data decompression rule that need to be used in the current operation process of the target service through the above steps, the terminal may continue to execute step 411.
Step 411: and in the current operation process of the target service, the terminal pre-reads data from the file system to the memory according to the first pre-reading window threshold value and determines whether to decompress the pre-read data according to the data decompression rule.
When the terminal reads data from the file system to the memory in advance according to the first pre-reading window threshold, the size of a pre-reading window to be used in the pre-reading at this time is determined, and if the size of the pre-reading window is smaller than or equal to the first pre-reading window threshold, the data volume indicated by the size of the pre-reading window is pre-read from the file system to the memory; if the size of the pre-reading window is larger than the first pre-reading window threshold, the size of the pre-reading window is adjusted to the first pre-reading window threshold again, and the data volume indicated by the first pre-reading window threshold is pre-read from the file system to the memory.
If the data decompression rule indicates that decompression is not needed, when the terminal reads data from the file system to the memory in advance according to the first pre-reading window threshold, the terminal reads compressed data from the file system to the memory in advance directly according to the first pre-reading window threshold, decompression is not performed on the pre-read compressed data, and the compressed data is stored in the memory at this time.
If the data decompression rule indicates that decompression is needed, when the terminal reads data from the file system to the memory in advance according to the first pre-reading window threshold, the terminal reads compressed data from the file system to the memory in advance according to the first pre-reading window threshold, and decompresses the pre-read compressed data, and at this time, the decompressed data is stored in the memory.
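The behavior of step 411 can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the embodiment's implementation: the function name `preread`, the list-based `file_system` and `memory`, and the `decompress` callable are all hypothetical stand-ins.

```python
def preread(file_system, memory, window_size, first_window_threshold,
            decompress_on_preread, decompress):
    """Pre-read at most first_window_threshold items into memory."""
    # Clamp the pre-read window to the threshold set for this run (step 411).
    effective = min(window_size, first_window_threshold)
    chunk = file_system[:effective]
    if decompress_on_preread:
        # Rule says decompression is needed: store decompressed data in memory.
        memory.extend(decompress(item) for item in chunk)
    else:
        # Rule says no decompression: store the compressed data as-is.
        memory.extend(chunk)
    return effective
```

With a window size of 8 and a first pre-read window threshold of 3, only 3 items are pre-read; the decompression rule decides whether they are stored compressed or decompressed.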
Step 412: and when the target service finishes running, the terminal acquires the pre-reading data hit rate of the target service in the current running.
When the target service finishes running, the terminal can obtain the total amount of pre-read data and the amount of accessed pre-read data during the current run of the target service, and then divide the amount of accessed pre-read data by the total amount of pre-read data to obtain the pre-read data hit rate for the current run. This makes it convenient, in the next run of the target service, to determine the pre-read window threshold to be used according to this hit rate.
In the embodiment of the application, when the target service operates at this time, the terminal sets the first pre-reading window threshold according to the pre-reading data hit rate when the target service operates at the last time. The first pre-reading window threshold is the maximum data volume which can be pre-read in a single time when the target service pre-reads data from the file system to the memory in the current operation process. Because the first pre-reading window threshold value depends on the pre-reading data hit rate, the pre-reading data volume of the target service in the current operation process can be matched with the pre-reading data hit rate, so that excessive or insufficient pre-reading of data can be avoided, reasonable use of system resources can be further ensured, and normal operation of other services can be also ensured. And under the condition that compressed data is stored in the file system, the terminal can also set a data decompression rule of the target service during the current operation according to the pre-read data hit rate, and whether decompression is needed when the compressed data is pre-read to the memory from the file system depends on the size of the pre-read data hit rate, so that the waste of CPU resources can be avoided as much as possible.
Next, relevant modules involved in the data reading method provided by the embodiment of the present application are described.
Fig. 5 is a schematic diagram of a data reading system according to an embodiment of the present application. Referring to fig. 5, the data reading system includes: a service identification module 501, a pre-read data hit rate statistics module 502, a pre-read window threshold adjustment module 503, and a pre-read data decompression module 504.
The service identification module 501 is configured to determine the operation of the target service, that is, to execute the step 401.
The pre-read data hit rate statistics module 502 is configured to obtain the pre-read data hit rate of the target service's last run, that is, to execute the above step 402.
The pre-reading window threshold adjusting module 503 is configured to set a pre-reading window threshold that needs to be used in the current operation process of the target service according to the hit rate of the pre-reading data, that is, set a first pre-reading window threshold. Specifically, the pre-reading window threshold adjusting module 503 is configured to reduce the pre-reading window threshold used in the last operation of the target service to serve as the first pre-reading window threshold when the hit rate of the pre-reading data is smaller than the first hit rate threshold; taking a pre-reading window threshold used when the target service is operated last time as a first pre-reading window threshold under the condition that the hit rate of the pre-reading data is greater than or equal to a first hit rate threshold and is less than or equal to a second hit rate threshold; when the hit rate of the pre-read data is greater than the second hit rate threshold, the pre-read window threshold used when the target service is run last time is increased to be the first pre-read window threshold, that is, to execute the above step 404, step 407, or step 409.
The pre-read data decompression module 504 is configured to set a data decompression rule during the current operation of the target service according to the hit rate of the pre-read data. Specifically, the pre-read data decompression module 504 is configured to, when the hit rate of the pre-read data is smaller than the first hit rate threshold, set the data decompression rule as: the compressed data is not required to be decompressed when being pre-read from the file system to the memory; when the pre-read data hit rate is greater than or equal to a first hit rate threshold and less than or equal to a second hit rate threshold, setting the data decompression rule as follows: decompression is needed when the compressed data are pre-read from the file system to the memory; and under the condition that the pre-reading data hit rate is greater than a second hit rate threshold, setting the data decompression rule as follows: the compressed data is required to be decompressed when the compressed data is pre-read from the file system to the memory, i.e. the step 405, the step 408 or the step 410 is executed.
Therefore, in the current operation process of the target service, data can be pre-read from the file system to the memory according to the first pre-read window threshold set by the pre-read window threshold adjustment module 503, and whether to decompress the pre-read data can be determined according to the data decompression rule set by the pre-read data decompression module 504.
It is worth noting that, in some embodiments, the terminal may determine the pre-read window threshold to be used in the current run of the target service (i.e., the first pre-read window threshold) according to the pre-read data hit rate of the last run of the target service. The amount of pre-read data can then be limited by the first pre-read window threshold throughout the current run, preventing excessive or insufficient pre-reading of data, ensuring reasonable use of system resources, and allowing other services to run normally. This is explained in detail below:
Fig. 6 is a flowchart of a data reading method provided in an embodiment of the present application, where the method is applied to a terminal, and in particular, may be applied to an operating system of the terminal. Referring to fig. 6, the method includes:
step 601: and the terminal receives an IO request generated in the running process of the target service, wherein the IO request is used for requesting to read the data stored in the file system.
The target service is a service that needs to read data stored in the file system sequentially in the running process, and therefore IO requests for requesting to read data stored in the file system sequentially are generated in the running process of the target service. For example, the target service may be to start an application, perform some operation (such as displaying a game interface) in the application, and the like, which is not limited in this embodiment of the application.
Step 602: and the terminal searches the data requested by the IO request in the memory.
Step 603: if the terminal does not find the data requested by the IO request in the memory, the pre-reading data hit rate of the target service in the last operation is obtained.
The pre-read data hit rate is the proportion of data pre-read from the file system to the memory that is actually accessed. That is: pre-read data hit rate = amount of accessed pre-read data / total amount of pre-read data. For example, as shown in fig. 3, assume the target service is starting an application, and that three IO requests were generated to sequentially read data stored in the file system during the last start of the application. When the terminal received the first IO request, 3 pieces of data were pre-read from the file system to the memory, of which 0 were accessed in the memory; for the second IO request, 8 pieces were pre-read, of which 2 were accessed; for the third IO request, 16 pieces were pre-read, of which 4 were accessed. The amount of accessed pre-read data is therefore 0 + 2 + 4 = 6, the total amount of pre-read data is 3 + 8 + 16 = 27, and the pre-read data hit rate is 6/27 ≈ 22.22%.
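The fig. 3 example can be recomputed with a few lines of Python; the per-request tuples below simply restate the numbers from the example.

```python
# (amount pre-read, amount of that pre-read data later accessed) per IO request
requests = [(3, 0), (8, 2), (16, 4)]

total = sum(pre for pre, _ in requests)          # total pre-read data: 27
accessed = sum(acc for _, acc in requests)       # accessed pre-read data: 6
hit_rate = accessed / total                      # 6/27, about 22.22%
```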
If the terminal does not find the data requested by the IO request in the memory, it indicates that the IO request is the first IO request generated in the current operation process of the target service, i.e., it indicates that the target service just starts to operate, then the pre-read data hit rate of the target service in the last operation process may be obtained, so that the pre-read window threshold value that needs to be used in the current operation process of the target service may be set according to the pre-read data hit rate in the following, and thus the pre-read data amount may be limited according to the pre-read window threshold value in the whole operation process of the target service.
Step 604: and the terminal sets a first pre-reading window threshold according to the pre-reading data hit rate.
The pre-reading window threshold is the maximum data volume that can be pre-read at a single time when data is pre-read from the file system to the memory, and is used for limiting the size of the pre-reading window used when data is pre-read from the file system to the memory, that is, the size of the pre-reading window cannot exceed the pre-reading window threshold. In other words, when an IO request is received, if data needs to be pre-read from the file system to the memory, the data amount pre-read from the file system to the memory cannot exceed the data amount indicated by the pre-read window threshold at most, that is, the size of the pre-read window used in the pre-read at this time cannot exceed the pre-read window threshold.
The first pre-reading window threshold is a pre-reading window threshold that needs to be used when the target service is running at this time, that is, the maximum data volume that can be pre-read at a single time when the data is pre-read from the file system to the memory in the running process of the target service at this time. In other words, each time an IO request generated in the current operation process of the target service is received, if data needs to be pre-read from the file system to the memory, the amount of data pre-read from the file system to the memory cannot exceed the amount of data indicated by the first pre-read window threshold at most, that is, the size of the pre-read window used in the current pre-read cannot exceed the first pre-read window threshold.
In some embodiments, the operation of step 604 may be: if the hit rate of the pre-read data is smaller than a first hit rate threshold, reducing a pre-read window threshold used when the target service operates last time, and then taking the reduced pre-read window threshold as a first pre-read window threshold; if the hit rate of the pre-reading data is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, taking the pre-reading window threshold used when the target service operates last time as a first pre-reading window threshold; and if the hit rate of the pre-reading data is greater than the second hit rate threshold, increasing the pre-reading window threshold used when the target service operates last time and then using the increased pre-reading window threshold as the first pre-reading window threshold.
The first and second hit rate thresholds may both be preset, with the first hit rate threshold set relatively low, the second hit rate threshold set relatively high, and the second greater than the first. For example, the first hit rate threshold may be 30% and the second hit rate threshold may be 70%.
If the pre-read data hit rate is smaller than the first hit rate threshold, it indicates that the rate of data pre-read from the file system to the memory being accessed is smaller in the last operation process of the target service, that is, more data which is not accessed is pre-read in the last operation process of the target service, and indicates that the pre-read window threshold used in the last operation process of the target service is larger. In order to avoid unnecessary and excessive pre-reading of data in the current running process of the target service, the pre-reading window threshold used in the last running process of the target service may be reduced and then used as the first pre-reading window threshold, so as to reduce the amount of pre-reading data and avoid wasting system resources. Optionally, the terminal may multiply the read-ahead window threshold used when the target service is last run by a% to obtain a first read-ahead window threshold. a may be a preset positive value less than 100, e.g., a may be 50, 60, etc.
If the pre-read data hit rate is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, it indicates that the rate of data pre-read from the file system to the memory is moderate in the last operation process of the target service, that is, the amount of pre-read data in the last operation process of the target service is relatively proper, and it indicates that the pre-read window threshold used in the last operation process of the target service is moderate. In the current running process of the target service, the pre-reading window threshold used in the last running of the target service can be kept, that is, the pre-reading window threshold used in the last running of the target service is directly used as the first pre-reading window threshold.
If the pre-read data hit rate is greater than the second hit rate threshold, it indicates that a large proportion of the data pre-read from the file system to the memory during the last run of the target service was actually accessed; that is, the amount of data pre-read during the last run benefited system operation. To further improve pre-reading efficiency in the current run of the target service, the pre-read window threshold used in the last run may be increased and then used as the first pre-read window threshold, so as to increase the amount of pre-read data. Optionally, the terminal may multiply the pre-read window threshold used in the last run of the target service by b to obtain the first pre-read window threshold, where b is a preset value greater than 1, e.g., 2, 3, etc.
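The three cases of step 604 can be sketched as a small Python function. The function name is hypothetical; the constants use the example values given in the text (thresholds of 30% and 70%, a = 50, b = 2).

```python
FIRST_HIT_RATE_THRESHOLD = 0.30   # example value from the text
SECOND_HIT_RATE_THRESHOLD = 0.70  # example value from the text
A_PERCENT = 50                    # shrink factor a, a preset value below 100
B = 2                             # growth factor b, a preset value above 1

def first_preread_window_threshold(prev_threshold, hit_rate):
    """Pick the first pre-read window threshold from last run's hit rate."""
    if hit_rate < FIRST_HIT_RATE_THRESHOLD:
        return prev_threshold * A_PERCENT // 100  # reduce: too much pre-read
    if hit_rate <= SECOND_HIT_RATE_THRESHOLD:
        return prev_threshold                     # keep: amount was moderate
    return prev_threshold * B                     # increase: pre-read paid off
```

For a previous threshold of 64: a 20% hit rate halves it to 32, a 50% hit rate keeps it at 64, and a 90% hit rate doubles it to 128.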
It should be noted that, in the case that the target service is running for the first time, the read-ahead data hit rate of the target service running last time is not obtained in step 603, in this case, the terminal may not set the first read-ahead window threshold according to the read-ahead data hit rate through step 604, but may directly set the first read-ahead window threshold as the initial read-ahead window threshold. The initial pre-read window threshold may be set in advance.
Step 605: and the terminal pre-reads data from the file system to the memory according to the first pre-reading window threshold value and accesses the data requested by the IO request in the memory.
When the terminal pre-reads data from the file system to the memory according to the first pre-reading window threshold, determining the size of a pre-reading window to be used during the pre-reading at this time, and if the size of the pre-reading window is smaller than or equal to the first pre-reading window threshold, pre-reading the data volume indicated by the size of the pre-reading window from the file system to the memory; if the size of the pre-reading window is larger than the first pre-reading window threshold, the size of the pre-reading window is adjusted to the first pre-reading window threshold again, and the data volume indicated by the first pre-reading window threshold is pre-read from the file system to the memory.
The operation of determining the size of the pre-reading window to be used by the terminal when performing the pre-reading is similar to the operation of determining the size of the pre-reading window to be used when performing the pre-reading in the related art, and this is not described in detail in the embodiments of the present application. For example, for the first IO request generated in the current operation process of the target service, if the first pre-reading, that is, synchronous pre-reading, is performed in the current operation process of the target service, the size of the pre-reading window to be used in the current pre-reading process may be determined as the size of the initial pre-reading window, and the size of the initial pre-reading window may be set in advance. For other IO requests generated in the current operation process of the target service after the first IO request, non-first pre-reading, that is, asynchronous pre-reading, is performed in the current operation process of the target service, the size of a pre-reading window used in the previous pre-reading process can be increased and then used as the size of a pre-reading window to be used in the current pre-reading process, for example, 2 times of the size of a pre-reading window used in the previous synchronous pre-reading process can be used as the size of a pre-reading window to be used in the asynchronous pre-reading process.
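The window-sizing example above (initial size for the first, synchronous pre-read; double the previous size for later, asynchronous pre-reads) can be sketched as follows. The function name and the initial size of 4 are assumptions for illustration.

```python
INITIAL_WINDOW_SIZE = 4  # assumed preset initial pre-read window size

def next_window_size(prev_window_size=None):
    """First (synchronous) pre-read uses the initial window size; later
    (asynchronous) pre-reads double the previous window size."""
    if prev_window_size is None:
        return INITIAL_WINDOW_SIZE
    return 2 * prev_window_size
```

Note that the result is still subject to the clamp against the first pre-read window threshold described above.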
For the first IO request generated in the current operation process of the target service, the terminal performs synchronous pre-reading. That is, while the terminal reads data from the file system to the memory in advance according to the first pre-read window threshold, the terminal also reads the data requested by the IO request from the file system to the memory, and at this time, the data requested by the IO request can be accessed in the memory.
For other IO requests generated in the current running process of the target service and after the first IO request, the terminal performs asynchronous pre-reading. That is, in the step 602, when the terminal searches for the data requested by the IO request in the memory, the terminal may search for the data requested by the IO request, at this time, the terminal may directly access the data requested by the IO request in the memory, and if the data requested by the IO request accessed in the memory has data marked with the pre-read mark, the terminal may pre-read the data from the file system to the memory according to the first pre-read window threshold.
In some embodiments, the data stored in the file system is compressed data, for example, the file system may be an EROFS, and the data stored in the EROFS is compressed data. In this case, the pre-reading of the compressed data in the file system may also be accompanied by a decompression operation of the compressed data. Since the decompression operation occupies the CPU resource, in order to avoid the waste of the CPU resource, in the embodiment of the present application, whether to perform decompression while performing pre-reading may also be determined according to the hit rate of the pre-reading data.
Specifically, the terminal may set a data decompression rule during the current operation of the target service according to the pre-read data hit rate while determining the first pre-read window threshold according to the pre-read data hit rate. The data decompression rule is used for indicating whether decompression is needed when the compressed data is pre-read from the file system to the memory. That is, in the current operation process of the target service, whether decompression is needed when the compressed data is preread from the file system to the memory depends on the hit rate of the preread data, so as to avoid wasting the CPU resource as much as possible.
Optionally, if the hit rate of the pre-read data is smaller than the first hit rate threshold, which indicates that the amount of the pre-read data is excessive, the pre-read compressed data may be delayed to be decompressed when being accessed, so as to avoid wasting CPU resources, and therefore the terminal may set the data decompression rule when the target service is running at this time as: the compressed data is pre-read from the file system to the memory without decompression.
If the hit rate of the pre-read data is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, which indicates that the pre-read data amount is moderate, normal decompression can be performed while pre-reading compressed data, so that the terminal can set the data decompression rule when the target service operates at this time as follows: when the compressed data is read from the file system to the memory in advance, decompression is needed.
If the hit rate of the pre-read data is greater than the second hit rate threshold, which indicates that the pre-read effect is better, the compressed data can be pre-read and normally decompressed at the same time, so that the terminal can set the data decompression rule when the target service operates at this time as follows: decompression is needed when the compressed data is pre-read from the file system to the memory.
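Since both the moderate and the high hit-rate cases set the rule to "decompress while pre-reading", the decompression-rule decision collapses to a single comparison against the first hit rate threshold. This is a sketch under that reading of the text; the function name is hypothetical and the threshold uses the 30% example value.

```python
FIRST_HIT_RATE_THRESHOLD = 0.30  # example value from the text

def decompress_on_preread(hit_rate):
    """True: decompress compressed data while pre-reading it into memory.
    False: store it compressed and decompress lazily on first access."""
    return hit_rate >= FIRST_HIT_RATE_THRESHOLD
```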
If the data decompression rule indicates that decompression is not needed, the terminal reads compressed data from the file system to the memory in advance according to the first pre-reading window threshold value when the terminal reads the data from the file system to the memory in advance according to the first pre-reading window threshold value, the pre-read compressed data is not decompressed, and the compressed data is stored in the memory at this time. In this case, when the terminal accesses the data requested by the IO request in the memory, the terminal decompresses the compressed data requested by the IO request in the memory first, and then accesses the decompressed data. That is, under the condition that the hit rate of the pre-read data is low, the terminal does not decompress the pre-read compressed data from the file system to the memory, and decompresses the compressed data when accessing specific compressed data in the memory subsequently. Therefore, decompression of a large amount of compressed data which cannot be actually accessed subsequently in the pre-reading stage can be avoided, unnecessary decompression operation is reduced, and CPU resources are saved.
If the data decompression rule indicates that decompression is required, the terminal reads compressed data from the file system to the memory in advance according to the first pre-reading window threshold when the terminal reads the data from the file system to the memory in advance according to the first pre-reading window threshold, and decompresses the pre-read compressed data, and at this time, the data stored in the memory is decompressed. In this case, when the terminal accesses the data requested by the IO request in the memory, the terminal may directly access the decompressed data requested by the IO request in the memory. That is to say, under the condition that the hit rate of the pre-read data is moderate or high, the terminal decompresses the compressed data from the file system to the memory in advance, and can directly access the decompressed data in the memory in the follow-up process, so that the pre-read effect can be better exerted.
Further, the embodiment of the present application may adjust not only the pre-read window threshold used during the running of the target service, but also the pre-read window threshold used during the running of the application process, and the specific process is as follows:
after the application process is switched from the foreground to the background to run, taking a pre-reading window threshold value used by the application process before switching as a second pre-reading window threshold value; in the background running process of the application process, when an IO request of the application process is received, data is pre-read from a file system to a memory according to a specified pre-reading window threshold, and the data requested by the IO request of the application process is accessed in the memory.
The specified pre-read window threshold is a small pre-read window threshold that is preset. The specified pre-read window threshold is less than the second pre-read window threshold. Under the condition, when the application process is switched to the background operation, the threshold value of the pre-reading window is adjusted to be small so as to reduce the pre-reading data volume and reduce the system resource consumption, thereby reducing the influence on foreground application. In some cases, the specified pre-reading window threshold may be set to 0, so that pre-reading is not performed when the application process is switched to the background operation, and thus, system resource consumption may be greatly reduced.
When the terminal reads data from the file system to the memory in advance according to the specified pre-reading window threshold, the size of a pre-reading window to be used in the pre-reading is determined firstly, and if the size of the pre-reading window is smaller than or equal to the specified pre-reading window threshold, the data volume indicated by the size of the pre-reading window is pre-read from the file system to the memory; if the size of the pre-reading window is larger than the threshold value of the appointed pre-reading window, the size of the pre-reading window is re-adjusted to the threshold value of the appointed pre-reading window, and the data volume indicated by the threshold value of the appointed pre-reading window is pre-read from the file system to the memory.
The operation of determining the size of the pre-reading window to be used by the terminal when performing the pre-reading is similar to the operation of determining the size of the pre-reading window to be used when performing the pre-reading in the related art, and this is not described in detail in the embodiments of the present application. For example, for a first IO request generated during the current operation (including foreground operation and background operation) of the application process, if the first IO request is pre-read, that is, synchronous pre-read, during the current operation of the application process, the size of a pre-read window to be used during the current pre-read may be determined as the size of an initial pre-read window, and the size of the initial pre-read window may be set in advance. For other IO requests generated in the current running process of the application process after the first IO request, non-first pre-reading, that is, asynchronous pre-reading, is performed in the current running process of the application process, the size of a pre-reading window used in the previous pre-reading process can be increased and then used as the size of a pre-reading window to be used in the current pre-reading process, for example, 2 times of the size of the pre-reading window used in the previous synchronous pre-reading process can be used as the size of the pre-reading window to be used in the asynchronous pre-reading process.
For the first IO request generated in the current running process of the application process, after receiving the IO request, the terminal searches for data requested by the IO request in the memory, and since the data is read for the first time, the data requested by the IO request cannot be searched in the memory, and at this time, the terminal can perform synchronous pre-reading. That is, the terminal reads data from the file system to the memory according to the specified pre-read window threshold, and reads the data requested by the IO request from the file system to the memory, and then can access the data requested by the IO request in the memory.
For other IO requests after the first IO request generated in the current running process of the application process, after receiving the IO request, the terminal searches for data requested by the IO request in the memory, and since data pre-reading is performed before, the terminal can search for the data requested by the IO request in the memory, and at this time, the terminal can perform asynchronous pre-reading. That is, the terminal may directly access the data requested by the IO request in the memory, and if the data requested by the IO request accessed in the memory includes data marked with a read-ahead mark, the terminal may read the data from the file system to the memory in advance according to the specified read-ahead window threshold.
For pre-reading during background running of the application process, in the case where the data stored in the file system is compressed data, the pre-read compressed data is not decompressed each time compressed data is pre-read from the file system to the memory; instead, the compressed data is decompressed only when specific compressed data in the memory is subsequently accessed, thereby further reducing system resource consumption.
After the application process is switched from the background to the foreground for running, in the foreground running process of the application process, when an IO request of the application process is received, data is pre-read from the file system to the memory according to the second pre-reading window threshold, and the data requested by the IO request of the application process is accessed in the memory. That is, when the application process is switched to foreground operation, the pre-reading window threshold is restored to normal, so as to ensure the pre-reading effect and the normal operation of the application process.
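The save-and-restore of the pre-read window threshold across foreground/background switches can be sketched with a small state holder. The class and attribute names are assumptions for illustration; the specified background threshold defaults to 0, matching the case in the text where background pre-reading is disabled entirely.

```python
class PrereadState:
    def __init__(self, window_threshold, background_threshold=0):
        self.window_threshold = window_threshold          # threshold in use
        self.background_threshold = background_threshold  # specified small value
        self.saved_threshold = None                       # second window threshold

    def to_background(self):
        # Save the pre-switch threshold as the second pre-read window threshold,
        # then shrink pre-reading (possibly to 0) while in the background.
        self.saved_threshold = self.window_threshold
        self.window_threshold = self.background_threshold

    def to_foreground(self):
        # Restore the second pre-read window threshold on return to foreground.
        self.window_threshold = self.saved_threshold
```

A process running with a threshold of 16 drops to 0 when backgrounded and returns to 16 when brought back to the foreground.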
When the terminal pre-reads data from the file system to the memory according to the second pre-read window threshold, the size of the pre-read window to be used in this pre-read is determined. If the pre-read window size is smaller than or equal to the second pre-read window threshold, the data amount indicated by the pre-read window size is pre-read from the file system to the memory; if the pre-read window size is larger than the second pre-read window threshold, the pre-read window size is reduced to the second pre-read window threshold, and the data amount indicated by the second pre-read window threshold is pre-read from the file system to the memory.
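The capping rule just described can be sketched in a few lines of Python; `amount_to_preread` is a hypothetical helper name for illustration, not from the patent:

```python
def amount_to_preread(window_size: int, window_threshold: int) -> int:
    """Amount of data to pre-read in one round: the pre-read window size,
    capped at the active pre-read window threshold."""
    if window_size <= window_threshold:
        return window_size
    # The window grew past the threshold: shrink it back to the threshold.
    return window_threshold
```

Equivalently, the pre-read amount is `min(window_size, window_threshold)`; spelling out the branch mirrors the two cases in the description.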
For the first IO request generated in the current running process of the application process, after receiving the IO request, the terminal searches for the data requested by the IO request in the memory. Since the data is read for the first time, the data requested by the IO request cannot be found in the memory, and at this time the terminal can perform a synchronous pre-read. That is, the terminal pre-reads data from the file system to the memory according to the second pre-read window threshold, including the data requested by the IO request, and can then access the requested data in the memory.
For other IO requests after the first IO request generated in the current running process of the application process, after receiving the IO request, the terminal searches for the data requested by the IO request in the memory. Since data pre-reading has been performed before, the terminal can find the data requested by the IO request in the memory, and at this time the terminal can perform an asynchronous pre-read. That is, the terminal may directly access the data requested by the IO request in the memory, and if the accessed data includes data marked with a pre-read mark, the terminal pre-reads further data from the file system to the memory according to the second pre-read window threshold.
For the pre-reading in the foreground operation process of the application process, under the condition that the data stored in the file system is compressed data, the pre-read compressed data is decompressed every time the compressed data is pre-read from the file system to the memory, so that the decompressed data in the memory can be directly accessed in the subsequent process, the pre-reading effect can be further ensured, and the normal operation of the application process can be ensured.
In the embodiment of the application, after the terminal receives an IO request generated in the target service operation process, if data requested by the IO request is not found in the memory, which indicates that the IO request is a first IO request generated in the target service operation process this time, a pre-read data hit rate of the target service in the last operation is obtained, and then a first pre-read window threshold is set according to the pre-read data hit rate. And then, the terminal reads data from the file system to the memory in advance according to the first pre-reading window threshold value and accesses the data requested by the IO request in the memory. Because the first pre-reading window threshold is the maximum data volume which can be pre-read once when the data is pre-read from the file system to the memory in the current operation process of the target service, and the first pre-reading window threshold depends on the hit rate of the pre-reading data, the pre-reading data volume in the current operation process of the target service can be matched with the hit rate of the pre-reading data, so that excessive or insufficient pre-reading of the data can be avoided, the reasonable use of system resources can be further ensured, and the normal operation of other services can be also ensured.
The data reading method described above is exemplified below with reference to fig. 7 and 8.
As shown in diagram a in fig. 7, icons of a plurality of applications are displayed on the mobile phone home interface 701. The user clicks on the icon of the music application to launch it. Three IO requests are generated in total to sequentially read data in the file system during the process of starting the music application. Assume that the data stored in the file system is compressed data, the first hit rate threshold is 30%, and the second hit rate threshold is 70%. Referring to fig. 8, the data reading process when the music application is started is as follows:
the operating system of the mobile phone receives a first IO request generated in the process of starting the music application, where the first IO request is used to request reading data 1 in the file system. The operating system first looks up data 1 in memory. Because it is the first read, data 1 cannot be found in the memory. In this case, the operating system obtains the pre-read data hit rate from the last time the music application was started. Assuming that the data reading process when the music application was last started is as shown in fig. 3, the hit rate of the pre-read data when the music application was last started is 22.22%, which is less than the first hit rate threshold of 30%. In this case, the data decompression rule for this start of the music application is set as follows: the compressed data is not decompressed when pre-read from the file system to the memory. In addition, the pre-read window threshold used when the music application was last started is reduced and then used as the pre-read window threshold needed for this start, that is, as the first pre-read window threshold. Assuming that the pre-read window threshold used when the music application was last started is 20, then 20 is multiplied by 50%, resulting in a first pre-read window threshold of 10. Then, synchronous pre-reading is triggered according to the first pre-read window threshold. Specifically, when the operating system performs synchronous pre-reading, a pre-read window is initialized. Assuming the initialized pre-read window size is 4, since the pre-read window size (i.e., 4) is smaller than the first pre-read window threshold (i.e., 10), the data amount indicated by the pre-read window size (i.e., 4 pieces of data, namely data 1 to data 4) is read from the file system into the memory; at this time, data 1 to data 4 stored in the memory are all compressed data. It can be seen that the first IO request requests data 1, and the operating system reads data 1 to data 4 together from the file system, where the last 3 pieces (i.e., data 2 to data 4) belong to the pre-read data. Then, the operating system decompresses data 1 requested by the first IO request stored in the memory and accesses the decompressed data. In this case, the operating system marks the first piece of pre-read data (i.e., data 2) among the 3 pieces of pre-read data pre-read this time with a pre-read mark. When the pre-read data marked with the pre-read mark is subsequently accessed in the memory, the operating system performs an asynchronous pre-read, which is described in detail below.
The operating system receives a second IO request generated during the start of the music application, the second IO request requesting to read data 2 and data 3 in the file system. The operating system first looks up data 2 and data 3 in memory. Since data 2 and data 3 have previously been pre-read from the file system into the memory, they can be found in the memory; at this time, data 2 and data 3 requested by the second IO request stored in the memory are decompressed, and the decompressed data are accessed. In this case, since data 2 is marked with a pre-read mark, an asynchronous pre-read is triggered based on the first pre-read window threshold when data 2 is accessed. Specifically, when the operating system performs this asynchronous pre-read, 2 times the size of the pre-read window used in the last synchronous pre-read is used as the size of the pre-read window for this asynchronous pre-read, that is, the pre-read window size for this asynchronous pre-read is 8. Since the pre-read window size (i.e., 8) is smaller than the first pre-read window threshold (i.e., 10), on the basis of the last synchronous pre-read, that is, starting from the data (i.e., data 5) located after the last data pre-read last time (i.e., data 4) in the file system, the data amount indicated by the pre-read window size (i.e., 8 pieces of data, namely data 5 to data 12) is read from the file system into the memory; at this time, data 5 to data 12 stored in the memory are all compressed data. It can be seen that the second IO request requests data 2 and data 3, the operating system reads data 5 to data 12 from the file system, and data 5 to data 12 all belong to the pre-read data. In this case, the operating system marks the first piece of pre-read data (i.e., data 5) among the 8 pieces of pre-read data pre-read this time with a pre-read mark. When the pre-read data marked with the pre-read mark is subsequently accessed in the memory, the operating system performs an asynchronous pre-read, which is described in detail below.
The operating system receives a third IO request generated in the process of starting the music application, where the third IO request is used to request reading data 4 to data 7 in the file system. The operating system first looks up data 4 to data 7 in memory. Since data 4 to data 7 have previously been pre-read from the file system into the memory, they can be found in the memory; at this time, data 4 to data 7 requested by the third IO request stored in the memory are decompressed, and the decompressed data are accessed. In this case, since data 5 is marked with a pre-read mark, an asynchronous pre-read is triggered according to the first pre-read window threshold when data 5 is accessed. Specifically, when the operating system performs this asynchronous pre-read, 2 times the size of the pre-read window used in the last asynchronous pre-read is used as the size of the pre-read window for this asynchronous pre-read, that is, the pre-read window size for this asynchronous pre-read is 16. Since the pre-read window size (i.e., 16) is greater than the first pre-read window threshold (i.e., 10), on the basis of the last asynchronous pre-read, that is, starting from the data (i.e., data 13) located after the last data pre-read last time (i.e., data 12) in the file system, the data amount indicated by the first pre-read window threshold (i.e., 10 pieces of data, namely data 13 to data 22) is read from the file system into the memory; at this time, data 13 to data 22 stored in the memory are all compressed data. It can be seen that the third IO request requests data 4 to data 7, the operating system reads data 13 to data 22 from the file system, and data 13 to data 22 all belong to the pre-read data. In this case, the operating system marks the first piece of pre-read data (i.e., data 13) among the 10 pieces of pre-read data pre-read this time with a pre-read mark.
When the pre-read data marked with the pre-read mark is subsequently accessed in the memory, the operating system performs an asynchronous pre-read.
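The window growth across the three pre-reads above (a synchronous pre-read of 4, then an asynchronous pre-read of 8, then 16 capped to 10) can be sketched as follows; `window_sizes` and its parameters are illustrative names, not from the patent:

```python
def window_sizes(initial_size, window_threshold, rounds):
    """Successive pre-read window sizes: the window doubles after each
    pre-read and is capped at the active pre-read window threshold."""
    sizes, window = [], initial_size
    for _ in range(rounds):
        sizes.append(min(window, window_threshold))
        window *= 2
    return sizes

# The walkthrough's sequence with an initial window of 4 and threshold of 10:
print(window_sizes(4, 10, 3))  # [4, 8, 10]
```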
When the operating system accesses the decompressed data of data 4 to data 7 requested by the third IO request in the memory, the process of starting the music application is completed, and at this time, the mobile phone switches from the main interface 701 shown in a in fig. 7 to the application interface 702 shown in b in fig. 7.
In the process of starting the music application this time, as shown in fig. 8, three IO requests are generated to sequentially read the data stored in the file system. When the terminal receives the first IO request, 3 pieces of data are pre-read from the file system to the memory, and 0 pieces of pre-read data are accessed in the memory; when the terminal receives the second IO request, 8 pieces of data are pre-read from the file system to the memory, and 2 pieces of pre-read data are accessed in the memory; when the terminal receives the third IO request, 10 pieces of data are pre-read from the file system to the memory, and 4 pieces of pre-read data are accessed in the memory. The amount of accessed pre-read data is therefore 0+2+4=6, the total amount of pre-read data is 3+8+10=21, and thus the hit rate of the pre-read data when the music application is started this time is 6/21≈28.57%. Compared with the data reading process when the music application was last started, as shown in fig. 3, the amount of pre-read data for this start is reduced and the hit rate of the pre-read data is improved, so unnecessary excessive pre-reading of data is avoided, waste of system resources can be avoided, and normal operation of other services can be ensured.
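The hit-rate arithmetic above can be checked with a small sketch; the function name and the per-request list interface are hypothetical, chosen only to mirror the three IO requests in the example:

```python
def preread_hit_rate(accessed_per_request, preread_per_request):
    """Pre-read data hit rate = accessed pre-read data / total pre-read data."""
    return sum(accessed_per_request) / sum(preread_per_request)

# Three IO requests: 0, 2, 4 pre-read pieces accessed; 3, 8, 10 pieces pre-read.
rate = preread_hit_rate([0, 2, 4], [3, 8, 10])  # 6 / 21
print(f"{rate:.2%}")  # 28.57%
```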
Fig. 9 is a schematic structural diagram of a data reading apparatus provided in an embodiment of the present application, where the data reading apparatus may be implemented by software, hardware, or a combination of the two as part or all of a computer device, and the computer device may be the terminal shown in fig. 1. Referring to fig. 9, the apparatus includes: an acquisition module 901, a setting module 902 and a reading module 903.
An obtaining module 901, configured to obtain, when a target service runs, a pre-read data hit rate when the target service runs last time, that is, to execute step 401 and step 402 in the embodiment of fig. 4;
a setting module 902, configured to set a first pre-read window threshold according to the pre-read data hit rate, that is, to execute step 604 in the foregoing embodiment of fig. 6;
a reading module 903, configured to pre-read data from the file system to the memory according to the first pre-read window threshold, that is, to execute step 411 in the foregoing embodiment of fig. 4 or to execute step 605 in the foregoing embodiment of fig. 6.
Optionally, the obtaining module 901 is specifically configured to execute step 601, step 602, and step 603 in the embodiment of fig. 6, that is, the obtaining module 901 is specifically configured to:
receiving an IO request generated in the running process of a target service, wherein the IO request is used for requesting to read data stored in a file system;
and if the data requested by the IO request is not found in the memory, acquiring the pre-reading data hit rate of the target service when the target service operates last time.
Optionally, the apparatus further comprises:
an access module, configured to access the data requested by the IO request in the memory, that is, to execute step 605 in the embodiment of fig. 6.
Optionally, the setting module 902 is specifically configured to perform step 403, step 406, step 404, step 407, and step 409 in the foregoing embodiment of fig. 4, that is, the setting module 902 is specifically configured to:
if the hit rate of the pre-read data is smaller than the first hit rate threshold, reducing a pre-read window threshold used when the target service operates last time and taking the reduced pre-read window threshold as a first pre-read window threshold;
if the hit rate of the pre-read data is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, taking the pre-read window threshold used when the target service operates last time as the first pre-read window threshold, wherein the second hit rate threshold is greater than the first hit rate threshold;
and if the hit rate of the pre-reading data is greater than the second hit rate threshold, increasing the pre-reading window threshold used when the target service operates last time and then using the increased pre-reading window threshold as the first pre-reading window threshold.
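A minimal sketch of the three adjustment rules above, assuming the example values from the description (hit-rate thresholds of 30% and 70%, and a 50% scaling factor); all names and defaults are illustrative, not from the patent:

```python
def next_window_threshold(prev_threshold, hit_rate,
                          first_threshold=0.30, second_threshold=0.70,
                          shrink_factor=0.5):
    """First pre-read window threshold for this run, derived from the
    previous run's pre-read data hit rate."""
    if hit_rate < first_threshold:
        # Low hit rate: too much was pre-read last time, shrink the threshold.
        return int(prev_threshold * shrink_factor)
    if hit_rate <= second_threshold:
        # Acceptable hit rate: keep the previous threshold.
        return prev_threshold
    # High hit rate: pre-reading could be larger, grow the threshold.
    return int(prev_threshold / shrink_factor)
```

With the music-application example (previous threshold 20, hit rate 22.22% < 30%), this yields the first pre-read window threshold of 10 used in the walkthrough.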
Optionally, the setting module 902 is further configured to:
and setting a data decompression rule in the current operation of the target service according to the pre-reading data hit rate, wherein the data decompression rule is used for indicating whether decompression is needed when the compressed data is pre-read from the file system to the memory.
Optionally, the reading module 903 is configured to:
and if the data decompression rule indicates that decompression is required, pre-reading the compressed data from the file system to the memory according to the first pre-reading window threshold, and decompressing the pre-read compressed data.
Optionally, the setting module 902 is further configured to perform step 403, step 406, step 405, step 408, and step 410 in the above-described embodiment of fig. 4, that is, the setting module 902 is further configured to:
if the pre-read data hit rate is smaller than the first hit rate threshold, setting the data decompression rule as follows: the compressed data is not required to be decompressed when being pre-read from the file system to the memory;
if the pre-read data hit rate is greater than or equal to the first hit rate threshold and less than or equal to the second hit rate threshold, setting the data decompression rule as follows: decompression is needed when the compressed data is pre-read from the file system to the memory; the second hit rate threshold is greater than the first hit rate threshold;
if the pre-read data hit rate is greater than the second hit rate threshold, setting the data decompression rule as follows: decompression is needed when the compressed data is pre-read from the file system to the memory.
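Since the second and third cases above both choose decompression at pre-read time, the rule reduces to a single comparison against the first hit-rate threshold. A sketch, assuming the 30% first hit-rate threshold from the example; the names are illustrative:

```python
def decompress_on_preread(hit_rate, first_threshold=0.30):
    """Data decompression rule: decompress compressed data at pre-read time
    only when the previous run's pre-read data hit rate reached the first
    hit-rate threshold; otherwise defer decompression until access."""
    return hit_rate >= first_threshold
```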
Optionally, the file system is an EROFS.
Optionally, the apparatus further comprises:
and the determining module is used for taking the pre-reading window threshold used by the application process before switching as a second pre-reading window threshold after the application process is switched from the foreground to the background to run.
The reading module 903 is further configured to, in the background running process of the application process, pre-read data from the file system to the memory according to the specified pre-read window threshold value when receiving the IO request of the application process, and access data requested by the IO request of the application process in the memory, where the specified pre-read window threshold value is smaller than the second pre-read window threshold value.
The reading module 903 is further configured to, after the application process is switched from the background to the foreground and runs, pre-read data from the file system to the memory according to the second pre-read window threshold when the IO request of the application process is received in the foreground running process of the application process, and access data requested by the IO request of the application process in the memory.
In the embodiment of the application, when the target service runs, the pre-reading data hit rate of the target service in the last running is obtained. And then, setting a first pre-reading window threshold according to the pre-reading data hit rate. And finally, pre-reading data from the file system to the memory according to the first pre-reading window threshold value. Because the first pre-reading window threshold is the maximum data volume which can be pre-read once when the data is pre-read from the file system to the memory in the current operation process of the target service, and the first pre-reading window threshold depends on the hit rate of the pre-reading data, the pre-reading data volume in the current operation process of the target service can be matched with the hit rate of the pre-reading data, so that excessive or insufficient pre-reading of the data can be avoided, the reasonable use of system resources can be further ensured, and the normal operation of other services can be also ensured.
It should be noted that: in the data reading apparatus provided in the foregoing embodiment, when reading data, only the division of each functional module is illustrated, and in practical applications, the functions may be allocated by different functional modules according to needs, that is, the internal structure of the apparatus is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the data reading apparatus and the data reading method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When implemented in software, the embodiments may be realized wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., digital versatile disc (DVD)), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is not intended to limit the present application to the particular embodiments disclosed, but is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present application.
Claims (9)
1. A method of reading data, the method comprising:
when a target service operates, obtaining the pre-reading data hit rate of the target service during the last operation, wherein the pre-reading data hit rate is the rate of data pre-read from a file system to a memory to be accessed, and the data stored in the file system is compressed data;
setting a first pre-reading window threshold value according to the hit rate of the pre-reading data, wherein the first pre-reading window threshold value is the maximum data volume which can be pre-read in a single time when the data is pre-read from the file system to the memory in the current operation process of the target service;
setting a data decompression rule in the current operation of the target service according to the pre-reading data hit rate, wherein the data decompression rule is used for indicating whether decompression is needed when the compressed data is pre-read from the file system to the memory;
and if the data decompression rule indicates that decompression is required, pre-reading the compressed data from the file system to the memory according to the first pre-reading window threshold, and decompressing the pre-read compressed data.
2. The method of claim 1, wherein obtaining, when the target service runs, the pre-read data hit rate of the target service from its last run comprises:
receiving an input/output (IO) request generated in the running process of the target service, wherein the IO request is used for requesting to read data stored in the file system;
if the data requested by the IO request is not found in the memory, obtaining the pre-reading data hit rate of the target service when the target service operates last time;
after pre-reading the compressed data from the file system to the memory according to the first pre-reading window threshold and decompressing the pre-read compressed data, the method further includes:
and accessing the data requested by the IO request in the memory.
3. The method of claim 1 or 2, wherein setting a first read-ahead window threshold according to the read-ahead data hit rate comprises:
if the hit rate of the pre-read data is smaller than a first hit rate threshold, reducing a pre-read window threshold used when the target service operates last time, and using the reduced pre-read window threshold as the first pre-read window threshold;
if the hit rate of the pre-read data is greater than or equal to the first hit rate threshold and less than or equal to a second hit rate threshold, taking a pre-read window threshold used in the last operation of the target service as the first pre-read window threshold, wherein the second hit rate threshold is greater than the first hit rate threshold;
and if the hit rate of the pre-read data is greater than the second hit rate threshold, increasing a pre-read window threshold used when the target service operates last time, and then using the increased pre-read window threshold as the first pre-read window threshold.
4. The method of claim 1, wherein the setting of the data decompression rule at the current run time of the target service according to the pre-read data hit rate comprises:
if the pre-read data hit rate is smaller than a first hit rate threshold, setting the data decompression rule as follows: the compressed data is not required to be decompressed when being pre-read from the file system to the memory;
if the pre-read data hit rate is greater than or equal to the first hit rate threshold and less than or equal to a second hit rate threshold, setting the data decompression rule as follows: decompression is needed when the compressed data is pre-read from the file system to the memory; the second hit rate threshold is greater than the first hit rate threshold;
if the pre-read data hit rate is greater than the second hit rate threshold, setting the data decompression rule as follows: and when the compressed data is pre-read from the file system to the memory, decompression is needed.
5. The method according to any of claims 1, 2, 4, wherein said file system is an extensible read-only file system, EROFS.
6. The method of any of claims 1, 2, or 4, further comprising:
after the application process is switched from a foreground to a background to run, taking a pre-reading window threshold value used by the application process before switching as a second pre-reading window threshold value;
and in the background running process of the application process, pre-reading data from the file system to the memory according to a specified pre-reading window threshold value and accessing the data requested by the IO request of the application process in the memory every time the IO request of the application process is received, wherein the specified pre-reading window threshold value is smaller than the second pre-reading window threshold value.
7. The method of claim 6, wherein the method further comprises:
after the application process is switched from the background to the foreground to run, in the foreground running process of the application process, when an IO request of the application process is received, data is pre-read from the file system to the memory according to the second pre-reading window threshold, and the data requested by the IO request of the application process is accessed in the memory.
8. A data reading apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring the pre-reading data hit rate when a target service operates last time, the pre-reading data hit rate is the rate of accessing data pre-read from a file system to a memory, and the data stored in the file system is compressed data;
a setting module, configured to set a first pre-reading window threshold according to the hit rate of the pre-reading data, where the first pre-reading window threshold is a maximum data amount that can be pre-read at a single time when the target service pre-reads data from the file system to the memory in the current operation process; setting a data decompression rule in the current operation of the target service according to the pre-reading data hit rate, wherein the data decompression rule is used for indicating whether decompression is needed when the compressed data is pre-read from the file system to the memory;
and the reading module is used for pre-reading the compressed data from the file system to the memory according to the first pre-reading window threshold value and decompressing the pre-read compressed data if the data decompression rule indicates that decompression is required.
9. A computer-readable storage medium having stored therein instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111017226.9A CN113760192B (en) | 2021-08-31 | 2021-08-31 | Data reading method, data reading apparatus, storage medium, and program product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113760192A CN113760192A (en) | 2021-12-07 |
CN113760192B true CN113760192B (en) | 2022-09-02 |
Family
ID=78792369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111017226.9A Active CN113760192B (en) | 2021-08-31 | 2021-08-31 | Data reading method, data reading apparatus, storage medium, and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113760192B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117130952B (en) * | 2023-01-10 | 2024-06-21 | 荣耀终端有限公司 | Pre-reading method and pre-reading device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999034356A2 (en) * | 1997-12-30 | 1999-07-08 | Genesis One Technologies, Inc. | Disk cache enhancer with dynamically sized read request based upon current cache hit rate |
CN106681659A (en) * | 2016-12-16 | 2017-05-17 | 郑州云海信息技术有限公司 | Data compression method and device |
CN107480150A (en) * | 2016-06-07 | 2017-12-15 | 阿里巴巴集团控股有限公司 | A kind of file loading method and device |
CN108932315A (en) * | 2018-06-21 | 2018-12-04 | 郑州云海信息技术有限公司 | A kind of method and relevant apparatus of data decompression |
CN111818122A (en) * | 2020-05-28 | 2020-10-23 | 北京航空航天大学 | A Data Prefetching Method for WAN Based on Traffic Fairness |
CN112445725A (en) * | 2019-08-27 | 2021-03-05 | 华为技术有限公司 | Method and device for pre-reading file page and terminal equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112558866B (en) * | 2020-12-03 | 2022-12-09 | OPPO (Chongqing) Intelligent Technology Co., Ltd. | Data pre-reading method, mobile terminal and computer readable storage medium |
- 2021-08-31: CN application CN202111017226.9A filed; patent CN113760192B granted, status Active
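The cited art above shares a common theme, stated most directly in WO1999034356A2: sizing disk read-ahead requests from the observed cache hit rate. As an illustrative sketch only — the class name, parameters, and doubling/halving policy below are assumptions for illustration, not taken from the patent text — such a policy might look like:

```python
class HitRateReadahead:
    """Illustrative read-ahead window sizing driven by cache hit rate.

    All parameters are hypothetical defaults, not values from the patent.
    """

    def __init__(self, min_pages=4, max_pages=128, target_hit_rate=0.8):
        self.min_pages = min_pages
        self.max_pages = max_pages
        self.target = target_hit_rate
        self.window = min_pages   # current read-ahead size, in pages
        self.hits = 0
        self.lookups = 0

    def record(self, hit: bool) -> None:
        """Record one cache lookup and whether it hit."""
        self.lookups += 1
        if hit:
            self.hits += 1

    def next_window(self) -> int:
        """Return the read-ahead size for the next read request.

        Grows the window while hits are frequent (sequential access is
        paying off) and shrinks it when misses dominate, clamped to
        [min_pages, max_pages]; counters reset each adjustment period.
        """
        if self.lookups:
            rate = self.hits / self.lookups
            if rate >= self.target:
                self.window = min(self.window * 2, self.max_pages)
            else:
                self.window = max(self.window // 2, self.min_pages)
            self.hits = self.lookups = 0
        return self.window
```

A caller would feed each cache lookup into `record()` and ask `next_window()` before issuing the next read; the multiplicative grow/shrink keeps the window responsive without oscillating on a single stray miss.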
Non-Patent Citations (3)
Title |
---|
Burst-Cycle Data Compression Schemes for Pre-Fuse Wafer-Level Test in Large Scale High-Speed Embedded DRAM; Ryo Fukuda et al.; 2004 Symposium on VLSI Circuits Digest of Technical Papers; 2005-04-15; full text *
File Readahead in the Linux Kernel; Wu Fengguang; Software World; 2007-11-08 (Issue 21); full text *
Research on a Data Prefetching Mechanism Among Small Files Based on pNFS; Yang Hongzhang et al.; Journal of Computer Research and Development; 2014-12-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113760192A (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115016706B (en) | Thread scheduling method and electronic equipment | |
CN113553130B (en) | Method for executing drawing operation by application and electronic equipment | |
CN112527476A (en) | Resource scheduling method and electronic equipment | |
CN114461588A (en) | Method for adjusting pre-reading window and electronic equipment | |
WO2023061014A1 (en) | Task management method and apparatus | |
CN113760192B (en) | Data reading method, data reading apparatus, storage medium, and program product | |
CN113760191B (en) | Data reading method, data reading apparatus, storage medium, and program product | |
CN113835802A (en) | Device interaction method, system, device and computer readable storage medium | |
CN112783418B (en) | Method for storing application program data and mobile terminal | |
CN115167953A (en) | Application interface display method, device, equipment and storage medium | |
CN115145513A (en) | Screen projection method, system and related device | |
WO2021042881A1 (en) | Message notification method and electronic device | |
WO2023174322A1 (en) | Layer processing method and electronic device | |
WO2023051036A1 (en) | Method and apparatus for loading shader | |
CN118113189A (en) | Display method, display device and wearable equipment | |
CN117407127A (en) | Thread scheduling method and electronic equipment | |
CN117519959A (en) | Memory management method and electronic device | |
CN117729561B (en) | System upgrading method, terminal and storage medium | |
CN117724825B (en) | Interface display method and electronic equipment | |
WO2024032430A1 (en) | Memory management method and electronic device | |
CN117707718B (en) | Process management method, electronic device and readable storage medium | |
CN114860354B (en) | List loading method and electronic equipment | |
CN115840528A (en) | Method for setting waterline of storage disc, electronic equipment and storage medium | |
CN119271575A (en) | Pre-reading method and electronic device | |
WO2024093431A1 (en) | Image drawing method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2023-09-16
Address after: 201306 Building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai
Patentee after: Shanghai Glory Smart Technology Development Co.,Ltd.
Address before: Unit 3401, Unit A, Building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen, Guangdong 518040
Patentee before: Honor Device Co.,Ltd.