CN115048035A - Cache management method, device and related equipment

Cache management method, device and related equipment

Info

Publication number
CN115048035A
CN115048035A (application CN202110252260.8A)
Authority
CN
China
Prior art keywords
cache
address
target file
file
block
Prior art date
Legal status
Pending
Application number
CN202110252260.8A
Other languages
Chinese (zh)
Inventor
Liang Hui (梁辉)
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110252260.8A priority Critical patent/CN115048035A/en
Publication of CN115048035A publication Critical patent/CN115048035A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/18File system types
    • G06F16/188Virtual file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/0643Management of files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Abstract

The application provides a cache management method, a cache management apparatus and related equipment. The method includes: receiving a read request for a target file; in response to the read request, allocating a first cache region for the target file, where the first cache region is the cache region used for reading the target file; receiving a write request for the target file; in response to the write request, allocating a second cache region for the target file and obtaining an identifier of the target file; and writing the target file to a target cache address according to the identifier, where the target cache address is an address in the second cache region.

Description

Cache management method, device and related equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a cache management method and apparatus, and a related device.
Background
With the development of internet services, electronic devices such as mobile phones, digital cameras and vehicle-mounted recorders have become increasingly widespread, and storing massive data, such as parallel application/video downloads and multi-channel video recording, poses a great challenge to the operating system's storage stack. Most current storage works by first writing data to a page cache and then writing it to a disk for permanent storage. The page-cache scheme uses a page as the basic memory unit for copying and storing user file data. Because page addresses are randomly allocated by the system, the physical disk addresses of the files stored in adjacent pages are discontinuous, so when the operating system writes cached file data to the disk, the file data in several adjacent pages cannot be merged and written together. Writing multiple cached pages to the disk therefore requires multiple disk input/output (IO) requests, so the number of disk accesses is large. Meanwhile, the amount of data written by a single disk IO request is limited by the page size, and the optimal disk bandwidth is generally far larger than the size of a single page. As a result, disk bandwidth is wasted, the average write speed is low, disk IO jitter and data loss are likely to occur, and system performance suffers.
Disclosure of Invention
The application provides a cache management method, a cache management apparatus and related equipment, which can reduce the number of storage-device IO operations, increase the write speed, reduce cache memory overhead, and improve system performance.
In a first aspect, an embodiment of the present application provides a cache management method, where the method includes:
The electronic device receives a read request for a target file and, in response, allocates a first cache region for the target file, where the first cache region is the cache region used for reading the target file.
The electronic device receives a write request for the target file and, in response, allocates a second cache region for the target file and obtains an identifier of the target file. The electronic device then caches the target file at a target cache address according to the identifier, where the target cache address is an address in the second cache region.
Because the target files operated on by read requests and write requests are cached in separate regions, and the identifier of each target file is associated with its cache address, the file identifiers at adjacent cache addresses are themselves adjacent; that is, the physical addresses on the storage device of the files cached at adjacent cache addresses are contiguous. Therefore, when the data in the buffer is written to the storage device, the file data at adjacent cache addresses can be merged and written together. This reduces the number of IO requests to the storage device, increases the bandwidth of each IO request, avoids wasting storage-device bandwidth, increases the write speed, and improves system performance.
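As an illustration of why identifier-ordered caching enables merged writes, the following sketch (all names hypothetical, not taken from the patent) keeps buffered slots sorted by file identifier and groups consecutive identifiers into runs that could each be flushed with a single IO request:

```python
# Illustrative sketch (names hypothetical): a write cache whose slots are
# kept sorted by file identifier, so neighbouring slots hold data that is
# physically contiguous on the storage device.
import bisect

class WriteCache:
    def __init__(self):
        self.ids = []    # file identifiers, kept sorted
        self.slots = {}  # identifier -> buffered data

    def put(self, file_id, data):
        if file_id not in self.slots:
            bisect.insort(self.ids, file_id)
            self.slots[file_id] = b""
        self.slots[file_id] += data

    def mergeable_runs(self):
        """Group cached identifiers into runs of consecutive values; each
        run could be flushed to the storage device in a single IO request."""
        runs, run = [], []
        for fid in self.ids:
            if run and fid != run[-1] + 1:
                runs.append(run)
                run = []
            run.append(fid)
        if run:
            runs.append(run)
        return runs

cache = WriteCache()
for fid in (7, 5, 6, 10):
    cache.put(fid, b"data")
assert cache.mergeable_runs() == [[5, 6, 7], [10]]  # one IO for 5-7, one for 10
```

Here files 5, 6 and 7 would need three page-cache IO requests but only one under identifier-ordered caching.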
The target file identifier is an identification number of the target file and indicates the storage address of the target file on the storage device.
In a possible implementation, within a preset time period, when the number of read requests received is smaller than the number of write requests, the first cache region is allocated a smaller capacity than the second cache region; conversely, when the number of read requests is greater than the number of write requests, the first cache region is allocated a larger capacity than the second cache region.
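A minimal sketch of this capacity policy follows. The function name and the proportional rule are illustrative assumptions; the description only fixes the ordering relation between the two regions, not a specific formula:

```python
# Illustrative capacity policy: split a fixed cache budget between the read
# region and the write region in proportion to recent request counts, so that
# fewer reads than writes yields a smaller read region (and vice versa).
def split_capacity(total_blocks, reads, writes, min_blocks=1):
    if reads + writes == 0:
        half = total_blocks // 2
        return half, total_blocks - half
    read_blocks = round(total_blocks * reads / (reads + writes))
    read_blocks = max(min_blocks, min(total_blocks - min_blocks, read_blocks))
    return read_blocks, total_blocks - read_blocks

r, w = split_capacity(16, reads=2, writes=14)
assert r < w  # few reads -> read region smaller than write region
r, w = split_capacity(16, reads=14, writes=2)
assert r > w  # many reads -> read region larger than write region
```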
In a possible implementation, the first cache region and the second cache region each include one or more cache blocks, and the number and size of the cache blocks may be preset.
By allocating the capacities of the first and second cache regions according to the numbers of read and write requests, so that the first cache region receives less capacity than the second when reads are fewer than writes, the method avoids allocating a large amount of cache capacity to a rarely used read path and thus avoids wasting cache memory.
In a specific implementation, caching the target file at the target cache address according to the target file identifier specifically includes: the electronic device detects that the identifier of the target file already exists in the second cache region and caches the target file at a first cache address, where the first cache address is the address in the second cache region corresponding to the identifier of the target file.
In a specific implementation, caching the target file at the target cache address according to the target file identifier specifically includes: the electronic device detects that the identifier of the target file does not exist in the second cache region but that a first identifier does, where the first identifier is the identifier immediately preceding and adjacent to the identifier of the target file; the electronic device then caches the target file at a second cache address, where the second cache address is the block address immediately following a third cache address, and the file corresponding to the first identifier is cached at the third cache address.
In one possible implementation, before caching the target file at the second cache address, the method further includes: when the electronic device detects that the second cache address is occupied by a process, writing the file cached at the second cache address to the storage device.
In a specific implementation, the method further includes: when the electronic device detects that the second cache address has not been fully written by the process, writing the file cached at the second cache address to the storage device.
In a specific implementation, the method further includes: when the electronic device detects that the second cache address has been fully written by the process and that a fourth cache address, the block address immediately following the second cache address, is a free address, it writes the file cached at the second cache address to the storage device. When the electronic device detects that the second cache address has been fully written by the process, that none of the addresses from the second cache address to a fifth cache address is free, and that the block address immediately following the fifth cache address is free, it merges the files from the second cache address to the fifth cache address and writes the merged file to the storage device.
In a specific implementation, merging the files from the second cache address to the fifth cache address and writing the merged file to the storage device specifically includes: the second through fifth cache addresses comprise N cache blocks, and the following operations are performed from the first cache block to the Nth cache block, where the first cache block is the one cached at the second cache address and the ith cache block is any one of the first through Nth cache blocks. Detect whether the file identifiers in the ith and (i+1)th cache blocks are adjacent. If they are not adjacent, merge the files from the first cache block through the ith cache block and write the merged file to the storage device. If they are adjacent, detect whether the (i+1)th cache block has been fully written. If it has not been fully written, merge the files from the first cache block through the (i+1)th cache block and write the merged file to the storage device. If it has been fully written, treat the (i+1)th cache block as the ith cache block and the (i+2)th cache block as the (i+1)th cache block, and again detect whether the file identifiers in the ith and (i+1)th cache blocks are adjacent.
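The scan described above can be sketched as follows. This is a hedged illustration: the block representation and names are assumptions, not taken from the patent.

```python
# Sketch of the merge scan: each block is a (file_id, is_full) pair, starting
# at the block that triggered the flush. The merge extends while the next
# block's identifier is adjacent; a full adjacent block lets the scan
# continue, a partially written adjacent block is included and ends the scan.
def blocks_to_flush(blocks):
    """Return how many leading blocks are merged into one device write."""
    n = 1  # the first cache block is always written
    for i in range(len(blocks) - 1):
        cur_id, _ = blocks[i]
        nxt_id, nxt_full = blocks[i + 1]
        if nxt_id != cur_id + 1:  # identifiers not adjacent: stop before i+1
            break
        n += 1                    # adjacent: include block i+1 in the merge
        if not nxt_full:          # partially written: include it, then stop
            break
    return n

# ids 8, 9, 10 are consecutive and 10 is partially written: merge three blocks.
assert blocks_to_flush([(8, True), (9, True), (10, False), (20, True)]) == 3
assert blocks_to_flush([(8, True), (12, True)]) == 1  # gap: write block 8 alone
```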
In one possible implementation, the method further includes: when the electronic device detects that the identifier of the target file is smaller than a second identifier, it caches the target file in the first cache region, where the second identifier is the smallest of all file identifiers in the second cache region; or, when the electronic device detects that the identifier of the target file is larger than the second identifier, it caches the target file at a sixth cache address, where the sixth cache address is a free address in the second cache region.
In a possible implementation, when the electronic device detects that the target file identifier is smaller than the second identifier, it marks the process in which the target file resides as a slow-write process; when it detects that the target file identifier is larger than the second identifier, it marks the process as a fast-write process. Here, the second identifier is the smallest of all file identifiers in the second cache region.
In a possible implementation, when the electronic device identifies the process in which the target file resides as a slow-write process, it allocates the first cache region to the target file; when it identifies the process as a fast-write process, it allocates the second cache region to the target file.
By caching the target files of slow-write processes and fast-write processes in separate regions, files from slow-write processes are prevented from occupying the cache addresses of files from fast-write processes, which increases the average speed at which files are written to the storage device.
In a possible implementation, when the electronic device detects that no file with the target file identifier exists in the second cache region, it may further detect whether the second identifier is smaller than the target file identifier. If the target file identifier is smaller than the second identifier, the target file is cached in the first cache region; if it is larger, the electronic device detects whether a file with the first identifier exists in the second cache region. If no file with the first identifier exists in the second cache region, the electronic device caches the target file at the sixth cache address; if such a file exists, the electronic device writes the target file to the storage device. Here, the second identifier is the smallest of all file identifiers in the second cache region.
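This decision flow can be sketched with an illustrative function (the function and argument names are assumptions): `second_id` is the smallest identifier in the second cache region, and `prev_cached` says whether the identifier immediately preceding the target's is already cached there.

```python
# Illustrative decision flow for a write whose identifier is absent from the
# second cache region, following the three outcomes in the paragraph above.
def place_target(target_id, second_id, prev_cached):
    if target_id < second_id:
        return "cache in first region"          # slow-write process
    if not prev_cached:
        return "cache at free sixth address"    # fast-write, no neighbour
    return "write directly to storage device"   # fast-write, neighbour cached

assert place_target(3, second_id=5, prev_cached=False) == "cache in first region"
assert place_target(9, second_id=5, prev_cached=False) == "cache at free sixth address"
assert place_target(9, second_id=5, prev_cached=True) == "write directly to storage device"
```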
In a second aspect, an embodiment of the present application provides a cache management apparatus, which includes units configured to execute the cache management method in the first aspect or any one of the possible implementation manners of the first aspect.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory; the processor executes the code in the memory to perform the method as provided by the first aspect and any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to perform a method as provided in the first aspect or any one of the possible implementation manners of the first aspect.
The implementations provided by the above aspects can be further combined to provide additional implementations.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is an application scenario related to an embodiment of the present application;
FIG. 2 is a user interface for displaying an application menu provided by an embodiment of the present application;
fig. 3 is a recording interface provided in an embodiment of the present application;
FIG. 4 is a system architecture to which embodiments of the present application relate;
fig. 5 is a system framework diagram of a cache system according to an embodiment of the present application;
fig. 6 is a flowchart of a cache management method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a cache area according to an embodiment of the present disclosure;
FIG. 8 is a detailed flow chart provided by an embodiment of the present application;
fig. 9 is a schematic diagram illustrating that a cache address is occupied according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another example of an occupied cache address provided in this application;
FIG. 11 is a schematic diagram of a merging algorithm provided in an embodiment of the present application;
Figs. 12a-12e are schematic diagrams of several file merges provided by embodiments of the present application;
fig. 13 is a schematic diagram of a cache address being occupied according to an embodiment of the present application;
FIG. 14 is a flow chart of a fast-slow recognition algorithm provided by an embodiment of the present application;
fig. 15 is a schematic diagram of a cache apparatus provided in an embodiment of the present application;
fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The terminology used in the examples section of this application is for the purpose of describing particular embodiments of the invention only and is not intended to be limiting of the invention.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
First, some concepts involved in the embodiments of the present application will be described.
A Virtual File System (VFS) is an interface that allows different file system implementations to be used with the operating system. The VFS provides user programs with a uniform interface for file and file system operations and shields the differences and operational details of the underlying file systems.
A file system (FS) comprises the methods and data structures that an operating system uses to store files on a device (usually a disk, but also a NAND-flash-based solid-state drive) or partition; that is, it is the method of organizing files on a storage device. The software mechanism in the operating system responsible for managing and storing file information is called the file management system, or file system for short. A file system consists of three parts: the file system interface, the software components for manipulating and managing objects, and the objects and their attributes. From a system perspective, a file system organizes and allocates the space of the file storage device, is responsible for storing files, and protects and retrieves stored files. In particular, the file system is responsible for creating files for users; storing, reading, modifying and dumping files; controlling access to files; and revoking files when a user no longer uses them. A File Allocation Table (FAT) is one of the common file systems. It is a table recording file locations, in which the disk space is divided into units of a certain number of sectors; such a unit is a cluster, and the number of sectors in a cluster must be an integer power of 2, up to a maximum of 64 sectors, i.e., 32 KB. All clusters are numbered starting from 2, and each cluster has its own address number. Both user files and directories are stored in clusters. FAT is classified into FAT12, FAT16 and FAT32, distinguished by the number of binary bits used to record a cluster link: 12, 16 and 32, respectively. When the file system allocates disk addresses to store user data, it always stores the data in order from small to large addresses, and each read or write request for a file corresponds to a cluster address in the disk space.
When reading or writing a file, the cluster addresses at which the file's data is stored on the disk can be obtained from the cluster numbers recorded for the file in the FAT area.
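As a worked illustration of following the cluster numbers recorded in the FAT, the sketch below walks a cluster chain. The dictionary layout and end-of-chain marker are simplified assumptions, not the on-disk format:

```python
# Simplified illustration of a FAT lookup: starting from a file's first
# cluster number, follow the FAT entries to collect every cluster holding
# the file's data.
END_OF_CHAIN = 0xFFF  # FAT12-style end marker, for illustration

def cluster_chain(fat, first_cluster):
    chain, cluster = [], first_cluster
    while cluster != END_OF_CHAIN:
        chain.append(cluster)
        cluster = fat[cluster]  # the FAT entry points to the next cluster
    return chain

# A file starting at cluster 2 that occupies clusters 2 -> 3 -> 5.
fat = {2: 3, 3: 5, 5: END_OF_CHAIN}
assert cluster_chain(fat, 2) == [2, 3, 5]
```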
The embodiment of the present application provides a cache management method that associates the cluster number of a target file with its cache address, so that the file cluster numbers at adjacent cache addresses are consecutive; that is, the physical addresses on the storage device of the files cached at adjacent cache addresses are contiguous. Therefore, when the files in the buffer are written to the storage device, the file data at adjacent cache addresses can be merged and written together, which reduces the number of IO requests to the storage device and increases the bandwidth of a single IO request, so storage-device bandwidth is not wasted. Meanwhile, the target files operated on by read requests and write requests are cached in separate regions, the target files of fast and slow processes are cached in separate regions to increase the average write speed, and the sizes of the two cache regions can be set adaptively, which avoids wasting cache memory and improves system performance.
The electronic device in the embodiments of the present application may be a car recorder, desktop computer, game console, television, projector, personal digital assistant (PDA), portable computer, network tablet, tablet computer, wireless telephone, mobile phone, smartphone, electronic book reader, portable multimedia player (PMP), MP3 player, portable game console, navigation device, black box, digital camera, digital multimedia broadcasting (DMB) player, three-dimensional (3D) television, smart television, digital audio recorder, digital audio player, digital picture recorder, digital picture player, digital video recorder, or digital video player. The present application is not limited in this respect. The following describes embodiments of the present application using a drive recorder as an example.
The following describes an application scenario related to the present application.
Fig. 1 is an application scenario related to an embodiment of the present application. As shown in fig. 1, a user drives on a road using a vehicle equipped with a drive recording apparatus 100. The driving recording device 100 mainly records surrounding conditions in the driving process in real time through one or more built-in cameras, and stores the recorded one or more paths of video data in a memory card.
Fig. 2 shows an exemplary user interface 110 for displaying an application menu on the driving recording device 100. The user interface 110 includes: gallery icon 111, settings icon 112, video icon 113, camera icon 114, home interface icon 115, and information indicator 116. Wherein:
the gallery icon 111 may be used to launch a gallery application, for example, the drive recording device 100 may launch the gallery application in response to a user operation, such as a touch operation, acting on the gallery icon 111, so that pictures and videos stored in the drive recording device 100 are displayed. The pictures and videos stored in the driving recording equipment 100 include pictures and videos shot by the driving recording equipment 100 through a camera application program.
The setting icon 112 is used for displaying a setting interface of the driving recording equipment 100;
the video icon 113 may be used for real-time recording, and when a touch operation acting on the video icon 113 is detected, the driving recording device 100 may call a camera to record surrounding conditions in real time;
the camera icon 114 may be used to take a picture, and when a touch operation acting on the camera icon 114 is detected, the driving recording device 100 may call a camera to take a picture of surrounding conditions;
a home interface icon 115, which is configured to display a home interface when a touch operation on the home interface icon 115 is detected;
the information indication icon 116 is used for indicating at least one of time information, date information, weather information, or a short message or an instant message received by a mobile phone or a wearable device of a user who maintains a wireless connection with the automobile data recorder.
In some embodiments, the user interface 110 illustratively shown in FIG. 2 may be the primary interface.
It is understood that fig. 2 only illustrates the user interface on the driving recording device 100 by way of example, and should not be construed as limiting the embodiments of the present application.
As shown in fig. 2, the user may click the video icon 113 on the interface 110 of the driving recording apparatus 100, the driving recording apparatus 100 detects a touch operation applied to the video icon 113, and in response to the touch operation, the driving recording apparatus 100 may call a camera to record the surrounding situation in real time, and the driving recording apparatus 100 may display a recording interface 210 as shown in fig. 3. Fig. 3 illustrates an exemplary recording interface 210 on the driving recording device 100, and as shown in fig. 3, the recording interface 210 includes: a video display area 211, a home interface icon 212, a return icon 213, and an information indicator 214. Wherein:
the video display area 211 is used for displaying the current picture shot by the camera;
main interface icon 212 is identical to main interface icon 115 of user interface 110 and will not be described in detail herein;
the return icon 213 is used to return to the previous page; when a touch operation on the return icon 213 is detected, the driving recording device 100 may display the previous page of the current page;
the information indicator 214 is identical to the information indicator 116 of the user interface 110 and will not be described in detail herein.
The user operation listed above for starting recording on the driving recording device 100 is not limiting; in a specific implementation, other user operations may also start recording, which is not limited in this application.
When the driving recording device 100 starts recording, it sends the recorded video to the CPU for processing; the video data processed by the CPU is cached in the buffer, and when a preset condition is met, the files in the buffer are written to the storage device.
In one possible implementation, the buffer in the above embodiments may be, but is not limited to, a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), or other volatile memory devices.
Fig. 4 shows a system architecture according to an embodiment of the present application. As shown in fig. 4, the system includes an application layer, a kernel layer, and a physical layer. When the driving recording device 100 starts up, camera 1 and/or camera 2 in the device write the captured video or photos, as target files, to the storage device. The application sends a write request for the target file to the kernel space layer. The virtual file system in the kernel space layer provides a uniform file system interface, and the file system detects whether the write request for the target file is direct IO. If so, the target file is written directly to the storage device; if not, the target file is first cached by the cache system and then written to the storage device. Similarly, when a user uses the player to view a video or photo on the driving recording device 100, the application issues a read request for the target file and sends it to the kernel space layer. The file system detects whether the IO request for the target file is direct IO. If so, the target file is read directly from the storage device; if not, the video or photo is first read from the storage device into the buffer as the target file.
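A minimal sketch of the direct-IO branch in this write path follows; the request representation is an assumption for illustration only.

```python
# Direct IO bypasses the cache system; buffered IO is cached first and
# flushed to the storage device later.
def handle_write(request, cache, storage):
    if request["direct_io"]:
        storage.append(request["data"])  # write straight to the device
    else:
        cache.append(request["data"])    # buffer now, flush to device later

cache, storage = [], []
handle_write({"direct_io": True, "data": "frame1"}, cache, storage)
handle_write({"direct_io": False, "data": "frame2"}, cache, storage)
assert storage == ["frame1"] and cache == ["frame2"]
```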
The application program is not limited to a camera application and may be another application, and the target file is not limited to a video or a photo and may be another type of file, which is not limited in this application.
Fig. 5 exemplarily shows a system framework diagram of a cache system, and as shown in fig. 5, the cache system includes a read-write identification module 501, a cache management module 502, a fast-slow identification module 503, a cache merging module 504, and a cache 505. Wherein:
the read-write identification module 501 may be configured to identify a type of an IO request for a target file, where the type of the IO request includes a read request and a write request. For example, when an application performs a write operation on a target file, a write request to the target file may be identified, and when a read operation is performed on the target file, a read request to the target file may be identified.
The cache management module 502 may be used to manage all operations in the cache system, such as read operations, write operations, allocating a cache area, and detecting a file identifier. It is to be appreciated that the cache management module 502 may manage the cache system in response to IO requests from the file system for the target file. For example, the cache management module 502 may determine a cache address of a target file in the cache 505 in response to a write request from the file system for the target file.
The fast-slow identification module 503 is configured to identify whether the process writing the target file is a slow-write process or a fast-write process.
The cache merging module 504 is configured to merge the target file data according to the instruction of the cache management module 502.
The buffer 505 may be used to buffer data for read operations and/or write operations. For example, when the IO request of the application program for the target file is a read request, the cache management module 502 first reads the target file from the storage device into the cache 505, and then the cache management module 502 reads the target file from the cache 505. When the IO request for the target file is a write request, the cache management module 502 caches the target file in the cache 505 first, and then writes the target file in the cache 505 into the storage device.
Fig. 6 shows a cache management method provided in an embodiment of the present application, which is applied to the cache system shown in fig. 5, and as shown in fig. 6, the cache management method includes:
s101: receiving an IO request for a target file, and determining a cache region of the target file according to the type of the IO request.
In a specific implementation manner, before receiving an IO request of an application program for a target file, the cache management module 502 divides the cache 505 into a first cache region and a second cache region, where the first cache region and the second cache region both include one or more cache blocks, and the size and the number of the cache blocks may be preset, for example, the size may be a FAT file system window size, such as 32K, and the number may be 3 or 5, which is not limited in this application. Illustratively, as shown in fig. 7, the first cache region may include 7 cache blocks BLK 1 to BLK 7, the second cache region may include 21 cache blocks BLK 1 to BLK 21, and the size of the cache block in the first cache region may be identical to the size of the cache block in the second cache region.
The cache management module 502 receives an IO request of an application program for a target file and determines a cache region for the target file according to the type of the IO request. The types of IO requests include read requests and write requests. Specifically, the read-write identification module 501 calls a system function to identify the type of the IO request for the target file. If the received IO request for the target file is a read request, the first cache region is allocated for the target file, and when the read request is executed, the cache management module 502 reads the target file from the storage device into the first cache region. If the received IO request for the target file is a write request, the second cache region is allocated for the target file; when the system executes the write request, the cache management module 502 first caches the target file in the second cache region and then writes it from the second cache region into the storage device. The system function may be a write function or a read function of the file system, or another function, which is not limited in this embodiment of the present application.
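The partitioning and routing logic above can be sketched as a toy model. All names (`CacheSystem`, `region_for`) and the block counts are illustrative assumptions taken from the Fig. 7 example, not the patent's actual implementation:

```python
BLOCK_SIZE = 32 * 1024  # assumed FAT file system window size of 32K, per the text

class CacheSystem:
    """Toy model of the partitioned cache described above (illustrative only)."""

    def __init__(self, read_blocks=7, write_blocks=21):
        # First region serves read requests, second serves write requests;
        # block counts follow the Fig. 7 example (BLK 1..7 and BLK 1..21).
        self.first_region = [None] * read_blocks
        self.second_region = [None] * write_blocks

    def region_for(self, io_type):
        # Read requests stage data in the first region; write requests are
        # buffered in the second region before being flushed to storage.
        return self.first_region if io_type == "read" else self.second_region
```

A read request is thus served from the first region and a write request buffered in the second, mirroring how the read-write identification module 501 steers each IO request.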
Optionally, within a preset time period, when the number of read requests received is smaller than the number of write requests received, the first cache region is allocated a smaller capacity than the second cache region, that is, the number of cache blocks in the first cache region is smaller than the number of cache blocks in the second cache region. Conversely, within the preset time period, when the number of read requests received is greater than the number of write requests received, the first cache region is allocated a larger capacity than the second cache region.
By caching the target files of read requests and write requests in separate regions and controlling the sizes of those regions, a small amount of cache space can be allocated to read targets when the application performs few read operations, avoiding waste of cache space. For example, in the driving recording device 100, a user typically records the surroundings in real time during driving through one or more built-in cameras and stores the recorded one or more channels of video data in a memory card. In a driving recording device, videos or photos are in most cases written into the storage device as target files, and only rarely viewed, so the number of write requests far exceeds the number of read requests. A small first cache region can therefore be allocated for read requests and a larger second cache region for write requests, avoiding wasted buffer space.
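One possible sizing policy consistent with the paragraph above is to split the blocks in proportion to the recent read/write request counts. This is an invented illustrative policy, not the patent's exact sizing rule; the function name and the minimum-one-block guard are assumptions:

```python
def size_regions(read_count, write_count, total_blocks):
    """Split `total_blocks` between the read (first) and write (second)
    cache regions according to recent request counts (illustrative)."""
    total = read_count + write_count
    if total == 0:
        # No history yet: split evenly.
        read_blocks = total_blocks // 2
    else:
        # Proportional split, guaranteeing at least one block per region.
        read_blocks = max(1, min(total_blocks - 1,
                                 round(total_blocks * read_count / total)))
    return read_blocks, total_blocks - read_blocks
```

In a dash-cam workload where writes dominate, the read region shrinks to a minimum while the write region takes almost all of the cache.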
S102: when the received IO request of the target file is a write request, acquiring the identifier of the target file, and writing the target file into the target cache address according to the identifier of the target file.
Specifically, when the read-write identification module 501 identifies that the IO request of the application program for the target file is a write request, the cache management module 502 obtains and records the identifier of the target file. The identifier of the target file is its identification number and indicates the storage location of the target file in the storage device; in embodiments of the present application, the identifier may be a cluster number. It can be understood that, after receiving the IO request from the application program for the target file, the file system assigns the target file an identifier, that is, a cluster number. The cache management module 502 records the cluster numbers of the files of all write requests. After obtaining the cluster number N of the target file, the cache management module 502 checks the cluster numbers of all files cached in the second cache region to determine whether a file with cluster number N exists there. If such a file exists, the target file is cached at a first cache address, where the first cache address is the cache address corresponding to the file with cluster number N in the second cache region. If no file with cluster number N exists in the second cache region, the cache management module 502 detects whether a file with cluster number N-1 exists in the second cache region.
When no file with cluster number N-1 exists in the second cache region, the cache management module 502 detects whether the cluster number N of the target file is smaller than the minimum cluster number in the second cache region; here the cluster number N-1 is the first identifier, which is the identifier immediately preceding that of the target file. When the cluster number N of the target file is smaller than the minimum cluster number in the second cache region, the cache management module 502 caches the target file in the first cache region. When the cluster number N of the target file is greater than the minimum cluster number in the second cache region, the cache management module 502 caches the target file at a sixth cache address and records the cluster number N of the target file in the second cache region, where the minimum cluster number is the second identifier and the sixth cache address is a free cache address in the second cache region.
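The lookup order just described (exact-match cluster number, then predecessor cluster number, then the minimum-cluster comparison) can be sketched as a small decision function. The function name, the dict-based bookkeeping, and the returned labels are assumptions made for illustration:

```python
def target_address(cluster_n, region):
    """Decide where a file with cluster number `cluster_n` is cached.

    `region` maps cluster numbers already cached in the second cache
    region to their block indices. Illustrative sketch of step S102."""
    if cluster_n in region:
        # File with cluster number N already cached: reuse its address
        # (the "first cache address").
        return ("reuse", region[cluster_n])
    if cluster_n - 1 in region:
        # Predecessor N-1 is cached: the target goes right after it
        # (the "second cache address").
        return ("append", region[cluster_n - 1] + 1)
    if region and cluster_n < min(region):
        # Below the minimum recorded cluster number: treated as a
        # slow-write process, cached in the first cache region instead.
        return ("first_region", None)
    # Otherwise use a free block in the second region
    # (the "sixth cache address").
    return ("free", None)
```

For example, with clusters 6 through 9 recorded, a write for cluster 3 is diverted to the first cache region, while a write for cluster 10 is appended after cluster 9.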
Optionally, the fast-slow recognition module 503 detects that the identifier of the target file is smaller than the second identifier, and recognizes the process in which the target file is located as a slow-write process; or, the fast-slow identifying module 503 detects that the identification of the target file is greater than the second identification, and identifies the process in which the target file is located as a fast-write process.
Optionally, the fast-slow identification module 503 sends the identification result of the process where the target file is located to the cache management module 502. The cache management module 502 receives that the process of identifying the target file sent by the fast and slow identifying module 503 is a slow writing process, and the cache management module 502 allocates a first cache region for the target file; or, the cache management module 502 receives that the process in which the identified target file is sent by the fast-slow identifying module 503 is a fast-write process, and the cache management module 502 allocates a second cache region for the target file.
When a file with cluster number N-1 exists in the second cache region, the cache management module 502 looks up the cache address corresponding to that file and detects whether the next cache address after it is occupied. If the next cache address after the one corresponding to the file with cluster number N-1 is not occupied, the cache management module 502 caches the target file at a second cache address and records the cluster number of the target file in the second cache region, where the second cache address is the address of the block immediately after a third cache address, and the third cache address is the cache address of the file corresponding to the first identifier. If that next cache address is occupied, that is, the second cache address is occupied, the cache management module 502 writes the file at the second cache address into the storage device, releases the space of the second cache address, then caches the target file at the second cache address and records the cluster number N of the target file in the second cache region.
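The flush-then-reuse step above can be sketched as follows. `place_after_predecessor`, the `flush` callback, and the `None`-means-free convention are hypothetical helpers introduced for illustration:

```python
def place_after_predecessor(blocks, pred_idx, target, flush):
    """Cache `target` in the block right after its predecessor (cluster N-1).

    `blocks` models the second cache region (None = free block); `flush`
    writes an occupant to storage. Illustrative sketch of the step above."""
    addr = pred_idx + 1                 # the "second cache address"
    if blocks[addr] is not None:        # occupied: flush the occupant first
        flush(blocks[addr])             # write it to the storage device
        blocks[addr] = None             # release the space
    blocks[addr] = target               # cache the target file there
    return addr
```

This keeps files with consecutive cluster numbers at adjacent cache addresses, which is what later enables merged writes.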
By associating the cluster number of the target file with its cache address, the cluster numbers of the files at adjacent cache addresses in the cache are consecutive, which means that the physical addresses on the storage device of the files cached at adjacent cache addresses are also consecutive. Therefore, when the system writes the cached data into the storage device, the file data at adjacent cache addresses can be merged and written together, reducing the number of IO requests, increasing the bandwidth of a single IO request, and avoiding waste of the storage device's bandwidth.
Next, taking the cluster number as the identifier of the target file as an example, a specific process of determining the target cache address corresponding to the target file in the embodiment of the present application is exemplarily described with reference to fig. 8.
S201, detecting whether the IO request for the target file is a write request, if so, executing S202, and if not, caching the target file in the first cache region.
Specifically, after the cache management module 502 receives an IO request of an application program for a target file, the read-write identification module 501 identifies whether the IO request for the target file is a write request. If the IO request for the target file is identified as a read request, the cache management module 502 allocates a first cache region for the target file, and if the IO request for the target file is identified as a write request, S202 is executed.
S202, allocating a second cache area and acquiring the cluster number N of the target file.
Specifically, when the read-write identification module 501 identifies that the IO request for the target file is a write request, the cache management module 502 allocates a second cache region for the target file, and the cache management module 502 obtains the cluster number N of the target file from the file system.
S203, whether the second cache region has a file with the cluster number of N is detected, if yes, the target file is cached to the first cache address, and if not, S204 is executed.
Specifically, the cache management module 502 detects whether a file with a cluster number of N exists in the second cache region, and caches the target file in the first cache address if the file with the cluster number of N exists in the second cache region, where the first cache address is a cache address corresponding to the cluster number of N in the second cache region; if the file with the cluster number N does not exist in the second cache region, the operation in S204 is executed.
S204, detecting whether a file with a cluster number of N-1 exists in the second cache region, if so, caching the target file to a second cache address, and if not, executing S205.
Specifically, when the cache management module 502 detects that the file with the cluster number N does not exist in the second cache region, the cache management module 502 detects whether the file with the cluster number N-1 exists in the second cache region. If it is detected that the file with the cluster number of N-1 does not exist in the second cache region, executing S205; and if detecting that the file with the cluster number of N-1 exists in the second cache region, caching the target file to a second cache address, wherein the second cache address is the next block address adjacent to the cache address corresponding to the file with the cluster number of N-1.
Specifically, before caching the target file to the second cache address, the cache management module 502 needs to detect whether the second cache address is occupied. When the second cache address is occupied, the cache management module 502 detects whether the second cache address is full. If it is detected that the second cache address is not fully written, the cache management module 502 writes the file cached in the second cache address into the storage device, releases the cache space of the second cache address, and then writes the target file into the second cache address. For example, as shown in fig. 9, the cache address corresponding to the cluster number N-1 is BLK N-1, and the next cache address of the cache address corresponding to the cluster number N-1, that is, the second cache address is BLK N, it can be seen that the BLK N is not fully written, and the cache management module 502 can write the file in the BLK N into the storage device, release the space of the BLK N, and then write the target file into the BLK N.
If it is detected that the second cache address is fully written, the cache management module 502 detects whether the address of the block after the second cache address is free. If it is free, the cache management module 502 writes the file cached at the second cache address into the storage device. If it is not free, a fifth cache address is located, and the cache management module 502 sends the second cache address and the fifth cache address to the cache merging module 504. After receiving them, the cache merging module 504 merges the files cached from the second cache address through the fifth cache address into a merged file and writes the merged file into the storage device; here, none of the addresses from the second cache address through the fifth cache address is free, while the cache address of the block after the fifth cache address is free. For example, as shown in fig. 10, when the second cache address BLK n is fully written, and the addresses from the second cache address BLK n through BLK n+2 are fully written by the process task3, the cache address (BLK n+3) after BLK n+2 is a free address, so the cache address BLK n+2 is the fifth cache address.
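Locating the fifth cache address amounts to scanning forward from the second cache address until the next block is free. A minimal sketch, assuming free blocks are modeled as `None` and the function name is invented:

```python
def find_fifth_address(blocks, second_addr):
    """Return the index of the last occupied block in the run starting at
    `second_addr` (the "fifth cache address"): the next block after it is
    free. `blocks` models the second cache region; None marks a free block."""
    i = second_addr
    while i + 1 < len(blocks) and blocks[i + 1] is not None:
        i += 1
    return i
```

In the Fig. 10 example, scanning from BLK n over the occupied BLK n+1 and BLK n+2 stops at BLK n+2 because BLK n+3 is free.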
Optionally, the second to fifth cache addresses include N cache blocks, where the first cache block is cached at the second cache address, and the ith cache block is any one of the first to nth cache blocks. As shown in fig. 11, the above process of merging the files cached in the second cache address to the fifth cache address may include the following steps:
s2041: detecting whether the file cluster numbers in the ith cache block and the (i + 1) th cache block are adjacent, if not, executing S2042; if yes, go to S2043.
S2042: and merging the first cache block to the file in the ith cache block, and writing the merged file into the storage device.
S2043: detecting whether the (i + 1) th cache block is fully written, if not, executing S2045; if yes, go to S2044.
S2044: detecting whether the (i+1)th cache block is the Nth cache block, if so, executing S2045; if not, executing S2046.
S2045: and merging the first cache block to the file in the (i + 1) th cache block, and writing the merged file into the storage device.
S2046: the i +1 th cache block is used as the ith cache block, and the i +2 th cache block is used as the i +1 th cache block.
Specifically, when it is detected that the (i+1)th cache block is fully written and is not the Nth cache block, the (i+1)th cache block is taken as the ith cache block, the (i+2)th cache block is taken as the (i+1)th cache block, and the steps S2041 to S2046 are repeated.
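Under one consistent reading of steps S2041 to S2046 (the scan advances past a fully written non-final block, and stops at a non-adjacent, partially written, or final block), the loop can be sketched as follows. The function name and the tuple representation of a block are assumptions:

```python
def merge_range(blocks):
    """Return the index j such that blocks[0..j] should be merged and
    written to storage, following steps S2041-S2046 (illustrative).

    `blocks` is a list of (cluster_number, is_full) tuples; blocks[0]
    is the first cache block at the second cache address."""
    i = 0
    n = len(blocks)
    while True:
        if i + 1 >= n:                      # i is already the last block
            return i
        cur_cluster, _ = blocks[i]
        nxt_cluster, nxt_full = blocks[i + 1]
        if nxt_cluster != cur_cluster + 1:  # S2041: clusters not adjacent
            return i                        # S2042: merge blocks 1..i
        if not nxt_full:                    # S2043: block i+1 not fully written
            return i + 1                    # S2045: merge blocks 1..i+1
        if i + 1 == n - 1:                  # S2044: block i+1 is the Nth block
            return i + 1                    # S2045: merge blocks 1..i+1
        i += 1                              # S2046: advance and repeat
```

This reproduces the Fig. 12 walkthrough: non-adjacent clusters stop the merge at the current block, a partially written block ends the merge including itself, and three adjacent blocks are merged whether or not the last one is full.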
By merging the files with adjacent cluster numbers at adjacent cache addresses and writing the merged file into the storage device, the number of IO operations on the storage device is reduced, the IO bandwidth of a single request is increased, and the storage device's bandwidth is not wasted.
For example, as shown in fig. 12a to fig. 12e, when it is detected that the blocks from the second cache address BLK N through the cache address BLK N+2 all contain files, the cache address BLK N+2 is the fifth cache address, the first cache block is cached at the second cache address, the Nth cache block is cached at the fifth cache address, and N is three; the files from the second cache address BLK N through the cache address BLK N+2 are then merged. First, it is detected whether the file cluster numbers in the first cache block and the second cache block are adjacent. If they are not adjacent, as shown in fig. 12a, the file in the first cache block is written into the storage device and the space of the first cache block is released. If they are adjacent, it is detected whether the second cache block is fully written. If the second cache block is not fully written, as shown in fig. 12b, the files from the first cache block through the second cache block are merged, the merged file is written into the storage device, and the space from the first cache block through the second cache block is released. If the second cache block is fully written, it is detected whether the cluster numbers of the files in the second cache block and the third cache block are adjacent. If they are not adjacent, as shown in fig. 12c, the files from the first cache block through the second cache block are merged, the merged file is written into the storage device, and the space from the first cache block through the second cache block is released. If they are adjacent, it is detected whether the third cache block is fully written.
If it is detected that the third cache block is not fully written, as shown in fig. 12d, the files from the first cache block through the third cache block are merged, the merged file is written into the storage device, and the space from the first cache block through the third cache block is released. If it is detected that the third cache block is fully written, as shown in fig. 12e, the files from the first cache block through the third cache block are likewise merged, the merged file is written into the storage device, and the space from the first cache block through the third cache block is released.
S205, detecting whether the cluster number N is smaller than the minimum cluster number in the second cache region, and if not, caching the target file to a sixth cache address; if yes, caching the target file to the first cache region.
Specifically, when detecting that no file with cluster number N-1 exists in the second cache region, the cache management module 502 detects whether the cluster number N is smaller than the minimum cluster number in the second cache region. At this point, the fast-slow identification module 503 obtains the cluster number N of the target file, detects whether the process writing the target file is a slow-write process, and sends the detection result to the cache management module 502. If the cluster number N is smaller than the minimum cluster number in the second cache region, the fast-slow identification module 503 identifies the process as a slow-write process and sends the result to the cache management module 502, which then allocates the first cache region for the target file and caches the target file there. If the cluster number N is greater than the minimum cluster number in the second cache region, the fast-slow identification module 503 identifies the process as a fast-write process and sends the result to the cache management module 502, which then allocates the second cache region for the target file, caches the target file at the sixth cache address, and records the cluster number of the target file in the second cache region, where the sixth cache address is a free cache address in the second cache region.
It should be noted that, because the file system allocates cluster addresses on the storage device in ascending order when storing file data, the situation where the cluster number N is smaller than the minimum cluster number in the second cache region arises when the cache address corresponding to the target file has been occupied by another process. For example, fig. 13 is a schematic diagram of a cache address being occupied according to an embodiment of the present application. As shown in fig. 13, assume the second cache region contains 5 cache blocks BLK 1 to BLK 5. During the first round of caching, the processes of the 5 file write requests with cluster numbers 1 to 5 cached in the second cache region are task1 to task5, and the process task3, which issued the write request for the file with cluster number 3, sleeps due to system scheduling, so that only part of that file's data is written into cache block BLK 3. When the other cache blocks are full of data, or when the free space of the cache blocks in the second cache region falls below a preset value, the cache management module 502 writes all or part of the file data in the second cache region into the storage device and releases all or part of the space of the second cache region to cache subsequent files.
After all or part of the space of the second cache region is released, the second round of caching begins. When the file with cluster number 6 starts to be cached in the second cache region, the cluster numbers recorded in the second cache region begin to be updated; by the time the file with cluster number 9 is cached, the recorded cluster numbers have been updated to 6 through 9, and the cache block BLK 3, which corresponded to process task3 in the first round, is now occupied by process task8. If process task3 is then woken up and continues to write the file data with cluster number 3 that was not written during the first round, the cache management module 502 detects whether a file with cluster number 3 exists in the second cache region. Because the cluster numbers in the second cache region have been updated to 6 through 9, neither a file with cluster number 3 nor one with cluster number 2 exists there, and cluster number 3 is smaller than the minimum cluster number 6 in the second cache region. The fast-slow identification module therefore identifies process task3, which is writing the file with cluster number 3, as a slow-write process, and the file with cluster number 3 is cached in the first cache region. This avoids affecting the write speed of other processes and improves the average write speed, thereby improving system performance.
Fig. 14 is a flowchart of the fast-slow identification algorithm in the fast-slow identification module 503. As shown in fig. 14, the algorithm includes:
s2051: and acquiring a target file cluster number N.
S2052: detecting whether the cluster number N is smaller than the minimum cluster number in the second cache area, if not, executing S2053; if yes, go to S2054.
S2053: and identifying the process in which the target file is positioned as a fast writing process.
S2054: and identifying the process in which the target file is positioned as a slow writing process.
Optionally, when the cache management module 502 detects that no file with cluster number N exists in the second cache region, it may instead first execute S205, that is, detect whether the cluster number N is smaller than the minimum cluster number in the second cache region. When the cluster number N is smaller than the minimum cluster number in the second cache region, the cache management module 502 caches the target file in the first cache region. When the cluster number N is greater than the minimum cluster number in the second cache region, S204 is executed, that is, it is detected whether a file with cluster number N-1 exists in the second cache region. If no file with cluster number N-1 exists, the cache management module 502 caches the target file at the sixth cache address and records the cluster number of the target file in the second cache region. If a file with cluster number N-1 exists, the cache address of that file is looked up and the target file is cached at the second cache address.
By associating the cluster number of the target file with its cache address, the cluster numbers of the files at adjacent cache addresses in the cache are consecutive; that is, the physical addresses on the storage device of the files cached at adjacent cache addresses are consecutive. Therefore, when the system writes the cached data to the storage device, the file data at adjacent cache addresses can be merged and written together, reducing the number of IO requests to the storage device, increasing the bandwidth of a single IO request, and avoiding waste of the storage device's bandwidth. Meanwhile, target files for read requests and write requests are cached in separate regions, and target files corresponding to fast-write processes and slow-write processes are likewise cached separately, which improves the average write speed. The sizes of the two cache regions can also be set adaptively, avoiding waste of cache memory and improving system performance.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of combined actions. Those skilled in the art should understand, however, that the present invention is not limited by the described sequence of actions, that other reasonable combinations of steps conceived from the above description also fall within the scope of the present invention, and that the embodiments described in this specification are preferred embodiments whose actions are not all necessarily required by the present invention.
The cache management method provided by the embodiment of the present application is described in detail above with reference to fig. 1 to 14, and the cache device and the electronic device provided by the embodiment of the present application are described below with reference to fig. 15 to 16.
Fig. 15 is a schematic diagram of a caching apparatus 1500 according to an embodiment of the present application, where the caching apparatus 1500 includes a communication unit 1501 and a processing unit 1502, where,
a communication unit 1501 receives a read request for a target file and receives a write request for the target file.
A processing unit 1502, configured to respond to a read request for a target file, allocate a first cache area to the target file, where the first cache area is a cache area corresponding to the read target file; and responding to the write request of the target file, and allocating a second cache region for the target file.
The communication unit 1501 is further configured to acquire the identifier of the target file.
The processing unit 1502 is further configured to write the target file into a target cache address according to the target file identifier, where the target cache address is an address in the second cache region.
The identifier of the target file is an identification number of the target file and is used to indicate the storage address of the target file in the storage device.
In a possible implementation manner, the processing unit 1502 is further configured to: when the number of read requests obtained within a preset time period is smaller than the number of write requests, allocate the first cache region with a capacity smaller than that of the second cache region; or, when the number of read requests obtained within the preset time period is greater than the number of write requests, allocate the first cache region with a capacity greater than that of the second cache region.
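The adaptive sizing of the two regions can be sketched as follows. The 1:3 split and all names are illustrative assumptions; the embodiment only requires that the busier request type receives the larger region:

```python
# Sketch: size the read (first) and write (second) cache regions from
# the request mix observed within a preset time window.

def split_cache(total_blocks, reads, writes):
    """Returns (read_region_blocks, write_region_blocks); the more
    frequent request type gets the larger share of the cache."""
    if reads == writes:
        read_share = total_blocks // 2                  # balanced load
    elif reads < writes:
        read_share = total_blocks // 4                  # read region smaller
    else:
        read_share = total_blocks - total_blocks // 4   # read region larger
    return read_share, total_blocks - read_share
```

For instance, with 100 cache blocks and a window that saw 10 reads against 90 writes, the read region gets 25 blocks and the write region 75.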
In a specific implementation manner, the processing unit 1502 is specifically configured to: when the identifier of the target file exists in the second cache region, cache the target file to a first cache address, where the first cache address is the address corresponding to the identifier of the target file in the second cache region.
In a specific implementation manner, the processing unit 1502 is specifically configured to: when detecting that the identifier of the target file does not exist in the second cache region but a first identifier exists in the second cache region, where the first identifier is the adjacent previous identifier of the target file, cache the target file to a second cache address, where the second cache address is the adjacent next block address of a third cache address, and the file corresponding to the first identifier is cached at the third cache address.
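The address-selection logic above can be sketched in a few lines. The mapping `addr_of` (identifier to cache block address) and the `next_free` fallback are illustrative assumptions:

```python
# Sketch: pick the cache address for a target file so that files with
# adjacent identifiers end up in adjacent cache blocks.

def pick_cache_address(target_id, addr_of, next_free):
    """addr_of: dict mapping file identifier -> cache block address.
    next_free: a free block address used when no neighbour is cached."""
    if target_id in addr_of:
        return addr_of[target_id]        # identifier already cached: reuse its address
    prev = target_id - 1
    if prev in addr_of:
        return addr_of[prev] + 1         # neighbour cached: take the adjacent next block
    return next_free                     # otherwise fall back to a free block
```

Keeping neighbouring identifiers in neighbouring blocks is what later allows several blocks to be flushed to the storage device in one merged write.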
In a possible implementation manner, before caching the target file to the second cache address, the processing unit 1502 is further configured to: when detecting that the second cache address is occupied by a process, write the file cached at the second cache address into the storage device.
In a specific implementation manner, the processing unit 1502 is further configured to: when detecting that the second cache address is not fully written by the process, write the file cached at the second cache address into the storage device; when detecting that the second cache address is fully written by the process and that a fourth cache address is an idle address, where the fourth cache address is the adjacent next block address of the second cache address, write the file cached at the second cache address into the storage device; and when detecting that the second cache address is fully written by the process and that none of the addresses from the second cache address to a fifth cache address is an idle address, where the adjacent next block address of the fifth cache address is an idle address, merge the files from the second cache address to the fifth cache address and write the merged file into the storage device.
In a specific implementation manner, the second cache address to the fifth cache address include N cache blocks, and the processing unit 1502 is specifically configured to perform the following operations on the first to the Nth cache blocks, where the first cache block is cached at the second cache address and the ith cache block is any one of the first to the Nth cache blocks: detect whether the file identifiers in the ith cache block and the (i+1)th cache block are adjacent; when detecting that the file identifiers in the ith cache block and the (i+1)th cache block are not adjacent, merge the files from the first cache block to the ith cache block and write the merged file into the storage device; when detecting that the file identifiers in the ith cache block and the (i+1)th cache block are adjacent, detect whether the (i+1)th cache block is fully written; when detecting that the (i+1)th cache block is not fully written, merge the files from the first cache block to the (i+1)th cache block and write the merged file into the storage device; and when detecting that the (i+1)th cache block is fully written, take the (i+1)th cache block as the ith cache block, take the (i+2)th cache block as the (i+1)th cache block, and again detect whether the file identifiers in the ith cache block and the (i+1)th cache block are adjacent.
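The block-scan procedure above can be sketched as follows. The block structure (a dict with an identifier and a fullness flag) is an illustrative assumption; the sketch only computes how many leading blocks should be merged and flushed in one write:

```python
# Sketch: starting from the block at the second cache address, extend the
# merge range while identifiers stay adjacent and blocks are fully
# written, then flush the whole range as one write request.

def merge_range(blocks):
    """blocks[i] = {'ident': file identifier, 'full': bool}.
    Returns the count of leading blocks to merge and write together."""
    i = 0
    while i + 1 < len(blocks):
        nxt = blocks[i + 1]
        if nxt['ident'] != blocks[i]['ident'] + 1:
            return i + 1      # identifiers not adjacent: merge blocks 1..i
        if not nxt['full']:
            return i + 2      # adjacent but not full: include it and stop
        i += 1                # adjacent and full: keep scanning
    return i + 1              # reached the last block: merge everything
```

A fully written run of adjacent identifiers is thus extended until the first gap or the first partially filled block, which matches the three stopping conditions in the embodiment.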
In a specific implementation manner, the processing unit 1502 is specifically configured to: when detecting that the identifier of the target file is smaller than a second identifier, cache the target file into the first cache region, where the second identifier is the smallest identifier among all file identifiers in the second cache region; or, when detecting that the identifier of the target file is greater than the second identifier, cache the target file to a sixth cache address, where the sixth cache address is an idle address in the second cache region.
In a possible implementation manner, the processing unit 1502 is further configured to: when detecting that the identifier of the target file is smaller than a second identifier, identify the process in which the target file is located as a slow-writing process; or, when detecting that the identifier of the target file is greater than the second identifier, identify the process in which the target file is located as a fast-writing process, where the second identifier is the smallest identifier among all file identifiers in the second cache region.
In a possible implementation manner, the processing unit 1502 is specifically configured to: when the process in which the target file is located is a slow-writing process, allocate the first cache region for the target file; or, when the process in which the target file is located is a fast-writing process, allocate the second cache region for the target file.
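The slow/fast classification and the resulting region choice can be sketched together. Function names are illustrative; only the comparison against the smallest identifier in the write region comes from the embodiment:

```python
# Sketch: a write whose file identifier falls below the smallest
# identifier currently held in the write (second) cache region lags
# behind the others, so its process is treated as slow-writing and its
# data is kept in the first region instead.

def classify(target_id, ids_in_write_region):
    second_id = min(ids_in_write_region)   # smallest cached identifier
    return 'slow' if target_id < second_id else 'fast'

def choose_region(target_id, ids_in_write_region):
    if classify(target_id, ids_in_write_region) == 'slow':
        return 'first'                     # slow writers go to the first region
    return 'second'                        # fast writers stay in the second region
```

Separating the two keeps slow writers from fragmenting the contiguous identifier runs in the second region, which is what makes the merged flushes possible.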
It should be understood that the caching apparatus 1500 according to the embodiment of the present application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the cache management methods shown in fig. 1 to 14 are implemented by software, the caching apparatus 1500 and its modules may also be software modules.
Fig. 16 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 1600 includes a processor 1610, a communication interface 1620, and a memory 1630, which are connected to each other through a bus 1640. The memory 1630 stores program code, and the processor 1610 may call the program code stored in the memory 1630 to perform the following operations:
receiving a reading request for the target file, responding to the reading request for the target file, and allocating a first cache region for the target file, wherein the first cache region is a cache region corresponding to the reading target file. Receiving a write-in request for a target file, responding to the write-in request for the target file, allocating a second cache region for the target file, and acquiring an identifier of the target file; and writing the target file into a target cache address according to the target file identifier, wherein the target cache address is an address in the second cache region.
In the embodiment of the present application, the processor 1610 may have various specific implementations. For example, the processor 1610 may be any one or a combination of a CPU, a GPU, a TPU, or an NPU, and may be a single-core or multi-core processor. The processor 1610 may also be a combination of a CPU (or GPU, TPU, or NPU) and a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor 1610 may also be implemented using a logic device with built-in processing logic, such as an FPGA or a digital signal processor (DSP).
Communication interface 1620 may be a wired interface or a wireless interface for communicating with other modules or devices. The wired interface may be an ethernet interface, a Controller Area Network (CAN) interface, or a Local Interconnect Network (LIN) interface, and the wireless interface may be a cellular network interface or a wireless lan interface.
The memory 1630 may be a non-volatile memory, such as a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The memory 1630 may also be a volatile memory, such as a random access memory (RAM), which acts as an external cache.
The memory 1630 may also be used to store the instructions and data that the processor 1610 invokes to implement the operations described above for the processing unit 1502. Moreover, the electronic device 1600 may contain more or fewer components than shown in fig. 16, or have a different arrangement of components.
The bus 1640 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 16, but this does not mean that there is only one bus or one type of bus.
Optionally, the electronic device 1600 may further include an input/output interface 1650, where the input/output interface 1650 is connected to an input/output device for receiving input information and outputting an operation result.
It should be understood that the electronic device 1600 in the embodiment of the present application may correspond to the cache apparatus 1500 in the above embodiment, and details are not described herein again.
The embodiments of the present application further provide a non-transitory computer storage medium, where instructions are stored in the computer storage medium, and when the instructions are run on a processor, the method steps in the foregoing method embodiments may be implemented, and specific implementation of the processor of the computer storage medium in executing the method steps may refer to specific operations in the foregoing method embodiments, and details are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium, or a semiconductor medium, and the semiconductor medium may be a solid-state drive.
The foregoing is only illustrative of the present application. Those skilled in the art can conceive of changes or substitutions based on the specific embodiments provided in the present application, and all such changes or substitutions are intended to be included within the scope of the present application.

Claims (13)

1. A method for cache management, the method comprising:
receiving a reading request of a target file;
responding to the reading request, and allocating a first cache region for the target file, wherein the first cache region is a cache region corresponding to the reading of the target file;
receiving a write request to the target file;
responding to the write request, allocating a second cache region for the target file, and acquiring the identifier of the target file;
and caching the target file to a target cache address according to the target file identifier, wherein the target cache address is an address in the second cache region.
2. The method of claim 1, wherein the target file identifier is an identification number of the target file, and wherein the target file identifier is used to indicate a storage address of the target file in a storage device.
3. The method according to claim 1 or 2, comprising:
within a preset time period, when the number of times read requests are obtained is smaller than the number of times write requests are obtained, a capacity allocated to the first cache region is smaller than a capacity of the second cache region.
4. The method according to any one of claims 1 to 3, wherein the caching the target file to a target cache address according to the target file identifier comprises:
and caching the target file to a first cache address when the second cache region is detected to have the identifier of the target file, wherein the first cache address is an address corresponding to the identifier of the target file in the second cache region.
5. The method according to any one of claims 1 to 3, wherein the caching the target file to a target cache address according to the target file identifier comprises:
detecting that the identifier of the target file does not exist in the second cache region, and detecting that a first identifier exists in the second cache region, wherein the first identifier is an adjacent previous identifier of the target file;
and caching the target file to a second cache address, wherein the second cache address is an adjacent next block address of a third cache address, and the file corresponding to the first identifier is cached in the third cache address.
6. The method of claim 5, further comprising, prior to said caching said target file to a second cache address:
and when detecting that the second cache address is occupied by a process, writing the files cached in the second cache address into a storage device.
7. The method of claim 6, further comprising:
when detecting that the second cache address is not fully written by the process, writing the file cached at the second cache address into the storage device; when detecting that the second cache address is fully written by the process and a fourth cache address is an idle address, the fourth cache address being the adjacent next block address of the second cache address, writing the file cached at the second cache address into the storage device; and
when detecting that the second cache address is fully written by the process and none of the addresses from the second cache address to a fifth cache address is an idle address, the adjacent next block address of the fifth cache address being an idle address, merging the files from the second cache address to the fifth cache address and writing the merged file into the storage device.
8. The method of claim 7, wherein merging the files from the second cache address to the fifth cache address and writing the files to the storage device comprises:
the second cache address to the fifth cache address comprise N cache blocks, and the following operations are performed on the first to the Nth cache blocks, wherein the first cache block is cached at the second cache address and the ith cache block is any one of the first to the Nth cache blocks:
detecting whether the file identifiers in the ith cache block and the (i+1)th cache block are adjacent;
when detecting that the file identifiers in the ith cache block and the (i+1)th cache block are not adjacent, merging the files from the first cache block to the ith cache block, and writing the merged file into the storage device;
when detecting that the file identifiers in the ith cache block and the (i+1)th cache block are adjacent, detecting whether the (i+1)th cache block is fully written;
when detecting that the (i+1)th cache block is not fully written, merging the files from the first cache block to the (i+1)th cache block, and writing the merged file into the storage device;
and when detecting that the (i+1)th cache block is fully written, taking the (i+1)th cache block as the ith cache block, taking the (i+2)th cache block as the (i+1)th cache block, and detecting whether the file identifiers in the ith cache block and the (i+1)th cache block are adjacent.
9. The method of claim 5, further comprising:
when detecting that the identifier of the target file is smaller than a second identifier, caching the target file into the first cache region, wherein the second identifier is the smallest identifier among all file identifiers in the second cache region; or,
and caching the target file to a sixth cache address when the fact that the identification of the target file is larger than a second identification is detected, wherein the sixth cache address is an idle address in the second cache region, and the second identification is the minimum identification in all file identifications in the second cache region.
10. The method according to any one of claims 1 to 9, further comprising:
when it is detected that the identifier of the target file is smaller than a second identifier, identifying the process in which the target file is located as a slow-writing process, wherein the second identifier is the smallest identifier among all file identifiers in the second cache region; or, when it is detected that the identifier of the target file is greater than the second identifier, identifying the process in which the target file is located as a fast-writing process.
11. The method of claim 10, wherein the method comprises:
when the process of the target file is the slow writing process, a first cache region is allocated to the target file; or when the process of the target file is the fast writing process, allocating a second cache region for the target file.
12. An electronic device, comprising: a processor and memory, the processor executing code in the memory to perform the method of any of claims 1 to 11.
13. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 11.
CN202110252260.8A 2021-03-08 2021-03-08 Cache management method, device and related equipment Pending CN115048035A (en)

Publications (1)

Publication Number Publication Date
CN115048035A true CN115048035A (en) 2022-09-13



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination