CN117827117A - Method and device for reducing disk I/O and electronic equipment - Google Patents

Method and device for reducing disk I/O and electronic equipment

Info

Publication number
CN117827117A
CN117827117A (application number CN202410030126.7A)
Authority
CN
China
Prior art keywords
file
written
preset
block
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410030126.7A
Other languages
Chinese (zh)
Inventor
简丽荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Cool Data Technology Co ltd
Original Assignee
Beijing Cool Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Cool Data Technology Co ltd
Priority to CN202410030126.7A
Publication of CN117827117A
Legal status: Pending (current)

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a method, an apparatus and an electronic device for reducing disk I/O. The method includes: receiving a write request; processing the file to be processed corresponding to the write request to obtain a file to be written; and writing the data to be written corresponding to the write request into a preset file block in memory, where the data to be written contains metadata corresponding to the file to be written, and the metadata includes the storage information of the file to be written within the file block. By merging files in this way, fragmentation in the file system is reduced, which saves storage space, and disk addressing time is reduced, which improves the file reading speed.

Description

Method and device for reducing disk I/O and electronic equipment
Technical Field
The present disclosure relates to the field of computers, and in particular, to a method, an apparatus, and an electronic device for reducing disk I/O.
Background
Currently, HashData computing clusters deployed in production environments use the concept of shards: each segment is responsible for a number of shards, and when more shards are configured, each segment is correspondingly responsible for more of them. During a single insert operation the segments then generate many small files, and many files of roughly 1 KB can be found in the production environment. As more and more shards are configured, the number of small files grows accordingly, and especially under high concurrency the disk I/O during insert and select operations becomes high, which degrades performance. In an actual production environment, the segment cache was found to contain four million small files, and with two hundred concurrent read and write requests the disk I/O was essentially saturated.
Disclosure of Invention
In view of the foregoing, the present application proposes a method for reducing disk I/O to solve the problems presented in the background art.
According to an aspect of the present application, there is provided a method for reducing disk I/O, including:
receiving a write request;
processing the file to be processed corresponding to the writing request to obtain the file to be written;
writing data to be written corresponding to the writing request in a preset file block in a memory;
the data to be written contains metadata corresponding to the file to be written, and the metadata comprises storage information of the file to be written in the file block.
As an optional implementation manner of the present application, optionally, processing the file to be processed corresponding to the write request to obtain the file to be written includes:
judging whether the size of the file to be processed is smaller than a preset value or not according to the size of the file to be processed;
and if the file to be processed is smaller than a preset value, judging that the file to be processed is a file to be written.
As an optional implementation manner of the present application, optionally, writing the data to be written corresponding to the writing request in the preset file block in the memory includes:
judging whether a preset file block exists in the memory or not;
if the preset file block does not exist, applying for the preset file block to the memory.
As an optional implementation manner of the present application, optionally, writing the data to be written corresponding to the write request in a preset file block in the memory, further includes:
if a preset file block exists in the memory, comparing the size of the residual space of the preset file block with the size of the file to be written;
and if the size of the residual space of the preset file block is smaller than the size of the file to be written, applying for a new preset file block to the memory.
As an optional embodiment of the present application, optionally, applying for a new preset file block to the memory includes:
checking whether the number of the file blocks in the memory reaches a preset number;
and deleting the file blocks which are not in use if the number of the file blocks reaches the preset number.
As an optional embodiment of the present application, optionally, further comprising:
the data to be written also comprises an index relation of the file to be written, wherein the index relation comprises a preset file block to which the file to be written belongs;
and generating index information of the preset file block according to the affiliated index relation.
As an optional embodiment of the present application, optionally, further comprising:
receiving a read request;
searching the file to be read corresponding to the reading request in the index information;
when index information of the file to be read exists, positioning a preset file block to which the file to be read belongs according to the index information;
and obtaining the data to be read corresponding to the file to be read according to the metadata of the file to be read in the preset file block to which the file to be read belongs.
As an optional embodiment of the present application, optionally, further comprising:
receiving a file deleting request;
searching the file to be deleted corresponding to the file deleting request in the index information;
when index information of the file to be deleted exists, positioning a preset file block to which the file to be deleted belongs according to the index information;
marking a designated position in metadata of the file to be deleted in a preset file block of the file to be deleted;
and according to the mark, deleting the file to be deleted.
According to a second aspect of the present application, there is provided an apparatus for reducing disk I/O, including:
the receiving request module is used for receiving a writing request;
the processing module is used for processing the file to be processed corresponding to the writing request to obtain the file to be written;
the writing data module is used for writing the data to be written corresponding to the writing request in a preset file block in the memory;
the metadata module is used for containing metadata corresponding to the file to be written in the data to be written in, and the metadata comprises storage information of the file to be written in the file block.
According to a third aspect of the present application, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement any one of the above methods for reducing disk I/O when executing the executable instructions.
The beneficial effects of this application:
according to the method and the device for writing the file to be written, the writing request is received, the file to be written is obtained by processing the file to be processed corresponding to the writing request, the data to be written corresponding to the writing request is written in a preset file block in a memory, metadata corresponding to the file to be written is contained in the data to be written, and the metadata comprises storage information of the file to be written in the file block. The file merging can reduce fragmentation in a file system, thereby saving storage space, and reducing disk addressing time, thereby improving file reading speed.
Other features and aspects of the present application will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
FIG. 1 illustrates a flow chart of a method of reducing disk I/O according to an embodiment of the present application;
FIG. 2 illustrates a flow chart of writing files in a method of reducing disk I/O in an embodiment of the present application;
FIG. 3 illustrates a flow chart of reading a file in a method of reducing disk I/O in an embodiment of the present application;
FIG. 4 is a flowchart of deleting a file in a method for reducing disk I/O according to an embodiment of the present application;
FIG. 5 illustrates a flowchart of a restart recovery in a method of reducing disk I/O in an embodiment of the present application;
FIG. 6 illustrates a block diagram of an apparatus for disk I/O reduction in accordance with an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
It should be understood, however, that the terms "center," "longitudinal," "transverse," "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counter-clockwise," "axial," "radial," "circumferential," and the like indicate or are based on the orientation or positional relationship shown in the drawings, and are merely for convenience of description or to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and therefore should not be construed as limiting the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
Example 1
FIG. 1 illustrates a flow chart of a method of reducing disk I/O according to an embodiment of the present application. Disk I/O refers to the input and output between the disk and the memory. When an operating system stores files, it does so in units of clusters or blocks, so if many small files are scattered over different clusters or blocks, storage space may be wasted. By merging many small files through the process described here, fragmentation in the file system can be reduced and disk addressing time shortened, thereby saving space and improving the file reading speed. As shown in fig. 1, the flow includes:
s100, receiving a write-in request;
in this embodiment, the files generated by the segments responsible for the shards during insert or select operations on the computer need to be stored, and the memory receives a write request initiated by the controller to store such a file.
S200, processing the file to be processed corresponding to the writing request to obtain the file to be written;
after the file generated by the segment responsible during the insert operation or the select operation is needed to be managed, in this embodiment, the number of files to be processed corresponding to the write request is very large, the size of the files to be processed needs to be determined, the files to be processed are divided into small files and other files according to a preset determination rule, in this embodiment, for the files to be processed determined to be small files, that is, the files to be written to, when the write operation is performed on the files to be written to, a corresponding data structure is created, and a background thread is notified to start working, wherein mechanisms such as a lock, a read-write lock, a CAS are used to ensure the accuracy and consistency of the concurrent multithreading.
S300, writing data to be written corresponding to the writing request into a preset file block in a memory;
in this embodiment, the preset file block is an 8 MB file block used for storing the file data and metadata information of small files; when a small file to be written needs to be stored, the data to be written corresponding to that file is written into the 8 MB file block.
S400, the data to be written contains metadata corresponding to the file to be written, wherein the metadata comprises storage information of the file to be written in the file block.
The metadata information includes a file length, a start position of the file in a large block, a position of a file bitmap, a file name, a check value, an MD5 value, and the like.
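As a purely illustrative sketch of the data structures suggested by steps S300 and S400, the block and metadata layout could look as follows in Python; the names BLOCK_SIZE, FileMeta and FileBlock and the exact field layout are assumptions of this sketch, not taken from the embodiment:

```python
import hashlib
from dataclasses import dataclass, field

BLOCK_SIZE = 8 * 1024 * 1024  # 8 MB preset file block

@dataclass
class FileMeta:
    """Metadata kept for each small file inside a block (cf. step S400)."""
    name: str
    offset: int        # start position of the file inside the block
    length: int        # file length in bytes
    bitmap_index: int  # position of the file's validity bit in the block bitmap
    md5: str           # check value used to verify integrity on read

@dataclass
class FileBlock:
    """A preset 8 MB file block holding small-file data plus their metadata."""
    block_id: int
    data: bytearray = field(default_factory=lambda: bytearray(BLOCK_SIZE))
    used: int = 0                                # bytes already written
    metas: dict = field(default_factory=dict)    # file name -> FileMeta
    bitmap: list = field(default_factory=list)   # one validity bit per stored file

    def remaining(self) -> int:
        return BLOCK_SIZE - self.used

    def write(self, name: str, payload: bytes) -> FileMeta:
        if len(payload) > self.remaining():
            raise ValueError("not enough space in this block")
        offset = self.used
        self.data[offset:offset + len(payload)] = payload
        self.used += len(payload)
        self.bitmap.append(True)  # the file stays valid until a delete clears the bit
        meta = FileMeta(name, offset, len(payload),
                        len(self.bitmap) - 1, hashlib.md5(payload).hexdigest())
        self.metas[name] = meta
        return meta

# Example: writing one small file into a fresh block.
blk = FileBlock(block_id=0)
meta = blk.write("small_file_1", b"hello")
```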
As an optional implementation manner of the present application, optionally, in step S200, processing the file to be processed corresponding to the write request to obtain the file to be written includes:
s201, judging whether the size of the file to be processed is smaller than a preset value or not according to the size of the file to be processed;
in a computer, the process of copying files within the computer to a hard disk may be understood as a write operation.
In this embodiment, the file to be written is a small file to be written, and the size corresponding to the small file is set to be less than or equal to 16KB; many files to be processed need to be cached in the computer, and in order to manage small files, a size range for judging the small files is formulated.
S202, if the file to be processed is smaller than a preset value, judging that the file to be processed is a file to be written.
After the file to be processed is determined to be a small file, the small file is written into the 8 MB file block, that is, the file data corresponding to the small file is stored in that block.
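A minimal sketch of the small-file decision of steps S201 and S202, assuming the 16 KB preset value mentioned in this embodiment (the function name is hypothetical):

```python
SMALL_FILE_LIMIT = 16 * 1024  # preset value in this embodiment: at most 16 KB

def is_file_to_be_written(size_in_bytes: int, limit: int = SMALL_FILE_LIMIT) -> bool:
    """Return True if the pending file counts as a small file to be written
    into a preset file block; larger files follow the ordinary write path."""
    return size_in_bytes <= limit

# Example: a 4 KB fragment produced by a segment qualifies, a 2 MB file does not.
assert is_file_to_be_written(4 * 1024)
assert not is_file_to_be_written(2 * 1024 * 1024)
```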
As an optional embodiment of the present application, optionally, in step S300, writing the data to be written corresponding to the write request in the preset file block in the memory includes:
s301, judging whether a preset file block exists in a memory or not;
in this embodiment, referring to fig. 2, when writing a small file by using a file block, it is first determined whether there is a file block of 8MB currently in the memory.
S302, if the preset file block does not exist, applying for the preset file block to the memory.
When there is no 8 MB file block in the memory, a new 8 MB file block is applied for from the memory so that the small file can be written.
As an optional implementation manner of the present application, optionally, writing the data to be written corresponding to the write request in a preset file block in the memory, further includes:
if a preset file block exists in the memory, comparing the size of the residual space of the preset file block with the size of the file to be written;
before writing a small file, other small files may be written in the file block of 8MB, but the space of the file block is remained, at this time, the file block is the current file block, and the size of the remained space of the current file block is compared with the size of the small file.
And if the size of the residual space of the preset file block is smaller than the size of the file to be written, applying for a new preset file block to the memory.
When the size of the remaining space of the current file block is smaller than the size of the small file, the remaining space cannot hold the small file completely; because a small file cannot be stored across blocks, the current file block is closed and a new 8 MB file block is applied for from the memory.
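As an illustrative sketch of this block-selection logic (steps S301/S302 plus the remaining-space check), simplified to a standalone Block class whose name and fields are assumptions of this sketch:

```python
BLOCK_SIZE = 8 * 1024 * 1024  # 8 MB preset file block

class Block:
    def __init__(self, block_id: int):
        self.block_id = block_id
        self.used = 0              # bytes already written into this block

    def remaining(self) -> int:
        return BLOCK_SIZE - self.used

def pick_target_block(blocks: list, file_size: int) -> Block:
    """Return a block that can hold the incoming small file: allocate the first
    block if none exists, or close the current block and allocate a new one
    when its remaining space is too small, since a small file is never stored
    across blocks."""
    if not blocks:                       # no preset file block in memory yet
        blocks.append(Block(block_id=0))
    current = blocks[-1]                 # the block currently being filled
    if current.remaining() < file_size:  # cannot store the file across blocks
        blocks.append(Block(block_id=current.block_id + 1))
        current = blocks[-1]
    return current

# Example: with no block in memory, writing a 10 KB file allocates block 0.
blocks = []
target = pick_target_block(blocks, 10 * 1024)
target.used += 10 * 1024
assert target.block_id == 0 and len(blocks) == 1
```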
As an optional embodiment of the present application, optionally, applying for a new preset file block to the memory includes:
checking whether the number of the file blocks in the memory reaches a preset number;
the number of file blocks in the memory is not infinite, and for convenience in managing small files, a portion of the memory is used to provide file blocks of equal size for storing small files, and in this embodiment, the number threshold set for the number of file blocks is 16, and then the number of file blocks in the memory cannot exceed 16.
And deleting the file blocks which are not in use if the number of the file blocks reaches the preset number.
When a new file block needs to be applied for from the memory and 16 file blocks already exist, the existing blocks are checked and the blocks no longer used for storing small files are removed so that a new block can be created. In one possible scenario, some small files have already been cleaned up, so they are no longer stored in their file blocks and those blocks become idle. Evicting blocks in this way preserves the time order of data writes, which makes it easier to manage the creation time of the file blocks and the storage information of the small files inside them. When the memory is insufficient, the least recently used data in the memory is evicted by the LRU algorithm; in this embodiment, the unused file blocks are evicted.
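An illustrative sketch of the 16-block cap and the eviction of idle blocks; the LRU fallback for the case where every block still holds live files is added here only for illustration, and the structure names are assumptions of this sketch:

```python
from collections import OrderedDict

MAX_BLOCKS = 16  # threshold for the number of file blocks in this embodiment

def evict_if_full(blocks: "OrderedDict[int, dict]") -> None:
    """Before a new block is allocated, drop a block that no longer stores any
    live small file; if every block is still in use, fall back to evicting the
    least recently used block (the front of the OrderedDict)."""
    if len(blocks) < MAX_BLOCKS:
        return
    for block_id, block in list(blocks.items()):
        if block["live_files"] == 0:  # block became idle after cleanups
            del blocks[block_id]
            return
    blocks.popitem(last=False)        # LRU fallback: oldest entry first

# Example: 16 blocks, block 3 is idle, so it is evicted to make room.
blocks = OrderedDict((i, {"live_files": 5}) for i in range(16))
blocks[3]["live_files"] = 0
evict_if_full(blocks)
assert 3 not in blocks and len(blocks) == 15
```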
As an optional embodiment of the present application, optionally, further comprising:
the data to be written also comprises an index relation of the file to be written, wherein the index relation comprises a preset file block to which the file to be written belongs;
in the present embodiment, for example, when a file to be written is written into file block 1, a record reflecting the index relation of that file to file block 1 is added in the course of the write.
And generating index information of the preset file block according to the affiliated index relation.
Index information for a preset file block is generated by recording the index relation of each small file stored in it. For example, if file block 1 contains 5 small files, the index information generated for file block 1 identifies exactly which 5 small files it stores.
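A minimal sketch of this index bookkeeping; both dictionaries and the function name are assumptions of this sketch:

```python
block_index = {}      # small-file name -> ID of the preset file block it belongs to
block_contents = {}   # block ID -> set of small-file names (the block's index info)

def record_index(file_name, block_id):
    """Record the index relation written alongside the file and derive the
    block's index information from it."""
    block_index[file_name] = block_id
    block_contents.setdefault(block_id, set()).add(file_name)

# Example: file block 1 ends up indexed as storing exactly these 5 small files.
for name in ("f1", "f2", "f3", "f4", "f5"):
    record_index(name, 1)
assert block_contents[1] == {"f1", "f2", "f3", "f4", "f5"}
```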
As an optional embodiment of the present application, optionally, further comprising:
receiving a read request;
in a computer, the process of copying a file from the disk and loading it into the memory can be understood as a read operation.
Searching the file to be read corresponding to the reading request in the index information;
in this embodiment, it is assumed that the small file 1 needs to be read, but the file block to which the small file 1 belongs is not known, see fig. 3, and the index information is searched.
When index information of the file to be read exists, positioning a preset file block to which the file to be read belongs according to the index information;
according to the index information, a large block to which the small file 1 belongs is found, and in this embodiment, it is assumed that the large block to which the small file 1 belongs is a file block a.
And obtaining the data to be read corresponding to the file to be read according to the metadata of the file to be read in the preset file block to which the file to be read belongs.
After file block A is determined, it must also be determined whether the block is in the memory or on the disk. If it is in the memory, the metadata information of small file 1 in file block A can be read directly from the memory; the file data of small file 1 is then obtained from the storage information recorded in that metadata and checked to ensure data integrity. If the data is complete, the content of small file 1 is loaded successfully, and the relevant personnel can view it, call it, and so on.
If file block A is stored on the disk, it must first be loaded into the memory; if there is not enough free memory to load it completely, LRU is used to evict the least recently used data in the memory until file block A fits.
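An illustrative sketch of this read path, using plain dictionaries for blocks and metadata; the function names and the block layout are assumptions of this sketch, and the disk loader is passed in as a callback:

```python
import hashlib

def read_small_file(name: str, block_index: dict, memory_blocks: dict,
                    load_block_from_disk) -> bytes:
    """Read a small file: look it up in the index, make sure its block is in
    memory (loading it from disk otherwise), then slice the file data out of
    the block using its metadata and verify the check value."""
    block_id = block_index.get(name)
    if block_id is None:
        raise FileNotFoundError(name)           # no index information for it
    block = memory_blocks.get(block_id)
    if block is None:                           # the block lives on disk
        block = load_block_from_disk(block_id)  # may trigger LRU eviction first
        memory_blocks[block_id] = block
    meta = block["metas"][name]
    payload = block["data"][meta["offset"]:meta["offset"] + meta["length"]]
    if hashlib.md5(payload).hexdigest() != meta["md5"]:
        raise IOError(f"integrity check failed for {name}")
    return bytes(payload)

# Example with the block already resident in memory:
data = b"hello"
block = {"data": data, "metas": {"small_file_1": {
    "offset": 0, "length": 5, "md5": hashlib.md5(data).hexdigest()}}}
print(read_small_file("small_file_1", {"small_file_1": 0}, {0: block},
                      load_block_from_disk=lambda _id: None))
```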
As an optional embodiment of the present application, optionally, further comprising:
receiving a file deleting request;
when a user wants to delete a certain file, the user types a deletion instruction into the computer, and the module that performs the corresponding operation receives the request to delete the file.
Searching the file to be deleted corresponding to the file deleting request in the index information;
in this embodiment, it is assumed that the small file 4 needs to be deleted, but the file block to which the small file 4 belongs is not known, see fig. 4, and the index information is searched at this time.
When index information of the file to be deleted exists, positioning a preset file block to which the file to be deleted belongs according to the index information;
in this embodiment, assuming that the file block to which the small file 4 belongs is the file block B, the small file 4 is known to be stored in the file block B through the index information.
Marking a designated position in metadata of the file to be deleted in a preset file block of the file to be deleted;
and finding out a bitmap file through metadata information of the small file 4 in the file block B, and marking the file failure in the bitmap, namely, the small file 4 failure.
And according to the mark, deleting the file to be deleted.
After a file has been marked invalid, the background thread that merges the file blocks storing small files checks the bitmap during merging and drops the files marked invalid, which achieves the deletion. After the file is deleted, the corresponding index information is removed as well.
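An illustrative sketch of this bitmap-based deletion; the dictionary layout mirrors the earlier sketches and the names are assumptions of this sketch:

```python
def mark_deleted(name: str, block_index: dict, blocks: dict) -> None:
    """Mark a small file invalid in its block's bitmap; the background merge
    thread later skips files whose bit is cleared, and the index entry for the
    file is removed once the deletion has taken effect."""
    block_id = block_index[name]          # locate the block via the index
    block = blocks[block_id]
    bit = block["metas"][name]["bitmap_index"]
    block["bitmap"][bit] = False          # designated position: file invalid
    del block_index[name]                 # drop the corresponding index info

# Example: small file 4 lives in block B at bitmap position 3.
blocks = {"B": {"metas": {"small_file_4": {"bitmap_index": 3}},
                "bitmap": [True, True, True, True]}}
index = {"small_file_4": "B"}
mark_deleted("small_file_4", index, blocks)
assert blocks["B"]["bitmap"][3] is False and "small_file_4" not in index
```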
When small files are written directly to the disk, the disk I/O is very high. Here, the small files are first stored in the file blocks in memory and the file blocks are then merged, which is equivalent to one large file block; the large block is written to the disk, so the disk I/O is reduced. Having the background thread merge the file blocks also helps to save disk space, since it prevents many small files from being scattered across different clusters or blocks and wasting storage space. The background thread is likewise used for file deletion, which speeds up the deletion operation. Updates to files inside a file block can be made in an append-only manner, so that the background thread can clean up old versions according to the update date.
In the process of merging the file blocks, the generated large file blocks are compressed with a preconfigured compression algorithm, such as LZ4, Snappy or Zlib.
In one possible case, file blocks are written to the disk without merging and several blocks are written separately, for example blocks that are accessed less frequently. In that process, when a file block needs to be written to the disk, it is first checked whether the number of file blocks on the disk exceeds a preset upper limit; if so, LRU is used and the file blocks on the disk that have not been used recently are deleted.
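An illustrative sketch of merging valid small-file data into one large block, compressing it and flushing it to disk; zlib from the Python standard library stands in here for the preconfigured LZ4/Snappy/Zlib choice, and the file name and block layout are assumptions of this sketch (the disk-side LRU check is noted but not implemented):

```python
import os
import tempfile
import zlib

def merge_and_flush(blocks: list, out_dir: str) -> str:
    """Merge the in-memory file blocks into one large block, skipping the data
    of files whose bitmap bit was cleared, compress the result and write it to
    disk. If the number of block files on disk exceeded the preset upper limit,
    the least recently used ones would be deleted first (not shown)."""
    merged = bytearray()
    for block in blocks:
        for name, meta in block["metas"].items():
            if block["bitmap"][meta["bitmap_index"]]:  # still valid
                merged += block["data"][meta["offset"]:meta["offset"] + meta["length"]]
    path = os.path.join(out_dir, "merged_block.z")
    with open(path, "wb") as f:
        f.write(zlib.compress(bytes(merged)))          # preconfigured compression
    return path

# Example: one block with two files, the second already marked deleted.
block = {"data": b"aaaabbbb",
         "metas": {"a": {"offset": 0, "length": 4, "bitmap_index": 0},
                   "b": {"offset": 4, "length": 4, "bitmap_index": 1}},
         "bitmap": [True, False]}
with tempfile.TemporaryDirectory() as d:
    print(merge_and_flush([block], d))
```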
The background thread also monitors the access heat of each small file, regroups the sets of small files that are frequently accessed together, places them into the same file block as far as possible, and updates the index information and metadata information of that block. The block is then kept in memory, so a whole batch of files can be read from it and addressing across different file blocks is reduced. Disk I/O is thus reduced by merging the small files and by using the memory as a cache for the disk.
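A rough sketch of this regrouping step, simplified to frequency-based grouping rather than true co-access clustering; the function name, the access-log representation and the capacity parameter are assumptions of this sketch:

```python
from collections import Counter

def regroup_hot_files(access_log: list, block_capacity: int) -> list:
    """Group the most frequently accessed small files so the hottest ones land
    in the same block; returns a list of file-name groups, hottest group first.
    Index and metadata information would be updated accordingly (not shown)."""
    heat = Counter(access_log)                           # access count per file
    hot_order = [name for name, _ in heat.most_common()]
    return [hot_order[i:i + block_capacity]
            for i in range(0, len(hot_order), block_capacity)]

# Example: f1 and f3 are read most often, so they land in the same block.
log = ["f1", "f3", "f1", "f3", "f2", "f1", "f3", "f4"]
print(regroup_hot_files(log, block_capacity=2))  # [['f1', 'f3'], ['f2', 'f4']]
```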
In a possible embodiment, referring to fig. 5, suppose a user has merged multiple file blocks on a mobile phone. After the phone is restarted, all merged file blocks are collected according to an algorithm preset in the phone, the folder of the merged blocks is parsed to obtain all metadata information, the records of the index relations of the corresponding small files are determined from that metadata, index files are generated from these relations according to a preset index rule and added to the common index structure in the phone, and the file data of the small files is parsed and checked in turn according to the index files to determine whether it is complete. When the user needs to view a small file on the phone, the corresponding file block is found through the generated index file, the corresponding metadata information is then located, the small file is retrieved according to that metadata, and its file data is loaded for the user to view.
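An illustrative sketch of this restart recovery; the on-disk metadata layout is not specified by the embodiment, so a JSON sidecar file per block (named <block>.meta.json) is assumed here purely for illustration:

```python
import json
import os
import tempfile

def rebuild_index(block_dir: str) -> dict:
    """Restart recovery: scan the directory of merged file blocks, parse the
    metadata stored next to each block (assumed here to be a JSON sidecar
    file), and rebuild the in-memory index mapping each small-file name to the
    block that stores it."""
    index = {}
    for entry in sorted(os.listdir(block_dir)):
        if not entry.endswith(".meta.json"):
            continue
        block_name = entry[:-len(".meta.json")]
        with open(os.path.join(block_dir, entry), "r", encoding="utf-8") as f:
            metas = json.load(f)   # {file name: {offset, length, ...}}
        for file_name in metas:
            index[file_name] = block_name
    return index

# Example with one merged block and a two-file metadata sidecar:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "block0.meta.json"), "w", encoding="utf-8") as f:
        json.dump({"small_file_1": {"offset": 0, "length": 5},
                   "small_file_2": {"offset": 5, "length": 7}}, f)
    print(rebuild_index(d))  # {'small_file_1': 'block0', 'small_file_2': 'block0'}
```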
In the above method, the size of the file block, the size of the merged block, the compression algorithm and the LRU configuration can all be adjusted by the user according to the current usage scenario.
However, the small files written into the file blocks should not be too large; after repeated tests, writing small files smaller than 16 KB into file blocks for merging clearly reduces disk I/O and noticeably improves read and write performance.
After the small files are combined into file blocks in memory and the file blocks are then merged, the memory acts as a cache for the disk: storage space is saved, disk addressing time is reduced under high concurrency, and the file reading speed is improved.
Example 2
Based on the same principle as the foregoing method, an apparatus for reducing disk I/O is also provided. Referring to fig. 6, an apparatus 100 for reducing disk I/O according to an embodiment of the disclosure includes:
a receiving request module 110, configured to receive a write request;
the processing module 120 is configured to process a file to be processed corresponding to the write request, to obtain a file to be written;
the writing data module 130 is configured to write data to be written corresponding to the writing request in a preset file block in the memory;
the metadata module 140 is configured to include metadata corresponding to the file to be written in the data to be written, where the metadata includes storage information of the file to be written in the file block.
It should be apparent to those skilled in the art that all or part of the above method embodiments may be implemented by a computer program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium; when executed, the program may include the flow of the embodiments of the control methods described above. The modules or steps of the invention described above may be implemented in a general-purpose computing device; they may be centralized in a single computing device or distributed across a network of computing devices, or they may be implemented in program code executable by a computing device, so that they can be stored in a memory device and executed by a computing device, fabricated separately as individual integrated circuit modules, or combined so that multiple modules or steps among them form a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It will be appreciated by those skilled in the art that all or part of the above method embodiments may likewise be realized by a computer program instructing the relevant hardware, with the program stored in a computer-readable storage medium and, when executed, including the flow of each control method embodiment described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), or a solid state drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
Example 3
Still further, an electronic device is proposed, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method for reducing disk I/O of embodiment 1 when executing the executable instructions.
An electronic device of an embodiment of the present disclosure includes a processor and a memory for storing processor-executable instructions. Wherein the processor is configured to implement any of the methods of reducing disk I/O described above when executing the executable instructions.
It should be noted that the number of the processors may be one or more. Meanwhile, in the electronic device of the embodiment of the disclosure, an input device and an output device may be further included. The processor, the memory, the input device, and the output device may be connected by a bus, or may be connected by other means, which is not specifically limited herein.
The memory, as a computer-readable storage medium, can be used to store software programs, computer-executable programs and various modules, such as the programs or modules corresponding to the method for reducing disk I/O in the embodiments of the present disclosure. The processor executes the various functional applications and data processing of the electronic device by running the software programs or modules stored in the memory.
The input device may be used to receive an input number or signal. Wherein the signal may be a key signal generated in connection with user settings of the device/terminal/server and function control. The output means may comprise a display device such as a display screen.
The embodiments of the present application have been described above. The foregoing description is exemplary rather than exhaustive and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the embodiments described. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or the improvement over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method of reducing disk I/O comprising:
receiving a write request;
processing the file to be processed corresponding to the writing request to obtain the file to be written;
writing data to be written corresponding to the writing request in a preset file block in a memory;
the data to be written contains metadata corresponding to the file to be written, and the metadata comprises storage information of the file to be written in the file block.
2. The method for reducing disk I/O according to claim 1, wherein processing the file to be processed corresponding to the write request to obtain the file to be written, includes:
judging whether the size of the file to be processed is smaller than a preset value or not according to the size of the file to be processed;
and if the file to be processed is smaller than a preset value, judging that the file to be processed is a file to be written.
3. The method for reducing disk I/O according to claim 1, wherein writing the data to be written corresponding to the write request in the preset file block in the memory includes:
judging whether a preset file block exists in the memory or not;
if the preset file block does not exist, applying for the preset file block to the memory.
4. The method for reducing disk I/O according to claim 1, wherein writing the data to be written corresponding to the write request in a preset file block in the memory, further comprises:
if a preset file block exists in the memory, comparing the size of the residual space of the preset file block with the size of the file to be written;
and if the size of the residual space of the preset file block is smaller than the size of the file to be written, applying for a new preset file block to the memory.
5. A method for reducing disk I/O as set forth in any one of claims 3-4, wherein applying for a new preset file block to the memory comprises:
checking whether the number of the file blocks in the memory reaches a preset number;
and deleting the file blocks which are not in use if the number of the file blocks reaches the preset number.
6. The method for reducing disk I/O of claim 1, further comprising:
the data to be written also comprises an index relation of the file to be written, wherein the index relation comprises a preset file block to which the file to be written belongs;
and generating index information of the preset file block according to the affiliated index relation.
7. The method for reducing disk I/O of claim 6, further comprising: receiving a read request;
searching the file to be read corresponding to the reading request in the index information;
when index information of the file to be read exists, positioning a preset file block to which the file to be read belongs according to the index information;
and obtaining the data to be read corresponding to the file to be read according to the metadata of the file to be read in the preset file block to which the file to be read belongs.
8. The method for reducing disk I/O according to claim 6, further comprising:
receiving a file deleting request;
searching the file to be deleted corresponding to the file deleting request in the index information;
when index information of the file to be deleted exists, positioning a preset file block to which the file to be deleted belongs according to the index information;
marking a designated position in metadata of the file to be deleted in a preset file block of the file to be deleted;
and according to the mark, deleting the file to be deleted.
9. An apparatus for reducing disk I/O, comprising the following modules:
the receiving request module is used for receiving a writing request;
the processing module is used for processing the file to be processed corresponding to the writing request to obtain the file to be written;
the writing data module is used for writing the data to be written corresponding to the writing request in a preset file block in the memory;
the metadata module is used for including, in the data to be written, metadata corresponding to the file to be written, wherein the metadata comprises storage information of the file to be written in the file block.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of reducing disk I/O of any of claims 1 to 8 when executing the executable instructions.
CN202410030126.7A 2024-01-09 2024-01-09 Method and device for reducing disk I/O and electronic equipment Pending CN117827117A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410030126.7A (CN117827117A) 2024-01-09 2024-01-09 Method and device for reducing disk I/O and electronic equipment

Publications (1)

Publication Number Publication Date
CN117827117A (en) 2024-04-05

Family

ID=90523962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410030126.7A (CN117827117A) 2024-01-09 2024-01-09 Method and device for reducing disk I/O and electronic equipment

Country Status (1)

Country Link
CN (1) CN117827117A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination