CN113485639A - Distributed storage IO speed optimization method, system, terminal and storage medium - Google Patents
- Publication number
- CN113485639A (application CN202110677274.4A)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- distributed lock
- client
- file
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F3/061: Improving I/O performance
- G06F16/172: Caching, prefetching or hoarding of files
- G06F16/182: Distributed file systems
- G06F3/0608: Saving storage space on storage systems
- G06F3/0643: Management of files
- G06F3/0656: Data buffering arrangements
- G06F3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- G06F9/5016: Allocation of resources to service a request, the resource being the memory
- G06F9/5022: Mechanisms to release resources
- G06F9/524: Deadlock detection or avoidance
- Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a distributed storage IO speed optimization method, system, terminal and storage medium. The method comprises the following steps: caching, in advance, the distributed lock data and underlying file layout data of a plurality of directories for client operation requests; binding the distributed lock data to the client to which the client operation request belongs; recording the number of times the cached distributed lock data and underlying file layout data are referenced by client operation requests, and determining the caching time of the distributed lock data and underlying file layout data according to the reference count; and acquiring the target directory of the target file of a client operation request, retrieving the distributed lock data and underlying file layout data of the target directory from the cached data, and performing locking and IO operations on the target file. With the method and device, data need not be fetched from the metadata system of the distributed storage, the number of interactions with the metadata system is reduced, and IO operation efficiency is improved.
Description
Technical Field
The invention relates to the technical field of distributed storage, and in particular to a distributed storage IO speed optimization method, system, terminal and storage medium.
Background
A Distributed File System (DFS) is one in which the physical storage resources managed by the file system are not necessarily attached to the local node, but are connected to a node (which may simply be understood as a computer) through a computer network; alternatively, it is a complete hierarchical file system formed by combining several different logical disk partitions or volume labels. A DFS provides a logical tree file system structure for resources distributed anywhere on the network, so that users can more conveniently access shared files distributed across the network. An individual DFS shared folder serves as an access point relative to the other shared folders on the network.
In existing distributed file systems, when files are created, deleted or otherwise operated on within the same directory, each file operation must lock and unlock the parent directory again, and for creation and deletion operations the underlying file layout data (dir_layout) of the directory must be re-acquired. Most of these operations on the same directory are repetitive, and since locking and unlocking a distributed lock requires interaction between the metadata system (MDS) and the client of the distributed file system, the IO operation efficiency of the system is greatly reduced.
Disclosure of Invention
In view of the above-mentioned shortcomings in the prior art, the present invention provides a method, a system, a terminal and a storage medium for optimizing IO speed in distributed storage, so as to solve the above-mentioned technical problems.
In a first aspect, the present invention provides an IO speed optimization method for distributed storage, including:
caching, in advance, the distributed lock data and underlying file layout data of a plurality of directories for client operation requests;
binding the distributed lock data to the client to which the client operation request belongs;
recording the number of times the cached distributed lock data and underlying file layout data are referenced by client operation requests, and determining the caching time of the distributed lock data and underlying file layout data according to the reference count; and
acquiring the target directory of the target file of a client operation request, retrieving the distributed lock data and underlying file layout data of the target directory from the cached data, and performing locking and IO operations on the target file.
Further, caching the distributed lock data and underlying file layout data of a plurality of directories for client operation requests in advance includes:
creating a lock cache member for the client operation request, and storing the distributed lock data and underlying file layout data of the file directories involved in the client operation request.
Further, binding the distributed lock data to the client to which the client operation request belongs includes:
binding the distributed lock data to the client by setting the client operation request permission, and, according to the binding relationship, clearing the invalid bound distributed lock data after all of the client's operation requests have been released.
Further, recording the number of times the cached distributed lock data and underlying file layout data are referenced by client operation requests, and determining their caching time according to the reference count, includes:
setting a rule under which the reference count decays with unreferenced time;
calculating a caching coefficient for the cached distributed lock data and underlying file layout data from the reference count, the unreferenced time and the rule; and
clearing the corresponding cached data if the caching coefficient reaches 0.
Further, acquiring the target directory of the target file of a client operation request, retrieving the distributed lock data and underlying file layout data of the target directory from the cached data, and performing locking and IO operations on the target file, includes:
acquiring the target file of the current IO operation requested by the client and the target directory to which the target file belongs;
looking up the target distributed lock data and target underlying file layout data of the target directory in the cached data; and
locking and unlocking the target directory according to the target distributed lock data, and executing the current IO operation using the target underlying file layout data.
In a second aspect, the present invention provides an IO speed optimization system for distributed storage, including:
a data caching unit, configured to cache in advance the distributed lock data and underlying file layout data of a plurality of directories for client operation requests;
a data binding unit, configured to bind the distributed lock data to the client to which the client operation request belongs;
a cache management unit, configured to record the number of times the cached distributed lock data and underlying file layout data are referenced by client operation requests, and to determine their caching time according to the reference count; and
an operation execution unit, configured to acquire the target directory of the target file of a client operation request, retrieve the distributed lock data and underlying file layout data of the target directory from the cached data, and perform locking and IO operations on the target file.
Further, the cache management unit includes:
a rule setting module, configured to set a rule under which the reference count decays with unreferenced time;
a coefficient calculation module, configured to calculate a caching coefficient for the cached distributed lock data and underlying file layout data from the reference count, the unreferenced time and the rule; and
a cache clearing module, configured to clear the corresponding cached data if the caching coefficient reaches 0.
Further, the operation execution unit includes:
a target acquisition module, configured to acquire the target file of the current IO operation of the client operation request and the target directory to which the target file belongs;
a data lookup module, configured to look up the target distributed lock data and target underlying file layout data of the target directory in the cached data; and
an operation execution module, configured to lock and unlock the target directory according to the target distributed lock data and to execute the current IO operation using the target underlying file layout data.
In a third aspect, a terminal is provided, including:
a processor and a memory, wherein
the memory is configured to store a computer program, and
the processor is configured to call and run the computer program from the memory, so that the terminal performs the method described above.
In a fourth aspect, a computer storage medium is provided having stored therein instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
The beneficial effects of the invention are as follows.
according to the distributed storage IO speed optimization method, distributed lock data and bottom file layout data of a plurality of directories are cached in advance for the client operation request, the client operation request only needs to acquire target distributed lock data and bottom file layout data from the cache when IO operation is executed every time, data do not need to be acquired from a distributed storage metadata system, interaction times with the metadata system are reduced, IO operation efficiency is improved, management of cache data is achieved by binding the client to which the cache data client operation request belongs and counting the reference times of the cache data, and occupation of cache resources by useless data is avoided.
According to the distributed storage IO speed optimization system, distributed lock data and bottom file layout data of a plurality of directories are cached for a client operation request in advance through the data caching unit, so that the operation execution unit only needs to acquire target distributed lock data and bottom file layout data from a cache when the client operation request executes IO operation every time, data do not need to be acquired from a distributed storage metadata system, interaction times with the metadata system are reduced, IO operation efficiency is improved, meanwhile, management of cache data is achieved by binding a client to which the cache data client operation request belongs through the data binding unit and counting the reference times of the cache management unit on the cache data, and useless data are prevented from occupying cache resources.
According to the terminal and the IO speed optimization method for executing the distributed storage, the distributed lock data and the bottom file layout data of the plurality of directories are cached in advance for the client operation request, so that the client operation request only needs to acquire the target distributed lock data and the bottom file layout data from the cache every time the IO operation is executed, the data do not need to be acquired from the metadata system of the distributed storage, the interaction frequency with the metadata system is reduced, the IO operation efficiency is improved, meanwhile, the management of the cache data is realized by binding the client to which the cache data client operation request belongs and counting the reference frequency of the cache data, and the cache resources are prevented from being occupied by useless data.
The storage medium stores a program of an IO speed optimization method capable of executing distributed storage, pre-caches distributed lock data and bottom file layout data of a plurality of directories for a client operation request, so that the client operation request only needs to acquire target distributed lock data and bottom file layout data from a cache when IO operation is executed every time, data acquisition to a metadata system of distributed storage is not needed, interaction times with the metadata system are reduced, IO operation efficiency is improved, management of cache data is realized by binding a client to which the cache data client operation request belongs and counting reference times of the cache data, and cache resources are prevented from being occupied by useless data.
In addition, the invention has a reliable design principle and a simple structure, and has very broad application prospects.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention.
FIG. 2 is a schematic flow chart diagram of a method of one embodiment of the invention for acquiring distributed lock data.
FIG. 3 is a schematic flow chart diagram of a method of acquiring underlying file layout data in accordance with one embodiment of the present invention.
FIG. 4 is a schematic block diagram of a system of one embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
To make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present invention.
The following explains key terms appearing in the present invention.
MDS (Metadata Server): the metadata service in a distributed file system.
OSD (Object Storage Device): responsible for returning specific data in response to client requests; a distributed storage cluster typically has many OSDs.
Distributed lock: a coordinator that ensures the consistency of behavior and data among multiple nodes.
mdr: an operation request sent by the client to the MDS.
dir_layout: a data structure describing the underlying file layout, such as the distribution of a directory's files in the underlying data pool.
cap: the operation permission data a client acquires from the MDS.
mdr_locks: the distributed lock data required by the IO operation in an mdr.
mdr_lock_cache: the distributed lock cache data attached to an mdr.
FIG. 1 is a schematic flow diagram of a method of one embodiment of the invention. The execution subject in fig. 1 may be an IO speed optimization system for distributed storage.
As shown in fig. 1, the method includes:
To facilitate understanding of the present invention, the IO speed optimization method for distributed storage provided by the present invention is further described below with reference to its principle and to the process of optimizing distributed storage IO speed in the embodiment.
Specifically, the IO speed optimization method for distributed storage includes:
s1, caching the distributed lock data and the bottom file layout data of a plurality of directories for the client operation request in advance.
And creating a lock cache member mdr _ lock _ cache for each client operation request mdr in the MDS, wherein the member is used for storing relevant distributed read-write locks of all levels of parent directories required by the directory operated by the current client operation request mdr and bottom file layout data dir _ layout.
The bottom layer file layout data dir _ layout does not change under a general condition of a directory, and the acquisition process needs to perform distributed lock adding operation on each level of parent directory until the available dir _ layout is acquired, so that when acquiring the distributed lock data and the bottom layer file layout data of a directory, the distributed lock data needs to be acquired first, then the directory is locked, and after the locking, the bottom layer file layout data is acquired.
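The lock cache member described above can be pictured as a small per-request map from directories to their cached lock state and layout. The following is an illustrative sketch only; the class and field names (MdrLockCache, DirCacheEntry) are hypothetical stand-ins for the patent's mdr_lock_cache, not an actual MDS implementation.

```python
from dataclasses import dataclass

@dataclass
class DirCacheEntry:
    lock_data: dict      # distributed read-write lock state of parent dirs
    dir_layout: dict     # underlying file layout (data pool, striping, ...)
    ref_count: int = 0   # times this entry was referenced by an mdr

class MdrLockCache:
    """Lock cache member created for a client operation request (mdr)."""

    def __init__(self):
        self._entries = {}  # directory path -> DirCacheEntry

    def put(self, directory, lock_data, dir_layout):
        self._entries[directory] = DirCacheEntry(lock_data, dir_layout)

    def get(self, directory):
        entry = self._entries.get(directory)
        if entry is not None:
            entry.ref_count += 1  # count the reference for later cache aging
        return entry
```

A cache hit returns both the lock data and the dir_layout in one lookup, which is what lets the mdr skip the round trip to the metadata system.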
S2: bind the distributed lock data to the client to which the client operation request belongs.
The distributed lock data is bound to the client by setting the client operation request permission, and, according to the binding relationship, the invalid bound distributed lock data is cleared after all of the client's operation requests have been released.
Specifically, the lock cache member mdr_lock_cache is associated and bound with the cap permission acquired by the client from the current MDS. The distributed lock cache data becomes invalid only after all mdr associated with the mdr_lock_cache have been released and the cap permission acquired by the current client from the MDS has expired. Invalid distributed lock data is purged from the lock cache member mdr_lock_cache.
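The two-part invalidation rule above (all associated mdr released and the cap expired) can be sketched as follows. This is a hypothetical simplification: the class name and dict-based cache are stand-ins for the real cap and mdr_lock_cache structures.

```python
class CapBoundLockCache:
    """Ties a lock cache to a client's cap; both conditions must hold to purge."""

    def __init__(self, client_id):
        self.client_id = client_id
        self.active_mdrs = set()  # operation requests still using this cache
        self.lock_cache = {}      # directory -> cached distributed lock data
        self.cap_valid = True     # client's permission in the current MDS

    def attach_mdr(self, mdr_id):
        self.active_mdrs.add(mdr_id)

    def release_mdr(self, mdr_id):
        self.active_mdrs.discard(mdr_id)
        self._maybe_purge()

    def revoke_cap(self):
        self.cap_valid = False
        self._maybe_purge()

    def _maybe_purge(self):
        # Invalidation requires BOTH conditions, per the binding relationship.
        if not self.active_mdrs and not self.cap_valid:
            self.lock_cache.clear()
```

Requiring both conditions keeps the cache alive across a burst of requests from the same client, which is exactly the repeated-operation pattern the patent targets.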
S3: record the number of times the cached distributed lock data and underlying file layout data are referenced by client operation requests, and determine their caching time according to the reference count.
On the relationship between the underlying file layout data dir_layout and the distributed lock cache: the underlying file layout data must be kept in the MDS cache for a long time and must not disappear together with the client operation request mdr, otherwise the system would still have to fetch it frequently. It is therefore stored in the lock cache member mdr_lock_cache corresponding to the mdr. The lock cache mechanism thus also optimizes the acquisition of dir_layout and facilitates its storage and use. Strictly speaking, however, these are two different mechanisms for improving file IO efficiency: the distributed lock cache optimizes the MDS distributed lock mechanism, while the caching of dir_layout optimizes a specific step of the system's IO. The two mechanisms are related, but neither is indispensable to the other.
To prevent invalid cached data in the lock cache member mdr_lock_cache from occupying cache resources, a cache cleanup mechanism is set up as follows:
Set a rule under which the reference count decays with unreferenced time, where the unreferenced time is measured from the moment of the last reference. This embodiment sets the decay coefficient to k.
Calculate a caching coefficient G for the cached distributed lock data and underlying file layout data from the reference count C, the unreferenced time T and the rule: G = C - kT, where k is a positive number.
If the cached data goes unreferenced long enough for the caching coefficient G to drop to 0, the corresponding cached data is cleared.
With this cache cleanup mechanism, infrequently used data is cleaned up in time, and invalid data is prevented from occupying cache resources.
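The decay rule above reduces to a one-line formula. A minimal sketch, assuming a sample decay constant k = 0.5 (the patent does not fix a value) and treating "drops to 0" as "reaches 0 or below":

```python
def cache_coefficient(ref_count, unreferenced_time, k=0.5):
    """G = C - k*T, with C the reference count and T the unreferenced time."""
    return ref_count - k * unreferenced_time

def should_evict(ref_count, unreferenced_time, k=0.5):
    """Evict once the coefficient has decayed to zero (or below)."""
    return cache_coefficient(ref_count, unreferenced_time, k) <= 0
```

An entry referenced often (large C) survives long idle periods, while a rarely referenced entry is evicted quickly, which matches the intent of cleaning infrequently used data in time.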
S4: acquire the target directory of the target file of the client operation request, retrieve the distributed lock data and underlying file layout data of the target directory from the cached data, and perform locking and IO operations on the target file.
This specifically includes the following steps:
(1) Acquire the target file of the current IO operation requested by the client and the target directory to which the target file belongs.
(2) Look up the target distributed lock data and target underlying file layout data of the target directory in the cached data.
(3) Lock and unlock the target directory according to the target distributed lock data, and execute the current IO operation using the target underlying file layout data.
Fig. 2 shows a specific method for performing locking and unlocking operations on a target directory, which includes:
1) When an MDS receives a file creation or deletion operation under a directory from a client, if the current file is the first file under that directory on this MDS to undergo a creation or deletion operation, proceed to step 2); otherwise, proceed to step 5).
2) If the current operation is a file creation and the system already holds distributed lock cache data for the current client and directory, proceed to step 3); otherwise (the operation is a deletion, or a creation for which no such cache data exists), proceed to step 4).
3) Obtain the mdr_lock_cache distributed lock cache data of the directory on the current MDS according to the cap of the current client, assign it to the mdr corresponding to the IO operation, increase the lock cache reference count of the directory, and proceed to step 5).
4) Collect the distributed lock data of each ancestor directory required for the file to obtain its various operation rights, and store it in mdr_locks, the distributed lock data required by the current IO operation of the mdr.
5) Evaluate the locks in turn according to the corresponding distributed lock data (mdr_locks and mdr_lock_cache) in the mdr for the IO operation. If the lock currently being evaluated has distributed lock cache data, skip its lock evaluation, lock interaction, and the corresponding locking and unlocking operations; otherwise, perform these operations to obtain the various distributed lock data required by the IO operation.
6) If the IO went through step 4), move the read-write distributed locks of the ancestor directories required by the IO operation from mdr_locks into mdr_lock_cache, associate the mdr_lock_cache with the cap authority of the current client, and increase the lock cache reference count.
7) After the IO operation on the file completes, decrease the reference count of the distributed lock cache mdr_lock_cache in the mdr; when the count drops to 0, remove the distributed lock cache data from the whole distributed system.
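The reference-counted lock cache described in steps 1)-7) can be sketched as follows. This is a minimal illustrative model, not Ceph's actual C++ implementation: the names `LockCache`, `MDS`, `start_io`, and `finish_io` are hypothetical, and only the cache-reuse and reference-counting logic of the figure is modeled.

```python
class LockCache:
    """Cached distributed-lock state for one (client, directory) pair."""
    def __init__(self, client_id, directory):
        self.client_id = client_id
        self.directory = directory
        self.ref_count = 0          # steps 3)/6): incremented per in-flight IO
        self.locks = set()          # ancestor-directory locks held by the cache

class MDS:
    def __init__(self):
        self.lock_caches = {}       # (client_id, directory) -> LockCache

    def acquire_ancestor_locks(self, directory):
        # stand-in for per-ancestor distributed lock negotiation (step 4))
        parts = directory.strip("/").split("/")
        return {"/" + "/".join(parts[:i + 1]) for i in range(len(parts))}

    def start_io(self, client_id, directory, op):
        """Steps 1)-6): attach cached locks to the request when possible."""
        key = (client_id, directory)
        mdr = {"locks": set(), "lock_cache": None, "op": op}
        cache = self.lock_caches.get(key)
        if op == "create" and cache is not None:
            # step 3): reuse the cache and skip lock negotiation entirely
            mdr["lock_cache"] = cache
            cache.ref_count += 1
        else:
            # step 4): gather ancestor locks the slow way
            mdr["locks"] = self.acquire_ancestor_locks(directory)
            if op == "create":
                # step 6): promote freshly acquired locks into a lock cache
                cache = self.lock_caches.setdefault(
                    key, LockCache(client_id, directory))
                cache.locks |= mdr["locks"]
                mdr["locks"] = set()
                mdr["lock_cache"] = cache
                cache.ref_count += 1
        return mdr

    def finish_io(self, mdr):
        """Step 7): drop one reference; free the cache when unused."""
        cache = mdr["lock_cache"]
        if cache is not None:
            cache.ref_count -= 1
            if cache.ref_count == 0:
                del self.lock_caches[(cache.client_id, cache.directory)]
```

In this sketch, two concurrent creates under the same directory share one `LockCache`: the second request takes the fast path of step 3), and the cache is removed only when the last in-flight request releases its reference.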
Fig. 3 shows a method for obtaining underlying file layout data to execute IO operations, which includes:
1) Conventionally, the dir_layout required by a file is obtained by locking each parent directory level by level until an available dir_layout is obtained from some ancestor directory.
2) When a file deletion or creation operation is performed, first check whether the mdr corresponding to the IO operation already holds cached available bottom layer file layout data dir_layout; if so, proceed to step 4); if not, proceed to step 3).
3) If the current mdr has no cached bottom layer file layout data dir_layout, perform distributed locking on each ancestor directory in turn to obtain an available dir_layout; during this process, if a required distributed lock is already in the distributed lock cache (mdr_lock_cache) of the mdr, skip the locking procedure for that lock.
4) Execute the subsequent operations of the IO after the bottom layer file layout data dir_layout has been acquired.
5) After the operation completes, if the current IO went through step 3), that is, the mdr held no cached dir_layout, store the dir_layout into the distributed lock cache (mdr_lock_cache) of the mdr, so that the newly acquired dir_layout persists longer in the distributed file system (the mdr_lock_cache is protected by reference counting and does not depend solely on the mdr for its existence).
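The cache-first layout lookup of steps 1)-5) can be sketched as below. This is an illustrative model under assumed names (`get_dir_layout`, a `dir_tree` dict standing in for the directory hierarchy, a `lock_cache` dict standing in for mdr_lock_cache); it reproduces only the control flow of the figure, not the real locking.

```python
def get_dir_layout(dir_tree, path, lock_cache):
    """Return the underlying file layout for `path`, preferring the cache."""
    # step 2): use the layout already held in the lock cache if present
    if "dir_layout" in lock_cache:
        return lock_cache["dir_layout"]

    # step 3): walk ancestors until a directory carrying a layout is found
    layout = None
    node = path
    while node:
        # the real system would take a distributed lock on `node` here,
        # skipping the step for locks already present in mdr_lock_cache
        layout = dir_tree.get(node)
        if layout is not None:
            break
        node = node.rsplit("/", 1)[0]  # ascend to the parent directory

    # step 5): keep the layout in the lock cache so later IOs skip the walk
    if layout is not None:
        lock_cache["dir_layout"] = layout
    return layout
```

A second call with the same `lock_cache` returns immediately from the cached entry, which is exactly the interaction the patent aims to eliminate on repeated creates under one directory.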
As shown in Fig. 4, the system 400 includes:
the data caching unit 410, configured to cache in advance the distributed lock data and bottom layer file layout data of multiple directories for client operation requests;
the data binding unit 420, configured to bind the distributed lock data to the client to which the client operation request belongs;
the cache management unit 430, configured to record the number of times the cached distributed lock data and bottom layer file layout data are referenced by client operation requests, and to determine the caching time of the distributed lock data and bottom layer file layout data according to the reference count;
the operation execution unit 440, configured to obtain the target directory of the target file requested by the client operation, retrieve the distributed lock data and bottom layer file layout data of the target directory from the cached data, and perform target file locking and IO operations on the target file.
Optionally, as an embodiment of the present invention, the cache management unit includes:
the rule setting module, configured to set a rule whereby the reference count decays with unreferenced time;
the coefficient calculation module, configured to calculate a cache coefficient from the reference count and unreferenced time of the cached distributed lock data and bottom layer file layout data according to the rule;
the cache clearing module, configured to clear the corresponding cached data when the cache coefficient reaches 0.
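The decay rule implemented by these three modules can be sketched as follows. The decay rate (`DECAY_INTERVAL`, one reference cancelled per interval of idle time) is an assumed parameter not specified by the source; the sketch only illustrates a coefficient that falls with unreferenced time and triggers eviction at 0.

```python
DECAY_INTERVAL = 30.0  # assumed: seconds of idle time that cancel one reference

def cache_coefficient(ref_count, last_referenced, now):
    """Rule-setting + coefficient modules: reference count decayed by idle time."""
    idle = max(0.0, now - last_referenced)
    return max(0, ref_count - int(idle // DECAY_INTERVAL))

def sweep(cache, now):
    """Cache-clearing module: evict entries whose coefficient has reached 0."""
    expired = [key for key, entry in cache.items()
               if cache_coefficient(entry["refs"], entry["last_ref"], now) == 0]
    for key in expired:
        del cache[key]
```

Under this rule, frequently referenced lock/layout data stays cached indefinitely, while data for directories that fall idle is reclaimed automatically, which matches the stated goal of keeping useless data from occupying cache resources.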
Optionally, as an embodiment of the present invention, the operation execution unit includes:
the target acquisition module, configured to obtain the target file of the current IO operation of the client operation request and the target directory to which the target file belongs;
the data searching module, configured to search the cached data for the target distributed lock data and target bottom layer file layout data of the target directory;
the operation execution module, configured to perform locking and unlocking operations on the target directory according to the target distributed lock data, and to execute the current IO operation using the target bottom layer file layout data.
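The flow through these three modules can be sketched end to end. All names here (`execute_io`, the `request` and `cache` dict shapes, `do_io`) are hypothetical; the sketch shows only the resolve-lookup-lock-execute sequence, with a cache miss signalling fallback to the metadata system.

```python
import os

def execute_io(request, cache, do_io):
    """request: {'path': ...}; cache maps directory -> cached lock/layout data."""
    # target acquisition module: target file and the directory it belongs to
    target_file = request["path"]
    target_dir = os.path.dirname(target_file)

    # data searching module: cached distributed lock data and layout data
    entry = cache.get(target_dir, {})
    lock_data, layout = entry.get("lock"), entry.get("layout")
    if lock_data is None or layout is None:
        raise LookupError("cache miss: fall back to the metadata system")

    # operation execution module: lock, run the IO with the layout, unlock
    lock_data["held"] = True
    try:
        return do_io(target_file, layout)
    finally:
        lock_data["held"] = False
```

The `try`/`finally` mirrors the paired locking and unlocking operations: the lock is released even if the IO itself fails.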
Fig. 5 is a schematic structural diagram of a terminal 500 according to an embodiment of the present invention, where the terminal 500 may be used to execute an IO speed optimization method for distributed storage according to the embodiment of the present invention.
The terminal 500 may include a processor 510, a memory 520, and a communication unit 530. These components communicate via one or more buses. Those skilled in the art will appreciate that the terminal architecture shown in the figure is not limiting: it may be a bus or star architecture, may include more or fewer components than shown, or may combine or arrange the components differently.
The memory 520 may be used for storing instructions executed by the processor 510, and the memory 520 may be implemented by any type of volatile or non-volatile storage terminal or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk. The executable instructions in memory 520, when executed by processor 510, enable terminal 500 to perform some or all of the steps in the method embodiments described below.
The processor 510 is a control center of the storage terminal, connects various parts of the entire electronic terminal using various interfaces and lines, and performs various functions of the electronic terminal and/or processes data by operating or executing software programs and/or modules stored in the memory 520 and calling data stored in the memory. The processor may be composed of an Integrated Circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs connected with the same or different functions. For example, processor 510 may include only a Central Processing Unit (CPU). In the embodiment of the present invention, the CPU may be a single operation core, or may include multiple operation cores.
The communication unit 530 is configured to establish a communication channel so that the storage terminal can communicate with other terminals, receiving user data sent by other terminals or sending user data to them.
The present invention also provides a computer storage medium, which may store a program that, when executed, performs some or all of the steps of the embodiments provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Therefore, by caching the distributed lock data and bottom layer file layout data of multiple directories in advance for client operation requests, each client operation request only needs to fetch the target distributed lock data and bottom layer file layout data from the cache when executing an IO operation, rather than obtaining them from the distributed metadata system. This reduces the number of interactions with the metadata system and improves IO efficiency. Meanwhile, binding the cached data to the client to which the operation request belongs and counting the number of times the cached data is referenced enables management of the cached data and prevents cache resources from being occupied by useless data. For the technical effects achievable by this embodiment, refer to the description above; details are omitted here.
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present invention may be embodied in the form of a software product stored in a storage medium, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk, including instructions for causing a computer terminal (which may be a personal computer, a server, a second terminal, a network terminal, or the like) to perform all or part of the steps of the methods in the embodiments of the present invention.
The same and similar parts in the various embodiments in this specification may be referred to each other. Especially, for the terminal embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and the relevant points can be referred to the description in the method embodiment.
In the embodiments provided in the present invention, it should be understood that the disclosed system and method can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, systems or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Although the present invention has been described in detail with reference to the drawings and preferred embodiments, the present invention is not limited thereto. Those skilled in the art can make various equivalent modifications or substitutions to the embodiments of the present invention without departing from its spirit and scope, and such modifications or substitutions fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. An IO speed optimization method for distributed storage is characterized by comprising the following steps:
caching distributed lock data and bottom file layout data of a plurality of directories for a client operation request in advance;
binding the distributed lock data to the client to which the client operation request belongs;
recording the number of times that the cached distributed lock data and bottom file layout data are referenced by the client operation request, and determining the caching time of the distributed lock data and the bottom file layout data according to the number of references;
and acquiring a target directory of a target file requested by the client operation, retrieving the distributed lock data and bottom file layout data of the target directory from the cached data, and performing target file locking and IO operations on the target file.
2. The method of claim 1, wherein pre-caching distributed lock data and underlying file layout data for a plurality of directories for client operation requests comprises:
and creating a lock cache member for the client operation request, and storing the distributed lock data and the bottom file layout data of the file directory related to the client operation request.
3. The method of claim 1, wherein binding the distributed lock data to the client to which the client operation request belongs comprises:
binding the distributed lock data to the client to which the client operation request belongs by setting the client operation request authority, and clearing invalid bound distributed lock data according to the binding relationship after all client operation requests of the client are released.
4. The method of claim 1, wherein recording the number of times that the cached distributed lock data and bottom file layout data are referenced by the client operation request, and determining the caching time of the distributed lock data and the bottom file layout data according to the number of references, comprises:
setting a rule that the number of times of reference is decreased along with unreferenced time;
calculating a caching coefficient according to the number of references, unreferenced time and the rule of cached distributed lock data and bottom file layout data;
and if the caching coefficient is 0, clearing the corresponding cached data.
5. The method of claim 1, wherein obtaining a target directory of a target file requested by a client operation, retrieving distributed lock data and underlying file layout data of the target directory from cached data, and performing target file locking and IO operations on the target file comprises:
acquiring a target file of a current IO operation requested by a client and a target directory to which the target file belongs;
searching target distributed lock data and target bottom file layout data of a target directory from the cached data;
and performing locking and unlocking operations on the target directory according to the target distributed lock data, and executing the current IO operation using the target bottom layer file layout data.
6. A distributed storage IO speed optimization system, comprising:
the data caching unit is used for caching distributed lock data and bottom file layout data of a plurality of directories for the client operation request in advance;
the data binding unit is used for binding the distributed lock data to the client to which the client operation request belongs;
the cache management unit is used for recording the number of times that the cached distributed lock data and bottom file layout data are referenced by the client operation request, and determining the caching time of the distributed lock data and the bottom file layout data according to the number of references;
and the operation execution unit is used for acquiring a target directory of a target file requested by the client operation, retrieving the distributed lock data and bottom file layout data of the target directory from the cached data, and performing target file locking and IO operations on the target file.
7. The system of claim 6, wherein the cache management unit comprises:
the rule setting module is used for setting a rule that the number of times of reference decreases along with unreferenced time;
the coefficient calculation module is used for calculating a cache coefficient according to the reference times, unreferenced time and the rule of the cached distributed lock data and the bottom file layout data;
and the cache clearing module is used for clearing corresponding cache data if the cache coefficient is 0.
8. The system of claim 6, wherein the operation execution unit comprises:
the target acquisition module is used for acquiring a target file of the current IO operation of the client operation request and a target directory to which the target file belongs;
the data searching module is used for searching target distributed lock data and target bottom layer file layout data of the target directory from the cached data;
and the operation execution module is used for performing locking and unlocking operations on the target directory according to the target distributed lock data and executing the current IO operation by using the target bottom layer file layout data.
9. A terminal, comprising:
a processor;
a memory for storing instructions for execution by the processor;
wherein the processor is configured to perform the method of any one of claims 1-5.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110677274.4A CN113485639B (en) | 2021-06-18 | 2021-06-18 | IO speed optimization method, system, terminal and storage medium for distributed storage |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113485639A true CN113485639A (en) | 2021-10-08 |
CN113485639B CN113485639B (en) | 2024-02-20 |
Family
ID=77933931
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110677274.4A Active CN113485639B (en) | 2021-06-18 | 2021-06-18 | IO speed optimization method, system, terminal and storage medium for distributed storage |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113485639B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7730258B1 (en) * | 2005-11-01 | 2010-06-01 | Netapp, Inc. | System and method for managing hard and soft lock state information in a distributed storage system environment |
CN102024017A (en) * | 2010-11-04 | 2011-04-20 | 天津曙光计算机产业有限公司 | Method for traversing directory entries of distribution type file system in repetition-free and omission-free way |
CN103902660A (en) * | 2014-03-04 | 2014-07-02 | 中国科学院计算技术研究所 | System and method for prefetching file layout through readdir++ in cluster file system |
CN104158897A (en) * | 2014-08-25 | 2014-11-19 | 曙光信息产业股份有限公司 | Updating method of file layout in distributed file system |
KR20170090594A (en) * | 2016-01-29 | 2017-08-08 | 한국전자통신연구원 | Data server device configured to manage distributed lock of file together with client device in storage system employing distributed file system |
CN109582658A (en) * | 2018-12-03 | 2019-04-05 | 郑州云海信息技术有限公司 | A kind of distributed file system realizes the method and device of data consistency |
CN110750507A (en) * | 2019-09-30 | 2020-02-04 | 华中科技大学 | Client persistent caching method and system under global namespace facing DFS |
CN111966635A (en) * | 2020-08-14 | 2020-11-20 | 苏州浪潮智能科技有限公司 | Method and device for improving file detection speed of distributed storage file system |
Non-Patent Citations (2)
Title |
---|
杨洪章; 张军伟; 齐颖; 吴雪丽: "Asynchronous creation of massive small files in a distributed file system", 网络新媒体技术 (Network New Media Technology), no. 02 *
钱迎进; 金士尧; 肖侬: "Application and optimization of I/O locks in the Lustre file system", 计算机工程与应用 (Computer Engineering and Applications), no. 03 *
Also Published As
Publication number | Publication date |
---|---|
CN113485639B (en) | 2024-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP2022534215A (en) | Hybrid indexing method, system and program | |
CN106446197B (en) | A kind of date storage method, apparatus and system | |
US9442694B1 (en) | Method for storing a dataset | |
CN108140040A (en) | The selective data compression of database in memory | |
CN107783988B (en) | Method and equipment for locking directory tree | |
CN109298835B (en) | Data archiving processing method, device, equipment and storage medium of block chain | |
CA2548084A1 (en) | Method and apparatus for data storage using striping | |
CN107368260A (en) | Memory space method for sorting, apparatus and system based on distributed system | |
CN111339078A (en) | Data real-time storage method, data query method, device, equipment and medium | |
US10248693B2 (en) | Multi-layered row mapping data structure in a database system | |
CN114490527B (en) | Metadata retrieval method, system, terminal and storage medium | |
CN107408132B (en) | Method and system for moving hierarchical data objects across multiple types of storage | |
CN111400334B (en) | Data processing method, data processing device, storage medium and electronic device | |
CN109460345B (en) | Real-time data calculation method and system | |
CN111708894A (en) | Knowledge graph creating method | |
US20220342888A1 (en) | Object tagging | |
KR102354343B1 (en) | Spatial indexing method and apparatus for blockchain-based geospatial data | |
CN113485639B (en) | IO speed optimization method, system, terminal and storage medium for distributed storage | |
CN115878625A (en) | Data processing method and device and electronic equipment | |
US11580128B2 (en) | Preventing DBMS deadlock by eliminating shared locking | |
CN116821076A (en) | File searching method and device based on electronic equipment | |
CN114443583A (en) | Method, device and equipment for arranging fragment space and storage medium | |
EP3995972A1 (en) | Metadata processing method and apparatus, and computer-readable storage medium | |
CN115686343A (en) | Data updating method and device | |
CN113950145B (en) | Data processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||