CN112115170A - Metadata caching method, system, equipment and medium - Google Patents

Metadata caching method, system, equipment and medium

Info

Publication number
CN112115170A
Authority
CN
China
Prior art keywords
cache
metadata
read
layer
cache layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010988143.3A
Other languages
Chinese (zh)
Other versions
CN112115170B (en)
Inventor
司龙湖
胡永刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202010988143.3A priority Critical patent/CN112115170B/en
Publication of CN112115170A publication Critical patent/CN112115170A/en
Application granted granted Critical
Publication of CN112115170B publication Critical patent/CN112115170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/23 Updating

Abstract

The invention discloses a metadata caching method comprising: in response to receiving a request to read metadata, determining whether the metadata to be read exists in a first cache layer; in response to the metadata to be read not being in the first cache layer, determining whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer; in response to their absence, reading the plurality of objects corresponding to the metadata to be read from the underlying storage pool; and updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool while notifying other nodes to update the second cache layers under those nodes. The invention also discloses a system, a computer device and a readable storage medium. The scheme provided by the invention splits the cache into two layers: the second cache layer is close to the underlying storage layer, the first cache layer is close to the business logic layer, and the two-level cache separates the business-logic-layer cache from the underlying storage cache.

Description

Metadata caching method, system, equipment and medium
Technical Field
The invention relates to the field of distributed object storage, and in particular to a metadata caching method, system, device and storage medium.
Background
Performance is one of the key evaluation metrics of a storage system, embodied in IOPS and bandwidth. When the access path cannot be shortened, the most critical technique for improving storage-system performance is caching, because accessing memory is far faster than accessing the actual storage devices and the network. A distributed object storage system holds two kinds of data: objects and system metadata. Objects are large and individually accessed infrequently, and caching a single object in memory offers poor cost-effectiveness because cache consistency must still be maintained across the distributed system. System metadata has the opposite characteristics: compared with objects it is accessed frequently (every request must touch it), it is small, and it is rarely modified. A suitable scheme for caching system metadata is therefore very helpful for improving storage-system performance.
Disclosure of Invention
In view of the above, in order to overcome at least one aspect of the above problems, an embodiment of the present invention provides a metadata caching method, including:
in response to receiving a request to read metadata, determining whether the metadata to be read exists in a first cache layer;
in response to the metadata to be read not being in the first cache layer, determining whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer;
in response to the second cache layer not containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects corresponding to the metadata to be read from the underlying storage pool;
updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and notifying other nodes to update the second cache layers under the other nodes.
In some embodiments, updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool further comprises:
creating a cache unit in the second cache layer for each object, wherein the KEY of the cache unit is composed of the object's name and the storage pool's name, and the value is the object itself.
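As an illustrative sketch only (the separator and names are assumptions, not the patent's identifiers), a cache-unit KEY combining the storage pool name and the object name might be composed like this:

```python
def make_cache_key(pool_name: str, object_name: str) -> str:
    """Compose a second-cache-layer cache-unit KEY from the storage
    pool name and the object name, as described above."""
    # A separator that cannot appear in either name keeps keys unambiguous.
    return f"{pool_name}\x00{object_name}"

# Each cache unit maps that KEY to the raw object read from the pool.
cache_unit = {make_cache_key("meta_pool", "user.alice"): b"<raw object bytes>"}
```

Using a reserved separator avoids collisions such as pool `p1` with object `o` versus pool `p` with object `1o`.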
In some embodiments, further comprising:
in response to the metadata to be read being in the first cache layer, obtaining the metadata to be read from the first cache layer and returning it;
in response to the second cache layer containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects from the second cache layer and assembling them into the metadata to be read before returning it.
In some embodiments, further comprising:
assembling the objects in the relevant cache units of the second cache layer into metadata and returning the metadata to the first cache layer;
recording information about the assembled metadata in the first cache layer;
storing the information records into the values of the corresponding cache units of the second cache layer.
In some embodiments, further comprising:
in response to receiving a request to update metadata, writing the updated metadata to a plurality of objects in the underlying storage pools;
after each object is updated, locating the cache unit in the second cache layer by the KEY composed of the object's name and the storage pool's name, and replacing the object in the located cache unit with the updated object.
In some embodiments, further comprising:
locating the cached data in the first cache layer according to the first-cache-layer information in the located cache unit, and invalidating the located cached data in the first cache layer.
In some embodiments, further comprising:
sending the KEY composed of the object's name and the storage pool's name, together with the updated object, to other nodes, so that the other nodes update the corresponding cache units in their own second cache layers.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a metadata caching system, including:
a first determination module configured to determine, in response to receiving a request to read metadata, whether the metadata to be read exists in a first cache layer;
a second determination module configured to determine, in response to the metadata to be read not being in the first cache layer, whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer;
an obtaining module configured to read, in response to the second cache layer not containing the plurality of objects corresponding to the metadata to be read, the plurality of objects from the underlying storage pool;
an update module configured to update the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and to notify other nodes to update the second cache layers under the other nodes.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor executes the program to perform any of the steps of the metadata caching method described above.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor performs the steps of any of the metadata caching methods described above.
The invention provides at least the following beneficial technical effect: the proposed scheme splits the cache into two layers, with the second cache layer close to the underlying storage layer and the first cache layer close to the business logic layer, so that the two-level cache separates the business-logic-layer cache from the underlying storage cache.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other embodiments from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a metadata caching method according to an embodiment of the present invention;
fig. 2 is a flowchart of a metadata caching method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a metadata cache system according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following embodiments of the present invention are described in further detail with reference to the accompanying drawings.
It should be noted that the expressions "first" and "second" in the embodiments of the present invention are used to distinguish two entities or parameters that share a name. "First" and "second" are merely for convenience of description, should not be construed as limiting the embodiments, and are not explained further in the following embodiments.
According to an aspect of the present invention, an embodiment of the present invention provides a metadata caching method, as shown in fig. 1, which may include the steps of:
S1, in response to receiving a request to read metadata, determining whether the metadata to be read exists in the first cache layer;
S2, in response to the metadata to be read not being in the first cache layer, determining whether a plurality of objects corresponding to the metadata to be read exist in the second cache layer;
S3, in response to the second cache layer not containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects from the underlying storage pool;
S4, updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and notifying other nodes to update the second cache layers under the other nodes.
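The read path of steps S1-S4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class, method and field names are hypothetical, and a real system must also handle concurrency, eviction and failures.

```python
class TwoTierMetadataCache:
    """Sketch of the S1-S4 read path: a first cache tier holding
    assembled metadata, a second tier holding raw objects, and an
    underlying storage pool as the source of truth."""

    def __init__(self, storage_pool, peers=()):
        self.first_tier = {}    # metadata name -> assembled metadata
        self.second_tier = {}   # (pool name, object name) -> raw object
        self.storage_pool = storage_pool  # pool name -> {object name: object}
        self.peers = list(peers)          # other nodes to notify on update

    def read_metadata(self, name, pool, object_names):
        # S1: hit in the first cache tier?
        if name in self.first_tier:
            return self.first_tier[name]
        # S2: are all corresponding objects in the second cache tier?
        keys = [(pool, o) for o in object_names]
        if all(k in self.second_tier for k in keys):
            objs = [self.second_tier[k] for k in keys]
        else:
            # S3: fall back to the underlying storage pool.
            objs = [self.storage_pool[pool][o] for o in object_names]
            # S4: update the second tier and notify the other nodes.
            for k, obj in zip(keys, objs):
                self.second_tier[k] = obj
            for peer in self.peers:
                peer.update_second_tier(dict(zip(keys, objs)))
        # Assemble the raw objects into metadata and fill the first tier.
        metadata = self.assemble(objs)
        self.first_tier[name] = metadata
        return metadata

    @staticmethod
    def assemble(objs):
        # Placeholder for the assembly step described in the patent.
        return tuple(objs)
```

A second read of the same metadata is then served from the first tier without touching the second tier or the storage pool.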
The scheme provided by the invention splits the cache into two layers: the second cache layer is close to the underlying storage layer, the first cache layer is close to the business logic layer, and the two-level cache separates the business-logic-layer cache from the underlying storage cache.
In some embodiments, as shown in FIG. 2, the data plane has four levels from bottom to top: a data persistence layer, an original data cache layer, a user data cache layer and a business logic layer.
The data persistence layer (the underlying storage pools) is responsible for data persistence and comprises several storage pools and objects. A distributed object store may contain multiple storage pools, each holding a different number of objects; an object is the smallest storage unit in a storage pool.
Each cache unit in the original data cache layer (the second cache layer) holds an object read from a storage pool that has not yet undergone task processing (mostly assembly); the set of cache units in the original data cache layer is in fact a subset of the data persistence layer's objects.
The user data cache layer (the first cache layer) can be divided into several parts according to actual business requirements. The key data of the object storage system are users and buckets, and the metadata of a single user or bucket is assembled from several objects of the data persistence layer. The user data cache layer is therefore built on top of the original data cache layer, and its updates are driven by cache updates in the original data cache layer.
The business logic layer is responsible for processing specific business logic; within the whole architecture it interacts mainly with the data persistence layer, the original data cache layer and the user data cache layer.
The data persistence layer as a whole is a shared storage resource pool, while the original data cache layer and the user data cache layer are implemented in memory and, together with the business logic layer, belong to a single node. When multiple nodes process requests, cache consistency of the original and user data cache layers across nodes must be considered; because the user data cache layer is built on the original data cache layer, only the cache consistency of the original data cache layer actually needs to be solved.
In some embodiments, step S4, updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, further comprises:
creating a cache unit in the second cache layer for each object, wherein the KEY of the cache unit is composed of the object's name and the storage pool's name, and the value is the object itself.
Specifically, the original data cache layer (second cache layer) may be implemented with the LRU algorithm; it is essentially a length-limited map, i.e. it can hold only a limited number of cache units. The KEY of each cache unit is composed of the storage pool name and the object name. Its value contains not only the object but also a record of which part of the user data cache layer uses the object (for example, the user cache or the bucket cache) and how to locate the corresponding entry in the user data cache layer. For example, if the object of cache unit A in the original data cache layer is used by user cache B of the user data cache layer, then the value of cache unit A must record user cache B together with B's KEY or index information in the first cache layer.
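A minimal sketch of such a length-limited LRU map, built on Python's `OrderedDict`, is shown below. The class and field names ("users" for the back-references into the user data cache layer) are illustrative assumptions, not the patent's identifiers.

```python
from collections import OrderedDict

class RawDataCache:
    """Length-limited LRU map of cache units, keyed by
    (storage pool name, object name)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.units = OrderedDict()  # key -> {"object": ..., "users": set()}

    def get(self, pool: str, obj_name: str):
        key = (pool, obj_name)
        if key not in self.units:
            return None
        self.units.move_to_end(key)  # mark as most recently used
        return self.units[key]

    def put(self, pool: str, obj_name: str, obj) -> None:
        key = (pool, obj_name)
        # The value records the object plus which first-tier entries use it.
        self.units[key] = {"object": obj, "users": set()}
        self.units.move_to_end(key)
        if len(self.units) > self.capacity:
            self.units.popitem(last=False)  # evict the least recently used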
In some embodiments, further comprising:
in response to the metadata to be read being in the first cache layer, obtaining the metadata to be read from the first cache layer and returning it;
in response to the second cache layer containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects from the second cache layer and assembling them into the metadata to be read before returning it.
Specifically, as shown in fig. 2, if the metadata to be read exists in the user data cache layer, it can be read directly from there. If the corresponding objects are not in the user data cache layer but are present in the original data cache layer, they can be read directly from the original data cache layer and assembled into the requested metadata. If the objects exist in neither cache layer, they must be fetched from the data persistence layer and then assembled into the requested metadata before returning.
In some embodiments, further comprising:
assembling the objects in the relevant cache units of the second cache layer into metadata and returning the metadata to the first cache layer;
recording information about the assembled metadata in the first cache layer;
storing the information records into the values of the corresponding cache units of the second cache layer.
Specifically, after the objects corresponding to the metadata have been read from the original data cache layer (or fetched from the data persistence layer) and assembled into the requested metadata, the user data cache layer and/or the original data cache layer may be updated. Updating the original data cache layer follows the logic described above and is not repeated here. When updating the user data cache layer, the objects in the relevant cache units of the original data cache layer are assembled into metadata and returned to the user data cache layer, and a record of which part of the user data cache layer uses the metadata, together with the corresponding locating information, is written back into the values of the corresponding cache units of the original data cache layer.
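The write-back of usage records into the cache-unit values can be sketched as follows. This is a hypothetical helper under assumed data shapes (a dict of cache units with "object" and "users" fields); the patent itself does not specify these structures.

```python
def assemble_and_register(raw_cache, user_cache, cache_type, index, keys):
    """Assemble the objects of the given second-tier cache units into
    metadata, store it in the first tier, and record in each cache
    unit which first-tier entry now depends on it."""
    objs = [raw_cache[k]["object"] for k in keys]
    metadata = tuple(objs)            # placeholder assembly step
    user_cache[(cache_type, index)] = metadata
    for k in keys:
        # Back-reference: lets an update of this unit later locate and
        # invalidate the assembled first-tier entry.
        raw_cache[k]["users"].add((cache_type, index))
    return metadata
```

The back-references are exactly what the invalidation step of the update path consumes.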
Thus, when reading metadata, the business logic layer first determines the cache type (for example, bucket cache or user cache) of the requested metadata and the cache index within that type in the user data cache layer, then tries to locate the cached entry directly through the index. If the entry is found, the data is returned immediately. If not, the objects of the relevant storage pools are read through the read path between the business logic layer and the data persistence layer; during this read the original data cache layer is also probed, a hit is used directly, and a miss is served from the underlying storage, after which the original data cache layer is updated. The cache KEYs updated in the original data cache layer during this process are recorded temporarily; finally, the relevant cache units are located through these KEYs so as to record which part of the user data cache layer uses them, the bucket (or user) cache is updated, and the relevant information is returned.
In some embodiments, further comprising:
in response to receiving a request to update metadata, writing the updated metadata to a plurality of objects in the underlying storage pools;
after each object is updated, locating the cache unit in the second cache layer by the KEY composed of the object's name and the storage pool's name, and replacing the object in the located cache unit with the updated object.
In some embodiments, further comprising:
locating the cached data in the first cache layer according to the first-cache-layer information in the located cache unit, and invalidating the located cached data in the first cache layer.
In some embodiments, further comprising:
sending the KEY composed of the object's name and the storage pool's name, together with the updated object, to other nodes, so that the other nodes update the corresponding cache units in their own second cache layers.
Specifically, when the business logic layer updates metadata, the write logic involves not only the business logic layer and the data persistence layer but also the objects in the metadata storage pools. As shown in fig. 2, when each object is updated, the data persistence layer is written first, and the original data cache layer is then updated using the KEY composed of the storage pool name and the object name together with the new object. During this process the cache unit in the original data cache layer can be located, and the associated entries in the user data cache layer are invalidated through the cache-type and locating information stored in the cache unit, so that subsequent reads cannot see stale entries in the user data cache layer. Finally, the KEY composed of the storage pool name and the object name, together with the updated object, is sent to the other nodes through a distributed cache notification mechanism, notifying them to update the cache units identified by that KEY.
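The update path just described (persist first, refresh the second-tier cache unit, invalidate the dependent first-tier entries, then notify the peers) can be sketched as follows. All names and data shapes are illustrative assumptions consistent with the earlier sketches, not the patent's implementation.

```python
def update_object(pool_store, raw_cache, user_cache, peers,
                  pool, obj_name, new_obj):
    """Sketch of the metadata write path for a single object."""
    # 1. Write the data persistence layer (underlying storage pool) first.
    pool_store[pool][obj_name] = new_obj
    key = (pool, obj_name)
    # 2. Locate and refresh the second-tier cache unit, if cached.
    unit = raw_cache.get(key)
    if unit is not None:
        unit["object"] = new_obj
        # 3. Invalidate every first-tier entry assembled from this unit,
        #    so reads cannot return stale assembled metadata.
        for user_key in unit["users"]:
            user_cache.pop(user_key, None)
        unit["users"].clear()
    # 4. Distributed cache notification: send the KEY and the new object
    #    to the peers so they refresh their own second-tier cache units.
    for peer in peers:
        peer.update_second_tier({key: new_obj})
```

Writing the persistence layer before touching either cache tier keeps the storage pool authoritative even if a node fails mid-update.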
The scheme provided by the invention divides the whole system into a data persistence layer, an original data cache layer, a user data cache layer and a business logic layer. The original data cache layer sits close to the data persistence layer and is a subset of it; the user data cache layer sits close to the business logic layer, provides cache entries the business logic can use directly, is divided into several modules according to the actual business logic, and is built on top of the original data cache layer. Consistency of the original data cache layer is guaranteed by a distributed cache notification module, and consistency of the user data cache is guaranteed by the invalidation and update mechanisms between the two cache layers. The two-level cache separates the underlying storage cache from the business-layer cache, the distributed cache notification mechanism keeps the caches consistent, and the performance of the storage system is improved.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a metadata caching system 400, as shown in fig. 3, including:
a first determination module 401 configured to determine, in response to receiving a request to read metadata, whether the metadata to be read exists in a first cache layer;
a second determination module 402 configured to determine, in response to the metadata to be read not being in the first cache layer, whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer;
an obtaining module 403 configured to read, in response to the second cache layer not containing the plurality of objects corresponding to the metadata to be read, the plurality of objects from the underlying storage pool;
an update module 404 configured to update the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and to notify other nodes to update the second cache layers under the other nodes.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 4, an embodiment of the present invention further provides a computer apparatus 501, including:
at least one processor 520; and
the memory 510, the memory 510 storing a computer program 511 executable on the processor, the processor 520 executing the program to perform any of the steps of the metadata caching method as described above.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 5, an embodiment of the present invention further provides a computer-readable storage medium 601, where the computer-readable storage medium 601 stores computer program instructions 610, and the computer program instructions 610, when executed by a processor, perform the steps of any one of the above metadata caching methods.
Finally, it should be noted that, as will be understood by those skilled in the art, all or part of the processes of the methods of the above embodiments may be implemented by a computer program instructing related hardware.
Further, it should be appreciated that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments of the present invention.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the present disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments of the invention may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The numbers of the embodiments disclosed in the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is exemplary only and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the idea of the embodiments of the invention, technical features of the above embodiment or of different embodiments may also be combined, and many other variations of different aspects of the embodiments exist that are not detailed here for brevity. Any omissions, modifications, substitutions, improvements and the like made without departing from the spirit and principles of the embodiments of the present invention are intended to be included within their scope.

Claims (10)

1. A metadata caching method is characterized by comprising the following steps:
in response to receiving a request to read metadata, determining whether the metadata to be read exists in a first cache layer;
in response to the metadata to be read not being in the first cache layer, determining whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer;
in response to the second cache layer not containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects corresponding to the metadata to be read from the underlying storage pool;
updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and notifying other nodes to update the second cache layers under the other nodes.
2. The method of claim 1, wherein updating the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool further comprises:
creating a cache unit in the second cache layer for each object, wherein the KEY of the cache unit is composed of the object's name and the storage pool's name, and the value is the object itself.
3. The method of claim 1, further comprising:
in response to the metadata to be read being in the first cache layer, obtaining the metadata to be read from the first cache layer and returning it;
in response to the second cache layer containing the plurality of objects corresponding to the metadata to be read, reading the plurality of objects from the second cache layer and assembling them into the metadata to be read before returning it.
4. The method of claim 2 or 3, further comprising:
assembling the objects in the cache units of the second cache layer into metadata and returning the metadata to the first cache layer;
recording related information about the assembled metadata in the first cache layer;
storing the related-information records into the values of the corresponding cache units of the second cache layer, respectively.
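Claims 2 and 4 together describe a per-object cache unit keyed by object name plus pool name, whose value holds the object and a back-reference to the first-layer entries assembled from it. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class CacheUnit:
    # KEY of the cache unit: object name plus storage-pool name (claim 2).
    obj_name: str
    pool_name: str
    # Value: the object itself plus, per claim 4, a record of the
    # first-layer metadata entries assembled from this object.
    payload: bytes = b""
    related_meta: set = field(default_factory=set)

    @property
    def key(self):
        return (self.obj_name, self.pool_name)


def assemble(units, meta_name, first_layer):
    """Assemble objects into metadata, store it in the first cache layer,
    and record the relation back in each cache unit (claim 4)."""
    meta = b"".join(u.payload for u in units)
    first_layer[meta_name] = meta
    for u in units:
        u.related_meta.add(meta_name)
    return meta
```

The back-reference in `related_meta` is what later allows an object update to find and invalidate the first-layer entries built from it.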
5. The method of claim 1, further comprising:
in response to receiving a request to update metadata, writing the updated metadata to a plurality of objects in the underlying storage pool;
after each object is updated, locating the cache unit in the second cache layer according to the KEY formed by the name of the object and the name of the storage pool, and updating the object in the located cache unit with the updated object.
6. The method of claim 5, further comprising:
locating the cached data in the first cache layer according to the first-cache-layer-related information in the located cache unit, and invalidating the located cached data in the first cache layer.
7. The method of claim 5, further comprising:
sending the KEY consisting of the name of the object and the name of the storage pool, together with the updated object, to the other nodes, so that the other nodes update the corresponding cache units in their second cache layers.
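The update path of claims 5 through 7 — write the objects to the pool, locate each second-layer cache unit by KEY, invalidate the related first-layer entries, and push the change to peers — can be sketched as one function. The dict-based structures and parameter names are assumptions for illustration only.

```python
def update_metadata(new_objects, storage, second_layer, first_layer,
                    related, peers):
    """Sketch of the update path in claims 5-7 (names are illustrative).

    new_objects  : {(obj_name, pool_name): payload} produced by the update
    storage      : underlying pools, pool_name -> {obj_name: payload}
    related      : (obj_name, pool_name) -> set of first-layer metadata
                   names, i.e. the 'related information' of claim 6
    peers        : remote second cache layers, modeled here as dicts
    """
    for key, payload in new_objects.items():
        obj_name, pool_name = key
        # Claim 5: write the updated object to the underlying storage pool.
        storage.setdefault(pool_name, {})[obj_name] = payload
        # Locate the cache unit in the second layer by its KEY and update it.
        if key in second_layer:
            second_layer[key] = payload
            # Claim 6: invalidate first-layer entries assembled from it.
            for meta_name in related.get(key, ()):
                first_layer.pop(meta_name, None)
        # Claim 7: send the KEY and updated object to the other nodes.
        for peer in peers:
            peer[key] = payload
```

Invalidating rather than rebuilding the first-layer entry keeps the update cheap; the entry is reassembled lazily on the next read.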
8. A metadata caching system, comprising:
a first determination module configured to, in response to receiving a request to read metadata, determine whether the metadata to be read exists in a first cache layer;
a second determination module configured to, in response to the metadata to be read not existing in the first cache layer, determine whether a plurality of objects corresponding to the metadata to be read exist in a second cache layer;
an acquisition module configured to, in response to the plurality of objects corresponding to the metadata to be read not existing in the second cache layer, read the plurality of objects corresponding to the metadata to be read from an underlying storage pool;
an update module configured to update the first cache layer and the second cache layer with the plurality of objects read from the underlying storage pool, and to notify other nodes to update the second cache layers under the other nodes.
9. A computer device, comprising:
at least one processor; and
a memory storing a computer program operable on the processor, wherein the processor, when executing the program, performs the steps of the method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method according to any one of claims 1-7.
CN202010988143.3A 2020-09-18 2020-09-18 Metadata caching method, system, equipment and medium Active CN112115170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010988143.3A CN112115170B (en) 2020-09-18 2020-09-18 Metadata caching method, system, equipment and medium

Publications (2)

Publication Number Publication Date
CN112115170A true CN112115170A (en) 2020-12-22
CN112115170B CN112115170B (en) 2022-12-06

Family

ID=73799817

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010988143.3A Active CN112115170B (en) 2020-09-18 2020-09-18 Metadata caching method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN112115170B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104317736A (en) * 2014-09-28 2015-01-28 曙光信息产业股份有限公司 Method for implementing multi-level caches in distributed file system
CN104866434A (en) * 2015-06-01 2015-08-26 北京圆通慧达管理软件开发有限公司 Multi-application-oriented data storage system and data storage and calling method
CN107273522A (en) * 2015-06-01 2017-10-20 明算科技(北京)股份有限公司 Towards the data-storage system and data calling method applied more
CN105573673A (en) * 2015-12-11 2016-05-11 芜湖乐锐思信息咨询有限公司 Database based data cache system
CN106843770A (en) * 2017-01-23 2017-06-13 北京思特奇信息技术股份有限公司 A kind of distributed file system small file data storage, read method and device

Similar Documents

Publication Publication Date Title
CN103890709B (en) Key value database based on caching maps and replicates
US11314717B1 (en) Scalable architecture for propagating updates to replicated data
US11977532B2 (en) Log record identification using aggregated log indexes
CN110109873B (en) File management method for message queue
CN112148678B (en) File access method, system, device and medium
US11263080B2 (en) Method, apparatus and computer program product for managing cache
WO2018133762A1 (en) File merging method and apparatus
EP2901316A2 (en) Method and system for memory efficient, update optimized, transactional full-text index view maintenance
US8032570B2 (en) Efficient stacked file system and method
CN105653539A (en) Index distributed storage implement method and device
CN112115170B (en) Metadata caching method, system, equipment and medium
CN105808451B (en) Data caching method and related device
US10521398B1 (en) Tracking version families in a file system
CN112527804B (en) File storage method, file reading method and data storage system
US20140214899A1 (en) Leaf names and relative level indications for file system objects
CN114647658A (en) Data retrieval method, device, equipment and machine-readable storage medium
CN112286873A (en) Hash tree caching method and device
CN115470243A (en) Method and device for accelerating data processing
CN113760875A (en) Data processing method and device, electronic equipment and storage medium
CN112307272A (en) Method and device for determining relation information between objects, computing equipment and storage medium
CN111680069A (en) Database access method and device
US7979638B2 (en) Method and system for accessing data using an asymmetric cache device
CN109918355A (en) Realize the virtual metadata mapped system and method for the NAS based on object storage service
CN110008188A (en) A kind of application software external memory limit system of file system level
CN115981570B (en) Distributed object storage method and system based on KV database

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant