CN115374301B - Cache device, method and system for realizing graph query based on cache device - Google Patents

Cache device, method and system for realizing graph query based on cache device

Info

Publication number
CN115374301B
CN115374301B (application CN202211302696.4A)
Authority
CN
China
Prior art keywords
cache
cache region
key
region
negative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211302696.4A
Other languages
Chinese (zh)
Other versions
CN115374301A (en)
Inventor
文豪
吴敏
叶小萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ouruozhi Technology Co ltd
Original Assignee
Hangzhou Ouruozhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ouruozhi Technology Co ltd filed Critical Hangzhou Ouruozhi Technology Co ltd
Priority to CN202211302696.4A priority Critical patent/CN115374301B/en
Publication of CN115374301A publication Critical patent/CN115374301A/en
Application granted granted Critical
Publication of CN115374301B publication Critical patent/CN115374301B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 Querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/24552 Database cache management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to a cache device and to a method and system for realizing graph query based on the cache device. The cache device comprises a block cache region, a vertex cache region and a negative cache region, wherein the block cache region is a cache region inside a KVS storage engine; the block cache region is used for caching key-values of graph data and for fast access to the meta information and edge information of the KVS storage engine; the vertex cache region is used for caching the keys and attributes of points; and the negative cache region is used for caching the keys of points that have no attributes. Through the present application, the graph query problems in the related art are solved, the space utilization rate of the cache is improved, and the performance overhead is reduced.

Description

Cache device, method and system for realizing graph query based on cache device
Technical Field
The present application relates to the field of graph query, and in particular, to a cache device, and a method and a system for implementing graph query based on the cache device.
Background
In the current big data era, the data generated and consumed is complex, and most of it is unstructured. For unstructured data, a Key-Value Store (abbreviated KVS) is the most suitable storage engine; common KVSs include LevelDB, RocksDB, and the like. To sustain today's access to large amounts of data per second, a KVS typically provides a block cache to speed up access to the underlying disk.
However, the existing cache devices in KVSs cannot satisfy the following two characteristics of graph query well: 1. the spatial locality of points is poor; 2. there is a large amount of access to nonexistent keys. Specifically, first, since the block cache uses a block as its basic unit of operation, one block is often much larger than one point, so the storage unit size is mismatched; when the spatial locality of points is poor, the cache miss ratio becomes high and the utilization efficiency of the cache space is low. Second, although a KVS tends to use bloom filters to speed up the filtering of nonexistent keys, their overhead is still not negligible, especially for graph data queries that pursue high performance. Moreover, this overhead increases with the amount of data: in RocksDB, for example, determining that a key does not exist requires querying each level of the LSM tree, and the query at each level mainly comprises a key-range comparison and a bloom filter lookup, so at least two memory accesses are required per level; this overhead grows linearly as the data grows. In addition, considering that points and edges in graph data may have multiple schemas, if most points or edges carry data under only one or a few schemas, graph queries may end up querying far more nonexistent keys than existing keys, wasting a large amount of performance resources.
Therefore, an effective solution is needed to solve the graph query problem in the related art.
Disclosure of Invention
The embodiment of the application provides a cache device, and a method and a system for realizing graph query based on the cache device, so as to at least solve the graph query problem in the related art.
In a first aspect, an embodiment of the present application provides a cache device for graph query, where the cache device includes a block cache region, a vertex cache region, and a negative cache region, and the block cache region is a cache region inside a KVS storage engine;
the block cache region is used for caching key-values of graph data and for fast access to the meta information and edge information of the KVS storage engine;
the vertex cache region is used for caching the keys and attributes of points;
and the negative cache region is used for caching the keys of points that have no attributes.
In some of these embodiments, given the available memory,
the system presets the memory allocation priority ratio of the block cache region, the vertex cache region and the negative cache region in the cache device;
or the memory allocation priority ratio of the block cache region, the vertex cache region and the negative cache region in the cache device is adjusted according to cache operation statistics collected within a preset time.
In some embodiments, whether to write the latest data into the cache or to swap the least-accessed data out of the cache is determined according to the priority ratio of the cache device.
In some embodiments, data read/write operations in the cache device are protected by a lock.
In some embodiments, the caching device is located in the storage layer.
In a second aspect, an embodiment of the present application provides a method for implementing graph query based on the cache device according to the first aspect, where the method includes:
determining whether the cache engine interface can, in a single cache access, read a graph data key held in the vertex cache region or the negative cache region;
if not, accessing the vertex cache region and the negative cache region separately, and querying and reading the corresponding graph data information;
and if so, directly accessing the cache device through the cache engine interface, and querying and reading the corresponding graph data information.
In some embodiments, accessing the vertex cache region and the negative cache region separately, and querying and reading the corresponding graph data information, includes:
accessing the negative cache region and returning directly on a cache hit; on a miss, continuing to access the vertex cache region, obtaining and returning the relevant value on a hit, and continuing to access the KVS storage engine on a miss;
if the key is found in the KVS storage engine, judging whether a vertex cache region exists and, if so, writing the key and the corresponding value into the vertex cache region; and if the key is not found in the KVS storage engine, judging whether a negative cache region exists and, if so, writing the key into the negative cache region.
In some embodiments, directly accessing the cache device through the cache engine interface, and querying and reading the corresponding graph data information, includes:
accessing the cache device through the cache engine interface; on a cache hit, reading the value corresponding to the key, judging from that value whether the key hit the vertex cache region or the negative cache region, and returning the corresponding result; on a cache miss, continuing to access the KVS storage engine;
if the key is found in the KVS storage engine, judging whether a vertex cache region exists and, if so, writing the key and the corresponding value into the vertex cache region; and if the key is not found in the KVS storage engine, judging whether a negative cache region exists and, if so, writing the key into the negative cache region.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to the second aspect.
In a fourth aspect, embodiments of the present application provide a storage medium, on which a computer program is stored, which when executed by a processor, implements the method according to the second aspect.
Compared with the related art, the cache device for graph query provided by the embodiments of the present application comprises a block cache region, a vertex cache region and a negative cache region, wherein the block cache region is a cache region inside the KVS storage engine; the block cache region is used for caching key-values of graph data and for fast access to the meta information and edge information of the KVS storage engine; the vertex cache region is used for caching the keys and attributes of points; and the negative cache region is used for caching the keys of points that have no attributes.
Compared with the block cache in a conventional KVS storage engine, the cache constructed by the present application has finer granularity: through the separate cache regions constructed here, different points in the graph data are cached in their own partitions, so the storage unit matches the size of a point, which effectively improves the space utilization rate of the cache. In addition, the keys of points that have no attributes are stored in the negative cache region, which speeds up access to the keys of attribute-less points, improves query efficiency, and reduces performance overhead. Furthermore, the cache device constructed by the present application retains the block cache region inside the KVS storage engine, ensuring fast access to the meta information and edges in the KVS.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic diagram of a caching apparatus for graph query according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a graph query flow according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a graph query flow according to an embodiment of the present application;
fig. 4 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning as understood by those of ordinary skill in the art to which this application belongs. Reference to "a," "an," "the," and similar words throughout this application are not to be construed as limiting in number, and may refer to the singular or the plural. The present application is directed to the use of the terms "including," "comprising," "having," and any variations thereof, which are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. Reference to "connected," "coupled," and the like in this application is not intended to be limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "and/or" describes an association relationship of associated objects, meaning that three relationships may exist, for example, "A and/or B" may mean: a exists alone, A and B exist simultaneously, and B exists alone. Reference herein to the terms "first," "second," "third," and the like, are merely to distinguish similar objects and do not denote a particular ordering for the objects.
In the related art, graph queries have two features: a. the spatial locality of points is poor; b. there is a large amount of key access to points that have no attributes.
First, with respect to feature a: querying a graph requires traversing it, and traversing a graph often takes multiple iterations. The flow can be described simply as follows: for the current input point A, its neighbor set N = {A1, A2, A3, … An} is found, and this set is used as the input of the next iteration. A neighbor set is then found for each point in N, and these steps are repeated until the graph is completely traversed.
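The iterative expansion described above can be sketched as follows. This is an illustrative reading of the flow, where `graph` is an assumed adjacency mapping and the function name is not from the patent:

```python
def traverse(graph, start):
    """Expand neighbor sets iteratively until the reachable graph is covered."""
    visited = {start}
    frontier = {start}
    order = [start]
    while frontier:
        # For every point in the current input set, gather its neighbor set N.
        next_frontier = set()
        for point in frontier:
            for neighbor in graph.get(point, ()):
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_frontier.add(neighbor)
                    order.append(neighbor)
        # The union of the neighbor sets becomes the input of the next iteration.
        frontier = next_frontier
    return order
```

Each `frontier` here corresponds to one iteration's input set N; in a distributed graph database each point in that set may live on a different shard, which is exactly why spatial locality is poor.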
It can be seen from the graph traversal process described above that for each iteration, all neighbors of a point need to be visited. In a distributed graph database, points are randomly distributed in different data fragments through hashing, so that finding a neighbor of a point often requires accessing multiple disks and multiple storage nodes. And as the number of iterations increases, the number of random accesses also increases exponentially. Therefore, in graph queries, the spatial locality of points is poor.
With respect to feature b: one notable characteristic of graphs is that they are schema-less. The schema of any point or edge is not fixed; it can be any schema and can change at any time. For example, a point representing a person may define attributes under several schemas: student, researcher, etc. A graph query is required to return the attributes of all schemas on a point or edge. For example, the query MATCH (n:Person) RETURN n requires finding a point with the Person schema and returning all the attributes at that point. However, it is impossible to predict at query time which schemas actually hold attributes at a point, so for every possible schema a query key must be constructed and the underlying storage engine accessed. Yet not all points and edges have attributes under every schema, so if keys are constructed and the underlying storage accessed even for points without attributes, graph queries end up performing a large amount of key access to attribute-less points.
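A minimal sketch of the point made above — one probe key must be built per possible schema, and most probes may miss. The `vertex:tag` key format and the function names are assumptions for illustration, not the engine's real encoding:

```python
def build_candidate_keys(vertex_id, schemas):
    """Build one probe key per (vertex, schema) pair.

    Every schema must be probed because it cannot be known in advance
    which schemas actually carry attributes for this vertex.
    """
    return [f"{vertex_id}:{tag}" for tag in schemas]


def fetch_all_attributes(vertex_id, schemas, store):
    """Probe the backing store with every candidate key.

    Most probes may miss; that wasted work is what the negative
    cache region is meant to absorb.
    """
    hits, misses = {}, []
    for key in build_candidate_keys(vertex_id, schemas):
        if key in store:
            hits[key] = store[key]
        else:
            misses.append(key)
    return hits, misses
```

If a vertex carries data under only one of three schemas, two of the three probes reach the storage engine for nothing.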
The cache devices in existing KVSs cannot satisfy the above two features well. Therefore, to solve the graph query problem in the prior art, the present application provides a cache device for graph query. Fig. 1 is a schematic diagram of a cache device for graph query according to an embodiment of the present application. As shown in Fig. 1, the cache device (nebula cache) includes a block cache region (block cache), a vertex cache region (vertex cache), and a negative cache region (negative cache), where the block cache region is a cache region inside the KVS storage engine (RocksDB). In addition, the cache device is located in the storage node.
Specifically, the block cache region is used for caching key-values of graph data and enables fast access to the meta information and edge information of the KVS storage engine; the vertex cache region is used for caching the keys and attributes of points; and the negative cache region is used for caching the keys of points that have no attributes.
Through this cache device, two cache regions, namely the vertex cache region and the negative cache region, are added on top of the storage layer's original block cache region, replacing the default large-block caching mode of the prior art, satisfying the two major characteristics of graph query, and effectively improving performance.
It should be noted that the block cache region is the original cache in the KVS storage engine and is retained in the cache device of this embodiment to ensure fast access to the meta information and edges of the KVS. The vertex cache region and the negative cache region are new cache regions set up in the cache device. Caching the keys and attribute values of points in the vertex cache region solves the unit-size mismatch that arises in conventional graph query when caching by whole blocks, improving the space utilization rate of the cache; caching the keys of attribute-less points in the negative cache region effectively reduces the number of traversal accesses during graph query and reduces the query cost for such points.
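The three-region layout described above can be summarized in a minimal sketch. The class and field names are assumed for illustration; the block cache is deliberately not modeled, since it stays inside the KVS engine:

```python
class GraphCacheDevice:
    """Three cache regions: the block cache remains inside the KVS engine
    (e.g. RocksDB) and is only referenced here; the vertex and negative
    regions are held by the cache device itself."""

    def __init__(self):
        self.vertex_cache = {}       # point key -> point attributes
        self.negative_cache = set()  # keys of points known to have no attributes

    def lookup(self, key):
        """Return (answered, value); answered=False means 'unknown, ask the KVS'."""
        if key in self.negative_cache:
            return True, None        # known absent: no engine access needed
        if key in self.vertex_cache:
            return True, self.vertex_cache[key]
        return False, None           # not cached either way
```

The three-valued outcome — present, known absent, or unknown — is what lets the negative region short-circuit lookups for attribute-less points.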
In some embodiments, given the available memory, the system presets the memory allocation priority ratio of the block cache region, the vertex cache region, and the negative cache region in the cache device, for example, 3:2:1;
or, the memory allocation priority ratio of the block cache region, the vertex cache region and the negative cache region in the cache device is adjusted according to cache operation statistics collected within a preset time. Preferably, the statistics in this embodiment include the number of times each cache region in the cache device is read within the preset time: the more often a region is read, the higher its priority, and its priority or memory space is increased step by step according to the actual situation.
In some embodiments, whether to write the latest data into the cache or to swap the least-accessed data out of the cache is determined by the priority ratio of the cache device. For example, a random value is generated and compared with the probability 3/(3+2+1) to decide whether to write the most recent data or swap out the least-accessed data: if the value is less than this probability, the cache entry is replaced; otherwise, no operation is performed.
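One way to read the replacement rule above is as a probabilistic admission check. A sketch under that assumption; the 3:2:1 split and the function name are illustrative:

```python
import random

def should_replace(region_priority, all_priorities, rng=random.random):
    """Generate a random value and compare it with the region's priority
    share, e.g. 3/(3+2+1) for a region with priority 3: below the share,
    write the newest data / evict the least-accessed entry; otherwise skip."""
    share = region_priority / sum(all_priorities)
    return rng() < share
```

With priorities 3:2:1, a priority-3 region replaces with probability 1/2 per attempt, a priority-2 region with 1/3, and a priority-1 region with 1/6, so higher-priority regions turn over their contents more eagerly.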
In some embodiments, since the same piece of graph data may be replicated to multiple different storage nodes, after a point is updated or written, the cache on each storage node holding a replica of that point needs to be updated or deleted synchronously. To ensure strong consistency, on each storage node the underlying storage must be written first, and only then is the point in the cache updated or deleted. However, the cache device of this embodiment makes the read flow more complicated: on a cache miss, data must be read from RocksDB and written into the vertex cache region or negative cache region of the cache device. To avoid the data inconsistency that interrupting this process would cause, in this embodiment the data read/write operations in the cache device are protected by a lock, so that the read flow cannot be interrupted.
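The locking discipline just described — write the underlying store first, then fix the caches, and protect the miss-path fill so an update cannot interleave with it — might be sketched like this. This is a simplification with a single global lock, and the dict `store` stands in for RocksDB:

```python
import threading

class ConsistentGraphCache:
    def __init__(self, store):
        self.store = store            # dict standing in for the KVS
        self.vertex = {}
        self.negative = set()
        self.lock = threading.Lock()

    def put(self, key, value):
        with self.lock:
            self.store[key] = value   # 1) underlying storage first
            self.negative.discard(key)
            self.vertex[key] = value  # 2) then update the cached point

    def get(self, key):
        # The whole miss path runs under the lock, so a concurrent put()
        # cannot slip between the store read and the cache fill.
        with self.lock:
            if key in self.negative:
                return None
            if key in self.vertex:
                return self.vertex[key]
            value = self.store.get(key)
            if value is None:
                self.negative.add(key)   # remember the absence
            else:
                self.vertex[key] = value
            return value
```

Without the lock, a `get()` that read the store before a `put()` landed could write stale data (or a stale negative entry) into the cache after the update completed.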
It should be noted that the caching apparatus in the embodiment of the present application is applicable to any system that queries graph data, and is not limited to a graph database system.
This embodiment also provides a method for implementing graph query based on the cache device. Because the cache device (nebula cache) of the present application adds a vertex cache region and a negative cache region, the whole read flow changes compared with the original read/write mode of the block cache region alone. The original block cache is a cache inside the KVS; it is hidden from the outside and cannot be accessed directly, so data is read by directly accessing the KVS, and that data may come either from the block cache or from disk. The method for performing graph query and reading data based on the cache device of this embodiment is as follows:
determining whether the cache engine interface can, in a single cache access, read a graph data key held in the vertex cache region or the negative cache region:
in the first case, if the map data information cannot be obtained, the vertex cache area and the negative cache area are respectively accessed, and the corresponding map data information is inquired and read. Fig. 2 is a schematic diagram of a graph query flow according to an embodiment of the present application, and as shown in fig. 2, the specific steps include: accessing the negative cache area under the condition that the negative cache area exists, if the cache is hit, the key does not exist in the database, directly returning, if the cache is not hit, the key possibly exists or does not exist in the database, continuing accessing the vertex cache area under the condition that the vertex cache area exists, if the cache is hit, obtaining a relevant value and returning, and if the cache is not hit, continuing accessing the KVS storage engine; if the key is found in the KVS storage engine, judging whether the vertex cache region exists again, and if so, writing the key and the corresponding value read from the KVS storage engine into the vertex cache region; if the key is not found in the KVS storage engine, judging whether a negative cache region exists or not, and if so, writing the key into the negative cache region.
In the second case, if the graph data information can be obtained in a single access, the cache device is accessed directly through the cache engine interface, and the corresponding graph data information is queried and read. Fig. 3 is another schematic diagram of a graph query flow according to an embodiment of the present application. As shown in Fig. 3, the specific steps are: access the cache device of this embodiment through the cache engine interface; on a cache hit, read the value corresponding to the key, judge from that value whether the key hit the vertex cache region or the negative cache region, and return the corresponding result; on a cache miss, continue to access the KVS storage engine. If the key is found in the KVS storage engine, judge whether a vertex cache region exists and, if so, write the key and the corresponding value read from the KVS storage engine into the vertex cache region; if the key is not found in the KVS storage engine, judge whether a negative cache region exists and, if so, write the key into the negative cache region.
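The second-case flow of Fig. 3 differs in that one cache access answers for both regions, and the returned value itself says which region matched. A sketch using a sentinel object to tag negative entries; the sentinel encoding is an assumption about how the combined value could be represented:

```python
ABSENT = object()  # sentinel marking a negative-region entry

def read_point_single(key, combined_cache, kvs_get):
    """One cache access; the stored value distinguishes vertex hits
    (real attributes) from negative hits (the ABSENT sentinel)."""
    value = combined_cache.get(key)
    if value is ABSENT:
        return None              # the key hit the negative region
    if value is not None:
        return value             # the key hit the vertex region
    value = kvs_get(key)         # cache miss: fall through to the engine
    combined_cache[key] = value if value is not None else ABSENT
    return value
```

Collapsing the two probes into one access halves the cache lookups on the hot path compared with the first case.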
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In addition, in combination with the method for implementing graph query based on a cache device in the foregoing embodiments, the embodiments of the present application may provide a storage medium to implement the method. A computer program is stored on the storage medium; when executed by a processor, the computer program implements any one of the above embodiments of the method for implementing graph query based on a cache device.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method for implementing graph queries based on a caching apparatus. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In one embodiment, fig. 4 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, and as shown in fig. 4, there is provided an electronic device, which may be a server, and its internal structure diagram may be as shown in fig. 4. The electronic device comprises a processor, a network interface, an internal memory and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program and a database. The processor is used for providing calculation and control capability, the network interface is used for communicating with an external terminal through network connection, the internal memory is used for providing an environment for an operating system and the running of a computer program, the computer program is executed by the processor to realize a method for realizing graph query based on a cache device, and the database is used for storing data.
Those skilled in the art will appreciate that the configuration shown in fig. 4 is a block diagram of only a portion of the configuration associated with the present application, and does not constitute a limitation on the electronic device to which the present application is applied, and a particular electronic device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by hardware instructions of a computer program, which may be stored in a non-volatile computer-readable storage medium, and when executed, the computer program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), rambus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that the technical features of the above-described embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be considered to fall within the scope of the present description.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A cache device for graph query, characterized by comprising a block cache region, a vertex cache region and a negative cache region, wherein the block cache region is a cache region inside a key-value store (KVS) storage engine;
the block cache region is used for caching key-values of graph data and quickly accessing meta information and edge information of the KVS storage engine;
the vertex cache region is used for caching keys and attributes of vertices;
and the negative cache region is used for caching keys of vertices that have no attributes.
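Claim 1's three regions can be pictured with a minimal sketch (hypothetical class and field names; plain Python containers stand in for the real in-engine block cache):

```python
class GraphCacheDevice:
    """Illustrative sketch of the three-region cache device of claim 1.

    block_cache    - key-values of graph data plus meta/edge information
                     (in the real device this region lives inside the KVS engine)
    vertex_cache   - keys and attributes of vertices
    negative_cache - keys of vertices known to have no attributes
    """

    def __init__(self):
        self.block_cache = {}       # key -> raw block/value data
        self.vertex_cache = {}      # vertex key -> attribute value
        self.negative_cache = set() # vertex keys with no attributes
```

This is only a data-layout sketch under the assumptions above; the patent does not prescribe concrete container types.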
2. The cache device of claim 1, wherein, given the available memory,
the system presets the memory allocation priority ratio of the block cache region, the vertex cache region and the negative cache region in the cache device,
or adjusts the memory allocation priority ratio of the block cache region, the vertex cache region and the negative cache region in the cache device according to cache operation statistics collected over a preset time.
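The preset priority ratio of claim 2 amounts to splitting the available memory by weights; a minimal sketch (hypothetical function and region names, not the patent's actual implementation):

```python
def allocate_memory(total_bytes, ratios):
    """Split available memory among cache regions by a priority ratio,
    e.g. ratios = {"block": 6, "vertex": 3, "negative": 1}.
    The ratio itself may later be re-tuned from cache operation statistics."""
    weight_sum = sum(ratios.values())
    return {region: total_bytes * weight // weight_sum
            for region, weight in ratios.items()}
```

For example, 1 GB split 6:3:1 gives the block cache region roughly 600 MB, the vertex cache region 300 MB, and the negative cache region 100 MB.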
3. The cache device according to claim 2, wherein whether the latest data is written into the cache, or the least-accessed data is swapped out of the cache, is determined by the priority ratio of the cache device.
4. The cache device of claim 1,
wherein data read-write operations in the cache device are protected by a lock.
5. The cache device according to claim 1, wherein the cache device is located in the storage layer.
6. A method for realizing graph query based on the cache device of any one of claims 1 to 5, the method comprising:
judging whether a cache engine interface can, through a single cache access, read a graph data key present in the vertex cache region or the negative cache region;
if not, accessing the vertex cache region and the negative cache region separately, and querying and reading the corresponding graph data information;
and if so, directly accessing the cache device through the cache engine interface, and querying and reading the corresponding graph data information.
7. The method of claim 6, wherein accessing the vertex cache region and the negative cache region separately, and querying and reading the corresponding graph data information, comprises:
accessing the negative cache region and returning directly on a cache hit; on a miss, continuing to access the vertex cache region, obtaining the relevant value and returning on a hit, and continuing to access the KVS storage engine on a miss;
if the key is found in the KVS storage engine, judging whether a vertex cache region exists and, if so, writing the key and the corresponding value into the vertex cache region; and if the key is not found in the KVS storage engine, judging whether a negative cache region exists and, if so, writing the key into the negative cache region.
8. The method of claim 6, wherein directly accessing the cache device through the cache engine interface, and querying and reading the corresponding graph data information, comprises:
accessing the cache device through the cache engine interface; on a cache hit, reading the value corresponding to the key, judging from the value whether the key hit the vertex cache region or the negative cache region, and returning the corresponding value; on a cache miss, continuing to access the KVS storage engine; if the key is found in the KVS storage engine, judging whether a vertex cache region exists and, if so, writing the key and the corresponding value into the vertex cache region; and if the key is not found in the KVS storage engine, judging whether a negative cache region exists and, if so, writing the key into the negative cache region.
9. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to execute the computer program to perform the method of any one of claims 6 to 8.
10. A storage medium in which a computer program is stored, wherein the computer program, when executed, is arranged to perform the method of any one of claims 6 to 8.
CN202211302696.4A 2022-10-24 2022-10-24 Cache device, method and system for realizing graph query based on cache device Active CN115374301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211302696.4A CN115374301B (en) 2022-10-24 2022-10-24 Cache device, method and system for realizing graph query based on cache device


Publications (2)

Publication Number Publication Date
CN115374301A CN115374301A (en) 2022-11-22
CN115374301B true CN115374301B (en) 2023-02-07

Family

ID=84074058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211302696.4A Active CN115374301B (en) 2022-10-24 2022-10-24 Cache device, method and system for realizing graph query based on cache device

Country Status (1)

Country Link
CN (1) CN115374301B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114138776A (en) * 2021-11-01 2022-03-04 杭州欧若数网科技有限公司 Method, system, apparatus and medium for graph structure and graph attribute separation design

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7979301B1 (en) * 2002-09-03 2011-07-12 Hector Franco Online taxonomy for constructing customer service queries
CN104111898A (en) * 2014-05-26 2014-10-22 中国能源建设集团广东省电力设计研究院 Hybrid storage system based on multidimensional data similarity and data management method
US10853872B1 (en) * 2016-06-20 2020-12-01 Amazon Technologies, Inc. Advanced item associations in an item universe
CN110955658B (en) * 2019-11-19 2022-11-18 杭州趣链科技有限公司 Data organization and storage method based on Java intelligent contract


Also Published As

Publication number Publication date
CN115374301A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
US9348752B1 (en) Cached data replication for cache recovery
US10838622B2 (en) Method and apparatus for improving storage performance of container
CN107491523B (en) Method and device for storing data object
US10409728B2 (en) File access predication using counter based eviction policies at the file and page level
US20100228914A1 (en) Data caching system and method for implementing large capacity cache
US9229869B1 (en) Multi-lock caches
US20130290636A1 (en) Managing memory
US9021208B2 (en) Information processing device, memory management method, and computer-readable recording medium
CN112148736A (en) Method, device and storage medium for caching data
CN107992270B (en) Method and device for globally sharing cache of multi-control storage system
CN114817195A (en) Method, system, storage medium and equipment for managing distributed storage cache
CN104598652B (en) A kind of data base query method and device
Feng et al. HQ-Tree: A distributed spatial index based on Hadoop
CN112579650A (en) Data processing method and system based on Redis cache
CN115374301B (en) Cache device, method and system for realizing graph query based on cache device
CN115080459A (en) Cache management method and device and computer readable storage medium
CN106796588A (en) The update method and equipment of concordance list
CN115934583B (en) Hierarchical caching method, device and system
US9747315B2 (en) Bucket skiplists
CN110502535A (en) Data access method, device, equipment and storage medium
CN116204130A (en) Key value storage system and management method thereof
CN115878677A (en) Data processing method and device for distributed multi-level cache
CN114390069B (en) Data access method, system, equipment and storage medium based on distributed cache
CN111752941A (en) Data storage method, data access method, data storage device, data access device, server and storage medium
US20220365905A1 (en) Metadata processing method and apparatus, and a computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant