CN111090653B - Data caching method and device and related products - Google Patents


Info

Publication number
CN111090653B
CN111090653B (application CN201911330901.6A)
Authority
CN
China
Prior art keywords
data
graph
node
nodes
buffer
Prior art date
Legal status
Active
Application number
CN201911330901.6A
Other languages
Chinese (zh)
Other versions
CN111090653A (en)
Inventor
马忠义
崔朝辉
赵立军
张霞
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp filed Critical Neusoft Corp
Priority to CN201911330901.6A
Publication of CN111090653A
Application granted
Publication of CN111090653B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2237Vectors, bitmaps or matrices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/242Query formulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a data caching method, a data caching device and related products. Before a first application program runs, a graph data cache region corresponding to the first application program is determined in a memory medium of the device, and graph data nodes are filled into the graph data cache region according to the node structure characteristics of a graph database. This saves the time of traversing data in the graph database when the first application program runs; and if the graph data cache region happens to contain the needed data, the data can be obtained directly from the cache region without searching the graph database at all, which also saves the time of warming up the graph database. Moreover, because the graph data nodes filled into the graph data cache region are selected according to the node structure characteristics of the graph database, nodes that would otherwise cause poor data exchange performance between the memory medium and the external storage medium can, once filled into the cache region, be obtained from it directly, saving traversal time and improving the speed of obtaining data.

Description

Data caching method and device and related products
Technical Field
The present application relates to the field of data storage, and in particular, to a data caching method, apparatus and related products.
Background
With the rapid development of big data industries such as finance, e-commerce, and the Internet of Things, both the volume of data to be processed and the relationships among the data have grown geometrically. Conventional relational databases struggle to meet practical demands in terms of scalability, read-write performance, and the like, so non-relational databases (NoSQL, Not Only SQL) such as graph databases have emerged.
A graph database is not a database that stores pictures; rather, it stores and queries data using a graph data structure. At present, a graph database is usually stored in an external storage medium such as a hard disk, and when an application program on the device runs, data must be traversed from the graph database in that storage medium in real time. However, graph database products have a warm-up mechanism that makes the startup phase time-consuming, which affects the speed of acquiring data while the application program is running.
Disclosure of Invention
Based on the above problems, the application provides a data caching method, a data caching device and related products, so as to improve the speed of acquiring data when an application program runs.
The embodiment of the application discloses the following technical scheme:
in a first aspect, the present application provides a data caching method, including:
Before a first application program runs, determining a graph data buffer area corresponding to the first application program in a memory medium of equipment;
and filling the graph data buffer area with graph data nodes according to the node structure characteristics of the graph database.
Optionally, filling the graph data buffer with graph data nodes according to node structural features of the graph database, specifically including:
scoring each graph data node in a graph database according to a preset scoring mode to obtain the score of each graph data node; the preset scoring mode is related to node structural features of the graph database;
sorting the graph data nodes according to the scores of the graph data nodes to obtain a node list;
and filling the graph data nodes into the graph data buffer according to the node list.
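The three steps above (score, sort, fill) can be sketched as follows. This is a minimal illustration rather than the patented implementation; `score_fn` and `buffer_capacity` are hypothetical stand-ins for whatever preset scoring mode and cache size a concrete deployment would use.

```python
def fill_buffer(nodes, score_fn, buffer_capacity):
    """Score each node, sort into a node list, and fill the buffer in order."""
    # Steps 1 and 2: score every node and sort descending into a node list.
    node_list = sorted(nodes, key=score_fn, reverse=True)
    # Step 3: fill the graph data buffer from the head of the list.
    buffer = []
    for node in node_list:
        if len(buffer) >= buffer_capacity:
            break
        buffer.append(node)
    return buffer
```

For example, with scores {a: 1, b: 3, c: 2} and room for two nodes, the buffer receives b and c, the two highest-scoring nodes.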
Optionally, the expression of the preset scoring mode is:
score(v)=p*R(v)+q*Q(v);
wherein v represents any graph data node in a node set of the graph database, the node set including all graph data nodes of the graph database; score(v) represents the score of v; R(v) represents the maximum value among the closest distances from each graph data node in the node set to v; Q(v) represents the degree of v; p is a first weight, representing the weight of R(v) in the preset scoring mode; q is a second weight, representing the weight of Q(v) in the preset scoring mode. The expression of R(v) is:
R(v) = max_S{P(v, t)};
wherein S represents the node set, and t represents any graph data node in S other than v; P(v, t) represents the closest distance between v and t.
Optionally, the method further comprises: the first weight and the second weight are set according to a data query history for the graph database.
Optionally, setting the first weight and the second weight according to the data query history of the graph database specifically includes:
obtaining the depth search times and the breadth search times according to the data query history of the graph database;
if the depth search times are larger than the breadth search times, setting the first weight to be larger than the second weight; and if the depth search times are smaller than the breadth search times, setting the first weight to be smaller than the second weight.
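The weight comparison above can be sketched as a small helper. The concrete values 0.7/0.3 and the equal split on a tie are assumptions for illustration; the method only constrains the ordering of the two weights.

```python
def set_weights(depth_searches, breadth_searches, high=0.7, low=0.3):
    """Return (p, q): p > q when depth searches dominate, p < q otherwise.

    The values 0.7/0.3 and the tie behavior are illustrative assumptions;
    only the ordering of the weights is specified by the method.
    """
    if depth_searches > breadth_searches:
        return high, low   # first weight p larger: favor deep nodes
    if depth_searches < breadth_searches:
        return low, high   # second weight q larger: favor well-connected nodes
    return 0.5, 0.5        # tie: case left open; equal weights assumed
```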
Optionally, the graph data buffer area comprises a fixed buffer and a hit buffer; the filling of the graph data node into the graph data buffer area specifically comprises the following steps: filling the graph data node into the fixed cache; when the first application is running on the device, the method further comprises:
Receiving a data query request, wherein the data query request comprises a data query condition;
judging whether the graph data cache area comprises data conforming to the data query condition or not; if yes, returning the data meeting the data query condition to the initiating terminal of the data query request; if not, returning the data meeting the data query condition in the graph database to the initiating terminal of the data query request, and filling the hit cache with the data meeting the data query condition.
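The lookup order described above (fixed cache, then hit cache, then the graph database as a fallback that back-fills the hit cache) can be sketched as follows; plain dictionaries stand in for the graph data cache region and the graph database, which is a deliberate simplification.

```python
class GraphDataBuffer:
    """Sketch of a graph data cache region with a fixed cache and a hit cache."""

    def __init__(self, fixed_cache, graph_db):
        self.fixed = dict(fixed_cache)  # filled before the application runs
        self.hits = {}                  # filled lazily on cache misses
        self.db = graph_db              # stands in for the on-disk graph database

    def query(self, key):
        # First check both parts of the graph data cache region.
        if key in self.fixed:
            return self.fixed[key]
        if key in self.hits:
            return self.hits[key]
        # Miss: fetch from the graph database and fill the hit cache.
        value = self.db[key]
        self.hits[key] = value
        return value
```

A second query for the same previously-missed key is then served from the hit cache without touching the database.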
Optionally, filling the graph data node into the fixed cache specifically includes:
obtaining cache format data corresponding to the graph data nodes;
judging whether the cache space occupied by the cache format data exceeds the remaining cache space of the fixed cache; if not, filling the cache format data into the fixed cache; if yes, stopping filling cache format data corresponding to any graph data node into the fixed cache.
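The space check above can be sketched as follows; `serialize` is a hypothetical stand-in for producing the cache format data of a node, and byte length stands in for occupied cache space.

```python
def fill_fixed_cache(node_list, serialize, capacity_bytes):
    """Fill the fixed cache in list order, stopping when space would run out."""
    cache, used = [], 0
    for node in node_list:
        blob = serialize(node)  # cache format data for this graph data node
        if used + len(blob) > capacity_bytes:
            break  # would exceed the remaining cache space: stop filling
        cache.append(blob)
        used += len(blob)
    return cache, used
```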
In a second aspect, the present application provides a data caching apparatus, comprising:
the buffer area determining module is used for determining a graph data buffer area corresponding to a first application program in a memory medium of equipment before the first application program operates;
And the data caching module is used for filling the graph data nodes into the graph data caching area according to the node structure characteristics of the graph database.
Optionally, the data caching module specifically includes:
the score obtaining unit is used for scoring each graph data node in the graph database according to a preset scoring mode to obtain the score of each graph data node; the preset scoring mode is related to node structural features of the graph database;
the node list acquisition unit is used for sorting the graph data nodes according to the scores of the graph data nodes to obtain a node list;
and the data first buffer unit is used for filling the graph data nodes into the graph data buffer area according to the node list.
Optionally, the above apparatus may further include:
and the weight setting module is used for setting the first weight and the second weight according to the data query history of the graph database.
Optionally, the weight setting module specifically includes:
a search number acquisition unit configured to acquire a depth search number and a breadth search number according to a data query history for the graph database;
a setting unit configured to set the first weight to be greater than the second weight when the number of depth searches is greater than the number of breadth searches; and setting the first weight to be smaller than the second weight when the number of depth searches is smaller than the number of breadth searches.
Optionally, the graph data buffer area includes a fixed buffer and a hit buffer;
the data caching module is specifically used for filling the graph data nodes into the fixed cache;
when the first application is running on the device, the apparatus may further include:
the request receiving module is used for receiving a data query request, wherein the data query request comprises a data query condition;
the judging module is used for judging whether the graph data cache area comprises data conforming to the data query condition or not;
the data return module is used for returning the data meeting the data query condition to the initiating terminal of the data query request when the judging result of the judging module is yes; and is further used for returning the data meeting the data query condition in the graph database to the initiating terminal of the data query request when the judging result of the judging module is negative;
and the data caching module is also used for filling the hit cache with the data meeting the data query condition when the judging result of the judging module is negative.
Optionally, the data caching module specifically includes:
the data acquisition unit is used for acquiring cache format data corresponding to the graph data nodes;
The judging unit is used for judging whether the buffer space occupied by the buffer format data exceeds the residual buffer space of the fixed buffer;
the data second buffer unit is used for filling the buffer format data into the fixed buffer when the judging result of the judging unit is negative; and stopping filling the buffer memory format data corresponding to the data nodes of the arbitrary graph into the fixed buffer memory when the judging unit judges that the judging result is yes.
In a third aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, implements the data caching method as provided in the first aspect.
In a fourth aspect, the present application provides a processor for executing a computer program, which when run performs the data caching method provided in the first aspect.
Compared with the prior art, the application has the following beneficial effects:
in the technical scheme provided by the application, before the first application program runs, a graph data buffer area corresponding to the first application program is determined in a memory medium of the device, and graph data nodes are filled into the graph data buffer area according to the node structure characteristics of a graph database. Because the graph data cache is filled with graph data nodes in advance, the time of traversing data in the graph database is saved when the first application program runs; and if the graph data buffer area happens to contain the required data, the data can be obtained directly from the buffer area without searching the graph database at all, saving the time of warming up the graph database. In addition, the graph data nodes filled into the graph data cache region are selected according to the node structure characteristics of the graph database. If certain graph data nodes are identified, based on those characteristics, as nodes that would cause poor data exchange performance between the memory medium and the external storage medium, then once they are filled into the graph data cache region they can be obtained from it directly while the first application program runs, with no data exchange between the memory medium and the external storage medium. This further saves the time spent traversing the graph database, thereby improving the speed of obtaining data.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the application, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flowchart of a data caching method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of allocating a graph data buffer for an application according to an embodiment of the present application;
FIG. 3 is a schematic node structure diagram of a graph database according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a node structure of another graph database according to an embodiment of the present application;
FIG. 5 is a flowchart of another data caching method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a node structure of a graph database according to another embodiment of the present application;
FIG. 7 is a schematic diagram of a structure of a data buffer area according to an embodiment of the present application;
FIG. 8 is a flowchart of another data caching method according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a data caching device according to an embodiment of the present application;
fig. 10 is a hardware structure diagram of a data caching device according to an embodiment of the present application.
Detailed Description
As described above, when a graph database is used to obtain data, the speed of acquiring data while an application program runs can suffer for various reasons. For example, the data warm-up mechanism of the graph database makes its startup phase time-consuming; and once the data in the graph database reaches a very large scale, traversing the graph data also slows down data acquisition. How to effectively use a graph database while increasing the speed of data acquisition has become an urgent problem to solve.
In order to solve the above problems, the inventor provides a data caching method, a data caching device and related products. Before the application program is started, the graph data nodes are filled into graph data buffer areas distributed for the application program according to the node structure characteristics of the graph database, and the graph data buffer areas are positioned in a memory medium of equipment used for running the application program. Because the graph data nodes are cached in advance, when an application program actually operates, the burden of exchanging data between the memory medium and the external memory medium can be reduced, the time for preheating the data is saved, and the speed for acquiring the data is improved.
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Method embodiment
Referring to fig. 1, the flow chart of a data caching method according to an embodiment of the present application is shown.
The data caching method illustrated in fig. 1 may be implemented by a device capable of running applications. The device may be a fixed terminal device, such as a desktop computer, or a mobile terminal device, such as a notebook computer or a tablet computer.
The device may be used to run a plurality of applications, such as a first application, a second application, a third application, etc. The functions implemented by these different applications may be similar or different. The data required by different applications at run-time may be provided by the same graph database, for example: when the first application program runs, providing train ticket and airplane ticket booking service for the user; and when the second application program runs, providing hot spot recommendation service for the user. Furthermore, the data required by different applications at runtime may also be provided by different graph databases, for example: when the first application program runs, providing train ticket and airplane ticket booking service for the user; and when the third application program runs, providing book recommendation service for the user.
The graph database resides in the external storage medium of the device. In this embodiment, a specific implementation of the method is described by taking a first application program as an example, and data cached for the first application program is mainly provided by a graph database corresponding to the first application program.
As shown in fig. 1, the data caching method provided in this embodiment includes:
step 101: before a first application program operates, determining a graph data buffer area M corresponding to the first application program in a memory medium of equipment.
In this embodiment, corresponding graph data buffer areas are allocated in advance for the various application programs that may run on the device. These graph data buffer areas are located in the memory medium of the device and can be used to cache data provided by the graph database. Referring to fig. 2, a schematic diagram of allocating a graph data buffer for an application according to an embodiment of the present application is shown. As shown in fig. 2, in this embodiment, a graph data buffer M is allocated for the first application program, a graph data buffer X is allocated for the second application program, and a graph data buffer W is allocated for the third application program.
In order to increase the speed of acquiring data when the first application program runs, this embodiment first determines the graph data buffer area M corresponding to the first application program before the first application program runs, for example, determining the size and position of the buffer space of the graph data buffer area M, so as to facilitate subsequently caching data into the graph data buffer area M.
Step 102: and filling the graph data buffer area M with graph data nodes according to the node structure characteristics of the graph database.
The graph database contains a plurality of graph data nodes, and data is represented and stored using graph data nodes, edges, attributes, and the like. Fig. 3 and fig. 4 respectively provide exemplary node structures of two different graph databases, in which the lines between graph data nodes represent the relationships between the nodes. For an arrow between two graph data nodes, the node at which the arrow originates is the parent node and the node to which the arrow points is the child node. A node that is only a parent node and never a child node is a root node, and a node that is only a child node and never a parent node is a leaf node.
The greater the distance between the root node and the leaf nodes, the deeper the depth of the graph database; the greater the number of child nodes connected to the root node, the greater the breadth of the graph database. The node structure of the graph database shown in fig. 3 is depth-dominated, while the node structure shown in fig. 4 is breadth-dominated. Of course, in practical applications some graph databases exhibit both depth and breadth features; the various possible node structures are not illustrated one by one here.
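Depth and breadth as used here can be made concrete with a short sketch. The two edge lists below are hypothetical toy graphs (a chain echoing fig. 3 and a star echoing fig. 4), and a single root with an acyclic structure is assumed.

```python
def structure_profile(edges):
    """Depth: longest root-to-leaf distance; breadth: child count of the root."""
    children, all_nodes, child_nodes = {}, set(), set()
    for parent, child in edges:
        children.setdefault(parent, []).append(child)
        all_nodes.update((parent, child))
        child_nodes.add(child)
    (root,) = all_nodes - child_nodes  # assumes exactly one root

    def depth(node):
        kids = children.get(node, [])
        return 0 if not kids else 1 + max(depth(k) for k in kids)

    return {"depth": depth(root), "breadth": len(children[root])}

# A depth-dominated chain versus a breadth-dominated star:
chain = [("A", "B"), ("B", "C"), ("C", "D")]
star = [("A", "B"), ("A", "C"), ("A", "D")]
```

Here the chain profiles as depth 3 with breadth 1, and the star as depth 1 with breadth 3.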
In specific implementation, the method selects graph data nodes according to the node structure characteristics of the graph database, and fills the selected graph data nodes into the graph data buffer area M. For example, if the node structure of the graph database is depth-dominated, the graph data nodes at deeper depths may be preferentially filled into the graph data buffer area M; if the node structure is breadth-dominated, the graph data nodes with greater breadth may be preferentially filled into the graph data buffer area M.
In practical applications, data can also be filled according to the data query history of the graph database. In one possible implementation, data that is queried frequently and lies at a deep depth can be identified from the data query history; because such data is queried many times and each query degrades the data exchange performance between the external storage medium and the memory medium due to its depth, it can be preferentially cached in the graph data cache region M. In another possible implementation, data that is queried frequently and spans a wide breadth can likewise be identified from the data query history and preferentially cached in the graph data cache region for the same reason.
In the process of caching data, the cached data is not limited to the graph data node itself, and may also include a relationship between the graph data node and an adjacent node, an attribute of the graph data node, and the like.
The above is the data caching method provided by the embodiment of the application. In this method, before the first application program runs, a graph data buffer area corresponding to the first application program is determined in a memory medium of the device, and graph data nodes are filled into the graph data buffer area according to the node structure characteristics of the graph database. Because the graph data cache is filled in advance, the time of traversing data in the graph database is saved when the first application program runs; if the graph data buffer area happens to contain the required data, the data can be obtained directly from the buffer area without searching the graph database at all, saving the time of warming up the graph database. In addition, since the cached graph data nodes are selected according to the node structure characteristics of the graph database, nodes that would cause poor data exchange performance between the memory medium and the external storage medium can be obtained directly from the graph data cache region while the first application program runs, with no exchange between the two media. This further saves, or even avoids, the time spent traversing the graph database, thereby improving the speed of obtaining data.
In the foregoing embodiments, there are multiple possible implementations of step 102. In some of them, the priority order in which data is cached into the graph data cache area M is determined in a quantifiable manner, and data is cached in that order, improving the orderliness of caching. For ease of understanding, such an implementation is described below with reference to the accompanying drawings.
Referring to fig. 5, a flowchart of another data caching method according to an embodiment of the present application is shown.
As shown in fig. 5, the data caching method provided in this embodiment includes:
step 501: before a first application program operates, determining a graph data buffer area M corresponding to the first application program in a memory medium of equipment.
In this embodiment, the implementation manner of step 501 is substantially the same as that of step 101 in the foregoing embodiment, so the description of step 501 may refer to the foregoing embodiment and will not be repeated here.
Step 502: and scoring each graph data node in the graph database according to a preset scoring mode to obtain the score of each graph data node.
In this embodiment, a method for scoring each graph data node in a graph database is provided to sort the graph data nodes, so as to sequentially select and cache the graph data nodes. The preset scoring mode applied in this embodiment is related to the node structure characteristics of the graph database. An exemplary scoring scheme provided by embodiments of the present application is described below in conjunction with the figures and formulas.
Referring to fig. 6, a schematic node structure of another graph database according to an embodiment of the present application is shown. Fig. 6 includes graph data nodes A through H, where node A is the root node, and node B, node C and node G are three child nodes of node A; node B and node D are child nodes of node C; node F and node E are child nodes of node D, while node F is also a child node of node E; node H is a child node of node G.
In this embodiment, the node set S of the graph database includes all graph data nodes of the graph database. Referring to fig. 6, the node set S = [A, B, C, D, E, F, G, H]. A function R(v) is defined, where R(v) represents the maximum value among the closest distances between each graph data node in the node set S and any node v in the node set S. Its expression is:
R(v) = max_S{P(v, t)}    formula (1)
In formula (1), S represents the node set, and t represents any graph data node in S other than node v; P(v, t) represents the closest distance between node v and node t.
Taking fig. 6 as an example, there are two paths between node A and node B: A points directly to B, giving a distance of 1; and A reaches B through C, giving a distance of 2. Thus the closest distance between node A and node B is 1. Similarly, the closest distance between node D and node F is 1. In fig. 6, P(A, B) = 1, P(A, F) = 3, and P(A, H) = 2, and the closest distance between node A and every other node in the node set S does not exceed 3, so according to formula (1), R(A) takes the maximum value, i.e., R(A) = 3. It will be appreciated that for node A, the greater R(A) is, the deeper the node structure of the graph database.
In addition, a function Q(v) is defined in this embodiment, where Q(v) represents the degree of any graph data node v in the node set S, equal to the number of edges associated with node v. Taking fig. 6 as an example, Q(A) = 3, Q(B) = 2, and Q(D) = 3. It will be appreciated that for any node v, the larger Q(v) is, the more relationships node v has with other nodes in the node set S; and for node A, the larger Q(A) is, the greater the breadth of the node structure of the graph database.
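R(v) and Q(v) can be checked against fig. 6 with a short sketch. The edge list below transcribes the figure as described in the text, and distances are computed over the undirected structure, which matches P(A, B) = 1 above.

```python
from collections import deque

# Edges of fig. 6 as (parent, child), per the description in the text.
EDGES = [("A", "B"), ("A", "C"), ("A", "G"), ("C", "B"),
         ("C", "D"), ("D", "F"), ("D", "E"), ("E", "F"), ("G", "H")]

adj = {}
for u, w in EDGES:
    adj.setdefault(u, set()).add(w)
    adj.setdefault(w, set()).add(u)

def P(v, t):
    """Closest distance between v and t: shortest-path length via BFS."""
    seen, queue = {v}, deque([(v, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == t:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return float("inf")  # unreachable

def R(v):
    """Formula (1): largest closest distance from v to any other node."""
    return max(P(v, t) for t in adj if t != v)

def Q(v):
    """Degree of v: the number of edges associated with node v."""
    return len(adj[v])
```

Running this reproduces the figures above: P(A, B) = 1, P(A, F) = 3, P(A, H) = 2, R(A) = 3, Q(A) = 3, Q(B) = 2 and Q(D) = 3.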
In this embodiment, a first weight p and a second weight q are also defined. In the present application, the preset scoring mode combines the function R(v) and the function Q(v) to score any node v, with the first weight p and the second weight q configured for the function R(v) and the function Q(v), respectively. The expression of the preset scoring mode is as follows:
score(v) = p*R(v) + q*Q(v)    Formula (2)
In formula (2), score(v) represents the score of any graph data node v in the node set S. For the node set S, the first weight p and the second weight q are set uniformly and do not change from one graph data node to another.
It should be noted that, in this embodiment, the first weight p and the second weight q may be set according to the data query history of the graph database. In specific implementation, the number of depth searches and the number of breadth searches can be obtained according to the data query history of the graph database; the two numbers are then compared.
As one possible implementation, the distance between the nodes involved in each search is acquired from the data query history of the graph database; the number of searches whose distance is greater than or equal to a first preset distance (for example, 4) is T1, and T1 is taken as the number of depth searches; the number of searches whose distance is less than or equal to a second preset distance (for example, 2) is T2, and T2 is taken as the number of breadth searches. The above is merely an example way of determining the number of depth searches and the number of breadth searches; other ways are also possible in practical applications, and the specific implementation is not limited here. Taking fig. 6 as an example, the search A→C→D→F is determined as a depth search, while the searches A→G, A→C and A→B are determined as breadth searches.
If the number of depth searches is greater than the number of breadth searches, it indicates that the graph database is frequently used to provide data for the depth search service of the first application program, so the first weight p can be set greater than the second weight q, giving the R(v) function value of node v a relatively high proportion in the score calculation; if the number of depth searches is smaller than the number of breadth searches, it indicates that the graph database is frequently used to provide data for the breadth search service of the first application program, so the first weight p can be set smaller than the second weight q, giving the Q(v) function value of node v a relatively high proportion in the score calculation.
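The weight-setting rule above can be sketched as follows. The thresholds 4 and 2 mirror the example first and second preset distances from the text; the concrete weight values returned are assumptions for illustration, since only the ordering of p and q matters here.

```python
def set_weights(search_distances, deep_min=4, broad_max=2):
    """Derive the first weight p and second weight q from a query history.

    search_distances: distances between the nodes involved in past searches.
    Searches of distance >= deep_min count as depth searches (T1); searches
    of distance <= broad_max count as breadth searches (T2).
    """
    t1 = sum(1 for d in search_distances if d >= deep_min)   # depth searches
    t2 = sum(1 for d in search_distances if d <= broad_max)  # breadth searches
    if t1 > t2:
        return 2.0, 1.0   # p > q: R(v) dominates the score
    if t1 < t2:
        return 1.0, 2.0   # p < q: Q(v) dominates the score
    return 1.0, 1.0       # equal weights
```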
Of course, in practical applications, the first weight p and the second weight q may also be set to be equal. Taking fig. 6 as an example, if p = q = 1 is set, the scores of the respective graph data nodes obtained using formulas (1) and (2) are as follows:
score(A) = 6, score(B) = 3, score(C) = 5, score(D) = 5, score(E) = 5, score(F) = 5, score(G) = 3, score(H) = 3.
Step 503: sort the graph data nodes according to the scores of the graph data nodes to obtain a node list.
As one possible implementation, the scores of the individual graph data nodes in the node set S may be arranged in descending order, forming a node list L(S) in which the graph data nodes are ordered from the largest score to the smallest. Still taking fig. 6 as an example, when p = q = 1, since score(A) > score(C) = score(D) = score(E) = score(F) > score(B) = score(G) = score(H), the node list L(S) is obtained as follows:
L(S) = [A, C, D, E, F, B, G, H]
The node list L(S) contains all graph data nodes in the node set S. The ordering of the nodes in L(S) can be understood as the priority order in which the data is cached into the graph data buffer M. The greater the score of a graph data node, the earlier it appears in the node list L(S), and thus the more preferentially it is cached into the graph data buffer M compared with graph data nodes with smaller scores.
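Using the example scores for p = q = 1, forming L(S) is a single descending sort. A minimal sketch; Python's sort is stable, so equally scored nodes keep their original order, which here reproduces L(S) = [A, C, D, E, F, B, G, H]:

```python
def node_list(scores):
    """L(S): graph data nodes in descending order of score (stable on ties)."""
    return [v for v, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]

# Scores of the Fig. 6 nodes with p = q = 1, as computed in the text.
scores = {"A": 6, "B": 3, "C": 5, "D": 5, "E": 5, "F": 5, "G": 3, "H": 3}
```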
Step 504: fill the graph data buffer M with graph data nodes according to the node list.
In the specific implementation of this step, the graph data buffer M may be filled with graph data nodes according to the order of the graph data nodes in the node list L(S). In addition, a cache format for the graph data nodes may be preset, and the data cached into the graph data buffer M according to that cache format. One cache format is provided below by way of example:
For any graph data node v in the node list L(S), the node v itself is used as the key, and the set of adjacent nodes of node v is used as the value. Taking node A in fig. 6 as an example, the data in this cache format is <A, [B, C, G]>, where [B, C, G] is the set of adjacent nodes of node A.
In this embodiment, the cache format data corresponding to any graph data node v may be denoted Item_v; thus, <A, [B, C, G]> may be denoted Item_A. In the specific implementation, this step may fill the cache format data corresponding to the nodes into the graph data buffer M according to the node ordering of the node list L(S).
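A minimal sketch of this cache format for the fig. 6 nodes; representing Item_v as a key-value pair is an illustrative choice, not the only possible encoding:

```python
# Adjacency of the Fig. 6 example (node -> list of adjacent nodes).
ADJ = {
    "A": ["B", "C", "G"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["C", "E", "F"], "E": ["D", "F"], "F": ["D", "E"],
    "G": ["A", "H"], "H": ["G"],
}

def cache_format(v, adj=ADJ):
    """Item_v: node v as the key, the set of v's adjacent nodes as the value."""
    return (v, adj[v])
```

For example, cache_format("A") yields <A, [B, C, G]> as in the text.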
In the above embodiment, each graph data node in the node set of the graph database is scored to obtain its score, a node list used as the basis for selecting data to cache is formed according to the scores, and the data caching is finally realized. In this embodiment, the scoring method quantifies the priority of data caching, so that the caching process is carried out in a more orderly manner.
In the foregoing embodiments, the data of the graph database is cached into the graph data buffer corresponding to the first application program before the first application program runs, which improves the speed of acquiring data when the first application program runs. One scenario is possible in practical applications: when the first application program runs, a data query request does not hit the previously cached data, so the data needs to be acquired from the graph database and returned to the initiating end of the data query request. After this request, the initiating end may repeatedly initiate the same request, and repeatedly acquiring the related data from the graph database would obviously also affect the speed of acquiring data. For this problem, the inventors further provide yet another data caching method, in which the graph data buffer M corresponding to the first application program is divided into a fixed cache and a hit cache. The following description is made with reference to the examples and the accompanying drawings.
Referring to fig. 7, the structure of the graph data buffer M according to an embodiment of the present application is shown. As shown in fig. 7, the graph data buffer M specifically includes a fixed cache M1 and a hit cache M2. The fixed cache M1 is specifically configured to cache data before the first application program runs, as described in the foregoing embodiments; the hit cache M2 is specifically configured to cache data while the first application program runs.
It should be noted that, when the cache space size of the graph data buffer M is fixed, the sum of the cache space size of the fixed cache M1 and the cache space size of the hit cache M2 is equal to the cache space size of the graph data buffer M. In one possible implementation, the cache space sizes of M1 and M2 are each fixed. In another possible implementation, the cache space sizes of M1 and M2 are both variable, but their sum still equals the cache space size of the graph data buffer M. For example, if a part of the cache space of M1 remains unused, this remaining cache space may be allocated to M2 for use.
Referring to fig. 8, a flowchart of another data caching method according to an embodiment of the present application is shown.
As shown in fig. 8, the data caching method includes:
Step 801: before the first application program runs, determine a graph data buffer M corresponding to the first application program in a memory medium of the device.
As shown in fig. 7, the map data buffer area M includes a fixed buffer M1 and a hit buffer M2. This step may specifically include determining the size and location of the buffer space of the fixed buffer M1 in the map data buffer area M, and determining the size and location of the buffer space of the hit buffer M2.
Step 802: fill the fixed cache M1 with graph data nodes according to the node structure characteristics of the graph database.
Before filling in a graph data node, the cache format data of the graph data node can be acquired, and it can then be judged whether the cache space occupied by that cache format data exceeds the remaining cache space of the fixed cache M1.
It will be appreciated that the remaining cache space of M1 is continually reduced as graph data nodes are filled into the fixed cache M1. If the cache space occupied by the next (or next group of) cache format data waiting to be cached exceeds the remaining cache space of the fixed cache M1, that cache format data cannot be filled into the fixed cache M1; if the cache space it requires is less than or equal to the remaining cache space of the fixed cache M1, the cache format data may continue to be filled into the fixed cache M1. In the former case, some remaining cache space is left in the fixed cache M1; this remaining cache space is denoted m1, and m1 can be allocated to the hit cache M2, so that the remaining cache space m1 is not wasted.
The above judging process can be represented by formulas (3) and (4), where formula (3) expresses the condition under which the cache format data Item_i corresponding to the i-th graph data node in the node list L(S) is cached, and formula (4) expresses the condition under which the cache format data Item_{i+1} corresponding to the (i+1)-th graph data node in the node list L(S) is no longer cached:

Σ_{j=1}^{i} Size(Item_j) ≤ Size(M1)    Formula (3)

Σ_{j=1}^{i+1} Size(Item_j) > Size(M1)    Formula (4)

In formulas (3) and (4), Size(Item_i) represents the cache space that the cache format data Item_i corresponding to the i-th graph data node in the node list L(S) needs to occupy, Size(M1) represents the cache space size of the fixed cache M1, and each summation represents the total cache space occupied by the cache format data corresponding to the first i (or first i+1) graph data nodes in the node list L(S). As expressed in formula (3), when the sum of the cache spaces required by the cache format data corresponding to the first i graph data nodes does not exceed the cache space size of the fixed cache M1 (that is, the cache space required by the cache format data of the i-th node does not exceed the remaining cache space of the fixed cache M1), the cache format data Item_i can be filled into the fixed cache M1. As expressed in formula (4), when the sum of the cache spaces required by the cache format data corresponding to the first i+1 graph data nodes exceeds the cache space size of the fixed cache M1 (that is, the cache space required by the cache format data of the (i+1)-th node exceeds the cache space remaining after the fixed cache M1 has been filled with the cache format data of the first i nodes), the operation of filling the fixed cache M1 stops at Item_{i+1}.
The data caching operation performed by the first application program at runtime is described below in connection with steps 803-806.
Step 803: a data query request is received while a first application is running on a device.
In this embodiment, the data query request may be initiated by a user or a tester directly operating the device, or may be initiated by another device communicatively connected to the device. When the data query request is initiated by a user or a tester directly operating the device, the user or the tester serves as the initiating end of the request; when the data query request is initiated by another device, that other device serves as the initiating end of the request.
The data query request may include a data query condition f. The data query condition f may be a keyword for searching data, a picture provided for searching data, audio provided for searching data, or the like. The specific form of the data query condition f is not limited here.
It should be noted that the data query request described in this step refers to any data query request received while the first application program runs on the device. If the device has already received other data query requests earlier in this run of the first application program, the hit cache M2 is likely not empty; if this is the first data query request received during this run of the first application program, the hit cache M2 may be empty.
Step 804: judging whether the data cache area M comprises data meeting the data query condition f or not, if so, executing step 805; if not, step 806 is performed.
In practice, if the data query request received in step 803 is the first data query request received while the first application program runs, there are two possibilities:
One possibility is that the fixed cache M1 includes data meeting the data query condition f; in that case the data in the fixed cache M1 is used directly, without acquiring data from the graph database in the external memory medium, and the operation described in step 805 is performed directly.
The other possibility is that the fixed cache M1 does not include data meeting the data query condition f, so the related data needs to be acquired from the graph database in the external memory medium. In the former case, if a data query request including the same data query condition f is received later, the data in the fixed cache M1 can again be provided to the initiating end of the request; in the latter case, if a data query request including the same data query condition f is received later and the data is repeatedly acquired from the external memory medium, the speed of acquiring data is repeatedly affected. For this reason, step 806 may be performed to cache the related data acquired the first time into the hit cache M2 for later use, thereby increasing the speed of data acquisition.
If the data query request received in step 803 is not the first data query request received while the first application program runs, there are three possibilities:
The first two are similar to the two described above and are not repeated here.
The third possibility is that the hit cache M2 includes data meeting the data query condition f; in that case the data in the hit cache M2 is used directly, without acquiring data from the graph database in the external memory medium, and the operation described in step 805 is performed directly.
Step 805: return the data meeting the data query condition f to the initiating end of the data query request.
Step 806: return the data in the graph database that meets the data query condition f to the initiating end of the data query request, and fill the hit cache M2 with that data.
In practical applications, there may be multiple pieces (or groups) of data meeting the data query condition f; the device may return all of the data meeting the data query condition f to the initiating end through the first application program, or may return only a part of it.
It should be noted that, before the data meeting the data query condition f is returned to the initiating end of the data query request in steps 805 and 806, the first application program may process the acquired data and then return the processed data to the initiating end of the request.
For example: the cache format data <A, [B, C]> meets the data query condition f, but the first application program judges that the graph data node A itself is not data required by the initiating end of the data query request; therefore, after processing the data <A, [B, C]>, the first application program may return only [B, C] to the initiating end.
It will be appreciated that the above is only one example implementation of the first application program processing data. In practical applications, other ways of processing data are also possible depending on the data query request and the first application program, so the specific way of processing data is not limited here.
In the above embodiment, steps 803-806 may be repeated, i.e., data query requests continue to be received and the hit cache M2 is updated in the process. More and more data fills the hit cache M2 during this caching process, and when the hit cache M2 is full of data, updating of the data in the hit cache M2 can stop.
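Steps 803-806 can be sketched as a single lookup routine; modelling M1, M2 and the graph database as dicts keyed by the query condition f is an assumption for illustration only:

```python
def handle_query(f, fixed_cache, hit_cache, graph_db, hit_capacity):
    """Sketch of steps 803-806: serve query condition f from M1 or M2 if
    possible; otherwise fetch from the graph database and fill M2 until full.
    """
    if f in fixed_cache:            # hit in the fixed cache M1 (step 805)
        return fixed_cache[f]
    if f in hit_cache:              # hit in the hit cache M2 (step 805)
        return hit_cache[f]
    data = graph_db[f]              # miss: read the external graph database
    if len(hit_cache) < hit_capacity:
        hit_cache[f] = data         # step 806: cache for repeated requests
    return data
```

A repeated request for the same f is then served from the memory medium rather than from the external graph database.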
In the above embodiment, the hit rate of data queries is improved by continuously updating the data in the hit cache M2. In a big data scenario, the data cached in the graph data buffer M corresponding to the first application program is far less than the data contained in the graph database, and the data read-write speed of the memory medium where M is located is far higher than that of the external memory medium where the graph database is located. As described in the foregoing embodiments, some high-probability query requests can query and acquire data directly in the memory medium, so the data read-write performance is greatly improved compared with the prior art.
In the embodiments of the present application, the design goal of the fixed cache M1 is to pre-cache the nodes that perform poorly in depth-first and breadth-first traversal, avoiding the spontaneous data warm-up mechanism of the graph database; the design goal of the hit cache M2 is to make statistical use of the user's query behavior, so that nodes not stored in the fixed cache M1 but likely to appear in queries can also be cached to improve efficiency, which compensates for and supplements the fixed cache mechanism.
Based on the data caching method provided by the foregoing embodiment, correspondingly, the application further provides a data caching device. Specific implementations of the apparatus are described below with reference to the examples and figures.
Device embodiment
Referring to fig. 9, the structure of a data caching apparatus according to an embodiment of the present application is shown.
As shown in fig. 9, the apparatus 900 includes:
the buffer area determining module 901 is configured to determine, before a first application program runs, a graph data buffer area M corresponding to the first application program in a memory medium of a device;
and the data caching module 902 is configured to fill the graph data cache area M with graph data nodes according to node structural features of the graph database.
By filling the graph data buffer with graph data nodes, time for traversing data from the graph database can be saved when the first application program runs. If the graph data buffer happens to contain the required data, the data can be obtained directly from the graph data buffer, without even searching the graph database, saving the time for warming up the graph database. In addition, in the technical solution of the present application, the graph data nodes filled into the graph data buffer are selected according to the node structure characteristics of the graph database. Therefore, if the node structure characteristics indicate that certain graph data nodes would cause low data-exchange performance between the memory medium and the external memory medium, then after those nodes are filled into the graph data buffer, they can be obtained directly from the graph data buffer when the first application program runs, without data exchange between the memory medium and the external memory medium. This further saves the time for traversing the graph database and optimizes the complex graph traversal process into an in-memory computation process, thereby improving the speed of obtaining data.
In some possible implementations, the priority sequence of caching data in the graph data cache region M is determined in a quantifiable manner, and data caching is implemented according to the sequence, so that the order of the cached data is improved. The data buffering module 902 specifically includes:
The score obtaining unit is used for scoring each graph data node in the graph database according to a preset scoring mode to obtain the score of each graph data node; the preset scoring mode is related to node structural features of the graph database;
the node list acquisition unit is used for sequencing the graph data nodes according to the scores of the graph data nodes to obtain a node list;
and the data first buffer unit is used for filling the graph data nodes into the graph data buffer area M according to the node list.
Optionally, the expression of the preset scoring mode is:
score(v)=p*R(v)+q*Q(v);
wherein v represents any one graph data node in a node set of the graph database, and the node set comprises all graph data nodes of the graph database; the score(v) represents the score of the v; the R(v) represents the maximum value among the respective closest distances from each graph data node in the node set to the v; the Q(v) represents the degree of the v; the p is a first weight, and represents the weight of the R(v) in the preset scoring mode; the q is a second weight, and represents the weight of the Q(v) in the preset scoring mode; the expression of the R(v) is:
R(v) = max_S{P(v,t)};
wherein the S represents the node set, and the t represents any graph data node in the S except the v; the P(v, t) represents the closest distance of the v from the t.
Scoring each graph data node in the node set of the graph database to obtain the score of each graph data node, forming a node list serving as a data cache selection basis according to the score, and finally realizing data caching. In the embodiment, the scoring method is adopted to quantify the priority of the data caching, so that the caching process is more orderly carried out. Optionally, the above apparatus may further include:
and the weight setting module is used for setting the first weight and the second weight according to the data query history of the graph database.
Optionally, the weight setting module specifically includes:
a search number acquisition unit configured to acquire a depth search number and a breadth search number according to a data query history for the graph database;
a setting unit configured to set the first weight to be greater than the second weight when the number of depth searches is greater than the number of breadth searches; and setting the first weight to be smaller than the second weight when the number of depth searches is smaller than the number of breadth searches.
By caching the data of the graph database into the graph data cache region corresponding to the first application program before the first application program operates, the speed of acquiring the data when the first application program operates is improved. There is one possible scenario in practical applications: when the first application program runs, the data query request does not hit the data cached before, so that the data needs to be acquired from the graph database and returned to the initiating end of the data query request. After this request, the initiator of the data query request may also repeatedly initiate the request, and if the data related to the request is repeatedly obtained from the graph database, the speed of obtaining the data will be obviously affected. For this problem, the map data buffer area M corresponding to the first application program may be divided into a fixed buffer and a hit buffer. Optionally, the map data buffer area M includes a fixed buffer M1 and a hit buffer M2;
a data caching module 902, specifically configured to fill the fixed cache M1 with graph data nodes;
when the first application is running on the device, the apparatus may further include:
the request receiving module is used for receiving a data query request, wherein the data query request comprises a data query condition f;
The judging module is used for judging whether the graph data cache area M comprises data conforming to the data query condition f or not;
the data return module is used for returning the data meeting the data query condition f to the initiating terminal of the data query request when the judging result of the judging module is yes; the data processing module is also used for returning the data meeting the data query condition f in the graph database to the initiating terminal of the data query request when the judging result of the judging module is negative;
the data caching module 902 is further configured to fill the hit cache M2 with the data meeting the data query condition f when the determination result of the determining module is no.
The hit rate of the data query is improved by continuously updating the data in the hit cache M2. In the big data scenario, the data cached in the graph data cache area M corresponding to the first application program is far less than the data contained in the graph database, so that the performance of the memory medium where M is located is far higher than that of the memory medium where the graph database is located. As can be seen from the description of the above embodiments, some high-probability query requests can directly perform data query and acquisition in the memory medium, and compared with the prior art, the performance is greatly improved.
Optionally, the data caching module 902 specifically includes:
the data acquisition unit is used for acquiring cache format data corresponding to the graph data nodes;
the judging unit is used for judging whether the buffer space occupied by the buffer format data exceeds the residual buffer space of the fixed buffer M1;
the data second caching unit, configured to fill the cache format data into the fixed cache M1 when the judging result of the judging unit is no, and to stop filling the cache format data corresponding to any graph data node into the fixed cache M1 when the judging result of the judging unit is yes.
Based on the data caching method and device provided by the foregoing embodiments, the embodiment of the present application further provides a computer readable storage medium.
The storage medium stores a program which, when executed by a processor, implements some or all of the steps in the data caching method protected by the foregoing method embodiment of the present application.
The storage medium may be a memory medium capable of storing program code, such as a read-only memory (ROM) or a random access memory (RAM).
Based on the data caching method, the device and the storage medium provided by the foregoing embodiments, the embodiment of the present application provides a processor. The processor is configured to execute a program, where when the program runs, part or all of the steps in the data caching method protected by the foregoing method embodiment are executed.
Based on the storage medium and the processor provided in the foregoing embodiments, the present application further provides a data caching device.
Referring to fig. 10, the hardware configuration diagram of the data caching device provided in this embodiment is shown.
As shown in fig. 10, the data caching apparatus includes: memory 1001, processor 1002, communication bus 1003, and communication interface 1004.
The memory 1001 stores a program that can run on the processor, and when the program is executed, part or all of the steps in the data caching method provided by the foregoing method embodiments of the present application are implemented. The memory 1001 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
In this device, the processor 1002 and the memory 1001 transmit signaling, logic instructions, and the like through a communication bus. The device is capable of communicating with other devices via the communication interface 1004.
Before the first application program runs, a graph data buffer corresponding to the first application program is determined in a memory medium of the device, and graph data nodes are filled into the graph data buffer according to the node structure characteristics of the graph database. By filling the graph data buffer with graph data nodes, time for traversing data from the graph database can be saved when the first application program runs. If the graph data buffer happens to contain the required data, the data can be obtained directly from the graph data buffer, without even searching the graph database, saving the time for warming up the graph database. In addition, in the technical solution of the present application, the graph data nodes filled into the graph data buffer are selected according to the node structure characteristics of the graph database. Therefore, if the node structure characteristics indicate that certain graph data nodes would cause low data-exchange performance between the memory medium and the external memory medium, then after those nodes are filled into the graph data buffer, they can be obtained directly from the graph data buffer when the first application program runs, without data exchange between the memory medium and the external memory medium, thereby further improving the speed of obtaining data.
It should be noted that, in the present specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments refer to each other, and each embodiment mainly describes its differences from the other embodiments. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; refer to the description of the method embodiments for the relevant parts. The apparatus and system embodiments described above are merely illustrative: elements illustrated as separate elements may or may not be physically separate, and components illustrated as units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Those of ordinary skill in the art can understand and implement the present application without undue burden.
The foregoing is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (6)

1. A data caching method, comprising:
before a first application program runs, determining a graph data buffer corresponding to the first application program in a memory medium of a device; the graph data buffer comprises a fixed cache;
scoring each graph data node in a graph database according to a preset scoring mode to obtain the score of each graph data node; the preset scoring mode is related to node structural features of the graph database;
sorting the graph data nodes according to their scores to obtain a node list;
filling graph data nodes into the fixed cache according to the node list before the first application program runs on the device;
the expression of the preset scoring mode is as follows:
score(v)=p*R(v)+q*Q(v);
wherein v represents any graph data node in a node set of the graph database, the node set comprising all graph data nodes of the graph database; score(v) represents the score of v; R(v) represents the maximum value among the respective closest distances from each graph data node in the node set to v; Q(v) represents the degree of v; p is a first weight, representing the weight of R(v) in the preset scoring mode; q is a second weight, representing the weight of Q(v) in the preset scoring mode; and the expression of R(v) is:
R(v) = max_{t∈S} {P(v,t)};
wherein S represents the node set, t represents any graph data node in S other than v, and P(v,t) represents the closest distance between v and t;
the method further comprises the steps of:
obtaining a number of depth searches and a number of breadth searches according to a data query history of the graph database;
if the number of depth searches is greater than the number of breadth searches, setting the first weight to be greater than the second weight; and if the number of depth searches is less than the number of breadth searches, setting the first weight to be less than the second weight.
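The scoring of claim 1 can be sketched as follows. This is a minimal illustration under assumed conventions, not the patented implementation: the graph is represented as an adjacency dict with unit edge weights, R(v) (the maximum closest distance from any node to v) is computed by breadth-first search, Q(v) is the node degree, and the concrete weight values in `pick_weights` are arbitrary placeholders.

```python
from collections import deque

def eccentricity(adj, v):
    """R(v): the maximum shortest-path distance from v to any reachable
    node, via breadth-first search (unit edge weights assumed)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return max(dist.values())

def score(adj, v, p, q):
    """score(v) = p*R(v) + q*Q(v), where Q(v) is the degree of v."""
    return p * eccentricity(adj, v) + q * len(adj.get(v, ()))

def pick_weights(depth_searches, breadth_searches):
    """p > q when depth searches dominate the query history, p < q when
    breadth searches dominate (0.7/0.3 are placeholder values)."""
    return (0.7, 0.3) if depth_searches > breadth_searches else (0.3, 0.7)

def node_list(adj, p, q):
    """Sort nodes by descending score to build the fill order."""
    return sorted(adj, key=lambda v: score(adj, v, p, q), reverse=True)
```

With p weighted higher, peripheral nodes (large R(v), i.e. expensive to reach by deep traversal) rise in the node list; with q weighted higher, high-degree hub nodes rise instead.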
2. The method of claim 1, wherein the graph data buffer further comprises a hit cache; and after the sorting of the graph data nodes according to their scores to obtain the node list, the method further comprises:
when the first application program runs on the device, receiving a data query request, wherein the data query request comprises a data query condition;
judging whether the graph data buffer includes data meeting the data query condition; if so, returning the data meeting the data query condition to an initiator of the data query request; and if not, returning data in the graph database that meets the data query condition to the initiator of the data query request, and filling the hit cache with that data.
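The query path of claim 2 can be sketched as follows, a minimal sketch using plain dicts as hypothetical stand-ins for the fixed cache, hit cache, and graph database, with a hashable query condition as the lookup key:

```python
def query(fixed_cache, hit_cache, graph_db, condition):
    """Serve a data query: check the graph data buffer (fixed cache,
    then hit cache) first; on a miss, answer from the graph database
    and fill the hit cache with the result."""
    if condition in fixed_cache:       # pre-filled buffer holds the data
        return fixed_cache[condition]
    if condition in hit_cache:         # previously cached on a miss
        return hit_cache[condition]
    data = graph_db[condition]         # fall back to the graph database
    hit_cache[condition] = data        # fill the hit cache for next time
    return data
```

The fixed cache is populated once, before the application runs, while the hit cache grows on demand from misses.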
3. The method according to claim 1, wherein said filling the fixed cache with graph data nodes comprises:
obtaining cache format data corresponding to the graph data nodes;
judging whether the cache space occupied by the cache-format data exceeds the remaining cache space of the fixed cache; if not, filling the cache-format data into the fixed cache; and if so, stopping filling cache-format data into the fixed cache.
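The filling step of claim 3 can be sketched as follows; the serializer `to_cache_format` and the byte-length accounting are assumptions made for illustration:

```python
def fill_fixed_cache(node_list, to_cache_format, capacity):
    """Fill the fixed cache in node-list order, stopping as soon as an
    entry would exceed the remaining cache space (sizes counted as the
    length of the serialized entry; `to_cache_format` is a hypothetical
    serializer mapping a node to its cache-format data)."""
    cache, used = {}, 0
    for node in node_list:
        entry = to_cache_format(node)
        if used + len(entry) > capacity:   # would exceed remaining space
            break                          # stop filling entirely
        cache[node] = entry
        used += len(entry)
    return cache
```

Because the node list is already sorted by score, stopping at the first oversized entry keeps the highest-scoring nodes in the cache.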
4. A data caching apparatus, comprising:
a buffer determining module, configured to determine, before a first application program runs, a graph data buffer corresponding to the first application program in a memory medium of a device, wherein the graph data buffer comprises a fixed cache;
a data caching module, configured to score each graph data node in a graph database according to a preset scoring mode to obtain a score of each graph data node, wherein the preset scoring mode is related to node structural features of the graph database; sort the graph data nodes according to their scores to obtain a node list; and fill the fixed cache with graph data nodes according to the node list before the first application program runs on the device;
The expression of the preset scoring mode is as follows:
score(v)=p*R(v)+q*Q(v);
wherein v represents any graph data node in a node set of the graph database, the node set comprising all graph data nodes of the graph database; score(v) represents the score of v; R(v) represents the maximum value among the respective closest distances from each graph data node in the node set to v; Q(v) represents the degree of v; p is a first weight, representing the weight of R(v) in the preset scoring mode; q is a second weight, representing the weight of Q(v) in the preset scoring mode; and the expression of R(v) is:
R(v) = max_{t∈S} {P(v,t)};
wherein S represents the node set, t represents any graph data node in S other than v, and P(v,t) represents the closest distance between v and t;
the apparatus further comprises a weight setting module, the weight setting module comprising a search-count obtaining unit and a setting unit;
the search-count obtaining unit is configured to obtain a number of depth searches and a number of breadth searches according to a data query history of the graph database;
the setting unit is configured to set the first weight to be greater than the second weight when the number of depth searches is greater than the number of breadth searches, and to set the first weight to be less than the second weight when the number of depth searches is less than the number of breadth searches.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the data caching method according to any one of claims 1-3.
6. A processor configured to run a computer program, wherein the program, when run, performs the data caching method according to any one of claims 1-3.
CN201911330901.6A 2019-12-20 2019-12-20 Data caching method and device and related products Active CN111090653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911330901.6A CN111090653B (en) 2019-12-20 2019-12-20 Data caching method and device and related products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911330901.6A CN111090653B (en) 2019-12-20 2019-12-20 Data caching method and device and related products

Publications (2)

Publication Number Publication Date
CN111090653A CN111090653A (en) 2020-05-01
CN111090653B true CN111090653B (en) 2023-12-15

Family

ID=70395210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911330901.6A Active CN111090653B (en) 2019-12-20 2019-12-20 Data caching method and device and related products

Country Status (1)

Country Link
CN (1) CN111090653B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111858612B (en) * 2020-07-28 2023-04-18 平安科技(深圳)有限公司 Data accelerated access method and device based on graph database and storage medium
CN117882065A (en) * 2021-08-30 2024-04-12 西门子股份公司 Method, apparatus and system for graphics data caching

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899156A (en) * 2015-05-07 2015-09-09 中国科学院信息工程研究所 Large-scale social network service-oriented graph data storage and query method
CN109670089A (en) * 2018-12-29 2019-04-23 颖投信息科技(上海)有限公司 Knowledge mapping system and its figure server
CN110019361A (en) * 2017-10-30 2019-07-16 北京国双科技有限公司 A kind of caching method and device of data

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8667012B2 (en) * 2011-06-09 2014-03-04 Salesforce.Com, Inc. Methods and systems for using distributed memory and set operations to process social networks
WO2017196315A1 (en) * 2016-05-11 2017-11-16 Hitachi, Ltd. Data storage system and process for reducing read and write amplifications
US20180173755A1 (en) * 2016-12-16 2018-06-21 Futurewei Technologies, Inc. Predicting reference frequency/urgency for table pre-loads in large scale data management system using graph community detection
US10445321B2 (en) * 2017-02-21 2019-10-15 Microsoft Technology Licensing, Llc Multi-tenant distribution of graph database caches

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899156A (en) * 2015-05-07 2015-09-09 中国科学院信息工程研究所 Large-scale social network service-oriented graph data storage and query method
CN110019361A (en) * 2017-10-30 2019-07-16 北京国双科技有限公司 A kind of caching method and device of data
CN109670089A (en) * 2018-12-29 2019-04-23 颖投信息科技(上海)有限公司 Knowledge mapping system and its figure server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zuo Yao et al., "A Preloading Cache Strategy for Graph Data", Computer Engineering, 2016, Vol. 42, No. 5, pp. 85-92. *

Also Published As

Publication number Publication date
CN111090653A (en) 2020-05-01

Similar Documents

Publication Publication Date Title
US20190205305A1 (en) Preliminary ranker for scoring matching documents
CN104536959A (en) Optimized method for accessing lots of small files for Hadoop
CN111090653B (en) Data caching method and device and related products
CN106649401A (en) Data writing method and device of distributed file system
US10229143B2 (en) Storage and retrieval of data from a bit vector search index
KR20130020050A (en) Apparatus and method for managing bucket range of locality sensitivie hash
CN106156331A (en) Cold and hot temperature data server system and processing method thereof
CN109240946A (en) The multi-level buffer method and terminal device of data
US11748324B2 (en) Reducing matching documents for a search query
US10467215B2 (en) Matching documents using a bit vector search index
CN102629941A (en) Caching method of a virtual machine mirror image in cloud computing system
US20160378828A1 (en) Bit vector search index using shards
CN107436813A (en) A kind of method and system of meta data server dynamic load leveling
CN107391600A (en) Method and apparatus for accessing time series data in internal memory
WO2015100549A1 (en) Graph data query method and device
CN109766318B (en) File reading method and device
US20160378796A1 (en) Match fix-up to remove matching documents
TW201903613A (en) System and method for data processing
CN110046175A (en) A kind of buffer update, data return method and device
CN103838680B (en) A kind of data cache method and device
CN104391947B (en) Magnanimity GIS data real-time processing method and system
CN111858612B (en) Data accelerated access method and device based on graph database and storage medium
US10733164B2 (en) Updating a bit vector search index
CN108173974A (en) A kind of HC Model inner buffer data based on distributed caching Memcached eliminate method
US20140032590A1 (en) Windowed mid-tier data cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant