CN115061952A - Data caching method and device, electronic equipment and computer storage medium


Info

Publication number
CN115061952A
Authority
CN
China
Prior art keywords
data
parameter values
mapping
accessed
data items
Prior art date
Legal status
Granted
Application number
CN202210996965.5A
Other languages
Chinese (zh)
Other versions
CN115061952B (en)
Inventor
高强
赵文浩
商帆
孙成新
王金明
Current Assignee
Feihu Information Technology Tianjin Co Ltd
Original Assignee
Feihu Information Technology Tianjin Co Ltd
Priority date
Filing date
Publication date
Application filed by Feihu Information Technology Tianjin Co Ltd filed Critical Feihu Information Technology Tianjin Co Ltd
Priority to CN202210996965.5A priority Critical patent/CN115061952B/en
Publication of CN115061952A publication Critical patent/CN115061952A/en
Application granted granted Critical
Publication of CN115061952B publication Critical patent/CN115061952B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0846Cache with multiple tag or data arrays being simultaneously accessible

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data caching method and apparatus, an electronic device, and a computer storage medium. The data caching method includes: when data is to be cached, determining, according to a business requirement, the dimensions related to the business requirement; obtaining the related dimensions and the parameter values corresponding to each dimension; for each dimension, determining the data result accessible from each parameter value; if data items accessible to all the parameter values are determined to exist, processing all the parameter values by using a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, taking those parameter values as first data; processing the first data by using a second mapping rule to obtain a second mapping result; and generating and storing the cache key of the dimension based on the target mapping result. Reducing the amount of stored data through this mapping configuration avoids wasting cache storage resources.

Description

Data caching method and device, electronic equipment and computer storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data caching method and apparatus, an electronic device, and a computer storage medium.
Background
A cache is a high-speed memory whose access speed is faster than that of ordinary random access memory (RAM); it generally does not use the DRAM technology of the system main memory but the more expensive yet faster SRAM technology. Caching is one of the important factors that enable modern computer systems to achieve high performance, and a common cache uses a key-value structure.
To return accessed data quickly, the data is cached, and the cache key must uniquely identify the result. When the key has to be defined by parameters of multiple dimensions, a very large number of cached results is generated (the product of the number of values in each dimension), which wastes cache storage resources.
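For illustration only (the dimensions, value counts, and key format below are hypothetical assumptions, not taken from the patent), a naive scheme that concatenates every raw parameter value produces a key space equal to the product of the dimensions' value counts:

```python
# A minimal sketch of the key-explosion problem under a naive key scheme
# (all names and counts are illustrative assumptions, not from the patent).
from itertools import product

dimensions = {
    "version": [f"{major}.{minor}" for major in range(1, 11) for minor in range(10)],  # 100 values
    "region": ["cn", "us", "eu"],                                                      # 3 values
    "platform": ["ios", "android", "web"],                                             # 3 values
}

# One cache key per combination of raw parameter values.
naive_keys = {":".join(values) for values in product(*dimensions.values())}
print(len(naive_keys))  # 900 keys, even if far fewer distinct results actually exist
```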
Disclosure of Invention
In view of this, embodiments of the present invention provide a data caching method, an apparatus, an electronic device, and a computer storage medium, so as to solve the problem of cache storage resource waste in the prior art.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a first aspect of the present invention shows a data caching method, where the method includes:
when data is to be cached, determining, according to a business requirement, the dimensions related to the business requirement;
obtaining the dimensions related to the business requirement and the parameter values corresponding to each dimension;
for each dimension, determining the data result accessible from each parameter value, the data result comprising a plurality of data items;
for each dimension, if data items accessible to all the parameter values are determined to exist, processing all the parameter values by using a first mapping rule to obtain a first mapping result;
if parameter values whose accessed data items are the same are determined to exist among the remaining data items, taking those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values;
processing the first data by using a second mapping rule to obtain a second mapping result;
and generating and storing the cache key of the dimension based on a target mapping result and/or the parameter values whose accessed data items differ, wherein the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
Optionally, the method further includes:
and initializing the parameter values corresponding to the dimensions to obtain the processed parameter values.
Optionally, the processing all the parameter values by using the first mapping rule to obtain a first mapping result includes:
all parameter values are mapped to constant values according to a first mapping rule, the constant values being specified in the first mapping rule.
Optionally, the processing the first data by using the second mapping rule to obtain a second mapping result includes:
and mapping the parameter values in the first data whose accessed data items are the same within a group to the corresponding target value according to a second mapping rule, wherein the target value is determined from the parameter values of the group whose accessed data items are the same.
Optionally, the method further includes:
if no data item accessible to all the parameter values exists, determining whether there are parameter values whose accessed data items are the same;
if so, taking, as the first data, the parameter values whose accessed data items are the same.
A second aspect of the present invention shows a data caching apparatus, including:
a determining unit, configured to determine, according to a business requirement, the dimensions related to the business requirement when data is to be cached;
an obtaining unit, configured to obtain the dimensions related to the business requirement and the parameter values corresponding to each dimension;
a processing unit, configured to: determine, for each dimension, the data result accessible from each parameter value, the data result comprising a plurality of data items; for each dimension, if data items accessible to all the parameter values are determined to exist, process all the parameter values by using a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, take those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values; and process the first data by using a second mapping rule to obtain a second mapping result;
a generating unit, configured to generate and store the cache key of the dimension based on a target mapping result and/or the parameter values whose accessed data items differ, wherein the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
Optionally, the apparatus further includes:
and an initialization unit, configured to perform, for each dimension, initialization processing on the parameter values corresponding to the dimension to obtain processed parameter values.
Optionally, the processing unit, configured to process all the parameter values by using the first mapping rule to obtain a first mapping result, is specifically configured to:
all parameter values are mapped to constant values according to a first mapping rule, the constant values being specified in the first mapping rule.
A third aspect of the embodiments of the present invention shows an electronic device, where the electronic device is configured to run a program, where the program executes the data caching method shown in the first aspect of the embodiments of the present invention when running.
A fourth aspect of the embodiments of the present invention shows a computer storage medium, where the storage medium includes a storage program, and when the program runs, a device on which the storage medium is located is controlled to execute the data caching method shown in the first aspect of the embodiments of the present invention.
Based on the data caching method and apparatus, the electronic device, and the computer storage medium above, the data caching method includes: when data is to be cached, determining, according to a business requirement, the dimensions related to the business requirement; obtaining the related dimensions and the parameter values corresponding to each dimension; for each dimension, determining the data result accessible from each parameter value, the data result comprising a plurality of data items; for each dimension, if data items accessible to all the parameter values are determined to exist, processing all the parameter values by using a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, taking those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values; processing the first data by using a second mapping rule to obtain a second mapping result; and generating and storing the cache key of the dimension based on the first mapping result and the second mapping result. In the embodiments of the present invention, the parameter values in each dimension are processed with the corresponding mapping rules to obtain the first mapping result and the second mapping result, so the amount of stored data is reduced through this mapping configuration and waste of cache storage resources is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart illustrating a data caching method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating another data caching method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data caching apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another data caching apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and claims of this application and in the above-described drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be implemented in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the description relating to "first", "second", etc. in the present invention is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, a schematic flow chart of a data caching method according to an embodiment of the present invention is shown, where the method includes:
step S101: and determining the dimensionality related to the business requirement according to the business requirement when the data is to be cached.
In the process of implementing step S101 specifically, when data is to be cached, the data is analyzed according to a service requirement to determine a dimension related to the service requirement.
Step S102: and obtaining the dimensionality related to the service requirement and the parameter value corresponding to each dimensionality.
In the process of implementing step S102 specifically, obtaining a dimension related to the service requirement, and querying the dimension related to the service requirement; and enumerates all parameter values for each dimension.
For example: analyzing dimensionality of the data to be cached and the service requirement, inquiring n dimensionalities related to the service requirement, wherein the value of n is an integer greater than or equal to 2, and enumerating all parameter values of the dimensionality A aiming at the dimensionality A in the n dimensionalities.
Step S103: for each dimension, the data results that can be accessed are determined based on each parameter value.
In the specific implementation of step S103, for each parameter value of each dimension, the data result that the parameter value can access is determined via the access terminal.
For example: the version dimension comprises 100 parameter values 1.0, 1.1, …, 10.0, and the access terminal is used to determine the data results that these 100 parameter values can access.
Step S104: for each dimension, determining whether there are data items accessible to all the parameter values; if so, executing steps S105 to S107 and steps S109 to S111; if not, executing steps S108 to S110.
In the specific implementation of step S104, for each dimension, it is determined whether there are data items that every parameter value of the dimension can access; if so, steps S105 to S107 and steps S109 to S111 are executed; if not, steps S108 to S110 are executed.
For example: the version dimension comprises 100 parameter values 1.0, 1.1, …, 10.0, and the data result comprises 20 data items, namely data 1, data 2, …, data 20. All parameter values 1.0 to 10.0 can access data 1 to data 15; parameter values 5.5 to 5.6 can also access data 16 to data 19, parameter values 5.7 to 5.8 can also access data 16 to data 20, and parameter values 5.9 to 7.0 can also access data 20. That is, data 1 to data 15 are accessible to versions 1.0 to 10.0, data 16 to data 19 are accessible to versions 5.5 to 5.8, and data 20 is accessible to versions 5.7 to 7.0. In summary, it can be determined that there are data items accessible to all the parameter values 1.0 to 10.0, namely data 1 to data 15.
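As a rough sketch of this step (the ranges below loosely follow the example above, and the data layout is an assumption made for illustration), the data result accessible to each parameter value can be modelled as a set, and the data items accessible to all parameter values are simply the intersection of those sets:

```python
# Hypothetical model of step S104 for the version dimension described above.
accessible = {}  # version string -> set of data items that version can access
for tenths in range(10, 101):                          # versions 1.0, 1.1, ..., 10.0
    version = f"{tenths // 10}.{tenths % 10}"
    items = {f"data{i}" for i in range(1, 16)}         # data 1..15: visible to every version
    if 55 <= tenths <= 56:
        items |= {f"data{i}" for i in range(16, 20)}   # data 16..19
    elif 57 <= tenths <= 58:
        items |= {f"data{i}" for i in range(16, 21)}   # data 16..20
    elif 59 <= tenths <= 70:
        items.add("data20")
    accessible[version] = items

# Data items reachable from every parameter value of the dimension.
common_items = set.intersection(*accessible.values())
print(sorted(common_items, key=lambda s: int(s[4:])))  # ['data1', ..., 'data15']
```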
Step S105: processing all the parameter values by using a first mapping rule to obtain a first mapping result.
It should be noted that, in the specific implementation of step S105, the following steps are included:
Step S11: mapping all the parameter values to a constant value according to a first mapping rule.
In step S11, the constant value is specified in the first mapping rule.
In the specific implementation of step S11, the parameter values whose returned data are consistent within the dimension are merged according to the business requirement; that is, a first mapping rule of the form C = f'(x) is created, where C is the constant value and x is a parameter value, and the parameter values are mapped to the constant value based on the first mapping rule.
For example: assuming that the constant value is 1, the version dimension comprises 100 parameter values 1.0, 1.1, …, 10.0, the data result comprises 20 data items, namely data 1, data 2, …, data 20, and all the parameter values 1.0 to 10.0 can access data 1 to data 15; all the parameter values 1.0 to 10.0 of the version dimension are therefore mapped to the constant value 1.
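A minimal sketch of this first mapping rule, assuming the constant value 1 from the example (the function name is an illustrative assumption):

```python
# First mapping rule C = f'(x): for a dimension whose data items are accessible to
# all parameter values, every value contributes the same constant to the cache key.
FIRST_MAPPING_CONSTANT = "1"

def apply_first_mapping(parameter_value: str) -> str:
    # The result is fixed by the rule itself; the concrete parameter value is irrelevant.
    return FIRST_MAPPING_CONSTANT

assert apply_first_mapping("1.0") == apply_first_mapping("10.0") == FIRST_MAPPING_CONSTANT
```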
Step S106: determining, based on the data items accessible to all the parameter values, whether remaining data items exist; if so, executing step S107; if not, executing step S111.
In the specific implementation of step S106, it is determined whether the data result contains data items other than those accessible to all the parameter values; if so, step S107 is executed; if not, it indicates that all the parameter values of the dimension can be mapped to a constant, and step S111 is executed.
Step S107: determining whether there are parameter values whose accessed data items are the same among the remaining data items; if so, executing step S109; if not, determining that no such parameter values exist, not mapping the parameter values, and storing them directly.
In step S107, the remaining data items are the data items other than the data items accessible to all the parameter values.
In the specific implementation of step S107, the data items other than the data items accessible to all the parameter values are taken as the remaining data items; for each dimension, it is judged whether there are parameter values whose accessed data items are the same among the remaining data items; if so, step S109 is executed; if not, it is determined that no such parameter values exist, the parameter values are not mapped, and they are stored directly.
It should be noted that, within a dimension, there may be multiple groups of parameter values whose accessed data items are the same.
Step S108: determining whether there are parameter values whose accessed data items are the same; if so, executing step S109; if not, determining that no such parameter values exist, not mapping the parameter values, and storing them directly.
In the specific implementation of step S108, all the data items in the data result are considered (since no data item is accessible to all the parameter values, every data item is treated as a remaining data item); for each dimension, it is judged whether there are parameter values whose accessed data items are the same among all the data items in the data result; if so, step S109 is executed; if not, it is determined that no such parameter values exist, the parameter values are not mapped, and they are stored directly.
Step S109: taking, as the first data, the parameter values whose accessed data items are the same.
In order to better understand the contents shown in step S107 and step S109, the following description is made.
For example: the version dimension comprises 100 parameter values 1.0, 1.1, …, 10.0, and the data result comprises 20 data items, namely data 1, data 2, …, data 20. Since data 1 to data 15 are accessible to all versions 1.0 to 10.0, the remaining data items are data 16 to data 20; parameter values 5.5 to 5.6 can access data 16 to data 19, parameter values 5.7 to 5.8 can access data 16 to data 20, and parameter values 5.9 to 7.0 can access data 20. In this case, the parameter values 5.5 to 5.6 accessing data 16 to data 19, the parameter values 5.7 to 5.8 accessing data 16 to data 20, and the parameter values 5.9 to 7.0 accessing data 20 may be taken together, that is, the parameter values 5.5 to 7.0 (16 parameter values in total), as the first data.
Step S110: and processing the first data by using a second mapping rule to obtain a second mapping result.
In an embodiment, the process of processing the first data by using the second mapping rule to obtain the second mapping result in the step 110 includes the following steps:
step S21: and mapping the parameter values of the data items of the first data in the same group access to the corresponding target values according to a second mapping rule.
In step S21, the target value is determined from the same parameter values of the data items accessed from the same group.
In the process of the specific implementation step S21, determining how many groups of accessed data items in the first data of each dimension have the same parameter value; for each group of parameter values, selecting any one parameter value from the parameter values which are the same with the data items accessed in the same group as a target value; merging values consistent with the returned data in each dimension according to business requirements, namely creating a second mapping rule which accords with y = f (x), wherein y is a target value, and x is first data; the first data is mapped to a corresponding target value.
Such as: the parameter values 5.5 to 5.6 enable access to the data 16 to 19, in which case the parameter values 5.5 to 5.6 for access to the data 16 to 19 can be placed in a group; any one parameter value 5.5 is selected from the parameter values 5.5 to 5.6 as a target value, and the parameter values with the versions of 5.5 to 5.6 in the first data are mapped to the target value.
The target value is generally the minimum value among the parameter values.
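A sketch of the second mapping rule y = f(x) under the assumptions above: parameter values whose accessed data items are identical are grouped, and every member of a group is mapped to the group's minimum value, as suggested by the text (the helper name and the small example table are illustrative assumptions):

```python
from collections import defaultdict

def build_second_mapping(remaining_access):
    """Group parameter values by the exact set of data items they can access and
    map every member of a group to the group's minimum value."""
    groups = defaultdict(list)                     # frozenset of data items -> values
    for value, items in remaining_access.items():
        groups[frozenset(items)].append(value)
    mapping = {}
    for values in groups.values():
        target = min(values, key=lambda v: tuple(int(p) for p in v.split(".")))
        for v in values:
            mapping[v] = target                    # e.g. 5.5 and 5.6 both map to 5.5
    return mapping

# Remaining data items only (data 1..15 were already covered by the first mapping).
remaining_access = {
    "5.5": {"data16", "data17", "data18", "data19"},
    "5.6": {"data16", "data17", "data18", "data19"},
    "5.7": {"data16", "data17", "data18", "data19", "data20"},
    "5.8": {"data16", "data17", "data18", "data19", "data20"},
    "5.9": {"data20"},
    "6.0": {"data20"},
}
print(build_second_mapping(remaining_access))
# {'5.5': '5.5', '5.6': '5.5', '5.7': '5.7', '5.8': '5.7', '5.9': '5.9', '6.0': '5.9'}
```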
Step S111: generating and storing the cache key of the dimension based on the target mapping result and/or the parameter values whose accessed data items differ.
In step S111, the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
In the specific implementation of step S111, the target mapping result of each dimension is determined through steps S101 to S110; the target mapping results of the dimensions and/or the parameter values whose accessed data items differ are combined to generate the cache key, and the cache key is stored.
Specifically, for each dimension: if processing the dimension's parameter values involves steps S101 to S107 and steps S109 to S110, the data of the dimension is generated by combining the first mapping result, the second mapping result, and the parameter values whose accessed data items differ, or by combining the first mapping result and the second mapping result; if it involves steps S101 to S104 and steps S108 to S110, the data of the dimension is generated by combining the second mapping result and the parameter values whose accessed data items differ, or from the second mapping result alone; if it involves only steps S101 to S106, the first mapping result of the dimension is used as the data of the dimension. The data of all the dimensions are then combined to generate the cache key, which is stored.
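The sketch below shows one plausible way of reducing an incoming request's raw parameter values to per-dimension data and joining them into a cache key; it is an interpretation of the step above, and the separators, dimension names, and rule tables are assumptions rather than the patent's concrete format:

```python
def dimension_fragment(raw_value, first_constant, second_mapping):
    """Reduce one dimension's raw parameter value to the data used in the key."""
    if raw_value in second_mapping:        # value belongs to a grouped remaining set
        return second_mapping[raw_value]
    if first_constant is not None:         # dimension has universally accessible items
        return first_constant
    return raw_value                       # no mapping applies: store the value as-is

def build_cache_key(request, dimension_rules):
    parts = []
    for dimension in sorted(request):      # stable ordering of dimensions in the key
        first_constant, second_mapping = dimension_rules.get(dimension, (None, {}))
        fragment = dimension_fragment(request[dimension], first_constant, second_mapping)
        parts.append(f"{dimension}={fragment}")
    return "&".join(parts)

rules = {"version": ("1", {"5.5": "5.5", "5.6": "5.5", "5.7": "5.7", "5.8": "5.7"})}
print(build_cache_key({"version": "5.6", "region": "cn"}, rules))  # region=cn&version=5.5
print(build_cache_key({"version": "2.3", "region": "cn"}, rules))  # region=cn&version=1
```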
Optionally, because of the screening in step S101, there will be no parameter values whose accessed data items differ in the data caching method.
Optionally, after the cache key is generated, the key is bound to its value and the key-value pair is written into the cache.
Optionally, to ensure eventual consistency, additional asynchronous compensation operations may be required to keep the full data set cached and the cache updated synchronously, thereby preventing cache breakdown; for example, timed full-data cache updates.
It should be noted that the full data set refers to the data enumerated from the cache keys of the same service.
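One common way to realize such a timed, asynchronous compensation (a hypothetical sketch; the in-process store, interval, and callbacks are assumptions, not prescribed by the patent) is a background thread that periodically re-enumerates the mapped cache keys and rewrites their values in full:

```python
import threading
import time

CACHE = {}  # stands in for an external key-value store

def start_full_refresh(enumerate_keys, load_value, interval_seconds=300):
    """Periodically recompute every mapped cache key and overwrite its value so the
    cache converges to the source of truth (eventual consistency, no breakdown)."""
    def run():
        while True:
            for key in enumerate_keys():       # all keys derivable from the mapping rules
                CACHE[key] = load_value(key)   # fetch the authoritative result
            time.sleep(interval_seconds)
    worker = threading.Thread(target=run, daemon=True)
    worker.start()
    return worker

# Example wiring with illustrative callbacks:
# start_full_refresh(lambda: ["region=cn&version=1", "region=cn&version=5.5"],
#                    lambda key: f"result-for-{key}")
```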
In the embodiment of the present invention, when data is to be cached, the dimensions related to a business requirement are determined according to that requirement; the related dimensions and the parameter values corresponding to each dimension are obtained; for each dimension, the data result accessible from each parameter value is determined; for each dimension, if data items accessible to all the parameter values are determined to exist, all the parameter values are processed with a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, those parameter values are taken as the first data; the first data is processed with a second mapping rule to obtain a second mapping result; and the cache key of the dimension is generated and stored based on the target mapping result. Reducing the amount of stored data through this mapping configuration avoids wasting cache storage resources.
Based on the data caching method of the above embodiment of the present invention, correspondingly, another data caching method is further shown in the embodiment of the present invention, as shown in fig. 2, the method includes:
step S201: and determining the dimensionality related to the business requirement according to the business requirement when the data is to be cached.
Step S202: and obtaining the dimensionality related to the service requirement and the parameter value corresponding to each dimensionality.
It should be noted that the specific implementation process of step S201 to step S202 is the same as the specific implementation process of step S101 to step S102, and reference may be made to each other.
Step S203: initializing the parameter values corresponding to each dimension to obtain the processed parameter values.
It should be noted that the initialization processing includes deduplication, cleaning, and the like.
In the specific implementation of step S203, the enumerated parameter values may contain duplicates; to ensure that the enumerated values of each dimension are unique, deduplication and/or cleaning is performed on the parameter values corresponding to each dimension, so as to obtain the processed parameter values.
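A small sketch of this initialization step (the cleaning predicate is an assumption; the patent only names deduplication and cleaning):

```python
def initialize_parameter_values(raw_values):
    """Deduplicate the enumerated values and drop obviously unusable entries,
    preserving the first-seen order."""
    seen, cleaned = set(), []
    for value in raw_values:
        value = str(value).strip()          # cleaning: normalize whitespace (illustrative)
        if not value or value in seen:      # cleaning + deduplication
            continue
        seen.add(value)
        cleaned.append(value)
    return cleaned

print(initialize_parameter_values(["1.0", "1.0 ", "", "1.1"]))  # ['1.0', '1.1']
```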
Step S204: for each dimension, the data results that can be accessed are determined based on each parameter value.
Step S205: for each dimension, determining whether there are data items that can be accessed by all the parameter values, if so, executing step S206 to step S208, and step S210 to step S212; if not, step S209 to step S211 are executed.
Step S206: processing all the parameter values by using a first mapping rule to obtain a first mapping result.
Step S207: determining, based on the data items accessible to all the parameter values, whether remaining data items exist; if so, executing step S208; if not, executing step S212.
Step S208: determining whether there are parameter values whose accessed data items are the same among the remaining data items; if so, executing step S210; if not, determining that no such parameter values exist, not mapping the parameter values, and storing them directly.
In step S208, the remaining data items are the data items other than the data items accessible to all the parameter values.
Step S209: determining whether there are parameter values whose accessed data items are the same; if so, executing step S210; if not, determining that no such parameter values exist, not mapping the parameter values, and storing them directly.
Step S210: taking, as the first data, the parameter values whose accessed data items are the same.
Step S211: processing the first data by using a second mapping rule to obtain a second mapping result.
Step S212: generating and storing the cache key of the dimension based on the target mapping result and/or the parameter values whose accessed data items differ.
In step S212, the target mapping result includes a first mapping result and/or a second mapping result.
It should be noted that the specific implementation process of step S204 to step S212 is the same as the specific implementation process of step S103 to step S111, and they can be referred to each other.
In the embodiment of the present invention, when data is to be cached, the dimensions related to a business requirement are determined according to that requirement; the related dimensions and the parameter values corresponding to each dimension are obtained; for each dimension, the data result accessible from each parameter value is determined; for each dimension, if data items accessible to all the parameter values are determined to exist, all the parameter values are processed with a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, those parameter values are taken as the first data; the first data is processed with a second mapping rule to obtain a second mapping result; and the cache key of the dimension is generated and stored based on the target mapping result. Reducing the amount of stored data through this mapping configuration avoids wasting cache storage resources.
Based on the data caching method described in the foregoing embodiments of the present invention, an embodiment of the present invention correspondingly provides a data caching apparatus. Fig. 3 is a schematic structural diagram of the data caching apparatus, which includes:
a determining unit 301, configured to determine, according to a business requirement, the dimensions related to the business requirement when data is to be cached;
an obtaining unit 302, configured to obtain the dimensions related to the business requirement and the parameter values corresponding to each dimension;
a processing unit 303, configured to: determine, for each dimension, the data result accessible from each parameter value, the data result comprising a plurality of data items; for each dimension, if data items accessible to all the parameter values are determined to exist, process all the parameter values by using a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, take those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values; and process the first data by using a second mapping rule to obtain a second mapping result;
a generating unit 304, configured to generate and store the cache key of the dimension based on a target mapping result and/or the parameter values whose accessed data items differ, wherein the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
It should be noted that, the specific principle and the implementation process of each unit in the data caching apparatus disclosed in the above embodiment of the present invention are the same as the data caching method shown in the above embodiment of the present invention, and reference may be made to corresponding parts in the data caching method disclosed in the above embodiment of the present invention, which are not described herein again.
In the embodiment of the present invention, when data is to be cached, the dimensions related to a business requirement are determined according to that requirement; the related dimensions and the parameter values corresponding to each dimension are obtained; for each dimension, the data result accessible from each parameter value is determined; for each dimension, if data items accessible to all the parameter values are determined to exist, all the parameter values are processed with a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, those parameter values are taken as the first data; the first data is processed with a second mapping rule to obtain a second mapping result; and the cache key of the dimension is generated and stored based on the target mapping result. Reducing the amount of stored data through this mapping configuration avoids wasting cache storage resources.
Optionally, based on the data caching apparatus shown in the foregoing embodiment of the present invention, and with reference to Fig. 3 and Fig. 4, the apparatus further includes:
an initializing unit 305, configured to perform initialization processing on a parameter value corresponding to each dimension to obtain a processed parameter value.
Optionally, based on the data caching apparatus shown in the foregoing embodiment of the present invention, the processing unit 303, which uses the first mapping rule to process all the parameter values to obtain the first mapping result, is specifically configured to:
all parameter values are mapped to constant values according to a first mapping rule, the constant values being specified in the first mapping rule.
Optionally, based on the data caching apparatus shown in the foregoing embodiment of the present invention, the processing unit 303, which uses the second mapping rule to process the first data to obtain the second mapping result, is specifically configured to:
and mapping the parameter values in the first data whose accessed data items are the same within a group to the corresponding target value according to a second mapping rule, wherein the target value is determined from the parameter values of the group whose accessed data items are the same.
Optionally, based on the data caching apparatus shown in the foregoing embodiment of the present invention, the processing unit 303 is further configured to:
if no data item accessible to all the parameter values exists, determining whether there are parameter values whose accessed data items are the same;
if so, taking, as the first data, the parameter values whose accessed data items are the same.
The embodiment of the present invention further discloses an electronic device for running a database storage process, where the data caching method disclosed in Fig. 1 and Fig. 2 above is executed when the database storage process runs.
The embodiment of the present invention further discloses a computer storage medium comprising a stored database storage process, where, when the database storage process runs, the device on which the storage medium resides is controlled to execute the data caching method disclosed in Fig. 1 and Fig. 2 above.
In the context of this disclosure, a computer storage medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments, which are substantially similar to the method embodiments, are described in a relatively simple manner, and reference may be made to some descriptions of the method embodiments for relevant points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for caching data, the method comprising:
when data is to be cached, determining, according to a business requirement, the dimensions related to the business requirement;
obtaining the dimensions related to the business requirement and the parameter values corresponding to each dimension;
for each dimension, determining the data result accessible from each parameter value, the data result comprising a plurality of data items;
for each dimension, if data items accessible to all the parameter values are determined to exist, processing all the parameter values by using a first mapping rule to obtain a first mapping result;
if parameter values whose accessed data items are the same are determined to exist among the remaining data items, taking those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values;
processing the first data by using a second mapping rule to obtain a second mapping result;
and generating and storing the cache key of the dimension based on a target mapping result and/or the parameter values whose accessed data items differ, wherein the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
2. The method of claim 1, further comprising:
and initializing the parameter values corresponding to the dimensions to obtain the processed parameter values.
3. The method of claim 1, wherein the processing all the parameter values by using the first mapping rule to obtain a first mapping result comprises:
all parameter values are mapped to constant values according to a first mapping rule, the constant values being specified in the first mapping rule.
4. The method of claim 1, wherein processing the first data using the second mapping rule to obtain a second mapping result comprises:
and mapping the parameter values in the first data whose accessed data items are the same within a group to the corresponding target value according to a second mapping rule, wherein the target value is determined from the parameter values of the group whose accessed data items are the same.
5. The method of claim 1, further comprising:
if no data item accessible to all the parameter values exists, determining whether there are parameter values whose accessed data items are the same;
if so, taking, as the first data, the parameter values whose accessed data items are the same.
6. A data caching apparatus, comprising:
a determining unit, configured to determine, according to a business requirement, the dimensions related to the business requirement when data is to be cached;
an acquisition unit, configured to acquire the dimensions related to the business requirement and the parameter values corresponding to each dimension;
a processing unit, configured to: determine, for each dimension, the data result accessible from each parameter value, the data result comprising a plurality of data items; for each dimension, if data items accessible to all the parameter values are determined to exist, process all the parameter values by using a first mapping rule to obtain a first mapping result; if parameter values whose accessed data items are the same are determined to exist among the remaining data items, take those parameter values as first data, wherein the remaining data items are the data items other than the data items accessible to all the parameter values; and process the first data by using a second mapping rule to obtain a second mapping result;
a generating unit, configured to generate and store the cache key of the dimension based on a target mapping result and/or the parameter values whose accessed data items differ, wherein the target mapping result comprises the first mapping result and/or the second mapping result, and "the target mapping result and/or the parameter values whose accessed data items differ" covers the target mapping result alone, or the target mapping result together with the parameter values whose accessed data items differ.
7. The apparatus of claim 6, further comprising:
and an initialization unit, configured to perform, for each dimension, initialization processing on the parameter values corresponding to the dimension to obtain processed parameter values.
8. The apparatus according to claim 6, wherein the processing unit that processes all the parameter values by using the first mapping rule to obtain the first mapping result is specifically configured to:
all parameter values are mapped to constant values according to a first mapping rule, the constant values being specified in the first mapping rule.
9. An electronic device, wherein the electronic device is configured to run a program, and wherein the program is configured to perform the data caching method according to any one of claims 1 to 5 when running.
10. A computer storage medium, characterized in that the storage medium comprises a stored program, wherein when the program runs, the device on which the storage medium is located is controlled to execute the data caching method according to any one of claims 1 to 5.
CN202210996965.5A 2022-08-19 2022-08-19 Data caching method and device, electronic equipment and computer storage medium Active CN115061952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210996965.5A CN115061952B (en) 2022-08-19 2022-08-19 Data caching method and device, electronic equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210996965.5A CN115061952B (en) 2022-08-19 2022-08-19 Data caching method and device, electronic equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN115061952A true CN115061952A (en) 2022-09-16
CN115061952B CN115061952B (en) 2022-12-27

Family

ID=83207753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210996965.5A Active CN115061952B (en) 2022-08-19 2022-08-19 Data caching method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN115061952B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105404634A (en) * 2014-09-15 2016-03-16 南京理工大学 Key-Value data block based data management method and system
CN110427437A (en) * 2019-07-31 2019-11-08 南京邮电大学 A kind of relevant database mixing isomery interrogation model and method towards big data
US20200057781A1 (en) * 2018-08-20 2020-02-20 Salesforce.org Mapping and query service between object oriented programming objects and deep key-value data stores
CN111143417A (en) * 2019-12-27 2020-05-12 广东浪潮大数据研究有限公司 Data processing method, device and system, Nginx server and medium
CN111611225A (en) * 2020-05-15 2020-09-01 腾讯科技(深圳)有限公司 Data storage management method, query method, device, electronic equipment and medium
CN114356921A (en) * 2021-12-28 2022-04-15 中国农业银行股份有限公司 Data processing method, device, server and storage medium
CN114756287A (en) * 2022-06-14 2022-07-15 飞腾信息技术有限公司 Data processing method and device for reorder buffer and storage medium

Also Published As

Publication number Publication date
CN115061952B (en) 2022-12-27

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant