CN117271395A - Data caching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117271395A
CN117271395A (application number CN202311559705.2A)
Authority
CN
China
Prior art keywords
key
map
data
buffer
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311559705.2A
Other languages
Chinese (zh)
Other versions
CN117271395B (en)
Inventor
吴亮平
王娟
黄昕远
杨博
杜磊
方锐
冯友全
谭春勇
付俊超
李颖
吴万兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Minhang Chengdu Information Technology Co ltd
Original Assignee
Minhang Chengdu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Minhang Chengdu Information Technology Co ltd filed Critical Minhang Chengdu Information Technology Co ltd
Priority to CN202311559705.2A
Publication of CN117271395A
Application granted
Publication of CN117271395B
Active legal status
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 - Addressing or allocation; Relocation
    • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0877 - Cache access modes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 - Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 - Methods or arrangements for data conversion without changing the order or content of the data handled, for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/065 - Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present disclosure provides a data caching method, an apparatus, an electronic device, and a storage medium. A data cache MAP and a key cache MAP are provided. A target key is determined for each new cache entry to be stored in the data cache MAP, and the target key is taken modulo the cache capacity of the data cache MAP to obtain a modulus value. When the cache capacity is not full, the new cache entry is written into the data cache MAP, and a pair with the modulus value as key and the target key as value is written into the key cache MAP. When the cache capacity is full, the target key-value pair whose key equals the modulus value is looked up in the key cache MAP, and the target value of that pair is determined; the key-value pair keyed by the target value is then deleted from the data cache MAP, and the new cache entry is written into the data cache MAP. This guarantees a strict first-in-first-out policy when the in-memory cache evicts data.

Description

Data caching method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of data processing, and in particular to a data caching method, a data caching apparatus, an electronic device, and a storage medium.
Background
To ensure high performance and high availability of a system, local caches and distributed caches store data in memory. As time goes on, however, more and more data is cached, and because of cost and memory limitations, cached data must be evicted once it exceeds the maximum cache capacity. Common cache eviction strategies include the FIFO (first-in-first-out) algorithm, the LRU algorithm, the LFU algorithm, and the like.
At present, in-memory caches supporting the FIFO algorithm mainly evict by randomly sampling 30 cache entries from the cache and then discarding, among those 30 entries, the ones that entered first. However, when the cache holds more than 30 entries, random sampling cannot guarantee that the globally first-in entry is evicted first.
Disclosure of Invention
The embodiments of the present disclosure provide at least a data caching method and apparatus, an electronic device, and a storage medium.
The embodiment of the disclosure provides a data caching method, which comprises the following steps:
providing a data cache MAP and a key cache MAP;
determining a target key for a new cache entry to be stored in the data cache MAP, and taking the target key modulo the cache capacity of the data cache MAP to obtain a modulus value;
when the cache capacity is not full, writing the new cache entry into the data cache MAP, and writing the modulus value as key and the target key as value into the key cache MAP;
when the cache capacity is full, searching the key cache MAP for the target key-value pair whose key is the modulus value, and determining the target value of that pair;
deleting the key-value pair keyed by the target value from the data cache MAP, and writing the new cache entry into the data cache MAP.
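The steps above can be sketched as a minimal in-memory FIFO cache. This is an illustrative Python sketch, not an implementation prescribed by the disclosure: the class and method names are hypothetical, and plain dicts stand in for the data cache MAP and the key cache MAP.

```python
class FifoMapCache:
    """Fixed-capacity FIFO cache built from two maps, following the steps above."""

    def __init__(self, capacity):
        self.capacity = capacity  # fixed cache capacity
        self.data_map = {}        # data cache MAP: target key -> cached content
        self.key_map = {}         # key cache MAP: (key % capacity) -> target key
        self.next_key = 0         # keys increase by one in storage order

    def put(self, value):
        target_key = self.next_key        # target key for the new cache entry
        self.next_key += 1
        mod = target_key % self.capacity  # target key modulo the cache capacity
        if mod in self.key_map:           # capacity full: evict the first-in entry
            oldest_key = self.key_map[mod]
            del self.data_map[oldest_key]
        self.data_map[target_key] = value  # write the new cache entry
        self.key_map[mod] = target_key     # write (or overwrite) the key queue slot
        return target_key

    def get(self, key):
        return self.data_map.get(key)
```

Because keys increase by one per insertion, the entry that shares a modulus slot with the new key is always the oldest entry in the cache, which is what makes the eviction strictly first-in-first-out.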
In an optional embodiment, the data cache MAP is configured to store cache data as key-value pairs, where the keys in the data cache MAP increase by one in data storage order.
In an optional embodiment, the key cache MAP is configured to take each key of the data cache MAP as its value, and that key taken modulo the capacity of the data cache MAP as its key, thereby forming a key queue for the data cache MAP.
In an optional embodiment, after deleting the key-value pair keyed by the target value from the data cache MAP and writing the new cache entry into the data cache MAP, the method further includes:
replacing the target value with the target key in the key cache MAP.
In an optional embodiment, whether the cache capacity of the data cache MAP is full is determined based on the following steps:
traversing the key-value pairs in the key cache MAP, and determining whether a key equal to the modulus value exists;
if so, determining that the cache capacity of the data cache MAP is full;
if not, determining that the cache capacity of the data cache MAP is not full.
The embodiments of the present disclosure also provide a data caching apparatus, comprising:
a cache creation module, configured to provide a data cache MAP and a key cache MAP;
a capacity modulo module, configured to determine a target key for a new cache entry to be stored in the data cache MAP, and to take the target key modulo the cache capacity of the data cache MAP to obtain a modulus value;
a first writing module, configured to write the new cache entry into the data cache MAP when the cache capacity is not full, and to write the modulus value as key and the target key as value into the key cache MAP;
a key-value searching module, configured to search the key cache MAP, when the cache capacity is full, for the target key-value pair whose key is the modulus value, and to determine the target value of that pair;
a second writing module, configured to delete the key-value pair keyed by the target value from the data cache MAP, and to write the new cache entry into the data cache MAP.
In an optional embodiment, the apparatus further includes a key-value updating module configured to:
replace the target value with the target key in the key cache MAP.
In an optional embodiment, the apparatus is further configured to:
traverse the key-value pairs in the key cache MAP, and determine whether a key equal to the modulus value exists;
if so, determine that the cache capacity of the data cache MAP is full;
if not, determine that the cache capacity of the data cache MAP is not full.
The embodiments of the present disclosure also provide an electronic device, comprising a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate over the bus, and the machine-readable instructions, when executed by the processor, perform the data caching method described above, or the steps of any possible embodiment of that method.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-described data caching method, or steps of any one of the possible implementation manners of the above-described data caching method.
The disclosed embodiments also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the above-described data caching method, or steps in any one of the possible implementation manners of the above-described data caching method.
The embodiments of the present disclosure provide a data caching method, a data caching apparatus, an electronic device, and a storage medium. A data cache MAP and a key cache MAP are provided; a target key is determined for each new cache entry to be stored in the data cache MAP, and the target key is taken modulo the cache capacity of the data cache MAP; when the cache capacity is not full, the new cache entry is written into the data cache MAP, and the modulus value (as key) and the target key (as value) are written into the key cache MAP; when the cache capacity is full, the target key-value pair whose key is the modulus value is looked up in the key cache MAP, and its target value is determined; the key-value pair keyed by the target value is deleted from the data cache MAP, and the new cache entry is written into the data cache MAP. This guarantees a first-in-first-out policy when the in-memory cache evicts data.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. These drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for caching data according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a data buffer MAP and a key buffer MAP according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of data writing of a data buffer MAP and a key buffer MAP according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of data writing of another data buffer MAP and key buffer MAP according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a data caching apparatus according to an embodiment of the disclosure;
fig. 6 shows a schematic diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments are described below fully and clearly with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments provided in the accompanying drawings is not intended to limit the claimed scope of the disclosure but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the protection scope of the present disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of them; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
It has been found that, at present, in-memory caches supporting the FIFO first-in-first-out algorithm mainly evict by randomly sampling 30 cache entries from the cache and then discarding, among those 30 entries, the ones that entered first. However, when the cache holds more than 30 entries, random sampling cannot guarantee first-in-first-out eviction.
Based on the above study, the present disclosure provides a data caching method, apparatus, electronic device, and storage medium: a data cache MAP and a key cache MAP are provided; a target key is determined for each new cache entry to be stored in the data cache MAP, and the target key is taken modulo the cache capacity of the data cache MAP; when the cache capacity is not full, the new cache entry is written into the data cache MAP, and the modulus value (as key) and the target key (as value) are written into the key cache MAP; when the cache capacity is full, the target key-value pair whose key is the modulus value is looked up in the key cache MAP, and its target value is determined; the key-value pair keyed by the target value is deleted from the data cache MAP, and the new cache entry is written into the data cache MAP. This guarantees a first-in-first-out policy when the in-memory cache evicts data.
For ease of understanding, a data caching method disclosed in an embodiment of the present disclosure is first described in detail. The execution body of the data caching method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the data caching method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a data caching method according to an embodiment of the disclosure is shown, where the method includes steps S101 to S104, where:
s101, providing a data buffer MAP and a key buffer MAP.
In a specific implementation, during data caching, a data cache MAP and a key cache MAP are provided to store the relevant data.
Here, the data cache MAP stores cache data as key-value pairs, and its keys increase by one in data storage order. The key cache MAP forms a key queue for the data cache MAP: each key of the data cache MAP is stored as a value, under a key equal to that key taken modulo the capacity of the data cache MAP.
Both the data cache MAP and the key cache MAP are MAP collections that store data as key-value pairs. The content that actually needs to be cached is stored in the data cache MAP, and the keys of the data cache MAP are managed so that the MAP behaves as a queue.
It should be noted that the data cache MAP has a constant capacity, that is, it is a fixed-length cache, and the key cache MAP has the same capacity.
For example, referring to fig. 2, a schematic diagram of a data cache MAP and a key cache MAP according to an embodiment of the disclosure is shown. As shown in fig. 2, the data cache MAP and the key cache MAP both store data as key-value pairs and have a capacity of 5.
Here, in the data cache MAP, contents 1 to 4 are written in chronological order: the key-value pair "1000-content 1" is written first and "1003-content 4" last. Correspondingly, in the key cache MAP, "0-1000" is written first and "3-1003" last.
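The fig. 2 state can be reproduced with two plain Python dicts (a hypothetical reconstruction; the key values 1000 to 1003 follow the figure):

```python
CAPACITY = 5

# data cache MAP: four entries written in order, keys incremented by one
data_map = {1000: "content 1", 1001: "content 2",
            1002: "content 3", 1003: "content 4"}

# key cache MAP: (key % capacity) -> key, forming the key queue
key_map = {k % CAPACITY: k for k in data_map}
print(key_map)  # {0: 1000, 1: 1001, 2: 1002, 3: 1003}
```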
Note that the shared capacity of the data cache MAP and the key cache MAP may be set according to actual needs, and is not specifically limited here.
S102, determining a target key for a new cache entry to be stored in the data cache MAP, and taking the target key modulo the cache capacity of the data cache MAP to obtain a modulus value.
In a specific implementation, for a new cache entry newly written into the data cache MAP, the target key under which the entry will be written is first determined, and that target key is taken modulo the cache capacity of the data cache MAP to obtain the corresponding modulus value.
Here, the target key of the new cache entry may be obtained by adding one to the key of the key-value pair holding the most recently cached data content in the current data cache MAP.
For example, for a data cache MAP with a capacity of 5 whose most recently cached key-value pair is "1003-content 4", the target key for the newly written cache entry may be 1004. Taking key 1004 modulo the capacity 5 yields the modulus value 1004 % 5 = 4.
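This computation can be written out directly (an illustrative sketch using the key values from the figures, where 1004 is the next auto-incremented key):

```python
CAPACITY = 5
latest_key = 1003              # key of the most recently cached entry
target_key = latest_key + 1    # auto-increment by one: 1004
mod = target_key % CAPACITY    # 1004 % 5 = 4
```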
S103, when the cache capacity is not full, writing the new cache entry into the data cache MAP, and writing the modulus value as key and the target key as value into the key cache MAP.
In a specific implementation, when a new cache entry is written while the cache capacity of the data cache MAP is not full, the new cache entry is written directly into the data cache MAP as a key-value pair. At the same time, a key-value pair is constructed with the new entry's target key as the value and the modulus value (the target key modulo the cache capacity) as the key, and this pair is written into the key cache MAP.
For example, referring to fig. 3, a schematic diagram of data writing into a data cache MAP and a key cache MAP according to an embodiment of the disclosure is shown.
Here, as shown in fig. 3, for a data cache MAP and a key cache MAP of capacity 5, the new cache entry written is the key-value pair "1004-content 5". The target key "1004" of the new entry is first taken modulo the capacity 5 of the data cache MAP, i.e., 1004 % 5 = 4.
Then a key-value pair "4-1004" is constructed with the modulus value 4 as key and the target key "1004" as value, and written into the key cache MAP.
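The fig. 3 write path (cache not full) can be sketched with plain Python dicts; this is an illustrative reconstruction, not code from the disclosure:

```python
CAPACITY = 5
data_map = {1000: "content 1", 1001: "content 2",
            1002: "content 3", 1003: "content 4"}
key_map = {0: 1000, 1: 1001, 2: 1002, 3: 1003}

target_key, new_value = 1004, "content 5"
mod = target_key % CAPACITY        # 1004 % 5 = 4
assert mod not in key_map          # slot free: cache capacity not full
data_map[target_key] = new_value   # write the new entry into the data cache MAP
key_map[mod] = target_key          # write the pair "4-1004" into the key cache MAP
```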
S104, when the cache capacity is full, searching the key cache MAP for the target key-value pair whose key is the modulus value, and determining the target value of that pair; deleting the key-value pair keyed by the target value from the data cache MAP, and writing the new cache entry into the data cache MAP.
In a specific implementation, when a new cache entry is written while the cache capacity of the data cache MAP is full, the modulus value of the new entry's target key with respect to the cache capacity is computed, and that modulus value is used as a key to look up the corresponding target value in the key cache MAP.
Then the key-value pair keyed by that target value is deleted from the data cache MAP, completing one data eviction and freeing cache space, after which the new cache entry is written into the data cache MAP normally.
When writing the new cache entry normally, the writing step of S103 (the not-full case) still applies; however, because the key cache MAP already contains a pair keyed by the modulus value of the new entry's target key, the target value of that pair is simply overwritten with the target key.
For example, referring to fig. 4, another data writing schematic diagram of a data buffer MAP and a key buffer MAP according to an embodiment of the disclosure is shown.
Here, as shown in fig. 4, for a data cache MAP and a key cache MAP of capacity 5, the new entry written is a key value pair "1005-content 6", and the modulo of capacity 5 of the data cache MAP, i.e., 1005% 5=0, is first fetched with the target key "1005" of the new entry.
Then, taking 0 as a key in the key cache MAP, searching a corresponding key value pair to obtain a value of 1000; in the data cache MAP, the value "1000" is used as a key, the corresponding key value pair is found to be "1000-content 1", and the key value pair is deleted.
Further, the key-value pair "1005-content 6" is written into the data cache MAP. In this process a key-value pair with the modulus value 0 as key and "1005" as value should be written into the key cache MAP; since the key "0" already exists in the key cache MAP, its value is overwritten instead, changing "1000" to "1005" and completing the write of the key-value pair "1005-content 6".
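The fig. 4 eviction path (cache full) can likewise be sketched with plain Python dicts; an illustrative reconstruction:

```python
CAPACITY = 5
data_map = {1000: "content 1", 1001: "content 2", 1002: "content 3",
            1003: "content 4", 1004: "content 5"}
key_map = {0: 1000, 1: 1001, 2: 1002, 3: 1003, 4: 1004}

target_key, new_value = 1005, "content 6"
mod = target_key % CAPACITY        # 1005 % 5 = 0
oldest_key = key_map[mod]          # 1000: the first-in key
del data_map[oldest_key]           # evict "1000-content 1"
data_map[target_key] = new_value   # write "1005-content 6"
key_map[mod] = target_key          # overwrite the value: 1000 -> 1005
```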
As a possible implementation, whether the cache capacity of the data cache MAP is full is determined based on the following steps: traversing the key-value pairs in the key cache MAP and determining whether a key equal to the modulus value exists; if so, determining that the cache capacity of the data cache MAP is full; if not, determining that it is not full.
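With a hash-based MAP, the traversal above reduces to a membership test on the modulus value; a sketch (the function name is illustrative):

```python
def capacity_full(key_map, target_key, capacity):
    """True if the key cache MAP already holds a key equal to the modulus value."""
    return (target_key % capacity) in key_map
```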
The embodiments of the present disclosure provide a data caching method comprising the steps of providing a data cache MAP and a key cache MAP; determining a target key for a new cache entry to be stored in the data cache MAP, and taking the target key modulo the cache capacity of the data cache MAP; when the cache capacity is not full, writing the new cache entry into the data cache MAP, and writing the modulus value as key and the target key as value into the key cache MAP; when the cache capacity is full, searching the key cache MAP for the target key-value pair whose key is the modulus value, and determining the target value of that pair; deleting the key-value pair keyed by the target value from the data cache MAP, and writing the new cache entry into the data cache MAP. This guarantees a first-in-first-out policy when the in-memory cache evicts data.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments above, the order in which the steps are written does not imply a strict execution order; the actual execution order should be determined by the function of the steps and their possible internal logic.
Based on the same inventive concept, the embodiments of the present disclosure also provide a data caching apparatus corresponding to the data caching method. Since the principle by which the apparatus solves the problem is similar to that of the data caching method in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 5, fig. 5 is a schematic diagram of a data caching apparatus according to an embodiment of the disclosure. As shown in fig. 5, a data caching apparatus 500 provided in an embodiment of the present disclosure includes:
a cache creation module 510, configured to provide a data cache MAP and a key cache MAP;
a capacity modulo module 520, configured to determine a target key for a new cache entry to be stored in the data cache MAP, and to take the target key modulo the cache capacity of the data cache MAP to obtain a modulus value;
a first writing module 530, configured to write the new cache entry into the data cache MAP when the cache capacity is not full, and to write the modulus value as key and the target key as value into the key cache MAP;
a key-value searching module 540, configured to search the key cache MAP, when the cache capacity is full, for the target key-value pair whose key is the modulus value, and to determine the target value of that pair;
a second writing module 550, configured to delete the key-value pair keyed by the target value from the data cache MAP, and to write the new cache entry into the data cache MAP.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
The embodiments of the present disclosure provide a data caching apparatus that provides a data cache MAP and a key cache MAP; determines a target key for a new cache entry to be stored in the data cache MAP, and takes the target key modulo the cache capacity of the data cache MAP; when the cache capacity is not full, writes the new cache entry into the data cache MAP, and writes the modulus value as key and the target key as value into the key cache MAP; when the cache capacity is full, searches the key cache MAP for the target key-value pair whose key is the modulus value, and determines the target value of that pair; deletes the key-value pair keyed by the target value from the data cache MAP, and writes the new cache entry into the data cache MAP. This guarantees a first-in-first-out policy when the in-memory cache evicts data.
Corresponding to the data caching method in fig. 1, the embodiment of the present disclosure further provides an electronic device 600, as shown in fig. 6, which is a schematic structural diagram of the electronic device 600 provided in the embodiment of the present disclosure, including:
a processor 61, a memory 62, and a bus 63. The memory 62 is used to store execution instructions and includes an internal memory 621 and an external memory 622. The internal memory 621 temporarily stores operation data of the processor 61 and data exchanged with the external memory 622, such as a hard disk; the processor 61 exchanges data with the external memory 622 through the internal memory 621. When the electronic device 600 runs, the processor 61 and the memory 62 communicate over the bus 63, so that the processor 61 performs the steps of the data caching method in fig. 1.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data caching method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product comprising computer instructions which, when executed by a processor, perform the steps of the data caching method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the apparatus described above, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and other divisions are possible in actual implementation. Multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part thereof contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A data caching method, comprising:
providing a data cache MAP and a key cache MAP;
determining a target key corresponding to a new cache item to be stored in the data cache MAP, and computing a modulo value of the target key with respect to the cache capacity of the data cache MAP;
when the cache capacity is not full, writing the new cache item into the data cache MAP, and writing the modulo value as a key and the target key as a value into the key cache MAP;
when the cache capacity is full, searching the key cache MAP for a target key-value pair whose key is the modulo value, and determining a target value corresponding to the target key-value pair;
deleting the key-value pair corresponding to the target value from the data cache MAP, and writing the new cache item into the data cache MAP.
2. The data caching method of claim 1, wherein:
the data cache MAP is configured to store cache data in the form of key-value pairs, wherein the keys in the data cache MAP increase by one in the order in which the data are stored.
3. The data caching method according to claim 2, wherein:
the key cache MAP is configured to take, as a key, the modulo value of each key in the data cache MAP with respect to the capacity of the data cache MAP, and to take the corresponding key in the data cache MAP as a value, so as to form a key queue corresponding to the data cache MAP.
4. The data caching method according to claim 1, wherein, after deleting the key-value pair corresponding to the target value from the data cache MAP and writing the new cache item into the data cache MAP, the method further comprises:
in the key cache MAP, changing the target value to the target key.
5. The data caching method of claim 1, wherein whether the cache capacity of the data cache MAP is full is determined by:
traversing each key-value pair in the key cache MAP, and determining whether a key equal to the modulo value exists;
if so, determining that the cache capacity of the data cache MAP is full;
if not, determining that the cache capacity of the data cache MAP is not full.
6. A data caching apparatus, comprising:
a cache creation module, configured to provide a data cache MAP and a key cache MAP;
a capacity modulo module, configured to determine a target key corresponding to a new cache item to be stored in the data cache MAP, and to compute a modulo value of the target key with respect to the cache capacity of the data cache MAP;
a first writing module, configured to write the new cache item into the data cache MAP when the cache capacity is not full, and to write the modulo value as a key and the target key as a value into the key cache MAP;
a key-value searching module, configured to, when the cache capacity is full, search the key cache MAP for a target key-value pair whose key is the modulo value, and determine a target value corresponding to the target key-value pair;
a second writing module, configured to delete the key-value pair corresponding to the target value from the data cache MAP, and to write the new cache item into the data cache MAP.
7. The apparatus of claim 6, further comprising a key-value update module configured to:
change, in the key cache MAP, the target value to the target key.
8. The apparatus of claim 6, wherein the apparatus is further configured to:
traverse each key-value pair in the key cache MAP, and determine whether a key equal to the modulo value exists;
if so, determine that the cache capacity of the data cache MAP is full;
if not, determine that the cache capacity of the data cache MAP is not full.
9. An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor, wherein the processor and the memory communicate over the bus when the electronic device runs, and the machine-readable instructions, when executed by the processor, perform the steps of the data caching method of any one of claims 1 to 5.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the data caching method according to any one of claims 1 to 5.
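The two-MAP scheme recited in claims 1 to 5 can be sketched as follows. This is an illustrative Python sketch under my own naming, not the patent's implementation: the class and member names (`ModuloFifoCache`, `data_map`, `key_map`, `put`) are assumptions introduced for illustration. Because the data-MAP keys increase by one per write (claim 2) and each key is reduced modulo the capacity, the key cache MAP acts as a ring of slots, and eviction replaces whichever earlier key currently occupies the new key's slot.

```python
class ModuloFifoCache:
    """Illustrative sketch of the claimed two-MAP cache (names are hypothetical).

    data_map : insertion key -> cached value (the "data cache MAP").
    key_map  : (key % capacity) -> insertion key occupying that slot
               (the "key cache MAP", claim 3).
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data_map: dict[int, object] = {}
        self.key_map: dict[int, int] = {}
        self.next_key = 0  # keys increase by one in storage order (claim 2)

    def put(self, value) -> int:
        target_key = self.next_key
        self.next_key += 1
        mod_value = target_key % self.capacity  # modulo of the cache capacity

        # Capacity-full test (claim 5): the cache is full for this write iff
        # key_map already holds an entry whose key equals the modulo value.
        if mod_value in self.key_map:
            # Full: look up the "target value" (the old insertion key in that
            # slot) and delete its entry from the data cache MAP (claim 1).
            old_key = self.key_map[mod_value]
            del self.data_map[old_key]
        # Write the new cache item, then point the slot at the new key
        # (claims 1 and 4).
        self.data_map[target_key] = value
        self.key_map[mod_value] = target_key
        return target_key


# Hypothetical usage: with capacity 3, the fourth write lands in slot
# 3 % 3 == 0 and evicts the entry written under key 0.
cache = ModuloFifoCache(3)
for v in ["a", "b", "c", "d"]:
    cache.put(v)
```

Under this reading, the key cache MAP never grows past the capacity, so the "full" check and victim lookup are both single dictionary probes rather than a scan, which appears to be the point of keeping the second MAP.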
CN202311559705.2A 2023-11-22 2023-11-22 Data caching method and device, electronic equipment and storage medium Active CN117271395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311559705.2A CN117271395B (en) 2023-11-22 2023-11-22 Data caching method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN117271395A 2023-12-22
CN117271395B CN117271395B (en) 2024-02-06

Family

ID=89203015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311559705.2A Active CN117271395B (en) 2023-11-22 2023-11-22 Data caching method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117271395B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110276744A1 (en) * 2010-05-05 2011-11-10 Microsoft Corporation Flash memory cache including for use with persistent key-value store
US10204046B1 (en) * 2015-11-19 2019-02-12 Netronome Systems, Inc. High-speed and memory-efficient flow cache for network flow processors
CN111078147A (en) * 2019-12-16 2020-04-28 南京领行科技股份有限公司 Processing method, device and equipment for cache data and storage medium
CN111177143A (en) * 2019-06-12 2020-05-19 腾讯科技(深圳)有限公司 Key value data storage method and device, storage medium and electronic equipment
CN111400308A (en) * 2020-02-21 2020-07-10 中国平安财产保险股份有限公司 Processing method of cache data, electronic device and readable storage medium
CN112948287A (en) * 2021-03-29 2021-06-11 成都新易盛通信技术股份有限公司 SD card read-write method and system based on Hashmap caching mechanism
CN113177069A (en) * 2021-05-08 2021-07-27 中国科学院声学研究所 Cache and query system and query method
CN114816219A (en) * 2021-01-21 2022-07-29 北京金山云网络技术有限公司 Data writing and reading method and device and data reading and writing system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Yongliang: "Research on Cache Eviction Algorithms", Electronic Technology & Software Engineering, no. 23, pages 149-150 *

Also Published As

Publication number Publication date
CN117271395B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN102682037B Data capture method, system and device
CN110191428B (en) Data distribution method based on intelligent cloud platform
CN108228649B (en) Method and apparatus for data access
CN107491523B (en) Method and device for storing data object
US7788240B2 (en) Hash mapping with secondary table having linear probing
CN110399235B (en) Multithreading data transmission method and device in TEE system
US20020138648A1 (en) Hash compensation architecture and method for network address lookup
CN108008918A (en) Data processing method, memory node and distributed memory system
CN111506604B (en) Method, apparatus and computer program product for accessing data
CN106708825A (en) Data file processing method and system
CN113934655B (en) Method and apparatus for solving ambiguity problem of cache memory address
CN109376125A Metadata storage method, device, equipment and computer-readable storage medium
CN114064668A (en) Method, electronic device and computer program product for storage management
CN113392042A (en) Method, electronic device and computer program product for managing a cache
CN117271395B (en) Data caching method and device, electronic equipment and storage medium
CN105630612B (en) Process updating method and device
CN106599247A (en) Method and device for merging data file in LSM-tree structure
CN104166649A (en) Caching method and device for search engine
EP3343395A1 (en) Data storage method and apparatus for mobile terminal
CN110658999B (en) Information updating method, device, equipment and computer readable storage medium
US10552343B2 (en) Zero thrash cache queue manager
KR20010065040A (en) Method for effectively using memory in mobile station
CN114579812B (en) Management method and device of linked list queue, task management method and storage medium
CN111723266A (en) Mass data processing method and device
CN116483741B (en) Order preserving method, system and related equipment for multiple groups of access queues of processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant