CN115981895A - Cache management method and system during data reading - Google Patents

Cache management method and system during data reading

Info

Publication number
CN115981895A
Authority
CN
China
Prior art keywords
data
read
determining
calling
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310113392.1A
Other languages
Chinese (zh)
Inventor
许志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hengguo Jinan Technology Co ltd
Original Assignee
Hengguo Jinan Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2023-02-15
Filing date: 2023-02-15
Publication date: 2023-04-18
Application filed by Hengguo Jinan Technology Co ltd
Priority to CN202310113392.1A
Publication of CN115981895A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention belongs to the technical field of data caching and provides a cache management method and a cache management system during data reading. The cache management method comprises the following steps: determining the calling base number of each piece of data in the memory according to the user historical data, the current time and the running software information; copying partial data in the memory into a first cache region and a second cache region according to the calling base number; receiving a data reading instruction, wherein the data reading instruction comprises data to be read, searching for the data to be read from the first cache region, the second cache region and the memory in sequence, and processing the data to be read by the CPU when the data to be read is found; and determining accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region. The calling base number reflects how strongly the user needs each piece of data in the memory at the current time, which ensures that the data the user most needs to read is placed in the cache regions.

Description

Cache management method and system during data reading
Technical Field
The invention relates to the technical field of data caching, in particular to a cache management method and system during data reading.
Background
A cache is a memory that can exchange data with the CPU faster than main memory can. The CPU processes data very quickly: a CPU with a clock frequency of 1 GHz can read and process on the order of a billion instructions per second, while main memory is much slower, so there is a large gap between the CPU speed and the memory read speed. The cache exists to bridge this gap: the data in memory that the CPU accesses most frequently is copied into the cache, so the CPU does not need to access that part of the data in memory and only needs to access the cache. However, the data a user needs differs from moment to moment and is often not the most frequently accessed data, and existing approaches cannot determine well, according to the user's actual needs, which data should be copied into the cache region. Therefore, it is desirable to provide a cache management method and system during data reading that aim to solve the above problems.
Disclosure of Invention
In view of the defects in the prior art, an object of the present invention is to provide a method and a system for cache management during data reading, so as to solve the problems in the background art.
The invention is realized in this way: a cache management method during data reading, the method comprising the following steps:
determining the calling base number of each data in the memory according to the historical data of the user, the current time and the running software information;
copying partial data in the memory into a first cache region and a second cache region according to the calling base number;
receiving a data reading instruction, wherein the data reading instruction comprises data to be read, searching the data to be read from a first cache region, a second cache region and a memory in sequence, and processing the data to be read by a CPU when the data to be read is searched;
and determining the accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region.
As a further scheme of the invention: the step of determining the calling base number of each data in the memory according to the user historical data, the current time and the operating software information specifically comprises the following steps:
determining data matched with the running software in the memory according to the running software information, and setting a calling basic value for each matched data;
determining historical reading time information of each matched data according to the historical data of the user, and determining a calling coefficient according to the historical reading time information and the current time;
and determining the calling base number of each data according to the calling base value and the calling coefficient.
As a further scheme of the invention: the step of determining the calling coefficient according to the historical reading time information and the current time specifically comprises the following steps:
determining a difference value between each historical reading time in the historical reading time information and the current time;
and determining a calling coefficient for the data according to the difference range and the number of times of historical reading time conforming to the difference range.
As a further scheme of the invention: the step of determining the accompanying data according to the data to be read specifically includes:
determining data which is read simultaneously with the data to be read in a set time difference according to the data to be read and user historical data;
and counting each piece of data read simultaneously, and determining the corresponding data as the accompanying data when the counted number is greater than a set number value.
As a further scheme of the invention: the method further comprises the following steps:
receiving shortcut instruction information input by a user, wherein the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
and receiving a shortcut instruction input by a user, and copying corresponding read data into the first cache region.
Another object of the present invention is to provide a cache management system for data reading, the system comprising:
the calling base number determining module is used for determining the calling base number of each data in the memory according to the historical data of the user, the current time and the operating software information;
the data copying cache module is used for copying partial data in the memory into the first cache region and the second cache region according to the calling base number;
the data reading processing module is used for receiving a data reading instruction, searching the data to be read from the first cache region, the second cache region and the memory in sequence, and processing the data to be read by the CPU when the data to be read is searched;
and the accompanying data cache module is used for determining accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region.
As a further scheme of the invention: the call cardinality determination module includes:
the calling basic value determining unit is used for determining data matched with the running software in the memory according to the running software information and setting calling basic values for each matched data;
the calling coefficient determining unit is used for determining historical reading time information of each matched data according to the historical data of the user and determining a calling coefficient according to the historical reading time information and the current time;
and the calling base number determining unit is used for determining the calling base number of each piece of data according to the calling basic value and the calling coefficient.
As a further scheme of the invention: the calling coefficient determination unit includes:
a time difference determining subunit, configured to determine a difference between each of the historical reading times in the historical reading time information and the current time;
and the calling coefficient determining subunit is used for determining a calling coefficient for the data according to the difference range and the times of the historical reading time conforming to the difference range.
As a further scheme of the invention: the accompanying data caching module comprises:
the data reading unit is used for determining, according to the data to be read and the user historical data, data that is read simultaneously with the data to be read within a set time difference;
and an accompanying data determination unit for counting each of the simultaneously read data, and determining the corresponding data as accompanying data when the counted number is greater than a set number value.
As a further scheme of the invention: the system also comprises a shortcut instruction cache module, wherein the shortcut instruction cache module specifically comprises:
the shortcut instruction information unit is used for receiving shortcut instruction information input by a user, the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
and the data shortcut cache unit is used for receiving a shortcut instruction input by a user and copying corresponding read data into the first cache region.
Compared with the prior art, the invention has the beneficial effects that:
the method can determine the calling base number of each data in the memory according to the historical data of the user, the current time and the running software information; copying partial data in the memory into a first cache region and a second cache region according to the calling base number; and determining the accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region. The calling base number can reflect the degree of the requirement of the user on each piece of data in the memory at the current time, and further ensures that the data which needs to be read by the user most is placed in the cache region.
Drawings
Fig. 1 is a flowchart of a cache management method during data reading.
Fig. 2 is a flowchart of determining a call base number of each piece of data in a memory according to user history data, current time, and operating software information in a cache management method during data reading.
Fig. 3 is a flowchart of determining a calling coefficient according to history reading time information and current time in a cache management method during data reading.
Fig. 4 is a flowchart of determining the accompanying data according to the data to be read in the cache management method during data reading.
Fig. 5 is a flowchart of receiving shortcut instruction information input by a user in a cache management method during data reading.
Fig. 6 is a schematic structural diagram of a cache management system during data reading.
Fig. 7 is a schematic structural diagram of a calling base number determining module in a cache management system during data reading.
Fig. 8 is a schematic structural diagram of a call coefficient determining unit in a cache management system during data reading.
Fig. 9 is a schematic structural diagram of an accompanying data caching module in a cache management system during data reading.
Fig. 10 is a schematic structural diagram of a shortcut instruction cache module in a cache management system during data reading.
Detailed description of the preferred embodiments
In order to make the objects, technical solutions and advantages of the present invention more clear, the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
As shown in fig. 1, an embodiment of the present invention provides a cache management method in data reading, where the method includes the following steps:
s100, determining the calling base number of each data in the memory according to the historical data of the user, the current time and the running software information;
s200, copying partial data in the memory into a first cache region and a second cache region according to the calling base number;
s300, receiving a data reading instruction, wherein the data reading instruction comprises data to be read, searching the data to be read from a first cache region, a second cache region and a memory in sequence, and processing the data to be read by a CPU when the data to be read is searched;
s400, determining the accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region.
It should be noted that the cache exists to bridge the speed gap between the CPU and the memory: the data in the memory that the CPU accesses most frequently is copied into the cache, so that the CPU does not need to access that part of the data in the memory and only needs to access the cache.
In the embodiment of the invention, the calling base number of each piece of data in the memory is determined according to the user historical data, the current time and the running software information, so that the calling base number reflects how strongly the user needs each piece of data in the memory at the current time. Partial data in the memory is then copied into the first cache region and the second cache region according to the calling base number, which ensures that the data the user most needs to read is placed in the cache regions; the reading speed of the first cache region is higher than that of the second cache region, and the second cache region is occupied only after the first cache region is filled. When a data reading instruction input by the user is received, the data to be read is searched for in the first cache region, the second cache region and the memory in sequence, and when it is found, the CPU processes it; it is easy to understand that the probability of finding the data to be read in the first cache region is the largest, the probability of finding it in the second cache region is smaller, and the probability of finding it only in the memory is the smallest. Furthermore, while the CPU processes the data to be read, the embodiment of the present invention also determines accompanying data according to the data to be read, where the accompanying data is data that the user frequently accesses in the same time period as the data to be read; part of the data in the second cache region is then cleared, where the occupied amount of the cleared data is greater than or equal to the occupied amount of the accompanying data, part of the data in the first cache region is released into the second cache region, and the accompanying data is copied into the first cache region, thereby further meeting the user's requirements.
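The two-tier lookup and the accompanying-data promotion described above can be summarized in a short Python sketch. This is a minimal illustration only: the class name TwoTierCache, the region sizes, and the entry-count eviction order are assumptions made for readability, not details taken from the patent.

```python
from collections import OrderedDict

class TwoTierCache:
    """Illustrative two-tier cache: the first region is faster and is filled
    first; the second region receives data released from the first."""

    def __init__(self, size1=4, size2=8):
        self.tier1 = OrderedDict()   # first cache region (fastest)
        self.tier2 = OrderedDict()   # second cache region
        self.size1, self.size2 = size1, size2

    def fill_by_base_number(self, memory, base_number):
        # S200: copy the data with the highest calling base number; the first
        # region is filled first, then the second region.
        ranked = sorted(memory, key=lambda k: base_number.get(k, 0), reverse=True)
        for key in ranked[:self.size1]:
            self.tier1[key] = memory[key]
        for key in ranked[self.size1:self.size1 + self.size2]:
            self.tier2[key] = memory[key]

    def read(self, key, memory):
        # S300: search the first region, then the second region, then memory.
        for store in (self.tier1, self.tier2, memory):
            if key in store:
                return store[key]    # found: handed to the CPU for processing
        raise KeyError(key)

    def promote(self, accompanying, memory):
        # S400: clear room in the second region, release part of the first
        # region into the second region, then copy the accompanying data into
        # the first region.
        for _ in range(min(len(accompanying), len(self.tier2))):
            self.tier2.popitem(last=True)
        for _ in range(min(len(accompanying), len(self.tier1))):
            key, value = self.tier1.popitem(last=True)
            self.tier2[key] = value
        for key in accompanying:
            self.tier1[key] = memory[key]
```

A real implementation would track occupied bytes rather than entry counts; the sketch uses entry counts only to keep the promotion order of the two regions visible.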
As shown in fig. 2, as a preferred embodiment of the present invention, the step of determining the calling base number of each piece of data in the memory according to the user historical data, the current time, and the running software information specifically includes:
s101, determining data matched with running software in a memory according to the running software information, and setting a calling basic value for each matched data;
s102, determining historical reading time information of each matched data according to user historical data, and determining a calling coefficient according to the historical reading time information and the current time;
and S103, determining the calling base number of each piece of data according to the calling base value and the calling coefficient.
In the embodiment of the invention, in order to determine the calling base number, the data in the memory that matches the running software is first determined according to the running software information, and a calling base value is set for each piece of matched data; it is easy to understand that the matched data is data that can be opened by the running software, and the calling base value is a fixed value set in advance. A calling coefficient is then determined from the historical reading time information and the current time. Finally, the calling base number of each piece of data is determined according to the calling base value and the calling coefficient, where the calling base number is equal to the calling base value multiplied by the calling coefficient; in addition, data in the memory that does not match the running software has no calling base number, or its calling base number is zero.
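As a sketch of the computation just described, the calling base number of matched data is simply the preset base value multiplied by the calling coefficient, and unmatched data gets zero; the base value of 10 is an assumed placeholder, not a value given in the patent.

```python
CALL_BASE_VALUE = 10  # preset fixed calling base value; the magnitude is an assumption

def call_base_numbers(memory_keys, matched_keys, coefficients):
    """Calling base number = calling base value x calling coefficient for data
    matched to the running software; unmatched data gets a base number of zero."""
    return {key: (CALL_BASE_VALUE * coefficients.get(key, 0.0)
                  if key in matched_keys else 0.0)
            for key in memory_keys}
```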
As shown in fig. 3, as a preferred embodiment of the present invention, the step of determining the calling coefficient according to the historical reading time information and the current time specifically includes:
s1021, determining the difference value between each historical reading time in the historical reading time information and the current time;
s1022, determining a calling coefficient for the data according to the difference range and the number of times of historical reading time conforming to the difference range.
In the embodiment of the present invention, in order to determine the calling coefficient, the difference between each historical reading time in the historical reading time information and the current time is determined first. It should be noted that the date is ignored when calculating the difference: every historical reading time is treated as a time of the current day. For example, if the historical reading times of a certain piece of data are 9:02, 9:06, 8:50, 9:26, 9:15 and 15:26 and the current time is 9:20, the obtained differences are 18 minutes, 14 minutes, 30 minutes, 6 minutes, 5 minutes, and 6 hours and 6 minutes respectively. A calling coefficient is then determined for the data according to a difference range and the number of historical reading times that fall within the difference range, where the difference range is a fixed value set in advance; for example, with a difference range of 20 minutes, the number of qualifying reading times in the example is 4. The calling coefficient is equal to the qualifying count divided by a basic count value, which is also a fixed value set in advance.
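A sketch of the coefficient computation using the worked example above; base_count stands in for the preset basic count value, whose actual magnitude the description does not give, so the value used here is an assumption.

```python
from datetime import datetime

def call_coefficient(history_times, now, diff_range_minutes=20, base_count=5):
    """Count the historical reading times whose clock-time difference from the
    current time falls within the difference range (the date is ignored), then
    divide by the preset basic count value."""
    now_minutes = now.hour * 60 + now.minute
    within_range = sum(
        1 for t in history_times
        if abs(now_minutes - (t.hour * 60 + t.minute)) <= diff_range_minutes
    )
    return within_range / base_count

# The example from the description: readings at 9:02, 9:06, 8:50, 9:26, 9:15
# and 15:26 against a current time of 9:20 give four readings within 20 minutes.
readings = [datetime(2023, 2, 15, h, m)
            for h, m in [(9, 2), (9, 6), (8, 50), (9, 26), (9, 15), (15, 26)]]
coefficient = call_coefficient(readings, datetime(2023, 2, 15, 9, 20))  # = 4 / base_count
```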
As shown in fig. 4, as a preferred embodiment of the present invention, the step of determining the accompanying data according to the data to be read specifically includes:
s401, determining data to be read simultaneously in a set time difference with the data to be read according to the data to be read and user historical data;
s402, counting each piece of data which is read simultaneously, and determining the corresponding data as the accompanying data when the counted number is larger than a set number value.
In the embodiment of the present invention, in order to determine the accompanying data, the data that is read simultaneously with the data to be read within a set time difference is determined first according to the data to be read and the user history data, where the user history data includes all the historically read data and the corresponding historical reading times. Unlike the calling coefficient, the set time difference takes the date into account: for example, if the set time difference is 10 minutes and the data to be read was historically read at 9:00 on a given day, then data read between 8:50 and 9:10 of that same day is data read simultaneously with it. Each piece of simultaneously read data is then counted, and when its count is greater than a set number value, the corresponding data is determined to be accompanying data.
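A sketch of the accompanying-data selection under the assumption that the user history is a list of (data identifier, read timestamp) records; the 10-minute window mirrors the example above, while the count threshold is an arbitrary placeholder for the set number value.

```python
from collections import Counter
from datetime import timedelta

def find_accompanying(target, history, window=timedelta(minutes=10), min_count=3):
    """Return the data read within `window` of any read of `target` whose
    count across the user history exceeds `min_count` (the set number value)."""
    target_reads = [t for data_id, t in history if data_id == target]
    counts = Counter()
    for data_id, t in history:
        if data_id == target:
            continue
        # Unlike the calling coefficient, dates matter here: real timestamps
        # are compared, so only reads within the same window on the same day qualify.
        if any(abs((t - tr).total_seconds()) <= window.total_seconds()
               for tr in target_reads):
            counts[data_id] += 1
    return [data_id for data_id, n in counts.items() if n > min_count]
```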
As shown in fig. 5, as a preferred embodiment of the present invention, the method further includes:
s501, receiving shortcut instruction information input by a user, wherein the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
s502, receiving a shortcut instruction input by a user, and copying corresponding read data into a first cache region.
In the embodiment of the invention, a user can also autonomously copy data into the first cache region, specifically, the user needs to input shortcut instruction information in advance, the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data; then, when the user inputs one of the shortcut commands, the embodiment of the present invention copies the corresponding read data into the first cache region.
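The shortcut mechanism reduces to a small lookup table. The sketch below reuses the TwoTierCache object from the earlier sketch; the shortcut names and data keys are invented for illustration.

```python
# Hypothetical shortcut table registered in advance by the user (S501).
shortcut_table = {
    "ctrl+1": ["report.xlsx"],
    "ctrl+2": ["config.json", "access.log"],
}

def on_shortcut(shortcut, cache, memory):
    """S502: when a registered shortcut arrives, copy its read data straight
    into the first cache region."""
    for key in shortcut_table.get(shortcut, []):
        cache.tier1[key] = memory[key]
```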
As shown in fig. 6, an embodiment of the present invention further provides a cache management system during data reading, where the cache management system includes:
a calling base number determining module 100, configured to determine a calling base number of each piece of data in the memory according to the user history data, the current time, and the operating software information;
the data copying cache module 200 is configured to copy part of data in the memory into the first cache region and the second cache region according to the call base number;
the data reading processing module 300 is configured to receive a data reading instruction, where the data reading instruction includes data to be read, sequentially search for the data to be read from the first cache region, the second cache region, and the memory, and when the data to be read is found, the CPU processes the data to be read;
the accompanying data caching module 400 is configured to determine accompanying data according to data to be read, clear a part of data in the second cache region, release a part of data in the first cache region to the second cache region, and copy the accompanying data into the first cache region.
In the embodiment of the invention, the calling base number of each piece of data in the memory is determined according to the user historical data, the current time and the running software information, so that the calling base number reflects how strongly the user needs each piece of data in the memory at the current time. Partial data in the memory is then copied into the first cache region and the second cache region according to the calling base number, which ensures that the data the user most needs to read is placed in the cache regions; the reading speed of the first cache region is higher than that of the second cache region, and the second cache region is occupied only after the first cache region is filled. When a data reading instruction input by the user is received, the data to be read is searched for in the first cache region, the second cache region and the memory in sequence, and when it is found, the CPU processes it; it is easy to understand that the probability of finding the data to be read in the first cache region is the largest, the probability of finding it in the second cache region is smaller, and the probability of finding it only in the memory is the smallest. Furthermore, while the CPU processes the data to be read, the embodiment of the present invention also determines accompanying data according to the data to be read, where the accompanying data is data that the user frequently accesses in the same time period as the data to be read; part of the data in the second cache region is then cleared, where the occupied amount of the cleared data is greater than or equal to the occupied amount of the accompanying data, part of the data in the first cache region is released into the second cache region, and the accompanying data is copied into the first cache region, thereby further meeting the user's requirements.
As shown in fig. 7, as a preferred embodiment of the present invention, the calling base number determining module 100 includes:
a calling basic value determining unit 101, configured to determine, according to the operating software information, data in the memory that matches the operating software, and set a calling basic value for each piece of matched data;
a calling coefficient determining unit 102, configured to determine historical reading time information of each piece of matching data according to user historical data, and determine a calling coefficient according to the historical reading time information and current time;
and a calling base number determining unit 103, configured to determine a calling base number of each piece of data according to the calling base value and the calling coefficient.
As shown in fig. 8, as a preferred embodiment of the present invention, the calling coefficient determining unit 102 includes:
a time difference value determining subunit 1021 for determining a difference value between each of the history read times in the history read time information and the current time;
a calling coefficient determining subunit 1022, configured to determine a calling coefficient for the data according to the difference range and the number of times of the historical reading time that meets the difference range.
As shown in fig. 9, as a preferred embodiment of the present invention, the accompanying data caching module 400 includes:
a data reading unit 401, configured to determine, according to the data to be read and the user history data, data that is read simultaneously with the data to be read within a set time difference;
an accompanying data determination unit 402 for counting each of the simultaneously read data, and when the counted number is larger than a set number value, determining the corresponding data as accompanying data.
As shown in fig. 10, as a preferred embodiment of the present invention, the system further includes a shortcut instruction caching module 500, where the shortcut instruction caching module 500 specifically includes:
a shortcut instruction information unit 501, configured to receive shortcut instruction information input by a user, where the shortcut instruction information includes a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
the data shortcut cache unit 502 is configured to receive a shortcut instruction input by a user, and copy corresponding read data into the first cache region.
The present invention has been described in detail with reference to the preferred embodiments thereof, and it should be understood that the invention is not limited thereto, but is intended to cover modifications, equivalents, and improvements within the spirit and scope of the present invention.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in sequence as indicated by the arrows, the steps are not necessarily performed in that sequence. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least a portion of the steps in various embodiments may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least a portion of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. A cache management method during data reading is characterized by comprising the following steps:
determining the calling base number of each data in the memory according to the historical data of the user, the current time and the running software information;
copying partial data in the memory into a first cache region and a second cache region according to the calling base number;
receiving a data reading instruction, wherein the data reading instruction comprises data to be read, searching the data to be read from a first cache region, a second cache region and a memory in sequence, and processing the data to be read by a CPU when the data to be read is searched;
determining the accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region.
2. The cache management method during data reading according to claim 1, wherein the step of determining the calling base number of each piece of data in the memory according to the user historical data, the current time, and the running software information specifically comprises:
determining data matched with the running software in the memory according to the running software information, and setting a calling basic value for each matched data;
determining historical reading time information of each matched data according to the historical data of the user, and determining a calling coefficient according to the historical reading time information and the current time;
and determining the calling base number of each data according to the calling base value and the calling coefficient.
3. The cache management method during data reading according to claim 2, wherein the step of determining the calling coefficient according to the historical reading time information and the current time specifically comprises:
determining a difference value between each historical reading time in the historical reading time information and the current time;
and determining a calling coefficient for the data according to the difference range and the number of times of historical reading time conforming to the difference range.
4. The method for cache management during data reading according to claim 1, wherein the step of determining the accompanying data according to the data to be read specifically comprises:
determining data which is read simultaneously with the data to be read in a set time difference according to the data to be read and user historical data;
and counting each piece of data read simultaneously, and determining the corresponding data as the accompanying data when the counted number is greater than a set number value.
5. The method for managing the cache during data reading according to claim 1, further comprising:
receiving shortcut instruction information input by a user, wherein the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
and receiving a shortcut instruction input by a user, and copying corresponding read data into the first cache region.
6. A cache management system for data reading, the system comprising:
the calling base number determining module is used for determining the calling base number of each data in the memory according to the historical data of the user, the current time and the operating software information;
the data copying cache module is used for copying partial data in the memory into the first cache region and the second cache region according to the calling base number;
the data reading processing module is used for receiving a data reading instruction, searching the data to be read from the first cache region, the second cache region and the memory in sequence, and processing the data to be read by the CPU when the data to be read is searched;
and the accompanying data cache module is used for determining accompanying data according to the data to be read, clearing partial data in the second cache region, releasing partial data in the first cache region into the second cache region, and copying the accompanying data into the first cache region.
7. The cache management system for data reading according to claim 6, wherein the calling base number determining module comprises:
the calling basic value determining unit is used for determining data matched with the running software in the memory according to the running software information and setting calling basic values for each matched data;
the calling coefficient determining unit is used for determining historical reading time information of each matched data according to the historical data of the user and determining a calling coefficient according to the historical reading time information and the current time;
and the calling base number determining unit is used for determining the calling base number of each piece of data according to the calling basic value and the calling coefficient.
8. The cache management system for data reading according to claim 7, wherein the call coefficient determining unit includes:
a time difference determining subunit, configured to determine a difference between each of the historical reading times in the historical reading time information and the current time;
and the calling coefficient determining subunit is used for determining a calling coefficient for the data according to the difference range and the times of the historical reading time conforming to the difference range.
9. The system according to claim 6, wherein the accompanying data caching module comprises:
the data reading unit is used for determining, according to the data to be read and the user historical data, data that is read simultaneously with the data to be read within a set time difference;
and an accompanying data determination unit for counting each of the simultaneously read data, and determining the corresponding data as accompanying data when the counted number is greater than a set number value.
10. The cache management system during data reading according to claim 6, wherein the system further comprises a shortcut instruction cache module, and the shortcut instruction cache module specifically comprises:
the shortcut instruction information unit is used for receiving shortcut instruction information input by a user, the shortcut instruction information comprises a plurality of shortcut instructions, and each shortcut instruction corresponds to read data;
and the data shortcut cache unit is used for receiving a shortcut instruction input by a user and copying corresponding read data into the first cache region.
CN202310113392.1A 2023-02-15 2023-02-15 Cache management method and system during data reading Pending CN115981895A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310113392.1A CN115981895A (en) 2023-02-15 2023-02-15 Cache management method and system during data reading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310113392.1A CN115981895A (en) 2023-02-15 2023-02-15 Cache management method and system during data reading

Publications (1)

Publication Number Publication Date
CN115981895A true CN115981895A (en) 2023-04-18

Family

ID=85968149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310113392.1A Pending CN115981895A (en) 2023-02-15 2023-02-15 Cache management method and system during data reading

Country Status (1)

Country Link
CN (1) CN115981895A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117991996A (en) * 2024-04-03 2024-05-07 深圳市铨兴科技有限公司 Data dynamic storage method, system and storage medium
CN117991996B (en) * 2024-04-03 2024-05-31 深圳市铨兴科技有限公司 Data dynamic storage method, system and storage medium

Similar Documents

Publication Publication Date Title
US8239343B2 (en) Database reorganization technique
CN115981895A (en) Cache management method and system during data reading
US20200349113A1 (en) File storage method, deletion method, server and storage medium
CN108573019B (en) Data migration method and device, electronic equipment and readable storage medium
CN108280226B (en) Data processing method and related equipment
CN113392089B (en) Database index optimization method and readable storage medium
EP3267329A1 (en) Data processing method having structure of cache index specified to transaction in mobile environment dbms
CN113535563A (en) Test case duplication removing method and device, computer equipment and storage medium
CN115934354A (en) Online storage method and device
CN117666926A (en) Data storage method and device and electronic equipment
CN114741033A (en) Data classification storage method and system
CN117991996B (en) Data dynamic storage method, system and storage medium
CN114546995B (en) Dynamic data migration method and system based on graph database
CN111143288A (en) Data storage method, system and related device
CN117215503B (en) Method for reading flash memory data
CN115470598B (en) Multithreading-based three-dimensional rolled piece model block data rapid inheritance method and system
CN116055464B (en) Download preservation path selection method, device and medium
US20220342564A1 (en) Method, electronic device, and computer program product for storage management
CN111104344B (en) D-S evidence theory-based distributed file system data reading method
CN111290972B (en) Method and device for improving data carrying efficiency and computer equipment
CN114489492B (en) Data storage method, safety device and data storage system
CN114706671B (en) Multiprocessor scheduling optimization method and system
CN115309343B (en) Data storage method and system for multi-stage detection
CN117891981A (en) Request response method and system based on cache layer
CN118132598A (en) Database data processing method and device based on multi-level cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination