CN111522506A - Data reading method and device - Google Patents

Data reading method and device

Info

Publication number
CN111522506A
Authority
CN
China
Prior art keywords
data
address
memory
instruction
cache address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010261076.5A
Other languages
Chinese (zh)
Other versions
CN111522506B (en)
Inventor
吴刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou DPtech Information Technology Co Ltd
Original Assignee
Hangzhou DPtech Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou DPtech Information Technology Co Ltd filed Critical Hangzhou DPtech Information Technology Co Ltd
Priority to CN202010261076.5A priority Critical patent/CN111522506B/en
Publication of CN111522506A publication Critical patent/CN111522506A/en
Application granted granted Critical
Publication of CN111522506B publication Critical patent/CN111522506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06F ELECTRIC DIGITAL DATA PROCESSING → G06F3/00 Input/output arrangements for transferring data between processing unit and peripherals → G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID → G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F3/0656 Data buffering arrangements
    • G06F3/0673 Single storage device

Abstract

The present specification discloses a data reading method and apparatus. A processor obtains a data reading instruction and queries the cache address corresponding to the memory address carried by the instruction as a specified cache address. Taking the data stored at the specified cache address as first data, the processor determines whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data. If they are consistent, the processor reads the first data and changes its state to a reserved state. If they are not consistent, the processor reads the data stored in the memory as second data according to the memory address carried by the instruction and determines whether the state of the first data is the reserved state; if so, it adjusts the state of the first data according to a preset adjustment rule, and if not, it changes the first data into the second data and stores the second data at the specified cache address. This addresses the prior-art problem that a processor must wait a long time to read data from the same memory address again.

Description

Data reading method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for reading data.
Background
When a processor is running, it often needs to access the memory; in particular, when performing certain services, it needs to access the same memory address frequently.
Currently, the processor may include a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a Central Processing Unit (CPU), a Network Processor (NPU), and the like. After the processor sends out a data reading instruction, the processor reads the data stored in the memory according to the memory address carried by the instruction.
Because returning data from the memory to the processor involves a long delay, a processor that has already read data from a memory address must still incur the same delay when it reads that address again. As a result, the processor waits a long time every time it re-reads data stored at the same memory address.
Disclosure of Invention
The embodiments of the present disclosure provide a method and an apparatus for reading data, so as to partially solve the above problems in the prior art.
The embodiment of the specification adopts the following technical scheme:
the present specification provides a method for data reading, comprising:
the processor acquires a data reading instruction;
according to the corresponding relation between the cache address in the processor and the memory address in the memory, inquiring the cache address corresponding to the memory address carried by the instruction as a specified cache address;
taking the data stored in the specified cache address as first data, and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
if they are consistent, reading the first data, and changing the state of the first data into a reserved state;
if they are not consistent, reading the data stored in the memory as second data according to the memory address carried by the instruction, and judging whether the state of the first data is the reserved state; if so, adjusting the state of the first data according to a preset adjustment rule, and if not, changing the first data into the second data and storing the second data in the specified cache address.
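The sequence of steps above can be sketched as a small, self-contained Python model. Everything here is illustrative: the `CacheEntry` layout, the `read` helper, the dictionary-backed cache and memory, and the 10-bit index are assumptions for demonstration, not part of the claims.

```python
from dataclasses import dataclass

INDEX_BITS = 10                      # assumed: a 1024-entry cache

@dataclass
class CacheEntry:
    mem_addr: int   # memory address the cached data came from
    data: bytes
    hits: int       # > 0 means reserved state, 0 means discard state

cache: dict[int, CacheEntry] = {}    # specified cache address -> entry
memory = {0x12345: b"hello", 0x99345: b"world"}   # stand-in for main memory

def read(mem_addr: int) -> bytes:
    idx = mem_addr & ((1 << INDEX_BITS) - 1)      # specified cache address
    entry = cache.get(idx)
    # Consistent addresses: read the first data from the cache (S106).
    if entry is not None and entry.mem_addr == mem_addr:
        entry.hits += 1                           # change state to reserved
        return entry.data
    # Inconsistent: read the second data from memory (S108).
    data = memory[mem_addr]
    if entry is not None and entry.hits > 0:
        entry.hits -= 1                           # reserved: adjustment rule (S112)
    else:
        cache[idx] = CacheEntry(mem_addr, data, hits=1)   # replace (S114)
    return data
```

Note how a conflicting address at the same index first decrements the hit count rather than evicting, so an entry in the reserved state survives an occasional conflicting read.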
Optionally, querying, according to the correspondence between a cache address in the processor and a memory address in the memory, the cache address corresponding to the memory address carried by the instruction as a specified cache address specifically includes:
using the specified bits of the memory address carried by the instruction as the specified cache address, and querying the cache accordingly.
Optionally, determining whether a memory address carried by the instruction is consistent with a memory address corresponding to the first data, specifically including:
judging whether the data stored in the specified cache address exists or not;
if the judgment result is that the data exists, querying the memory address stored in the specified cache address as the memory address corresponding to the data stored in the specified cache address, and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
and if the judgment result is that the data does not exist, judging that the memory address carried by the instruction is inconsistent with the memory address corresponding to the first data.
Optionally, the state of the first data is characterized by a number of hits stored in the specified cache address;
judging whether the state of the first data is the reserved state, specifically including:
judging whether the number of hits is greater than zero;
if so, judging that the state of the first data is the reserved state;
otherwise, judging that the state of the first data is not the reserved state.
Optionally, changing the state of the first data to a reserved state specifically includes:
increasing the number of hits;
adjusting the state of the first data according to a preset adjustment rule, specifically comprising:
reducing the number of hits.
Optionally, changing the first data into the second data and storing the second data in the specified cache address specifically includes:
deleting the first data from the specified cache address;
storing the second data in the specified cache address;
initializing the state of the second data to the reserved state, and changing the memory address stored in the specified cache address to the memory address carried by the instruction.
Optionally, the method further comprises:
for each cache address, acquiring the storage duration of the data stored at the cache address;
judging whether the storage time of the data stored in the cache address reaches a preset storage time threshold value or not;
if so, storing the data stored in the cache address into the memory according to the memory address stored in the cache address, and clearing the data stored in the cache address;
otherwise, the data stored in the cache address is not processed.
Optionally, the processor comprises a field programmable gate array FPGA.
The present specification provides an apparatus for data reading, the apparatus comprising:
the instruction acquisition module is used for acquiring an instruction for data reading by a processor where the device is located;
the address query module is used for querying, according to the correspondence between cache addresses in the processor and memory addresses in the memory, the cache address corresponding to the memory address carried by the instruction as a specified cache address;
the judging module is used for taking the data stored at the specified cache address as first data and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
the first reading module is used for reading the first data and changing its state to a reserved state when the judgment result of the judging module is consistent;
and the second reading module is used for, when the judgment result of the judging module is inconsistent, reading the data stored in the memory as second data according to the memory address carried by the instruction and judging whether the state of the first data is the reserved state; if so, adjusting the state of the first data according to a preset adjustment rule, and if not, changing the first data into the second data and storing the second data at the specified cache address.
The present specification provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the program, the processor implements the data reading method described above.
The embodiment of the specification adopts at least one technical scheme which can achieve the following beneficial effects:
the processor can obtain a data reading instruction, according to the corresponding relation between the cache address and the memory address, the cache address corresponding to the memory address carried by the instruction can be inquired as a designated cache address, the data stored in the designated cache address can be used as first data, whether the memory address carried by the instruction is consistent with the address corresponding to the first data or not is judged, if so, the first data is read, the state of the first data is changed into a reserved state, if not, the data stored in the memory is read as second data according to the memory address carried by the instruction, whether the state of the first data is the reserved state or not is judged, if so, the state of the first data is adjusted according to a preset adjustment rule, and if not, the first data is changed into the second data and is stored in the designated cache address. In this specification, since the cache in the processor stores the data in the read memory address, and the delay time for reading the data from the cache in the processor is much shorter than the delay time for reading the data from the memory, for the same memory address, when the processor reads the memory address again after the data in the memory address has been read, the method for reading the data provided by this specification is used, which can greatly reduce the waiting time, and solve the problem that it still needs to wait for a long time to read the data in the same memory address again in the prior art.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and, together with the description, serve to explain the specification; they are not to be construed as limiting it. In the drawings:
fig. 1 is a flowchart of a method for reading data according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an apparatus for reading data according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of an internal area of a processor according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an electronic device corresponding to fig. 1 provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clear, the technical solutions of the present disclosure will be clearly and completely described below with reference to the specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the embodiments described are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a data reading method provided in an embodiment of the present disclosure, which may specifically include the following steps:
s100: the processor obtains instructions for data reading.
In this specification, the processor may include a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), a Central Processing Unit (CPU), and the like. The CPU's cache may be divided into a first-level cache and a second-level cache, and an ASIC may be a circuit customized to actual requirements. FPGAs are applied in various fields, such as communications and security, to implement functions such as image processing and data acquisition.
When a processor implements functions such as image processing, it often needs to access the same memory address frequently within a period of time. Due to the storage characteristics of the memory, the delay in reading data from the memory is much longer than the delay in reading data from the processor's cache, so each access to that memory address forces the processor to wait out the longer delay. The data stored at a frequently accessed memory address can therefore be kept in the cache; when the processor accesses that memory address again, it can directly access the cache address corresponding to the memory address carried by the instruction, which avoids the long wait incurred by accessing the same address again. Accessing a memory address includes performing read operations and write operations on it. This specification mainly concerns read operations on memory addresses and update operations on the data stored at them.
The processor generates an instruction for reading data when executing a task, and of course, the processor may also obtain an instruction for reading data generated by other processors. The description is mainly made here for an instruction for reading data generated by the processor itself.
An instruction is code by which a processor performs a task to implement certain control or operations. It may include an operation function and an operation object; in this specification, the operation function refers to data reading, and the operation object includes the memory address at which the data to be read is stored.
S102: and inquiring the cache address corresponding to the memory address carried by the instruction as an appointed cache address according to the corresponding relation between the cache address in the processor and the memory address in the memory.
In this specification, the capacity of the cache in the processor is much smaller than that of the memory, but the speed of the cache can approach the operating frequency of the processor. On obtaining the data reading instruction, the processor may first check whether the data to be read by the instruction is stored in the cache; if it is (that is, a hit), the processor can read the data directly without accessing the memory.
Therefore, when the processor checks whether the data to be read is stored in the cache, first, the cache address corresponding to the memory address carried by the instruction (that is, the designated cache address) may be queried according to the correspondence between the cache address in the processor and the memory address in the memory.
Specifically, the correspondence between a cache address in the processor and a memory address in the memory may be expressed as: the cache address is a specified bit address of the memory address. Because the cache's capacity is far smaller than the memory's, the number of bits in the cache address can be determined from the cache's capacity. For example, if the cache contains 1024 addresses, the lower 10 bits of the memory address may be used as the cache address; alternatively, 10 non-consecutive specified bits of the memory address may be used.
According to this correspondence, the specified bits of the memory address carried by the instruction can be used as the specified cache address, which is then queried in the cache. It should be noted that while the memory address carried by the instruction determines a specified cache address, that cache address is not reached only through this memory address: any other memory address containing the same specified bits maps to the same cache address.
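For the 1024-entry, low-10-bit example above, this mapping amounts to masking off the index bits. A minimal sketch (the function name is hypothetical):

```python
def specified_cache_address(mem_addr: int, index_bits: int = 10) -> int:
    """Extract the specified bits (here: the low bits) as the cache address."""
    return mem_addr & ((1 << index_bits) - 1)
```

Two different memory addresses whose low 10 bits agree map to the same specified cache address, which is why the tag comparison in the next step is needed.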
S104: and taking the data stored in the specified cache address as first data, judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data, if so, executing the step S106, and if not, executing the step S108.
In this specification, the processor may first determine whether data to be read is stored in the cache, and if the data stored in the memory address carried by the instruction is stored in the cache, the processor may directly read the data stored in the cache without reading the data stored in the memory, so as to save a long time delay time and improve the operating efficiency of the processor.
After the designated cache address is queried, the processor may use the data stored in the designated cache address as the first data, and determine whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data.
In particular, the processor may determine whether data stored at the specified cache address is present. When the processor stores data from the memory into the cache, it can simultaneously store a valid flag, the storage time, the number of hits, and the like; that is, the contents of the specified cache address can include the memory address, the data stored at that memory address, the valid flag, the storage time, and the number of hits. The processor may query the specified cache address for the valid flag: if the valid flag exists, it may determine that data stored at the specified cache address exists; if not, that no such data exists. Of course, the processor may also determine this in other ways, for example by checking whether a storage time exists at the specified cache address.
If the processor determines that data stored at the specified cache address exists, it may query the memory address stored there as the memory address corresponding to that data; that is, the memory address stored at the specified cache address and the memory address at which the cached data resides in the memory are the same address. The processor may then compare the memory address stored at the specified cache address with the memory address carried by the instruction. If they match, the processor determines that the two addresses are consistent, reads the first data, and changes its state to the reserved state, i.e., executes step S106. If they do not match, the processor determines that the addresses are inconsistent and reads the data stored in the memory as second data according to the memory address carried by the instruction, i.e., executes step S108.
If the processor determines that no data is stored at the specified cache address, it can conclude, without any address comparison, that the memory address carried by the instruction is inconsistent with the memory address stored at the specified cache address, and read the second data according to the memory address carried by the instruction, i.e., perform step S108.
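The existence check and address comparison might look as follows; `CacheLine` and `address_consistent` are hypothetical names, with the field set mirroring the valid flag, storage time, and hit count described above:

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    valid: bool = False    # valid flag: set only when data is stored
    mem_addr: int = 0
    data: bytes = b""
    hits: int = 0
    stored_at: float = 0.0 # storage time

def address_consistent(line: CacheLine, mem_addr: int) -> bool:
    # No valid flag means no data is stored at the specified cache address,
    # which is judged inconsistent without a comparison (go to S108).
    if not line.valid:
        return False
    # Otherwise compare the stored memory address against the instruction's.
    return line.mem_addr == mem_addr
```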
S106: and reading the first data, and changing the state of the first data into a reserved state.
After the processor determines that the memory address carried by the instruction is consistent with the memory address corresponding to the first data, the processor can directly read the first data stored in the specified cache address without reading the data stored in the memory, and the delay time for reading the data is reduced. The processor may also change the state of the first data to a reserved state when reading the first data.
Specifically, the state of the first data is characterized by the number of hits stored at the specified cache address, which is a non-negative integer. When the number of hits is a positive integer, the state of the first data is the reserved state; when it is zero, the state is the discard state. To change the state of the first data to the reserved state, the processor can therefore increase the number of hits, for example by 1. The greater the number of hits, the more frequently the processor reads the data stored at the specified cache address, i.e., the more time the cache saves it. The processor may also set a hit-count threshold: when the number of hits exceeds the threshold, it is reset to zero. The rationale is that a processor typically reads the same memory address frequently only for a period of time and may not read it at other times; moreover, as noted above, the same cache address can correspond to multiple memory addresses. If the processor has frequently read one memory address for a while and then shifts to other memory addresses corresponding to the same cache address, resetting the hit count to zero makes it easier to update the data stored at the specified cache address. This saves reading time more effectively and helps the processor keep the data it currently reads frequently in the cache.
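A hedged sketch of this hit-count update, including the optional threshold reset. The value 15 is an arbitrary assumption, since the text only calls for some threshold:

```python
HIT_THRESHOLD = 15   # assumed value; the text only requires that a threshold exist

def record_hit(hits: int, threshold: int = HIT_THRESHOLD) -> int:
    """Increase the hit count on a cache hit; reset it past the threshold."""
    hits += 1
    if hits > threshold:
        # A saturated, once-hot entry is reset so the specified cache
        # address can be updated more easily later.
        return 0
    return hits
```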
S108: and reading the data stored in the memory as second data according to the memory address carried by the instruction.
S110: determining whether the state of the first data is the reserved state, if so, performing step S112, and if not, performing step S114.
After determining that the memory address carried by the instruction is inconsistent with the memory address corresponding to the first data, the processor may read, in the memory, the data (i.e., the second data) stored in the memory address carried by the instruction according to the memory address carried by the instruction.
The processor can also judge whether the state of the first data is a reserved state or not while reading the second data.
Specifically, while reading the second data, the processor may determine whether the number of hits stored at the specified cache address is greater than zero. If it is, the processor determines that the state of the first data is the reserved state, meaning the first data has been read frequently in the recent past. Since the memory address carried by the instruction obtained in step S100 is not the memory address stored at the specified cache address, the state of the first data is adjusted according to the preset adjustment rule, i.e., step S112 is executed. If the number of hits is not greater than zero (i.e., it is zero), the state of the first data is the discard state: the processor has not read the first data frequently of late. In that case the first data stored at the specified cache address is changed to the second data, i.e., the data stored at the memory address carried by the instruction obtained in step S100, so that if a later data reading instruction carries the same memory address, the data can be read directly from the specified cache address. That is, step S114 is performed.
In addition, when the processor reads data stored in the memory, the storage characteristics of the memory force it to wait a long delay, whereas the delay for reading data from the cache is almost negligible by comparison. Therefore, when reading from the memory, the processor need not record any information about its earlier access to the specified cache address: after reading the second data, it can query the specified cache address again according to the memory address carried by the instruction and re-determine whether data is stored there. If data is stored there, the processor determines whether the state of the first data is the reserved state, i.e., whether the number of hits stored at the specified cache address is greater than zero. If the number of hits is greater than zero, the state of the first data is adjusted according to the preset adjustment rule, i.e., step S112 is performed; if the number of hits is zero, or if no data is stored at the specified cache address, the first data is changed to the second data and stored at the specified cache address, i.e., step S114 is performed.
The process of re-querying the specified cache address and re-determining whether data stored there exists is the same as described above and is not repeated here.
S112: and adjusting the state of the first data according to a preset adjustment rule.
When the processor determines that the memory address carried by the instruction is inconsistent with the memory address stored in the specified cache address and determines that the state of the first data is the reserved state, the processor may adjust the state of the first data according to a preset adjustment rule.
Specifically, since the data stored in the specified cache address is not the data that the processor needs to read, the adjustment rule can be expressed as: the state of the first data is adjusted to a discard state, or the state of the first data is adjusted toward a discard state. Since the state of the first data is characterized by the number of hits stored in the specified cache address, the adjustment rule can also be expressed as: the number of hits is adjusted to zero or, alternatively, the number of hits is reduced, e.g., the number of hits is reduced by 1.
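Both readings of the adjustment rule (clear the hit count to zero outright, or decrement it by one) can be captured in a single helper; the name and the `clear` flag are illustrative:

```python
def adjust_state(hits: int, clear: bool = False) -> int:
    """Apply the adjustment rule, moving the state toward discard on a miss."""
    if clear:
        return 0              # variant 1: set the state to discard outright
    return max(0, hits - 1)   # variant 2: step the hit count toward zero
```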
It should be noted that adjusting the state of the first data helps the processor update frequently accessed data into the cache according to how often cached data is read. If the number of hits stored at the specified cache address keeps decreasing until it reaches zero, this indicates that, over that period, the processor has frequently read data stored at other memory addresses corresponding to the same cache address. That data can then be written into the specified cache address so the processor can read it from the cache, which saves data reading time and achieves the effect of keeping the data the processor reads frequently during that period in the cache.
S114: change the first data into the second data and store the second data in the specified cache address.
Upon determining that the state of the first data is not the reserved state, the processor may change the first data to the second data and store the second data in the specified cache address.
Specifically, the processor may first delete the first data from the specified cache address, along with the metadata stored there: the memory address, the data stored at that memory address, the valid flag, the storage time, the number of hits, and so on. The processor may then store the second data, that is, the data stored at the memory address carried by the instruction, in the specified cache address. Meanwhile, the processor may initialize the state of the second data to the reserved state, that is, initialize the number of hits stored in the specified cache address when the second data is stored there; the initial number of hits may be 1 or another positive integer. In addition, the processor may change the memory address stored in the specified cache address to the memory address carried by the instruction, record the time at which the second data was stored as the storage time, and store the valid flag in the specified cache address. In summary, when the processor changes the first data into the second data, it deletes the first data and stores, in the specified cache address, the second data together with the memory address carried by the instruction, the number of hits, the storage time, the valid flag, and so on.
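The replacement steps above can be sketched as follows (the dictionary cache model and all field names are assumptions for illustration; the specification only enumerates the items to delete and store):

```python
import time

def replace_line(cache: dict, cache_addr: int, mem_addr: int,
                 second_data: bytes, initial_hits: int = 1) -> None:
    """Replace the line at cache_addr with second_data.

    Mirrors the steps in the text: delete the first data and its
    metadata, then store the second data together with the memory
    address carried by the instruction, an initialized hit count
    (the reserved state), the storage time, and the valid flag.
    """
    cache.pop(cache_addr, None)         # delete the first data and its metadata
    cache[cache_addr] = {
        "data": second_data,            # second data read from memory
        "mem_addr": mem_addr,           # memory address carried by the instruction
        "hits": initial_hits,           # reserved state: hit count >= 1
        "stored_at": time.monotonic(),  # storage time, used by the polling check
        "valid": True,                  # valid flag
    }
```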
In this specification, the processor may further process the data stored in each cache address in a polling manner according to a preset polling cycle.
Specifically, the processor may first obtain the storage duration of the data stored in a cache address, determined from the storage time recorded in that cache address. The processor may then judge whether this storage duration reaches a preset storage duration threshold. If so, the processor may write the data back to the memory according to the memory address stored in the cache address, clear the data stored in the cache address, and release the space of the cache address for subsequent new data. If not, the processor may leave the data stored in the cache address unprocessed.
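A sketch of one polling pass, under the same assumed line layout as above (monotonic timestamps and the dictionary model are illustrative choices, not from the specification):

```python
import time

def poll_cache(cache: dict, memory: dict, max_age_s: float) -> None:
    """Poll every cache address; write back and clear any line whose
    storage duration has reached the threshold max_age_s."""
    now = time.monotonic()
    for cache_addr in list(cache):       # list() lets us delete while iterating
        line = cache[cache_addr]
        if now - line["stored_at"] >= max_age_s:
            memory[line["mem_addr"]] = line["data"]  # write the data back
            del cache[cache_addr]                    # release the cache address
```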
Based on the data reading method shown in fig. 1, the embodiment of the present specification further provides a schematic structural diagram of a data reading apparatus, as shown in fig. 2.
Fig. 2 is a schematic structural diagram of an apparatus for reading data according to an embodiment of the present disclosure, where the apparatus includes:
an instruction obtaining module 201, configured to obtain an instruction for reading data by a processor in which the apparatus is located;
an address querying module 202, configured to query, according to a correspondence between a cache address in the processor and a memory address in a memory, a cache address corresponding to the memory address carried in the instruction as a specified cache address;
a determining module 203, configured to use the data stored in the specified cache address as first data, and determine whether a memory address carried by the instruction is consistent with a memory address corresponding to the first data;
a first reading module 204, configured to read the first data and change the state of the first data to a reserved state when the determination result of the determining module is consistent;
a second reading module 205, configured to, when the determination result of the determining module is inconsistent, read data stored in the memory as second data according to the memory address carried by the instruction, and determine whether the state of the first data is the reserved state, if the state is the reserved state, adjust the state of the first data according to a preset adjustment rule, and if the state is not the reserved state, change the first data into the second data and store the second data in the specified cache address.
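Putting the modules above together, the read path can be sketched as follows. The direct-mapped indexing via low address bits and the dictionary cache model are assumptions for illustration; the specification only requires a correspondence between memory addresses and cache addresses:

```python
def read_data(cache: dict, memory: dict, mem_addr: int,
              index_bits: int = 8) -> bytes:
    """One pass of the read flow: query the specified cache address;
    on a hit, raise the hit count and return the first data; on a
    miss, read the second data from memory, then either weaken a
    still-reserved resident line or replace it with the second data."""
    cache_addr = mem_addr & ((1 << index_bits) - 1)  # address querying module
    line = cache.get(cache_addr)
    if line is not None and line["mem_addr"] == mem_addr:
        line["hits"] += 1                            # first reading module
        return line["data"]
    second_data = memory[mem_addr]                   # second reading module
    if line is not None and line["hits"] > 0:
        line["hits"] -= 1                            # reserved: adjust only
    else:
        cache[cache_addr] = {"data": second_data,
                             "mem_addr": mem_addr, "hits": 1}
    return second_data
```

Note how a resident line survives one conflicting miss (its hit count drops) and is replaced only once its count reaches zero, matching the two branches of the second reading module.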
Optionally, the address querying module 202 is specifically configured to take specified bits of the memory address carried by the instruction as the specified cache address, and query the specified cache address.
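For example, using the low bits of the memory address as the specified bits (the bit positions and width are assumptions; the text only says "specified bits"):

```python
def specified_cache_address(mem_addr: int, index_bits: int = 10) -> int:
    """Take the low index_bits bits of the memory address as the
    specified cache address, so every memory address maps to
    exactly one cache address."""
    return mem_addr & ((1 << index_bits) - 1)
```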
Optionally, the determining module 203 is specifically configured to determine whether data stored in the specified cache address exists; if the judgment result is that the first data exists, inquiring a memory address stored in the specified cache address to serve as a memory address corresponding to the data stored in the specified cache address, and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data; and if the judgment result is that the instruction does not exist, judging that the memory address carried by the instruction is inconsistent with the memory address corresponding to the first data.
Optionally, the state of the first data is characterized by a number of hits stored in the specified cache address;
the second reading module 205 is specifically configured to determine whether the number of hits is greater than zero; if so, judging that the state of the first data is the reserved state; otherwise, judging that the state of the first data is not the reserved state.
Optionally, the first reading module 204 is specifically configured to increase the number of hits;
the second reading module 205 is specifically configured to reduce the number of hits.
Optionally, the second reading module 205 is specifically configured to delete the first data from the specified cache address; storing the second data in the specified cache address; initializing the state of the second data to the reserved state, and changing the memory address stored in the specified cache address to the memory address carried by the instruction.
Optionally, the apparatus further comprises: a processing module 206;
the processing module 206 is specifically configured to, for each cache address, obtain a storage duration of data stored in the cache address; judging whether the storage time of the data stored in the cache address reaches a preset storage time threshold value or not; if so, storing the data stored in the cache address into the memory according to the memory address stored in the cache address, and clearing the data stored in the cache address; otherwise, the data stored in the cache address is not processed.
Optionally, the processor on which the apparatus is located comprises a field programmable gate array FPGA.
Based on the data reading method shown in fig. 1 and the data reading apparatus shown in fig. 2, the embodiment of the present specification further provides a schematic structural diagram of an internal area of a processor, as shown in fig. 3.
Fig. 3 is a schematic structural diagram of an internal area of a processor according to an embodiment of the present disclosure. In fig. 3, the processor may include a service module, an acceleration module, and a memory controller. The service module may include the instruction obtaining module 201 in fig. 2. The acceleration module may include a first sub-module, a second sub-module, a third sub-module, and a cache, where the first sub-module may include the address querying module 202, the determining module 203, and the first reading module 204, the second reading module 205 may be implemented jointly by the second sub-module and the memory controller, and the third sub-module may include the processing module 206.
Specifically, the service module may generate a data reading instruction, or obtain the data reading instruction from another processor, and send the instruction to the first sub-module.
After receiving a data reading instruction, the first sub-module queries the cache for the cache address corresponding to the memory address carried by the instruction as the specified cache address, and judges whether data stored in the specified cache address exists. When the data exists, the first sub-module judges whether the memory address carried by the instruction is consistent with the memory address stored in the specified cache address; if they are consistent, the data stored in the specified cache address is returned to the service module. If they are inconsistent, or when no data is stored in the specified cache address, the first sub-module sends the data reading instruction to the memory controller; meanwhile, the first sub-module may reduce the number of hits stored in the specified cache address according to the preset adjustment rule.
After the memory controller receives the data reading instruction sent by the first sub-module, it reads the data stored at the memory address carried by the instruction from the memory as the second data, and sends the second data to the second sub-module.
After the second sub-module receives the second data sent by the memory controller, it queries the cache for the specified cache address according to the memory address carried by the instruction and judges whether data stored in the specified cache address exists. When the data exists, the second sub-module judges whether the number of hits stored in the specified cache address is greater than zero; if so, it reduces the number of hits according to the preset adjustment rule. If the number of hits is not greater than zero, or no data is stored in the specified cache address, the second sub-module may delete the first data from the specified cache address, store the second data in the specified cache address, record the storage time, initialize the number of hits, and store the valid flag, the memory address corresponding to the second data, and so on. Of course, the first sub-module may also reduce the number of hits according to the adjustment rule.
The third sub-module may process the data stored in the cache in a polling manner according to a preset polling period. For each cache address in the cache, the third sub-module may obtain the storage duration of the data stored in the cache address and judge whether it reaches the preset storage duration threshold. If so, the third sub-module stores the data in the memory according to the memory address stored in the cache address and clears the data stored in the cache address; if not, the third sub-module may leave the data in the cache address unprocessed.
Based on the method for reading data shown in fig. 1, the embodiment of the present specification further proposes a schematic structural diagram of the electronic device shown in fig. 4. As shown in fig. 4, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile memory, but may also include hardware required for other services. The processor reads a corresponding computer program from the non-volatile memory into the memory and then runs the computer program to implement the data reading method described in fig. 1 above.
Of course, besides a software implementation, this specification does not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution body of the above processing flow is not limited to logic units, and may also be hardware or a logic device.
In the 1990s, it was clear whether an improvement of a technology was an improvement in hardware (for example, an improvement in circuit structures such as diodes, transistors, and switches) or an improvement in software (an improvement in a method flow). However, as technology develops, improvements of many of today's method flows can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be implemented with a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user through programming the device. A designer "integrates" a digital system onto a single PLD through programming, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is now mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development, and the source code to be compiled must be written in a particular programming language called a Hardware Description Language (HDL). There is not only one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained merely by briefly programming the method flow in one of the hardware description languages above and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, besides implementing a controller purely as computer-readable program code, the same functionality can be implemented entirely by logically programming the method steps so that the controller takes the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component. Or, the means for implementing various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or device that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant parts, refer to the partial description of the method embodiment.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (10)

1. A method of data reading, the method comprising:
the processor acquires a data reading instruction;
according to the corresponding relation between the cache address in the processor and the memory address in the memory, inquiring the cache address corresponding to the memory address carried by the instruction as a specified cache address;
taking the data stored in the specified cache address as first data, and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
if they are consistent, reading the first data, and changing the state of the first data to a reserved state;
if they are inconsistent, reading the data stored in the memory as second data according to the memory address carried by the instruction, and judging whether the state of the first data is the reserved state; if so, adjusting the state of the first data according to a preset adjustment rule; and if not, changing the first data into the second data and storing the second data in the specified cache address.
2. The method according to claim 1, wherein querying, according to a correspondence between a cache address in the processor and a memory address in a memory, a cache address corresponding to the memory address carried by the instruction as a specified cache address specifically includes:
taking specified bits of the memory address carried by the instruction as the specified cache address, and querying the specified cache address.
3. The method according to claim 1, wherein determining whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data comprises:
judging whether data stored in the specified cache address exists;
if the judgment result is that the data exists, querying the memory address stored in the specified cache address as the memory address corresponding to the first data, and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
and if the judgment result is that the data does not exist, judging that the memory address carried by the instruction is inconsistent with the memory address corresponding to the first data.
4. The method of claim 1, wherein the state of the first data is characterized by a number of hits stored in the specified cache address;
judging whether the state of the first data is the reserved state, specifically including:
judging whether the number of hits is greater than zero;
if so, judging that the state of the first data is the reserved state;
otherwise, judging that the state of the first data is not the reserved state.
5. The method of claim 4, wherein changing the state of the first data to a retained state comprises:
increasing the number of hits;
adjusting the state of the first data according to a preset adjustment rule, specifically comprising:
reducing the number of hits.
6. The method of claim 1, wherein changing the first data into the second data and storing the second data in the specified cache address specifically comprises:
deleting the first data from the specified cache address;
storing the second data in the specified cache address;
initializing the state of the second data to the reserved state, and changing the memory address stored in the specified cache address to the memory address carried by the instruction.
7. The method of claim 1, wherein the method further comprises:
aiming at each cache address, acquiring the storage duration of the data stored in the cache address;
judging whether the storage time of the data stored in the cache address reaches a preset storage time threshold value or not;
if so, storing the data stored in the cache address into the memory according to the memory address stored in the cache address, and clearing the data stored in the cache address;
otherwise, the data stored in the cache address is not processed.
8. The method of any one of claims 1-7, wherein the processor comprises a Field Programmable Gate Array (FPGA).
9. An apparatus for data reading, the apparatus comprising:
the instruction acquisition module is used for acquiring an instruction for data reading by a processor where the device is located;
the address querying module is used for querying, according to the correspondence between the cache address in the processor and the memory address in the memory, the cache address corresponding to the memory address carried by the instruction as a specified cache address;
the judging module is used for taking the data stored in the specified cache address as first data and judging whether the memory address carried by the instruction is consistent with the memory address corresponding to the first data;
the first reading module is used for reading the first data and changing the state of the first data into a reserved state when the judgment result of the judging module is consistent;
and the second reading module is used for reading data stored in the memory as second data according to the memory address carried by the instruction when the judgment result of the judging module is inconsistent, judging whether the state of the first data is the reserved state, if so, adjusting the state of the first data according to a preset adjustment rule, and if not, changing the first data into the second data and storing the second data in the appointed cache address.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-8 when executing the program.
CN202010261076.5A 2020-04-03 2020-04-03 Data reading method and device Active CN111522506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010261076.5A CN111522506B (en) 2020-04-03 2020-04-03 Data reading method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010261076.5A CN111522506B (en) 2020-04-03 2020-04-03 Data reading method and device

Publications (2)

Publication Number Publication Date
CN111522506A true CN111522506A (en) 2020-08-11
CN111522506B CN111522506B (en) 2022-08-02

Family

ID=71901860

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010261076.5A Active CN111522506B (en) 2020-04-03 2020-04-03 Data reading method and device

Country Status (1)

Country Link
CN (1) CN111522506B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256600A (en) * 2020-10-22 2021-01-22 海光信息技术股份有限公司 Data reading method and related device
CN114003182A (en) * 2022-01-04 2022-02-01 苏州浪潮智能科技有限公司 Instruction interaction method and device, storage equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503707A (en) * 2014-12-24 2015-04-08 华为技术有限公司 Method and device for reading data
CN105335102A (en) * 2015-10-10 2016-02-17 浪潮(北京)电子信息产业有限公司 Buffer data processing method and device
US20170039145A1 (en) * 2015-03-09 2017-02-09 Intel Corporation Memcached systems having local caches
CN107220188A (en) * 2017-05-31 2017-09-29 莫倩 A kind of automatic adaptation cushion block replacement method
CN108897701A (en) * 2018-06-20 2018-11-27 珠海市杰理科技股份有限公司 Cache storage architecture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503707A (en) * 2014-12-24 2015-04-08 华为技术有限公司 Method and device for reading data
US20170039145A1 (en) * 2015-03-09 2017-02-09 Intel Corporation Memcached systems having local caches
CN105335102A (en) * 2015-10-10 2016-02-17 浪潮(北京)电子信息产业有限公司 Buffer data processing method and device
CN107220188A (en) * 2017-05-31 2017-09-29 莫倩 A kind of automatic adaptation cushion block replacement method
CN108897701A (en) * 2018-06-20 2018-11-27 珠海市杰理科技股份有限公司 Cache storage architecture

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256600A (en) * 2020-10-22 2021-01-22 海光信息技术股份有限公司 Data reading method and related device
CN114003182A (en) * 2022-01-04 2022-02-01 苏州浪潮智能科技有限公司 Instruction interaction method and device, storage equipment and medium

Also Published As

Publication number Publication date
CN111522506B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN108712454B (en) File processing method, device and equipment
US10599564B2 (en) Resource reclamation method and apparatus
US11436252B2 (en) Data processing methods, apparatuses, and devices
CN111522506B (en) Data reading method and device
CN108959341B (en) Data synchronization method, device and equipment
EP3023878B1 (en) Memory physical address query method and apparatus
CN117312394A (en) Data access method and device, storage medium and electronic equipment
US20240126465A1 (en) Data storage methods, apparatuses, devices, and storage media
CN111241040B (en) Information acquisition method and device, electronic equipment and computer storage medium
CN114900546B (en) Data processing method, device and equipment and readable storage medium
CN111355672A (en) Message forwarding method and device
CN116822657B (en) Method and device for accelerating model training, storage medium and electronic equipment
CN111190655A (en) Processing method, device, equipment and system for application cache data
CN109656946B (en) Multi-table association query method, device and equipment
CN106156050B (en) Data processing method and device
CN111694992A (en) Data processing method and device
CN117112227A (en) Memory management method, system, device, storage medium and electronic equipment
CN115374117A (en) Data processing method and device, readable storage medium and electronic equipment
CN113010551B (en) Resource caching method and device
CN111339117B (en) Data processing method, device and equipment
US20100077147A1 (en) Methods for caching directory structure of a file system
CN113761400A (en) Access request forwarding method, device and equipment
CN113342270A (en) Volume unloading method and device and electronic equipment
CN112559068A (en) Component caching method and device
CN115344410B (en) Method and device for judging event execution sequence, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant