CN118276944B - Data reading method and device, electronic equipment and readable storage medium - Google Patents
Data reading method and device, electronic equipment and readable storage medium Download PDFInfo
- Publication number
- CN118276944B (application CN202410711555.0A)
- Authority
- CN
- China
- Prior art keywords
- target
- address
- cache
- mapping
- physical address
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The embodiments of the invention provide a data reading method and apparatus, an electronic device, and a readable storage medium, relating to the field of computer technology. The method includes: acquiring the first physical address of each cache line in a target set from the tag field, and determining the first physical addresses of the cache lines in the target set as first reference addresses; performing hash mapping on the first reference addresses and on the target physical address corresponding to the access address, to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address; and acquiring the target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameter. The embodiments of the invention reduce the power consumed by cache data reads and improve the efficiency with which the cache reads data.
Description
Technical Field
The present invention relates to the field of computer technology, and in particular to a data reading method, a data reading apparatus, an electronic device, and a readable storage medium.
Background
In a computer system, to increase the speed at which the processor accesses data and to narrow the gap between the processor's operating speed and the memory's access speed, a data cache (DCache) is typically placed between the processor and the memory, and copies of the data most recently used by the processor are stored in it. The set-associative cache is a common organization for a data cache: the cache is divided into sets, each set contains several ways, each way corresponds to one cache line, and each cache line comprises a data block and the physical address to which that data block corresponds. The data blocks of the cache lines are stored in the data field (Data Array) of the set-associative cache, and the physical addresses corresponding to the data blocks are stored in its tag field (Tag Array).
Currently, when the processor needs to read data from a set-associative cache, the cache controller must compare the physical address corresponding to the access address with the addresses in each set one by one to determine whether the access address hits in the cache; then, if the access address hits, the way holding the hit data is determined from the address-comparison result, and the hit data is read from the cache line corresponding to that way.
However, this parallel address-comparison operation consumes a great deal of the set-associative cache's power, and for processors with wider access addresses the comparison takes a long time, so the data-read latency becomes excessive and severely limits the processor's operating frequency.
Disclosure of Invention
The embodiments of the invention provide a data reading method and apparatus, an electronic device, and a readable storage medium, which can solve the problems of high power consumption and long latency when a set-associative cache reads data in the related art.
In order to solve the above problems, an embodiment of the present invention discloses a data reading method, which includes:
Receiving a memory access request sent by a processor, and acquiring a memory access address carried in the memory access request;
Determining, from the sets of the cache, a target set corresponding to the access address according to the access address and the first physical addresses stored in a tag field of the cache; wherein the cache comprises a data field and a tag field, the data field is used for storing data blocks, each data block and the first physical address corresponding to it one-to-one form a cache line, at least two cache lines form a set, and the number of sets is at least 2;
Acquiring the first physical address of each cache line in the target set from the tag field, and determining the first physical addresses of the cache lines in the target set as first reference addresses;
Performing hash mapping on the first reference addresses and on the target physical address corresponding to the access address, to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address;
Acquiring the target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameter; the first physical address of the target data block is the same as the target physical address.
In another aspect, an embodiment of the present invention discloses a data reading apparatus, including:
a first acquisition module, configured to receive a memory access request sent by the processor and acquire the access address carried in the memory access request;
a determining module, configured to determine, from the sets of the cache, a target set corresponding to the access address according to the access address and the first physical addresses stored in a tag field of the cache; wherein the cache comprises a data field and a tag field, the data field is used for storing data blocks, each data block and the first physical address corresponding to it one-to-one form a cache line, at least two cache lines form a set, and the number of sets is at least 2;
a second acquisition module, configured to acquire the first physical address of each cache line in the target set from the tag field and determine the first physical addresses of the cache lines in the target set as first reference addresses;
a hash mapping module, configured to perform hash mapping on the first reference addresses and on the target physical address corresponding to the access address, to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address;
a third acquisition module, configured to acquire the target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameter; the first physical address of the target data block is the same as the target physical address.
In still another aspect, the embodiments of the invention also disclose an electronic device, which comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another via the communication bus; the memory is used for storing executable instructions that enable the processor to execute the data reading method described above.
The embodiments of the invention also disclose a readable storage medium; when the instructions in the readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the data reading method described above.
The embodiment of the invention has the following advantages:
The embodiments of the invention provide a data reading method. When the cache receives an access address, it first determines, from the sets of the cache, the target set corresponding to the access address, and determines the first physical address of each cache line in the target set, acquired from the tag field of the cache, as a first reference address. Hash mapping is then performed on the first reference addresses and on the target physical address corresponding to the access address, yielding first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address; the first mapping parameters have fewer bits than the first reference addresses, and the second mapping parameter has fewer bits than the target physical address. The cache then acquires the target data block corresponding to the target physical address according to these shorter first and second mapping parameters. This reduces the amount of address comparison the cache performs while reading data, shortens the time the comparison consumes, lowers the power consumed by cache reads, improves the cache's data-reading efficiency, and in turn allows a higher processor operating frequency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of steps of an embodiment of a data reading method of the present invention;
FIG. 2 is a schematic diagram of the architecture of a cache of the present invention;
FIG. 3 is a block diagram of a data reading apparatus of the present invention;
Fig. 4 is a block diagram of an electronic device for data reading according to an example of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first", "second" and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that the embodiments of the present invention may be implemented in sequences other than those illustrated or described herein; the objects identified by "first", "second", etc. are generally of one type, and the number of such objects is not limited, e.g., the first object may be one or more. Furthermore, the term "and/or" as used in the specification and claims describes an association between associated objects and covers three relationships; e.g., A and/or B may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship. The term "plurality" in the embodiments of the present invention means two or more, and similar terms are construed likewise.
Method embodiment
Referring to fig. 1, there is shown a flow chart of steps of an embodiment of a data reading method of the present invention, which may specifically include the steps of:
step 101, receiving a memory access request sent by a processor, and acquiring a memory access address carried in the memory access request.
Step 102, determining, from the sets of the cache, a target set corresponding to the access address according to the access address and the first physical addresses stored in a tag field of the cache.
Step 103, acquiring the first physical address of each cache line in the target set from the tag field, and determining the first physical addresses of the cache lines in the target set as first reference addresses.
Step 104, performing hash mapping on the first reference addresses and on the target physical address corresponding to the access address, to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address.
Step 105, obtaining a target data block corresponding to the target physical address according to the first mapping parameter and the second mapping parameter; the first physical address of the target data block is the same as the target physical address.
The data reading method provided by the embodiments of the invention can be applied to a cache. The cache is a buffer between the memory and the processor. The processor operates very fast, but the memory's access speed is comparatively slow, so a memory-wall problem exists between them: if the processor had to read data from the memory every time, a large amount of waiting time would result and the processor's overall performance would drop. By introducing a cache between the processor and the memory, a data caching layer can be established between them, and the data the processor uses most often is backed up into the cache so that the processor can read it quickly from there.
In the embodiments of the invention, the cache is a set-associative cache, and it may be either a data cache or an instruction cache. Referring to FIG. 2, a schematic diagram of a cache of the present invention is shown. The cache includes a data field for storing data blocks and a tag field for storing the first physical address (paddr) of each data block; a data block and the first physical address corresponding to it form a cache line, and at least two cache lines form a set. In the embodiments of the invention, the number of sets is at least 2.
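The structure described above can be sketched as follows. This is an illustrative Python model, not the patented hardware; the class and field names (`CacheLine`, `CacheSet`, `paddr`, `data`) are chosen for this sketch only.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CacheLine:
    paddr: Optional[int] = None  # first physical address, kept in the tag field
    data: bytes = b""            # data block, kept in the data field

@dataclass
class CacheSet:
    lines: List[CacheLine]       # one cache line per way

def make_cache(num_sets: int, num_ways: int) -> List[CacheSet]:
    # A set-associative cache: num_sets sets, each holding num_ways cache lines.
    return [CacheSet([CacheLine() for _ in range(num_ways)])
            for _ in range(num_sets)]

cache = make_cache(num_sets=2, num_ways=4)  # 2 sets of 4 ways each
```

Each `CacheSet` corresponds to one set of the cache, and each `CacheLine` pairs a data block with its first physical address, mirroring FIG. 2.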
Specifically, the cache exploits the locality of processor programs: when one datum is read, it is likely that the data adjacent to it will also be read, so the data field stores data with consecutive addresses, and the tag field stores the first physical addresses of that data. As shown in FIG. 2, a line formed by a first physical address and the data corresponding to it is called a cache line, and the data portion of a cache line is called a data block. If a memory block can be stored in any of m cache lines of the cache, the cache can be called an m-way set-associative cache, and those m cache lines form one set of the cache; in the embodiments of the invention, m is an integer greater than 1 and the number of sets n is an integer greater than or equal to 2.
The number of ways in the cache is predetermined; in the embodiments of the invention, the number of ways is at least 2. As an example, where the number of ways is 4, a set includes 4 cache lines, and the data in one memory block may be mapped into the data block of any of those 4 cache lines.
Specifically, the data in a memory block can be mapped to cache lines of a particular set according to the characteristic information of that data; the characteristic information may include, but is not limited to, the data type, the size, the physical address of the data, the area code of the memory block where the data is located, the block number of that memory block, and the like. Accordingly, when the cache receives an access address, the target set can be determined from the sets of the cache according to the characteristic information of the target data corresponding to the access address, where the access address is a virtual address (vaddr) and the target data is the data stored in the target data block hit by the access address.
It will be appreciated that the cache can uniquely determine the target set corresponding to an access address from that address, so the number of target sets determined in step 102 is 1.
Illustratively, the cache maps the data in memory blocks to cache lines of different sets according to the block numbers of the memory blocks. Specifically, suppose the memory is divided into 16 memory blocks numbered 0 to 15, and the cache comprises two sets (set0 and set1), each containing two cache lines; the two cache lines in set0 store data from the memory blocks numbered 0, 2, 4, 6, 8, 10, 12 and 14, and the two cache lines in set1 store data from the memory blocks numbered 1, 3, 5, 7, 9, 11, 13 and 15. When the cache receives an access address and determines from it that the target data resides in memory block 10, set0 is determined as the target set.
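The block-number mapping in the example above can be sketched as below: even-numbered memory blocks go to set0 and odd-numbered blocks to set1, i.e. the set index is the block number modulo the number of sets. The function name is illustrative, not from the patent.

```python
def set_for_block(block_number: int, num_sets: int = 2) -> int:
    # The set index is the memory block number modulo the number of sets,
    # so even-numbered blocks map to set 0 and odd-numbered blocks to set 1.
    return block_number % num_sets

print(set_for_block(10))  # memory block 10 maps to set 0, as in the example
```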
In the embodiments of the invention, the first physical addresses are used for matching against the target physical address corresponding to the access address, to determine whether the access address hits in the cache. It will be appreciated that the capacity of the memory is much greater than that of the cache, and multiple memory blocks may map to the same set of the cache; therefore, once the target set is determined, the cache can determine whether the access address hits by examining the first physical address in each cache line of the target set.
In the embodiments of the invention, when the cache receives a memory access request sent by the processor, it can obtain the access address carried by that request. Specifically, the access address may include three fields: a tag, an index, and a block offset. The tag indicates the target physical address corresponding to the access address: if the first physical address of any cache line matches the target physical address indicated by the tag, the access address hits in the cache; if no first physical address matches, a cache miss occurs. The index indicates the characteristic information of the target data corresponding to the access address, and the cache can determine the target set from its sets based on the index. The block offset indicates the offset of the target data within the data block: when the access address hits, the cache can locate the target data within the target data block according to the offset indicated by the block offset. For example, if the data block of a cache line is 64 bytes and one datum in memory is 4 bytes, a data block may store several data with consecutive addresses, and the block offset indicates the specific location of the target data within that data block.
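The three-field decomposition of the access address can be sketched as below. The default bit widths are illustrative assumptions (64-byte blocks give 6 offset bits; 256 sets give 8 index bits), not values fixed by the method.

```python
def split_address(addr: int, offset_bits: int = 6, index_bits: int = 8):
    """Decompose an access address into (tag, index, block offset)."""
    block_offset = addr & ((1 << offset_bits) - 1)        # low bits: position in block
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)  # middle bits: set selector
    tag = addr >> (offset_bits + index_bits)              # high bits: address match
    return tag, index, block_offset

tag, index, block_offset = split_address(0x12345)
# 0x12345 splits into tag 0x4, index 0x8D, block offset 0x05
```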
In an optional implementation, in the process of determining the target set, the cache may match the index in the access address against the first physical addresses of the cache lines in each set stored in the tag field; if the index matches the preset bits of any first physical address, the set containing that first physical address can be determined as the target set. It will be appreciated that the preset bits of a first physical address are the bits associated with the characteristic information of the data in its data block.
In another optional implementation, first identifiers in one-to-one correspondence with the sets are established according to the characteristic information of the data blocks of the cache lines of each set stored in the data field; a first identifier indicates the characteristic information of the data blocks in its corresponding set. In the process of determining the target set, the cache can match the characteristic information indicated by the index in the access address against the first identifiers of the sets, and determine the set corresponding to the matching first identifier as the target set.
In the embodiments of the invention, the first reference addresses are the first physical addresses of the cache lines of the target set stored in the tag field; the number of first reference addresses equals the number of cache lines in the target set and is therefore at least two. Specifically, after determining the target set, the cache can acquire the first physical address of each cache line in the target set from the tag field and determine those first physical addresses as the first reference addresses.
In addition, while executing step 103 the cache may send the access address to the translation lookaside buffer (TLB), so that the TLB determines the target physical address corresponding to the access address and returns it to the cache; upon receiving the target physical address from the TLB, the cache can perform the operation of step 104 based on the target physical address and the first reference addresses.
In step 104, the cache may perform hash mapping on the first reference addresses using a predefined first target hash function to obtain the first mapping parameters, and perform hash mapping on the target physical address using the same first target hash function to obtain the second mapping parameter. The first target hash function maps an address with more bits to a parameter with fewer bits; in a specific application scenario it can be customized to the hash-mapping requirement.
Illustratively, the first target hash function may be defined as: take the low 3 bits of the address to be processed as its mapping parameter; that is, after any address to be processed is hash-mapped by this function, only its low 3 bits are kept as the mapping parameter corresponding to it. Here the addresses to be processed include the first reference addresses and the target physical address: when the address to be processed is a first reference address, the resulting mapping parameter is a first mapping parameter, and when it is the target physical address, the resulting mapping parameter is the second mapping parameter. Specifically, hash-mapping a first reference address with this first target hash function takes its low 3 bits as a first mapping parameter, and hash-mapping the target physical address takes its low 3 bits as the second mapping parameter.
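A minimal sketch of this example first target hash function, assuming the mapping parameter is simply the low bits of the address (3 bits by default, as in the illustration above):

```python
def first_target_hash(addr: int, param_bits: int = 3) -> int:
    # Keep only the low param_bits bits of the address as its mapping parameter.
    return addr & ((1 << param_bits) - 1)

# A wide tag comparison shrinks to a comparison of 3-bit mapping parameters.
print(first_target_hash(0b1010_1101))  # low 3 bits are 0b101, i.e. 5
```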
It can be understood that the first mapping parameters obtained by hash mapping have fewer bits than the first reference addresses, and the first reference addresses are in one-to-one correspondence with the first mapping parameters obtained by hash-mapping them; likewise, the second mapping parameter has fewer bits than the target physical address and corresponds to it one-to-one. The first mapping parameters and the second mapping parameter have the same number of bits.
Since the number of cache lines in the target set is at least two, the number of first reference addresses is also at least two; in executing step 104, the cache can hash-map each first reference address with the first target hash function to obtain the first mapping parameters in one-to-one correspondence with the first reference addresses.
It should be noted that the number of bits of a mapping parameter may be any value greater than 0 and less than the number of bits of the address to be processed; for example, where the address to be processed has 27 bits, the mapping parameter may have any number of bits from 1 to 26. The embodiments of the invention place no particular limit on this, as long as the bits of the mapping parameter meet the timing requirements of the back end.
In the embodiments of the invention, although the first reference addresses are different from one another, the first mapping parameters have fewer bits than the first reference addresses, so the values of the first mapping parameters corresponding to different first reference addresses may be the same or different.
The process by which the cache acquires the target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameter specifically includes the following steps:
Step A11, determining whether any of the first mapping parameters are identical to one another.
Step A12, where the first mapping parameters are all different from one another, matching the second mapping parameter against the first mapping parameters.
Step A13, where at least two of the first mapping parameters are identical, adjusting the first target hash function so that the mapping parameter the cache obtains by hash-mapping an address to be processed is widened by 1 bit; then re-executing step 104 to hash-map the first reference addresses and the target physical address with the adjusted first target hash function to obtain new first mapping parameters and a new second mapping parameter, executing step A11 to determine whether the first mapping parameters are identical, executing step A12 if they are all different, and repeating step A13 whenever at least two of them are identical, until the first mapping parameters are pairwise different.
Step A14, where the second mapping parameter matches any one of the first mapping parameters, matching the first reference address corresponding to that first mapping parameter against the target physical address; where that first reference address matches the target physical address, determining the data block corresponding to that first mapping parameter as the target data block corresponding to the target physical address. The data block corresponding to a first mapping parameter is the data block in the cache line where the first reference address corresponding to that first mapping parameter is located.
Step A15, where the second mapping parameter does not match any of the first mapping parameters, determining a cache miss and acquiring the target data block from a downstream node of the cache.
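Steps A11 through A15 can be sketched together as follows. The widening loop and the full-address confirmation mirror the procedure above; the function name and the low-bits hash are illustrative assumptions, not the patent's exact circuit.

```python
def find_target_way(first_refs, target_paddr, start_bits=3):
    """Return the way index holding the target block, or None on a cache miss."""
    bits = start_bits
    while True:
        mask = (1 << bits) - 1
        first_params = [ref & mask for ref in first_refs]
        if len(set(first_params)) == len(first_params):
            break                    # A11/A12: parameters pairwise different
        bits += 1                    # A13: widen the mapping parameter by 1 bit
    second_param = target_paddr & mask
    for way, (param, ref) in enumerate(zip(first_params, first_refs)):
        # A14: match mapping parameters first, then confirm the full address.
        if param == second_param and ref == target_paddr:
            return way
    return None                      # A15: miss; fetch from the downstream node
```

For example, with first reference addresses `[0x41, 0x22, 0x13, 0x0C]`, whose low 3 bits (1, 2, 3, 4) are pairwise different, a target physical address of `0x13` matches way 2 after only a 3-bit comparison and a single full-address confirmation.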
It will be appreciated that the target data block acquired in step 105 contains the target data hit by the access address, and the first physical address of the target data block is the same as the target physical address.
Specifically, after acquiring the target data block, the cache can read the target data corresponding to the access address from the target data block according to the offset indicated by the block offset in the access address, and send the target data to the processor.
In the related art, the parallel address comparison performed while a set-associative cache reads data consumes a large amount of power. For a 64-kilobyte (KB) set-associative data cache divided into 256 sets with 4 ways per set and 64-byte cache lines, a processor with a 39-bit virtual address must compare whether 27-bit addresses are identical when reading data; the comparison takes a long time, the data-read latency becomes excessive, and the processor's operating frequency is severely limited. For these problems, three approaches are commonly used in the industry:
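The geometry of that 64 KB example works out as below. The 27-bit tag width quoted above is taken as given; it depends on the physical address width, which the text does not state.

```python
cache_size = 64 * 1024          # 64 KB data cache
line_size = 64                  # bytes per cache line
ways = 4
num_sets = cache_size // (line_size * ways)
offset_bits = line_size.bit_length() - 1   # 64-byte block -> 6 offset bits
index_bits = num_sets.bit_length() - 1     # 256 sets     -> 8 index bits
print(num_sets, offset_bits, index_bits)   # 256 6 8
```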
The first approach is a way prediction approach, which typically maintains an additional prediction bit or tag bit in the cache for each cache line to track the branch's prediction outcome; these additional prediction bits or tag bits may occupy the storage space of the cache, resulting in a waste of cache capacity; in addition, updating of the prediction bits or the flag bits introduces additional operations and overhead; under the condition of a way prediction error, invalid data can be loaded into the cache, so that the cache bandwidth and storage resources are wasted, and meanwhile, useful data with high cache hit probability in the cache can be eliminated from the cache, and further, cache miss is caused.
The second method reads the first physical address of every way of the cache together with its data block before any comparison between the target physical address and the first physical addresses is performed; it then compares each first physical address with the target physical address and selects a data block based on the comparison result. Although this method shortens the data-read path, all data blocks in the group are read out before the hit result is known, which increases the power consumption of the cache and wastes energy.
The third method, considering the long time required by the address comparison, adds a one-stage pipeline between the processor and the cache to wait for the comparison result; however, the added pipeline stage increases the time needed to fetch instructions.
In the data reading method provided by the embodiment of the invention, the cache comprises a data field for storing data blocks and a tag field for storing the first physical address of each data block. A data block and its one-to-one corresponding first physical address form a cache line, at least two cache lines form a group, and there are at least two groups. When the cache obtains an access address, it first determines the target group corresponding to the access address from the groups of the cache, and determines the first physical addresses of the cache lines in the target group, obtained from the tag field, as first reference addresses. The first reference addresses and the target physical address corresponding to the access address are hash-mapped to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address, so that each first mapping parameter has fewer bits than its first reference address and the second mapping parameter has fewer bits than the target physical address. The cache then acquires the target data block corresponding to the target physical address according to these shorter first and second mapping parameters, thereby reducing the address-comparison workload of the cache during data reading, shortening the time consumed by address comparison, reducing the power consumption of data reading, improving the data reading efficiency of the cache, and further improving the processing frequency of the processor.
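A minimal sketch of this idea follows. The hash here simply keeps the low bits of the address, matching the worked example later in the text; the 5-bit width is an assumption:

```python
HASH_BITS = 5  # assumed width; fewer bits than a full first reference address

def first_hash(addr: int) -> int:
    # One possible "first target hash function": keep the low HASH_BITS bits.
    return addr & ((1 << HASH_BITS) - 1)

def candidate_ways(first_refs, target_phys):
    # Compare short mapping parameters instead of full addresses.
    second_param = first_hash(target_phys)
    return [i for i, ref in enumerate(first_refs) if first_hash(ref) == second_param]
```

Only the ways whose first mapping parameter matches the second mapping parameter need a full address comparison; the other ways are ruled out after a 5-bit compare.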
In an optional embodiment of the present invention, step 105 of obtaining, according to the first mapping parameter and the second mapping parameter, a target data block corresponding to the target physical address includes:
step 1051, matching the second mapping parameter with each of the first mapping parameters.
Step 1052, determining a second reference address corresponding to the first target mapping parameter when there is a first target mapping parameter matching the second mapping parameter in the first mapping parameter.
Step 1053, according to the second reference address, obtaining the data block corresponding to the second reference address from the data domain, and obtaining the reference data block.
Step 1054, determining a target data block from the reference data blocks according to the second reference address and the target physical address.
In the embodiment of the present invention, in the process of obtaining the target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameter, the cache may first match the second mapping parameter with each first mapping parameter, and execute step 1052 to determine the second reference address corresponding to the first target mapping parameter if a first target mapping parameter matching the second mapping parameter exists among the first mapping parameters. It is understood that a first target mapping parameter is a first mapping parameter that matches the second mapping parameter, and the number of first target mapping parameters is between 0 and the number of first mapping parameters. Specifically, when the number of first target mapping parameters is 0, none of the first mapping parameters obtained in step 104 match the second mapping parameter, and the cache may determine that no target data block corresponding to the target physical address exists in the cache, that is, a cache miss. When the number of first target mapping parameters is greater than 0, at least one first mapping parameter obtained in step 104 matches the second mapping parameter, and the cache may continue with steps 1052 to 1054 to determine the target data block from the cache.
In the case that no first target mapping parameter matching the second mapping parameter exists among the first mapping parameters, the cache may determine a cache miss and obtain the target data block corresponding to the target physical address from the downstream node of the cache, without executing the operations corresponding to steps 1052 to 1054.
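Steps 1051 to 1054, including the miss path, can be sketched as follows; the low-bit hash and the tuple layout of the target group are illustrative assumptions:

```python
HASH_BITS = 5

def first_hash(addr: int) -> int:
    # Assumed "first target hash function": keep the low HASH_BITS bits.
    return addr & ((1 << HASH_BITS) - 1)

def lookup(target_group, target_phys):
    """target_group: list of (first_reference_address, data_block) cache lines."""
    second_param = first_hash(target_phys)
    # Steps 1051/1052: the short hash filters candidate second reference addresses.
    candidates = [(ref, blk) for ref, blk in target_group
                  if first_hash(ref) == second_param]
    if not candidates:
        return None  # cache miss: fetch from the downstream node, skip 1052-1054
    # Steps 1053/1054: full address comparison only on the filtered candidates.
    for ref, blk in candidates:
        if ref == target_phys:
            return blk
    return None      # hash collision but no true match: also a miss
```

Note that a full-width comparison is performed only for the ways that survive the hash filter, which is the source of the power and latency savings claimed above.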
It will be appreciated that a mapping parameter has fewer bits than the address from which it is derived; therefore, a match between a first mapping parameter and the second mapping parameter only guarantees that the corresponding first reference address partially matches the target physical address. For this reason, when the first mapping parameter matches the second mapping parameter, the second reference address corresponding to the first target mapping parameter is determined from the first reference addresses, and the target data block is determined from the reference data blocks corresponding to the second reference addresses according to the second reference address and the target physical address, thereby improving the accuracy of the target data block acquired by the cache.
Given that a mapping parameter has fewer bits than the address to be processed, the fewer bits the mapping parameters have, the more first target mapping parameters the cache will determine; conversely, the more bits the mapping parameters have, the fewer first target mapping parameters the cache will determine, and the greater the probability that the reference data block obtained from the data domain according to the second reference address is the target data block, but the resulting hardware overhead and power consumption also increase. In an actual application scenario, the bit width of the mapping parameters may be chosen by balancing the timing requirement of the back end against the hardware overhead and power consumption of the cache, which is not particularly limited in the embodiment of the present invention.
It should be noted that the second reference address is determined from the first reference addresses: when a first target mapping parameter is determined, the cache may determine the first reference address corresponding to that first target mapping parameter as the second reference address, according to the one-to-one correspondence between first mapping parameters and first reference addresses.
In the embodiment of the invention, the process by which the cache acquires, according to the second reference address, the data block corresponding to the second reference address from the data domain to obtain the reference data block is as follows: first, the cache line in the target group where the second reference address is located is determined as the target cache line according to the second reference address; then, the data block of the target cache line is acquired from the data domain to obtain the reference data block.
It can be understood that a reference data block is the data block in the target cache line where a second reference address is located, and reference data blocks correspond one-to-one with second reference addresses. When the number of second reference addresses equals the number of first mapping parameters obtained in step 104, all cache lines in the target group are determined to be target cache lines.
In an embodiment of the present invention, the cache is configured to determine a target data block from the reference data blocks according to the second reference address and the target physical address, and the method specifically includes the following steps:
Step A21, performing secondary hash mapping on the second reference address by using a second target hash function to obtain a third mapping parameter; wherein the bits of the third mapping parameter are larger than the bits of the first mapping parameter and smaller than the bits of the second reference address.
Step A22, performing secondary hash mapping on the target physical address by using a second target hash function to obtain a fourth mapping parameter; wherein the bits of the fourth mapping parameter are the same as the bits of the third mapping parameter.
And step A23, matching the fourth mapping parameter with the third mapping parameter.
Step A24, when second target mapping parameters matched with fourth mapping parameters exist in the third mapping parameters, and the number of the second target mapping parameters is smaller than a first preset threshold, matching a third reference address corresponding to the second target mapping parameters with a target physical address, and when the third reference address is matched with the target physical address, determining a reference data block corresponding to the third reference address as a target data block; wherein the first preset threshold is 2.
Step A25, when second target mapping parameters matching the fourth mapping parameter exist among the third mapping parameters and the number of second target mapping parameters is greater than or equal to the first preset threshold, adjusting the second target hash function so that the mapping parameters obtained by hash-mapping the addresses to be processed with it are 1 bit wider; the operations corresponding to steps A21 to A23 are then repeated with the adjusted second target hash function until the number of second target mapping parameters is smaller than the first preset threshold, or no second target mapping parameter matching the fourth mapping parameter exists among the third mapping parameters. Specifically, when the number of second target mapping parameters is smaller than the first preset threshold, the operation corresponding to step A24 is performed; when no second target mapping parameter matching the fourth mapping parameter exists among the third mapping parameters, the operation corresponding to step A26 is performed.
And step A26, if the second target mapping parameter matched with the fourth mapping parameter does not exist in the third mapping parameter, determining cache miss, and acquiring a target data block corresponding to the target physical address from a downstream node of the cache.
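Steps A21 to A26 can be sketched as an adaptive filter that widens the secondary hash one bit at a time. Treating the low bits of an address as its hash is an assumption carried over from the worked example; the parameter defaults are likewise illustrative:

```python
def secondary_filter(second_refs, target_phys, start_bits=6, max_bits=32, threshold=2):
    """Widen the hash until fewer than `threshold` candidates remain (step A25),
    then confirm the survivor with a full comparison (step A24)."""
    bits = start_bits
    while bits <= max_bits:
        mask = (1 << bits) - 1
        cands = [r for r in second_refs if (r & mask) == (target_phys & mask)]
        if not cands:
            return None                 # step A26: miss, go to the downstream node
        if len(cands) < threshold:
            # Step A24: a single candidate is left; verify the full address.
            return cands[0] if cands[0] == target_phys else None
        bits += 1                       # step A25: hash output grows by 1 bit
    return target_phys if target_phys in second_refs else None
```

Each extra bit roughly halves the expected number of colliding candidates, so the loop terminates quickly in practice.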
In the data reading method provided by the embodiment of the invention, in the process of obtaining the target data block corresponding to the target physical address, the cache first determines the first target mapping parameters matching the second mapping parameter from the first mapping parameters, then screens the second reference addresses from the first reference addresses based on the first target mapping parameters, and determines the target data block from the reference data blocks corresponding to the second reference addresses according to the second reference address and the target physical address. In addition, in the embodiment of the present invention, the cache may acquire the data block corresponding to the second reference address from the data domain in step 1053, execute step 1054 to determine the target data block from the reference data blocks according to the second reference address and the target physical address, and return the target data in the target data block to the processor. This shortens the execution path of the cache when reading data, improves the timing convergence of the data reading process, shortens the time consumed by address comparison, improves the efficiency of data reading, and further improves the processing frequency of the processor.
In an alternative embodiment of the present invention, the determining, in step 1054, the target data block from the reference data blocks according to the second reference address and the target physical address includes:
Step S11, determining the reference data block corresponding to the target reference address as the target data block when a target reference address matching the target physical address exists among the second reference addresses.
In the embodiment of the invention, in the process of determining the target data block from the reference data blocks according to the second reference address and the target physical address, the cache can directly match the second reference addresses with the target physical address, and when a target reference address matching the target physical address exists among the second reference addresses, determine the reference data block corresponding to the target reference address as the target data block.
It will be appreciated that, before step S11, the cache has already filtered the first reference addresses determined in step 103 through the matching of the second mapping parameter with the first mapping parameters in step 1051, so the address matching range of step S11 is narrowed. In step S11, the cache may directly match the second reference addresses with the target physical address and, when a target reference address matching the target physical address exists among the second reference addresses, determine the reference data block corresponding to that target reference address as the target data block. The target reference address is the second reference address that matches the target physical address.
In addition, in the event that there is no target reference address in the second reference address that matches the target physical address, the cache may determine a cache miss and obtain a target block of data corresponding to the target physical address in a node downstream of the cache.
According to the data reading method provided by the embodiment of the invention, once the cache has determined the second reference addresses and acquired the corresponding reference data blocks, the target physical address can be matched directly against the second reference addresses, and when a target reference address matching the target physical address exists, the corresponding reference data block is determined as the target data block. This reduces the address-comparison workload of the cache during data reading, simplifies the operation flow of determining the target data block from the cache, and improves the feasibility of the embodiment of the invention.
In an optional embodiment of the present invention, step 105 of obtaining, according to the first mapping parameter and the second mapping parameter, a target data block corresponding to the target physical address includes:
Step 1055, matching the second mapping parameter with each of the first mapping parameters.
Step 1056, if there is no first target mapping parameter matching the second mapping parameter in the first mapping parameter, acquiring a target data block from a downstream node of the cache according to the target physical address.
In the embodiment of the invention, in the process of matching the second mapping parameter with each first mapping parameter, if no first target mapping parameter matching the second mapping parameter exists among the first mapping parameters, the cache can directly determine that no target data block corresponding to the target physical address exists in the cache and execute the operation of acquiring the target data block from the downstream node of the cache according to the target physical address. A cache miss can thus be determined by comparing the shorter first and second mapping parameters, without comparing the target physical address corresponding to the access address with each first reference address. This greatly reduces the address-comparison workload of the cache, reduces the time consumed by the address comparison in the case of a cache miss, shortens the delay with which the cache acquires the target data block, improves the efficiency of acquiring the target data block, and further improves the processing frequency of the processor.
Specifically, when no first target mapping parameter matching the second mapping parameter exists among the first mapping parameters, the cache can search the downstream node for the data block matching the target physical address corresponding to the access address, read the found data block from the downstream node, and backfill it into the cache to obtain the target data block corresponding to the target physical address.
Wherein, in the case that the cache is a first level cache, the downstream node of the cache is a second level cache; in the case that the cache is a secondary cache, the downstream node of the cache is a tertiary cache; in the case where the cache is a three-level cache, the downstream node of the cache is a system level cache or memory.
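The miss path above can be sketched as a read-through over the hierarchy. Modelling each level as a dict keyed by physical address is purely illustrative:

```python
def read_through(levels, phys_addr):
    """levels: ordered caches/memory, e.g. [l1, l2, l3, memory], nearest first.
    On a miss at one level, search the next downstream node and backfill."""
    for depth, level in enumerate(levels):
        if phys_addr in level:
            block = level[phys_addr]
            for upstream in levels[:depth]:
                upstream[phys_addr] = block   # backfill the missed levels
            return block
    raise KeyError(f"address {phys_addr:#x} not present at any level")
```

After the backfill, a repeated access to the same physical address hits in the first-level cache directly.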
In an optional embodiment of the present invention, step 102, determining, from the groups of the cache, the target group corresponding to the access address according to the access address and the first physical addresses stored in the tag field of the cache, includes:
step 1021, obtaining first matching information according to the access address; the first matching information is used for indicating the data characteristics of the target data block corresponding to the access address.
Step 1022, matching the first matching information with the first physical address stored in the tag domain of the cache.
Step 1023, if there is a first physical address matching the first matching information in the tag field, determining a group where the first physical address matching the first matching information is located as a target group.
In the embodiment of the invention, in the process of determining the target group corresponding to the access address from the groups of the cache, first matching information can be acquired according to the access address; the first matching information is then matched with each first physical address stored in the tag field of the cache; if a first physical address matching the first matching information exists in the tag field, the group where that first physical address is located is determined as the target group.
The first matching information may include bits of the access address that indicate the data characteristics of the target data block. Specifically, when it acquires the access address, the cache may take a portion of its bits as the first matching information; in step 1022, the first matching information is matched with the corresponding bits of each first physical address in the tag domain, and when those bits of a first physical address are identical to the first matching information, the group where that first physical address is located is determined as the target group.
Illustratively, if the access address has 10 bits, the cache may take the first 3 of those 10 bits as the first matching information and match them with the first 3 bits of each first physical address in the tag domain; if the first 3 bits of any first physical address are identical to the first matching information, the group where that first physical address is located may be determined as the target group.
According to the data reading method provided by the embodiment of the invention, the target group can be determined simply by taking, from the access address, a small number of bits indicating the data characteristics of the target data block as the first matching information and matching it with each first physical address in the tag domain, thereby reducing the power consumption of the cache in determining the target group and improving the efficiency of determining the target group.
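Under the 10-bit-address assumption of the example above, extracting and matching the first matching information might look like this (the widths and group layout are illustrative):

```python
ADDR_BITS = 10   # assumed address width from the example above
INFO_BITS = 3    # assumed width of the first matching information

def first_match_info(addr: int) -> int:
    # Take the top INFO_BITS bits of the address as the first matching information.
    return addr >> (ADDR_BITS - INFO_BITS)

def find_target_group(groups, access_addr):
    """groups: mapping of group id -> list of first physical addresses."""
    info = first_match_info(access_addr)
    for gid, phys_addrs in groups.items():
        if any(first_match_info(p) == info for p in phys_addrs):
            return gid
    return None  # no matching first physical address in the tag domain
```

Only 3 bits per stored address are examined here, which is the power saving the paragraph above describes.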
As an example, the embodiment of the present invention further provides another data reading method, applied to a cache, where the cache is a 64 KB set-associative data cache comprising a data field for storing data blocks and a tag field for storing the first physical addresses of the data blocks. The data blocks and their one-to-one corresponding first physical addresses form cache lines, at least two cache lines form a group, and there are at least two groups. Specifically, the cache is divided into 256 groups, each group has 4 ways, and each cache line is 64 bytes. The method may include the following steps 201 to 212:
Step 201, the cache receives a memory access request sent by the processor, and obtains a memory access address a carried in the memory access request.
It is understood that access address A is a virtual address.
Step 202, the cache acquires first matching information according to access address A; the first matching information is used for indicating the data characteristics of the target data block corresponding to access address A, and is the first 3 bits of access address A.
Step 203, the cache matches the first matching information with the first 3 bits of the first physical address stored in the tag field of the cache, and the first 3 bits of the first physical address in the 128 th set of cache lines are matched with the first matching information.
Step 204, the cache determines the 128 th set as the target set.
Step 205, the cache obtains the first physical addresses in the 128 th set of 4-way cache lines from the tag field, address 1, address 2, address 3, and address 4, respectively, and determines address 1, address 2, address 3, and address 4 as the first reference address.
In step 206, the cache performs hash mapping on the address 1, the address 2, the address 3, and the address 4 by using the first target hash function, to obtain a first mapping parameter 1 corresponding to the address 1, a first mapping parameter 2 corresponding to the address 2, a first mapping parameter 3 corresponding to the address 3, and a first mapping parameter 4 corresponding to the address 4.
The first mapping parameter 1 is the lower 5 bits of the address 1, the first mapping parameter 2 is the lower 5 bits of the address 2, the first mapping parameter 3 is the lower 5 bits of the address 3, and the first mapping parameter 4 is the lower 5 bits of the address 4.
Step 207, the cache performs hash mapping on the target physical address corresponding to the access address a by using the first target hash function, so as to obtain a second mapping parameter corresponding to the target physical address.
Wherein the second mapping parameter is the lower 5 bits of the target physical address.
Step 208, the cache matches the second mapping parameter with the first mapping parameter 1, the first mapping parameter 2, the first mapping parameter 3 and the first mapping parameter 4, respectively, and the first mapping parameter 2 and the first mapping parameter 3 are both matched with the second mapping parameter.
In step 209, the cache determines the first mapping parameter 2 and the first mapping parameter 3 as first target mapping parameters, and determines the second reference address corresponding to the first mapping parameter 2 as address 2, and the second reference address corresponding to the first mapping parameter 3 as address 3.
Step 210, the cache acquires a data block corresponding to the second reference address from the data domain according to the second reference address (address 2, address 3) to obtain a reference data block; the reference data block is a data block 2 corresponding to an address 2 and a data block 3 corresponding to an address 3.
Step 211, the cache matches the target physical address with address 2 and address 3, respectively, and address 3 matches the target physical address.
Step 212, the cache determines address 3 as a target reference address, and determines the data block 3 corresponding to address 3 as a target data block.
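Steps 201 to 212 can be replayed end to end in a few lines. The concrete addresses below are invented so that two ways collide in the 5-bit hash (as in steps 208 and 209) but only one survives the full comparison:

```python
low5 = lambda addr: addr & 0x1F   # the 5-bit first target hash of the example

# Group 128: first physical address -> data block, one entry per way.
ways = {0x111: "block1", 0x1A5: "block2", 0x3A5: "block3", 0x173: "block4"}

target_phys = 0x3A5                      # translation of access address A

second_param = low5(target_phys)         # steps 206-207: hash the addresses

# Steps 208-209: two ways share the low 5 bits with the target.
candidates = {a: b for a, b in ways.items() if low5(a) == second_param}

# Steps 210-212: full comparison over the surviving candidates only.
target_block = next((b for a, b in candidates.items() if a == target_phys), None)
```

Here two of the four ways pass the hash filter, and the single full-width comparison identifies "block3" as the target data block, mirroring steps 211 and 212.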
In summary, the embodiment of the invention provides a data reading method, which reduces the workload of address comparison of a cache in the process of data reading, shortens the time consumed by address comparison of the cache, reduces the power consumption of the cache for data reading, improves the efficiency of the cache for data reading, and further improves the processing frequency of a processor.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Device embodiment
Referring to fig. 3, there is shown a block diagram of a data reading apparatus of the present invention, for use with a cache, the apparatus may include:
the first obtaining module 301 is configured to receive a memory access request sent by a processor, and obtain a memory access address carried in the memory access request;
A determining module 302, configured to determine, from the set of caches, a target set corresponding to the access address according to the access address and a first physical address stored in a tag field of the cache; the high-speed cache comprises a data field and a mark field, wherein the data field is used for storing data blocks, the data blocks and first physical addresses corresponding to the data blocks one by one form cache lines, at least two cache lines form groups, and the number of the groups is at least 2;
a second obtaining module 303, configured to obtain, from the tag field, a first physical address of each cache line in the target group, and determine the first physical address of each cache line in the target group as a first reference address;
The hash mapping module 304 is configured to perform hash mapping on the first reference address and a target physical address corresponding to the access address, so as to obtain a first mapping parameter corresponding to the first reference address and a second mapping parameter corresponding to the target physical address;
a third obtaining module 305, configured to obtain, according to the first mapping parameter and the second mapping parameter, a target data block corresponding to the target physical address; the first physical address of the target data block is the same as the target physical address.
Optionally, the third obtaining module includes:
the first matching submodule is used for matching the second mapping parameters with the first mapping parameters;
A first determining submodule, configured to determine a second reference address corresponding to the first target mapping parameter when there is a first target mapping parameter that matches the second mapping parameter in the first mapping parameter;
the first acquisition sub-module is used for acquiring a data block corresponding to the second reference address from the data domain according to the second reference address to obtain a reference data block;
and the second determining submodule is used for determining a target data block from the reference data blocks according to the second reference address and the target physical address.
Optionally, the second determining sub-module includes:
And the determining unit is used for determining the reference data block corresponding to the target reference address as a target data block when the target reference address matched with the target physical address exists in the second reference address.
Optionally, the third obtaining module includes:
the second matching submodule is used for matching the second mapping parameters with the first mapping parameters;
and the second obtaining sub-module is used for obtaining the target data block from the downstream node of the cache according to the target physical address under the condition that the first target mapping parameter matched with the second mapping parameter does not exist in the first mapping parameter.
Optionally, the determining module includes:
The third acquisition sub-module is used for acquiring first matching information according to the access address; the first matching information is used for indicating the data characteristics of the target data block corresponding to the memory access;
a third matching sub-module, configured to match the first matching information with a first physical address stored in a tag field of the cache;
And the third determining submodule is used for determining a group where the first physical address matched with the first matching information is located as a target group when the first physical address matched with the first matching information exists in the mark domain.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
The specific manner in which the various modules perform the operations in relation to the processor of the above-described embodiments have been described in detail in relation to the embodiments of the method and will not be described in detail herein.
Referring to fig. 4, a block diagram of an electronic device for data reading according to an embodiment of the present invention is shown. As shown in fig. 4, the electronic device includes: the device comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete communication with each other through the communication bus; the memory is used for storing executable instructions that cause the processor to execute the data reading method of the foregoing embodiment.
The processor may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may also be a combination that implements computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The communication bus may include a path for transferring information between the memory and the communication interface. The communication bus may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one line is shown in fig. 4, but this does not mean there is only one bus or only one type of bus.
The memory may be a ROM (Read-Only Memory) or another type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
Embodiments of the present invention also provide a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of an electronic device (a server or a terminal), enable the processor to perform the data reading method shown in fig. 1.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the invention may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, those skilled in the art may make additional variations and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all such variations and modifications that fall within the scope of the embodiments of the invention.
Finally, it should also be noted that relational terms such as "first" and "second" are used herein solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or terminal device that comprises the element.
The data reading method, apparatus, electronic device, and readable storage medium provided by the present invention have been described in detail above, and specific examples have been used herein to illustrate the principles and implementations of the invention; the above embodiments are intended only to help in understanding the method of the invention and its core idea. Meanwhile, those skilled in the art may, in accordance with the ideas of the present invention, make changes to the specific implementations and the scope of application; in view of the above, the contents of this specification should not be construed as limiting the present invention.
Claims (12)
1. A method of data reading, for use with a cache, the method comprising:
Receiving a memory access request sent by a processor, and acquiring a memory access address carried in the memory access request;
determining, from the groups of the cache, a target group corresponding to the access address according to the access address and the first physical addresses stored in a tag field of the cache; wherein the cache comprises a data field and the tag field, the data field is used for storing data blocks, each data block and its one-to-one corresponding first physical address form a cache line, at least two cache lines form a group, and the number of groups is at least two;
acquiring the first physical addresses of all cache lines in the target group from the tag field, and determining the first physical addresses of the cache lines in the target group as first reference addresses;
performing hash mapping on the first reference addresses and on a target physical address corresponding to the access address by using a first target hash function, to obtain first mapping parameters corresponding to the first reference addresses and a second mapping parameter corresponding to the target physical address;
determining whether any two of the first mapping parameters are the same;
in a case where at least two of the first mapping parameters are the same, adjusting the first target hash function so that the bit width of the mapping parameter obtained by hash-mapping an address to be processed with the first target hash function is increased by 1 bit, performing hash mapping on the first reference addresses and the target physical address again by using the adjusted first target hash function to obtain the first mapping parameters and the second mapping parameter, and performing the step of determining whether the first mapping parameters are the same again; wherein the address to be processed comprises the first reference addresses and the target physical address; and
under the condition that the first mapping parameters are different from each other, acquiring a target data block corresponding to the target physical address according to the first mapping parameters and the second mapping parameters; the first physical address of the target data block is the same as the target physical address.
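The collision-avoidance loop of claim 1 can be sketched as follows. This is a minimal illustration under assumptions, not the patented implementation: the claim does not specify the hash function or its initial output width, so `blake2b` and a 2-bit starting width are stand-ins chosen for the example.

```python
import hashlib

def hash_map(addr: int, bits: int) -> int:
    """Map a physical address to a `bits`-wide mapping parameter.

    blake2b stands in for the unspecified "first target hash function"."""
    digest = hashlib.blake2b(addr.to_bytes(8, "little"), digest_size=8).digest()
    return int.from_bytes(digest, "little") & ((1 << bits) - 1)

def map_addresses(first_reference_addrs, target_phys_addr, start_bits=2):
    """Widen the mapping parameter by 1 bit at a time until the first
    mapping parameters (one per reference address) are pairwise distinct."""
    bits = start_bits
    while bits <= 64:
        first_params = [hash_map(a, bits) for a in first_reference_addrs]
        second_param = hash_map(target_phys_addr, bits)
        # "different from each other": no two first mapping parameters collide
        if len(set(first_params)) == len(first_params):
            return first_params, second_param, bits
        bits += 1  # adjust the hash function: add 1 bit and retry
    raise RuntimeError("reference addresses could not be separated")
```

Because the mapping parameters are much shorter than full physical addresses, comparing them is cheaper than comparing whole tags; widening only on collision keeps the common case fast.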
2. The method of claim 1, wherein the obtaining, according to the first mapping parameter and the second mapping parameter, the target data block corresponding to the target physical address includes:
matching the second mapping parameter against the first mapping parameters;
in a case where a first target mapping parameter matching the second mapping parameter exists among the first mapping parameters, determining a second reference address corresponding to the first target mapping parameter;
acquiring, from the data field, a data block corresponding to the second reference address to obtain a reference data block; and
determining the target data block from the reference data block according to the second reference address and the target physical address.
3. The method of claim 2, wherein determining a target data block from the reference data blocks based on the second reference address and the target physical address comprises:
determining the reference data block corresponding to a target reference address as the target data block in a case where the target reference address matching the target physical address exists in the second reference address.
4. The method of claim 1, wherein the obtaining, according to the first mapping parameter and the second mapping parameter, the target data block corresponding to the target physical address includes:
matching the second mapping parameter against the first mapping parameters; and
acquiring the target data block from a downstream node of the cache according to the target physical address in a case where no first target mapping parameter matching the second mapping parameter exists among the first mapping parameters.
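Claims 2 to 4 together describe the hit/miss path once the mapping parameters are known. The following is a hedged sketch: the function and variable names are my own, the data field is modeled as a plain dict keyed by first physical address, and `fetch_downstream` stands in for the next cache level or memory.

```python
def read_target_block(first_params, first_reference_addrs, second_param,
                      target_phys_addr, data_field, fetch_downstream):
    """Return the target data block for `target_phys_addr`.

    first_params / first_reference_addrs describe the cache lines of the
    target group; data_field maps a first physical address to its block."""
    for param, ref_addr in zip(first_params, first_reference_addrs):
        if param != second_param:
            continue  # mapping parameters differ: this line cannot hold the block
        # claim 3: confirm with the full physical address (the second
        # reference address) before declaring a hit
        if ref_addr == target_phys_addr:
            return data_field[ref_addr]
    # claim 4: no first target mapping parameter matches -> go downstream
    return fetch_downstream(target_phys_addr)
```

Note that because the first mapping parameters are pairwise distinct by construction, at most one cache line's parameter can match the second mapping parameter, so the full-address comparison runs at most once per access.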
5. The method of claim 1, wherein the determining, from the groups of the cache, the target group corresponding to the access address according to the access address and the first physical addresses stored in the tag field of the cache comprises:
acquiring first matching information according to the access address, where the first matching information is used to indicate a data characteristic of the target data block corresponding to the memory access request;
matching the first matching information against the first physical addresses stored in the tag field of the cache; and
when a first physical address matching the first matching information exists in the tag field, determining the group in which that first physical address is located as the target group.
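Claim 5's group selection can be illustrated as below. Here the first matching information is modeled simply as the target physical address itself; that is an assumption for the sketch — the claim only requires that it indicate a characteristic of the target data block.

```python
def find_target_group(groups, first_matching_info):
    """groups: group id -> list of (first_physical_address, data_block)
    cache lines, i.e. the tag-field contents of each group."""
    for group_id, cache_lines in groups.items():
        for first_phys_addr, _block in cache_lines:
            if first_phys_addr == first_matching_info:  # tag-field match
                return group_id
    return None  # no tag matches: the access misses every group
```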
6. A data reading apparatus for use with a cache, the apparatus comprising:
the first acquisition module is used for receiving a memory access request sent by the processor and acquiring a memory access address carried in the memory access request;
a determining module, configured to determine, from the groups of the cache, a target group corresponding to the access address according to the access address and the first physical addresses stored in a tag field of the cache; wherein the cache comprises a data field and the tag field, the data field is used for storing data blocks, each data block and its one-to-one corresponding first physical address form a cache line, at least two cache lines form a group, and the number of groups is at least two;
a second acquisition module, configured to acquire the first physical address of each cache line in the target group from the tag field, and determine the first physical addresses of the cache lines in the target group as first reference addresses;
The hash mapping module is used for carrying out hash mapping on the first reference address and the target physical address corresponding to the access address by using a first target hash function to obtain a first mapping parameter corresponding to the first reference address and a second mapping parameter corresponding to the target physical address;
the determining module is further configured to determine whether any two of the first mapping parameters are the same;
the hash mapping module is further configured to, in a case where at least two of the first mapping parameters are the same, adjust the first target hash function so that the bit width of the mapping parameter obtained by hash-mapping an address to be processed with the first target hash function is increased by 1 bit, perform hash mapping on the first reference addresses and the target physical address again by using the adjusted first target hash function to obtain the first mapping parameters and the second mapping parameter, and perform the step of determining whether the first mapping parameters are the same again; wherein the address to be processed comprises the first reference addresses and the target physical address; and
The third obtaining module is used for obtaining a target data block corresponding to the target physical address according to the first mapping parameter and the second mapping parameter under the condition that the first mapping parameters are different from each other; the first physical address of the target data block is the same as the target physical address.
7. The apparatus of claim 6, wherein the third acquisition module comprises:
a first matching sub-module, configured to match the second mapping parameter against the first mapping parameters;
a first determining sub-module, configured to, in a case where a first target mapping parameter matching the second mapping parameter exists among the first mapping parameters, determine a second reference address corresponding to the first target mapping parameter;
a first acquisition sub-module, configured to acquire, from the data field, a data block corresponding to the second reference address to obtain a reference data block; and
a second determining sub-module, configured to determine the target data block from the reference data block according to the second reference address and the target physical address.
8. The apparatus of claim 7, wherein the second determination submodule comprises:
a determining unit, configured to determine the reference data block corresponding to a target reference address as the target data block when the target reference address matching the target physical address exists in the second reference address.
9. The apparatus of claim 6, wherein the third acquisition module comprises:
a second matching sub-module, configured to match the second mapping parameter against the first mapping parameters; and
a second acquisition sub-module, configured to acquire the target data block from a downstream node of the cache according to the target physical address in a case where no first target mapping parameter matching the second mapping parameter exists among the first mapping parameters.
10. The apparatus of claim 6, wherein the determining module comprises:
a third acquisition sub-module, configured to acquire first matching information according to the access address, where the first matching information is used to indicate a data characteristic of the target data block corresponding to the memory access request;
a third matching sub-module, configured to match the first matching information against the first physical addresses stored in the tag field of the cache; and
a third determining sub-module, configured to, when a first physical address matching the first matching information exists in the tag field, determine the group in which that first physical address is located as the target group.
11. An electronic device, comprising a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus; and the memory is configured to store executable instructions that cause the processor to perform the data reading method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that instructions in the readable storage medium, when executed by a processor of an electronic device, enable the processor to perform the data reading method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410711555.0A CN118276944B (en) | 2024-06-03 | 2024-06-03 | Data reading method and device, electronic equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118276944A CN118276944A (en) | 2024-07-02 |
CN118276944B true CN118276944B (en) | 2024-08-02 |
Family
ID=91644375
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410711555.0A Active CN118276944B (en) | 2024-06-03 | 2024-06-03 | Data reading method and device, electronic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118276944B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105243030A (en) * | 2015-10-26 | 2016-01-13 | 北京锐安科技有限公司 | Data caching method |
CN111602377A (en) * | 2017-12-27 | 2020-08-28 | 华为技术有限公司 | Resource adjusting method in cache, data access method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2560336B (en) * | 2017-03-07 | 2020-05-06 | Imagination Tech Ltd | Address generators for verifying integrated circuit hardware designs for cache memory |
US11604733B1 (en) * | 2021-11-01 | 2023-03-14 | Arm Limited | Limiting allocation of ways in a cache based on cache maximum associativity value |
CN117009599A (en) * | 2023-08-07 | 2023-11-07 | 中国工商银行股份有限公司 | Data retrieval method and device, processor and electronic equipment |
- 2024-06-03: CN CN202410711555.0A patent/CN118276944B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN118276944A (en) | 2024-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108496160B (en) | Adaptive value range profiling for enhanced system performance | |
US5479627A (en) | Virtual address to physical address translation cache that supports multiple page sizes | |
US9519588B2 (en) | Bounded cache searches | |
US6560690B2 (en) | System and method for employing a global bit for page sharing in a linear-addressed cache | |
US9086987B2 (en) | Detection of conflicts between transactions and page shootdowns | |
US10191853B2 (en) | Apparatus and method for maintaining address translation data within an address translation cache | |
CN107735773B (en) | Method and apparatus for cache tag compression | |
US10083126B2 (en) | Apparatus and method for avoiding conflicting entries in a storage structure | |
CN107818053B (en) | Method and apparatus for accessing a cache | |
US11403222B2 (en) | Cache structure using a logical directory | |
WO2018027839A1 (en) | Method for accessing table entry in translation lookaside buffer (tlb) and processing chip | |
US20230401161A1 (en) | Translation support for a virtual cache | |
US10606762B2 (en) | Sharing virtual and real translations in a virtual cache | |
US9086986B2 (en) | Detection of conflicts between transactions and page shootdowns | |
US8190853B2 (en) | Calculator and TLB control method | |
US6990551B2 (en) | System and method for employing a process identifier to minimize aliasing in a linear-addressed cache | |
CN117331853B (en) | Cache processing method, device, electronic equipment and medium | |
CN118276944B (en) | Data reading method and device, electronic equipment and readable storage medium | |
US20130198455A1 (en) | Cache memory garbage collector | |
CN114741338B (en) | Bypass conversion buffer, data updating method, memory management unit and chip | |
US20080282059A1 (en) | Method and apparatus for determining membership in a set of items in a computer system | |
US10977176B2 (en) | Prefetching data to reduce cache misses | |
CN117331854B (en) | Cache processing method, device, electronic equipment and medium | |
CN114996024A (en) | Memory bandwidth monitoring method, server and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||