CN111221749A - Data block writing method and device, processor chip and Cache - Google Patents


Info

Publication number
CN111221749A
CN111221749A
Authority
CN
China
Prior art keywords
cache
block
cache block
priority
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911121349.XA
Other languages
Chinese (zh)
Inventor
张喆鹏
李佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Semiconductor Technology Co Ltd
Original Assignee
New H3C Semiconductor Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Semiconductor Technology Co Ltd filed Critical New H3C Semiconductor Technology Co Ltd
Priority to CN201911121349.XA priority Critical patent/CN111221749A/en
Publication of CN111221749A publication Critical patent/CN111221749A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application provides a data block writing method and apparatus, a processor chip, and a Cache. The data block writing method includes: reading a data block from the memory and then acquiring the priority of each Cache block, where the priority of each Cache block is determined according to the monitored access frequency of that Cache block and is positively correlated with it; and writing the data block into a Cache block of the Cache according to the priority of each Cache block. With this method, a data block to be written can be written into the Cache even when no idle Cache block exists, without affecting the efficiency with which the processor reads data.

Description

Data block writing method and device, processor chip and Cache
Technical Field
The present application relates to the field of computers, and in particular, to a data block writing method and apparatus, a processor chip, and a Cache.
Background
A cache memory is also referred to as a Cache. Since data blocks can be read from the Cache significantly faster than from the memory, a Cache is usually placed between the processor and the memory in a processor chip to speed up the processor's access to the memory.
The Cache comprises a plurality of Cache blocks, and the storage capacity of each Cache block is the same.
The memory stores data in the form of lines, so the memory includes a plurality of data lines, each of which is the same length. A plurality of consecutive data lines form a data block. The length of the data block is the same as the storage capacity of the Cache block.
When the processor needs to read a data line, it first accesses the Cache. If the data line is not stored in any Cache block, the Cache reads the data block in which the line is located from the memory, writes the block into a Cache block, and returns the requested line to the processor. When the processor reads the same data line again, the line is already cached, so the Cache block can return it to the processor directly. Because reading a data line directly from the Cache is far faster than having the Cache fetch the containing data block from the memory and then return it, this greatly speeds up repeated reads of the same data line.
However, after the Cache reads a data block from the memory, how to write that data block into the Cache without reducing the rate at which the processor reads the data lines it needs is an important problem.
Disclosure of Invention
In view of this, the present application provides a data block writing method and apparatus, a processor chip, and a Cache, which allow a data block to be written into the Cache when no idle Cache block exists, without affecting the efficiency with which the processor reads data.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the present application, a data block writing method is provided, the method is applied to a Cache memory of a processor chip, the Cache memory includes a plurality of Cache blocks, and the method includes:
after reading a data block from the memory, acquiring the priority of each Cache block, where the priority of each Cache block is determined according to the monitored access frequency of that Cache block and is positively correlated with it;
and writing the data block into a Cache block in the Cache according to the priority of each Cache block.
Optionally, writing the data block into a Cache block in the Cache according to the priority of each Cache block includes:
selecting, from the Cache blocks, at least one Cache block whose priority satisfies a caching condition;
and writing the data block into one of the at least one Cache block.
Optionally, the priority of each Cache block is determined in the following manner:
periodically determining the access frequency of each Cache block based on the monitored access parameters of each Cache block, and setting the priority of each Cache block based on its access frequency;
and/or,
when it is monitored that the access parameter of any Cache block has changed, determining the access frequency of that Cache block based on the changed access parameter, setting its priority based on that access frequency, and maintaining the priority of each Cache block whose access parameter has not changed.
Optionally, the access parameters of the Cache block include: the access frequency of the Cache block and/or the duration for which the Cache block has not been accessed.
Optionally, before the acquiring of the priority of each Cache block, the method further includes:
determining that no idle Cache block exists in the Cache.
According to a second aspect of the present application, there is provided a data block writing apparatus, the apparatus being disposed in a Cache of a processor chip, the Cache further including a plurality of Cache blocks, and the apparatus including:
the acquisition unit is used for acquiring the priority of each Cache block after the data block is read from the memory; the priority of each Cache block is determined according to the monitored access frequency of the Cache block; the priority of each Cache block is positively correlated with the access frequency of the Cache block;
and the writing unit is configured to write the data block into a Cache block in the Cache according to the priority of each Cache block.
Optionally, the writing unit is specifically configured to, when writing the data block into a Cache block in the Cache according to the priority of each Cache block, select from the Cache blocks at least one Cache block whose priority satisfies the caching condition, and write the data block into one of the at least one Cache block.
Optionally, the apparatus further comprises:
the setting unit is configured to periodically determine the access frequency of each Cache block based on the monitored access parameters of each Cache block and set the priority of each Cache block based on its access frequency; and/or, when it is monitored that the access parameter of any Cache block has changed, determine the access frequency of that Cache block based on the changed access parameter, set its priority based on that access frequency, and maintain the priority of each Cache block whose access parameter has not changed.
Optionally, the apparatus further comprises: a determining unit configured to determine, before the priority of each Cache block is acquired, that no idle Cache block exists in the Cache.
Optionally, the access parameters of the Cache block include: the access frequency of the Cache block and/or the duration for which the Cache block has not been accessed.
According to a third aspect of the present application, a processor chip is provided. The processor chip includes a processor and a Cache, and the Cache is configured to implement the data block writing method described above.
According to a fourth aspect of the present application, a Cache is provided, the Cache being configured to implement the data block writing method described above.
As can be seen from the above description, the Cache may dynamically monitor the access frequency of each Cache block and set each block's priority based on that frequency. Since the priority of each Cache block is positively correlated with its access frequency, a data block read from the memory can be written into a Cache block that the processor accesses infrequently while the data in frequently accessed Cache blocks is retained, so the efficiency with which the processor accesses the data blocks it needs frequently is not affected.
Drawings
FIG. 1a is a block diagram of an electronic device shown in an exemplary embodiment of the present application;
FIG. 1b is a schematic diagram of a Cache and memory shown in an exemplary embodiment of the present application;
FIG. 2 is a flow chart illustrating a method for writing a block of data according to an exemplary embodiment of the present application;
fig. 3 is a flowchart illustrating a method for setting a priority of a Cache block according to an exemplary embodiment of the present application;
fig. 4 is a block diagram of a data block writing apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
Referring to fig. 1a, fig. 1a is a block diagram of an electronic device according to an exemplary embodiment of the present application.
The electronic device includes: a processor chip and a memory.
The processor chip includes: processor, Cache.
The processor may interact with the Cache via a data bus, and the Cache may interact with the memory.
As shown in fig. 1b, the Cache includes a plurality of Cache blocks, and the storage capacity of each Cache block is the same.
The memory stores data in the form of lines, so the memory includes a plurality of data lines, each of which is the same length. A plurality of consecutive data lines form a data block. For example, as shown in FIG. 1b, 4 rows of data in the memory of FIG. 1b constitute one block of data.
The length of each data block in the memory is the same as the storage capacity of a Cache block in the Cache.
In the memory, the storage address of a data line is generally composed of a plurality of bits. These bits include high bits, low bits, and invalid bits.
For example, if the memory address of a data line is 32 bits, then, counting from right to left, the 1st to 2nd bits of the memory address are invalid bits, the 3rd to 4th bits are low bits, and the 5th to 32nd bits are high bits.
Generally, all data lines in one data block of the memory share the same high bits. The storage address of the data block in which a data line is located is therefore usually composed of the high bits, low bits that are all zeros, and the invalid bits.
For convenience of description, the 5th to 32nd bits are referred to as high bits 1. Assuming the low bits of a data line (i.e., the 4th and 3rd bits) are 10, the storage address of that data line is high bits 1 + 10 + invalid bits, and the storage address of the data block in which the line is located is high bits 1 + 00 + invalid bits.
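The address arithmetic above can be sketched as follows; the mask value and example addresses are illustrative assumptions based on the 32-bit layout described, not values from the patent.

```python
# Hypothetical sketch: derive a data block's storage address from a data
# line's address by zeroing the low bits. Per the example layout, counting
# from the right (1-based): bits 1-2 are invalid bits, bits 3-4 are low
# bits, bits 5-32 are high bits. In 0-based terms the low bits are bits 2-3.
LOW_BITS_MASK = 0b1100  # the two low bits (0-based bits 2 and 3)

def block_address(line_address: int) -> int:
    """Zero the low bits to get the address of the enclosing data block."""
    return line_address & ~LOW_BITS_MASK

# Illustrative line address: high bits ..., low bits = 10, invalid bits = 00.
line_addr = 0b10101100
print(bin(block_address(line_addr)))  # low bits cleared -> 0b10100000
```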
When the processor needs to read the target data line, the processor may send a read request to the Cache, where the read request carries a storage address (here, denoted as storage address 1) of the target data line.
After receiving the read request sent by the processor, the Cache searches each Cache block for the target data line corresponding to storage address 1. If the target data line is not stored in any Cache block, the Cache sets the low bits of storage address 1 to zero to obtain the storage address (denoted here as storage address 2) of the data block in which the target data line is located.
The Cache may then send a read request to memory, the read request carrying memory address 2. After receiving a read request sent by the Cache, the memory can read the data block where the target data line is located according to the storage address 2, and return the data block to the Cache.
When the Cache receives the data block, the data block can be written into a Cache block of the Cache, and a target data line carried in the data block is returned to the processor.
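The read flow just described — look up the line, fetch and cache the containing block on a miss, then return the requested line — can be sketched roughly as follows. The class and names are illustrative assumptions, not from the patent; for simplicity, addresses are line indices and a block holds 4 lines.

```python
# Minimal sketch of the miss-handling flow: on a miss, fetch the whole
# data block from memory, write it into the Cache, and return the line.
class CacheSketch:
    def __init__(self, memory, lines_per_block=4):
        self.memory = memory            # address -> data line
        self.blocks = {}                # block address -> list of lines
        self.lines_per_block = lines_per_block

    def _block_addr(self, addr):
        # Analogous to zeroing the low bits of the line's address.
        return addr - addr % self.lines_per_block

    def read_line(self, addr):
        base = self._block_addr(addr)
        if base not in self.blocks:                        # miss
            block = [self.memory[base + i]
                     for i in range(self.lines_per_block)]
            self.blocks[base] = block                      # write block in
        return self.blocks[base][addr - base]              # serve the line

memory = {a: f"line-{a}" for a in range(16)}
cache = CacheSketch(memory)
print(cache.read_line(6))   # miss: fetches the block starting at address 4
print(cache.read_line(7))   # hit: served directly from the Cache
```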
How to write the data block into a Cache block of the Cache thus becomes a problem to be solved urgently, especially when no idle Cache block exists in the Cache.
In a conventional writing method, when a new data block is written into a Cache, a target Cache block is selected from the Cache by using a preset rule, and then the new data block is written into the target Cache block to cover a data block currently stored in the target Cache block.
For example, the preset rule may be a FIFO (First In First Out) rule: among the Cache blocks, the one holding the data block that was written into the Cache earliest is selected as the target Cache block. As another example, if the preset rule is a Random rule, one of the Cache blocks is selected at random as the target Cache block.
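As a rough illustration of these two conventional replacement rules (the function names and data structures are assumptions, not from the patent):

```python
import random
from collections import OrderedDict

# blocks maps Cache block id -> stored data block, in write order.
def fifo_victim(blocks: OrderedDict) -> int:
    """FIFO rule: evict the block whose data was written earliest."""
    return next(iter(blocks))

def random_victim(blocks) -> int:
    """Random rule: evict any block, chosen uniformly at random."""
    return random.choice(list(blocks))

blocks = OrderedDict([(1, "blk-a"), (2, "blk-b"), (3, "blk-c")])
print(fifo_victim(blocks))  # 1 — the block written first
```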
However, if the data block currently stored in the target Cache block is one the processor accesses frequently, then under this approach, when the processor reads a data line in that block again, the block has already been replaced by the new data block, so the Cache cannot return the line directly. It must re-read the data block in which the line is located from the memory and only then return the line to the processor, which seriously reduces the rate at which the processor reads the data lines it needs.
In view of this, the present application provides a data block writing method, which obtains the priority of each Cache block after reading a data block from a memory; the priority of each Cache block is determined according to the monitored access frequency of the Cache block; the priority of each Cache block is positively correlated with the access frequency of the Cache block. The Cache can write the data block into the Cache block of the Cache according to the priority of each Cache block.
On the one hand, the priority of a Cache block reflects how frequently the processor accesses it. The Cache can write the data block to be written into a Cache block the processor accesses infrequently while retaining the data in Cache blocks the processor accesses frequently, so the efficiency with which the processor accesses the data blocks it needs frequently is not affected.
On the other hand, the Cache can update the priority of each Cache block periodically based on its access parameters, or update priorities whenever it monitors that the access parameter of any Cache block has changed. The priority of each Cache block therefore adapts to how the processor accesses the block at different stages, so the selected Cache block is the one with the currently lowest access frequency, which improves the accuracy of selecting the least-accessed Cache block.
The following describes the data block writing method provided in the present application in detail.
Referring to fig. 2, fig. 2 is a flowchart illustrating a data block writing method according to an exemplary embodiment of the present application, where the method is applicable to a Cache in a processor chip, and the Cache further includes a plurality of Cache blocks. The method may include the steps shown below.
Step 201: and after reading the data blocks from the memory of the processor chip, acquiring the priority of each Cache block. The priority of each Cache block is determined according to the monitored access frequency of the Cache block; the priority of each Cache block is positively correlated with the access frequency of the Cache block.
In the first approach, after the Cache reads a data block from the memory of the processor chip, it can directly determine the current priority of each Cache block and write the data block based on those priorities.
In the second approach, after the Cache reads a data block from the memory of the processor chip, it can detect whether an idle Cache block exists in the Cache. If no idle Cache block exists, the Cache determines the priority of each Cache block and writes the data block based on those priorities; if an idle Cache block exists, the Cache writes the data block into the idle Cache block.
During detection, the Cache can check whether each Cache block is configured with an idle mark; an idle mark configured on a Cache block indicates that the block is idle. If some Cache block is configured with an idle mark, the Cache determines that an idle Cache block exists in the Cache; if no Cache block is configured with an idle mark, the Cache determines that no idle Cache block exists.
The way of detecting "whether an idle Cache block exists in the Cache" is described here only by way of example and is not specifically limited; the Cache may also detect idle Cache blocks in other ways. For example, identifiers of idle Cache blocks may be written into an available-block table, and the Cache may check whether that table contains any Cache block to determine whether an idle Cache block exists.
Of course, the timing at which the Cache determines the priority of each Cache block is also described here only by way of example and is not specifically limited.
Step 202: and writing the data blocks into Cache blocks in the Cache according to the priority of each Cache block.
In implementation, the Cache can select, from all the Cache blocks, at least one Cache block whose priority satisfies the caching condition.
The Cache can then choose one of the selected Cache blocks and write the data block into it.
The caching condition can be preset.
For example, if the Cache condition is the Cache block with the lowest priority, the Cache may select the Cache block with the lowest priority from the Cache blocks, and write the data block into the Cache block with the lowest priority.
For example, suppose that the Cache includes 64 Cache blocks, namely Cache block 1, Cache block 2, …, Cache block 64, and that Cache block 2 has the lowest priority. After reading a data block from the memory, the Cache may write the data block into Cache block 2.
For another example, if the Cache condition is any one of the at least one Cache block with the priority lower than the preset priority threshold, the Cache may select at least one Cache block with the priority lower than the preset priority threshold from among the Cache blocks. Then, the Cache can randomly select one Cache block from the selected at least one Cache block or according to a preset algorithm, and then write the data block into the selected one Cache block.
For example, suppose that the Cache includes 64 Cache blocks, namely, Cache block 1, Cache block 2, …, and Cache block 64.
Assume that the Cache blocks with the priority lower than the preset priority threshold are Cache block 2 and Cache block 3.
After reading the data block from the memory, the Cache can determine the priority of each Cache block and select the Cache blocks whose priority is lower than the preset priority threshold (namely, Cache block 2 and Cache block 3). The Cache may then select one Cache block from Cache block 2 and Cache block 3; assume the selected block is Cache block 2. The Cache may then write the data block into Cache block 2.
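The two caching conditions illustrated above might be sketched as follows, assuming each block's current priority is already known (all names and values are illustrative assumptions, not from the patent):

```python
import random

def victim_lowest(priorities: dict) -> int:
    """Caching condition 1: select the single block with the lowest priority."""
    return min(priorities, key=priorities.get)

def victim_below_threshold(priorities: dict, threshold: int) -> int:
    """Caching condition 2: select any block whose priority is below the threshold."""
    candidates = [b for b, p in priorities.items() if p < threshold]
    return random.choice(candidates)

# 64 blocks; blocks 2 and 3 have the lowest priorities, as in the example.
priorities = {b: 10 for b in range(1, 65)}
priorities[2], priorities[3] = 1, 2

print(victim_lowest(priorities))                        # 2
print(victim_below_threshold(priorities, 5) in (2, 3))  # True
```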
The method for setting the priority of a Cache block is described below.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for setting a priority of a Cache block according to an exemplary embodiment of the present application, where the method is applicable to a Cache and may include the following steps.
Step 301: the Cache monitors the access parameters of each Cache block.
The access parameters may be characterized by multiple types of parameters. For example, the access parameters may include: the access frequency of the Cache block and/or the duration for which the Cache block has not been accessed.
The access frequency of a Cache block refers to the number of times the Cache block is accessed per unit time.
The duration for which a Cache block has not been accessed refers to the time elapsed since the Cache block was last accessed.
The access parameter is only exemplified here, and in practical applications, the access parameter may also include other types of parameters, which are not specifically limited here.
Optionally, when monitoring the access parameters of each Cache block, the Cache records the processor's accesses to each Cache block. An access record includes: the identifier of the Cache block accessed by the processor and the time of the access.
The access record is shown in table 1. Of course, in practical applications, other contents may also be included in the access record, which is only exemplary and not specifically limited herein.
TABLE 1 (rendered as images in the original publication; the table lists example access records, with columns for the Cache block identifier and the access time)
When the Cache monitors that the processor accesses a certain Cache block, it can generate an access record for that block. The Cache can then periodically count the access records of each Cache block to obtain each block's access parameters. For example, assume the access parameter is the access frequency and the unit time is 1 second: for each Cache block, the Cache can count, every second, the number of access records generated for that block during that second and use the count as the block's current access frequency.
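A minimal sketch of this record-counting step, with assumed record contents (the record format and sample data are illustrative, not from the patent):

```python
from collections import Counter

# Each access record is (cache_block_id, access_time_in_seconds).
records = [
    (2, 0.1), (2, 0.4), (5, 0.6), (2, 0.9),   # accesses during second 0
    (5, 1.2), (5, 1.8),                        # accesses during second 1
]

def access_frequency(records, second):
    """Count accesses per Cache block during the given one-second window."""
    window = [blk for blk, t in records if second <= t < second + 1]
    return Counter(window)

print(access_frequency(records, 0))  # Counter({2: 3, 5: 1})
```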
Of course, the way of monitoring the access parameters of each Cache block is described here only by way of example and is not specifically limited. In practical applications, the access parameters may be monitored in other ways. For example, a counter may be configured for each Cache block, and access parameters such as the number of times the block is accessed by the processor per unit time may be counted by the configured counter.
Step 302: and the Cache determines the access frequency of each Cache block based on the monitored access parameters of each Cache block, and sets the priority of each Cache block according to the access frequency of each Cache block. And the priority of each Cache block is positively correlated with the access frequency of each Cache block.
In an optional implementation, the Cache may periodically determine the access frequency of each Cache block based on the monitored access parameters and set each block's priority according to its access frequency. For example, assume the period is 5 seconds. Based on the access parameters of a certain Cache block in the 1st second, the Cache determines the block's access frequency; if the frequency is low, a lower priority can be set for the block. Then, at the 6th second, if the Cache determines from the block's 6th-second access parameters that its access frequency is now higher, a higher priority can be set for the block, and so on.
In another optional implementation manner, when monitoring that the access parameter of any Cache block changes, the Cache determines the access frequency of the Cache block based on the changed access parameter of the Cache block, sets the priority of the Cache block based on the access frequency of the Cache block, and maintains the priority of the Cache block of which the access parameter does not change.
For example, assume the Cache counts the access records of each Cache block every unit time (e.g., every second) and records the resulting access parameters. When the Cache finds that the access parameter of Cache block 1 in the current second differs from the recorded parameter of the previous second, it determines that the access parameter of Cache block 1 has changed. The Cache can then re-determine Cache block 1's access frequency based on the changed parameter and set Cache block 1's priority accordingly, while maintaining the priorities of the Cache blocks whose access parameters have not changed.
Of course, in the present application, other trigger timings may also be used to trigger "the Cache sets the priority of each Cache block based on the monitored access parameters of each Cache block," where the trigger timing is only exemplarily described and is not specifically limited.
In the embodiment of the application, when the access frequency of each Cache block is determined based on the access parameters of each Cache block, the access frequency can be determined in the following manner.
When the access parameters of a Cache block include only one type of parameter (for example, only the access frequency, or only the duration for which the block has not been accessed), that parameter can be converted into the block's access frequency, for example through a preset conversion algorithm. When the access parameter is the access frequency, it can be used directly: a high access frequency indicates that the data in the block is frequently used, and since priority is positively correlated with access frequency, the block receives a high priority, which reduces the possibility that its data is replaced in the short term. When the access parameter is the duration for which the block has not been accessed, the data in the block has not been used by the processor for some time, indicating that the block is accessed less, i.e., its access frequency is low; access frequency is thus negatively correlated with this duration. The longer a Cache block has gone unaccessed, the lower its access frequency and the lower the priority set for it; conversely, the shorter the unaccessed duration, the higher the access frequency and the higher the priority.
When the access parameters of a Cache block include multiple types of parameters (for example, both the access frequency of the block and the duration for which it has not been accessed), the block's overall access frequency can be obtained from these parameters through a preset algorithm, for example by a weighted summation of the parameters.
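A possible weighted-summation sketch follows; the weights and the sign convention for the idle duration are assumptions, chosen so that a long idle time lowers the score, consistent with the negative correlation described above:

```python
def access_score(hits_per_sec: float, idle_seconds: float,
                 w_hits: float = 1.0, w_idle: float = 0.5) -> float:
    """Combine two access parameters into one access-frequency score.

    The idle duration is negatively correlated with access frequency,
    so it enters the weighted sum with a negative sign (an assumption,
    not a formula from the patent).
    """
    return w_hits * hits_per_sec - w_idle * idle_seconds

print(access_score(hits_per_sec=8, idle_seconds=2))  # 8.0 - 1.0 = 7.0
```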
The above is only an exemplary description of "determining the access frequency of each Cache block based on its access parameters" and is not a specific limitation.
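As an illustration of the weighted-summation idea above, the following sketch combines the two example access parameters into a single access-frequency score. The function name, weights, and sign convention are assumptions for illustration, not the patent's concrete conversion algorithm.

```python
# Hypothetical sketch of deriving an access-frequency score from a Cache
# block's access parameters: the score rises with the access count and
# falls as the un-accessed duration grows (the negative correlation the
# text describes). Weights and names are illustrative assumptions.

def access_frequency(access_count, idle_time, w_count=1.0, w_idle=0.5):
    """Weighted combination of the two example access parameters."""
    return w_count * access_count - w_idle * idle_time
```

Under this convention, a frequently hit block (high `access_count`, low `idle_time`) scores higher than a long-idle one, matching the correlations described in the text.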
In the embodiment of the application, when the priority of each Cache block is set based on the access frequency of the Cache block, the Cache can convert the access frequency into the priority through a preset priority algorithm.
Of course, a conversion relation table of access frequency and priority may also be preconfigured, and the Cache may determine the priority corresponding to the access frequency of the Cache block based on a table lookup manner.
Here, "setting the priority of each Cache block based on the access frequency of the Cache block" is merely exemplified and not specifically limited, as long as the priority that is set is positively correlated with the access frequency calculated from the access parameters of the Cache block; that is, the more frequently the processor accesses a Cache block, the higher the priority of that Cache block.
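The table-lookup variant mentioned above could be sketched as follows. The thresholds and priority levels are assumptions; the only property the text requires is that the mapping never decreases as the access frequency rises (positive correlation).

```python
import bisect

# Illustrative preconfigured conversion table from access-frequency score
# to priority (thresholds and levels are assumptions, not the patent's).
FREQ_THRESHOLDS = [0, 10, 100]   # ascending score thresholds
PRIORITIES = [0, 1, 2]           # priority assigned at each threshold

def priority_of(freq_score):
    # Table lookup: the highest threshold not exceeding the score
    # determines the priority, so priority never falls as the score rises.
    idx = bisect.bisect_right(FREQ_THRESHOLDS, freq_score) - 1
    return PRIORITIES[max(idx, 0)]
```

A preset priority formula could replace the table without changing the rest of the scheme, since only the monotone relationship matters.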
As can be seen from the above description, on the one hand, the Cache can dynamically monitor the access parameters of each Cache block, determine the access frequency of each Cache block based on the monitored parameters, and set the priority of each Cache block accordingly. Data to be written is then placed in a Cache block that the processor accesses infrequently, while Cache blocks that the processor accesses frequently are preserved, so that the efficiency with which the processor accesses the data it needs frequently is not affected.
On the other hand, the Cache can update the priority of each Cache block periodically based on its access parameters, or update the priority of a Cache block whenever its access parameters are monitored to change, so that the priority of each Cache block remains adapted to how the processor accesses it at different stages. The selected Cache block is then the one with the currently lowest access frequency, which improves the accuracy of that selection.
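A minimal sketch of the two update strategies just described: timer-driven refresh of every block's priority, versus event-driven refresh of only the block whose parameters changed. All class, attribute, and method names are assumptions for illustration.

```python
class CacheBlockState:
    """Per-block access parameters and the priority derived from them."""
    def __init__(self):
        self.access_count = 0
        self.idle_time = 0
        self.priority = 0

class PriorityTracker:
    def __init__(self, num_blocks):
        self.blocks = [CacheBlockState() for _ in range(num_blocks)]

    def _recompute(self, blk):
        # Priority correlates positively with hits, negatively with idle time.
        blk.priority = blk.access_count - blk.idle_time

    def periodic_update(self):
        # Strategy 1: on a timer, recompute every block's priority.
        for blk in self.blocks:
            self._recompute(blk)

    def on_access(self, index):
        # Strategy 2: when one block's parameters change, recompute only
        # that block; the other blocks keep their current priority.
        blk = self.blocks[index]
        blk.access_count += 1
        blk.idle_time = 0
        self._recompute(blk)
```

Either strategy (or both combined, as the text's "and/or" allows) keeps the priorities consistent with the processor's current access pattern.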
The application also provides a data block writing device corresponding to the data block writing method.
Referring to fig. 4, fig. 4 is a block diagram illustrating a data block writing apparatus according to an exemplary embodiment of the present application. The apparatus is applied to a Cache memory (Cache) of a processor chip; the Cache comprises a plurality of Cache blocks. The apparatus comprises the following units.
An obtaining unit 401, configured to obtain a priority of each Cache block after reading a data block from a memory; the priority of each Cache block is determined according to the monitored access frequency of the Cache block; the priority of each Cache block is positively correlated with the access frequency of the Cache block;
and a writing unit 402, configured to write the data block into a Cache block of the Cache according to the priority of each Cache block.
Optionally, when writing the data block into a Cache block of the Cache according to the priority of each Cache block, the writing unit 402 is specifically configured to select, from the Cache blocks, at least one Cache block whose priority satisfies the caching condition, and to write the data block into one of the at least one Cache block.
Optionally, the apparatus further comprises:
a setting unit 403 (not shown in fig. 4), configured to periodically determine the access frequency of each Cache block based on the monitored access parameters of each Cache block and set the priority of each Cache block based on its access frequency; and/or, when the access parameters of any Cache block are monitored to change, to determine the access frequency of that Cache block based on the changed access parameters, set its priority based on that access frequency, and maintain the priority of the Cache blocks whose access parameters have not changed.
Optionally, the apparatus further comprises: a determining unit 404 (not shown in fig. 4) configured to determine that there is no idle Cache block in the caches before the obtaining of the priority of each Cache block.
Optionally, the access parameters of the Cache block include: the access frequency of the Cache block and/or the time length of the Cache block which is not accessed.
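The write path implemented by the units above might be sketched as follows, under the assumption that "the priority satisfies the caching condition" means the lowest-priority (least frequently accessed) block is chosen when no idle block exists. Function and parameter names are illustrative, not the patent's.

```python
def write_data_block(blocks, priorities, is_idle, data):
    """Write `data` into the Cache: prefer an idle block; otherwise
    evict the block with the lowest priority. Returns the index used."""
    # Step 1 (cf. the determining unit): check for an idle Cache block.
    for i, idle in enumerate(is_idle):
        if idle:
            blocks[i] = data
            is_idle[i] = False
            return i
    # Step 2 (cf. the obtaining/writing units): no idle block, so select
    # the Cache block whose priority is lowest and write into it.
    victim = min(range(len(blocks)), key=lambda i: priorities[i])
    blocks[victim] = data
    return victim
```

Because priorities track access frequency, the evicted block is the one the processor currently uses least, which is the behavior the description attributes to the apparatus.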
In addition, the application also provides a processor chip, whose structure may be as shown in fig. 1.
The processor chip comprises a processor and a Cache, and the Cache is configured to implement the data block writing method described above.
In addition, the application also provides a Cache, whose structure is shown in fig. 2. The Cache is configured to implement the data block writing method described above.
The Cache can dynamically monitor the access parameters of each Cache block and set the priority of each Cache block based on the monitored access parameters.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A data block writing method is applied to a Cache memory of a processor chip, wherein the Cache memory comprises a plurality of Cache blocks, and the method comprises the following steps:
after reading the data blocks from the memory, acquiring the priority of each Cache block; the priority of each Cache block is determined according to the monitored access frequency of the Cache block, and the priority of each Cache block is positively correlated with the access frequency of the Cache block;
and writing the data block into a Cache block of the Cache according to the priority of each Cache block.
2. The method according to claim 1, wherein writing the data block into a Cache block of the Cache according to the priority of each Cache block comprises:
selecting at least one Cache block with the priority meeting the Cache condition from the Cache blocks;
and writing the data block into one Cache block in the at least one Cache block.
3. The method of claim 1, wherein the priority of each Cache block is determined by:
periodically determining the access frequency of each Cache block based on the monitored access parameters of each Cache block, and setting the priority of each Cache block based on the access frequency of each Cache block;
and/or,
when the access parameter of any Cache block is monitored to be changed, determining the access frequency of the Cache block based on the changed access parameter of the Cache block, setting the priority of the Cache block based on the access frequency of the Cache block, and maintaining the priority of the Cache block of which the access parameter is not changed.
4. The method of claim 3, wherein the access parameters of the Cache block comprise: the access frequency of the Cache block and/or the time length of the Cache block which is not accessed.
5. The method according to any one of claims 1 to 4, wherein before the obtaining the priority of each Cache block, the method further comprises:
and determining that no idle Cache block exists in the Cache.
6. A data block writing device, wherein the device is arranged in a Cache memory (Cache) of a processor chip, the Cache comprises a plurality of Cache blocks, and the device comprises:
the acquisition unit is used for acquiring the priority of each Cache block after the data block is read from the memory; the priority of each Cache block is determined according to the monitored access frequency of the Cache block; the priority of each Cache block is positively correlated with the access frequency of the Cache block;
and the writing unit is used for writing the data block into a Cache block of the Cache according to the priority of each Cache block.
7. The apparatus according to claim 6, wherein, when writing the data block into a Cache block of the Cache according to the priority of each Cache block, the writing unit is specifically configured to select, from the Cache blocks, at least one Cache block whose priority satisfies a caching condition, and to write the data block into one of the at least one Cache block.
8. The apparatus of claim 6, further comprising:
the setting unit is used for periodically determining the access frequency of each Cache block based on the monitored access parameters of each Cache block and setting the priority of each Cache block based on the access frequency of each Cache block; and/or when monitoring that the access parameter of any Cache block changes, determining the access frequency of the Cache block based on the changed access parameter of the Cache block, setting the priority of the Cache block based on the access frequency of the Cache block, and maintaining the priority of the Cache block of which the access parameter does not change.
9. A processor chip, characterized in that it comprises a processor and a Cache memory (Cache), the Cache being configured to implement the method steps of any one of claims 1-5.
10. A Cache memory (Cache), characterized in that the Cache is configured to implement the method steps of any one of claims 1-5.
CN201911121349.XA 2019-11-15 2019-11-15 Data block writing method and device, processor chip and Cache Pending CN111221749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911121349.XA CN111221749A (en) 2019-11-15 2019-11-15 Data block writing method and device, processor chip and Cache


Publications (1)

Publication Number Publication Date
CN111221749A true CN111221749A (en) 2020-06-02

Family

ID=70827672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911121349.XA Pending CN111221749A (en) 2019-11-15 2019-11-15 Data block writing method and device, processor chip and Cache

Country Status (1)

Country Link
CN (1) CN111221749A (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050188158A1 (en) * 2004-02-25 2005-08-25 Schubert Richard P. Cache memory with improved replacement policy
US20080098177A1 (en) * 2005-02-10 2008-04-24 Guthrie Guy L Data Processing System and Method for Efficient L3 Cache Directory Management
CN101184021A (en) * 2007-12-14 2008-05-21 华为技术有限公司 Method, equipment and system for implementing stream media caching replacement
CN101576856A (en) * 2009-06-18 2009-11-11 浪潮电子信息产业股份有限公司 Buffer data replacement method based on access frequency within long and short cycle
CN103150266A (en) * 2013-02-20 2013-06-12 北京工业大学 Improved multi-core shared cache replacing method
CN105094686A (en) * 2014-05-09 2015-11-25 华为技术有限公司 Data caching method, cache and computer system
CN106155936A (en) * 2015-04-01 2016-11-23 华为技术有限公司 A kind of buffer replacing method and relevant apparatus
CN106528454A (en) * 2016-11-04 2017-03-22 中国人民解放军国防科学技术大学 Memory system cache mechanism based on flash memory
CN106569959A (en) * 2016-10-28 2017-04-19 郑州云海信息技术有限公司 Cache replacing method and system based on SSD
WO2017117734A1 (en) * 2016-01-06 2017-07-13 华为技术有限公司 Cache management method, cache controller and computer system
CN107451071A (en) * 2017-08-04 2017-12-08 郑州云海信息技术有限公司 A kind of caching replacement method and system
CN108021514A (en) * 2016-10-28 2018-05-11 华为技术有限公司 It is a kind of to cache the method and apparatus replaced


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHOU, Heqin et al.: "Principles and Interface Technology of Microcomputers", 31 March 2013, University of Science and Technology of China Press *
ZHAO, Weihua et al.: "Computer Operating Systems", 30 September 2018, Xidian University Press *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115373610A (en) * 2022-10-25 2022-11-22 北京智芯微电子科技有限公司 Data writing method and device, electronic equipment and storage medium
CN115373610B (en) * 2022-10-25 2023-08-18 北京智芯微电子科技有限公司 Data writing method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200602