CN112259141B - Refreshing method of dynamic random access memory, memory controller and electronic device - Google Patents


Publication number
CN112259141B
CN112259141B (application number CN202011162195.1A)
Authority
CN
China
Prior art keywords: refresh, state, request, address, priority
Legal status: Active
Application number
CN202011162195.1A
Other languages
Chinese (zh)
Other versions
CN112259141A (en)
Inventor
谭龙生 (Tan Longsheng)
吴峰 (Wu Feng)
曾峰 (Zeng Feng)
Current Assignee
Haiguang Information Technology Co Ltd
Original Assignee
Haiguang Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Haiguang Information Technology Co Ltd
Priority to CN202011162195.1A
Publication of CN112259141A
Application granted
Publication of CN112259141B

Classifications

    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C 11/00: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/21: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements
    • G11C 11/34: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices
    • G11C 11/40: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors
    • G11C 11/401: Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements using semiconductor devices using transistors forming cells needing refreshing or charge regeneration, i.e. dynamic cells
    • G11C 11/406: Management or control of the refreshing or charge-regeneration cycles
    • G11C 11/40603: Arbitration, priority and concurrent access to memory cells for read/write or refresh operations
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Hardware Design (AREA)
  • Dram (AREA)

Abstract

A refresh method for a dynamic random access memory, a memory controller, and an electronic device are provided. The dynamic random access memory includes a plurality of storage queues, each storage queue includes a plurality of block groups, and each block group includes a plurality of blocks. The method includes the following steps: determining the states of a plurality of state machines corresponding to the plurality of storage queues, the storage queues being in one-to-one correspondence with the state machines; determining a plurality of predicted addresses corresponding to the plurality of storage queues; and, based on the states of the plurality of state machines and the plurality of predicted addresses, generating a refresh request and sending it to an arbiter coupled to the dynamic random access memory, such that the arbiter arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends the refresh request to the dynamic random access memory to effect a refresh of the dynamic random access memory. On the basis of ensuring that refreshes complete in time, the method preserves the continuity of read-write access as much as possible and improves bandwidth utilization.

Description

Refreshing method of dynamic random access memory, memory controller and electronic device
Technical Field
Embodiments of the present disclosure relate to a refresh method for a dynamic random access memory, a memory controller, and an electronic device.
Background
Computer systems typically employ dynamic random access memory (Dynamic Random Access Memory, DRAM) as the main memory (also simply called memory) of the system. DRAM offers high density at low cost and is therefore widely used in computer systems. DRAM is a semiconductor memory whose main principle of operation is to store data in capacitors, representing a binary bit as "0" or "1" by the amount of charge stored in each capacitor.
Disclosure of Invention
At least one embodiment of the present disclosure provides a refresh method for a dynamic random access memory, wherein the dynamic random access memory includes a plurality of storage queues, each storage queue including a plurality of block groups, each block group including a plurality of blocks, the method comprising: determining states of a plurality of state machines corresponding to the plurality of storage queues, wherein the plurality of storage queues are in one-to-one correspondence with the plurality of state machines; determining a plurality of predicted addresses corresponding to the plurality of storage queues; and, based on the states of the state machines and the predicted addresses, generating a refresh request and sending it to an arbiter coupled to the dynamic random access memory, such that the arbiter arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends the refresh request to the dynamic random access memory to implement the refresh of the dynamic random access memory.
For example, in a method provided in an embodiment of the present disclosure, determining states of a plurality of state machines corresponding to the plurality of storage queues includes: for each state machine, determining the state of the state machine according to the value of the deferred refresh counter, the self-refresh entry request, and the self-refresh exit command.
For example, in a method provided by an embodiment of the present disclosure, the state machine includes 4 states: a first priority state, a second priority state, a flush state, and a self-refresh state, the first priority state having a higher priority than the second priority state.
For example, in a method provided in an embodiment of the present disclosure, for each state machine, determining a state of the state machine according to the value of the deferred refresh counter, the self-refresh entry request, and the self-refresh exit command includes: responsive to the deferred refresh counter having a value greater than or equal to a threshold, causing the state machine to enter the first priority state; responsive to the value of the deferred refresh counter being less than the threshold, causing the state machine to enter the second priority state; responsive to the self-refresh entry request, causing the state machine to enter the flush state immediately or with a delay in accordance with a current state of the state machine; responsive to completion of an operation corresponding to the flush state, causing the state machine to enter the self-refresh state; and in response to the self-refresh exit command, enabling the state machine to enter the first priority state or the second priority state according to the value of the deferred refresh counter.
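The four states and transitions just listed can be sketched as a small per-queue state machine. The following is an illustrative Python sketch, not the patent's implementation: the names, the single `flush_done` signal, and the simplification that a self-refresh entry request in the second priority state enters the flush state immediately (the recorded-address condition is described in the next paragraph) are all assumptions.

```python
from enum import Enum, auto

class RefreshState(Enum):
    FIRST_PRIORITY = auto()   # deferred refreshes at or above threshold: refresh urgently
    SECOND_PRIORITY = auto()  # deferred refreshes below threshold: refresh opportunistically
    FLUSH = auto()            # draining pending refreshes before entering self-refresh
    SELF_REFRESH = auto()     # DRAM maintains itself; no external refresh commands

def next_state(state, deferred_count, threshold,
               self_refresh_entry=False, flush_done=False, self_refresh_exit=False):
    """One step of the per-queue refresh state machine described above (simplified)."""
    if state == RefreshState.SELF_REFRESH:
        if self_refresh_exit:
            # On exit, the state depends on the value of the deferred refresh counter.
            return (RefreshState.FIRST_PRIORITY if deferred_count >= threshold
                    else RefreshState.SECOND_PRIORITY)
        return state
    if state == RefreshState.FLUSH:
        # Enter self-refresh once the flush-state operation completes.
        return RefreshState.SELF_REFRESH if flush_done else state
    if self_refresh_entry:
        if state == RefreshState.SECOND_PRIORITY:
            return RefreshState.FLUSH  # simplification: no recorded-address check
        return state  # first priority: stay until the counter drops below threshold
    return (RefreshState.FIRST_PRIORITY if deferred_count >= threshold
            else RefreshState.SECOND_PRIORITY)
```

The delayed-entry cases (first priority state, or second priority state with recorded addresses) simply keep returning the current state until their condition clears.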
For example, in a method provided by an embodiment of the present disclosure, in response to the self-refresh entry request, causing the state machine to enter the flush state immediately or with a delay according to its current state includes: in response to the self-refresh entry request, in a case where the state machine is in the first priority state, causing the state machine to maintain the first priority state until the value of the deferred refresh counter is less than the threshold, and then enter the flush state; in response to the self-refresh entry request, in a case where the state machine is in the second priority state and no address is recorded in a refresh address recording unit, causing the state machine to enter the flush state; and in response to the self-refresh entry request, in a case where the state machine is in the second priority state and there is a recorded address in the refresh address recording unit, causing the state machine to maintain the second priority state until no address remains recorded in the refresh address recording unit, and then enter the flush state.
For example, in a method provided by an embodiment of the present disclosure, determining the plurality of predicted addresses corresponding to the plurality of store queues includes: for each storage queue, determining the prediction address based on block information and the state of a state machine corresponding to the storage queue.
For example, in a method provided in an embodiment of the present disclosure, for each storage queue, determining the predicted address based on the block information and the state of the state machine corresponding to the storage queue includes: in response to the state machine being in the first priority state and no refresh task being executed in the corresponding storage queue, determining the address of a block meeting the requirements as the predicted address according to the priority order from the first level to the Nth level; in response to the state machine being in the second priority state and no refresh task being executed in the corresponding storage queue, determining the address of a block meeting the requirements as the predicted address according to the priority order from the first level to the Mth level; in response to the state machine being in the first priority state and a refresh task being executed in the corresponding storage queue, determining that the predicted address is empty; and in response to the state machine being in the second priority state and either no block meeting the requirements or a refresh task being executed in the corresponding storage queue, determining that the predicted address is empty; wherein N > M > 1, N and M are integers, and the priorities decrease monotonically from the first level to the Nth level. The priority of each level is determined based on the block information, and the block information at least includes: whether the block is valid, whether it has been refreshed, whether there is a memory access request for it, whether it is idle, and whether timing constraints are satisfied.
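The tiered prediction above can be illustrated with a short sketch. This is a hypothetical helper, assuming each candidate block has already been assigned a priority level (1 = highest) from the block information; only the level ranges N and M come from the description above.

```python
def predict_address(banks, state_is_first_priority, refresh_in_progress,
                    n_levels, m_levels):
    """Pick the refreshable block address with the best (lowest) priority level.

    `banks` maps a block address to its priority level (1 = best), derived
    elsewhere from the block information (valid, refreshed, pending accesses,
    idle, timing). Illustrative only; not the patent's signal names.
    """
    if refresh_in_progress:
        return None  # predicted address is empty while a refresh is executing
    # First priority searches levels 1..N; second priority only levels 1..M (M < N).
    max_level = n_levels if state_is_first_priority else m_levels
    candidates = [(level, addr) for addr, level in banks.items() if level <= max_level]
    if not candidates:
        return None  # no block meets the requirements: predicted address is empty
    return min(candidates)[1]
```

In the second priority state the shallower search (levels 1 to M) means only clearly favorable blocks are predicted, which is how low-urgency refreshes avoid disturbing read-write traffic.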
For example, a method provided by an embodiment of the present disclosure further includes: and generating a blocking address based on the states of the state machines and the predicted addresses, and sending the blocking address to the arbiter so that the arbiter blocks a non-refresh command corresponding to the blocking address.
For example, in a method provided by an embodiment of the present disclosure, generating the blocking address based on the states of the plurality of state machines and the plurality of predicted addresses, and sending the blocking address to the arbiter, includes: and in response to the state machine being in the first priority state and no executing refreshing task in the corresponding storage queue, determining a predicted address corresponding to the storage queue as the blocking address, and sending the blocking address to the arbiter.
For example, in a method provided by an embodiment of the present disclosure, generating the refresh request based on states of the plurality of state machines and the plurality of predicted addresses, and sending the refresh request to the arbiter connected to the dynamic random access memory, includes: selecting a storage queue and generating the refresh request according to a priority selection rule based on states of the plurality of state machines, and transmitting the refresh request to the arbiter; the refresh request comprises a request command, a request address and a flag bit, wherein the request address is a predicted address corresponding to the selected storage queue, and the flag bit indicates that a state machine corresponding to the selected storage queue is in the first priority state or the second priority state.
For example, in the method provided in an embodiment of the present disclosure, the first priority state includes a first sub-state and a second sub-state, the first sub-state having a higher priority than the second sub-state; the first sub-state is that the value of the deferred refresh counter has reached the maximum value, and the second sub-state is that the value of the deferred refresh counter is less than the maximum value but greater than or equal to the threshold. The priority selection rule is: select a storage queue according to the priority order of the first sub-state, the second sub-state, and the second priority state; if all state machines are in the second priority state, select a storage queue whose predicted address is not empty; and if a plurality of state machines share the same priority, randomly select the storage queue corresponding to one of them.
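The selection rule above can be sketched as follows. The dict keys (`first`, `sub1`, `addr`) are an illustrative encoding invented for this sketch, not the patent's signals, and the sketch simplifies by requiring a non-empty predicted address for every candidate.

```python
import random

def select_queue(queues):
    """Select the index of the storage queue whose refresh request is issued next.

    `queues` is a list of dicts with keys:
      'first' - state machine is in the first priority state,
      'sub1'  - deferred refresh counter has reached its maximum (first sub-state),
      'addr'  - predicted address, or None if empty.
    """
    def rank(q):
        if q['first'] and q['sub1']:
            return 0  # first sub-state: most urgent
        if q['first']:
            return 1  # second sub-state: counter >= threshold but below maximum
        return 2      # second priority state
    eligible = [i for i, q in enumerate(queues) if q['addr'] is not None]
    if not eligible:
        return None
    best = min(rank(queues[i]) for i in eligible)
    # Ties at the same priority are broken randomly, as in the rule above.
    return random.choice([i for i in eligible if rank(queues[i]) == best])
```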
For example, a method provided by an embodiment of the present disclosure further includes: in response to the refresh request being generated, its flag bit indicating the first priority state, and the blocks corresponding to the request address not all being idle, generating a precharge request and sending the precharge request to the arbiter; wherein the precharge request corresponds to the blocks corresponding to the request address.
For example, in the method provided in an embodiment of the present disclosure, the arbiter is further configured to arbitrate read-write requests, row strobe requests, and precharge requests, with arbitration priority decreasing in the following order: a refresh request whose flag bit indicates the first priority state, the read-write request, the row strobe request, the precharge request, and a refresh request whose flag bit indicates the second priority state.
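The five-level ordering above reduces to a simple fixed-priority arbiter. The request-type names below are invented labels for this sketch.

```python
# Arbitration priority, highest first, matching the order listed above.
ARBITRATION_ORDER = [
    "refresh_first_priority",   # flag bit indicates the first priority state
    "read_write",
    "row_strobe",               # row activation
    "precharge",
    "refresh_second_priority",  # flag bit indicates the second priority state
]

def arbitrate(pending):
    """Return the highest-priority request type present in `pending`, or None."""
    for kind in ARBITRATION_ORDER:
        if kind in pending:
            return kind
    return None
```

Note the design choice this ordering encodes: urgent (first priority) refreshes preempt everything, while non-urgent (second priority) refreshes yield even to precharges, so they only win arbitration when the channel would otherwise be idle.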
At least one embodiment of the present disclosure further provides a memory controller for a dynamic random access memory, wherein the memory controller is configured to be connected to the dynamic random access memory and configured to control the dynamic random access memory to refresh, the dynamic random access memory comprising a plurality of memory queues, each memory queue comprising a plurality of block groups, each block group comprising a plurality of blocks; the memory controller comprises a refresh control module, wherein the refresh control module comprises a plurality of state machines, a plurality of address prediction units and a request generation unit; the plurality of state machines are in one-to-one correspondence with the plurality of storage queues, and the state machines are configured to switch among a plurality of states; the plurality of address prediction units are in one-to-one correspondence with the plurality of storage queues, and the address prediction units are configured to determine predicted addresses of the corresponding storage queues; the request generation unit is configured to generate a refresh request based on states of the plurality of state machines and the predicted address, and to send the refresh request to an arbiter connected to the dynamic random access memory.
For example, the memory controller provided in an embodiment of the present disclosure further includes the arbiter, where the refresh control module is connected to the arbiter and the arbiter is connected to the dynamic random access memory; the arbiter is configured to arbitrate the refresh request and, in response to the refresh request winning the arbitration, to send the refresh request to the dynamic random access memory to implement the refresh of the dynamic random access memory.
For example, in the memory controller provided in an embodiment of the present disclosure, the refresh control module further includes a plurality of blocking address generating units; the plurality of blocking address generating units are in one-to-one correspondence with the plurality of storage queues and are configured to generate blocking addresses based on the predicted addresses and the states of the state machines of the storage queues corresponding to the predicted addresses, and to send the blocking addresses to the arbiter; the arbiter is further configured to block non-refresh commands corresponding to the blocking addresses.
For example, in the memory controller provided in an embodiment of the present disclosure, the refresh control module further includes a refresh interval counter, a plurality of deferred refresh counters, and a plurality of refresh address recording units. The refresh interval counter is configured to count cyclically and, when the count value reaches a set value, to clear itself and generate a pulse, which it sends to the plurality of deferred refresh counters. The plurality of deferred refresh counters are in one-to-one correspondence with the plurality of storage queues, and each deferred refresh counter is configured to count the deferred refresh requests of the corresponding storage queue based on the received pulses; the state machine determines its state based on the value of the corresponding deferred refresh counter. The refresh address recording units are in one-to-one correspondence with the storage queues and are configured to record the addresses of the blocks that have been refreshed.
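The interaction between the shared refresh interval counter and the per-queue deferred refresh counters can be sketched as below. The class names, the saturation at a maximum, and the decrement on an issued refresh are illustrative assumptions, not details stated in the patent.

```python
class DeferredRefreshCounter:
    """Counts refreshes owed to one storage queue (illustrative)."""
    def __init__(self, maximum):
        self.value = 0
        self.maximum = maximum
    def on_interval_pulse(self):
        # Each pulse from the refresh interval counter means one more refresh is owed.
        self.value = min(self.value + 1, self.maximum)
    def on_refresh_issued(self):
        # A refresh request for this queue won arbitration and was executed.
        self.value = max(self.value - 1, 0)

class RefreshIntervalCounter:
    """Free-running counter that pulses every `period` ticks (e.g., one tREFI)."""
    def __init__(self, period, listeners):
        self.period, self.count, self.listeners = period, 0, listeners
    def tick(self):
        self.count += 1
        if self.count >= self.period:   # count value reached the set value
            self.count = 0              # counter clears itself
            for counter in self.listeners:
                counter.on_interval_pulse()
```

A state machine comparing its counter's `value` against the threshold (and maximum) yields exactly the first priority, second priority, and sub-state distinctions described earlier.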
At least one embodiment of the present disclosure also provides an electronic device including a memory controller according to any one of the embodiments of the present disclosure.
For example, an embodiment of the present disclosure provides an electronic device further including the dynamic random access memory.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings of the embodiments will be briefly described below, and it is apparent that the drawings in the following description relate only to some embodiments of the present disclosure, not to limit the present disclosure.
FIG. 1 is a schematic diagram of a memory controller for a DRAM according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram of a refresh control module in a memory controller for a DRAM according to some embodiments of the present disclosure;
FIG. 3 is a flow chart of a refresh method for a DRAM according to some embodiments of the present disclosure;
FIG. 4 is a schematic diagram of a state machine used in a method of refreshing a DRAM according to some embodiments of the present disclosure;
FIG. 5 is a flow chart of another method for refreshing a DRAM according to some embodiments of the present disclosure; and
Fig. 6 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present disclosure. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without the need for inventive faculty, are within the scope of the present disclosure, based on the described embodiments of the present disclosure.
Unless defined otherwise, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The terms "first," "second," and the like, as used in this disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. Likewise, the terms "a," "an," or "the" and similar terms do not denote a limitation of quantity, but rather denote the presence of at least one. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", etc. are used merely to indicate relative positional relationships, which may also be changed when the absolute position of the object to be described is changed.
DRAM is a volatile memory and cannot hold data permanently. This is because DRAM stores data in capacitors, and the charge on a capacitor gradually leaks away over time, causing the data to be lost. Therefore, to retain data, the DRAM must be refreshed periodically (i.e., the data in each capacitor is read out and rewritten, restoring the charge on the capacitor to its original level and thereby preserving the data).
However, during a refresh the DRAM can neither perform normal read and write accesses nor accept any other command, which negatively impacts memory bandwidth. Before the fifth generation of double data rate dynamic random access memory (Double Data Rate Dynamic Random Access Memory, DDR5 DRAM), refresh commands were executed in units of queues (Ranks); this type of refresh command is called REFab (all-bank refresh). In a typical refresh scheme, refresh scheduling is mostly achieved by deferring refreshes: when the accumulated deferred refreshes approach the upper time limit up to which they can be deferred, the DRAM is refreshed urgently. When such an emergency refresh occurs, the impact on DRAM performance is very pronounced.
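To make the deferral window concrete, here is a small arithmetic sketch using typical DDR4-era JEDEC values (an average refresh interval tREFI of about 7.8 us at normal temperature and at most 8 postponable refreshes); these numbers are illustrative assumptions, not values from the patent, and actual limits depend on the device generation and refresh mode.

```python
# Assumed, illustrative parameters (typical DDR4-era values, not from the patent):
tREFI_ns = 7800      # average refresh interval, ~7.8 us at normal temperature
max_postponed = 8    # maximum number of refresh commands that may be postponed

# Longest window during which refreshes can be deferred before the controller
# must fall back to urgent (high-priority) refreshes:
deferral_window_ns = max_postponed * tREFI_ns
assert deferral_window_ns == 62_400  # ~62.4 us of scheduling slack per queue
```

It is this bounded slack that a refresh scheduler trades against read-write traffic: the closer the deferred count gets to the maximum, the less choice the controller has about when to refresh.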
Starting from DDR5 DRAM, a finer-granularity refresh command may be employed that operates in units of blocks (Banks); this type is known as REFsb (same-bank refresh), and it causes all blocks within a storage queue that share a given block address to perform a refresh.
However, current refresh schemes find it difficult to reconcile refresh with the read-write accesses of the DRAM; refresh has a large influence on DRAM performance, so the bandwidth utilization of the DRAM is low.
At least one embodiment of the present disclosure provides a refresh method for a dynamic random access memory, a memory controller, and an electronic device. The method employs multi-level priorities and arbitration logic to balance refresh against read-write access: on the basis of ensuring that refreshes complete in time, it preserves the continuity of read-write access as much as possible, reduces the influence of refresh on the performance of the dynamic random access memory, and improves memory access bandwidth and bandwidth utilization.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. It should be noted that the same reference numerals in different drawings will be used to refer to the same elements already described.
At least one embodiment of the present disclosure provides a refresh method for a dynamic random access memory including a plurality of storage queues, each storage queue including a plurality of block groups, each block group including a plurality of blocks. The method comprises the following steps: determining states of a plurality of state machines corresponding to a plurality of storage queues, wherein the storage queues are in one-to-one correspondence with the state machines; determining a plurality of prediction addresses corresponding to a plurality of storage queues; based on the states of the plurality of state machines and the plurality of predicted addresses, a refresh request is generated and sent to an arbiter coupled to the dynamic random access memory such that the arbiter arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends the refresh request to the dynamic random access memory for effecting a refresh of the dynamic random access memory.
Fig. 1 is a schematic diagram of a memory controller for a dynamic random access memory according to some embodiments of the disclosure. For example, the memory controller 100 is suitable for controlling DDR5 DRAM. It should be noted that, only the functional blocks related to the refresh operation in the memory controller 100 are shown in fig. 1, and other functional blocks may be set according to requirements, which is not limited by the embodiments of the present disclosure.
For example, a DRAM that needs to be refreshed includes a plurality of storage queues (Ranks), each storage queue including a plurality of block groups (Bank Groups), and each block group including a plurality of blocks (Banks). For example, in some examples, the DRAM includes 32 or 64 storage queues, each storage queue including 4 or 8 block groups, and each block group including 2 or 4 blocks. For the specific structure of the DRAM, reference may be made to conventional designs, which are not described in detail here.
For example, as shown in fig. 1, the memory controller 100 is connected to the bus interface and to the DDR5 physical layer, respectively. It may receive an access command, transmitted over the bus interface from a central processing unit (Central Processing Unit, CPU) core, to access (e.g., read and write data in) the DRAM, and may control the DRAM to refresh. For example, the memory controller 100 may be connected to the DDR5 physical layer through a DDR physical layer interface (DDR PHY Interface, DFI) and additionally through an advanced peripheral bus (Advanced Peripheral Bus, APB) interface, thereby connecting the memory controller 100 to the DRAM. In this way, the memory controller 100 can configure the control registers, perform memory accesses to the DRAM, and issue commands such as refresh and calibration. For example, in some examples, the memory controller 100 includes a 32-bit-wide DRAM channel without error correction bits (ECC bits).
For example, the memory controller 100 includes an address decoder 101, a command queue 102, a data buffer 103, a timing checker 104, a block status record table 105, an arbiter 106, a refresh control module 107, a departure queue 108, and a precharge module 109.
The address decoder 101 is configured to convert the physical address of a received access request into the standard DDR5 DRAM address format, using the address mapping specified by the configuration registers. The command queue 102 is configured to store the received memory access commands while updating the stored memory request information in real time according to information provided by the arbiter 106. For example, if a write request is received, the corresponding data is stored in the data cache 103. In addition to storing memory access information, the command queue 102 also provides statistics for use by other modules. For example, the command queue 102 provides the refresh control module 107 with two types of statistical information: (1) memory access command statistics for each block address, informing the refresh control module 107 whether the command queue 102 holds a memory access request for the corresponding block; and (2) whether the corresponding block address has an access command for which a row strobe command has been issued but the read-write command has not yet been issued, that is, a command whose read-write is not yet complete.
The timing checker 104 records and checks the various timing parameters used in memory accesses and provides the necessary timing information to the arbiter 106 and the refresh control module 107 to ensure the correctness of DRAM access operations. The block status record table 105 records the address and status of each block of the DRAM and updates them according to the arbitration results of the arbiter 106. In addition, each time a memory access request arrives, the block status record table 105 provides the initial block status information for that request to the command queue 102.
The arbiter 106 is configured to receive various requests from the other modules and to select among them according to established rules. When a command wins arbitration, the arbiter 106 sends the command to the departure queue 108 and provides feedback signals to the modules to help them update their information. For example, the arbiter 106 is further configured to block requests (e.g., read-write requests) corresponding to a blocking address provided by the refresh control module 107.
The refresh control module 107 is configured to defer or generate refresh requests and to provide the associated priority indications, based on the configuration registers and the information provided by the command queue 102, the timing checker 104, and the block status record table 105. Since executing REFsb requires that the blocks at the corresponding address be in the idle state, the refresh control module 107 is further configured to generate same-bank precharge (PCHGsb) requests as needed; such a request causes all blocks sharing a given block address within a storage queue to be precharged. To ensure that refresh and precharge proceed in the normal order while still allowing read and write access, the refresh control module 107 also provides a blocking address and instructs the arbiter 106 to block other commands corresponding to that address.
The departure queue 108 is configured to send requests from the arbiter 106 to the DFI interface, and eventually to the DRAM, and to receive data read back from the DRAM and return the data to the bus interface to cause the data to reach the CPU core. For example, when the request from the arbiter 106 is a write request, the departure queue 108 also sends the data obtained from the data buffer 103 to the DFI interface according to rules and finally to the DRAM to achieve data writing.
The precharge module 109 is configured to monitor the block access history; when a block has not been accessed for reads or writes for a certain period of time, the precharge module 109 generates a precharge command to close the block.
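The idle-timeout behavior of the precharge module can be sketched as follows. The class name, the per-cycle tick interface, and the `idle_limit` threshold are illustrative assumptions; the patent does not specify the timeout value or its implementation.

```python
class PrechargeMonitor:
    """Tracks per-block idle time and emits a precharge (close) for any block
    that has seen no read/write access for `idle_limit` cycles (illustrative)."""
    def __init__(self, idle_limit):
        self.idle_limit = idle_limit
        self.idle = {}                  # block address -> idle cycle count
    def on_access(self, block):
        self.idle[block] = 0            # a read/write access resets the idle counter
    def tick(self):
        """Advance one cycle; return the blocks to precharge this cycle."""
        to_close = []
        for block in list(self.idle):
            self.idle[block] += 1
            if self.idle[block] >= self.idle_limit:
                to_close.append(block)
                del self.idle[block]    # block is now considered closed
        return to_close
```

Closing idle blocks early also helps the refresh path: REFsb requires the target blocks to be idle, so proactively precharged blocks need no extra precharge request before a refresh.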
Fig. 2 is a schematic structural diagram of a refresh control module in a memory controller for a dynamic random access memory according to some embodiments of the present disclosure. For example, as shown in fig. 2, the refresh control module 107 includes a refresh interval counter 201, a plurality of deferred refresh counters 202, a plurality of state machines 203, a request generation unit 204, a plurality of address prediction units 205, a plurality of barrier address generation units 206, and a plurality of refresh address recording units 207.
The plurality of state machines 203 are in one-to-one correspondence with the plurality of storage queues; that is, each storage queue is individually assigned one state machine 203. The state machine 203 is configured to switch among a plurality of states. The plurality of address prediction units 205 are in one-to-one correspondence with the plurality of storage queues; that is, each storage queue is assigned one address prediction unit 205. The address prediction unit 205 is configured to determine the predicted address of the corresponding storage queue. The request generation unit 204 is configured to generate a refresh request based on the states of the plurality of state machines 203 and the predicted addresses, and to send the refresh request to the arbiter 106 connected to the DRAM.
The plurality of blocking address generating units 206 are in one-to-one correspondence with the plurality of storage queues; that is, each storage queue is individually allocated one blocking address generating unit 206. The blocking address generating unit 206 is configured to generate a blocking address based on the predicted address and the state of the state machine 203 of the corresponding storage queue, and to send the blocking address to the arbiter 106.
The refresh interval counter 201 is configured to count cyclically; when the count value reaches a set value, it generates a pulse, clears itself, and sends the pulse to the plurality of deferred refresh counters 202. The plurality of deferred refresh counters 202 are in one-to-one correspondence with the plurality of storage queues, i.e., each storage queue is individually assigned one deferred refresh counter 202. Each deferred refresh counter 202 is configured to count the deferred refresh requests of the corresponding storage queue based on the received pulses, and the state machine 203 determines its state based on the value of the corresponding deferred refresh counter 202. The plurality of refresh address recording units 207 are in one-to-one correspondence with the plurality of storage queues, that is, each storage queue is individually allocated one refresh address recording unit 207. The refresh address recording unit 207 is configured to record the addresses of blocks that have been refreshed.
Fig. 3 is a flowchart of a refresh method for a dynamic random access memory according to some embodiments of the present disclosure. For example, in some examples, as shown in fig. 3, the method includes the following operations.
Step S10: determining states of a plurality of state machines corresponding to a plurality of storage queues, wherein the plurality of storage queues are in one-to-one correspondence with the plurality of state machines;
step S20: determining a plurality of prediction addresses corresponding to a plurality of storage queues;
step S30: based on the states of the plurality of state machines and the plurality of predicted addresses, a refresh request is generated and sent to an arbiter coupled to the dynamic random access memory such that the arbiter arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends the refresh request to the dynamic random access memory for effecting a refresh of the dynamic random access memory.
The above steps are exemplarily described below in conjunction with the refresh control module 107 shown in fig. 2.
For example, in step S10, the plurality of storage queues are in one-to-one correspondence with the plurality of state machines 203, that is, each storage queue is individually allocated with one state machine 203, and the states of the plurality of state machines 203 may be the same or different.
As shown in fig. 4, state machine 203 includes 4 states: a first priority state 302, a second priority state 303, a flush state 304, and a self-refresh state 301. For example, the first priority state 302 has a higher priority than the second priority state 303, i.e., the first priority state 302 is a high priority state and the second priority state 303 is a low priority state. The self-refresh state 301 corresponds to the DRAM being in a sleep or low power consumption mode, in which the DRAM does not receive any command from the outside and is periodically refreshed according to an internal clock to retain its data. The flush state 304 is used to prepare for entering the self-refresh state 301. In the flush state 304, the command queue 102 is flushed (i.e., issued in its entirety), high priority refresh requests are flushed, and low priority refresh requests are selectively flushed (i.e., a choice is made whether to issue them in their entirety) as desired. For example, as shown in FIG. 4, state machine 203 may jump and switch among the 4 states in the directions indicated by the arrow lines in the figure to effect a change of state.
For example, determining the state of the plurality of state machines 203 corresponding to the plurality of store queues may include: for each state machine 203, the state of the state machine 203 is determined based on the value of the deferred refresh counter 202, the self-refresh entry request, and the self-refresh exit command.
Further, for each state machine 203, determining the state of the state machine 203 based on the value of the deferred refresh counter 202, the self-refresh entry request, and the self-refresh exit command may include the following operations: in response to the value of the deferred refresh counter 202 being greater than or equal to a threshold, causing the state machine 203 to enter the first priority state 302; in response to the value of the deferred refresh counter 202 being less than the threshold, causing the state machine 203 to enter the second priority state 303; in response to the self-refresh entry request, causing the state machine 203 to enter the flush state 304 immediately or with a delay, depending on the current state of the state machine 203; in response to completion of the operations corresponding to the flush state 304, causing the state machine 203 to enter the self-refresh state 301; and in response to the self-refresh exit command, causing the state machine 203 to enter either the first priority state 302 or the second priority state 303 depending on the value of the deferred refresh counter 202.
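The transition rules above can be sketched as a small behavioral model. This is a minimal sketch for illustration only; the names, the single-step structure, and the exact handling of simultaneous conditions are assumptions, not the actual hardware implementation:

```python
from enum import Enum, auto

class State(Enum):
    SELF_REFRESH = auto()     # state 301
    FIRST_PRIORITY = auto()   # state 302 (high priority)
    SECOND_PRIORITY = auto()  # state 303 (low priority)
    FLUSH = auto()            # state 304

def next_state(state, deferred_count, threshold,
               self_refresh_entry=False, self_refresh_exit=False,
               flush_done=False, refresh_record_empty=True):
    """One transition step of a single per-queue state machine."""
    if state == State.SELF_REFRESH:
        if self_refresh_exit:
            return (State.FIRST_PRIORITY if deferred_count >= threshold
                    else State.SECOND_PRIORITY)
        return state  # stays asleep until the exit command arrives
    if state == State.FLUSH:
        return State.SELF_REFRESH if flush_done else State.FLUSH
    if self_refresh_entry:
        # Immediate entry into FLUSH only from the low-priority state with an
        # empty refresh address record; otherwise entry is delayed until the
        # delaying condition clears.
        if state == State.SECOND_PRIORITY and refresh_record_empty:
            return State.FLUSH
        if state == State.FIRST_PRIORITY and deferred_count < threshold:
            return State.FLUSH
        return state
    return (State.FIRST_PRIORITY if deferred_count >= threshold
            else State.SECOND_PRIORITY)
```

For example, a queue in the low-priority state whose counter reaches the threshold moves to the high-priority state on the next step, while a self-refresh entry request received in the high-priority state takes effect only once the counter has fallen below the threshold.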
For example, the above threshold may be set as desired and is specified by a configuration register. Typically, the DRAM needs to be refreshed periodically at the average refresh interval (Trefi); a refresh can be deferred up to 4 times in normal refresh mode and up to 8 times in fine-grained refresh mode. Thus, the threshold may be set to a value less than 8, for example 5, 6, or 7, which may be determined according to actual needs; embodiments of the present disclosure are not limited thereto.
The main basis for the state machine 203 to transition between the first priority state 302 and the second priority state 303 is the value of the deferred refresh counter 202. When the value of the deferred refresh counter 202 is greater than or equal to the threshold, the state machine 203 enters the first priority state 302 (i.e., the high priority state); when the value of the deferred refresh counter 202 is less than the threshold, the state machine 203 enters the second priority state 303 (i.e., the low priority state).
For example, based on the value of the deferred refresh counter 202, the first priority state 302 is divided into a first sub-state and a second sub-state, the first sub-state having a higher priority than the second sub-state. For example, the first sub-state is the case where the value of the deferred refresh counter 202 has reached a maximum value, and the second sub-state is the case where the value of the deferred refresh counter 202 is less than the maximum value and greater than or equal to the threshold. For example, the maximum value may be set according to actual requirements, such as 8 or another applicable value, to which embodiments of the present disclosure are not limited.
For example, the refresh interval counter 201 starts to operate after the memory controller 100 and the DRAM have completed initialization. The refresh interval counter 201 counts cyclically; when the count value reaches the set value (e.g., Trefi), it generates a pulse, clears itself, and sends the generated pulse to the deferred refresh counters 202. Each time the refresh interval counter 201 generates a pulse, every deferred refresh counter 202 is incremented by 1. The value of a deferred refresh counter 202 represents the number of refresh requests currently deferred; a value of 0 indicates that the corresponding DRAM does not need to be refreshed.
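As a rough illustration of this counting scheme (the class and function names are assumptions, and saturating the counter at its maximum is an assumed policy):

```python
class DeferredRefreshCounter:
    """Counts refresh requests currently deferred for one storage queue."""
    def __init__(self, maximum=8):
        self.value = 0
        self.maximum = maximum

    def on_interval_pulse(self):
        # Each Trefi pulse from the refresh interval counter defers one more
        # refresh; the count is assumed to saturate at the allowed maximum.
        if self.value < self.maximum:
            self.value += 1

    def on_round_complete(self):
        # A full round of REFsb over all block addresses retires one deferral.
        if self.value > 0:
            self.value -= 1

def interval_pulses(clock_ticks, trefi):
    """Pulses the cyclic refresh interval counter emits in clock_ticks."""
    return clock_ticks // trefi
```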
When the refresh control module 107 receives a self-refresh entry request, the state machine 203 enters the flush state 304 immediately or with a delay, depending on its current state. For example, in response to a self-refresh entry request, if the state machine 203 is in the first priority state 302, the state machine 203 remains in the first priority state 302 until the value of the deferred refresh counter 202 falls below the threshold, and only then enters the flush state 304; that is, entry into the flush state 304 is delayed. In response to the self-refresh entry request, if the state machine 203 is in the second priority state 303 and no address is recorded in the refresh address recording unit 207, the state machine 203 enters the flush state 304 immediately. In response to the self-refresh entry request, if the state machine 203 is in the second priority state 303 and there are recorded addresses in the refresh address recording unit 207, the state machine 203 remains in the second priority state 303 until the refresh address recording unit 207 holds no recorded address (i.e., after the current round of refreshing all block addresses is completed), and only then enters the flush state 304.
For example, in flush state 304, command queue 102 will be flushed (i.e., issued in full), the high priority refresh command will be flushed (i.e., issued in full), and refresh control module 107 will select whether to issue the remaining accumulated low priority refresh requests in full based on the indication of the configuration registers. When the operations corresponding to the flush state 304 are completed, i.e., after the above-described requests to be issued have all been issued, the state machine 203 enters the self-refresh state 301.
Upon receipt of the self-refresh exit command, the state machine 203 enters the first priority state 302 if the value of the deferred refresh counter 202 is greater than or equal to the threshold, and enters the second priority state 303 if the value is less than the threshold. The state transition mechanism described above ensures that all blocks have received REFsb requests (i.e., all blocks have been refreshed once) before entering the self-refresh state 301; therefore, after exiting the self-refresh state 301, the state machine 203 skips the process of sending compensatory refreshes and jumps directly from the self-refresh state 301 to the first priority state 302 or the second priority state 303. Since the number of REFsb required for a compensatory refresh is always greater than or equal to the number of REFsb required to satisfy the condition that all blocks receive REFsb before entering the self-refresh state 301, the overhead of the overall process of entering and exiting the self-refresh state 301 can be reduced in this manner. Of course, embodiments of the present disclosure are not limited thereto; in other examples, the compensatory refresh may be performed after exiting the self-refresh state 301, without imposing the above condition before entering it.
For example, in step S20, the plurality of address prediction units 205 determine the plurality of predicted addresses corresponding to the plurality of storage queues; that is, the plurality of address prediction units 205 are in one-to-one correspondence with the plurality of storage queues, and each address prediction unit 205 determines the predicted address of its corresponding storage queue. For example, the predicted address may be the address of a block, indicating the block address of the next REFsb request of the current storage queue as predicted by the address prediction unit 205. The address prediction unit 205 supplies the determined predicted address to the request generation unit 204 and the blocking address generating unit 206. It should be noted that the predicted address determined by each address prediction unit 205 may be the address of a particular block or may be empty.
For example, determining the plurality of predicted addresses for the plurality of storage queues includes: for each storage queue, determining the predicted address based on the block information and the state of the state machine 203 corresponding to that storage queue.
Further, for each storage queue, determining the predicted address based on the block information and the state of the state machine 203 corresponding to the storage queue may include the following operations: in response to the state machine 203 being in the first priority state 302 and no refresh task being executed in the corresponding storage queue, determining the address of a block meeting the requirements as the predicted address in priority order from the first level to the Nth level; in response to the state machine 203 being in the second priority state 303 and no refresh task being executed in the corresponding storage queue, determining the address of a block meeting the requirements as the predicted address in priority order from the first level to the Mth level; in response to the state machine 203 being in the first priority state 302 and a refresh task being executed in the corresponding storage queue, determining that the predicted address is empty; and in response to the state machine 203 being in the second priority state 303 and there being no block meeting the requirements, or a refresh task being executed in the corresponding storage queue, determining that the predicted address is empty. For example, the blocks are divided into N levels, where N > M > 1, N and M are integers, and the priority decreases from the first level to the Nth level. It should be noted that the specific values of N and M may be determined according to actual requirements, and the embodiments of the present disclosure are not limited thereto.
For example, the priority order of the respective levels is determined based on block information, which includes at least: whether the block is valid, whether it has been refreshed, whether there is a memory access request for it, whether it is idle, and whether the relevant timing requirements are met.
For example, in some examples, when the address prediction unit 205 performs address prediction, the following information may be referred to: (1) not refreshed, i.e., whether the block address has a record in the refresh address recording unit 207; (2) no memory access request, i.e., whether there is a memory access request for the block in the command queue 102; (3) block idle, i.e., whether the corresponding block is in the idle state; (4) block read-write complete, i.e., whether the corresponding block has no unfinished read or write commands; (5) refresh timing conforming, i.e., whether the timing check required by a REFsb request is satisfied; (6) precharge timing conforming, i.e., whether the timing check required by a PCHGsb request is satisfied; (7) valid block, i.e., whether the current block address is a valid address (the number of blocks included in each block group may be 2 or 4).
It should be noted that, the block information referred to when the address prediction unit 205 performs address prediction is not limited to the above-listed information, and may include any other applicable information, which may be determined according to actual requirements, and the embodiments of the present disclosure are not limited thereto.
For example, in some examples, the blocks are divided into 10 levels, i.e., N is equal to 10 as previously described. For example, M is equal to 2. The address prediction unit 205 picks blocks to determine the predicted address based on the following rules:
(1) First level: valid block, not refreshed, no access request, block idle, refresh timing conforming;
(2) Second level: valid block, not refreshed, no access request, block idle, refresh timing not conforming;
(3) Third level: valid block, not refreshed, no access request, block not idle, precharge timing conforming;
(4) Fourth level: valid block, not refreshed, no access request, block not idle, precharge timing not conforming;
(5) Fifth level: valid block, not refreshed, with access request, block read-write complete, block idle, refresh timing conforming;
(6) Sixth level: valid block, not refreshed, with access request, block read-write complete, block idle, refresh timing not conforming;
(7) Seventh level: valid block, not refreshed, with access request, block read-write complete, block not idle, precharge timing conforming;
(8) Eighth level: valid block, not refreshed, with access request, block read-write complete, block not idle, precharge timing not conforming;
(9) Ninth level: valid block, not refreshed, block read-write not complete, precharge timing conforming;
(10) Tenth level: valid block, not refreshed, block read-write not complete, precharge timing not conforming.
For example, the order of priority from the first level to the tenth level gradually decreases.
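The ten-level ordering above, together with the per-state level limits the text goes on to describe, can be sketched as follows. The `Block` fields and function names are hypothetical, and treating levels 9 and 10 as applying whenever the block's reads and writes are incomplete is an assumption about how the rules combine:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    valid: bool                # block address is a valid address
    refreshed: bool            # recorded in the refresh address recording unit
    has_request: bool          # access request for this block in the command queue
    rw_complete: bool          # no unfinished read/write commands on this block
    idle: bool                 # block is in the idle (closed) state
    refresh_timing_ok: bool    # REFsb timing check satisfied
    precharge_timing_ok: bool  # PCHGsb timing check satisfied

def block_level(b: Block) -> Optional[int]:
    """Map a block to its 1..10 level; None if it cannot be a candidate."""
    if not b.valid or b.refreshed:
        return None
    if not b.rw_complete:                   # levels 9-10
        return 9 if b.precharge_timing_ok else 10
    if not b.has_request:                   # levels 1-4
        if b.idle:
            return 1 if b.refresh_timing_ok else 2
        return 3 if b.precharge_timing_ok else 4
    if b.idle:                              # levels 5-8 (request pending, rw done)
        return 5 if b.refresh_timing_ok else 6
    return 7 if b.precharge_timing_ok else 8

def predict_address(blocks, high_priority: bool, refresh_in_progress: bool):
    """Index of the best candidate block, or None (empty prediction)."""
    if refresh_in_progress:
        return None
    max_level = 10 if high_priority else 2  # low priority sees only levels 1-2
    best = None
    for i, b in enumerate(blocks):
        lvl = block_level(b)
        if lvl is not None and lvl <= max_level:
            if best is None or lvl < best[0]:
                best = (lvl, i)
    return best[1] if best is not None else None
```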
When the state machine 203 is in the first priority state 302 and no refresh task is being executed in the corresponding storage queue, the address of a block meeting the requirements is determined as the predicted address in priority order from the first level to the tenth level; that is, prediction considers all ten levels. When the state machine 203 is in the first priority state 302 and a refresh task is being executed in the corresponding storage queue, the predicted address is determined to be empty.
When the state machine 203 is in the second priority state 303 and no refresh task is being executed in the corresponding storage queue, the address of a block meeting the requirements is determined as the predicted address in priority order from the first level to the second level; that is, prediction considers only the first two levels. When the state machine 203 is in the second priority state 303 and no block satisfies the first or second level, or a refresh task is being executed in the corresponding storage queue, the predicted address is determined to be empty.
Low priority refreshes thus use only the first two levels of prediction; when no block satisfies these conditions, low priority refresh requests accumulate, i.e., the predicted address of the corresponding storage queue is empty and is not selected by the request generation unit 204. High priority refreshes use all ten levels of prediction. In this way, refresh can proceed in parallel with reads and writes, and meaningless row strobes can be avoided, thereby improving bandwidth utilization. Here, a "meaningless row strobe" is one after which the block is precharged without any read or write command having been issued.
For example, in step S30, the request generation unit 204 generates a refresh request based on the states of the plurality of state machines 203 and the plurality of predicted addresses, and sends the refresh request to the arbiter 106 connected to the DRAM, so that the arbiter 106 arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends it to the DRAM to effect the refresh of the DRAM. For example, the request generation unit 204 selects a storage queue according to a priority selection rule based on the states of the plurality of state machines 203, generates the corresponding refresh request, and sends it to the arbiter 106.
For example, the generated refresh request includes a request command, a request address, and a flag bit. The request address is the predicted address corresponding to the selected store queue. The flag bit indicates that the state machine 203 corresponding to the selected store queue is in either the first priority state 302 or the second priority state 303. For example, in some examples, a 1-bit binary number (e.g., "0" and "1") may be employed to indicate that the state machine 203 corresponding to the selected store queue is in the first priority state 302 or the second priority state 303.
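The three request fields can be pictured as a simple record (a hypothetical encoding for illustration only):

```python
from dataclasses import dataclass

@dataclass
class RefreshRequest:
    command: str          # the request command, e.g. "REFsb"
    address: int          # request address: predicted block address of the selected queue
    high_priority: bool   # 1-bit flag: True = first priority state, False = second
```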
For example, in some examples, the request generation unit 204 selects the storage queue according to the following priority selection rule: the corresponding storage queue is selected in the priority order of the first sub-state of the first priority state 302, then the second sub-state of the first priority state 302, then the second priority state 303; that is, the priority relationship is: first sub-state of the first priority state 302 > second sub-state of the first priority state 302 > second priority state 303. If all state machines 203 are in the second priority state 303, a storage queue whose predicted address is not empty is selected. If multiple state machines 203 have the same priority, the storage queue corresponding to one of them is selected at random. Based on this priority selection rule, the request generation unit 204 preferentially selects the storage queue with the highest priority, generates the corresponding refresh request, and sends it to the arbiter 106.
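The selection rule can be sketched as follows; the numeric rank encoding and function names are assumptions made for illustration:

```python
import random

# Hypothetical rank encoding, from highest to lowest selection priority:
#   0 - first sub-state of the first priority state (counter at maximum)
#   1 - second sub-state of the first priority state
#   2 - second priority state
def select_queue(queues, rng=random):
    """queues: list of (rank, predicted_address) tuples, one per storage queue.
    Returns the index of the selected storage queue, or None if none is eligible."""
    eligible = [(rank, i) for i, (rank, addr) in enumerate(queues)
                if addr is not None]
    if not eligible:
        return None
    best = min(rank for rank, _ in eligible)
    tied = [i for rank, i in eligible if rank == best]
    return rng.choice(tied)  # random tie-break avoids starving a fixed queue
```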
By selecting the storage queue according to the first sub-state, the second sub-state, and the second priority state 303, the storage queue whose deferred refresh counter 202 has reached the critical value can be selected first, so that the refresh deadline is not violated. If multiple state machines 203 have the same priority, the storage queue corresponding to one of them is selected at random; this random selection avoids the excessive accumulation of refreshes in a particular storage queue that a fixed order would cause.
By selecting the storage queue and generating the corresponding refresh request according to the priority selection rule, blocks without pending read-write access requests are refreshed first, so that while one block is being refreshed, other blocks with pending accesses can carry out read and write operations. Refresh and read-write thus proceed in parallel to the maximum extent, effectively improving bandwidth.
The arbiter 106 receives not only the refresh request from the request generation unit 204 but also read-write requests, row strobe requests, precharge requests, etc. from other units and modules, and is configured to arbitrate among these requests. For example, the arbitration priority of the arbiter 106 decreases in the following order: a refresh request whose flag bit indicates the first priority state 302, a read-write request, a row strobe request, a precharge request, and a refresh request whose flag bit indicates the second priority state 303. The arbiter 106 thus allows high priority REFsb requests to reach the DRAM in time to preserve the DRAM's data. It should be noted that the priority order used by the arbiter 106 is not limited to the above; any other applicable rule may be used, and the requests involved in arbitration may also include various other requests such as power down requests, mode register read requests, impedance (ZQ) calibration requests, etc., which may be determined according to actual requirements. The embodiments of the present disclosure are not limited thereto.
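The example arbitration order can be sketched as a fixed-priority pick (the request kind names are hypothetical, and a real arbiter would also weigh timing checks and block state):

```python
# Request kinds, from highest to lowest arbitration priority as described above.
ARBITRATION_ORDER = [
    "refresh_high",  # refresh request whose flag bit indicates the first priority state
    "read_write",
    "row_strobe",    # activate
    "precharge",
    "refresh_low",   # refresh request whose flag bit indicates the second priority state
]

def arbitrate(requests):
    """Pick the highest-priority request kind present in the list of pending kinds."""
    for kind in ARBITRATION_ORDER:
        if kind in requests:
            return kind
    return None
```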
When the arbiter 106 performs arbitration, if the refresh request from the request generation unit 204 wins arbitration, the arbiter 106 sends the refresh request that wins arbitration to the DRAM for realizing the refresh of the DRAM. For a specific operation of the DRAM for refreshing after receiving the refresh request, reference may be made to a conventional design, and detailed description thereof will be omitted.
For example, when a refresh request wins arbitration, the refresh address recording unit 207 records the block address that has been refreshed. When all block addresses have been refreshed by REFsb, the refresh address recording unit 207 clears its records and the deferred refresh counter 202 is decremented by 1. If the refresh interval counter 201 generates a pulse at the same moment the refresh address recording unit 207 is cleared, the deferred refresh counter 202 does not count that pulse.
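This bookkeeping can be sketched as follows (class and method names are assumptions; the caller is assumed to decrement the deferred refresh counter when a round completes):

```python
class RefreshAddressRecord:
    """Tracks which block addresses have been refreshed in the current round."""
    def __init__(self, all_block_addresses):
        self.all_blocks = set(all_block_addresses)
        self.refreshed = set()

    def on_refresh_win(self, address):
        """Record a refreshed block; return True when the round completes."""
        self.refreshed.add(address)
        if self.refreshed == self.all_blocks:
            self.refreshed.clear()  # record cleared for the next round
            return True             # caller decrements the deferred refresh counter
        return False
```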
The refresh method provided by the embodiments of the present disclosure uses REFsb refresh requests and combines factors such as deferred refresh, refresh address prediction, and command queue monitoring to form multi-level priority and arbitration logic. It thereby balances the refresh and read-write access of the DRAM: on the basis of completing refreshes in time, it preserves the continuity of read-write access as much as possible, reduces the influence of refresh on DRAM performance, and improves memory access bandwidth and bandwidth utilization.
Fig. 5 is a flowchart of another refresh method for a dynamic random access memory according to some embodiments of the present disclosure. For example, in this embodiment, the method may include the following operations.
Step S10: determining states of a plurality of state machines corresponding to a plurality of storage queues, wherein the plurality of storage queues are in one-to-one correspondence with the plurality of state machines;
step S20: determining a plurality of prediction addresses corresponding to a plurality of storage queues;
step S40: generating a blocking address based on states of a plurality of state machines and a plurality of predicted addresses, and sending the blocking address to an arbiter, so that the arbiter blocks a non-refresh command corresponding to the blocking address;
step S30: generating a refresh request based on states of the plurality of state machines and the plurality of predicted addresses, and transmitting the refresh request to an arbiter coupled to the dynamic random access memory;
step S50: and generating a precharge request and sending the precharge request to the arbiter in response to the generation of the refresh request, the flag bit of the refresh request indicating the first priority state and the non-full idle block corresponding to the request address.
In this embodiment, steps S10, S20 and S30 are substantially the same as steps S10, S20 and S30 shown in fig. 3, and the relevant description will refer to the foregoing and will not be repeated here.
Steps S40 and S50 are exemplarily described below in conjunction with the refresh control module 107 shown in fig. 2.
For example, in step S40, the blocking address generating unit 206 generates a blocking address based on the states of the plurality of state machines 203 and the plurality of predicted addresses, and sends the blocking address to the arbiter 106, so that the arbiter 106 blocks non-refresh commands corresponding to the blocking address. For example, the address prediction unit 205 sends the predicted address to the blocking address generating unit 206 for its use.
Further, the blocking address generating unit 206 determines, in response to the state machine 203 being in the first priority state 302 and no refresh task being performed in the corresponding storage queue, the predicted address corresponding to the storage queue as a blocking address, and sends the blocking address to the arbiter 106.
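That condition can be expressed compactly (a sketch; the argument names are assumptions):

```python
def blocking_address(first_priority: bool, refresh_in_progress: bool, predicted_address):
    """A blocking address is produced only in the first priority state with no
    refresh task already executing in the queue; otherwise nothing is blocked."""
    if first_priority and not refresh_in_progress and predicted_address is not None:
        return predicted_address
    return None
```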
It should be noted that, although the refresh control module 107 includes a plurality of blocking address generating units 206, with each storage queue corresponding to one blocking address generating unit 206, a blocking address generating unit 206 determines the corresponding predicted address as a blocking address and transmits it to the arbiter 106 only when the corresponding state machine 203 is in the first priority state 302 and no refresh task is being executed in the corresponding storage queue. The blocking address generating units 206 of storage queues that do not satisfy these requirements do not generate blocking addresses, i.e., do not provide valid address information, so as to avoid a situation in which at least two block addresses in the same storage queue are inaccessible at the same time. For example, if any block address in a storage queue is being refreshed, the corresponding blocking address generating unit 206 will not generate a blocking address; this prevents at least two block addresses in the same storage queue from being unable to serve other access requests simultaneously, and thus prevents bandwidth from dropping.
After the arbiter 106 receives the blocking address, non-refresh commands (e.g., memory access requests) corresponding to the blocking address are blocked from participating in the arbitration of the arbiter 106, providing timing and block-state guarantees so that the high priority refresh request can be sent as soon as possible. For example, in some examples, where the refresh control module 107 also generates a precharge request, the arbiter 106 may block all commands for the blocking address other than the precharge command and the refresh command. This provides the preconditions for the refresh and precharge to occur, so that their block-state and timing requirements are met as soon as possible. When certain blocks are the targets of refresh requests or of blocking, the arbiter 106 temporarily removes these blocks from logic such as read/write switching, read/write statistics, and command priority, so as to avoid blocking the operation of other memory-related functional logic and to prevent blocks that cannot be read or written from interfering with other reads and writes.
In the refresh method provided by the embodiment of the disclosure, by generating the blocking address to block the corresponding non-refresh command, the refresh request with high priority can win the arbitration of the arbiter 106 and reach the DRAM as soon as possible, so as to ensure that the refresh is completed in time.
For example, in other examples, instead of generating a blocking address based on the predicted address, the entire storage queue may be blocked, and the memory access requests to other block addresses released after the REFsb is sent; this may be chosen according to actual requirements, and embodiments of the present disclosure do not limit this.
For example, in step S50, in response to a refresh request being generated, its flag bit indicating the first priority state 302, and the blocks corresponding to the request address not all being idle, the request generation unit 204 generates a precharge request for the blocks corresponding to the request address and sends it to the arbiter 106. For example, if, at the time a refresh request whose flag bit indicates the first priority state 302 is generated, the corresponding blocks are in the open state, a precharge request (PCHGsb) needs to be issued; the PCHGsb closes the corresponding blocks so that the REFsb for those blocks, once it wins arbitration, can subsequently be executed.
In the refresh method provided by the embodiment of the disclosure, by generating the precharge request, preparation can be made for executing the refresh request with high priority as soon as possible, so as to ensure timely completion of the refresh.
It should be noted that when a refresh request whose flag bit indicates the second priority state 303 is generated and the corresponding block is in the open state, no PCHGsb is generated. In this case, the refresh request whose flag bit indicates the second priority state 303 waits for the precharge module 109 to close the corresponding block; otherwise, the deferred refreshes accumulate until the first priority state 302 is reached. Because a low-priority refresh request generates no PCHGsb, a low-priority refresh is prevented from interfering with upcoming reads and writes, which releases reads and writes preferentially and improves bandwidth.
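The precharge rule above can be condensed into a small sketch. This is an illustration under stated assumptions, not the patent's implementation; the function name, the request representation, and the state labels are hypothetical:

```python
# Hypothetical sketch of the precharge-request rule: a refresh request whose
# flag bit indicates the first (high) priority state and whose target block
# is open triggers a precharge request (PCHGsb); a request flagged with the
# second (low) priority state does not, and instead waits for the precharge
# module to close the block on its own.

FIRST_PRIORITY = "first_priority"
SECOND_PRIORITY = "second_priority"

def maybe_generate_precharge(refresh_request, block_is_open):
    """Return a precharge request dict, or None if none should be issued."""
    if refresh_request["flag"] == FIRST_PRIORITY and block_is_open:
        return {"type": "precharge", "block_addr": refresh_request["addr"]}
    # Low-priority refreshes wait rather than disturb upcoming reads/writes;
    # an already-closed block needs no precharge either.
    return None
```

The asymmetry between the two branches is the point: only the high-priority path actively closes a block, which is what keeps low-priority refreshes from costing read/write bandwidth.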
At least one embodiment of the present disclosure further provides a memory controller for a dynamic random access memory. The memory controller has multi-level priority and arbitration logic and can balance the refresh and the read/write access of the dynamic random access memory: on the basis of ensuring that the refresh is completed in time, it preserves the continuity of read/write access as much as possible, reduces the influence of the refresh on the performance of the dynamic random access memory, and improves the memory access bandwidth and the bandwidth utilization.
As shown in fig. 1 and 2, the memory controller 100 is configured to be connected to the DRAM and to control the refresh of the DRAM. The DRAM includes a plurality of memory queues, each memory queue including a plurality of block groups, each block group including a plurality of blocks.
The memory controller 100 includes at least an arbiter 106 and a refresh control module 107.
The refresh control module 107 includes a refresh interval counter 201, a plurality of deferred refresh counters 202, a plurality of state machines 203, a request generation unit 204, a plurality of address prediction units 205, a plurality of blocking address generation units 206, and a plurality of refresh address recording units 207.
The plurality of state machines 203 are in one-to-one correspondence with the plurality of store queues, the state machines 203 being configured to switch between a plurality of states, such as a first priority state 302, a second priority state 303, a flush state 304, and a self-refresh state 301. The plurality of address prediction units 205 are in one-to-one correspondence with the plurality of store queues, and the address prediction unit 205 is configured to determine the predicted address of the corresponding store queue. The request generation unit 204 is configured to generate a refresh request based on the states of the plurality of state machines 203 and the predicted addresses, and to send the refresh request to the arbiter 106 connected to the DRAM. The plurality of blocking address generation units 206 are in one-to-one correspondence with the plurality of store queues, and the blocking address generation unit 206 is configured to generate a blocking address based on the predicted address and the state of the state machine 203 of the corresponding store queue, and to send the blocking address to the arbiter 106.
The refresh interval counter 201 is configured to count cyclically, to generate a pulse and clear itself when the count value reaches a set value, and to send the pulse to the plurality of deferred refresh counters 202. The plurality of deferred refresh counters 202 are in one-to-one correspondence with the plurality of memory queues, and each deferred refresh counter 202 is configured to count the deferred refresh requests of the corresponding memory queue based on the received pulses. The state machine 203 determines its state based on the value of the corresponding deferred refresh counter 202. The plurality of refresh address recording units 207 are in one-to-one correspondence with the plurality of memory queues, and the refresh address recording unit 207 is configured to record the addresses of the refreshed blocks.
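The counter chain just described (interval counter → per-queue deferred refresh counters → state machine priority) can be sketched as follows. This is a behavioral model for illustration only; the class name, method names, and the threshold value are assumptions, since the patent does not fix a concrete threshold:

```python
# Hypothetical sketch of the counter chain: a refresh interval counter ticks
# cyclically and, on reaching its set value, clears itself and emits a pulse
# that increments each queue's deferred-refresh counter; the per-queue state
# machine derives its priority state from that counter's value.

THRESHOLD = 4  # illustrative only; the patent leaves the threshold unspecified

class DeferredRefreshTracker:
    def __init__(self, num_queues, interval_setpoint):
        self.interval_setpoint = interval_setpoint
        self.interval_count = 0
        self.deferred = [0] * num_queues  # one deferred counter per queue

    def tick(self):
        """Advance the refresh interval counter by one cycle."""
        self.interval_count += 1
        if self.interval_count >= self.interval_setpoint:
            self.interval_count = 0        # clear on reaching the set value
            for q in range(len(self.deferred)):
                self.deferred[q] += 1      # pulse: one more deferred refresh owed

    def refresh_done(self, queue):
        """A completed refresh retires one deferred refresh for the queue."""
        if self.deferred[queue] > 0:
            self.deferred[queue] -= 1

    def priority_state(self, queue):
        """Enter the first priority state once the backlog reaches the threshold."""
        if self.deferred[queue] >= THRESHOLD:
            return "first_priority"
        return "second_priority"
```

Under this model, a queue whose refreshes keep being deferred accumulates counts until it crosses the threshold and its state machine escalates to the first priority state, matching the state-determination rule above.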
The refresh control module 107 is connected to the arbiter 106, and the arbiter 106 is connected to the DRAM. The arbiter 106 is configured to arbitrate the refresh request and, in response to the refresh request winning the arbitration, to send the refresh request to the DRAM to implement the refresh of the DRAM. The arbiter 106 is further configured to block the non-refresh commands corresponding to the blocking address.
It should be noted that, in the embodiments of the present disclosure, the memory controller 100 and the refresh control module 107 may each include more modules and units than those shown in fig. 1 and 2, which may be determined according to actual needs; the embodiments of the present disclosure do not limit this. For a detailed description and the technical effects of the memory controller 100, reference may be made to the above description of the refresh method, which is not repeated here.
At least one embodiment of the present disclosure also provides an electronic device including the memory controller provided by any one of the embodiments of the present disclosure. The memory controller in the electronic device has multi-level priority and arbitration logic and can balance the refresh and the read/write access of the dynamic random access memory: on the basis of ensuring that the refresh is completed in time, it preserves the continuity of read/write access as much as possible, reduces the influence of the refresh on the performance of the dynamic random access memory, and improves the memory access bandwidth and the bandwidth utilization.
Fig. 6 is a schematic block diagram of an electronic device provided in some embodiments of the present disclosure. For example, as shown in fig. 6, the electronic device 200 includes a memory controller 100, where the memory controller 100 is a memory controller provided in any embodiment of the disclosure, such as the memory controller 100 shown in fig. 1. For example, the electronic device 200 may also include a dynamic random access memory 210. The memory controller 100 is configured to interface with the dynamic random access memory 210 and is configured to control the dynamic random access memory 210 to refresh. For example, the electronic device 200 may be implemented as a Central Processing Unit (CPU) or any other device, as embodiments of the present disclosure are not limited in this regard.
It should be noted that, in the embodiments of the present disclosure, the electronic device 200 may further include more modules and units than those shown in fig. 6, which may be determined according to actual requirements; the embodiments of the present disclosure do not limit this. For a detailed description and the technical effects of the electronic device 200, reference may be made to the above description of the refresh method and the memory controller, which is not repeated here.
The following points need to be described:
(1) The drawings of the embodiments of the present disclosure relate only to the structures to which the embodiments of the present disclosure relate, and reference may be made to the general design for other structures.
(2) The embodiments of the present disclosure and features in the embodiments may be combined with each other to arrive at a new embodiment without conflict.
The foregoing describes merely specific embodiments of the disclosure, but the scope of the disclosure is not limited thereto; the scope of the disclosure should be determined by the claims.

Claims (19)

1. A refresh method for a dynamic random access memory, wherein the dynamic random access memory comprises a plurality of storage queues, each storage queue comprising a plurality of block groups, each block group comprising a plurality of blocks, the method comprising:
Determining states of a plurality of state machines corresponding to the plurality of storage queues, wherein the plurality of storage queues are in one-to-one correspondence with the plurality of state machines;
determining a plurality of predicted addresses corresponding to the plurality of storage queues, wherein each predicted address is a predicted block address of a next per-block refresh request of the current storage queue;
based on the states of the plurality of state machines and the plurality of predicted addresses, generating a refresh request and sending the refresh request to an arbiter connected to the dynamic random access memory, so that the arbiter arbitrates the refresh request and, in response to the refresh request winning the arbitration, sends the refresh request to the dynamic random access memory for implementing the refresh of the dynamic random access memory.
2. The method of claim 1, wherein determining states of a plurality of state machines corresponding to the plurality of store queues comprises:
for each state machine, determining the state of the state machine according to the value of the deferred refresh counter, the self-refresh entry request, and the self-refresh exit command.
3. The method of claim 2, wherein the state machine comprises 4 states: a first priority state, a second priority state, a flush state, and a self-refresh state,
The first priority state has a higher priority than the second priority state.
4. The method of claim 3, wherein for each state machine, determining the state of the state machine from the value of the deferred refresh counter, the self-refresh entry request, and the self-refresh exit command comprises:
responsive to the deferred refresh counter having a value greater than or equal to a threshold, causing the state machine to enter the first priority state;
responsive to the value of the deferred refresh counter being less than the threshold, causing the state machine to enter the second priority state;
responsive to the self-refresh entry request, causing the state machine to enter the flush state immediately or with a delay in accordance with a current state of the state machine;
responsive to completion of an operation corresponding to the flush state, causing the state machine to enter the self-refresh state;
and in response to the self-refresh exit command, enabling the state machine to enter the first priority state or the second priority state according to the value of the deferred refresh counter.
5. The method of claim 4, wherein responsive to the self-refresh entry request, causing the state machine to enter the flush state immediately or with a delay in accordance with a current state of the state machine comprises:
In response to the self-refresh entry request, in a case where the state machine is in the first priority state, causing the state machine to maintain the first priority state until the value of the deferred refresh counter is less than the threshold, and then enter the flush state;
in response to the self-refresh entry request, causing the state machine to enter the flush state in a case where the state machine is in the second priority state and no address is recorded in a refresh address recording unit;
in response to the self-refresh entry request, in a case where the state machine is in the second priority state and there is a recorded address in the refresh address recording unit, causing the state machine to maintain the second priority state until there is no recorded address in the refresh address recording unit, and then enter the flush state.
6. The method of any of claims 3-5, wherein determining the plurality of predicted addresses corresponding to the plurality of store queues comprises:
for each storage queue, determining the prediction address based on block information and the state of a state machine corresponding to the storage queue.
7. The method of claim 6, wherein for each store queue, determining the predicted address based on the block information and a state of a state machine to which the store queue corresponds comprises:
Determining the addresses of the blocks meeting the requirements as the predicted addresses according to the priority order from the first level to the Nth level, in response to the state machine being in the first priority state and no refresh task being executed in the corresponding storage queue;
determining the addresses of the blocks meeting the requirements as the predicted addresses according to the priority sequence from the first level to the Mth level in response to the state machine being in the second priority state and no refresh task being executed in the corresponding storage queue;
determining that the predicted address is empty in response to the state machine being in the first priority state and there being an executing refresh task in a corresponding storage queue;
determining that the predicted address is empty in response to the state machine being in the second priority state and there being no block meeting a requirement or a refresh task being performed in a corresponding store queue;
wherein N > M > 1, N and M are integers, and the priority decreases progressively from the first level to the Nth level,
the priority order of each level is determined based on the block information, and the block information at least includes: whether the block is valid, whether it has been refreshed, whether there is a memory access request for it, whether it is idle, and whether its timing requirements are met.
8. The method of any of claims 3-5, further comprising:
and generating a blocking address based on the states of the state machines and the predicted addresses, and sending the blocking address to the arbiter so that the arbiter blocks a non-refresh command corresponding to the blocking address.
9. The method of claim 8, wherein generating the blocking address and sending the blocking address to the arbiter based on the states of the plurality of state machines and the plurality of predicted addresses comprises:
and in response to the state machine being in the first priority state and no executing refreshing task in the corresponding storage queue, determining a predicted address corresponding to the storage queue as the blocking address, and sending the blocking address to the arbiter.
10. The method of claim 4 or 5, wherein generating the refresh request based on the states of the plurality of state machines and the plurality of predicted addresses and sending the refresh request to the arbiter coupled to the dynamic random access memory comprises:
selecting a storage queue and generating the refresh request according to a priority selection rule based on states of the plurality of state machines, and transmitting the refresh request to the arbiter;
The refresh request comprises a request command, a request address and a flag bit, wherein the request address is a predicted address corresponding to the selected storage queue, and the flag bit indicates that a state machine corresponding to the selected storage queue is in the first priority state or the second priority state.
11. The method of claim 10, wherein the first priority state comprises a first sub-state and a second sub-state, the first sub-state having a higher priority than the second sub-state, the first sub-state being that the deferred refresh counter reaches a maximum value, the second sub-state being that the deferred refresh counter is less than the maximum value and greater than or equal to the threshold value;
the priority selection rule is:
selecting a corresponding storage queue according to the priority order of the first sub-state, the second sub-state and the second priority state,
if all state machines are in the second priority state, selecting a store queue whose predicted address is not empty,
if a plurality of state machines with the same priority order exist, randomly selecting a storage queue corresponding to one state machine from the plurality of state machines with the same priority order.
12. The method of claim 10, further comprising:
generating a precharge request and sending the precharge request to the arbiter, in response to generation of the refresh request, the flag bit of the refresh request indicating the first priority state, and the block corresponding to the request address being not fully idle;
wherein the precharge request corresponds to a block corresponding to the request address.
13. The method of claim 12, wherein the arbiter is further configured to arbitrate read and write requests, row strobe requests, the precharge requests,
the priority of arbitration decreases in the following order: a refresh request whose flag bit indicates the first priority state, the read-write request, the row strobe request, the precharge request, and a refresh request whose flag bit indicates the second priority state.
14. A memory controller for a dynamic random access memory, wherein the memory controller is configured to be coupled to the dynamic random access memory and configured to control refresh of the dynamic random access memory, the dynamic random access memory comprising a plurality of memory queues, each memory queue comprising a plurality of block groups, each block group comprising a plurality of blocks;
The memory controller comprises a refresh control module, wherein the refresh control module comprises a plurality of state machines, a plurality of address prediction units and a request generation unit;
the plurality of state machines are in one-to-one correspondence with the plurality of storage queues, and the state machines are configured to switch among a plurality of states;
the plurality of address prediction units are in one-to-one correspondence with the plurality of storage queues, and the address prediction units are configured to determine the predicted addresses of the corresponding storage queues, wherein each predicted address is a predicted block address of a next per-block refresh request of the current storage queue;
the request generation unit is configured to generate a refresh request based on states of the plurality of state machines and the predicted address, and to send the refresh request to an arbiter connected to the dynamic random access memory.
15. The memory controller of claim 14, further comprising the arbiter,
wherein the refresh control module is coupled to the arbiter, the arbiter is coupled to the dynamic random access memory,
the arbiter is configured to arbitrate the refresh request and, in response to the refresh request winning the arbitration, to send the refresh request to the dynamic random access memory for implementing the refresh of the dynamic random access memory.
16. The memory controller of claim 14 or 15, wherein the refresh control module further comprises a plurality of blocking address generation units;
the plurality of blocking address generation units are in one-to-one correspondence with the plurality of storage queues, and are configured to generate blocking addresses based on the predicted addresses and the states of the state machines of the storage queues corresponding to the predicted addresses, and to send the blocking addresses to the arbiter;
the arbiter is further configured to block non-refresh commands corresponding to the blocking address.
17. The memory controller of claim 14 or 15, wherein the refresh control module further comprises a refresh interval counter, a plurality of deferred refresh counters, and a plurality of refresh address logging units;
the refresh interval counter is configured to count cyclically, to generate a pulse and clear itself when the count value reaches a count set value, and to send the pulse to the plurality of deferred refresh counters;
the plurality of deferred refresh counters are in one-to-one correspondence with the plurality of storage queues, the deferred refresh counters are configured to count deferred refresh requests of the storage queues corresponding to the received pulses, and the state machine determines a state based on the values of the corresponding deferred refresh counters;
The refresh address recording units are in one-to-one correspondence with the storage queues and are configured to record addresses of the refreshed blocks.
18. An electronic device comprising a memory controller as claimed in any one of claims 14 to 17.
19. The electronic device of claim 18, further comprising the dynamic random access memory.
CN202011162195.1A 2020-10-27 2020-10-27 Refreshing method of dynamic random access memory, memory controller and electronic device Active CN112259141B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011162195.1A CN112259141B (en) 2020-10-27 2020-10-27 Refreshing method of dynamic random access memory, memory controller and electronic device


Publications (2)

Publication Number Publication Date
CN112259141A CN112259141A (en) 2021-01-22
CN112259141B true CN112259141B (en) 2023-11-03

Family

ID=74261120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011162195.1A Active CN112259141B (en) 2020-10-27 2020-10-27 Refreshing method of dynamic random access memory, memory controller and electronic device

Country Status (1)

Country Link
CN (1) CN112259141B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000025105A (en) * 1998-10-08 2000-05-06 김영환 Memory controller
US6298413B1 (en) * 1998-11-19 2001-10-02 Micron Technology, Inc. Apparatus for controlling refresh of a multibank memory device
CN104137081A (en) * 2012-02-13 2014-11-05 国际商业机器公司 Memory reorder queue biasing preceding high latency operations
CN110729006A (en) * 2018-07-16 2020-01-24 超威半导体(上海)有限公司 Refresh scheme in a memory controller



Similar Documents

Publication Publication Date Title
CN112382321B (en) Refreshing method of dynamic random access memory, memory controller and electronic device
US9293188B2 (en) Memory and memory controller for high reliability operation and method
US9281046B2 (en) Data processor with memory controller for high reliability operation and method
US8996824B2 (en) Memory reorder queue biasing preceding high latency operations
US7698498B2 (en) Memory controller with bank sorting and scheduling
EP3729280B1 (en) Dynamic per-bank and all-bank refresh
US20210073152A1 (en) Dynamic page state aware scheduling of read/write burst transactions
CN1822224B (en) Memory device capable of refreshing data using buffer and refresh method thereof
KR101527308B1 (en) Memory interface
CN101038783B (en) Semiconductor memory, memory system, and operation method of memory system
JP7407167B2 (en) Configuring Dynamic Random Access Memory Refresh for Systems with Multiple Ranks of Memory
US11561862B2 (en) Refresh management for DRAM
KR102615693B1 (en) Refresh management for DRAM
US20180342283A1 (en) Memory device performing care operation for disturbed row and operating method thereof
CN112612596B (en) Command scheduling method, device, equipment and storage medium
JP2024512625A (en) Masking write bank groups during arbitration
CN112259141B (en) Refreshing method of dynamic random access memory, memory controller and electronic device
CN114819124A (en) Memory access performance improving method of deep neural network inference processor
CN111158585B (en) Memory controller refreshing optimization method, device, equipment and storage medium
EP4386754A1 (en) System for refreshing dynamic random access memory

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant