CN111459414B - Memory scheduling method and memory controller - Google Patents


Info

Publication number
CN111459414B
CN111459414B (application CN202010278112.9A)
Authority
CN
China
Prior art keywords
requests
memory
rank
page
scheduling method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010278112.9A
Other languages
Chinese (zh)
Other versions
CN111459414A (en)
Inventor
陈忱
金杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhaoxin Semiconductor Co Ltd
Original Assignee
VIA Alliance Semiconductor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VIA Alliance Semiconductor Co Ltd filed Critical VIA Alliance Semiconductor Co Ltd
Priority to CN202010278112.9A priority Critical patent/CN111459414B/en
Publication of CN111459414A publication Critical patent/CN111459414A/en
Application granted granted Critical
Publication of CN111459414B publication Critical patent/CN111459414B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0625Power saving in storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0658Controller construction arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A memory scheduling method is provided for a memory that includes a plurality of ranks, each rank including a plurality of bank groups, and each bank group including a plurality of pages; a memory controller schedules the memory. When the memory controller receives a plurality of requests, the scheduling method performs a rank scheduling comprising: a first statistics step of counting the number of the requests corresponding to each rank according to the ranks to which the requests respectively correspond; and a first selection step of selecting the rank with the largest number of requests for access.

Description

Memory scheduling method and memory controller
Technical Field
The present invention relates to memory scheduling methods, and more particularly to a memory scheduling method that reduces the number of memory rank switches.
Background
A memory controller accesses memory according to requests sent by a host. Each request carries its access (write or read) address, which encodes the rank, bank group, and page of the memory location to be accessed. In existing memory scheduling methods, the memory controller accesses the memory in the order in which the requests are received.
When requests that arrive in sequence point to different ranks of the memory, the memory controller must constantly switch between ranks to access the pages at the corresponding addresses, which lowers the memory's operating efficiency and increases its power consumption.
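As background, each request's access address encodes a rank, a bank group, and a page. A minimal sketch of such a decomposition follows; the field widths and bit layout here are illustrative assumptions, not taken from the patent:

```python
from collections import namedtuple

# Hypothetical layout: | rank (2 bits) | bank group (2 bits) | page (low bits) |
Request = namedtuple("Request", ["op", "rank", "bank_group", "page"])

def decode(op, addr, page_bits=16, bg_bits=2, rank_bits=2):
    """Split a flat access address into (rank, bank group, page) fields."""
    page = addr & ((1 << page_bits) - 1)
    bg = (addr >> page_bits) & ((1 << bg_bits) - 1)
    rank = (addr >> (page_bits + bg_bits)) & ((1 << rank_bits) - 1)
    return Request(op, rank, bg, page)
```

A real controller would take these widths from the DRAM configuration; the point is only that every request carries enough address information to be binned by rank and bank group.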
Disclosure of Invention
To solve the above problems, the present invention provides a memory scheduling method that preferentially selects the rank that can serve the most accesses in a short time, thereby reducing the number of rank switches. According to an embodiment of the present invention, a memory scheduling method is provided for a memory that includes a plurality of ranks, each rank including a plurality of bank groups, and each bank group including a plurality of pages; a memory controller schedules the memory. When the memory controller receives a plurality of requests, the scheduling method performs a rank scheduling comprising: a first statistics step of counting the number of the requests corresponding to each rank according to the ranks to which the requests respectively correspond; and a first selection step of selecting the rank with the largest number of requests for access.
The memory scheduling method as described above further includes: after all requests within the selected rank have been served, and the memory controller has received a plurality of second requests at least partially different from the original requests, re-executing the first statistics step and the first selection step.
In the memory scheduling method as described above, the first statistics step includes: counting, within each rank, the requests in a page-hit state, and accumulating them to obtain a first value, where a request is in the page-hit state if the page it targets has been accessed before; and determining whether each rank contains at least one request in a page-miss state, and if so, adding 1 to the first value to obtain a first statistic, where a request is in the page-miss state if the page it targets has never been accessed before. In the first selection step, the rank whose first statistic currently has the largest value is selected for access.
In the memory scheduling method as described above, accessing the selected rank further comprises performing a bank group scheduling, including: a second statistics step of counting the number of the requests of each bank group in the selected rank according to the bank groups to which the requests respectively correspond; a second selection step of selecting the bank group with the largest number of requests for access; and, after all requests within the selected bank group have been served, selecting for access the bank group with the second-largest number of requests.
In the memory scheduling method as described above, the second statistics step includes: counting, within each bank group of the currently selected rank, the requests in the page-hit state, and accumulating them to obtain a second value; and determining whether each bank group of the currently selected rank contains at least one request in the page-miss state, and if so, adding 1 to the second value to obtain a second statistic. In the second selection step, the bank group whose second statistic currently has the largest value is selected for access.
In the memory scheduling method as described above, accessing the selected bank group in the second selection step further comprises performing a page scheduling, in which each page of the selected bank group is accessed in the order in which the requests were received. In the first statistics step, the first value is kept unchanged when a rank contains no requests in the page-miss state. In the second statistics step, the second value is kept unchanged when a bank group of the selected rank contains no requests in the page-miss state.
In the memory scheduling method as described above, a request is in a page-conflict state if the page it targets differs from the page accessed by the previous request.
In the memory scheduling method as described above, when the first statistics of at least two ranks are equal, a round-robin scheme is used so that those ranks are accessed in turn. Likewise, when the second statistics of at least two bank groups are equal, a round-robin scheme is used so that those bank groups are accessed in turn.
The present invention further provides a memory controller including an arbiter, a write queue, a read queue, and a scheduling unit. The arbiter receives requests from the host and classifies them into write instructions and read instructions according to their type. The write queue stores the write instructions, and the read queue stores the read instructions. The scheduling unit stores and executes scheduling software that performs the above memory scheduling method on the received write and read instructions.
Drawings
Fig. 1 is a flowchart of a memory scheduling method according to an embodiment of the present disclosure.
Fig. 2A and 2B are detailed flowcharts of a memory scheduling method according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a memory controller for executing the flows of fig. 1, 2A and 2B according to an embodiment of the disclosure.
Detailed Description
The present invention is described with reference to the attached drawings, in which like reference numerals designate similar or identical elements throughout the figures. The figures are not drawn to scale and merely illustrate the invention. Several aspects are described below with reference to exemplary applications, and numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. Those skilled in the art will recognize, however, that the invention can be practiced without one or more of these specific details, or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders or concurrently with other acts or events; moreover, not all illustrated acts or events are required to implement a methodology in accordance with the invention.
The present invention provides a memory scheduling method for a memory that includes a plurality of ranks, each rank including a plurality of bank groups, and each bank group including a plurality of pages; a memory controller schedules the memory. Fig. 1 is a flowchart of a memory scheduling method according to an embodiment of the present disclosure. As shown in Fig. 1, the method first receives a plurality of requests (which may include write instructions or read instructions) from a host (e.g., a CPU) within a period of a few nanoseconds (step S100). In other words, because the sampling period of the method is at the nanosecond level, the method can collect multiple requests from the host as soon as the host is powered on. Next, the method performs a rank scheduling: it counts the number of the requests corresponding to each rank according to the ranks to which the requests respectively correspond (step S102). For example, suppose the memory has 4 ranks: a first, second, third, and fourth rank. After counting, the method finds that, among the received requests, 10 requests correspond to the first rank (in other words, 10 requests have access addresses located on pages within the first rank), 15 correspond to the second rank, 5 to the third rank, and 20 to the fourth rank. The method then selects the rank with the largest number of requests for access (step S104).
For example, the method selects the fourth rank for access because it has the largest number of requests (20 (fourth rank) > 15 (second rank) > 10 (first rank) > 5 (third rank)). In the memory scheduling method of the present invention, the rank scheduling may comprise steps S102 and S104.
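Steps S102 and S104 amount to a count-and-argmax over ranks. A minimal sketch, assuming each request is represented as a dictionary with a `"rank"` key (the representation is an assumption, not from the patent):

```python
from collections import Counter

def select_rank(requests):
    """S102: count requests per rank; S104: pick the rank with the most."""
    counts = Counter(req["rank"] for req in requests)
    return counts.most_common(1)[0][0]

# Example matching the text: 10/15/5/20 requests on ranks 1-4.
reqs = ([{"rank": 1}] * 10 + [{"rank": 2}] * 15 +
        [{"rank": 3}] * 5 + [{"rank": 4}] * 20)
```

Here `select_rank(reqs)` returns the fourth rank, matching the example above.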
While accessing the selected rank (e.g., the fourth rank), the method then performs a bank group scheduling: it counts the number of requests of each bank group in the selected rank according to the bank groups to which the requests respectively correspond (step S106). For example, if each rank has 4 bank groups, the selected fourth rank has a first, second, third, and fourth bank group. After counting, the method finds that, among the 20 requests corresponding to the fourth rank, 3 correspond to the first bank group, 11 to the second, 2 to the third, and 4 to the fourth. The method then selects the bank group with the largest number of requests for access (step S108); here, it selects the second bank group within the fourth rank because that group has the most requests (11 (second) > 4 (fourth) > 3 (first) > 2 (third)). In the memory scheduling method of the present invention, the bank group scheduling may comprise steps S106 and S108. Furthermore, after every page access in the second bank group is completed, the method selects the fourth bank group, which has the next-largest count, for access. In other words, the method accesses the bank groups of the selected rank in descending order of their request counts.
While accessing the selected bank group (e.g., the second bank group in the fourth rank), the method then performs a page scheduling: it accesses each page of the selected bank group in the order in which the requests were received (step S110). For example, the method serves the 11 requests corresponding to the second bank group in the fourth rank in their order of arrival. After completing those 11 requests, the method selects the bank group with the next-largest number of requests for access (not shown); here, it selects the fourth bank group in the fourth rank, which has 4 requests. The method then returns to step S110 and serves the 4 requests of the fourth bank group in the fourth rank.
After the first bank group (count = 3) and then the third bank group (count = 2) are served in turn, the method re-executes step S100. Note that in step S100, the method proceeds to steps S102 and S104 and the subsequent steps once it has received a plurality of second requests that are at least partially different from the original requests. In other words, if, after re-counting the second requests of each rank, the fourth rank still has the largest count, the method continues to access the fourth rank; if instead the first rank has the largest count, the method accesses the first rank.
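The overall flow of Fig. 1 (pick the busiest rank, serve its bank groups in descending request count, serve pages in arrival order, then repeat) can be sketched as follows, again assuming a dictionary representation for requests:

```python
from collections import Counter

def schedule_batch(requests):
    """Return the requests in the order the flow of Fig. 1 would serve them:
    busiest rank first (S102/S104), its bank groups in descending request
    count (S106/S108), and pages within a bank group in arrival order (S110)."""
    order, remaining = [], list(requests)
    while remaining:
        rank = Counter(r["rank"] for r in remaining).most_common(1)[0][0]
        in_rank = [r for r in remaining if r["rank"] == rank]
        for bg, _ in Counter(r["bg"] for r in in_rank).most_common():
            order.extend(r for r in in_rank if r["bg"] == bg)
        remaining = [r for r in remaining if r["rank"] != rank]
    return order
```

This sketch drains one rank completely before re-evaluating, mirroring the text's rule that rank scheduling is only re-run after all requests in the selected rank are served.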
Fig. 2A and Fig. 2B are detailed flowcharts of a memory scheduling method according to an embodiment of the present disclosure. As shown in Fig. 2A and Fig. 2B, assume the memory again includes 4 ranks (the first through fourth ranks). After receiving a plurality of requests in a period (step S100), the method counts, within each rank, the requests in a page-hit state, and accumulates them to obtain a first value (step S200). In some embodiments, a request is in the page-hit state if the page it targets has been accessed before. Next, the method determines whether each rank contains at least one request in a page-miss state (step S202). In some embodiments, a request is in the page-miss state if the page it targets has never been accessed before. If the result of step S202 is yes, the method executes step S204, adding 1 to the first value calculated in step S200 to obtain a first statistic. If the result is no, the method executes step S206, keeping the first value calculated in step S200 unchanged and taking it as the first statistic. The method then executes step S208, selecting the rank whose first statistic has the largest value for access.
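One reading of steps S200 through S206 can be sketched as follows: sum the page-hit requests in a rank, then add one if at least one page-miss request is present. The hit/miss/conflict classification is assumed to be precomputed per request; per the later description, page-conflict requests contribute nothing to the statistic.

```python
def first_statistics(requests_in_rank):
    """S200: count page-hit requests in the rank.
    S202/S204: add 1 once if any page-miss request exists.
    S206: otherwise keep the hit count as the first statistic."""
    hits = sum(1 for r in requests_in_rank if r["state"] == "hit")
    has_miss = any(r["state"] == "miss" for r in requests_in_rank)
    return hits + (1 if has_miss else 0)  # conflicts add nothing
```

The second statistic of steps S210 through S216 follows the same shape, applied per bank group within the selected rank instead of per rank.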
For example, assume the requests are received in the sequence: read (fourth rank, first bank group, page 3); read (third rank, first bank group, page 4); write (fourth rank, first bank group, page 3); read (first rank, second bank group, page 1); read (fourth rank, second bank group, page 2). Here, "read (fourth rank, first bank group, page 3)" denotes a read command whose access address is page 3 of the first bank group of the fourth rank, and "write (fourth rank, first bank group, page 3)" denotes a write command with the same access address. Examining "read (fourth rank, first bank group, page 3)": since page 3 of the first bank group of the fourth rank has not been accessed before (the page-miss state), the count for the fourth rank is increased by 1 (per step S204). Examining "read (third rank, first bank group, page 4)": since page 4 of the first bank group of the third rank has not been accessed before (the page-miss state), the count for the third rank is increased by 1 (per step S204).
Examining "write (fourth rank, first bank group, page 3)": since page 3 of the first bank group of the fourth rank has been accessed before (the page-hit state), the count for the fourth rank is increased by another 1, for a total of 2 (per step S200). Examining "read (first rank, second bank group, page 1)": since page 1 of the second bank group of the first rank has not been accessed before (the page-miss state), the count for the first rank is increased by 1 (per step S204). Finally, examining "read (fourth rank, second bank group, page 2)": since page 2 of the second bank group of the fourth rank has never been accessed (the page-miss state), the count for the fourth rank is increased by 1, for a total of 3 (per step S204). After counting, the method obtains a first statistic of 1 for the first rank, 0 for the second rank, 1 for the third rank, and 3 for the fourth rank. Therefore, in step S208, the rank whose first statistic has the largest value is selected for access: the fourth rank, with a first statistic of 3. In the memory scheduling method of the present invention, step S102 of Fig. 1 may comprise steps S200, S202, S204, and S206 of Fig. 2A, and step S104 of Fig. 1 may comprise step S208 of Fig. 2A.
In some embodiments, when the first statistics of at least two ranks are equal, a round-robin scheme is used so that those ranks are accessed in turn.
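The round-robin tie-break can be sketched with a simple rotating pointer; this particular mechanism (a mutable counter cycling over the tied ranks) is an assumed implementation detail, not specified by the patent:

```python
def pick_with_round_robin(stats, rr_state):
    """stats maps rank -> first statistic. On a tie for the maximum,
    rotate among the tied ranks via rr_state (a one-element mutable
    counter) so that each tied rank gets a turn."""
    best = max(stats.values())
    tied = sorted(r for r, s in stats.items() if s == best)
    choice = tied[rr_state[0] % len(tied)]
    rr_state[0] += 1
    return choice
```

The same fallback applies unchanged to bank groups when their second statistics tie.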
While accessing the selected rank (e.g., the fourth rank), the method continues to step S210: it counts, within each bank group of the currently selected rank, the requests in the page-hit state, and accumulates them to obtain a second value. Next, the method determines whether each bank group of the currently selected rank contains at least one request in the page-miss state (step S212). If yes, the method executes step S214, adding 1 to the second value calculated in step S210 to obtain a second statistic. If no, the method executes step S216, keeping the second value unchanged and taking it as the second statistic. The method then executes step S218, selecting the bank group whose second statistic has the largest value for access.
Continuing the previous example, the requests corresponding to the fourth rank are: read (fourth rank, first bank group, page 3); write (fourth rank, first bank group, page 3); read (fourth rank, second bank group, page 2). Examining "read (fourth rank, first bank group, page 3)": since page 3 of the first bank group of the fourth rank has not been accessed before (the page-miss state), the count for the first bank group of the fourth rank is increased by 1 (per step S214). Examining "write (fourth rank, first bank group, page 3)": since page 3 of the first bank group of the fourth rank has been accessed before (the page-hit state), the count for the first bank group of the fourth rank is increased by another 1, for a total of 2. Examining "read (fourth rank, second bank group, page 2)": since page 2 of the second bank group of the fourth rank has never been accessed (the page-miss state), the count for the second bank group of the fourth rank is increased by 1 (per step S214). After counting, the method obtains a second statistic of 2 for the first bank group of the fourth rank, 1 for the second bank group, and 0 for each of the third and fourth bank groups. Therefore, in step S218, the bank group whose second statistic has the largest value is selected for access: the first bank group of the fourth rank, with a second statistic of 2. In the memory scheduling method of the present invention, step S106 of Fig. 1 may comprise steps S210, S212, S214, and S216 of Fig. 2B, and step S108 of Fig. 1 may comprise step S218 of Fig. 2B. In some embodiments, when the second statistics of at least two bank groups are equal, a round-robin scheme is used so that those bank groups are accessed in turn.
While accessing the selected bank group (e.g., the first bank group in the fourth rank), the method proceeds to step S110 of Fig. 1, accessing each page of the selected bank group in the order in which the requests were received. In some embodiments, in the rank scheduling or the bank group scheduling, a request is in a page-conflict state if the page it targets differs from the page accessed by the previous request; requests in the page-conflict state do not affect the calculation of the first statistics in the rank scheduling or the second statistics in the bank group scheduling.
Fig. 3 is a schematic diagram of a memory controller for executing the flows of Fig. 1, Fig. 2A, and Fig. 2B according to an embodiment of the disclosure. As shown in Fig. 3, the memory controller 300 includes an arbiter 302, a write queue 304, a read queue 306, and a scheduling unit 308. The arbiter 302 receives the requests 310 and classifies them by type into write instructions 312, which are stored in the write queue 304, and read instructions 314, which are stored in the read queue 306. The scheduling unit 308 stores and executes scheduling software that performs the memory scheduling method of the present invention, i.e., that schedules the memory access addresses corresponding to the write instructions 312 and the read instructions 314. For example, schedule (1) in Fig. 3 (the page schedule) may correspond to step S110 of Fig. 1; it switches among different pages in the different banks (e.g., bank 0 through bank 3) of the same bank group (e.g., bank group 0) of the same rank (e.g., Rank 0). Schedule (2) (the bank group schedule) may correspond to steps S106 and S108 of Fig. 1, and also to steps S210, S212, S214, S216, and S218 of Fig. 2B; it switches among different pages in the different bank groups (the first bank group (bank group 0) through the fourth bank group (bank group 3)) of the same rank (e.g., Rank 0).
Schedule (3) of Fig. 3 (the rank schedule) may correspond to steps S102 and S104 of Fig. 1, and also to steps S200, S202, S204, S206, and S208 of Fig. 2A; it switches among different pages in the different ranks (the first rank (Rank 0) through the fourth rank (Rank 3)). In the memory scheduling method of the present invention, schedules (1), (2), and (3) are carried out by the scheduling unit 308 executing scheduling software stored in the memory controller 300. The scheduling unit 308 then transmits the instructions of the execution queue produced by schedules (1), (2), and (3) to the memory in order, so that the memory performs the corresponding access actions.
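The controller structure of Fig. 3 (arbiter feeding write and read queues, drained by the scheduling unit) can be sketched as below; the class and field names are illustrative, and the scheduling policy is passed in as a function so any of the schedules above could be plugged in:

```python
from collections import deque

class MemoryController:
    def __init__(self, scheduler):
        self.write_queue = deque()   # write instructions (cf. 304)
        self.read_queue = deque()    # read instructions (cf. 306)
        self.scheduler = scheduler   # scheduling policy (cf. unit 308)

    def arbitrate(self, requests):
        """Arbiter (cf. 302): split incoming requests by type into the queues."""
        for req in requests:
            (self.write_queue if req["op"] == "write"
             else self.read_queue).append(req)

    def issue(self):
        """Scheduling unit: order all queued requests and emit them."""
        pending = list(self.write_queue) + list(self.read_queue)
        self.write_queue.clear()
        self.read_queue.clear()
        return self.scheduler(pending)
```

A hardware controller would of course interleave arbitration and issue continuously rather than in a batch, but the data flow (requests in, queues, scheduled command stream out) is the same.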
While embodiments of the present invention have been described above, it should be understood that they are presented by way of example only, and not limitation. Many variations of the above-described exemplary embodiments can be implemented without departing from the spirit and scope of the invention; thus, the breadth and scope of the present invention should not be limited by any of the above-described embodiments, but should instead be defined by the appended claims and their equivalents. Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading the description and the annexed drawings. Furthermore, although a particular feature of the invention may have been described above with respect to only one of several implementations, such a feature may be combined with one or more other features as may be desired and advantageous for any given or particular application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, the words "comprise", "include", "have" and "provided with", and variations thereof, when used in the detailed description or the claims, are inclusive in a manner similar to the word "comprising". Unless defined otherwise, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by those skilled in the art. It should further be appreciated that terms defined in commonly used dictionaries should be construed as having the meaning they carry in the context of the related art, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Claims (14)

1. A memory scheduling method for scheduling a memory by a memory controller, the memory comprising a plurality of ranks, each rank comprising a plurality of bank groups, and each bank group comprising a plurality of pages, the scheduling method comprising performing a rank scheduling when the memory controller receives a plurality of requests, the rank scheduling comprising:
a first statistics step of counting the number of the requests corresponding to each rank according to the rank to which each of the requests corresponds, wherein the first statistics step counts the number of the requests corresponding to each rank based on the number of the requests currently in a page hit state in each rank and on whether at least one of the requests in each rank is currently in a page miss state; and
a first selection step of selecting the one of the ranks having the largest counted number of the requests for access.
2. The memory scheduling method of claim 1, further comprising: after the accesses of all requests within the selected one of the ranks are completed, and the memory controller receives a plurality of second requests at least partially different from the requests, re-executing the first statistics step and the first selection step.
3. The memory scheduling method of claim 1, wherein the first statistics step comprises:
calculating the number of the requests currently in the page hit state in each rank, and accumulating that number to obtain a first value, wherein a request is in the page hit state if the page it currently hits was accessed previously; and
determining whether at least one of the requests in each rank is in a page miss state, and adding 1 to the first value to obtain a first statistical value when at least one of the requests is in the page miss state, wherein a request is in the page miss state if the page it currently targets has never been accessed before.
4. The memory scheduling method of claim 3, wherein in the first selection step, the rank whose first statistical value is currently the largest is selected for access.
5. The memory scheduling method of claim 1, wherein when one of the ranks is selected for access in the first selection step, the scheduling method further comprises performing a memory bank group scheduling, comprising:
a second statistics step of counting the number of the requests of each memory bank group in the selected one of the ranks according to the memory bank group to which each of the requests corresponds; and
a second selection step of selecting the one of the memory bank groups having the largest counted number of the requests for access,
wherein upon completion of the accesses of all requests within the selected one of the memory bank groups, another one of the memory bank groups is selected for access, the other one having the next-largest counted number of the requests.
6. The memory scheduling method of claim 5, wherein the second statistics step comprises:
calculating the number of the requests currently in the page hit state in each memory bank group in the currently selected one of the ranks, and accumulating that number to obtain a second value; and
determining whether at least one of the requests in each memory bank group in the currently selected one of the ranks is in the page miss state, and adding 1 to the second value to obtain a second statistical value when at least one of the requests is in the page miss state.
7. The memory scheduling method of claim 6, wherein in the second selection step, the memory bank group whose second statistical value is currently the largest is selected for access.
8. The memory scheduling method of claim 5, wherein in the second selection step, accessing the selected one of the memory bank groups further comprises performing a page scheduling, comprising: accessing each page of the selected one of the memory bank groups according to the timing at which the requests were received.
9. The memory scheduling method of claim 3, wherein in the first statistics step, the first value is maintained when no request in the page miss state is present in a rank.
10. The memory scheduling method of claim 6, wherein in the second statistics step, the second value is maintained when no request in the page miss state is present in a memory bank group of the selected one of the ranks.
11. The memory scheduling method of claim 3, wherein a request is in a page conflict state if the page it currently hits is not identical to the page accessed by the previous request.
12. The memory scheduling method of claim 3, wherein when the first statistical values of at least two of the ranks are equal, the at least two ranks are accessed in turn using a round robin allocation.
13. The memory scheduling method of claim 6, wherein when the second statistical values of at least two of the memory bank groups are equal, the at least two memory bank groups are accessed in turn using a round robin allocation.
14. A memory controller, comprising:
an arbiter for receiving a plurality of requests from a host and classifying the received requests into a plurality of write instructions and a plurality of read instructions according to their types;
a write queue storing the write instructions;
a read queue storing the read instructions;
a scheduling unit storing and executing scheduling software for executing the memory scheduling method of any one of claims 1 to 13 according to the received write instructions and read instructions.
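The statistical value of claims 3, 4 and 9, together with the round-robin tie-break of claim 12, can be sketched as follows. This is an illustrative reading, not the claimed implementation: the function names and the `rr_pointer` cursor are assumptions introduced for the sketch, and page-conflict requests are modeled as contributing nothing to the value, per claim 9's rule that only page-hit and page-miss requests raise it.

```python
def first_statistic(rank_states):
    """First statistical value of one rank (claim 3): the count of
    page-hit requests, plus 1 if the rank holds any page-miss request;
    other states (e.g. page conflict) do not raise the value (claim 9)."""
    value = sum(1 for state in rank_states if state == "hit")
    if any(state == "miss" for state in rank_states):
        value += 1
    return value

def pick_rank(ranks, rr_pointer=0):
    """Select the rank with the largest first statistical value (claim 4).
    On a tie, rotate through the tied ranks round-robin (claim 12);
    rr_pointer is an illustrative cursor, not part of the claims."""
    values = {rid: first_statistic(states) for rid, states in ranks.items()}
    best = max(values.values())
    tied = sorted(rid for rid, v in values.items() if v == best)
    return tied[rr_pointer % len(tied)]
```

With ranks `{0: ["hit", "conflict"], 1: ["hit", "miss"], 2: ["hit", "hit"]}`, ranks 1 and 2 tie at value 2, so successive selections alternate between them.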
CN202010278112.9A 2020-04-10 2020-04-10 Memory scheduling method and memory controller Active CN111459414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010278112.9A CN111459414B (en) 2020-04-10 2020-04-10 Memory scheduling method and memory controller


Publications (2)

Publication Number Publication Date
CN111459414A CN111459414A (en) 2020-07-28
CN111459414B true CN111459414B (en) 2023-06-02

Family

ID=71679422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010278112.9A Active CN111459414B (en) 2020-04-10 2020-04-10 Memory scheduling method and memory controller

Country Status (1)

Country Link
CN (1) CN111459414B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200534090A (en) * 2003-12-13 2005-10-16 Samsung Electronics Co Ltd Arbiter capable of improving access efficiency of multi-bank memory device, memory access arbitration system including the same, and arbitration method thereof
CN101256827A (en) * 2007-02-26 2008-09-03 富士通株式会社 Memory controller, control method for accessing semiconductor memory and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080162852A1 (en) * 2006-12-28 2008-07-03 Surya Kareenahalli Tier-based memory read/write micro-command scheduler
US7870351B2 (en) * 2007-11-15 2011-01-11 Micron Technology, Inc. System, apparatus, and method for modifying the order of memory accesses
US8539129B2 (en) * 2010-04-14 2013-09-17 Qualcomm Incorporated Bus arbitration techniques to reduce access latency
US8332589B2 (en) * 2010-09-29 2012-12-11 International Business Machines Corporation Management of write cache using stride objects
US8819379B2 (en) * 2011-11-15 2014-08-26 Memory Technologies Llc Allocating memory based on performance ranking
US9904635B2 (en) * 2015-08-27 2018-02-27 Samsung Electronics Co., Ltd. High performance transaction-based memory systems
US10296238B2 (en) * 2015-12-18 2019-05-21 Intel Corporation Technologies for contemporaneous access of non-volatile and volatile memory in a memory device
KR20180012565A (en) * 2016-07-27 2018-02-06 에스케이하이닉스 주식회사 Non-volatile memory system using volatile memory as cache
CN108829348B (en) * 2018-05-29 2022-03-04 上海兆芯集成电路有限公司 Memory device and command reordering method


Also Published As

Publication number Publication date
CN111459414A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111566610B (en) Command selection strategy
US10776263B2 (en) Non-deterministic window scheduling for data storage systems
Qureshi et al. Improving read performance of phase change memories via write cancellation and write pausing
US9639280B2 (en) Ordering memory commands in a computer system
TWI240166B (en) Method and related apparatus for reordering access requests used to access main memory of a data processing system
CN105027211B (en) Adaptive granularity line buffer cache
KR102519019B1 (en) Ordering of memory requests based on access efficiency
US11243898B2 (en) Memory controller and method for controlling a memory device to process access requests issued by at least one master device
KR20070086640A (en) Priority scheme for executing commands in memories
US11481342B2 (en) Data storage system data access arbitration
CN102637147A (en) Storage system using solid state disk as computer write cache and corresponding management scheduling method
WO2015021919A1 (en) Method and device for data storage scheduling among multiple memories
US10515671B2 (en) Method and apparatus for reducing memory access latency
CN111459414B (en) Memory scheduling method and memory controller
CN117251275A (en) Multi-application asynchronous I/O request scheduling method, system, equipment and medium
US10872015B2 (en) Data storage system with strategic contention avoidance
JP2021135538A (en) Storage control apparatus and storage control program
CN103295627B (en) Phase transition storage, data parallel wiring method and method for reading data
KR100328726B1 (en) Memory access system and method thereof
US11966635B2 (en) Logical unit number queues and logical unit number queue scheduling for memory devices
CN107646107B (en) Apparatus and method for controlling access of memory device
US20240069808A1 (en) Logical unit number queues and logical unit number queue scheduling for memory devices
CN113655949B (en) PM-based database page caching method and system
CN116107843B (en) Method for determining performance of operating system, task scheduling method and equipment
CN112259141B (en) Refreshing method of dynamic random access memory, memory controller and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 301, 2537 Jinke Road, Zhangjiang High Tech Park, Pudong New Area, Shanghai 201203

Patentee after: Shanghai Zhaoxin Semiconductor Co.,Ltd.

Address before: Room 301, 2537 Jinke Road, Zhangjiang hi tech park, Shanghai 201203

Patentee before: VIA ALLIANCE SEMICONDUCTOR Co.,Ltd.
