CN118331480A - Memory control system and memory control method - Google Patents


Info

Publication number: CN118331480A
Application number: CN202310032114.3A
Authority: CN (China)
Prior art keywords: requests, memory, devices, circuitry, access
Filing date: 2023-01-10
Publication date: 2024-07-12
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventor: 赖奇劭
Original and current assignee: Realtek Semiconductor Corp


Abstract

Embodiments of the present disclosure relate to a memory control system and a memory control method. The memory control system includes a plurality of front-end circuitry, flow control circuitry, and a plurality of back-end circuitry. Each of the front-end circuitry receives a plurality of access requests from a corresponding one of a plurality of devices and sequentially outputs the access requests as a corresponding one of a plurality of first requests. The flow control circuitry outputs the first requests as a plurality of second requests. The back-end circuitry adjusts a task schedule of a memory according to the second requests. The performance of the devices has different sensitivities to the access latency of the memory.

Description

Memory control system and memory control method
Technical Field
The present disclosure relates to memory control systems, and more particularly to a memory control system and a memory control method applicable to a multi-channel memory.
Background
Existing memory controllers often use the concept of a decision tree to adjust the scheduling of memory accesses for better stability and predictability. However, the decision tree relies on software and/or firmware to update its decision conditions according to known conditions. As such, the memory controller described above cannot optimize the performance of latency-sensitive devices in real time. In addition, to support multi-port and multi-channel applications, some technologies use more complex flow control mechanisms to control the multi-channel memory. As such, the overall cost of the system becomes prohibitive.
Disclosure of Invention
In some implementations, an object of the present disclosure is, but is not limited to, to provide a memory control system and a memory control method that group devices based on their performance-latency relationships.
In some implementations, a memory control system includes a plurality of front-end circuitry, flow control circuitry, and a plurality of back-end circuitry. Each of the front-end circuitry is configured to receive a plurality of access requests from a corresponding one of a plurality of devices and to sequentially output the access requests as a corresponding one of a plurality of first requests. The flow control circuitry is configured to output the first requests as a plurality of second requests. The back-end circuitry is configured to adjust a task schedule of a memory according to the second requests, wherein the performance of the devices has different sensitivities to the access latency of the memory.
In some implementations, a memory control method includes the following operations: receiving a plurality of access requests from each of a plurality of devices, and sequentially outputting the access requests as a corresponding one of a plurality of first requests; outputting the first requests as a plurality of second requests and transmitting the second requests to a memory, wherein the performance of the devices has different sensitivities to the access latency of the memory; and adjusting a task schedule of the memory according to the second requests.
The features, implementations, and functions of the present disclosure are described in detail below with reference to the preferred embodiments shown in the drawings.
Drawings
FIG. 1A is a schematic diagram of a memory control system according to some embodiments of the present disclosure;
FIG. 1B is a schematic diagram of a memory control system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram depicting the flow control circuitry of FIG. 1A or FIG. 1B, according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram depicting the flow control circuitry of FIG. 1A or FIG. 1B, according to some embodiments of the present disclosure; and
FIG. 4 is a flowchart depicting a method for memory control, in accordance with some embodiments of the present disclosure.
Detailed Description
All terms used herein have their ordinary meanings. The foregoing terms are defined in commonly used dictionaries, and any usage discussed herein is exemplary only and should not be interpreted as limiting the scope and meaning of the present disclosure. Similarly, the present disclosure is not limited to the embodiments shown in this specification.
As used herein, "coupled" or "connected" may mean that two or more elements are in direct physical or electrical contact with each other, or in indirect physical or electrical contact with each other, and may also mean that two or more elements are in operation or action with each other. As used herein, the term "circuitry" may be a single system formed of at least one circuit, and the term "circuit" may be a device connected in a manner by at least one transistor and/or at least one active and passive element to process a signal.
As used herein, the term "and/or" includes any combination of one or more of the listed associated items. The terms first, second, third, etc. are used herein to describe and distinguish various elements. Thus, a first element could also be termed a second element herein without departing from the spirit of the present disclosure. For ease of understanding, like elements in the figures are designated with like reference numerals.
FIG. 1A is a schematic diagram of a memory control system 100, according to some embodiments of the present disclosure. In some embodiments, the memory control system 100 may be implemented as a single-chip system. In some embodiments, the memory control system 100 can adjust the task scheduling of the memory 150 and the order in which devices access the memory 150 according to the requirements of a plurality of groups of devices 1[1] to 1[n]. In some embodiments, the number of devices in each of the groups of devices 1[1] to 1[n] may be one or more.
The memory control system 100 includes a plurality of front-end circuitry 110[1] to 110[n], flow control circuitry 120, a plurality of back-end circuitry 130[1] to 130[m], a port physical layer circuit 140, and a plurality of data buffer circuits 145[1] to 145[m]. In some embodiments, the values n and m may each be positive integers greater than 1. Each of the front-end circuitry 110[1] to 110[n] may be coupled to a corresponding one of the groups of devices 1[1] to 1[n] via a plurality of connection ports. For example, the front-end circuitry 110[1] may be coupled to an interconnect circuit 2 via a plurality of connection ports P[1] to P[x] (which may be virtual channels), and coupled to the 1st group of devices 1[1] via the interconnect circuit 2, where the value x is a positive integer greater than 1. In some embodiments, the performance of the groups of devices 1[1] to 1[n] has different sensitivities to the access latency of the memory 150. For example, the performance of the 1st group of devices 1[1] has a first sensitivity to the access latency of the memory 150, the performance of the 2nd group of devices 1[2] has a second sensitivity to the access latency of the memory 150, and the first sensitivity is higher than the second sensitivity. In other words, multiple devices having similar or identical sensitivities to the access latency of the memory 150 can be grouped together and controlled via the same front-end circuitry. A description of the sensitivity is provided later with reference to FIG. 2. In some embodiments, the interconnect circuit 2 may include multiple types of bus circuits. For example, the interconnect circuit 2 may include, but is not limited to, advanced extensible interface (AXI) circuits.
In some embodiments, the performance of a first device has a first sensitivity to the access latency of the memory 150, the performance of a second device has a second sensitivity to the access latency of the memory 150, and if the difference between the first sensitivity and the second sensitivity is less than or equal to a predetermined range, the first sensitivity and the second sensitivity can be considered similar or identical, so that the first device and the second device can be grouped into the same group. For example, the predetermined range may be, but is not limited to, ±5%, ±10%, and/or ±20%. Alternatively, if the first sensitivity and the second sensitivity have similar or identical performance-latency correspondences (as discussed below), they may be considered similar or identical sensitivities, so that the first device and the second device may be grouped together.
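To make the grouping rule concrete, the following is a minimal C++ sketch of it. The Device struct, the numeric sensitivity values, and the greedy grouping pass are illustrative assumptions; the disclosure only requires that devices whose sensitivities differ by no more than a predetermined range end up in the same group, served by the same front-end circuitry.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical device descriptor: "sensitivity" is a normalized measure of
// how strongly the device's performance degrades as memory latency grows.
struct Device {
    const char* name;
    double sensitivity;
    int group = -1;  // -1 means not yet grouped
};

// Greedily seed a group with each ungrouped device, then pull in every other
// ungrouped device whose sensitivity lies within the predetermined range.
void groupBySensitivity(std::vector<Device>& devices, double tolerance) {
    int nextGroup = 0;
    for (auto& seed : devices) {
        if (seed.group >= 0) continue;
        seed.group = nextGroup++;
        for (auto& other : devices) {
            if (other.group < 0 &&
                std::abs(other.sensitivity - seed.sensitivity) <=
                    tolerance * seed.sensitivity) {
                other.group = seed.group;  // similar sensitivity -> same group
            }
        }
    }
}

int main() {
    std::vector<Device> devices = {
        {"cpu", 1.00},     {"coherent-engine", 0.95},  // latency-critical
        {"display", 0.40}, {"video-ctrl", 0.42},       // real-time bound
        {"gpu", 0.10},     {"dma", 0.105},             // bandwidth-oriented
    };
    groupBySensitivity(devices, 0.10);  // predetermined range of +/-10%
    for (const auto& d : devices)
        std::printf("%-15s -> group %d (front-end circuitry 110[%d])\n",
                    d.name, d.group + 1, d.group + 1);
}
```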
Each of the front-end circuitry 110[1] to 110[n] receives a plurality of access requests (indicated by dashed arrows) from a corresponding group of devices and sequentially outputs the access requests as a corresponding one of a plurality of requests R1[1] to R1[n]. For example, the front-end circuitry 110[1] may receive multiple access requests from the 1st group of devices 1[1], reorder the access requests, and sequentially output the reordered access requests as the request R1[1]. The correspondence between the other groups of devices 1[2] to 1[n], the front-end circuitry 110[2] to 110[n], and the requests R1[2] to R1[n] follows by analogy. In some embodiments, the requests R1[1] to R1[n] may be out-of-order data-reply requests converted by the front-end circuitry 110[1] to 110[n] according to the access characteristics of the memory 150. In some embodiments, each of the front-end circuitry 110[1] to 110[n] may set its capability of handling data ordering according to the requirements of the groups of devices 1[1] to 1[n].
In some embodiments, the implementation of each of the front-end circuitry 110[1] to 110[n] may refer to the first patent document (U.S. Patent No. 11,269,797) and/or the second patent document (Chinese patent application No. 202210651173.4). Taking the front-end circuitry 110[1] as an example, in some embodiments, the front-end circuitry 110[1] may include the read-order buffer of the first patent document, which may use a fixed burst length to segment the reordered access requests (e.g., the request R1[1]) and assign a unique tag identifier as an index value of the read-order buffer, thereby accessing the read-order buffer. In some embodiments, the fixed burst length may correspond to the transmission bandwidth of the read-order buffer. For example, the fixed burst length may be, but is not limited to, 64 bytes. With the above arrangement, the data capacity of the read-order buffer can be sized to meet only the maximum data throughput requirement of the 1st group of devices 1[1] (independent of the data throughput returned by the back-end circuitry 130[1] to 130[m]). In other words, the data capacities of the read-order buffers in the front-end circuitry 110[1] to 110[n] may differ from each other according to actual needs.
In some embodiments, the front-end circuitry 110[1] may include a plurality of read-order buffers respectively coupled to the connection ports P[1] to P[x] for connecting the corresponding group of devices 1[1]. In other embodiments, the front-end circuitry 110[1] may include one read-order buffer coupled to at least one connection port for connecting to the flow control circuitry 120. In some embodiments, the read-order buffer may use techniques such as the tag identifier mapping (tag ID mapping) of the first patent document to further reduce the data capacity required by the read-order buffer. In some embodiments, a "request" herein corresponds to a "transaction" of the first patent document. The detailed configuration and operation of the read-order buffer can be found in the first patent document and are not repeated herein.
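The sketch below illustrates, under stated assumptions, how such a read-order buffer could behave: requests are segmented into fixed 64-byte beats, each beat receives a unique tag that doubles as the buffer index, completions may arrive out of order, and data drains back to the device in order. The 16-slot capacity, the ReadOrderBuffer class, and its method names are hypothetical; the actual structure is the one described in the first patent document.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

// Requests are segmented into fixed 64-byte beats; each beat gets a unique
// tag that doubles as its slot index in the reorder buffer. This sketch
// omits flow control (it assumes tags are not reused while still in flight).
constexpr std::size_t kBurstBytes = 64;
constexpr std::size_t kSlots = 16;  // capacity sized to the group's peak demand

class ReadOrderBuffer {
public:
    // Split one access of `bytes` bytes into 64-byte segments, handing out
    // one tag per segment in request order.
    std::vector<unsigned> allocate(std::size_t bytes) {
        std::vector<unsigned> tags;
        for (std::size_t off = 0; off < bytes; off += kBurstBytes)
            tags.push_back(next_++ % kSlots);  // tag == buffer index
        return tags;
    }

    // Out-of-order completion from the back end: file the beat under its tag.
    void complete(unsigned tag, const std::array<uint8_t, kBurstBytes>& beat) {
        data_[tag] = beat;
        valid_[tag] = true;
    }

    // In-order drain toward the device: only pop when the head tag is ready.
    std::optional<unsigned> popInOrder() {
        if (!valid_[head_]) return std::nullopt;
        valid_[head_] = false;
        unsigned tag = head_;
        head_ = (head_ + 1) % kSlots;
        return tag;
    }

private:
    std::array<std::array<uint8_t, kBurstBytes>, kSlots> data_{};
    std::array<bool, kSlots> valid_{};
    unsigned next_ = 0, head_ = 0;
};

int main() {
    ReadOrderBuffer rob;
    auto tags = rob.allocate(128);  // one 128-byte read -> two 64-byte beats
    rob.complete(tags[1], {});      // second beat returns from memory first
    rob.complete(tags[0], {});      // then the first beat arrives
    while (auto tag = rob.popInOrder())
        std::printf("deliver beat with tag %u, in order\n", *tag);
}
```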
On the other hand, in some embodiments, the front-end circuitry 110[1] may further include the traffic scheduling circuitry of the second patent document, which may determine the output order of the access requests of the 1st group of devices 1[1] according to information such as the quality of service (QoS) level, the expiration value of the access, the upper limit on the number of outstanding requests, and the performance latency of each of the connection ports P[1] to P[x]. The detailed configuration and operation of the traffic scheduling circuitry can be found in the second patent document and are not repeated herein.
The flow control circuitry 120 is configured to output the requests R1[1] to R1[n] as a plurality of requests R2[1] to R2[m]. In some embodiments, the flow control circuitry 120 may operate as a network-on-chip that serves as a high-speed bus between the front-end circuitry 110[1] to 110[n] and the back-end circuitry 130[1] to 130[m]. In some embodiments, the flow control circuitry 120 may forward the requests R1[1] to R1[n] to the back-end circuitry 130[1] to 130[m] based on a uniform memory access (UMA) architecture. Thus, the groups of devices 1[1] to 1[n] may access the memory 150 over multiple channels via the memory control system 100. As previously described, the performance of the groups of devices 1[1] to 1[n] has different sensitivities to the access latency of the memory 150. In some embodiments, a corresponding group of devices among the groups of devices 1[1] to 1[n] (e.g., the 1st group of devices 1[1]) may be coupled to the memory 150 via at least one lowest-latency path of the flow control circuitry 120, where that group of devices has the highest sensitivity to the access latency of the memory 150. In this way, the minimum access-latency requirement of that group of devices on the memory 150 can be satisfied to maintain its performance. In some embodiments, the requests R2[1] to R2[m] may be requests reordered by the flow control circuitry 120 and may have an out-of-order data-reply characteristic.
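Under a uniform memory access architecture, one common way to spread a single address space across the channels is fixed interleaving. The sketch below assumes cache-line (64-byte) interleaving across m = 4 channels; the interleaving granularity and scheme are illustrative assumptions, not values the disclosure specifies.

```cpp
#include <cstdint>
#include <cstdio>

// Assumed interleaving: consecutive 64-byte lines rotate across the m
// channels, so every device sees one uniform address space while the
// flow control circuitry fans requests out to all back-end circuitry.
constexpr unsigned kChannels = 4;   // m = 4, matching the example of FIG. 2
constexpr unsigned kLineBytes = 64;

unsigned channelOf(uint64_t addr) {
    // Result i selects back-end circuitry 130[i+1] / channel CH(i+1).
    return static_cast<unsigned>((addr / kLineBytes) % kChannels);
}

int main() {
    for (uint64_t addr : {0x0000ull, 0x0040ull, 0x0080ull, 0x00C0ull, 0x0100ull})
        std::printf("addr 0x%04llx -> channel CH%u\n",
                    static_cast<unsigned long long>(addr), channelOf(addr) + 1);
}
```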
Specifically, for some devices, the longer the access latency of the memory 150, the more significantly the performance of the device degrades (corresponding, for example, to FIG. 3A of the second patent document). The performance of such devices has the highest sensitivity to the access latency of the memory 150. In some embodiments, such devices may include a central processing unit, circuitry that requires coherence with the caches in the system, and so on. For other devices, if the access latency of the memory 150 exceeds a certain value (e.g., a minimum bandwidth requirement is no longer met), the performance of the device starts to degrade (corresponding, for example, to FIG. 3B of the second patent document). The performance of such devices has a lower sensitivity to the access latency of the memory 150 (compared with the aforementioned central processing unit). In some embodiments, such devices may include image processing units, data engines, and/or direct memory access controllers, among others. For still other devices, if the access latency of the memory 150 exceeds a fixed value, the device immediately fails to operate correctly (corresponding, for example, to FIG. 3C of the second patent document). The performance of such devices typically has real-time requirements on the access latency of the memory 150, and if the access latency is determined to be close to the fixed value, the QoS level of the device needs to be set to the highest (e.g., set to have the highest priority). In some embodiments, such devices may include displays, video controllers, and the like.
In some embodiments, the devices that access the memory 150 may be grouped according to the above performance-latency correspondences. For example, the 1st group of devices 1[1] may include central processing units or circuitry that is coherent with the caches in the system, the 2nd group of devices 1[2] may include displays, video controllers, and the like, and the nth group of devices 1[n] may include image processing units, data engines, and/or direct memory access controllers, among others. With this arrangement, devices having the same or similar sensitivities can be divided into the same group, and preliminary arbitration is performed by the same front-end circuitry. Thus, the design complexity of the front-end circuitry 110[1] to 110[n] can be reduced.
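The three behaviors above suggest a simple classification, sketched below. The enum names, the numeric QoS levels, and the near-deadline promotion rule are assumptions introduced for illustration; the disclosure only states that a real-time device's QoS level is raised to the highest when its access latency approaches the fixed value.

```cpp
#include <cstdio>

// Assumed classification of the three behaviors described above.
enum class LatencyClass {
    kLatencyCritical,  // performance falls steadily with latency (CPU, coherent logic)
    kRealTime,         // fails outright past a deadline (display, video controller)
    kBandwidth,        // degrades only past a bandwidth floor (image unit, DMA)
};

// Hypothetical policy: a real-time device is promoted to the maximum QoS
// level once its measured latency approaches the fixed deadline.
unsigned qosLevel(LatencyClass cls, unsigned latency, unsigned deadline) {
    switch (cls) {
        case LatencyClass::kRealTime:
            // "close to the fixed value" modeled here as within 80% of it
            return (latency * 10 >= deadline * 8) ? 15u : 8u;
        case LatencyClass::kLatencyCritical:
            return 12u;  // always served ahead of bandwidth-oriented traffic
        case LatencyClass::kBandwidth:
            return 4u;
    }
    return 0u;
}

int main() {
    std::printf("display near deadline  -> QoS %u\n",
                qosLevel(LatencyClass::kRealTime, 90, 100));
    std::printf("display with headroom  -> QoS %u\n",
                qosLevel(LatencyClass::kRealTime, 40, 100));
    std::printf("cpu (latency-critical) -> QoS %u\n",
                qosLevel(LatencyClass::kLatencyCritical, 0, 0));
}
```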
The back-end circuitry 130[1] to 130[m] is coupled to the memory 150 via the port physical layer circuit 140. Specifically, the back-end circuitry 130[1] to 130[m] is respectively coupled to the channels of the memory 150 via the port physical layer circuit 140 and adjusts the task schedule of the memory 150 according to the requests R2[1] to R2[m]. In some embodiments, the port physical layer circuit 140 includes a plurality of interface circuits (not shown) respectively coupled between the back-end circuitry 130[1] to 130[m] and the channels of the memory 150. In some embodiments, the port physical layer circuit 140 may include data transceiver circuits, clock/power management circuits, command/address control circuits, data queuing circuits, and the like, to operate as a communication medium between the back-end circuitry 130[1] to 130[m] and the memory 150. In some embodiments, the back-end circuitry 130[1] to 130[m] may convert the requests R2[1] to R2[m] into a memory protocol (which may include, but is not limited to, operations such as confirming the burst type and length and aligning addresses), such that the port physical layer circuit 140 can recognize the format of the requests R2[1] to R2[m], and may reorder the output order of the requests R2[1] to R2[m] using the unique tag identifiers described above, thereby adjusting the task schedule of the memory 150. The detailed implementation and operation of the back-end circuitry 130[1] to 130[m] and the port physical layer circuit 140 can be found in the first and second patent documents and are thus omitted herein.
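As an illustration of the protocol-conversion step, the sketch below aligns a request to an assumed burst geometry and splits it into legal bursts. The 16-byte beat width and 8-beat maximum burst length are hypothetical parameters, not values from the disclosure.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Assumed burst geometry; the disclosure does not fix these values.
constexpr uint64_t kBeatBytes = 16;  // data-bus width per beat
constexpr uint64_t kMaxBeats = 8;    // maximum burst length (e.g., BL8)

struct MemBurst {
    uint64_t addr;   // aligned start address
    uint64_t beats;  // burst length in beats
};

// Confirm burst type/length and align the address: an arbitrary request is
// widened to beat boundaries and split into bursts the memory can accept.
std::vector<MemBurst> toMemoryProtocol(uint64_t addr, uint64_t bytes) {
    std::vector<MemBurst> out;
    uint64_t start = addr & ~(kBeatBytes - 1);                           // align down
    uint64_t end = (addr + bytes + kBeatBytes - 1) & ~(kBeatBytes - 1);  // align up
    for (uint64_t a = start; a < end;) {
        uint64_t beats = std::min((end - a) / kBeatBytes, kMaxBeats);
        out.push_back({a, beats});
        a += beats * kBeatBytes;
    }
    return out;
}

int main() {
    // An unaligned 200-byte request becomes one 8-beat and one 5-beat burst.
    for (const auto& b : toMemoryProtocol(/*addr=*/0x1013, /*bytes=*/200))
        std::printf("burst at 0x%04llx, %llu beats\n",
                    static_cast<unsigned long long>(b.addr),
                    static_cast<unsigned long long>(b.beats));
}
```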
A plurality of data buffer circuits 145[1] to 145[m] are respectively coupled to the back-end circuitry 130[1] to 130[m]. In this example, each of the data buffer circuits 145[1] to 145[m] is used by a corresponding one of the back-end circuitry 130[1] to 130[m]. For example, the back-end circuitry 130[1] may use the data buffer circuit 145[1]. If the request R2[1] is a read request, the back-end circuitry 130[1] may read data DT from the memory 150 in response to the request R2[1]. When the back-end circuitry 130[1] cannot return the data DT to the circuit portion of the flow control circuitry 120 coupled to the back-end circuitry 130[1] (e.g., when that circuit portion cannot yet receive the data DT), the back-end circuitry 130[1] may register the data DT in the corresponding data buffer circuit 145[1]. When the circuit portion can receive the data DT, the back-end circuitry 130[1] can transfer the data DT from the data buffer circuit 145[1] to the circuit portion. In this way, the scheduling performance of the memory 150 is not affected, and the data DT is guaranteed not to be lost.
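A minimal sketch of that buffering behavior follows, assuming a simple ready signal from the flow control circuitry: data is forwarded immediately when the downstream can take it and parked in a FIFO otherwise, so the memory scheduler never stalls. The class and method names are hypothetical.

```cpp
#include <cstdint>
#include <cstdio>
#include <queue>

// The back end never stalls the memory scheduler: read data either goes
// straight to the flow control circuitry or is parked in the buffer until
// the downstream circuit portion can accept it.
class DataBufferCircuit {
public:
    // `downstreamReady` models whether the coupled circuit portion of the
    // flow control circuitry can currently receive the data DT.
    void onReadData(uint64_t dt, bool downstreamReady) {
        if (downstreamReady && fifo_.empty())
            send(dt);        // fast path: forward immediately
        else
            fifo_.push(dt);  // register DT; memory scheduling continues
    }

    // Called when the circuit portion becomes able to receive data again.
    void onDownstreamReady() {
        while (!fifo_.empty()) {  // drain in first-in-first-out order
            send(fifo_.front());
            fifo_.pop();
        }
    }

private:
    void send(uint64_t dt) {
        std::printf("DT 0x%llx -> flow control circuitry\n",
                    static_cast<unsigned long long>(dt));
    }
    std::queue<uint64_t> fifo_;
};

int main() {
    DataBufferCircuit buf;
    buf.onReadData(0x11, /*downstreamReady=*/false);  // parked
    buf.onReadData(0x22, /*downstreamReady=*/false);  // parked behind it
    buf.onDownstreamReady();                          // both drain in order
    buf.onReadData(0x33, /*downstreamReady=*/true);   // forwarded directly
}
```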
In some embodiments, each of the data buffer circuits 145[1] to 145[m] may be a first-in first-out (FIFO) circuit, but the present disclosure is not limited thereto. In some embodiments, the memory 150 may be, but is not limited to, a synchronous dynamic random access memory (SDRAM).
FIG. 1B is a schematic diagram of a memory control system 105 according to some embodiments of the present disclosure. In this example, the memory control system 105 uses fewer data buffer circuits than the arrangement of FIG. 1A. The memory control system 105 includes a plurality of data buffer circuits 245[1] to 245[y], where the value y is a positive integer greater than 1 and less than the value m.
In this example, at least two of the back-end circuitry 130[1] to 130[m] may share one of the data buffer circuits 245[1] to 245[y]. For example, the data buffer circuit 245[1] is coupled between the back-end circuitry 130[1] and 130[2], and the back-end circuitry 130[1] and 130[2] may share the data buffer circuit 245[1]. The back-end circuitry 130[1] and 130[2] sharing the same data buffer circuit 245[1] are coupled to at least two adjacent channels (e.g., the channels CH1 and CH2) of the memory 150. The at least two adjacent channels have similar or identical data transmission frequencies and similar or identical data throughput, to reduce the probability of blocking data access. Similarly, the back-end circuitry 130[m-1] and 130[m] may share the data buffer circuit 245[y]. By sharing the same data buffer, circuit area and cost can be further reduced.
FIG. 2 is a schematic diagram depicting the flow control circuitry 120 of FIG. 1A or FIG. 1B, according to some embodiments of the present disclosure. For ease of understanding, in the example of FIG. 2, the value m is 4 and the value n is 3, but the present disclosure is not limited thereto. The flow control circuitry 120 includes a plurality of target agent circuits 210[1] to 210[4], a plurality of transfer circuits 220[1] to 220[4], a traffic scheduling circuit 230, and a plurality of arbiter circuits 240[1] to 240[3]. In addition, the 1st group of devices 1[1] comprises the devices with the highest sensitivity (e.g., devices that include a central processing unit and/or are coherent with the caches in the system); the 2nd group of devices 1[2] comprises devices that have real-time requirements on access latency (e.g., displays, video controllers, and the like); and the 3rd group of devices 1[3] comprises devices such as image processing units, data engines, direct memory access controllers, and the like.
The target agent circuits 210[1] to 210[4] are respectively coupled to the back-end circuitry 130[1] to 130[4] of FIG. 1A or FIG. 1B (in this example, the value m is 4) and output the requests R1[1] to R1[3] as the requests R2[1] to R2[4]. The transfer circuit 220[1] is coupled to the target agent circuits 210[1] and 210[2]. The transfer circuit 220[2] is coupled to the target agent circuits 210[3] and 210[4]. The transfer circuit 220[3] is coupled to the transfer circuits 220[1] and 220[2]. The transfer circuit 220[4] is coupled to the transfer circuit 220[3]. The transfer circuits 220[1] to 220[4] may transfer the received requests to at least one corresponding circuit among the target agent circuits 210[1] to 210[4].
In some embodiments, each of the transfer circuits 220[1] to 220[4] may include a router (not shown) and a switch (not shown). The router may be responsible for handling the requests and the data transfer paths of the target agent circuits 210[1] to 210[4] and/or the arbiter circuits 240[1] to 240[3], while the switch is responsible for managing or scheduling the data transfer order of the transfer circuits 220[1] to 220[4]. In some embodiments, the router may utilize a lookup table to transfer data; for example, it may process received requests according to the address maps of the read address channel and/or write address channel, and look up responses in the lookup table according to the tag identifiers of the read data channel and/or write response channel.
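A sketch of such lookup-table routing follows, assuming AXI-style channels as the interconnect suggests: addresses on the read/write address channels are decoded against an address map to pick a target agent circuit, and tags on the returning read-data/write-response channels are looked up to find the originating arbiter. The concrete address ranges and table layout are illustrative assumptions.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// Forward path: decode the address from a read/write address channel
// against an address map to pick a target agent circuit.
struct AddrRange {
    uint64_t base, size;
    int targetAgent;  // index i of target agent circuit 210[i]
};

int decodeTarget(uint64_t addr, const std::vector<AddrRange>& addrMap) {
    for (const auto& r : addrMap)
        if (addr >= r.base && addr < r.base + r.size)
            return r.targetAgent;
    return -1;  // decode error: no matching range
}

int main() {
    // Illustrative address map: four equal windows, one per target agent.
    std::vector<AddrRange> addrMap = {
        {0x00000000, 0x10000000, 1},
        {0x10000000, 0x10000000, 2},
        {0x20000000, 0x10000000, 3},
        {0x30000000, 0x10000000, 4},
    };
    std::printf("addr 0x12345678 -> target agent 210[%d]\n",
                decodeTarget(0x12345678, addrMap));

    // Return path: the tag identifier carried on the read data / write
    // response channel is looked up to find the originating arbiter.
    std::unordered_map<unsigned, int> tagToArbiter;
    tagToArbiter[7] = 1;  // recorded when the request with tag 7 passed by
    std::printf("response tag 7 -> arbiter circuit 240[%d]\n",
                tagToArbiter.at(7));
}
```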
The arbiter circuits 240[1] to 240[3] are configured to output the requests R1[1] to R1[3] to the transfer circuits 220[1] to 220[4] according to a plurality of control signals VC1 to VC3. The arbiter circuit 240[1] may reorder the received requests R1[1] according to the control signal VC1 and sequentially output the reordered requests R1[1] to the transfer circuits 220[1] and/or 220[2]. The arbiter circuit 240[2] may reorder the received requests R1[2] according to the control signal VC2 and sequentially output the reordered requests R1[2] to the transfer circuit 220[3]. The arbiter circuit 240[3] may reorder the received requests R1[3] according to the control signal VC3 and sequentially output the reordered requests R1[3] to the transfer circuits 220[1] and/or 220[2].
In some embodiments, each of the arbiter circuits 240[1] to 240[3] may reorder requests according to their priorities. Thus, access requests with a higher priority are output to the corresponding transfer circuit first, without being blocked by access requests with a lower priority. In some embodiments, each of the arbiter circuits 240[1] to 240[3] may include an initiator and a regulator, which may adjust the output rate of the requests (e.g., the requests R1[1] to R1[3]) under the control of the traffic scheduling circuit 230 (e.g., via the control signals VC1 to VC3).
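The sketch below combines those two roles, under stated assumptions: a priority queue models the reordering (higher priority drains first), and a per-cycle rate cap models the regulator, with the cap standing in for the control signals VC1 to VC3. The names and the rate mechanism are hypothetical.

```cpp
#include <cstdio>
#include <queue>
#include <vector>

struct Request {
    int id;
    unsigned priority;  // higher value = more urgent
};

struct ByPriority {
    bool operator()(const Request& a, const Request& b) const {
        return a.priority < b.priority;  // max-heap: highest priority on top
    }
};

// The initiator reorders pending requests by priority; the regulator caps
// how many may leave per cycle, standing in for the control-signal input.
class ArbiterCircuit {
public:
    void push(const Request& r) { pending_.push(r); }
    void setRate(unsigned perCycle) { rate_ = perCycle; }  // e.g., from VC1

    std::vector<Request> drainOneCycle() {
        std::vector<Request> out;
        for (unsigned i = 0; i < rate_ && !pending_.empty(); ++i) {
            out.push_back(pending_.top());  // high priority is never blocked
            pending_.pop();                 // behind lower-priority requests
        }
        return out;
    }

private:
    std::priority_queue<Request, std::vector<Request>, ByPriority> pending_;
    unsigned rate_ = 1;
};

int main() {
    ArbiterCircuit arb;
    arb.push({1, 2});
    arb.push({2, 9});  // urgent request arrives later but leaves first
    arb.push({3, 5});
    arb.setRate(2);    // regulator allows two requests this cycle
    for (const Request& r : arb.drainOneCycle())
        std::printf("output request %d (priority %u)\n", r.id, r.priority);
}
```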
The traffic scheduling circuit 230 may analyze the requests R1[1] to R1[3] to generate the control signals VC1 to VC3. In some embodiments, the traffic scheduling circuit 230 may be implemented with the traffic scheduling circuitry of the second patent document, which may generate the control signals VC1 to VC3 according to the QoS levels of the requests R1[1] to R1[3], the expiration value of the access, the upper limit on the number of outstanding requests, and the performance latency. The detailed configuration and operation of the traffic scheduling circuit 230 can be found in the second patent document and are not repeated herein.
In this example, since the performance of the 1st group of devices 1[1] has the highest sensitivity to the access latency of the memory 150, the arbiter circuit 240[1], which is coupled to receive the requests R1[1] from the 1st group of devices 1[1], may be coupled to the transfer circuit 220[1] via a path P1 and to the transfer circuit 220[2] via a path P2, where the paths P1 and P2 are the lowest-latency paths in the flow control circuitry 120. For example, the path P1 is coupled to the memory 150 via only one transfer circuit (e.g., the transfer circuit 220[1]) and one target agent circuit (e.g., the target agent circuit 210[1] or 210[2]), and the path P2 is coupled to the memory 150 via only one transfer circuit (e.g., the transfer circuit 220[2]) and one target agent circuit (e.g., the target agent circuit 210[3] or 210[4]). In other words, the flow control circuitry 120 may connect the 1st group of devices 1[1] to the memory 150 using the paths P1 and P2, which are structurally the lowest-latency paths. Thus, the minimum access-latency requirement of the 1st group of devices 1[1] on the memory 150 can be met as much as possible to maintain the performance of the 1st group of devices 1[1].
In some embodiments, due to practical layout and timing constraints, the flow control circuitry 120 may further include a plurality of register circuits (not shown) coupled between the arbiter circuits 240[1] to 240[3] and the target agent circuits 210[1] to 210[4] to perform pipelining. In this case, the circuits associated with the 1st group of devices 1[1] (e.g., the transfer circuits 220[1] and 220[2]) may be disposed adjacent to the middle channels of the memory 150 to reduce the number of register circuits used. Thus, the latency of the 1st group of devices 1[1] in accessing the memory 150 can be further reduced.
FIG. 3 is a schematic diagram depicting the flow control circuitry 120 of FIG. 1A or FIG. 1B, according to some embodiments of the present disclosure. As in the example of FIG. 2, for ease of understanding, in the example of FIG. 3 the value m is 4 and the value n is 3, but the present disclosure is not limited thereto. Compared with the example of FIG. 2, the flow control circuitry 120 in this example further includes a last-level cache 350.
In this example, the arbiter circuits 240[1] and 240[3] are instead coupled to the last-level cache 350. The last-level cache 350 may be used by at least one of the groups of devices 1[1] to 1[3]. For example, the last-level cache 350 may be coupled to the 1st group of devices 1[1] and the 3rd group of devices 1[3] to receive the reordered requests R1[1] and R1[3]. In some applications, some data may be shared among some of the devices and stored in the last-level cache 350. For example, some of the 1st group of devices 1[1] and/or the 3rd group of devices 1[3] may store shared data (denoted as data DS) in the last-level cache 350. When the data to be accessed by the request R1[1] or R1[3] is the data DS stored in the last-level cache 350, the last-level cache 350 can return the corresponding data DS directly without accessing the memory 150. Thus, the performance of the whole system can be further improved. When the data to be accessed by the request R1[1] or R1[3] is not shared data stored in the last-level cache 350, the last-level cache 350 may forward the request to the transfer circuit 220[1] or the transfer circuit 220[2].
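The hit/miss behavior just described amounts to a simple lookup, sketched below under stated assumptions: a hit returns the shared data DS without touching the memory 150, and a miss forwards the request toward a transfer circuit. The map-based cache, its capacity, and the method names are illustrative; replacement policy and coherence are omitted.

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>

// On a hit the cache answers with the shared data DS without touching the
// memory 150; on a miss it forwards the request toward a transfer circuit.
class LastLevelCache {
public:
    void fill(uint64_t addr, uint64_t ds) { lines_[addr] = ds; }  // store DS

    void access(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it != lines_.end())
            std::printf("hit : return DS 0x%llx, memory 150 not accessed\n",
                        static_cast<unsigned long long>(it->second));
        else
            std::printf("miss: forward request for 0x%llx to transfer "
                        "circuit 220[1] or 220[2]\n",
                        static_cast<unsigned long long>(addr));
    }

private:
    std::unordered_map<uint64_t, uint64_t> lines_;  // address -> shared data
};

int main() {
    LastLevelCache llc;
    llc.fill(0x1000, 0xABCD);  // a device in group 1[1] stores shared data
    llc.access(0x1000);        // a request R1[3] hits the shared data
    llc.access(0x2000);        // an unshared address falls through to memory
}
```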
The circuit arrangements shown in FIG. 2 and FIG. 3 are only examples, and the present disclosure is not limited thereto. Various arrangements that accomplish the same type of functions and/or operations are contemplated by the present disclosure.
FIG. 4 is a flowchart depicting a memory control method 400 according to some embodiments of the present disclosure. In operation S410, a plurality of access requests from each of a plurality of devices are received, and the access requests are sequentially output as a corresponding one of a plurality of first requests. In operation S420, the first requests are output as a plurality of second requests, and the second requests are transmitted to a memory, wherein the performance of the devices has different sensitivities to the access latency of the memory. In operation S430, the task schedule of the memory is adjusted according to the second requests.
For the above description of the operations of the memory control method 400, reference may be made to the above embodiments, and it is thus not repeated herein. The above operations are merely examples and are not limited to being performed in the order of this example. Various operations of the memory control method 400 may be added, replaced, omitted, or performed in a different order as appropriate without departing from the manner of operation and scope of the various embodiments of the present disclosure. Alternatively, one or more operations of the memory control method 400 may be performed concurrently or partially concurrently.
In summary, the memory control system and the memory control method provided in some embodiments of the present disclosure can utilize the front-end circuitry to divide devices having similar or identical sensitivities into the same group. Thus, the overall control complexity can be reduced at the system level, and control of the multi-channel memory can be accomplished under a uniform memory access architecture.
Although the embodiments of the present disclosure have been described above, the present disclosure is not limited thereto. Those skilled in the art can make various changes to the technical features of the present disclosure according to its explicit or implicit teachings, and all such changes may fall within the scope of protection sought herein. In other words, the scope of protection of the present disclosure shall be defined by the claims.
[Symbol description]
1[1] to 1[n]: groups of devices
2: interconnect circuit
100, 105: memory control system
110[1] to 110[n]: front-end circuitry
120: flow control circuitry
130[1] to 130[m]: back-end circuitry
140: port physical layer circuit
145[1] to 145[m], 245[1] to 245[y]: data buffer circuits
150: memory
210[1] to 210[4]: target agent circuits
220[1] to 220[4]: transfer circuits
230: traffic scheduling circuit
240[1] to 240[3]: arbiter circuits
350: last-level cache
400: memory control method
CH1, CH2: channels
DS, DT: data
P[1] to P[x]: connection ports
P1, P2: paths
R1[1] to R1[n], R2[1] to R2[m]: requests
S410, S420, S430: operations
VC1 to VC3: control signals

Claims (10)

1. A memory control system, comprising:
a plurality of front-end circuitry, wherein each of the plurality of front-end circuitry is configured to receive a plurality of access requests from a corresponding device of a plurality of devices and to sequentially output the plurality of access requests as a corresponding one of a plurality of first requests;
flow control circuitry configured to output the plurality of first requests as a plurality of second requests; and
a plurality of back-end circuitry configured to adjust a task schedule of a memory according to the plurality of second requests, wherein performance of the plurality of devices has different sensitivities to an access latency of the memory.
2. The memory control system of claim 1, wherein the plurality of devices are divided into a plurality of groups of devices, and performance of the devices within a same group of the plurality of groups of devices has a similar or identical sensitivity to the access latency of the memory.
3. The memory control system of claim 2, wherein a first group of devices of the plurality of groups of devices is coupled to the memory via a lowest-latency path of the flow control circuitry, and performance of the first group of devices has a highest sensitivity to the access latency of the memory.
4. The memory control system of claim 1, wherein the flow control circuitry comprises:
a plurality of target agent circuits configured to output the plurality of first requests as the plurality of second requests;
a plurality of transfer circuits coupled to the plurality of target agent circuits and configured to output the plurality of first requests to the plurality of target agent circuits;
a traffic scheduling circuit configured to analyze a processing order of the plurality of first requests to generate a plurality of control signals; and
a plurality of arbiter circuits respectively coupled to the plurality of devices and configured to output the plurality of first requests to the plurality of transfer circuits according to the plurality of control signals.
5. The memory control system of claim 4, wherein the flow control circuitry further comprises:
a last-level cache coupled to at least one device of the plurality of devices via one of the plurality of arbiter circuits and configured to return shared data to the at least one device according to at least one of the plurality of first requests issued by the at least one device.
6. The memory control system of claim 1, further comprising:
a plurality of data buffer circuits, wherein a corresponding one of the plurality of back-end circuitry is further configured to register, in a corresponding one of the plurality of data buffer circuits, data read from the memory according to a corresponding one of the plurality of second requests.
7. The memory control system of claim 6, wherein at least two of the plurality of back-end circuitry share one of the plurality of data buffer circuits.
8. The memory control system of claim 7, wherein the memory includes a plurality of channels, and the at least two of the plurality of back-end circuitry are coupled to the memory via at least two adjacent channels of the plurality of channels.
9. The memory control system of claim 6, wherein the corresponding one of the plurality of back-end circuitry is configured to register the data in the corresponding data buffer circuit when the data cannot be returned to the flow control circuitry.
10. A memory control method, comprising:
receiving a plurality of access requests from each of a plurality of devices, and sequentially outputting the plurality of access requests as a corresponding one of a plurality of first requests;
outputting the plurality of first requests as a plurality of second requests and transmitting the plurality of second requests to a memory, wherein performance of the plurality of devices has different sensitivities to an access latency of the memory; and
adjusting a task schedule of the memory according to the plurality of second requests.