CN117873928A - Controller, access request processing method and electronic equipment - Google Patents



Publication number
CN117873928A
CN117873928A
Authority
CN
China
Prior art keywords
queue
cache
access request
storage pool
controller
Prior art date
Legal status (assumed; not a legal conclusion)
Pending
Application number
CN202311872784.2A
Other languages
Chinese (zh)
Inventor
王伟 (Wang Wei)
刘存 (Liu Cun)
王燚 (Wang Yi)
Current Assignee (listing may be inaccurate)
Dingdao Zhixin Shanghai Semiconductor Co ltd
Original Assignee
Dingdao Zhixin Shanghai Semiconductor Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Dingdao Zhixin Shanghai Semiconductor Co., Ltd.
Priority application: CN202311872784.2A
Publication: CN117873928A
Legal status: Pending

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application discloses a controller, an access request processing method, and an electronic device. The controller comprises, in order: a port queue, a cache queue processing module, and a cache module. The port queue is used to sequentially receive access requests sent by at least one processor of the electronic device; an access request is a request by a processor to access a memory of the electronic device. The cache queue processing module is used to receive and process the access requests in the port queue. Specifically, for a first storage pool in the cache module (where the first storage pool is any storage pool in the cache module): when the first storage pool is in a blocked state, a first access request corresponding to the first storage pool is buffered; when the first storage pool is in a non-blocked state, the first access request corresponding to the first storage pool is transmitted to the cache module, so that the memory is accessed based on the access requests in the cache module.

Description

Controller, access request processing method and electronic equipment
Technical Field
The present application relates to the field of data processing technologies, and in particular to a controller, a method for processing an access request, and an electronic device.
Background
In current Double Data Rate (DDR) controller designs, a DDR controller supports multiple ports, each of which receives access requests from a host (master).
Each host sends access requests with different quality-of-service (Quality of Service, QoS) levels into the controller. The DDR controller prioritizes DDR bandwidth utilization and performs arbitration scheduling on the access requests sent by the hosts to access the DDR.
Due to scheduling requirements, read and write requests from different hosts are cached in multiple shared storage pools within one configurable content-addressable memory (content addressable memory, CAM). For example, high-QoS requests are buffered in pool 1 and low-QoS requests in pool 2.
After a request enters the DDR controller, it first enters the port's queue, where it waits to enter the CAM.
Thus, suppose the low-QoS pool in the CAM is full but the high-QoS pool is not, and the first request in the port queue is a low-QoS request followed by a high-QoS request. The low-QoS request at the head of the queue cannot enter the CAM and therefore blocks the subsequent high-QoS request from entering the CAM. This is the head-of-line (HOL) blocking problem.
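A minimal simulation of this failure mode (pool names, capacities, and request names are illustrative, not from the patent) shows how a full low-QoS pool stalls a high-QoS request queued behind it in a strict FIFO port queue:

```python
from collections import deque

# Illustrative CAM: two pools with fixed capacity (sizes are made up).
pools = {"low": {"capacity": 2, "used": 2},   # low-QoS pool already full
         "high": {"capacity": 2, "used": 0}}  # high-QoS pool has space

# Port queue is strict FIFO: a low-QoS request sits at the head.
port_queue = deque([("req1", "low"), ("req2", "high")])

admitted = []
while port_queue:
    name, qos = port_queue[0]
    pool = pools[qos]
    if pool["used"] >= pool["capacity"]:
        break  # head cannot enter the CAM, so everything behind it stalls
    pool["used"] += 1
    admitted.append(name)
    port_queue.popleft()

# req2 (high QoS) is stuck behind req1 even though its own pool has space:
print(admitted)         # []
print(len(port_queue))  # 2
```

The high-QoS pool stays empty even though it could accept req2; this is exactly the situation the two-level buffering below is designed to avoid.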
Disclosure of Invention
To solve the above problem, the present application provides a controller, an access request processing method, and an electronic device.
The technical scheme of the application is realized as follows:
in a first aspect, the present application provides a controller of an electronic device, the controller comprising, in order: a port queue, a cache queue processing module, and a cache module;
the port queue is configured to sequentially receive access requests sent by at least one processor of the electronic device; an access request is a request from a processor to access a memory of the electronic device;
the cache queue processing module is configured to receive and process the access requests in the port queue;
specifically, for a first storage pool in the cache module, when the first storage pool is in a blocked state, a first access request corresponding to the first storage pool is buffered; the first storage pool is any storage pool in the cache module;
when the first storage pool is in a non-blocked state, the first access request corresponding to the first storage pool is transmitted to the cache module, so that the memory is accessed based on the access requests in the cache module.
In a second aspect, the present application provides a method for processing an access request, applied to a controller of an electronic device, the method comprising:
sequentially receiving access requests sent by at least one processor of the electronic device, and sequentially transmitting them to a port queue of the controller; an access request is a request from a processor to access a memory of the electronic device;
dispatching the access requests in the port queue to a cache queue processing module of the controller;
processing the access requests in the cache queue processing module;
specifically, for a first storage pool in a cache module of the controller, when the first storage pool is in a blocked state, buffering a first access request corresponding to the first storage pool; the first storage pool is any storage pool in the cache module;
when the first storage pool is in a non-blocked state, transmitting the first access request corresponding to the first storage pool to the cache module, so that the memory is accessed based on the access requests in the cache module.
In a third aspect, the present application further provides an electronic device, including: a memory and a controller as provided in the first aspect above.
In a fourth aspect, the present application further provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method for processing an access request provided in the second aspect.
Drawings
FIG. 1 is a schematic diagram of a first alternative configuration of a controller according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a second alternative configuration of a controller according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a third alternative configuration of a controller according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a fourth alternative configuration of a controller according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a fifth alternative configuration of a controller according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a sixth alternative configuration of a controller according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of an alternative method for processing an access request according to an embodiment of the present application;
FIG. 8 is a schematic flow chart of another alternative method for processing an access request according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are illustrative of the present application, but are not intended to limit the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", and "third" are merely used to distinguish different objects; they do not denote a specific ordering or precedence. It will be appreciated that "first", "second", and "third" may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
The embodiments of the present application provide a method, an apparatus, a device, and a storage medium for processing an access request. In practice, the processing method can be implemented by the access request processing apparatus, and the functional entities in the apparatus can be implemented cooperatively by hardware resources of the electronic device, such as computing resources (e.g., a processor) and communication resources (e.g., optical cables and cellular links supporting communication).
Next, embodiments of a controller of an electronic device, a processing method of an access request, an electronic device, and a storage medium provided in embodiments of the present application are described.
In a first aspect, an embodiment of the present application provides a controller of an electronic device, where the controller is configured to receive an access request, schedule the access request, and access a memory of the electronic device.
The embodiment of the present application does not limit the specific type of the electronic device. In one possible implementation, the memory may be a DDR memory, and the corresponding controller is a DDR controller.
It will be appreciated that the controller may also be a controller of other devices, which are not listed here.
Referring to what is shown in fig. 1, the controller 10 may include a port queue 101, a cache queue processing module 102, and a cache module 103 in this order.
Briefly, the received access request 10A passes through the port queue 101, then is transferred from the port queue 101 to the cache queue processing module 102, and then is transferred from the cache queue processing module 102 to the cache module 103, so that the access request in the cache module 103 is invoked to access the memory 20.
Wherein for port queue 101:
the port queue 101 is configured to sequentially receive access requests 10A sent by at least one processor of the electronic device.
The access request 10A is a request by a processor to access the memory 20 of the electronic device.
The access request 10A is used to request access to the memory 20 of the electronic device, and the type of access request is not particularly limited in the embodiments of the present application. For example, the access request 10A may be a read request, a write request, a query request, or a delete request, among others.
Here, the number of port queues 101 and the manner of receiving the access request are not particularly limited, and may be configured according to actual demands.
The number of port queues 101 may be one or more.
In one possible implementation, a plurality of port queues 101 may each receive the access requests 10A sent by a corresponding one of a plurality of processors. For example, one port queue 101 receives the access requests 10A sent by one processor.
The rule for receiving a plurality of access requests is not particularly limited and may be configured according to actual requirements; for example, the plurality of access requests 10A may be received by polling in order of arrival time.
The port queue 101 processes access requests in the queue based on a first-in-first-out (first in first out, FIFO) mechanism. For example, when the access request 1, the access request 2 and the access request 3 enter the port queue 101 in sequence, the corresponding access request 1, the access request 2 and the access request 3 flow out of the port queue 101 in sequence.
After passing through the port queue 101, the access request 10A arrives at the cache queue processing module 102.
It can be seen that the first-in-first-out mechanism of port queue 101 can maintain fairness of access requests to some extent.
For the cache queue processing module 102:
the cache queue processing module 102 is configured to receive and process the access request 10A in the port queue 101.
The cache queue processing module 102 may receive the access requests 10A from the port queue 101 and process the received access requests 10A.
The specific manner in which the cache queue processing module 102 may receive the access request 10A in the port queue 101 is not limited in this embodiment. In an example, the cache queue processing module 102 may sequentially receive the access requests 10A in the port queue 101 based on a polling rule.
Next, the processing procedure of the access request 10A by the cache queue processing module 102 will be described.
For the first storage pool 1031 in the cache module 103, in a case where the first storage pool 1031 is in a blocked state, the first access request corresponding to the first storage pool 1031 is cached.
The first storage pool 1031 is any storage pool in the cache module.
In one example, the caching module 103 includes two pools, pool 1 for caching high priority access requests and pool 2 for caching low priority access requests. Correspondingly, the first storage pool 1031 may be the storage pool 1 or the storage pool 2.
It will be appreciated that in practice, the buffer module 103 may be divided into more storage pools, and the first storage pool 1031 may be any one of a plurality of storage pools, which are not illustrated herein.
The first storage pool 1031 being in a blocked state indicates that the first storage pool 1031 has no remaining storage space, i.e., it is full.
When the first storage pool 1031 is in a blocked state, access requests cannot continue to be scheduled into the first storage pool 1031 of the cache module 103. Therefore, the first access request corresponding to the first storage pool 1031 is buffered by the cache queue processing module 102 to alleviate the blocking of the first storage pool.
The specific number of cache queues included in the cache queue processing module 102 is not specifically limited, and may be configured according to actual requirements.
In one possible implementation, the cache queue processing module 102 includes a cache queue, through which all access requests are cached.
In another possible implementation, the number of cache queues included in the cache queue processing module 102 is consistent with the number of storage pools, and one cache queue is used to cache access requests corresponding to one storage pool.
In yet another possible implementation, the number of cache queues included in the cache queue processing module 102 may be greater than the number of memory pools, and the access request corresponding to one memory pool may be cached in one or more cache queues.
The cache queue processing module 102 may also receive status information from the cache module 103. When the cache queue processing module 102 receives first indication information indicating that the first storage pool is blocked (its storage space is fully occupied), it buffers the access requests corresponding to the first storage pool 1031, so that access requests destined for the first storage pool 1031 are held in the cache queue processing module 102. In this way, access requests can still enter the port queue 101 and flow from the port queue 101 into the cache queue processing module 102, which neither causes the HOL problem nor prevents the other port queues from receiving access requests.
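As a rough sketch of this behavior (class and variable names are invented for illustration, and the one-queue-per-pool layout is one of the implementations described above), the module can be modeled as a buffer per storage pool: a request whose pool is blocked is parked in that pool's buffer, while requests for other pools keep flowing into the CAM:

```python
from collections import deque

class CacheQueueProcessor:
    """Toy model: one cache queue per storage pool (an assumed layout)."""
    def __init__(self, pool_names):
        self.queues = {name: deque() for name in pool_names}

    def handle(self, request, pool_name, pool_blocked, cam):
        if pool_blocked[pool_name]:
            # Pool blocked: buffer the request instead of stalling the port queue.
            self.queues[pool_name].append(request)
        else:
            # Pool has space: forward the request to the CAM immediately.
            cam[pool_name].append(request)

cam = {"low": [], "high": []}
blocked = {"low": True, "high": False}
proc = CacheQueueProcessor(["low", "high"])

proc.handle("req1", "low", blocked, cam)   # low pool blocked -> buffered
proc.handle("req2", "high", blocked, cam)  # high pool open -> forwarded

print(cam["high"])               # ['req2'] — the high-QoS request is not blocked
print(list(proc.queues["low"]))  # ['req1']
```

Contrast this with the FIFO port queue in the Background: here the blocked low-QoS request no longer stands in front of the high-QoS one.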
When the first storage pool 1031 is in a non-blocked state, the first access request corresponding to the first storage pool 1031 is transmitted to the cache module 103, so that the memory 20 is accessed based on the access requests in the cache module 103.
The specific manner of transmitting the first access request corresponding to the first storage pool 1031 to the cache module 103 is not limited herein, and may be configured according to actual requirements.
In one possible implementation, the first access request corresponding to the first storage pool 1031 may be transmitted to the cache module 103 according to a first-in first-out principle.
In another possible implementation manner, the first access request corresponding to the first storage pool 1031 may be transmitted to the cache module 103 according to the priority of the first access request, and based on the principle of preferentially transmitting the access request with the higher priority.
After the access request 10A reaches the cache module 103, the access request in the cache module 103 is invoked to access the memory 20 based on a preset invocation rule.
The type of the preset calling rule is not particularly limited, and may be configured according to actual requirements. For example, the scheduling may be performed based on efficiency, priority, both efficiency and priority, first-in first-out, and so on.
For the processing of a second storage pool, reference may be made to the above description of the two states (blocked and non-blocked) of the first storage pool, which is not repeated here.
The embodiment of the present application provides a controller of an electronic device, the controller comprising, in order: a port queue, a cache queue processing module, and a cache module. The port queue is used to sequentially receive access requests sent by at least one processor of the electronic device; an access request is a request from a processor to access a memory of the electronic device. The cache queue processing module is used to receive and process the access requests in the port queue. Specifically, for a first storage pool in the cache module (where the first storage pool is any storage pool in the cache module): when the first storage pool is in a blocked state, a first access request corresponding to the first storage pool is buffered; when the first storage pool is in a non-blocked state, the first access request corresponding to the first storage pool is transmitted to the cache module, so that the memory is accessed based on the access requests in the cache module.
In the controller of the embodiment of the present application, a cache queue processing module is added to the controller, so that access requests can be buffered by two levels of buffering (the cache module and the cache queue processing module). Even if a certain storage pool (for example, the first storage pool) in the cache module is blocked, received access requests can still be buffered by the cache queue processing module, thereby reducing the probability of HOL blocking.
In some possible embodiments, referring to what is shown in fig. 2, the cache queue processing module 102 includes a first scheduler 1021 and the cache module includes a second scheduler 1032.
The cache queue processing module 102 may also include a third scheduler 1023.
The third scheduler 1023 is used for input scheduling of the cache queues 1022; for example, it may schedule the access requests output from the port queue 101 into the cache queues 1022.
The scheduling policy of the third scheduler 1023 is not specifically limited in this embodiment, and may be configured according to actual requirements. Illustratively, the third scheduler 1023 may schedule access requests output by the port queue 101 based on a priority policy.
In the case where the number of the cache queues 1022 is plural, the third scheduler 1023 may schedule the access requests output from the port queues to different cache queues according to the priority.
The number of third schedulers 1023 is not particularly limited here and may be configured according to actual requirements. In one possible implementation, a single third scheduler 1023 is configured, and the scheduling of all access requests output by the port queue 101 is implemented by that third scheduler; in another possible implementation, as many third schedulers 1023 are configured as there are cache queues 1022, so that the input scheduling of each cache queue 1022 is implemented by its own third scheduler 1023.
The first scheduler 1021 is configured to process the access requests in the cache queue processing module based on a priority invocation rule.
That is, the first scheduler 1021 implements the output scheduling of the cache queue processing module 102, invoking requests according to the priority invocation rule: the plurality of access requests received by the controller have different priorities, and higher-priority access requests are invoked first.
In one possible implementation, the first scheduler 1021 may schedule access requests in the cache queue 1022 based on a strict priority scheduling policy.
Under a strict priority scheduling policy, whenever a higher-priority access request is present, it must be scheduled first.
In another possible implementation, the first scheduler 1021 may schedule access requests in the cache queue 1022 based on a polling scheduling policy having a quantum.
Under the share-based polling scheduling policy, the first scheduler polls the plurality of cache queues 1022 based on priority, and the number of access requests invoked from any one cache queue 1022 may be determined either by a fixed share or by a temporarily adjusted share.
Example 1: the first scheduler 1021 is configured with an initial share of 10 access requests for each of cache queue 1 and cache queue 2. In the first case, cache queue 1 holds 15 access requests and cache queue 2 holds 18; based on the initial share, the first scheduler 1021 determines the share of each queue as 10 access requests. In the second case, cache queue 1 holds 5 access requests and cache queue 2 holds 30; the first scheduler 1021 may then temporarily adjust the shares of the two cache queues 1022, determining the share of cache queue 1 as 5 access requests and the share of cache queue 2 as 15 access requests.
In this way, deeper cache queues 1022 can be drained faster by temporarily adjusting shares, reducing their depth; this relieves the depth (length) pressure on the cache queues 1022 and further balances the queue lengths across the cache queues 1022.
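The share adjustment in Example 1 can be sketched as follows. This is one plausible reading of the example, not the patent's actual algorithm: each queue's temporary share starts from the initial share, and any share a short queue cannot use is handed to queues deeper than their base share.

```python
def temporary_shares(queue_lengths, initial_share):
    """Redistribute unused share from short queues to deeper ones (illustrative)."""
    shares = {}
    surplus = 0
    for name, length in queue_lengths.items():
        if length < initial_share:
            shares[name] = length              # queue cannot use its full share
            surplus += initial_share - length  # the remainder becomes surplus
        else:
            shares[name] = initial_share
    # Hand the surplus evenly to queues deeper than their base share.
    deep = [n for n, l in queue_lengths.items() if l > initial_share]
    for name in deep:
        shares[name] += surplus // len(deep)
    return shares

# Case 1 of Example 1: both queues deeper than the share -> no adjustment.
print(temporary_shares({"q1": 15, "q2": 18}, 10))  # {'q1': 10, 'q2': 10}
# Case 2: q1 only holds 5 requests; its unused share of 5 goes to q2.
print(temporary_shares({"q1": 5, "q2": 30}, 10))   # {'q1': 5, 'q2': 15}
```

Both outputs match the shares given in Example 1, which is why this reconstruction is offered; other redistribution rules would also be consistent with the patent's general description.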
For example, the port queue 101 of the controller receives access requests of different QoS levels, the corresponding cache queue processing module 102 receives those access requests, and the first scheduler 1021 processes them based on the priority invocation rule. Here, the higher the QoS, the higher the corresponding invocation priority, and the earlier the request is scheduled.
It will be appreciated that the call priority may be determined according to service parameters or various combinations of parameters other than QoS, and may be determined according to a certain parameter (e.g. delay parameter) or a combination of several parameters within QoS, which are not listed here.
In a possible implementation, the first scheduler 1021 may also receive status information from the cache module 103. When the cache queue processing module 102 receives first indication information indicating that the first storage pool is blocked (its storage space is full), the first scheduler 1021 may suspend the output scheduling of access requests corresponding to the first storage pool 1031, so that those access requests are buffered in the cache queue 1022. In this way, on the one hand, access requests can still enter the port queue 101 and flow from the port queue 101 into the cache queues 1022 of the cache queue processing module 102, which neither causes the HOL problem nor prevents other port queues from receiving access requests. On the other hand, the first scheduler 1021 schedules based on priority and can process high-priority access requests first, meeting the practical access requirements of high-priority requests.
In another possible implementation, upon receiving the first indication information that the first storage pool 1031 is blocked (its storage space is full), the first scheduler 1021 adjusts the temporary shares of the cache queues based on that indication. For example, the first scheduler 1021 may reallocate the share of the cache queues corresponding to the first storage pool 1031 to other cache queues, thereby accelerating the processing of access requests in those other cache queues.
For example, suppose the cache queues corresponding to the first storage pool are cache queue 1 and cache queue 2, and the cache queues corresponding to the second storage pool are cache queue 3 and cache queue 4. When the first storage pool is blocked, the shares of cache queue 1 and cache queue 2 are allocated to cache queue 3 and cache queue 4. If the initial share of each of the 4 cache queues is 10 access requests, then after the first storage pool blocks, the shares of cache queue 1 and cache queue 2 are temporarily adjusted to 0, and the shares of cache queue 3 and cache queue 4 each become 20 access requests.
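Under the assumed mapping above (cache queues 1 and 2 feed the first storage pool, cache queues 3 and 4 the second), the reallocation can be sketched as follows; the function name and even-split rule are illustrative choices that reproduce the 10 -> 0/20 numbers in the example:

```python
def reallocate_on_block(initial_share, pool_to_queues, blocked_pool):
    """Zero the shares of queues feeding the blocked pool and hand the freed
    share evenly to the remaining queues (illustrative)."""
    shares = {q: initial_share for qs in pool_to_queues.values() for q in qs}
    freed = 0
    for q in pool_to_queues[blocked_pool]:
        freed += shares[q]
        shares[q] = 0                      # blocked pool's queues get no share
    active = [q for p, qs in pool_to_queues.items()
              if p != blocked_pool for q in qs]
    for q in active:
        shares[q] += freed // len(active)  # freed share goes to the others
    return shares

mapping = {"pool1": ["cq1", "cq2"], "pool2": ["cq3", "cq4"]}
print(reallocate_on_block(10, mapping, "pool1"))
# {'cq1': 0, 'cq2': 0, 'cq3': 20, 'cq4': 20}
```

When pool1 unblocks, the scheduler would presumably restore the initial shares; the patent does not spell out that reverse step.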
Illustratively, the cache queue processing module 102 includes a first scheduler 1021, a cache queue 1022, and a third scheduler 1023.
A third scheduler 1023 for input scheduling of the cache queues 1022. The buffer queue 1022 is used for buffering access requests, and the first scheduler 1021 is used for calling the access requests in the buffer queue 1022 to process.
It can be seen that the priority of each access request is considered in the scheduling process of the first scheduler 1021, ensuring that high-priority access requests are scheduled first and meeting practical requirements.
The second scheduler 1032 is configured to invoke the access requests in the cache module 103 to access the memory 20 based on a bandwidth utilization efficiency rule of the memory 20.
That is, the second scheduler 1032 performs the output scheduling of the cache module 103. In the output scheduling process, invocation is based on the bandwidth utilization efficiency rule of the memory 20: the cache module 103 buffers a plurality of access requests, and the second scheduler 1032 invokes them to access the memory 20 according to that rule.
It can be seen that in this embodiment the first scheduler schedules based on priority while the second scheduler schedules based on bandwidth utilization, decoupling priority scheduling from bandwidth-utilization scheduling; this has the characteristics of simple logic and good implementation effect.
Next, an implementation of the cache queue processing module 102 will be described.
In some embodiments, referring to what is shown in FIG. 3, the cache queue processing module 102 further includes a plurality of cache queues 1022.
A cache queue 1022 is used to cache access requests for at least one service class range.
Illustratively, the cache queue processing module 102 may include 4 cache queues 1022, a first cache queue 10221, a second cache queue 10222, a third cache queue 10223, and a fourth cache queue 10224, respectively.
Here, the first cache queue 10221 may be used to store access requests of QoS0-QoS3 scheduled from the plurality of port queues, the second cache queue 10222 access requests of QoS4-QoS7, the third cache queue 10223 access requests of QoS8-QoS11, and the fourth cache queue 10224 access requests of QoS12-QoS15.
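With contiguous, equal-sized QoS ranges as in this 16-level, 4-queue split, the range-to-queue mapping reduces to integer division (a sketch of this particular configuration; the function name is invented):

```python
def cache_queue_index(qos_level, levels_per_queue=4):
    """Map a QoS level to a cache queue index:
    QoS0-3 -> queue 0, QoS4-7 -> queue 1, QoS8-11 -> queue 2, QoS12-15 -> queue 3."""
    return qos_level // levels_per_queue

print([cache_queue_index(q) for q in range(16)])
# [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```

A non-uniform split (e.g., a dedicated queue for the single highest QoS level) would instead need an explicit lookup table.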
In some embodiments, to facilitate adjusting the lengths of the cache queues, the cache queue processing module 102 may further include a free queue, used to dynamically adjust the length of each in-use queue as needed. For example, the free queue may allocate its own storage space to each cache queue 1022 or reclaim storage space from each cache queue 1022.
The priority of a cache queue 1022 is associated with its service class and may be either positively or negatively correlated with it. The more stringent the requirements characterized by the service class (for example, the shorter the required delay), the higher the corresponding priority.
The first scheduler 1021 is further configured to: and processing the access requests in the plurality of cache queues based on the priority calling rule.
The priority of the cache queue is associated with the class of service.
Here, the plurality of cache queues store access requests of different service classes, so they may be assigned different priorities based on the needs of those requests, and the first scheduler 1021 may invoke the service requests in the cache queues based on the priority invocation rule.
It can be seen that the cache queue processing module 102 includes a plurality of cache queues 1022, where different cache queues store access requests of different service classes; access requests can thus be divided among the cache queues by service class, and the first scheduler 1021 invokes the requests in the different cache queues based on the priority scheduling principle. In this way, when the first storage pool in the CAM is blocked, the access requests corresponding to the first storage pool can be buffered in their cache queue, so requests can still enter from the port queue 101 and flow from the port queue 101 into the cache queue processing module 102; this neither causes the HOL problem nor prevents the other port queues from receiving access requests (i.e., the other port queues are not blocked). Moreover, since the cache queues are finely divided (access requests are in practice buffered across multiple cache queues), the scheme is simple and flexible to implement and gives priority to preventing blocking.
The specific number of the cache queues 1022 is not limited in this embodiment, and may be configured according to actual requirements. For example, if the access requests have QoS0-QoS15 levels, the corresponding cache queues in the processing module in the cache queue may be configured to 16, where one cache queue corresponds to an access request with one service level. It can be seen that the greater the number of configured cache queues, the finer the granularity, and the smaller the impact of processing access requests between multiple service classes, the less the probability of blocking each other.
Next, a specific implementation of the cache queue 1022 is described.
In some embodiments, for each cache queue 1022 in the plurality of cache queues 1022, the queue depth of the cache queue 1022 is a variable queue depth.
That is, when the first storage pool 1031 is blocked, the greater the number of access requests from the port queue 101, the greater the queue depth of the corresponding cache queue 1022; the smaller the number of access requests from the port queue 101, the smaller the corresponding queue depth of the cache queue 1022.
In practice, the queue depth of the cache queue 1022 may be adjusted at any time. For example, if the current queue depth of the cache queue 1 can store 5 access requests, and the cache queue 1 is full, and at this time, 1 access request from the port queue 101 is received again, the queue depth of the cache queue 1 is correspondingly increased by 1 access request depth, so as to store the access request. At the next moment, 2 access requests in the cache queue 1 are called to the cache module 103, and the queue depth in the cache queue 1 is correspondingly reduced by 2 access requests. In this way, the released storage space can be used by other cache queues, so that the utilization rate between the cache queues is improved.
Specifically, if the cache queue 1 needs to be added with 1 access request, the storage space corresponding to the 1 access request size in the idle queue is allocated to the cache queue 1 so as to increase the queue depth of the cache queue 1; if the queue depth of the buffer queue 1 needs to be reduced by 2 access requests, the storage space with the size of 2 access requests in the buffer queue 1 is recycled to the idle buffer queue 1, so that the buffer queue 1 is reduced.
Here, the specific manner of adjusting the buffer queue depth is not particularly limited, and may be configured according to actual requirements. By way of example, management may be via a doubly linked list (linklist).
Because the bidirectional linked list is maintained by one node and one node, one or more nodes can be added at any time to increase the storage space of the corresponding number of access requests, and the increase of the queue depth is realized; the reduction of the queue depth can also be realized by reducing the storage space of the corresponding number of access requests by reducing one or more nodes at any time.
The buffer queue with variable queue depth has the characteristics of high buffer space utilization rate, flexibility and adjustability and meeting actual demands.
Next, a call processing procedure of the first scheduler 1021 in the non-blocking state will be described.
In some embodiments, if the cache queue processing module 102 includes a plurality of cache queues 1022 and the first scheduler 1021, in the case that the first storage pool 1031 is in the non-blocking state, the first scheduler 1021 is further configured to:
determining a target cache queue of the current processing in the plurality of cache queues based on a first calling rule; determining a target access request processed at the present time based on a queue head position in the target cache queue; transmitting the target access request to the first storage pool.
The specific content of the first call rule is not limited, and the configuration can be performed according to actual requirements.
In one possible implementation, the first invocation rule includes: priority invocation rules and/or polling rules with shares.
In the priority calling rule, the priority calling rule is sequentially determined as a target cache queue according to the order of the priority of the cache queue from high to low.
In a polling rule with shares, each cache queue has an initial credit, which is determined to be the target cache queue in turn based on the credit of each cache queue from high to low. Here, the shares are used to characterize the number of access requests that are invoked.
Specifically, the first scheduler 1021 is further configured to: the first scheduler 1021 determines a target cache queue of the current processing among the plurality of cache queues 1022 based on the first call rule; starting at the queue head position in the target cache queue, taking a specific number (or a specific amount) of access requests, and determining the access requests as target access requests of the current processing; the target access request is transferred to the first storage pool 1031.
The access request is fetched at the queue head position, so that first-in first-out can be realized, and the fairness principle is met.
Next, an implementation of the first scheduler 1021 adjusting the queue depth of the cache queue 1022 will be described.
In some embodiments, if the buffer queue processing module 102 includes a plurality of buffer queues 1022 and the first scheduler 1021, in the case that the first storage pool 1031 is in the blocking state, the first scheduler 1021 is further configured to:
determining N first cache queues corresponding to the first storage pool; for each first cache queue in the N first cache queues, acquiring the remaining storage space of the first cache queue; and if the remaining storage space is smaller than or equal to a first storage threshold value, increasing the queue depth of the first cache queue so that the first cache queue can cache a second access request corresponding to the first cache queue transmitted by the port queue.
And N is greater than or equal to 1.
The specific value of the first storage threshold is not specifically limited in the embodiment of the present application.
The first cache queue refers to a cache queue to which access requests are allocated to the first storage pool.
Here, the processing manner is uniform for each first cache queue.
For a first cache queue, the first scheduler 1021 first obtains the remaining storage space of the first cache queue, determines whether the remaining storage space is smaller than or equal to the first storage threshold, and if the remaining storage space is smaller than or equal to the first storage threshold, which indicates that the remaining storage space is insufficient, correspondingly increases the queue depth of the first cache queue and correspondingly increases the remaining storage space, so that the first cache queue can cache the access request corresponding to the first cache queue transmitted by the port queue.
It can be seen that the queue depth of the first buffer queue is adjusted in real time, and the queue blocking is relieved under the condition of improving the space utilization rate of the buffer queue.
In some embodiments, where the first storage pool 1031 is in a blocking state, the cache module 103 includes a second scheduler 1032, the second scheduler 1032 to:
and increasing the share of the first storage pool processed at the time to relieve the blocking state of the first storage pool.
The shares are used to characterize the number of access requests that access the memory this time.
When the second scheduler 1032 calls the access request of each storage pool, the access request in each storage pool is scheduled according to the share allocated to each storage pool in advance, in practice, since the first storage pool 1031 has already processed blocking, in order to quickly relieve the blocking, the amount of the blocked first storage pool 1031 may be temporarily increased correspondingly, so that the access request in the first storage pool 1031 may be quickly called out, thereby improving the blocking of the first storage pool 1031.
It should be noted that, if the first storage pool is restored to the non-blocking state, the share of the first storage pool 1031 is correspondingly restored to the initial share, so as to ensure fairness as much as possible.
The controller of the electronic device provided in the present application will be described in the following with a complete embodiment.
Current DDR controllers support one or more ports, with multiple hosts in the system, each host being configured to access DDR from a port (port). Each host sends a read-write request of different QoS into the controller, the QoS of the system typically has 4 bits (bits) representing 16 different service classes. For different QoS, it is generally necessary to provide different service levels. The service with smaller delay is performed when the QoS is higher, and the service is performed with the best effort when the level is lower.
The DDR controller is designed to preferentially arbitrate and schedule the read-write request sent by the host based on the DDR bandwidth utilization rate. Because of the scheduling requirements, a configured shared memory pool (also referred to simply as CAM) caches read and write requests from different hosts. The depth of this shared pool is limited. To minimize the counter-pressure upstream hosts, the ports employ a first-in first-out (first in first out, FIFO) mechanism to cache read and write requests, provide sufficient multiple-receive capability, and complete the adaptation of the DDR CAM.
Different QoS classes are designed differently in CAM, requests that can enter and maintain high QoS in order to guarantee high QoS can have relatively high bandwidth utilization in scheduling and tend to be high priority at egress scheduling. In this case, the pool that may result in the low QoS request is full.
Each port typically has only one queue, and when a request that does not require order preservation enters the DDR controller, the request will first enter the port's queue, where it is queued for entry into the CAM. If the first request in the port queue is a low QoS request followed by a high QoS request while the low QoS pool in the CAM is full, but the high QoS pool is stored Chi Weiman, then the low QoS request in the queue blocks the subsequent high QoS request from entering the CAM, thus creating HOL problems.
As shown in fig. 4, the port queue 401 includes an access request a and an access request B, and the port queue 402 has no access request for a while. A port scheduler 403 for scheduling access requests to the CAM. Included in the CAM are a low priority pool 404 and a high priority pool 405.
Wherein, the access request B is a low QoS access request (also simply referred to as a request), and the low priority storage pool 404 into which the access request B needs to enter is already full, so that the access request B cannot go out of the port queue 401, and at this time, the access request a is blocked by going to the high priority storage pool 405, which causes the HOL problem.
To improve the HOL problem, a split queue approach may be used, and another HOL problem is introduced. This type of problem still occurs when 2 requests come in from the same port, but access requests of different Q oS's with Identity (ID).
Since the bus protocol (Advanced Microcontroller Bus Architecture, AMBA) requires the same ID order preservation. Thus, it is difficult to implement by directly queuing on a port, because when directly queuing different queues, the packets that need to be ordered need to enter different sequences, and a dependency (whether the request has the same ID in another queue) needs to be maintained, thus causing HOL problem.
As shown in fig. 5, request a advances, request B advances, request a and B are ID, request a advances to low QoS queue 501, request B advances to high QoS queue 502, at this time, request a advances, request B advances, and therefore, after request a is called by scheduler 503, request B needs to be called, and therefore, request B blocks the high QoS queue. Therefore, generally, due to the problem, a FIFO structure with first-in first-out is generally used for design, which is convenient for order preservation.
The following schemes 1 to 3 are generally included in the related art.
Scheme 1 may include: the red and blue queues are used to correspond to high QoS and low QoS, respectively. Arbitration is performed on ports for different QoS queues. The QoS range support corresponding to the queue is configurable. The extended design of this scheme is to use more port queues at the ports to support different QoS.
The disadvantages of scheme 1 are: the red-blue queue has only 2 grades for the grade distinction of QoS, and the grade of service corresponding to the QoS signal in the system is simply divided into two grades for the insufficient grade distinction of QoS. Meanwhile, when the red-blue queue is supported, if the port does not have a corresponding QoS read-write request, the queue FIFO of the port is not used efficiently. For the extended design of this scheme, more queues can avoid the situation of poor QoS differentiation, but there still exist situations where the queue FIFO cannot be used efficiently.
Scheme 2 may include: using a single queue, but not maintaining first-in-first-out, i.e., not using FIFOs to maintain requests from ports, is equivalent to pooling the queue. At this point, a queue arbiter is required, which will select out the QoS high requests in the queue and send them to the CAM.
The disadvantages of scheme 2 are: the queue depth of the port is generally 16 or 32, and the clock frequency of the port on the DDR controller is kept the same as the internal controller frequency, so that the controller needs to be kept working at a certain high frequency, and it is difficult to realize such a controller. Meanwhile, a request at a certain position is arbitrarily selected from the queue, which is not beneficial to port order preservation.
Scheme 3 may include: the size of the CAM is increased.
The disadvantages of scheme 3 are: increasing the CAM size indirectly can absorb more low QoS requests, alleviating this problem, but still causes HOL problems when the CAM distinguishes between QoS setup spaces. Meanwhile, the scheme can also increase the difficulty of high-frequency timing convergence of the CAM self-scheduling arbitration module.
With respect to the previous schemes 1 to 3, it can be seen that there are several problems:
1. QoS differentiation is less, no matter how many QoS levels are supported on the system, the to DDR controller is limited to 2 levels;
2. The cache utilization rate is poor, and requests with high priority cannot be supported more in the cache space;
3. timing closure is difficult at high frequencies;
4. the design is complex, and the order preservation is difficult.
This embodiment of the present application proposes a new metering scheme for application on multiport DDR controllers. After the multi-port queues, a shared buffer 1 (corresponding to the processing module in the buffer queue) is added, and the shared buffer 1 may be configured with a plurality of memory pools (corresponding to the plurality of buffer queues, which may also be referred to as QoS queues). And maintains the cache using the linklist mechanism. Multiple QoS queues may be maintained in the Linklist. The depth of each QoS queue is not fixed and is dynamically adjusted through a linklist mechanism.
To ensure that each different QoS can enter the shared cache 1, it is not blocked here. Each linklist QoS queue reserves at least one storage pool or reserves multiple storage pools by configuration.
When the space region corresponding to the low QoS in the CAM is blocked, the linklist queue corresponding to the shared cache 1 will absorb the low QoS request in the port queue, avoid the blocking of the port queue, and release the high QoS request after the low QoS as far as possible. Since the high QoS queue can enter the CAM, the low QoS queue in the shared cache is blocked at this time and cannot arbitrate for inclusion in the CAM. The request for high QoS at this time may avoid head congestion at the ports.
Of course, this design does not completely avoid head congestion, which still occurs, for example, when the low QoS queues in the shared cache are filled. But the scheme can alleviate the occurrence of the problem and achieve better design and performance balance in the prior schemes.
Next, the scheme provided in this example was compared with the above schemes 1 to 3, and the following table 1 can be seen as a result of the scheme comparison.
Table 1 scheme comparison result example
By way of example, the implementation of the controller in this scenario is described with reference to what is shown in fig. 6.
The controller 60 may include: port queues 601, scheduling arbiter 602, qoS queues 603, qoS scheduler 604, CAM605, and utilization scheduler 606. The port queues 601 may include a port queue one 6011, a port queue two 6012, and a port queue three 6013; the scheduling arbiter 602 may include: scheduling arbiter one 6021, scheduling arbiter two 6022, scheduling arbiter three 6023, and scheduling arbiter four 6024; qoS queue 603 may include: qoS queue one 6031, qoS queue two 6032, qoS queue three 6033, qoS queue four 6034.
Wherein QoS queue one 6031 caches and processes QoS0-3 level access requests, qoS queue two 6032 caches and processes QoS4-7 level access requests, qoS queue three 6033 caches and processes QoS8-11 level access requests, and QoS queue four 6034 caches and processes QoS12-15 level access requests.
Port one 6011 receives the access request sent by the host 1, port two 6012 receives the access request sent by the host 2, and port three 6013 receives the access request sent by the host 3; the scheduling arbiter 6021 is used to invoke access requests of QoS0-3 class from the port queue one 6011, the port queue two 6012 and the port queue three 6013 to the QoS queue one 6031, the scheduling arbiter 6022 is used to invoke access requests of QoS4-7 class from the port queue one 6011, the port queue two 6012 and the port queue three 6013 to the QoS queue two 6032, the scheduling arbiter three 6023 is used to invoke access requests of QoS8-11 class from the port queue one 6011, the port queue two 6012 and the port queue three 6013 to the QoS queue three 6033, and the scheduling arbiter four 6024 is used to invoke access requests of QoS12-15 class from the port queue one 6011, the port queue two 6012 and the port queue three 6013 to the QoS queue four 6034.
QoS scheduler 604 (corresponding to the first scheduler described above) is configured to invoke access requests in QoS queues 603 (QoS queue one 6031, qoS queue two 6032, qoS queue three 6033, and QoS queue four 6034), and to buffer the access requests into the CAM based on priority invocation principles. For example, access requests in QoS queue one 6031, qoS queue two 6032 are cached to a high QoS pool, and access requests in QoS queue three 6033, qoS queue four 6034 are cached to a low QoS pool.
The utilization scheduler 606 (corresponding to the second scheduler described above) is configured to call the access request in the CA M to access the DDR based on the utilization principle.
Next, each part of the scheme will be described in detail.
1. In the scheme, the front interface comprises 3 ports, and the ports use FIFO as queue management, so that order preservation is facilitated.
2. The scheme middle part includes a shared buffer (QoS queue) and a linklist management module.
Linklist needs to maintain multiple QoS queues (queues) and 1 free queue (free queue); the QoS class range (range) maintained by each queue may be configured to map the QoS class in the SoC system to the QoS class range that the DDR controller is actually scheduling for use.
The shared cache entry and exit are scheduling arbiters. Wherein the ingress arbitration is scheduled from the port queue according to a Round-Robin (Round-Robin) principle or other configuration algorithm; the shared cache may buffer bandwidth from the system when the CAM is full or the utilization scheduler is inefficient.
The efficiency of the utilization rate scheduler may be varied from 60% to 99% according to the address condition of the request of the CAM cache; as CAM internal states tend to be less efficient, the request bandwidth of the system is now independent, so a large number of requests can be buffered there; this may reduce backpressure on the system when controller scheduling is inefficient.
3. The QoS scheduler is a QoS-based scheduling arbiter.
a. The QoS scheduler may here configure strict priority scheduling policies and R R scheduling policies with shares as needed.
The Strict priority (strand priority) basic principle is that when a request with high priority exists, the request with high priority must be scheduled; RR policy (DRR) with share can temporarily dispatch out a large number of requests in a certain linklist in a shortage way so as to relieve certain queue length pressure;
b. performing arbitration output on the head request of the QoS queue;
c. and outputting the storage space occupied by the head request of the queue to the idle queue.
d. The QoS scheduler may accept the CAM status signal (e.g., the first indication signal) and adjust the temporary schedule of the corresponding queue in the DRR based on the first indication signal and the QoS queue length.
4. The CAM is mainly used for caching requests for scheduling by the utilization scheduler, and is not used for maintaining QoS internally, and access states need to be maintained according to a protocol.
5. The main purpose of the utilization scheduler is to consider the bandwidth utilization of DDR, take page hit into priority, and comprehensively consider the bandwidth balance of read and write requests.
In a second aspect, a method for processing an access request according to an embodiment of the present application is applied to the controller of the electronic device provided in the first aspect.
Referring to the content shown in fig. 7, the process may include, but is not limited to, S701 to S703 described below.
S701, a controller sequentially receives access requests sent by at least one processor of the electronic equipment, and sequentially transmits the access requests to a port queue of the controller.
The access request is a request by a processor to access a memory of the electronic device.
The specific implementation of S701 may refer to the description of the port queue 101 in the first aspect, which is not described herein.
S702, the controller calls the access request in the port queue to a cache queue processing module of the controller.
The implementation of S702 may refer to the detailed description of the first aspect regarding the queue processing module 102, which is not described herein.
S703, the controller processes the access request in the buffer queue processing module.
The method comprises the steps that a first access request corresponding to a first storage pool in a cache module of the controller is cached for the first storage pool under the condition that the first storage pool is in a blocking state; the first storage pool is any storage pool in the cache module.
Transmitting a first access request corresponding to the first storage pool to the cache module under the condition that the first storage pool is in a non-blocking state; so that the memory is accessed based on the access request in the cache module.
The implementation of S703 may refer to the detailed description of the first aspect in the queue processing module 102, which is not described herein.
The method for processing the access request provided by the embodiment of the application at least comprises the following steps: sequentially receiving access requests sent by at least one processor of the electronic equipment, and sequentially transmitting the access requests to a port queue of the controller; the access request is a request from a processor to access a memory of the electronic device; invoking an access request in the port queue to a cache queue processing module of the controller; processing the access request in the buffer queue processing module; the method comprises the steps that a first access request corresponding to a first storage pool in a cache module of the controller is cached for the first storage pool under the condition that the first storage pool is in a blocking state; the first storage pool is any storage pool in the cache module; transmitting a first access request corresponding to the first storage pool to the cache module under the condition that the first storage pool is in a non-blocking state; so that the memory is accessed based on the access request in the cache module.
In the method for processing the access request, the controller includes the buffer queue processing module, so that the access request can be buffered by the two-level buffer devices (the buffer module and the buffer queue processing module), and even if a certain storage pool (for example, the first storage pool) in the buffer module is blocked, the buffer queue processing module can buffer the received access request, thereby reducing the probability of HOL.
The method for processing the access request provided by the embodiment of the application can also include, but is not limited to, a configuration process of a cache queue.
Referring to what is shown in fig. 8, before the controller invokes the access request in the port queue to the cache queue processing module of the controller in S702, the following S801 and S802 may be further included but are not limited thereto.
S801, the controller divides a plurality of cache queues in the cache queue processing module.
Wherein one of the cache queues is used for caching access requests of at least one service class.
The specific implementation of S801 may refer to the detailed description of the above first aspect about the cache queue 1022, which is not described herein.
S802, the controller configures a head, a tail and a depth of each of the plurality of cache queues.
The queue head is used for representing the starting position of the queue, and the queue tail is used for representing the end position of the queue. The size of the memory space of the queue can be characterized by the depth of the queue.
In one possible implementation, each of the plurality of cache queues may be configured to have the same corresponding depth, i.e., to correspond to the same memory size.
In another possible implementation, the priorities decrease from low to high, with the corresponding queue depths decreasing in sequence. Since in practice, low priority access requests are easily deferred, there is a greater likelihood of congestion, so the low priority queue depth may be configured to be greater.
The embodiments of the present application do not limit the manner in which the cache queues are configured and managed. The cache queues may be illustratively configured and managed based on a doubly linked list (LinkedList). Specifically, the head of the queue, the tail of the queue, the depth of the queue, and the like of the cache queue can be managed and maintained through a doubly linked list.
The queue depth of the buffer queue based on the management and maintenance of the double linked list is adjustable, so that the space utilization rate of the buffer queue is improved, and the blocking probability is reduced.
In the embodiment of the present application, if the above-mentioned method for processing an access request is implemented in the form of a software functional module, and sold or used as a separate product, the access request may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially or partially contributing to the related art, and the computer software product may be stored in a storage medium, and include several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a read-only memory (Read O nly Memory, ROM), a magnetic disk, or an optical disk, or the like. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
In a third aspect, the application provides an electronic device, where the electronic device includes a memory and a controller, and the controller is the controller provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium, that is, a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps in the method for processing an access request provided in the above embodiments.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application for understanding.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in some embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of the units is only one logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated units described above are implemented in the form of software functional modules and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solutions of the embodiments of the present application that in essence contributes to the related art may be embodied in the form of a computer software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A controller of an electronic device, the controller comprising, in order: a port queue, a cache queue processing module, and a cache module; wherein
the port queue is configured to sequentially receive access requests sent by at least one processor of the electronic device, an access request being a request by a processor to access a memory of the electronic device;
the cache queue processing module is configured to receive and process the access requests in the port queue, and, for a first storage pool in the cache module, to:
cache a first access request corresponding to the first storage pool when the first storage pool is in a blocking state, the first storage pool being any storage pool in the cache module; and
transmit the first access request corresponding to the first storage pool to the cache module when the first storage pool is in a non-blocking state, such that the memory is accessed based on the access requests in the cache module.
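Claim 1 describes a three-stage flow: requests arrive in a port queue, a processing stage holds back requests whose target storage pool is blocked and forwards the rest, and the cache module services memory accesses. The sketch below is a minimal software illustration of that flow only; all class, field, and method names are assumptions, and the patent describes a hardware controller, not this code.

```python
from collections import deque

class Controller:
    """Illustrative sketch of the claimed port-queue / processing / cache flow."""

    def __init__(self, num_pools):
        self.port_queue = deque()                              # stage 1: in-order arrival
        self.held = {p: deque() for p in range(num_pools)}     # requests cached while pool blocked
        self.cache_module = {p: deque() for p in range(num_pools)}  # stage 3: per-pool queues
        self.blocked = {p: False for p in range(num_pools)}    # blocking state per storage pool

    def receive(self, request):
        """Port queue sequentially receives processor access requests."""
        self.port_queue.append(request)

    def process(self):
        """Processing stage: hold requests for blocked pools, forward the rest."""
        while self.port_queue:
            req = self.port_queue.popleft()
            pool = req["pool"]
            if self.blocked[pool]:
                self.held[pool].append(req)          # cache the first access request
            else:
                self.cache_module[pool].append(req)  # transmit to the cache module

    def unblock(self, pool):
        """When a pool leaves the blocking state, drain its held requests."""
        self.blocked[pool] = False
        while self.held[pool]:
            self.cache_module[pool].append(self.held[pool].popleft())
```

The point of the structure is backpressure isolation: a blocked pool only fills its own holding queue, so requests bound for other pools keep flowing.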
2. The controller of claim 1, wherein the cache queue processing module comprises a first scheduler and the cache module comprises a second scheduler;
the first scheduler is configured to process the access requests in the cache queue processing module based on a priority calling rule; and
the second scheduler is configured to call the access requests in the cache module to access the memory based on a bandwidth utilization efficiency rule of the memory.
3. The controller of claim 2, wherein the cache queue processing module further comprises a plurality of cache queues, each cache queue being configured to cache access requests of at least one service class range; and
the first scheduler is further configured to process the access requests in the plurality of cache queues based on the priority calling rule, the priority of each cache queue being associated with its service class.
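Claims 2–3 describe a first scheduler that services multiple cache queues under a priority calling rule, with each queue's priority tied to its service class. One plausible reading is strict highest-priority-first, sketched below; the strict-priority policy and the dict data shapes are assumptions, since the claims do not fix the rule.

```python
def schedule_by_priority(queues):
    """First-scheduler sketch: pick the non-empty cache queue with the
    highest priority and dequeue the request at its head."""
    candidates = [q for q in queues if q["requests"]]
    if not candidates:
        return None                      # nothing to schedule this round
    target = max(candidates, key=lambda q: q["priority"])
    return target["requests"].pop(0)     # request at the head of the chosen queue
```

A production scheduler would normally add an anti-starvation mechanism (aging or weighted round-robin), since strict priority can starve low service classes indefinitely.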
4. The controller of claim 3, wherein for each of the plurality of cache queues, a queue depth of the cache queue is a variable queue depth.
5. The controller of claim 1, wherein, if the cache queue processing module comprises a plurality of cache queues and a first scheduler, the first scheduler is further configured to, when the first storage pool is in a non-blocking state:
determine, among the plurality of cache queues, a target cache queue for the current processing based on a first calling rule;
determine a target access request for the current processing based on the head position of the target cache queue; and
transmit the target access request to the first storage pool.
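The three steps of claim 5 — choose a target queue by a calling rule, take the request at its head position, transmit it to the pool — can be sketched as follows. The `select_queue` callable stands in for the unspecified first calling rule, and all names are assumptions for illustration.

```python
from collections import deque

def dispatch_to_pool(cache_queues, select_queue, pool):
    """Claim-5 sketch: select the target cache queue, pop the request at
    its head, and transmit it to the (non-blocking) storage pool."""
    target = select_queue(cache_queues)   # step 1: first calling rule, supplied by caller
    request = target.popleft()            # step 2: request at the queue head position
    pool.append(request)                  # step 3: transmit to the first storage pool
    return request
```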
6. The controller of claim 1, wherein, if the cache queue processing module comprises a plurality of cache queues and a first scheduler, the first scheduler is further configured to, when the first storage pool is in a blocking state:
determine N first cache queues corresponding to the first storage pool, where N is greater than or equal to 1;
for each of the N first cache queues, acquire the remaining storage space of the first cache queue; and
if the remaining storage space is less than or equal to a first storage threshold, increase the queue depth of the first cache queue so that the first cache queue can cache a second access request, corresponding to the first cache queue, transmitted by the port queue.
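Claim 6's depth-growth step — grow each cache queue mapped to a blocked pool once its free space falls to a threshold — might look like the sketch below. The growth increment and the dict representation are assumptions; the claim only requires that the depth increase so the port queue can keep handing off requests.

```python
def adjust_queue_depths(first_cache_queues, first_threshold, growth=8):
    """Claim-6 sketch: for each first cache queue of a blocked storage pool,
    grow the queue depth when remaining space <= the first storage threshold."""
    for q in first_cache_queues:
        remaining = q["depth"] - len(q["requests"])   # remaining storage space
        if remaining <= first_threshold:
            q["depth"] += growth                      # variable queue depth (cf. claim 4)
    return first_cache_queues
```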
7. The controller of claim 1, wherein the cache module comprises a second scheduler configured to, when the first storage pool is in a blocking state:
increase the share of the first storage pool for the current processing to relieve the blocking state of the first storage pool, the share characterizing the number of access requests that access the memory in the current processing.
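Claim 7 relieves blocking from the drain side: the second scheduler grants the blocked pool a larger share, i.e. more of its requests may access memory in the current round. A sketch follows; the doubling policy and the cap are assumptions, as the claim only requires that the share increase.

```python
def relieve_blocking(pool, max_share):
    """Claim-7 sketch: raise the share (number of access requests allowed to
    access memory this round) of a storage pool in the blocking state."""
    if pool["blocked"]:
        pool["share"] = min(pool["share"] * 2, max_share)
    return pool
```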
8. A method of processing an access request, applied to a controller of an electronic device, the method comprising:
sequentially receiving access requests sent by at least one processor of the electronic device, and sequentially transmitting the access requests to a port queue of the controller, an access request being a request by a processor to access a memory of the electronic device;
invoking the access requests in the port queue to a cache queue processing module of the controller; and
processing the access requests in the cache queue processing module, wherein, for a first storage pool in a cache module of the controller:
a first access request corresponding to the first storage pool is cached when the first storage pool is in a blocking state, the first storage pool being any storage pool in the cache module; and
the first access request corresponding to the first storage pool is transmitted to the cache module when the first storage pool is in a non-blocking state, such that the memory is accessed based on the access requests in the cache module.
9. The method of claim 8, wherein, before invoking the access requests in the port queue to the cache queue processing module of the controller, the method further comprises:
dividing a plurality of cache queues in the cache queue processing module, each cache queue being configured to cache access requests of at least one service class; and
configuring a head, a tail, and a depth for each of the plurality of cache queues.
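Claim 9's setup step — partition the processing module's buffer into cache queues and record each queue's head, tail, and depth — can be sketched as below. The equal split is an assumption; claim 4 allows the depths to change afterwards.

```python
def divide_cache_queues(total_entries, num_queues):
    """Claim-9 sketch: divide a shared buffer of total_entries slots into
    num_queues cache queues, configuring head, tail, and depth for each."""
    depth = total_entries // num_queues
    queues = []
    for i in range(num_queues):
        base = i * depth
        # head == tail marks an initially empty queue
        queues.append({"head": base, "tail": base, "depth": depth})
    return queues
```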
10. An electronic device comprising a memory and a controller as claimed in any one of claims 1 to 7.
CN202311872784.2A 2023-12-29 2023-12-29 Controller, access request processing method and electronic equipment Pending CN117873928A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311872784.2A CN117873928A (en) 2023-12-29 2023-12-29 Controller, access request processing method and electronic equipment

Publications (1)

Publication Number Publication Date
CN117873928A true CN117873928A (en) 2024-04-12



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination