CN118295941A - Memory controller of parallel processor and memory access scheduling method - Google Patents

Memory controller of parallel processor and memory access scheduling method

Info

Publication number
CN118295941A
Authority
CN
China
Prior art keywords
request
memory
access
buffer queue
arbiter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410242214.3A
Other languages
Chinese (zh)
Inventor
魏朝飞
高晨
赵鑫鑫
姜凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Inspur Science Research Institute Co Ltd
Original Assignee
Shandong Inspur Science Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Science Research Institute Co Ltd filed Critical Shandong Inspur Science Research Institute Co Ltd
Priority to CN202410242214.3A
Publication of CN118295941A
Legal status: Pending

Landscapes

  • Multi Processors (AREA)

Abstract

The invention provides a memory controller of a parallel processor and a memory access scheduling method, belonging to the technical field of electronics. The memory controller of the parallel processor comprises a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter. The request distributor distributes a first memory access request to a request buffer queue, and the request arbiter responds with a memory access operation according to a second memory access request in the request buffer queue. Compared with the prior art, in which only one request buffer queue is maintained at the memory access request entrance of a storage node, the invention improves memory access request efficiency by providing a plurality of request buffer queues.

Description

Memory controller of parallel processor and memory access scheduling method
Technical Field
The invention relates to the technical field of electronics, in particular to a memory controller of a parallel processor and a memory access scheduling method.
Background
A typical parallel processor, such as a general-purpose graphics processor (General-Purpose computing on Graphics Processing Units, GPGPU), integrates a large number of simple compute cores at high density, so that thousands of threads execute in parallel at the same time. The resulting flood of memory access requests places tremendous pressure on the memory system, making the memory system the performance bottleneck of the GPGPU.
FIG. 1 is a schematic diagram of the DRAM data structure under a conventional memory mechanism. As shown in FIG. 1, an access to DRAM (Dynamic Random Access Memory) data is divided into three steps: precharge, row access, and column access. In the precharge stage, the old data held in the row buffer is written back to the DRAM array; in the row access stage, the new data row to be accessed is activated and loaded into the row buffer; in the column access stage, the required data block is fetched and returned to the requesting processor over the data bus. The precharge time and the row access time account for most of the total access time, so if data rows are switched frequently during accesses, a large precharge and row-access overhead is incurred and access efficiency drops sharply.
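The three-step access cost described above can be sketched with a toy model (illustrative only; the class name `DramBank` and the cycle counts are assumptions for the sketch, not values from the patent):

```python
# Hypothetical latency parameters (cycle counts are illustrative).
T_PRECHARGE = 15   # write the old row back from the row buffer to the DRAM array
T_ROW_ACCESS = 15  # activate the requested row into the row buffer
T_COL_ACCESS = 4   # read the needed block out of the row buffer

class DramBank:
    """Minimal row-buffer model: a hit costs only the column access,
    while a row switch pays precharge + row activation on top of it."""
    def __init__(self):
        self.open_row = None

    def access(self, row):
        if row == self.open_row:          # row-buffer hit
            return T_COL_ACCESS
        cost = T_COL_ACCESS + T_ROW_ACCESS
        if self.open_row is not None:     # an old row must be precharged first
            cost += T_PRECHARGE
        self.open_row = row
        return cost

bank = DramBank()
cold = bank.access(3)    # first access: activate + column access -> 19
hit = bank.access(3)     # same row: column access only -> 4
miss = bank.access(7)    # row switch: precharge + activate + column -> 34
```

Under these assumed timings, a row switch costs more than eight times a row-buffer hit, which is why frequent row switching dominates access time.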
Therefore, the reasonable and effective storage system and the access scheduling strategy are key factors for improving the access efficiency and the system performance.
Disclosure of Invention
The invention provides a memory controller of a parallel processor and a memory access scheduling method, which can improve memory access efficiency and performance of a memory system of the parallel processor.
The invention provides a memory controller of a parallel processor, comprising: a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter; the request distributor is used for distributing a first memory access request to a request buffer queue, and the request arbiter is used for responding with a memory access operation according to a second memory access request in the request buffer queue.
Optionally, the memory controller further includes a plurality of Bank units, each Bank unit including a Bank number, and each Bank unit corresponding to one of the request buffer queues through its Bank number.
Optionally, the corresponding request buffer queue is determined according to a Bank number accessed by the first memory access request, where the Bank number accessed by the first memory access request is determined by the request dispatcher according to the first memory access request.
Optionally, there are a plurality of request arbiters, in one-to-one correspondence with the request buffer queues.
Optionally, the second memory access request in each request buffer queue is scheduled in a FIFO manner.
Optionally, the request arbiter employs a same-source priority arbitration mechanism, and the memory controller further comprises a response output port;
The same-source priority arbitration mechanism comprises: if the second memory access request of the i-th request arbiter wins arbitration for the use of the response output port in the k-th arbitration cycle, then in the (k+1)-th arbitration cycle, if the memory access requests contending for the response output port still include the second memory access request of the i-th request arbiter, the second memory access request of the i-th request arbiter obtains the highest-priority use of the response output port.
Optionally, the request arbiter comprises arbitration logic.
The invention also provides a memory access scheduling method of the parallel processor, which is applied to the memory controller of the parallel processor, and comprises the following steps:
Receiving a first access request, and distributing the first access request to a request buffer queue;
Determining a second memory access request according to the request buffer queue;
And responding to the memory access operation according to the second memory access request.
Optionally, responding with a memory access operation according to the second memory access request includes:
If the third memory access request at the response output port in the (k+1)-th arbitration cycle includes the second memory access request of the i-th request arbiter, and the second memory access request of the i-th request arbiter won the use of the response output port through arbitration in the k-th arbitration cycle, then the memory access request of the i-th request arbiter is responded to first.
Optionally, allocating the first access request to a corresponding request buffer queue includes:
Analyzing the first access request to obtain a Bank number accessed by the first access request;
and determining the corresponding request buffer queue according to the Bank number accessed by the first access request.
Optionally, determining the second access request according to the request buffer queue includes:
And determining the second access request by adopting a FIFO scheduling mode according to the request buffer queue.
The invention also provides an electronic device comprising a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the memory access scheduling method of the parallel processor is realized when the processor executes the program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a memory access scheduling method for the parallel processor.
The invention also provides a computer program product comprising a computer program which, when executed by a processor, implements the memory access scheduling method of the parallel processor.
The invention provides a memory controller of a parallel processor and a memory access scheduling method, wherein the memory controller comprises a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter; the request distributor distributes a first memory access request to the corresponding request buffer queue, and the request arbiter responds with a memory access operation according to a second memory access request in the request buffer queue. With multiple Banks, different Banks can be read and written in an interleaved fashion, effectively raising the read/write bandwidth. After one Bank finishes reading or writing one row of data, if another row of the same Bank must then be read, the bus has to spend a long time reopening a row; with only a single Bank, the bus would sit idle during this wait. If, instead, the addresses to be read lie in two different Banks, then after the first read command is issued to one Bank, a read command can be issued to the other Bank without waiting for the first data to return, so bus utilization is maximized. When the number of Banks reaches a certain amount, this alternating access pattern forms a cycle that, viewed from the outside, behaves as if there were no refresh waits at all. The invention therefore provides a plurality of request buffer queues, reducing the time spent waiting on row switches during memory accesses; compared with the prior art, in which only one request buffer queue is maintained at the memory access request entrance of a storage node, this improves memory access request efficiency.
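The bandwidth argument above can be illustrated with simple arithmetic (a sketch under assumed, purely illustrative timing values, not figures from the patent):

```python
# Illustrative comparison of serialized vs. interleaved access to two Banks.
T_ROW = 30   # hypothetical row close/open overhead per row switch
T_DATA = 4   # data transfer time on the shared bus

# Single Bank, two different rows: the second access must wait for the
# full row switch, so the overheads serialize.
single_bank = (T_ROW + T_DATA) + (T_ROW + T_DATA)

# Two Banks: the second Bank's row can be opened while the first Bank
# transfers its data, so only the data transfers serialize on the bus.
two_banks = T_ROW + T_DATA + T_DATA
```

Under these assumptions the interleaved case finishes in 38 time units versus 68 for the single-Bank case, matching the patent's claim that cross-Bank interleaving raises bus utilization.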
Drawings
In order to more clearly illustrate the invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a DRAM data structure under a conventional memory mechanism;
FIG. 2 is a schematic diagram of a memory controller of a parallel processor according to the present invention;
FIG. 3 is a schematic diagram of arbitration logic provided by the present invention;
FIG. 4 is a flow chart of a memory access scheduling method of a parallel processor provided by the invention;
Fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The memory controller and memory access scheduling method of the parallel processor of the present invention are described below with reference to fig. 2-5.
Fig. 2 is a schematic diagram of a memory controller of a parallel processor according to the present invention. The basic principle is as follows: assume the memory controller of each storage node has N Banks; the memory controller includes three core components: a request dispatcher, request buffer queues, and request arbiters. The memory controller provides one memory access request buffer queue per Bank. The request dispatcher places each memory access request arriving at the storage node into the corresponding request buffer queue according to the Bank number it accesses, and the memory access requests within each request buffer queue are then scheduled in FIFO order. In addition, a request arbiter is placed at the exit of each request buffer queue; it is responsible for parsing the memory address information and the row-buffer state, and then generates the corresponding memory operations.
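The dispatcher-plus-per-Bank-queue structure can be sketched as follows (a minimal model; the class name `RequestDispatcher`, the Bank count, and the low-order-bit address-to-Bank mapping are illustrative assumptions, not the patent's concrete design):

```python
from collections import deque

N_BANKS = 4  # assumed Bank count per storage node

class RequestDispatcher:
    """Routes each incoming memory access request into the per-Bank FIFO
    buffer queue selected by the Bank number decoded from its address."""
    def __init__(self, n_banks=N_BANKS):
        self.queues = [deque() for _ in range(n_banks)]
        self.n_banks = n_banks

    def bank_of(self, addr):
        # Assumed mapping: low-order address bits select the Bank.
        return addr % self.n_banks

    def dispatch(self, request):
        self.queues[self.bank_of(request["addr"])].append(request)

d = RequestDispatcher()
for a in (0, 5, 4, 9):
    d.dispatch({"addr": a})
# Requests to the same Bank stay in arrival (FIFO) order in that Bank's queue.
```

Because the N queues are independent, a request arbiter at each queue exit can pop and arbitrate its head request in the same cycle as the others.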
As shown in fig. 2, a memory controller of a parallel processor includes: a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter;
the request distributor is used for distributing a first memory access request to a request buffer queue, and the request arbiter is used for responding with a memory access operation according to a second memory access request in the request buffer queue.
Compared with the prior art, in which only one request buffer queue is maintained at the memory access request entrance of a storage node, the invention improves memory access request efficiency.
In a specific embodiment, the memory controller further includes a plurality of Bank units, each Bank unit including a Bank number, and each Bank unit corresponding to one of the request buffer queues through its Bank number.
It should be noted that, in order to further improve the efficiency of memory access requests at the storage node, the invention designs the memory controller to include a plurality of Bank units, each with a Bank number through which it corresponds to one request buffer queue.
In a specific embodiment, the corresponding request buffer queue is determined according to a Bank number accessed by the first memory request, where the Bank number accessed by the first memory request is determined by the request dispatcher according to the first memory request.
Determining the corresponding request buffer queue according to the Bank number ensures that each request is routed to the correct queue, which in turn ensures the scheduling efficiency of subsequent memory access requests.
In a specific embodiment, the request arbiters are multiple, and the request arbiters and the request buffer queues are in one-to-one correspondence.
It should be noted that the present invention places a request arbiter at the exit of each request buffer queue, responsible for parsing the address information and the row-buffer state, and then generating the corresponding memory operations. If the row buffer hits, the memory access request is issued directly; on a row-buffer miss, the data in the current row buffer is first written back to the DRAM, the new data row is then scheduled into the row buffer, and an activate operation is performed on it. In this way, after memory access requests reach the storage node, they are first stored into different request queues before being scheduled, and the requests in the N queues can be arbitrated simultaneously.
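The arbiter's hit-versus-miss command generation can be sketched as follows (illustrative only; the function name and DRAM command mnemonics are assumptions standing in for the operations the patent describes):

```python
def generate_operations(row_buffer_row, request_row):
    """Return the DRAM command sequence the arbiter issues for one request:
    a bare column read on a row-buffer hit; otherwise write-back of the
    currently buffered row (if any), activation of the new row, then the read."""
    if request_row == row_buffer_row:
        return ["READ"]                   # row-buffer hit: issue directly
    ops = []
    if row_buffer_row is not None:
        ops.append("PRECHARGE")           # store current row buffer back to DRAM
    ops += ["ACTIVATE", "READ"]           # bring the new row in, then read
    return ops
```

A hit thus needs a single command, while a miss with an occupied row buffer needs three, mirroring the latency asymmetry described for FIG. 1.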
In a specific embodiment, the second memory access request in each of the request buffer queues is scheduled in a FIFO manner.
Scheduling the memory access requests within the same request buffer queue in FIFO (first-in, first-out) order ensures scheduling efficiency.
It should be noted that, by analyzing the runtime behavior of parallel-processor (e.g., general-purpose graphics processor) programs, the inventors found that memory access requests issued by the same compute node exhibit good row-buffer access locality, but the current fairness-oriented arbitration mechanisms of the network-on-chip scramble the order in which requests from different compute nodes reach the storage node, greatly reducing the original row-buffer locality. Based on this, the invention designs a same-source priority arbitration mechanism, which lets memory access requests from the same compute node reach the storage node as consecutively as possible, thereby preserving their row-buffer access locality.
In a specific embodiment, the request arbiter employs a same-source priority arbitration mechanism, and the memory controller further comprises a response output port;
the same-source priority arbitration mechanism comprises: if the second memory access request of the i-th request arbiter wins arbitration for the use of response output port o in the k-th arbitration cycle, then in the (k+1)-th arbitration cycle, if the third memory access request at response output port o still includes the second memory access request of the i-th request arbiter, the second memory access request of the i-th request arbiter obtains the highest-priority use of response output port o.
In a specific embodiment, the request arbiter comprises arbitration logic.
It should be noted that FIG. 3 is the single-port arbitration logic circuit of the same-source priority arbitration mechanism, taking the arbitration logic of the i-th request arbiter as an example. The basic implementation principle is as follows: (1) if input port i (the i-th request arbiter, denoted g_i) acquired the use of response output port o in the previous arbitration cycle, the signal at p_i is high; (2) if input port i still wins arbitration in the current arbitration cycle, i.e., the signal at g_i is high. When both conditions hold, p_i remains high in the current arbitration cycle, i.e., input port i keeps the highest-priority use of response output port o; otherwise, the next port's p_{i+1} goes high and that port obtains the highest-priority use of response output port o.
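The behavior of this single-port arbitration logic can be modeled in software as follows (a behavioral sketch, not the circuit of FIG. 3 itself; the class name and the exact rotation order when the previous winner drops out are assumptions beyond the hold-the-port rule stated above):

```python
class SameSourcePriorityArbiter:
    """One response output port arbitrated among n request arbiters.
    If input i won the port in cycle k and still requests it in cycle k+1,
    it keeps the highest priority; otherwise priority passes onward."""
    def __init__(self, n_inputs):
        self.n = n_inputs
        self.last_winner = None

    def arbitrate(self, requesting):
        # requesting: set of input-port indices asking for the output port
        if not requesting:
            return None
        if self.last_winner in requesting:
            winner = self.last_winner          # p_i stays high: same source keeps the port
        else:
            start = 0 if self.last_winner is None else (self.last_winner + 1) % self.n
            winner = next((start + j) % self.n for j in range(self.n)
                          if ((start + j) % self.n) in requesting)
        self.last_winner = winner
        return winner

arb = SameSourcePriorityArbiter(4)
w1 = arb.arbitrate({1, 2})   # port 1 wins the first cycle
w2 = arb.arbitrate({1, 3})   # port 1 still requesting, so it keeps the port
w3 = arb.arbitrate({0, 3})   # port 1 gone; priority rotates to the next requester
```

Holding the port for a repeating source is what keeps same-source requests consecutive, preserving row-buffer locality at the storage node.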
The following describes the memory access scheduling method of the parallel processor, and the memory access scheduling method of the parallel processor and the memory controller of the parallel processor described in the following can be referred to correspondingly.
Fig. 4 is a flowchart of a memory access scheduling method of a parallel processor. As shown in fig. 4, the method is applied to a memory controller of a parallel processor comprising: a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter; the request distributor is used for distributing a first memory access request to the corresponding request buffer queue, and the request arbiter is used for responding with a memory access operation according to a second memory access request in the request buffer queue.
In some possible embodiments, the memory controller further includes a plurality of Bank units, each Bank unit including a Bank number, and each Bank unit corresponding to one of the request buffer queues through its Bank number.
In some possible embodiments, the corresponding request buffer queue is determined according to a Bank number accessed by the first memory request, where the Bank number accessed by the first memory request is determined by the request dispatcher according to the first memory request.
In some possible embodiments, the request arbiter is a plurality of request arbiters, and the request arbiters and the request buffer queues are in one-to-one correspondence.
In some possible embodiments, the second memory request in each of the request buffer queues is scheduled according to a FIFO manner.
In some possible embodiments, the request arbiter employs a same-source priority arbitration mechanism, the memory controller further comprising a response output port;
the same-source priority arbitration mechanism comprises: if the second memory access request of the i-th request arbiter wins arbitration for the use of response output port o in the k-th arbitration cycle, then in the (k+1)-th arbitration cycle, if the third memory access request at response output port o still includes the second memory access request of the i-th request arbiter, the second memory access request of the i-th request arbiter obtains the highest-priority use of response output port o.
In some possible embodiments, the request arbiter comprises arbitration logic.
Further, the method comprises:
Step 401: and receiving a first access request, and distributing the first access request to a request buffer queue.
In a specific embodiment, step 401 allocates the first access request to a corresponding request buffer queue, including:
Analyzing the first access request to obtain a Bank number accessed by the first access request;
and determining the corresponding request buffer queue according to the Bank number accessed by the first access request.
Step 402: and determining a second access request according to the request buffer queue.
In a specific embodiment, step 402 specifically includes:
And determining the second access request by adopting a FIFO scheduling mode according to the request buffer queue.
Step 403: and responding to the memory access operation according to the second memory access request.
In a specific embodiment, step 403 specifically includes:
If it is determined that the third memory access request at response output port o in the (k+1)-th arbitration cycle includes the second memory access request of the i-th request arbiter, and the second memory access request of the i-th request arbiter won the use of response output port o through arbitration in the k-th arbitration cycle, then the memory access request of the i-th request arbiter is responded to first.
Fig. 5 illustrates a schematic diagram of the physical structure of an electronic device. As shown in fig. 5, the electronic device may include: a processor 510, a communications interface 520, a memory 530, and a communication bus 540, wherein the processor 510, the communications interface 520, and the memory 530 communicate with one another via the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to perform a memory access scheduling method for a parallel processor, the method comprising:
and receiving a first access request, and distributing the first access request to a request buffer queue.
And determining a second access request according to the request buffer queue.
And responding to the memory access operation according to the second memory access request.
Further, the logic instructions in the memory 530 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In another aspect, the present invention also provides a computer program product comprising a computer program, the computer program being storable on a non-transitory computer readable storage medium, the computer program, when executed by a processor, being capable of executing a memory access scheduling method of a parallel processor, the method comprising:
and receiving a first access request, and distributing the first access request to a request buffer queue.
And determining a second access request according to the request buffer queue.
And responding to the memory access operation according to the second memory access request.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform a memory access scheduling method for a parallel processor, the method comprising:
and receiving a first access request, and distributing the first access request to a request buffer queue.
And determining a second access request according to the request buffer queue.
And responding to the memory access operation according to the second memory access request.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (14)

1. A memory controller for a parallel processor, comprising: a request distributor, a plurality of request buffer queues for holding memory access requests, and a request arbiter;
the request distributor is used for distributing a first memory access request to a request buffer queue, and the request arbiter is used for responding with a memory access operation according to a second memory access request in the request buffer queue.
2. The memory controller of claim 1 further comprising a plurality of Bank units, each of said Bank units including a Bank number, each of said Bank units corresponding to one of said request buffer queues by said Bank number.
3. The memory controller of claim 2, wherein the corresponding request buffer queue is determined according to a Bank number accessed by the first memory request, and wherein the Bank number accessed by the first memory request is determined by the request dispatcher according to the first memory request.
4. A memory controller for a parallel processor according to any one of claims 1-3, wherein there are a plurality of request arbiters, the request arbiters being in one-to-one correspondence with the request buffer queues.
5. A memory controller for a parallel processor according to any one of claims 1-3, wherein the second memory access requests in each of said request buffer queues are scheduled in a FIFO manner.
6. A memory controller for a parallel processor according to any one of claims 1-3, wherein the request arbiter employs a same-source priority arbitration mechanism, the memory controller further comprising a response output port;
the same-source priority arbitration mechanism comprises: if the second memory access request of the i-th request arbiter wins arbitration for the use of the response output port in the k-th arbitration cycle, then in the (k+1)-th arbitration cycle, if the third memory access request at the response output port still includes the second memory access request of the i-th request arbiter, the second memory access request of the i-th request arbiter obtains the highest-priority use of the response output port.
7. The memory controller of a parallel processor of claim 6 wherein the request arbiter comprises arbitration logic.
8. A memory access scheduling method for a parallel processor, applied to the memory controller of a parallel processor according to any one of claims 1 to 7, the method comprising:
Receiving a first access request, and distributing the first access request to a request buffer queue;
Determining a second memory access request according to the request buffer queue;
And responding to the memory access operation according to the second memory access request.
9. The memory access scheduling method for a parallel processor of claim 8, wherein responding with a memory access operation according to the second memory access request comprises:
if the third memory access request at the response output port in the (k+1)-th arbitration cycle includes the second memory access request of the i-th request arbiter, and the second memory access request of the i-th request arbiter won the use of the response output port through arbitration in the k-th arbitration cycle, then the memory access request of the i-th request arbiter is responded to first.
10. The method of claim 8 or 9, wherein allocating the first memory request to a corresponding request buffer queue comprises:
Analyzing the first access request to obtain a Bank number accessed by the first access request;
and determining the corresponding request buffer queue according to the Bank number accessed by the first access request.
11. The method for scheduling accesses to a parallel processor according to claim 8 or 9, wherein determining a second access request from the request buffer queue comprises:
And determining the second access request by adopting a FIFO scheduling mode according to the request buffer queue.
12. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the memory access scheduling method of a parallel processor according to any one of claims 8 to 11.
13. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the memory access scheduling method of a parallel processor according to any of claims 8 to 11.
14. A computer program product comprising a computer program which, when executed by a processor, implements a memory access scheduling method for a parallel processor according to any one of claims 8 to 11.
CN202410242214.3A 2024-03-04 2024-03-04 Memory controller of parallel processor and memory access scheduling method Pending CN118295941A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410242214.3A CN118295941A (en) 2024-03-04 2024-03-04 Memory controller of parallel processor and memory access scheduling method


Publications (1)

Publication Number Publication Date
CN118295941A (en) 2024-07-05

Family

ID=91680956



