WO2006042108A1 - Multi-threaded direct memory access - Google Patents

Multi-threaded direct memory access Download PDF

Info

Publication number
WO2006042108A1
WO2006042108A1 (PCT application PCT/US2005/036144)
Authority
WO
WIPO (PCT)
Prior art keywords
port
read
write
channel
dma
Prior art date
Application number
PCT/US2005/036144
Other languages
French (fr)
Inventor
Franck Seigneret
Sivayya Ayinala
Nabil Khalifa
Praveen Kolli
Prabha Atluri
Original Assignee
Texas Instruments Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP04292406A external-priority patent/EP1645968B1/en
Application filed by Texas Instruments Incorporated
Publication of WO2006042108A1 publication Critical patent/WO2006042108A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/20Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal

Definitions

  • a Direct Memory Access (DMA)circuit has a read port; a read port scheduler coupled to the read port; a write port; a write port scheduler coupled to the write port; and a request port coupled to the read and write port schedulers.
  • the read port is configured and adapted to support "m" threads
  • the write port is configured and adapted to support "n" threads, whereby each thread can support either a single access or a burst access transaction and the read and write ports can work on different data transfers at the same time.
  • the read and write port schedulers are each configured and adapted to arbitrate between channels at a thread boundary; and each may include a high priority queue for high priority channels and a low priority queue for regular priority channels.
  • FIG. 6 is a block diagram of a DMA read port scheduler in accordance with an embodiment of the invention.
  • Master port features: Address generator compatible with logical channel features, 32-bit interface with 64-bit option fixed at design time (the multiple interface widths can be changed to include smaller or larger widths), OCP burst support, one dedicated read port / one dedicated write port, packing/unpacking support, byte addressing support and programmable write semantics model including posted or non-posted support.
  • Table 1 highlights some generic parameters supported by the DMA circuit 200 in accordance with an embodiment of the invention.
  • The DMA circuit 200 is also referred to as DMA4.
  • the flexible nature of the DMA circuit 200 allows for its use in multiple areas of a design such as in a DSP subsystem, as a system DMA and in a camera subsystem as shown in FIG. 1.
  • a maximum of four thread IDs can be allocated in the read side, from 0 to 3 (Th0, Th1, Th2 and Th3).
  • the DMA circuit 200 can have up to four outstanding read transactions belonging to up to four channels in the system interconnect.
  • For an arbitration cycle to occur two conditions must be satisfied: (a) There is at least one channel requesting; and (b) there is at least one free thread ID available.
  • the scheduler 302 grants the highest priority channel that has an active request, allocates the thread ID, and tags this thread as Busy.
  • the channel read context is restored from the shared channel context memory 316.
  • the arbitration policy implemented is "First Come First Serviced" (FCFS).
  • channels can be given a high-priority attribute.
  • Any channel that is ready to be scheduled will be put at the end of the queue, either regular (low priority) or high priority depending on a priority bit.
  • Non-synchronized channels will be put in the queue when the software sets the enable bit.
  • Synchronized channels will be put in the queue when the hardware DMA request comes in for them.
  • CHi has a higher priority than CHj if j>i.
  • the others will be processed in subsequent cycles.
  • This rule can of course be modified depending on system design requirements.
  • the arbitration between the two queues can be performed, in one illustrative example, according to a software parameter that sets the number of times the high priority channel is granted the port before the low priority channel wins the port, and so on.
  • the read port logic can generate the next OCP address sent to the OCP interface.
  • An OCP READ request is generated by the OCP request generator 306, and is then issued on the OCP interface.
  • the request can be qualified by sideband signals, some of the sideband signals include: MThreadID field, based on the scheduler allocation; MReqSecure attribute, as read from the channel context; MReqSupervisor attribute, as read from the channel context; MReqEndianness, as read from the channel context; MReqDataType, as read from the channel context (element size); MCmd/SCmdAccept handshaking is performed normally, as required by the OCP protocol.
  • the read response manager 308 can identify the channel that owns the data. This data is submitted to the shared-FIFO control logic, and written into the FIFO 314 at the appropriate location. Once the data is written into the FIFO 314, if this is the last data of the channel service (i.e. single data service or last data of a burst service), the threadID becomes free again and its status is updated. The last data of a response is identified by a "SRespLast" qualifier. The context for the channel just serviced is saved back into the shared channel context memory using circuitry 322 which includes four registers, one for each thread and the necessary selection and path set-up circuitry.
  • Thread responses can be interleaved, even within bursts, on the read port 202.
  • the read port scheduler 302 and the write port scheduler 304 are mainly arbitrating between channels at a thread boundary.
  • One thread is associated to one DMA service, where a service can be a single or burst transaction as mentioned previously.
  • the total FIFO budget is fixed at design time by generic parameters FD and OCP_width, so that FIFO_depth = 2^FD x OCP_width.
  • the buffering budget, for one channel, is preferably bounded using a programmable threshold specified in a register "DMA4J3CR.”
  • the write port scheduler 304 is responsible for selecting the next channel to be serviced, and for allocating a thread identifier to be used on the OCP interface (MThreadID field).
  • a channel is granted access to the write port 204 by the arbitration logic, for one OCP service, this can be either an OCP single transaction or an OCP burst transaction (4x32, 8x32, 16x32), in accordance with the channel programming for the DMA destination.
  • a maximum of two thread IDs can be allocated, 0 or 1 (Th0 and Th1 on the write side).
  • DMA circuit 200 can have up to two outstanding write transactions belonging to up to two channels in the system interconnect in this embodiment using circuitry 320.
  • the write port scheduler 304 grants the highest priority channel that has an active request, allocates the thread ID, and tags this thread as Busy.
  • the channel write context is restored from the shared channel context memory 316.
  • the arbitration policy implemented is "First Come First Serviced" (FCFS).
  • a few channels can be given a high-priority attribute.
  • there are two queues, one a high priority queue and the other a low priority queue. Any channel that is ready to be scheduled will be put at the end of the queue, either regular (low priority) or high priority depending on the priority bit.
  • Non-synchronized channels will be put in the queue when the software sets the enable bit. Synchronized channels are put in the queue when the hardware DMA request comes in for them.
  • the channel is rescheduled for each smaller access until it is burst aligned. Also, if the end of the transfer is not burst aligned, the channel is rescheduled for each one of the remaining smaller accesses.
  • the write port logic can generate the next OCP address sent to the OCP interface.
  • An OCP WRITE request is then issued by the OCP request generator 310 on the OCP interface, which may be qualified by sideband signals.
  • the write command used on the OCP interface can be either a posted write (OCP WR command) or a non-posted write (OCP WRNP command):
  • OCP WR command: a posted write
  • OCP WRNP command: a non-posted write
  • the OCP write interface selects the write command to be used, based on the channel attributes as programmed by the user. There are three possibilities: (1) All channel transactions are mapped on the WRNP (none posted); (2) all channel transactions are mapped on the WR command (posted); or (3) all channel transactions are mapped on the WR command, except the last one that is mapped on a WRNP command, so that the end-of-transfer interrupt can be delayed until the write has reached the target.
  • All DMA4 writes expect a response on the OCP interface.
  • the response is provided very quickly by the system interconnect, whereas a non-posted write transaction gets its response later, after the effective write has been completed at the destination target.
  • Handshaking is performed normally, as required by the OCP protocol.
  • the packing feature is enabled if the DMA source is qualified as a non-packed target, and the DMA destination is qualified as a packed target. Packing is not compatible with source burst transactions, only destination burst can be enabled when packing is selected. Each time a channel requiring a packing operation is scheduled on the read port 202, only a partial write is done to the memory buffer on the appropriate byte lanes, with the valid bytes of the current OCP response. Consequently, the data memory must provide byte access granularity during a write operation in the data FIFO 314. The byte enable memory must also be updated accordingly.
  • An optional two-dimensional (2-D) graphic module 330 provides hardware acceleration for two commonly used graphics operations: (1) Constant Solid Color Fill, and (2) Transparent Copy (also known as transparent-blit, or source color key copy).
  • Transparent Copy: also known as transparent-blit, or source color key copy.
  • DMA4_COLOR: This feature allows filling a region with a solid color or pattern, by repeating the data horizontally and vertically in the region. Since the solid color fill and the transparent copy functions are mutually exclusive in the same channel, a "DMA4_COLOR" register is shared to set the constant color value, based on its data type. For 8bpp, 16bpp and 24bpp, the data-type specified in a DMA4_CSDP register is respectively 8-bit, 16-bit and 32-bit. During the 32-bit (24bpp) data transfer, the data [31:24] is "0". The color pattern is written at the following bit field of the DMA4_Color register:
  • DMA 200 never generates non-completed bursts.
  • at the end of a channel transfer, if there is not enough data (to be read or written) for filling a full burst, single transactions are issued on the OCP interfaces. If burst is enabled and hardware DMA request synchronization is enabled and the address is not aligned on a burst boundary, then DMA 200 will automatically split this burst access into multiple smaller accesses (minimum number of aligned accesses) until the address is aligned on the burst boundary. If the last transfer is not burst aligned, then the remaining data are split into the minimum number of aligned smaller accesses.
  • FIG. 4 shows a diagram highlighting a read port 202 multi-threading scenario where the read port has four threads (ThreadID0, ThreadID1, ThreadID2 and ThreadID3) 402-408 in accordance with an embodiment of the invention.
  • the current status for each of the threads (0-3) is shown in time lines 410-416 respectively.
  • The read requests (OCP_Read_Request) and read responses (OCP_Read_Responses) are highlighted on time lines 418 and 420 respectively. As shown in 422, it takes one or two cycles to switch from a first logical channel (LCH(i)) to another logical channel (LCH(j)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)

Abstract

A direct memory access (DMA) circuit (200) includes a read port (202) and a write port (204). The DMA circuit (200) is a multithreaded initiator with 'm' threads on the read port (202) and 'n' threads on the write port (204). The DMA circuit (200) includes two decoupled read and write contexts and schedulers (302, 304) that provide for more efficient buffering and pipelining. The schedulers (302, 304) are mainly arbitrating between channels at a thread boundary. One thread is associated to one DMA service where a service can be a single or burst transaction. The multithreaded DMA transfer allows for concurrent channel transfers.

Description

MULTI-THREADED DIRECT MEMORY ACCESS
This invention relates in general to the field of electronics; and, more specifically, to a multi-threaded Direct Memory Access (DMA).
BACKGROUND
DMA is a technique that allows for hardware in a computer to access system memory independently of the system processor. Because the processor is not involved in the transfer of data, DMA is usually fast. DMA is very useful for example in real-time applications and for making backups. A few illustrative examples of hardware that use DMA circuits include sound cards, hard disk controllers and computer subsystems. Traditional DMA circuits have one or more physical channels, wherein each physical channel is a point-to-point communication link connected from a source to a destination port. Although useful, the point-to-point links make the system inflexible and may limit the performance of the DMA for some applications. A need thus exists in the art for a DMA circuit that can help alleviate some of the problems found in prior art DMA circuits.
SUMMARY
In described examples of the inventions, a Direct Memory Access (DMA)circuit is provided that has a read port; a read port scheduler coupled to the read port; a write port; a write port scheduler coupled to the write port; and a request port coupled to the read and write port schedulers. The read port is configured and adapted to support "m" threads, and the write port is configured and adapted to support "n" threads, whereby each thread can support either a single access or a burst access transaction and the read and write ports can work on different data transfers at the same time. The read and write port schedulers are each configured and adapted to arbitrate between channels at a thread boundary; and each may include a high priority queue for high priority channels and a low priority queue for regular priority channels. In some implementations, the DMA further has a memory coupled to the read and write ports; and a configuration port coupled to the memory for receiving configuration information for configuring the DMA circuit. A channel context memory may be shared between the channels. Also included may be a read port response manager coupled to the read port, and a write port response manager coupled to the write port. The read response manager may be configured to identify the channel that owns particular data. A first-in- first-out (FIFO) memory may be coupled to the read and write ports. The DMA may provide for multithreaded DMA transfers, allowing for concurrent channel transfers. An address alignment control may be coupled to the FIFO memory so as to allow for any source byte on any read port byte lane to be transferred to any write port byte lane. The DMA circuit may further comprise an endianness conversion circuit coupled to the read port. BRIEF DESCRIPTION OF THE DRAWINGS
Example embodiments of the invention are described with reference to accompanying drawings, wherein:
FIG. 1 is a system level block diagram in accordance with one embodiment of the invention. FIG. 2 is a block diagram of a Direct Memory Access (DMA) in accordance with an embodiment of the invention.
FIG. 3 is a block diagram, giving additional details of an example DMA of FIG. 2.
FIG. 4 is a diagram, highlighting four threads received on a read port of a DMA in accordance with an embodiment of the invention. FIG. 5 is a diagram similar to FIG. 4, highlighting two threads in a write port of a
DMA in accordance with an embodiment of the invention.
FIG. 6 is a block diagram of a DMA read port scheduler in accordance with an embodiment of the invention.
FIG. 7 is a block diagram of a DMA write port scheduler in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
FIG. 1 shows a block diagram of an electronic system 100 in accordance with an example embodiment of the invention. System 100 includes a Main Processor Unit (MPU) subsystem 106 coupled to an Open Core Protocol (OCP) bus or system interconnect 120. The MPU subsystem 106 can include any one of a number of microprocessors or microcontrollers or similar control circuitry. A Digital Signal Processing (DSP) subsystem 102, a camera subsystem 110, an internal memory 114, an external memory 116 and a peripheral interconnect 118 are also coupled to the OCP bus 120. The peripheral interconnect 118 provides interconnection to any one of a number of peripheral devices such as timers, general purpose input/output (GPIO), etc. The DSP subsystem 102 includes a DSP DMA (dDMA) 104, the camera subsystem 110 includes a camera DMA (cDMA) 112 and a system DMA (sDMA) 108 all in accordance with embodiments of the invention.
The DMA circuits used in the dDMA 104, cDMA 112 and sDMA 108 comprise multipoint-to-multipoint DMA circuits which function as multi-threaded initiators each having four threads (or m threads) on their read port and two threads (or n threads) on their write port. The parameters m and n are preferably fixed by the thread budget allocated by the OCP interconnect for each initiator port. In this particular embodiment of the invention, n = 2 and m = 4, although these numbers can of course vary based on a given system's particular design requirements. The number of channels and the number of hardware requests can be changed at user configuration time. In one embodiment, the number of channels < 32 and the number of requests < 127.
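As a quick illustration of these design-time parameters, the following minimal C sketch captures the thread and channel budgets mentioned above (m = 4 read threads, n = 2 write threads, up to 32 channels and up to 127 hardware requests). The constant names and the sample configuration are illustrative assumptions, not the actual RTL generics.

```c
/* Illustrative design-time parameters for one DMA4-style instance.
 * Names are hypothetical; only the numeric budgets come from the text. */
#include <assert.h>
#include <stdio.h>

enum {
    DMA_READ_THREADS  = 4,    /* "m" threads on the read port  */
    DMA_WRITE_THREADS = 2,    /* "n" threads on the write port */
    DMA_MAX_CHANNELS  = 32,   /* number of logical channels    */
    DMA_MAX_HW_REQS   = 127   /* number of hardware requests   */
};

int main(void)
{
    /* One possible user configuration chosen at configuration time. */
    int channels = 16, hw_requests = 64;

    assert(channels <= DMA_MAX_CHANNELS && hw_requests <= DMA_MAX_HW_REQS);
    printf("DMA instance: %d channels, %d hw requests, %d/%d rd/wr threads\n",
           channels, hw_requests, DMA_READ_THREADS, DMA_WRITE_THREADS);
    return 0;
}
```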
In FIG. 2, there is shown a block diagram of a DMA circuit 200 such as used in the dDMA 104, sDMA 108 and the cDMA 112 shown in FIG.l in accordance with an embodiment of the invention. DMA 200 includes a read port 202, a write port 204, a configuration port 208 and a unified data first-in first-out (FIFO) 210. FIFO 210 in FIG. 2 is presented in a logical format and is shown sharing between different channels (ChO-ChN) 212. The DMA 200 includes two decoupled read and write contexts and schedulers (discussed below). The read port 202 and write port 204 can support up to m threads and n threads respectively, where each thread can perform either a single access or a burst access. A thread is associated to a service and a service can comprise a single or burst transaction. For example, if only one channel is scheduled, only one thread is associated to the channel and up to four bursts can be outstanding. Each burst can for example be 4 X 32 bits. Some of the features of the DMA circuit 200 include:
1). General Features: Flexible distributed-DMA fabric, with options fixed at design time, such as channel number and port width, native OCP interface and multithreading capability at both the source and destination.
2). Master port features: Address generator compatible with logical channel features, 32-bit interface with 64-bit option fixed at design time (the multiple interface widths can be changed to include smaller or larger widths), OCP burst support, one dedicated read port/ one dedicated write port, packing/unpacking support, byte addressing support and programmable write semantics model including posted or non-posted support. 3). Logical channels features: Software channel enabling, hardware channel triggering, edge/level hardware DMA request sensitive, programmable request/channel pair mapping, source/destination address generators (constant addressing, post-increment, single indexing and double indexing), different element/frame index for source and destination, unified memory buffer, shared between all channels, unified FIFO memory, size specified at design time, linked LCH support, speculative pre-fetch for synchronized channels, optional software controllable and capability to monitor the progress of the DMA transfer using element and frame counters.
4). Arbitration: All active channels can share ports based on arbitration and priority, can also support LCH first-come-first-served as well as fixed priority arbitration.
5). Security: Per channel secure attributes set by a secure transaction and secure qualifier set on the master interface when a secure channel is scheduled.
6). DMA request synchronization: Supports element, packet, frame and block synchronization. 7). Power management: Standby mode and idle mode, auto-gating capability, auto-idle and software controlled power down.
8). Interrupts: Some of the available interrupts include end frame, end block, end packet, half frame, last frame, transaction access error, secure access error and supervisor access error. 9). Debug: Through the configuration port a user can check current channel status for all channels, FIFO status, channel transfer status, data integrity, etc.
10). FIFO draining: When a channel is disabled and there is data in the corresponding FIFO, the data is drained onto the write port and transferred to a programmed destination. 11). Buffering disable: In case of source synchronized transfers, buffering can be enabled or disabled by setting a buffering disable bit (DMA4_CCR.buffering_disable) respectively to 0 or 1. When buffering is enabled, data fetched from the source side on a hardware request may not be flushed/transferred completely to the destination side until a subsequent hardware request fetches more data from the source side (to be able to pack/burst to the destination). However, if buffering is disabled, then no packing or bursting across the packet boundary is performed, and the remaining data in the packet is transferred using smaller transactions. For both cases, at the end of the block, subsequent hardware requests to flush the data on the destination side are not required. Whether buffering is disabled or not, both the source and destination are synchronized (e.g., element/frame/packet/block synchronized) during transfer. The last write transaction in the frame or in the block is non-posted write (WRNP) even if the write mode is set to 2 (WLNP). However, there should be a WRNP at the end of the packet (even if write mode = 2) only in case of destination synchronization. Whether buffering is disabled or not, the packet interrupt is not generated in the source synchronized case.
12). Other features: Per channel color-key support, per channel optional solid color fill and per channel endianness conversion.
Table 1 highlights some generic parameters supported by the DMA circuit 200 in accordance with an embodiment of the invention.
The mentioned features are not meant to be all inclusive but are just some of the features that can be provided by the DMA circuit 200 (also referred to as DMA4) of the present invention. The flexible nature of the DMA circuit 200 allows for its use in multiple areas of a design such as in a DSP subsystem, as a system DMA and in a camera subsystem as shown in FIG. 1.
FIG. 3 shows a block diagram of an example of the DMA circuit 200. The DMA circuit 200 includes a read port (DMA4 Read Port) 202 and a write port (DMA4 Write Port) 204. Coupled to the DMA4 read port 202 is a channel requests scheduler (DMA4 Read Port scheduler) 302, an OCP request generator 306 and a read port response manager 308. The read port 202 is either a 32-bit or a 64-bit read-only OCP master interface. Choice between 32 or 64 is made at design time.
The DMA4 read port scheduler 302 is responsible for selecting the next channel to be serviced, and for allocating a thread identifier to be used on the OCP interface (MThreadID field). A channel is granted access to the read port 202 by the arbitration logic, for one OCP service. This can be either an OCP single transaction or an OCP burst transaction (4X32-bit/2X64-bit, 8X32-bit/4X64-bit, 16X32-bit/8X64-bit), in accordance with the channel programming for the DMA source. The channel programming can be modified based on system design requirements.
Table 1: Generic Parameter List
In one embodiment, a maximum of four thread IDs can be allocated in the read side, from 0 to 3 (Th0, Th1, Th2 and Th3). Hence the DMA circuit 200 can have up to four outstanding read transactions belonging to up to four channels in the system interconnect. For an arbitration cycle to occur, two conditions must be satisfied: (a) There is at least one channel requesting; and (b) there is at least one free thread ID available. Upon an arbitration cycle, the scheduler 302 grants the highest priority channel that has an active request, allocates the thread ID, and tags this thread as Busy. The channel read context is restored from the shared channel context memory 316.
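The two arbitration conditions and the thread allocation can be sketched in C as below. This is a simplified model, not the patent's hardware: the channel-selection step just picks the lowest-numbered requesting channel (mirroring the CHi-over-CHj rule used later for same-cycle conflicts), and all data structures are assumptions made for the example.

```c
/* Minimal sketch of a read-port arbitration cycle: it fires only when at
 * least one channel is requesting (a) and at least one thread ID Th0..Th3
 * is free (b); the granted channel's thread is tagged Busy. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_READ_THREADS 4
#define NUM_CHANNELS     32

static bool thread_busy[NUM_READ_THREADS];
static bool chan_requesting[NUM_CHANNELS];

/* Returns the granted channel (thread ID via *tid), or -1 if no grant. */
static int read_port_arbitrate(int *tid)
{
    int free_tid = -1;
    for (int t = 0; t < NUM_READ_THREADS; t++)      /* condition (b) */
        if (!thread_busy[t]) { free_tid = t; break; }
    if (free_tid < 0)
        return -1;

    for (int ch = 0; ch < NUM_CHANNELS; ch++) {     /* condition (a); lowest */
        if (chan_requesting[ch]) {                  /* index wins here       */
            thread_busy[free_tid] = true;           /* tag thread as Busy    */
            chan_requesting[ch] = false;
            *tid = free_tid;
            return ch;  /* caller then restores the channel read context */
        }
    }
    return -1;
}

int main(void)
{
    chan_requesting[3] = chan_requesting[7] = true;
    int tid, ch;
    while ((ch = read_port_arbitrate(&tid)) >= 0)
        printf("granted CH%d on ThreadID %d\n", ch, tid);
    return 0;
}
```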
The arbitration policy implemented is "First Come First Serviced" (FCFS). On top of this arbitration, channels can be given a high-priority attribute. There are two queues, one high priority queue and one low priority queue (not shown in FIG. 3; see FIGS. 6 and 7 for a more detailed view of the schedulers 302 and 304). Any channel that is ready to be scheduled will be put at the end of the queue, either regular (low priority) or high priority depending on a priority bit. Non-synchronized channels will be put in the queue when the software sets the enable bit. Synchronized channels will be put in the queue when the hardware DMA request comes in for them. There can be multiple channels that are ready and need to be put in the same queue in the same cycle, one from the configuration port 208 and multiple DMA requests. In this particular case, only one channel will be put in the queue (one in each queue) according to the following rule in one embodiment of the invention: CHi has a higher priority than CHj if j>i. The others will be processed in subsequent cycles. This rule can of course be modified depending on system design requirements. The arbitration between the two queues can be performed, in one illustrative example, according to a software parameter that sets the number of times the high priority channel is granted the port before the low priority channel wins the port, and so on.
The top of each queue can be scheduled in each cycle. A software-configurable 8-bit priority counter is used to give weighting to the priority queue. For every N (1 to 256) schedules from the priority queue, one will be scheduled from the regular queue. A channel that is scheduled will go to the end of the queue after it finishes its turn on the port. At a given time, a channel cannot be allocated more than one thread ID.
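The weighted arbitration between the two FCFS queues can be illustrated with the following minimal C sketch: for every N grants taken from the high-priority queue, one grant goes to the regular queue. The queue implementation, the weight value and the grant counter are assumptions for the example, not the register interface of the device.

```c
/* Sketch of two-queue FCFS arbitration with a programmable weight N (1..256):
 * N high-priority grants are allowed before one regular-queue grant. */
#include <stdio.h>

#define QCAP 32

typedef struct { int buf[QCAP]; int head, count; } fifo_q;

static int  q_empty(const fifo_q *q) { return q->count == 0; }
static void q_push(fifo_q *q, int ch) { q->buf[(q->head + q->count++) % QCAP] = ch; }
static int  q_pop(fifo_q *q)
{
    int ch = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return ch;
}

/* Picks the next channel to schedule, or -1 if both queues are empty. */
static int schedule_next(fifo_q *hi, fifo_q *lo, int weight_n, int *hi_grants)
{
    if (!q_empty(hi) && (*hi_grants < weight_n || q_empty(lo))) {
        (*hi_grants)++;
        return q_pop(hi);                /* high-priority queue wins */
    }
    if (!q_empty(lo)) {
        *hi_grants = 0;                  /* regular queue gets its one turn */
        return q_pop(lo);
    }
    return -1;
}

int main(void)
{
    fifo_q hi = {0}, lo = {0};
    int grants = 0, weight_n = 3;        /* e.g. 3 high-priority grants per regular grant */
    q_push(&hi, 2); q_push(&hi, 5); q_push(&hi, 9); q_push(&hi, 11);
    q_push(&lo, 1); q_push(&lo, 7);
    int ch;
    while ((ch = schedule_next(&hi, &lo, weight_n, &grants)) >= 0)
        printf("scheduled CH%d\n", ch);  /* order: 2, 5, 9, 1, 11, 7 */
    return 0;
}
```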
Note that if more than one channel is active, each channel is given a ThreadID for the current service only, not for the whole channel transfer. The current channel number/ThreadID associations are stored, and made available to the read response manager 308. However, if only one channel is active, then one thread ID is allocated during the channel transfer and back to back service (Burst or single) can be done with a maximum of 4 consecutive bursts (e.g., 4x32-bit) without rescheduling the channel at the end of each burst transfer. If non-burst alignment occurs at the beginning of the transfer, then the channel is rescheduled for each smaller access until burst aligned. Also, if the end of the transfer is not burst aligned, the channel is rescheduled for each one of the remaining smaller accesses.
From the restored channel context, the read port logic can generate the next OCP address sent to the OCP interface. An OCP READ request is generated by the OCP request generator 306, and is then issued on the OCP interface. The request can be qualified by sideband signals, some of the sideband signals include: MThreadID field, based on the scheduler allocation; MReqSecure attribute, as read from the channel context; MReqSupervisor attribute, as read from the channel context; MReqEndianness, as read from the channel context; MReqDataType, as read from the channel context (element size); MCmd/SCmdAccept handshaking is performed normally, as required by the OCP protocol.
When receiving an OCP read response (from, for example, a SThreadID field), the read response manager 308 can identify the channel that owns the data. This data is submitted to the shared-FIFO control logic, and written into the FIFO 314 at the appropriate location. Once the data is written into the FIFO 314, if this is the last data of the channel service (i.e. single data service or last data of a burst service), the threadID becomes free again and its status is updated. The last data of a response is identified by a "SRespLast" qualifier. The context for the channel just serviced is saved back into the shared channel context memory using circuitry 322, which includes four registers, one for each thread, and the necessary selection and path set-up circuitry. Thread responses can be interleaved, even within bursts, on the read port 202. The read port scheduler 302 and the write port scheduler 304 are mainly arbitrating between channels at a thread boundary. One thread is associated to one DMA service, where a service can be a single or burst transaction as mentioned previously.
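The response-side bookkeeping just described can be sketched as follows: the SThreadID of each response selects the owning channel, the data goes to the shared FIFO, and the thread ID is freed when the last-data qualifier arrives. The lookup table, FIFO stub and function names are illustrative assumptions, not the actual response-manager logic.

```c
/* Sketch of read-response handling: SThreadID -> channel lookup, FIFO write,
 * and freeing the thread ID on the "SRespLast" (last data) qualifier. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_READ_THREADS 4

static int  thread_to_channel[NUM_READ_THREADS] = { 3, -1, 7, -1 };
static bool thread_busy[NUM_READ_THREADS]       = { true, false, true, false };

static void fifo_write(int channel, uint32_t data)
{
    printf("FIFO write: CH%d data 0x%08x\n", channel, data);
}

static void on_read_response(int s_thread_id, uint32_t data, bool s_resp_last)
{
    int ch = thread_to_channel[s_thread_id];   /* identify the owning channel */
    fifo_write(ch, data);
    if (s_resp_last) {                         /* single data or last of a burst */
        thread_busy[s_thread_id] = false;      /* thread ID becomes free again   */
        thread_to_channel[s_thread_id] = -1;   /* channel context is saved back  */
    }
}

int main(void)
{
    on_read_response(0, 0xCAFEF00Du, false);   /* responses may interleave */
    on_read_response(2, 0x12345678u, true);
    on_read_response(0, 0xDEADBEEFu, true);
    return 0;
}
```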
In one embodiment, each channel context is composed of one read context and one write context, with the read and write contexts being scheduled separately. After a DMA request is received at the DMA request port 206, the associated channel "i" is scheduled. The channel context is loaded, then each time there is an OCP read request, one thread m (0 up to 3) is allocated during the whole read transaction. While there is a free thread, other channels can be scheduled according to the arbitration schema employed. One thread becomes free as soon as the corresponding channel read transaction (e.g., a single transaction, burst transaction of 4 X 32 or 8 X 32) is finished. Once a thread becomes free it can be allocated to another channel. The configuration port 208 operates as a slave port and is not buffered. It enables a host (not shown) to access the entity formed by the DMA circuit 200. The configuration port 208 is used for configuration and access to status registers found in the DMA circuit 200. In one embodiment the configuration port 208 is a synchronous 32-bit data bus that supports 8, 16 and 32-bit aligned data and non-burst accesses. The configuration port 208 can also access memory locations, logical channel context and hardware requests memory locations. All write accesses to any internal register, are handled as non-posted write (WRNP) transactions, even if the OCP command used is WR instead of WRNP. A response is sent back onto the OCP interface, after the write effectively completes. The configuration port 208 can access all the global and channel registers in 8-bit, 16-bit or 32-bit form.
Coupled to the DMA4 write port 204 is a DMA4 write port scheduler 304, an OCP request generator 310 and a response manager 312. The write port 204 is driven from the requests coming from the data FIFO 314. There is no other correlation between channel contexts open on the read port side, and channel contexts open on the write port side. Most of the time, open read channel contexts and simultaneously open write channel contexts are different. The OCP write port is either a 32-bit or a 64-bit write-only OCP master interface, the choice between 32-bit or 64-bit is made at design time, although other designs can have different bit sizes.
The total FIFO 314 budget is fixed at design time by generic parameters FD and "OCP_width", so that FIFO_depth = 2^FD x OCP_width. There is no per-channel allocation of the DMA buffering budget; a full dynamic buffering model is implemented. The buffering budget, for one channel, is preferably bounded using a programmable threshold specified in a register "DMA4J3CR."
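A short worked example of that sizing formula follows; the FD and OCP_width values are arbitrary picks for illustration, not the values of any particular instance.

```c
/* Worked example of FIFO_depth = 2^FD words of OCP_width bits. */
#include <stdio.h>

int main(void)
{
    unsigned fd = 6;                 /* generic parameter "FD" (example value)     */
    unsigned ocp_width_bits = 32;    /* 32-bit or 64-bit port, fixed at design time */

    unsigned depth_words = 1u << fd;                       /* 2^FD = 64 words */
    unsigned total_bytes = depth_words * (ocp_width_bits / 8);

    printf("FIFO: %u words x %u bits = %u bytes, shared dynamically by all channels\n",
           depth_words, ocp_width_bits, total_bytes);
    return 0;
}
```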
The write port scheduler 304 is responsible for selecting the next channel to be serviced, and for allocating a thread identifier to be used on the OCP interface (MThreadID field). A channel is granted access to the write port 204 by the arbitration logic, for one OCP service; this can be either an OCP single transaction or an OCP burst transaction (4x32, 8x32, 16x32), in accordance with the channel programming for the DMA destination. A maximum of two thread IDs can be allocated, 0 or 1 (Th0 and Th1 on the write side). Hence DMA circuit 200 can have up to two outstanding write transactions belonging to up to two channels in the system interconnect in this embodiment using circuitry 320.
For an arbitration cycle to occur, two conditions must be satisfied: (a) There has to be at least one channel requesting; and (b) there is at least one free thread ID available.
In an arbitration cycle, the write port scheduler 304 grants the highest priority channel that has an active request, allocates the thread ID, and tags this thread as Busy. The channel write context is restored from the shared channel context memory 316. The arbitration policy implemented is "First Come First Serviced" (FCFS). On top of this arbitration, a few channels can be given a high-priority attribute. In one embodiment, there are two queues, one a high priority queue and the other a low priority queue. Any channel that is ready to be scheduled will be put at the end of the queue, either regular (low priority) or high priority depending on the priority bit. Non-synchronized channels will be put in the queue when the software sets the enable bit. Synchronized channels are put in the queue when the hardware DMA request comes in for them.
There can be multiple channels that are ready and need to be put in the same queue in the same cycle, one from the configuration port 208 and multiple DMA requests. In this case only one channel will be put in the queue (one in each queue) according to the following rule: CHi has a higher priority than CHj if j>i. The others will be processed in subsequent cycles. If only one channel is active, then one thread ID is allocated during the channel transfer and back-to-back service (burst or single) can be done with a maximum of 4 consecutive bursts (e.g., each burst can be for example 4X32-bit) without rescheduling the channel at the end of each burst transfer. If there is non-burst alignment at the beginning of the transfer, then the channel is rescheduled for each smaller access until it is burst aligned. Also, if the end of the transfer is not burst aligned, the channel is rescheduled for each one of the remaining smaller accesses.
The top of each queue can be scheduled in each cycle. A software-configurable 8-bit priority counter is used to give weighting to the priority queue. For every N (1 to 256) schedules from the priority queue, one will be scheduled from the regular queue. A channel that is scheduled will go to the end of the queue after it finishes its turn on the port. Note that if more than one channel is active, each channel is given a ThreadID for the current service only, not for the whole channel transfer. The current channel number/ThreadID associations are stored, and made available to the write port response manager 312.
From the restored channel context, the write port logic can generate the next OCP address sent to the OCP interface. An OCP WRITE request is then issued by the OCP request generator 310 on the OCP interface, which may be qualified by sideband signals.
The write command used on the OCP interface can be either a posted write (OCP WR command) or a non-posted write (OCP WRNP command). The OCP write interface selects the write command to be used, based on the channel attributes as programmed by the user. There are three possibilities: (1) All channel transactions are mapped on the WRNP command (non-posted); (2) all channel transactions are mapped on the WR command (posted); or (3) all channel transactions are mapped on the WR command, except the last one, which is mapped on a WRNP command, so that the end-of-transfer interrupt can be delayed until the write has reached the target.
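The three per-channel choices can be summarized in a small C sketch that picks WR or WRNP for each transaction. The enum and function names are assumptions for illustration; only the three modes and the posted/non-posted commands come from the text.

```c
/* Sketch of write-command selection: all WRNP, all WR, or WR with the final
 * transaction as WRNP so the end-of-transfer interrupt waits for completion. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { OCP_WR, OCP_WRNP } ocp_wr_cmd;
typedef enum { MODE_ALL_WRNP, MODE_ALL_WR, MODE_WR_LAST_WRNP } wr_mode;

static ocp_wr_cmd select_write_cmd(wr_mode mode, bool last_of_transfer)
{
    switch (mode) {
    case MODE_ALL_WRNP:     return OCP_WRNP;                          /* (1) */
    case MODE_ALL_WR:       return OCP_WR;                            /* (2) */
    case MODE_WR_LAST_WRNP: return last_of_transfer ? OCP_WRNP : OCP_WR; /* (3) */
    }
    return OCP_WRNP;
}

int main(void)
{
    for (int i = 0; i < 4; i++) {
        bool last = (i == 3);
        printf("txn %d -> %s\n", i,
               select_write_cmd(MODE_WR_LAST_WRNP, last) == OCP_WR ? "WR" : "WRNP");
    }
    return 0;
}
```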
All DMA4 writes expect a response on the OCP interface. Usually, when issuing a posted write request, the response is provided very quickly by the system interconnect, whereas a non-posted write transaction gets its response later, after the effective write has been completed at the destination target. Handshaking is performed normally, as required by the OCP protocol.
When receiving an OCP write response, from the SThreadID field, the write port response manager 312 can identify the channel that owns the response. Once the data is read from the FIFO 314, if this is the last data of the channel service (i.e. single data service or last data of a burst service), the threadID becomes free again and its status is updated. The context for the channel just serviced is saved back via circuitry 320 into the shared channel context memory 316. It should be noted that thread responses can be interleaved, even within bursts, on the write port 204.
The configuration port 208 can access all global 318 and channel registers in 8-bit, 16-bit or 32-bit form. Four of the registers need a shadow register to be read correctly: DMA4_CSAC: Channel Source Address Counter; DMA4_CDAC: Channel Destination Address Counter; DMA4_CCEN: Channel Current transferred Element Number; and DMA4_CCFN: Channel Current transferred Frame Number. To make implementation easier, only one shadow register is used by the above four registers.
Packing is performed on the read port side 202 when the channel element type is narrower than the read port 202, and if this feature has been enabled by the DMA programmer. The packing feature is enabled if the DMA source is qualified as a non-packed target, and the DMA destination is qualified as a packed target. Packing is not compatible with source burst transactions; only destination burst can be enabled when packing is selected. Each time a channel requiring a packing operation is scheduled on the read port 202, only a partial write is done to the memory buffer on the appropriate byte lanes, with the valid bytes of the current OCP response. Consequently, the data memory must provide byte access granularity during a write operation in the data FIFO 314. The byte enable memory must also be updated accordingly.
No new NextWriteAddress 336 is allocated until the memory word is complete, i.e. when the last byte of the memory word is effectively written. The channel FIFO level is also updated on this event. This update event is triggered based on the current byte address of the read access, with respect to the element type and the transaction endianness. Based on address alignment and total transfer count, the first and last packed- words can be partial. This is reported to the write port side using the byte enable memory 332.
Unpacking is done on the write port side when the channel element type is narrower than the write port 204, and if this feature has been enabled by the DMA programmer. The unpacking feature is enabled if the DMA source is qualified as a packed target, and the DMA destination is qualified as a non-packed target. Unpacking is not compatible with destination burst transactions, only source burst can be enabled when unpacking is selected. When both source and destination targets are packed or unpacked then packing and unpacking operations are disabled.
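The packed/non-packed rules above can be condensed into a tiny decision sketch in C: packing on the read side when the source is non-packed and the destination packed, unpacking on the write side in the opposite case, and neither when both sides match. The structure and names are illustrative only.

```c
/* Sketch of when packing or unpacking is enabled for a channel. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { bool pack_on_read; bool unpack_on_write; } pack_cfg;

static pack_cfg resolve_packing(bool src_packed, bool dst_packed)
{
    pack_cfg cfg = { false, false };
    if (!src_packed && dst_packed)
        cfg.pack_on_read = true;      /* narrow source elements packed into FIFO words */
    else if (src_packed && !dst_packed)
        cfg.unpack_on_write = true;   /* FIFO words unpacked toward a narrow destination */
    return cfg;                       /* both packed or both non-packed: disabled */
}

int main(void)
{
    pack_cfg c = resolve_packing(false, true);
    printf("pack=%d unpack=%d\n", c.pack_on_read, c.unpack_on_write);
    return 0;
}
```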
Each time a channel requiring an unpacking operation is scheduled on the write port 204, a regular word read is performed from the memory buffer, at the address stored in the current NextReadAddress register 320. Only valid bytes are taken into account, and the NextReadAddress register is only updated from the NextReadAddress FIFO 334 when all bytes within a data FIFO word have been read and sent to the write port 204. In a consistent manner, this NextReadAddress must be declared free again following the last read to the FIFO (i.e. written into the NextWriteAddress FIFO 336).
The DMA 200 targets can have different endianness types. An endianness module 324 is used to match the endianness of the source target and the destination target. The endianness conversion takes place if there is an endianness mismatch. This is done according to a source and destination endianness control bit-field (DMA4_CSDP.Src_Endianness = X) and (DMA4_CSDP.Dst_Endianness = Y). If X = Y, then no endianness conversion is performed; however, if X != Y, then an endianness conversion is performed (big endian to little endian or little endian to big endian).
At the system level, more than one endianness module may have the capability to convert endianness if required. It is possible to inform the next module in the target of the read and write request paths to lock the endianness. This is qualified by an in-band signal (MReqEndiannessLock) when (DMA4_CSDP.Src_Endianness_lock) or (DMA4_CSDP.Dst_Endianness_lock) is set to 1. In any case, the DMA 200 generates MReqDataType and MReqEndianness in-band qualifiers.
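The convert-only-on-mismatch rule can be illustrated by the sketch below, which byte-swaps within an element when source and destination endianness differ. The element handling is simplified to 16- and 32-bit types, and all names are assumptions for the example.

```c
/* Sketch of the endianness-mismatch rule: swap bytes within each element
 * only when the programmed source and destination endianness differ. */
#include <stdint.h>
#include <stdio.h>

typedef enum { ENDIAN_LITTLE, ENDIAN_BIG } endian_t;

static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

static uint32_t convert_element(uint32_t v, endian_t src, endian_t dst, int elem_bits)
{
    if (src == dst)
        return v;                                     /* X == Y: no conversion */
    if (elem_bits == 16)
        return ((v >> 8) & 0x00FFu) | ((v & 0x00FFu) << 8);
    return swap32(v);                                 /* 32-bit element */
}

int main(void)
{
    printf("0x%08x\n", convert_element(0x11223344u, ENDIAN_LITTLE, ENDIAN_BIG, 32));
    printf("0x%08x\n", convert_element(0x0000AABBu, ENDIAN_LITTLE, ENDIAN_LITTLE, 16));
    return 0;
}
```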
In the DMA4 address programming registers for the source target and the destination target, it is assumed that start addresses are always aligned on an element size boundary: 8-bit elements, start addresses aligned on bytes; 16-bit elements, start addresses aligned on 16-bit memory words; and 32-bit elements, start addresses aligned on 32-bit memory words. Once this condition is met, there is still a potential alignment mismatch between source addresses and destination addresses (for example, when transferring a 16-bit data buffer from memory source start address 0x1000 to memory destination address 0x10002 using a 32-bit DMA4 instance). Address alignment control 328 is required so that any source byte on any read port byte lane can be transferred on any write port byte lane.
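One simple way to picture that byte-lane steering is as a rotation of the port word by the difference between the source and destination address offsets, as in the sketch below. This is only a model of the problem for a 32-bit instance; the actual alignment-control circuitry is not described at this level of detail in the text.

```c
/* Sketch of byte-lane realignment: a byte read on lane (src_addr % width)
 * must be steered to lane (dst_addr % width) on the write side. */
#include <stdint.h>
#include <stdio.h>

#define PORT_BYTES 4u   /* 32-bit DMA4 instance */

/* Rotate a port word so data aligned for src_addr becomes aligned for dst_addr. */
static uint32_t realign_word(uint32_t word, uint32_t src_addr, uint32_t dst_addr)
{
    unsigned shift = ((dst_addr % PORT_BYTES) - (src_addr % PORT_BYTES) + PORT_BYTES)
                     % PORT_BYTES;              /* number of byte lanes to rotate by */
    unsigned bits = shift * 8;
    return bits ? (word << bits) | (word >> (32 - bits)) : word;
}

int main(void)
{
    /* e.g. a 16-bit buffer at source 0x1000 copied toward destination 0x10002 */
    uint32_t w = realign_word(0xDDCCBBAAu, 0x1000, 0x10002);
    printf("0x%08x\n", w);   /* bytes from lanes 0..1 move to lanes 2..3 */
    return 0;
}
```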
An optional two-dimensional (2-D) graphic module 330 provides hardware acceleration for two commonly used graphics operations: (1) Constant Solid Color Fill, and (2) Transparent Copy (also known as transparent-blit, or source color key copy). Transparent Color:
It is often desirable to transfer irregular shape information, and this is especially common in software game programs. DMA 200 supports a COLOR KEY (defined in a per channel register named DMA4_COLOR) feature for 8bpp, 16bpp and 24bpp from source to destination, i.e., each element of the channel source is compared to a color key value, and those data_bits (pixels) that match the color key are not written to the destination. For 8bpp, 16bpp and 24bpp the data-type specified in the DMA4_CSDP register is respectively 8-bit, 16-bit and 32-bit. During 32-bit (24bpp) data transfer the data [31:24] is '0'. The color pattern is written at the following bit field of the DMA4_Color register:
[7:0] and don't care at [23:8] for 8bpp; [15:0] and don't care at [23:16] for 16bpp; and
[23:0] for 24bpp.
Burst/packed transactions can be used with no restriction. Each time there is a color key match, the write access is discarded using the write port byte enable pattern, but the write OCP transaction is performed normally. Thus, there is no performance penalty when this feature is enabled.
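The byte-enable-based discard can be sketched as follows for the 16bpp case: a matching pixel simply has its byte enables cleared while the write transaction still goes out, which is why there is no performance penalty. The packing of two pixels per 32-bit word and all names are assumptions made for this example.

```c
/* Sketch of transparent-copy (color-key) byte enables: pixels equal to the
 * key are masked out of the write via the byte-enable pattern. 16bpp shown. */
#include <stdint.h>
#include <stdio.h>

/* Byte-enable pattern for one 32-bit write word holding two 16bpp pixels;
 * a matching pixel has its two byte-enable bits cleared. */
static unsigned byte_enables_16bpp(uint32_t word, uint16_t color_key)
{
    unsigned be = 0xF;                           /* all four byte lanes enabled */
    if ((uint16_t)(word & 0xFFFFu) == color_key)
        be &= ~0x3u;                             /* mask the pixel in lanes 0..1 */
    if ((uint16_t)(word >> 16) == color_key)
        be &= ~0xCu;                             /* mask the pixel in lanes 2..3 */
    return be;
}

int main(void)
{
    uint16_t key = 0xF81F;                       /* e.g. magenta in RGB565 */
    printf("be=0x%X\n", byte_enables_16bpp(0x1234F81Fu, key));  /* be=0xC */
    printf("be=0x%X\n", byte_enables_16bpp(0xF81FF81Fu, key));  /* be=0x0 */
    return 0;
}
```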
Solid Constant Color Fill:
This feature allows filling a region with a solid color or pattern, by repeating the data horizontally and vertically in the region. Since the solid color fill and the transparent copy functions are mutually exclusive in the same channel, a "DMA4_COLOR" register is shared to set the constant color value, based on its data type. For 8bpp, 16bpp and 24bpp, the data-type specified in a DMA4_CSDP register is respectively 8-bit, 16-bit and 32-bit. During the 32-bit (24bpp) data transfer, the data [31:24] is "0". The color pattern is written at the following bit field of the DMA4_Color register:
[7:0] and don't care at [23:8] for 8bpp; [15:0] and don't care at [23:16] for 16bpp; and
[23:0] for 24bpp.
The register data does not come from the read port 202; but is the source for solid fill data that goes out on the write port 204.
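The bit-field mapping of the constant color onto the shared register can be shown with a short sketch; the enum and function names are illustrative, and only the [7:0]/[15:0]/[23:0] fields and the zeroed [31:24] bits come from the text.

```c
/* Sketch of how the solid-fill color maps onto the shared color register:
 * [7:0] for 8bpp, [15:0] for 16bpp, [23:0] for 24bpp (bits [31:24] are 0). */
#include <stdint.h>
#include <stdio.h>

typedef enum { BPP8, BPP16, BPP24 } color_depth;

static uint32_t color_fill_word(uint32_t dma4_color, color_depth depth)
{
    switch (depth) {
    case BPP8:  return dma4_color & 0x000000FFu;  /* 8-bit data type   */
    case BPP16: return dma4_color & 0x0000FFFFu;  /* 16-bit data type  */
    case BPP24: return dma4_color & 0x00FFFFFFu;  /* 32-bit, [31:24]=0 */
    }
    return 0;
}

int main(void)
{
    printf("0x%08x\n", color_fill_word(0x00ABCDEFu, BPP24));
    printf("0x%08x\n", color_fill_word(0x0000BEEFu, BPP16));
    return 0;
}
```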
DMA circuit 200 can generate OCP bursts on both the read port 202 and the write port 204. The burst model complies with the OCPIP2.0 with the following characteristics:
1). Incrementing, precise bursts: The burst size can be 16 bytes, 32 bytes or 64 bytes. For a 32-bit DMA4 instance, that means 4x32, 8x32 or 16x32 bursts; for a 64-bit DMA4 instance, that means 2x64, 4x64 or 8x64 bursts. A smaller burst size than the programmed burst size is also allowed. This is usually used when the start address is not aligned to the programmed burst size or the data remaining to be transferred is less than the programmed burst size. This achieves better performance than performing single transactions until the address is aligned to the programmed burst size. Because of this, a 2x32 burst is allowed on a 32-bit OCP interface.
2). Streaming burst (OCP code = STRM): This is valid if burst mode is enabled in constant addressing mode with non-packed transactions. Also, the packed target must be enabled when burst is enabled in non-constant addressing mode.
3). End-of-burst qualifiers are required: MReqLast and SRespLast (also used for single OCP transactions).
4). All bursts are aligned: A burst always starts on a memory address aligned on the burst size. This does not mean the OCP parameter burst_aligned should be ON, as this parameter assumes the byte enable pattern is all 1's and constant during the whole burst. This condition is not always met on the write port 204 operating in transparent-blit mode, as the byte enable pattern is used to eliminate pixels that must not be written into the memory (when there is a match with the color key). Even with the burst_enable option on, in the channel programming at the beginning of the transfer, DMA 200 can wait for the OCP address to reach a value aligned on the burst size, before issuing burst transactions. Therefore the first channel accesses can consist of single transactions.
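The alignment rule in item 4, together with the automatic splitting described in the next paragraph, can be illustrated with a minimal C sketch: smaller aligned accesses are issued until the address reaches a burst boundary, full bursts follow, and a non-aligned tail is again split into the minimum number of aligned smaller accesses. Power-of-two access sizes and a length that is a multiple of one 32-bit element are assumed; this is a model of the behavior, not the device's address generator.

```c
/* Sketch of burst-alignment splitting for a DMA-style transfer. */
#include <stdint.h>
#include <stdio.h>

static void emit(uint32_t addr, uint32_t bytes)
{
    printf("  access @0x%08x, %u bytes\n", addr, bytes);
}

static void split_transfer(uint32_t addr, uint32_t len, uint32_t burst_bytes)
{
    while (len) {
        uint32_t chunk = burst_bytes;
        if ((addr % burst_bytes) != 0 || len < burst_bytes) {
            /* largest power-of-two access that keeps the address aligned
             * and does not overshoot the remaining length */
            chunk = 4;                              /* minimum: one 32-bit word */
            while (chunk * 2 <= burst_bytes &&
                   (addr % (chunk * 2)) == 0 && chunk * 2 <= len)
                chunk *= 2;
        }
        emit(addr, chunk);
        addr += chunk;
        len  -= chunk;
    }
}

int main(void)
{
    /* e.g. 16-byte (4x32) bursts, transfer starting 8 bytes past a boundary:
     * one 8-byte aligned access, then two full 16-byte bursts */
    split_transfer(0x00001008u, 40, 16);
    return 0;
}
```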
Whatever the transfer length, DMA 200 never generates non-completed bursts. At the end of a channel transfer, if there is not enough data (to be read or written) for filling a full burst, single transactions are issued on the OCP interfaces. If burst is enabled and hardware DMA request synchronization is enabled and the address is not aligned on a burst boundary, then DMA 200 will automatically split this burst access into multiple smaller accesses (minimum number of aligned accesses) until the address is aligned on the burst boundary. If the last transfer is not burst aligned, then the remaining data are split into the minimum number of aligned smaller accesses.
Referring to FIG. 4, there is shown a diagram highlighting a read port 202 multi-threading scenario where the read port has four threads (ThreadID0, ThreadID1, ThreadID2 and ThreadID3) 402-408 in accordance with an embodiment of the invention. The current status for each of the threads (0-3) is shown in time lines 410-416 respectively, with the read requests (OCP_Read_Request) and read responses (OCP_Read_Responses) highlighted on time lines 418 and 420 respectively. As shown in 422, it takes one or two cycles to switch from a first logical channel (LCH(i)) to another logical channel (LCH(j)).
Referring to FIG. 5, there is shown a diagram highlighting a write port 204 multi-threading scenario in accordance with an embodiment of the invention. Each time there is an OCP write request (OCP_Write_Request), one thread n (0 up to 1) is allocated during the current write transaction. In FIG. 5, two threads, Thread0 and Thread1, are shown. While there is a free thread, other channels can be scheduled according to the arbitration schema employed in the particular design. One thread becomes free as soon as the corresponding channel write transaction (e.g., single transaction, burst transaction of 4 X 32 or 8 X 32) is finished. Once a thread becomes free, it can be allocated to another channel. FIG. 5 shows four logical channels LCH(i) 502, LCH(j) 504, LCH(k) 506 and LCH(l) 508, and the current status of the two threads (Thread0 and Thread1) is also shown. As also shown, it takes one or two cycles from the end of a write request to start a new write request.
Referring now to FIG. 6, there is shown a functional diagram of the read port scheduler 302. Hardware 602 and software enabled channel requests 604 are received into the scheduler and go through a first level of arbitration in block 606. In block 605, the channel requests are split into high priority and low (regular) priority channels. The logic for determining what characterizes high priority and low priority channels is dependent on the system design requirements. The high priority channels go to arbitration logic 606, where arbitration between concurrent channel requests occurs. For example, depending on the arbitration rules, CHi may have priority over CHj when i < j. The low priority channels go through the low priority channel arbitration logic 612.
Scheduling and rescheduling for the high priority channels occurs in 610, while scheduling and rescheduling for the low priority channels occurs in 612. Another arbitration between the high and low priority channels occurs in 614, according to the weight (W) given to the high priority channels provided via block 616. The available read threads 618 are allocated and provided to the read service request 620. In FIG. 7, there is shown a write port scheduler block diagram similar to the read port scheduler shown in FIG. 6.
While preferred embodiments of the invention have been illustrated and described, it will be clear that the invention is not so limited. Numerous modifications, changes, variations, substitutions and equivalents will occur to those skilled in the art, without departing from the scope of the invention.

Claims

1. A direct memory access circuit, comprising: a read port; a read port scheduler coupled to the read port; a write port; a write port scheduler coupled to the write port; and a request port coupled to the read and write port schedulers; the read port being configured and adapted to support "m" threads and the write port being configured and adapted to support "n" threads, whereby each thread can support either a single access or a burst access transaction and the read and write ports can work on different data transfers at the same time.
2. A circuit as defined in claim 1, wherein the read port scheduler and write port scheduler are each configured and adapted to arbitrate between channels at a thread boundary.
3. A DMA circuit as defined in claim 1 or claim 2, wherein the read port scheduler and the write port scheduler each include a high priority queue for high priority channels and a low priority queue for regular priority channels.
4. A DMA circuit as defined in any preceding claim, further comprising: a memory coupled to the read and write ports; and a configuration port coupled to the memory for receiving configuration information for configuring the DMA circuit.
5. A DMA circuit as defined in any preceding claim, further comprising: a channel context memory shared between the channels.
6. A DMA circuit as defined in any preceding claim, further comprising: a read port response manager coupled to the read port; and a write port response manager coupled to the write port.
7. A DMA circuit as defined in claim 6, wherein the read response manager can identify the channel that owns particular data.
8. A DMA circuit as defined in any preceding claim, further comprising: a first-in-first-out (FIFO) memory coupled to the read and write ports.
9. A DMA circuit as defined in claim 8, wherein the DMA provides for multithreaded DMA transfers, allowing for concurrent channel transfers.
10. A DMA circuit as defined in claim 9, further comprising: an address alignment control coupled to the FIFO memory, the address alignment control allows for any source byte on any read port byte lane to be transferred to any write port byte lane.
11. A DMA circuit as defined in claim 10, further comprising an endianness conversion circuit coupled to the read port.
PCT/US2005/036144 2004-10-11 2005-10-11 Multi-threaded direct memory access WO2006042108A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP04292406A EP1645968B1 (en) 2004-10-11 2004-10-11 Multi-threaded DMA
EP04292406.8 2004-10-11
US11/082,564 2005-03-17
US11/082,564 US7761617B2 (en) 2004-10-11 2005-03-17 Multi-threaded DMA

Publications (1)

Publication Number Publication Date
WO2006042108A1 true WO2006042108A1 (en) 2006-04-20

Family

ID=36148667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/036144 WO2006042108A1 (en) 2004-10-11 2005-10-11 Multi-threaded direct memory access

Country Status (1)

Country Link
WO (1) WO2006042108A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8949569B2 (en) 2008-04-30 2015-02-03 International Business Machines Corporation Enhanced direct memory access
CN112783810A (en) * 2021-01-08 2021-05-11 国网浙江省电力有限公司电力科学研究院 Application-oriented multi-channel SRIO DMA transmission system and method
CN114510212A (en) * 2021-12-31 2022-05-17 赛因芯微(北京)电子科技有限公司 Data transmission method, device and equipment based on serial digital audio interface

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020108003A1 (en) * 1998-10-30 2002-08-08 Jackson L. Ellis Command queueing engine
US6557052B1 (en) * 1999-06-07 2003-04-29 Matsushita Electric Industrial Co., Ltd. DMA transfer device
US20040028053A1 (en) * 2002-06-03 2004-02-12 Catena Networks, Inc. Direct memory access circuit with ATM support
US20040177186A1 (en) * 1998-11-13 2004-09-09 Wingard Drew Eric Communications system and method with multilevel connection identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020108003A1 (en) * 1998-10-30 2002-08-08 Jackson L. Ellis Command queueing engine
US20040177186A1 (en) * 1998-11-13 2004-09-09 Wingard Drew Eric Communications system and method with multilevel connection identification
US6557052B1 (en) * 1999-06-07 2003-04-29 Matsushita Electric Industrial Co., Ltd. DMA transfer device
US20040028053A1 (en) * 2002-06-03 2004-02-12 Catena Networks, Inc. Direct memory access circuit with ATM support

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8949569B2 (en) 2008-04-30 2015-02-03 International Business Machines Corporation Enhanced direct memory access
CN112783810A (en) * 2021-01-08 2021-05-11 国网浙江省电力有限公司电力科学研究院 Application-oriented multi-channel SRIO DMA transmission system and method
CN112783810B (en) * 2021-01-08 2022-05-03 国网浙江省电力有限公司电力科学研究院 Application-oriented multi-channel SRIO DMA transmission system and method
CN114510212A (en) * 2021-12-31 2022-05-17 赛因芯微(北京)电子科技有限公司 Data transmission method, device and equipment based on serial digital audio interface
CN114510212B (en) * 2021-12-31 2023-08-08 赛因芯微(北京)电子科技有限公司 Data transmission method, device and equipment based on serial digital audio interface

Similar Documents

Publication Publication Date Title
US7761617B2 (en) Multi-threaded DMA
US7373437B2 (en) Multi-channel DMA with shared FIFO
US7603490B2 (en) Barrier and interrupt mechanism for high latency and out of order DMA device
US10241946B2 (en) Multi-channel DMA system with command queue structure supporting three DMA modes
US7496699B2 (en) DMA descriptor queue read and cache write pointer arrangement
KR101557090B1 (en) Hierarchical memory arbitration technique for disparate sources
EP0905629A1 (en) Bridge having a ditributing burst engine
JP2016536692A (en) Computer image processing pipeline
EP0908826A2 (en) Packet protocol and distributed burst engine
JPH077374B2 (en) Interface circuit
US6892266B2 (en) Multicore DSP device having coupled subsystem memory buses for global DMA access
US9015376B2 (en) Method for infrastructure messaging
US8332564B2 (en) Data processing apparatus and method for connection to interconnect circuitry
JP2007219816A (en) Multiprocessor system
US20110022756A1 (en) Data Space Arbiter
JP2018519587A (en) Configurable mailbox data buffer device
US20150268985A1 (en) Low Latency Data Delivery
WO2006042108A1 (en) Multi-threaded direct memory access
US10983937B2 (en) Method for managing access to a shared bus and corresponding electronic device
CN109426562B (en) priority weighted round robin scheduler
US20040010644A1 (en) System and method for providing improved bus utilization via target directed completion
EP1564643A2 (en) Synthesizable vhdl model of a multi-channel dma-engine core for embedded bus systems
WO2006042261A1 (en) Multi-channel direct memory access with shared first-in-first-out memory
WO2007039933A1 (en) Operation processing device
CN108958905B (en) Lightweight operating system of embedded multi-core central processing unit

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase