WO2009033966A1 - Dynamic buffer allocation system and method - Google Patents

Dynamic buffer allocation system and method

Info

Publication number
WO2009033966A1
Authority
WO
WIPO (PCT)
Prior art keywords
buffer space
mass memory
space parts
data transfer
allocated
Application number
PCT/EP2008/061456
Other languages
French (fr)
Inventor
Wolfgang Klausberger
Stefan Abeling
Axel Kochale
Herbert Schuetze
Original Assignee
Thomson Licensing
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2009033966A1 publication Critical patent/WO2009033966A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F5/00Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F5/06Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F5/065Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • G06F3/0613Improving I/O performance in relation to throughput
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0617Improving the reliability of storage systems in relation to availability
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629Configuration or reconfiguration of storage systems
    • G06F3/0635Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0683Plurality of storage devices
    • G06F3/0689Disk arrays, e.g. RAID, JBOD

Definitions

  • Fig. 1 is a block diagram of a recording system having a system controller module to perform storage functionality on an array of storage units in accordance with an exemplary embodiment of the present invention.
  • the recording system shown in Fig. 1 is generally referred to by the reference number 100.
  • the recording system 100 includes a user interface 102, which allows a user to control the overall operation of the recording system 100 and to view information about the system status and the like.
  • the user interface includes an LCD touchpad display.
  • the recording system 100 includes a system controller module 104.
  • the system controller module 104 includes an embedded software processor system, which is shown in Fig. 1 as PPC 106. As used herein, PPC is an acronym for Power PC.
  • the system controller module 104 further includes a cache and stream control 108, a RAID control or controller 110 and a DMA engine 112.
  • the system controller module 104 is adapted to transfer data to and receive data from an HDD array 114, which comprises a plurality of individual HDDs.
  • the PPC 106 communicates via a control path 116 with external modules. Additionally, the PPC 106 configures the hardware of the system controller module 104. Transfers of data clusters to or from the disks of the HDD array 114 are initiated by setting or inscribing appropriate values into registers, indicating the cluster size, the cluster start address and the related command, like "read” or "write”.
  • the cache and stream control 108 is adapted to transfer data via a data path 118.
  • the real-time data transferred via the data path 118 are buffered in the cache and stream control 108.
  • the data processing in the exemplary RAID controller 110 ensures that data can be accurately reconstructed when up to two of the HDDs that make up the HDD array 114 provide erroneous data. The skilled person will appreciate that this result can be achieved using, for example, the known EVENODD parity code.
  • the DMA engine 112 provides the data streaming to or from the attached devices in the HDD array 114.
  • the transfers are typically initiated as bursts having a length of, for example, 64 KB.
  • Fig. 2 is a block diagram of the DMA engine 112 shown in Fig. 1.
  • the block diagram is generally referred to by the reference number 200.
  • the DMA engine 112 includes a BusDriver control 202.
  • the BusDriver control 202 is adapted to transfer control information via a control path 204 and to transfer data via a data path 206.
  • an important function of the BusDriver control 202 is to control k separate data paths accessing the HDD array 114.
  • the dynamic buffer allocation is performed between the BusDriver control 202 and a plurality of DMA access units in the DMA engine 112 shown in Fig. 1.
  • a buffer space 208 consisting of n distinct buffer space parts is implemented and managed by a buffer control 210.
  • the buffer control 210 stores data about a plurality of currently allocated buffers.
  • Fig. 2 shows a total of eight currently allocated buffers starting with a buffer ID_0 212 and ending with a buffer ID_n 214.
  • at least one buffer is used for each data path.
  • the buffers 212, 214 transfer data to the HDD array 114 via a plurality of DMA accesses 216.
  • the buffer control 210 stores the allocation of buffer space parts to individual HDDs, identified respectively, for example, by distinct buffer IDs and disk IDs.
  • the buffer control 210 also stores the sequence of successively allocated buffers for each HDD, and also manages the deallocation of unused buffer space. With respect to buffer size, assuming an exemplary data rate of 20 MB/s per disk, a buffer size of 2 MB per 100 ms data transfer latency is needed.
  • the buffers of the buffer space 208 are implemented with first-in first-out buffer memories, which may also be referred to as FIFOs.
  • the FIFO flags can be used for buffer control.
  • the buffer space can be realised with a random access memory RAM module, together with appropriate glue logic for managing sets of read and write pointers.
  • Fig. 3 is a block diagram of a buffer arrangement that operates in accordance with an exemplary embodiment of the present invention.
  • the buffer arrangement is generally referred to by the reference number 300.
  • the exemplary buffer arrangement 300 is an implementation of one buffer space part by a FIFO unit 302.
  • the FIFO unit 302 receives data from an input multiplexer 304 and delivers data to an output multiplexer 306.
  • the FIFO unit 302, as well as the input multiplexer 304 and the output multiplexer 306, are controlled by a plurality of control signals received via a control path 308. With the given multiplexers at its input and output, the FIFO unit of the buffer space part can be connected to any of the data paths with which it may be associated.
  • Fig. 4 is a process flow diagram that shows a method in accordance with an exemplary embodiment of the present invention.
  • the method is generally referred to by the reference number 400.
  • the method 400 relates to digital data transfer in an apparatus comprising two or more mass memory devices and a buffer space encompassing two or more buffer space parts, one of the buffer space parts being allocated, as a current buffer space part, to a data path of each of the mass memory devices.
  • the process begins.
  • an association of current buffer space parts and mass storage or memory devices is stored.
  • the skilled person will appreciate that the association may be stored by the buffer control 210 shown in Fig. 2 in a memory space such as the buffer space 208.
  • information about unused ones of the buffer space parts is also stored.
  • a data path identified in the association of current buffer space parts and mass memory devices is rerouted when a current buffer space part approaches or reaches a full state by connecting the data path of the allocated mass memory device to a next unused buffer space part.
  • the sequence of buffer space parts successively allocated to the mass memory device is stored for each mass memory device in the HDD array 114 currently processing a DMA transfer.
  • the process ends.
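The buffer sizing rule quoted above (2 MB of buffer per 100 ms of data transfer latency, at 20 MB/s per disk) is simply rate multiplied by latency; a minimal sketch (the function name is illustrative):

```python
def buffer_mb(rate_mb_per_s, latency_ms):
    """Buffer space (in MB) needed to absorb a stall of latency_ms
    milliseconds while data keeps arriving at rate_mb_per_s."""
    return rate_mb_per_s * latency_ms / 1000.0
```

At 20 MB/s, a 100 ms stall needs 2 MB of buffering; a multi-second error-recovery stall, as mentioned in the background, would need tens of megabytes.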

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bus Control (AREA)

Abstract

The present invention relates to a method (400) for digital data transfer in an apparatus (100) comprising two or more mass memory devices (114) and a buffer space (208) encompassing two or more buffer space parts (212, 214), one of the buffer space parts being allocated, as a current buffer space part, to a data path (216) of each of the mass memory devices, and to the pertaining apparatus (100). A method in accordance with the invention comprises storing (404) an association of current buffer space parts and mass memory devices and storing (406) information about unused ones of the buffer space parts. The method further comprises rerouting (408) a data path identified in the association of current buffer space parts and mass memory devices when a current buffer space part approaches or reaches a full state by connecting the data path of an allocated mass memory device to a next unused buffer space part, and storing (410) a sequence of buffer space parts successively allocated to the mass memory device.

Description

Dynamic Buffer Allocation System and Method
The present invention relates to the field of mass storage solutions with multiple storage units.
High-speed data recording, as used, for example, in the workflow for digital cinematography, requires extremely accurate and error-free read and write operations. Using standard ATA hard disk drives, abbreviated as HDDs, is state-of-the-art for achieving useful storage capacities, but such drives are optimized for PC applications, and their real-time performance is usually not specified. In particular, the error recovery methods used inside HDDs may lead to access times of several seconds. Access times are somewhat improved by internal caching, but for streaming applications additional external data buffering remains an important safeguard.
Using buffers of a fixed, uniform size in a set of DMA engines, each of which feeds one HDD, has a clear disadvantage when the HDDs have randomly distributed latencies: the disk with the worst latency limits the performance of the whole set and becomes a bottleneck.
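This bottleneck can be made concrete with a simplified throughput model (an illustration, not taken from the patent): in a striped array every stripe waits for its slowest member, so each drive is effectively throttled to the minimum sustained rate.

```python
def striped_throughput(disk_rates_mb_s):
    """Sustained throughput (MB/s) of a striped array with fixed, equal
    buffers: every disk is throttled to the rate of the slowest one."""
    return len(disk_rates_mb_s) * min(disk_rates_mb_s)
```

Four disks at 20 MB/s deliver 80 MB/s, but if one of them drops to 5 MB/s during error recovery, the whole set falls to 20 MB/s.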
U.S. Patent Application Publication No. 20060112252 and U.S. Patent Application Publication No. 20040236905 each purport to disclose a method and apparatus to virtually increase the size of the memory cache of a peripheral device without additional cost. A portion of the memory space of a host computer is used as additional cache memory for the peripheral device. The peripheral device and the host computer may be interfaced with an interface that has a first-party direct memory access (FPDMA) mechanism, for example, IEEE 1394 or Serial ATA. FPDMA allows the peripheral device to access the memory space of the host computer under the control of the peripheral device. The host computer provides the peripheral device with the location of the additional cache memory. The peripheral device can transfer data to and from the additional cache memory via FPDMA. The peripheral device effectively manages the additional cache memory as part of the peripheral device's own cache. U.S. Patent Application Publication No. 20020131765 purports to disclose a unique high performance digital video recorder having a number of novel features. The recorder's electronics are all on a unitary printed circuit board. The recorder also requires at least one hard disk drive and audio and video input analog signals from a source such as a video camera or broadcast media, as well as a suitable monitor for receiving output audio and video analog signals. An external time code generator such as a VITC digital clock is also required for synchronization. Also required are various manual control devices such as panel controls for mode selection. The electronics of the preferred embodiment comprise A-to-D and D-to-A converters, a hard disk interface, a JPEG compression encoder/decoder, a multi-port DRAM and DMA subsystem, a microprocessor with RS-232 and RS-422 access ports, various working memory devices and bus interfaces, and a 16-bit stereo digital audio subsystem.
Novel features of the preferred embodiment include use of an index table for disk addresses of recorded frames, a multi-port memory controller in the form of a field programmable gate array, loop recording using dual channels, and dynamic JPEG compression compensation.
U.S. Patent Application Publication No. 20050002642 purports to disclose a device that controls a system that simultaneously processes video and audio data in real time. The device includes read and write track buffers. The device detects a specific state at one of the storage devices that generates a long delay for communication. Upon this detection, the invention dynamically allocates a fixed amount of memory to read and write track buffers. The storage devices include a first storage device having a long delay caused by mechanical performance, such as a DVD read/write drive and a second storage device not having a long delay caused by mechanical performance such as a hard disk drive.
Japanese Patent Application Publication No. 2000152136 discloses a video recording device and a video server that are mutually connected via a network. A delayed-write unit on the device side sequentially writes the requested data, which is temporarily stored in a buffer, into memory. A quota allocator on the video server side assigns a memory area of predetermined size for storing that data.
U.S. Patent Application Publication No. 20050289254 purports to disclose a dynamic allocation method for DMA buffers. A DMA controller is directed to move data from an input/output (I/O) device to buffers linked in a buffer ring. Next, free buffers in the buffer ring are detected when each buffer is full. At least one new buffer is then allocated to the buffer ring when the number of detected free buffers is less than a first threshold value. Further, at least one buffer is released from the buffer ring when the number of detected free buffers exceeds a second threshold value, wherein the second threshold value exceeds the first threshold value, and the free buffers are all buffers in the buffer ring excluding those with data moved thereto by the DMA controller not yet processed by the CPU.
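The two-threshold rule of that publication can be sketched as follows (hypothetical names; the publication itself does not give an implementation):

```python
def adjust_ring_size(free_buffers, ring_size, low_threshold, high_threshold):
    """Grow the buffer ring by one buffer when the number of free buffers
    falls below the first (low) threshold, shrink it by one when free
    buffers exceed the second (high) threshold; otherwise leave it as is."""
    assert high_threshold > low_threshold
    if free_buffers < low_threshold:
        return ring_size + 1   # allocate a new buffer to the ring
    if free_buffers > high_threshold:
        return ring_size - 1   # release a buffer from the ring
    return ring_size
```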
U.S. Patent Application Publication No. 20020124132 purports to disclose a method and apparatus to manage the cache memory of a disc drive. In one aspect the data rates of different file read and write threads are used to determine the minimum seek time to allow the cache to be used more efficiently. In another aspect, the read/write cache segments are adjusted by determining the summation of the ratio between read/write cache segment sizes and the respective data rates and then adjusting the segment sizes to minimize the seek times for the data streams.
U.S. Patent No. 5,933,654 purports to disclose a data control system having a host microprocessor, a data receiving device and a DMA controller. The DMA controller is used to control the fragmentation and recombination of a buffer memory area. The data is processed in data packets and using DMA buffer chaining.
Japanese Patent No. 08194602 discloses a system in which, at the beginning of a DMA operation, areas 11-1n inside a buffer memory 1 are equally allocated to respective channels under the control of DMA control parts 21-2n and 31-3n. Each time the DMA transfer of a channel is finished, a transfer speed detecting part 4 of a DMA monitor part 2 detects the data transfer speed corresponding to a system clock or the like and calculates the ratio of data transfer speeds for the respective channels. Then, an area allocating part 5 of the DMA monitor part 2 decides the sizes of the areas 11-1n of the buffer memory 1 and distributes the respective areas 11-1n to the respective channels again corresponding to the ratio of data transfer speeds. As a result, the overhead of DMA transfer at the channel of low data transfer speed is reduced, and the time for storing data into the buffer memory with the channel of low data transfer speed can be shortened.
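The redistribution step described there amounts to splitting the buffer memory in proportion to the measured per-channel transfer speeds; a sketch under the assumption of whole-byte granularity (any rounding remainder given to the fastest channel):

```python
def redistribute(total_bytes, channel_speeds):
    """Split total_bytes across channels in proportion to their transfer
    speeds, so slow channels get smaller areas (and thus shorter fill
    times); the rounding remainder goes to the fastest channel."""
    total_speed = sum(channel_speeds)
    sizes = [total_bytes * s // total_speed for s in channel_speeds]
    sizes[channel_speeds.index(max(channel_speeds))] += total_bytes - sum(sizes)
    return sizes
```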
A system and method of data transfer to an HDD array whose performance is not limited by the stochastic latency of a single drive of the array is therefore desirable.
A method for digital data transfer in accordance with the present invention is recited in claim 1. The method is useful in an apparatus comprising two or more mass memory devices and a buffer space encompassing two or more buffer space parts, one of the buffer space parts being allocated, as a current buffer space part, to a data path of each of the mass memory devices. The method comprises storing an association of current buffer space parts and mass memory devices and storing information about unused ones of the buffer space parts. The method further comprises rerouting a data path identified in the association of current buffer space parts and mass memory devices when a current buffer space part approaches or reaches a full state by connecting the data path of an allocated mass memory device to a next unused buffer space part, and storing a sequence of buffer space parts successively allocated to the mass memory device.
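The bookkeeping that the method requires — the association of current parts to devices, the pool of unused parts, and the per-device allocation sequence — can be sketched as follows (class and method names are illustrative assumptions, not taken from the patent):

```python
class BufferControl:
    """Sketch of the claimed bookkeeping for a pool of equally sized
    buffer space parts shared by several mass memory devices."""

    def __init__(self, num_parts, devices):
        self.free = list(range(num_parts))        # unused buffer space parts
        self.current = {}                         # device -> current part
        self.sequence = {d: [] for d in devices}  # device -> parts in order
        for d in devices:                         # one current part per data path
            self._allocate(d)

    def _allocate(self, device):
        part = self.free.pop(0)                   # next unused part
        self.current[device] = part
        self.sequence[device].append(part)
        return part

    def reroute(self, device):
        """Called when the device's current part approaches or reaches a
        full state: connect its data path to the next unused part."""
        return self._allocate(device)

    def release(self, part):
        """Return a drained part to the pool of unused parts."""
        self.free.append(part)
```

A device whose disk stalls simply calls `reroute` repeatedly, accumulating parts in its sequence, while devices that keep up never need more than their initial part; the stored sequence preserves data order when the parts are later drained.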
An apparatus for digital data transfer in accordance with the present invention is recited in claim 6. The apparatus comprises two or more mass memory devices and a buffer space encompassing two or more buffer space parts, one of the buffer space parts being allocated, as a current buffer space part, to a data path of each of the mass memory devices. In addition, the apparatus includes memory means for storing an association of current buffer space parts to mass memory devices, as well as information about unused ones of the buffer space parts. The apparatus also comprises re-routing means for re-routing the data path of an allocated mass memory device to a next unused buffer space part when a buffer space part approaches or reaches a full state, and memory means for storing, for each mass memory device, the sequence of buffer space parts successively allocated to the mass memory device.
In accordance with the present invention, a data transfer may be initiated by setting or inscribing values into a register, the values indicating at least one of a cluster size, a cluster start address or a related command. At least one of the two or more buffer space parts may comprise a first-in first-out (FIFO) memory. The FIFO memory may be adapted to receive data from an input multiplexer and to deliver data to an output multiplexer. Alternatively, at least one of the two or more buffer space parts may comprise a random access memory module.
A preferred embodiment of the present invention is described with reference to the accompanying drawings. The preferred embodiment merely exemplifies the invention; numerous modifications will be apparent to the skilled person. The scope of the present invention is defined in the appended claims.
Fig. 1 is a block diagram of a recording system having a system controller module to perform storage functionality on an array of storage units in accordance with an exemplary embodiment of the present invention.
Fig. 2 is a block diagram of a DMA engine that is adapted to transfer data in accordance with an exemplary embodiment of the present invention. Fig. 3 is a block diagram of a buffer arrangement that operates in accordance with an exemplary embodiment of the present invention.
Fig. 4 is a process flow diagram that shows a method in accordance with an exemplary embodiment of the present invention.
In an exemplary embodiment of the present invention, dynamic buffer allocation is proposed: buffer space is allocated on demand when a buffer overflow becomes imminent because of the stochastic latency of the HDD served by a particular buffer. A buffer that operates in accordance with an exemplary embodiment of the present invention thus obtains additional buffer space before an overflow occurs. Dynamic buffer allocation in the data paths to the hard disk drives avoids or reduces dropouts in read or write operations and optimizes performance in a high-speed data recording workflow.
Fig. 1 is a block diagram of a recording system having a system controller module to perform storage functionality on an array of storage units in accordance with an exemplary embodiment of the present invention. The recording system shown in Fig. 1 is generally referred to by the reference number 100. The recording system 100 includes a user interface 102, which allows a user to control the overall operation of the recording system 100 and to view information about the system status and the like. In one exemplary embodiment of the present invention, the user interface includes an LCD touchpad display.
The recording system 100 includes a system controller module 104. The system controller module 104 includes an embedded software processor system, which is shown in Fig. 1 as PPC 106. As used herein, PPC is an acronym for PowerPC. The system controller module 104 further includes a cache and stream control 108, a RAID controller 110 and a DMA engine 112. The system controller module 104 is adapted to transfer data to and receive data from an HDD array 114, which comprises a plurality of individual HDDs. The PPC 106 communicates via a control path 116 with external modules. Additionally, the PPC 106 configures the hardware of the system controller module 104. Transfers of data clusters to or from the disks of the HDD array 114 are initiated by setting or inscribing appropriate values into registers, indicating the cluster size, the cluster start address and the related command, such as "read" or "write".
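The register-based transfer initiation described above can be sketched as follows. The register offsets, command encodings, and function name here are illustrative assumptions for the sketch, not taken from the patent.

```python
# Hypothetical register map; offsets and command codes are invented
# for illustration only.
REG_CLUSTER_SIZE = 0x00
REG_CLUSTER_ADDR = 0x04
REG_COMMAND = 0x08
CMD_READ, CMD_WRITE = 0x1, 0x2

def start_transfer(regs, cluster_addr, cluster_size, write=False):
    """Initiate a cluster transfer by inscribing values into registers."""
    regs[REG_CLUSTER_SIZE] = cluster_size
    regs[REG_CLUSTER_ADDR] = cluster_addr
    # Writing the command register kicks off the DMA transfer.
    regs[REG_COMMAND] = CMD_WRITE if write else CMD_READ

regs = {}
start_transfer(regs, cluster_addr=0x100000, cluster_size=64 * 1024, write=True)
```

In hardware, `regs` would be a memory-mapped register file of the system controller module rather than a dictionary.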
In the exemplary embodiment shown in Fig. 1 , the cache and stream control 108 is adapted to transfer data via a data path 118. The real-time data transferred via the data path 118 are buffered in the cache and stream control 108. The data processing in the exemplary RAID controller 110 ensures that data can be accurately reconstructed when up to two of the HDDs that make up the HDD array 114 provide erroneous data. The skilled person will appreciate that this result can be achieved using, for example, the known EVENODD parity code.
The DMA engine 112 provides the data streaming to or from the attached devices in the HDD array 114. The transfers are typically initiated as bursts having a length of, for example, 64 KB.
Fig. 2 is a block diagram of the DMA engine 112 shown in Fig. 1. The block diagram is generally referred to by the reference number 200. The DMA engine 112 includes a BusDriver control 202. The BusDriver control 202 is adapted to transfer control information via a control path 204 and to transfer data via a data path 206. In an exemplary embodiment of the present invention, an important function of the BusDriver control 202 is to control k separate data paths accessing the HDD array 114. The dynamic buffer allocation is performed between the BusDriver control 202 and a plurality of DMA access units in the DMA engine 112. For that purpose, a buffer space 208 consisting of n distinct buffer space parts is implemented and managed by a buffer control 210. The buffer control 210 stores data about a plurality of currently allocated buffers. Fig. 2 shows a total of eight currently allocated buffers starting with a buffer ID_0 212 and ending with a buffer ID_n 214. In an exemplary embodiment of the present invention, at least one buffer is used for each data path. The buffers 212, 214 transfer data to the HDD array 114 via a plurality of DMA accesses 216.
When a data transfer latency event occurs in one or more of the HDDs that make up the HDD array 114 and the corresponding buffer approaches a full state, the data path is carried over to the next free buffer, and so on. The buffer control 210 stores the allocation of buffer space parts to individual HDDs, identified, for example, by distinct buffer_id's and disk_id's. The buffer control 210 also stores the sequence of successively allocated buffers for each HDD and manages the deallocation of unused buffer space. With respect to buffer size, assuming an exemplary data rate of 20 MB/s per disk, a buffer size of 2 MB is needed per 100 ms of data transfer latency.
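The bookkeeping performed by the buffer control 210, and the buffer-size figure quoted above, can be sketched as follows. The class and method names are illustrative assumptions, and deallocation is simplified to returning a fully drained part to the free pool.

```python
class BufferControl:
    """Sketch of the buffer control bookkeeping: current allocation,
    free pool, and per-disk allocation sequence (names illustrative)."""

    def __init__(self, n_parts):
        self.free = list(range(n_parts))  # unused buffer_id's
        self.current = {}                 # disk_id -> current buffer_id
        self.sequence = {}                # disk_id -> successively allocated buffer_id's

    def allocate(self, disk_id):
        """Connect the disk's data path to the next unused buffer part."""
        buffer_id = self.free.pop(0)
        self.current[disk_id] = buffer_id
        self.sequence.setdefault(disk_id, []).append(buffer_id)
        return buffer_id

    def reroute_on_full(self, disk_id):
        """Carry the data path over to the next free buffer on a latency
        event; the old part keeps draining and is released separately."""
        return self.allocate(disk_id)

    def release(self, disk_id, buffer_id):
        """Deallocate a fully drained buffer part back to the free pool."""
        self.sequence[disk_id].remove(buffer_id)
        self.free.append(buffer_id)

# Buffer sizing quoted in the text: 20 MB/s per disk over 100 ms of
# latency requires 2 MB of buffering.
required_mb = 20 * 0.1
```

A usage example: a disk allocates part 0, a latency event reroutes it to part 1, and part 0 is released once drained.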
In an exemplary embodiment of the present invention in which the buffers of the buffer space 208 are implemented with first-in first-out buffer memories, also referred to as FIFOs, the FIFO flags can be used for buffer control. In another exemplary embodiment of the present invention, the buffer space can be realised with a random access memory (RAM) module, together with appropriate glue logic for managing sets of read and write pointers.
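A buffer space part realised in RAM with read and write pointers, together with the full and almost-full flags that FIFO implementations expose, might look like the following minimal sketch. The size and almost-full margin are illustrative choices, not values from the patent.

```python
class RamFifo:
    """Minimal ring-buffer sketch of one buffer space part realised in
    RAM, with the flags the buffer control would monitor."""

    def __init__(self, size, almost_full_margin=4):
        self.mem = [None] * size
        self.size = size
        self.rd = self.wr = self.count = 0
        self.margin = almost_full_margin

    def push(self, word):
        if self.full:
            raise OverflowError("buffer space part full")
        self.mem[self.wr] = word
        self.wr = (self.wr + 1) % self.size  # write pointer wraps around
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("buffer space part empty")
        word = self.mem[self.rd]
        self.rd = (self.rd + 1) % self.size  # read pointer wraps around
        self.count -= 1
        return word

    @property
    def full(self):
        return self.count == self.size

    @property
    def almost_full(self):
        # Flag used to trigger rerouting before an overflow occurs.
        return self.count >= self.size - self.margin
```

The `almost_full` flag corresponds to the "approaches a full state" condition that triggers rerouting of the data path.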
Fig. 3 is a block diagram of a buffer arrangement that operates in accordance with an exemplary embodiment of the present invention. The buffer arrangement is generally referred to by the reference number 300. The exemplary buffer arrangement 300 is an implementation of one buffer space part by a FIFO unit 302. The FIFO unit 302 receives data from an input multiplexer 304 and delivers data to an output multiplexer 306. The FIFO unit 302, as well as the input multiplexer 304 and the output multiplexer 306, are controlled by a plurality of control signals received via a control path 308. With the given multiplexers at its input and output, the FIFO unit of the buffer space part can be connected to any of the data paths with which it may be associated.
Fig. 4 is a process flow diagram that shows a method in accordance with an exemplary embodiment of the present invention. The method is generally referred to by the reference number 400. The method 400 relates to digital data transfer in an apparatus comprising two or more mass memory devices and a buffer space encompassing two or more buffer space parts, one of the buffer space parts being allocated, as a current buffer space part, to a data path of each of the mass memory devices. At step 402, the process begins.
At step 404, an association of current buffer space parts and mass memory devices is stored. The skilled person will appreciate that the association may be stored by the buffer control 210 shown in Fig. 2 in a memory space such as the buffer space 208. At step 406, information about unused ones of the buffer space parts is also stored.
At step 408, a data path identified in the association of current buffer space parts and mass memory devices is rerouted when a current buffer space part approaches or reaches a full state by connecting the data path of the allocated mass memory device to a next unused buffer space part. At step 410, the sequence of buffer space parts successively allocated to the mass memory device is stored for each mass memory device in the HDD array 114 currently processing a DMA transfer. At step 412, the process ends.
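The rerouting steps of the method 400 can be walked through with plain data structures; the disk and part identifiers below are invented for the example.

```python
# Step 404: association of current buffer space parts and disks.
current = {"disk_0": "part_0"}
# Step 406: information about unused buffer space parts.
unused = ["part_1", "part_2"]
# Step 410: sequence of parts successively allocated per disk.
sequence = {"disk_0": ["part_0"]}

def on_almost_full(disk_id):
    """Step 408: reroute the disk's data path to the next unused part
    and record the new part in the allocation sequence."""
    new_part = unused.pop(0)
    current[disk_id] = new_part
    sequence[disk_id].append(new_part)

on_almost_full("disk_0")  # latency event: part_0 approaches a full state
```

After the event, disk_0's data path is connected to part_1 while the sequence still records part_0, so the data written into it can later be read out in order.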
The skilled person will appreciate that any of the above-recited features of the present invention may be combined as desired.

Claims

1. A method (400) for digital data transfer in an apparatus (100) comprising two or more mass memory devices (114) and a buffer space (208) encompassing two or more buffer space parts (212, 214), one of the buffer space parts being allocated, as a current buffer space part, to a data path (216) of each of the mass memory devices, the method comprising:
- storing (404) an association of current buffer space parts and mass memory devices; - storing (406) information about unused ones of the buffer space parts;
- rerouting (408) a data path identified in the association of current buffer space parts and mass memory devices when a current buffer space part approaches or reaches a full state by connecting the data path of an allocated mass memory device to a next unused buffer space part; and - storing (410) a sequence of buffer space parts successively allocated to the mass memory device.
2. Method (400) for digital data transfer according to claim 1, comprising initiating a data transfer by setting or inscribing values into a register, the values indicating at least one of a cluster size, a cluster start address or a related command.
3. Method (400) for digital data transfer according to claim 1 or 2, wherein at least one of the two or more buffer space parts (212, 214) comprises a first-in first-out (FIFO) memory (302).
4. Method (400) for digital data transfer according to claim 3, wherein the first-in first-out (FIFO) memory (302) is adapted to receive data from an input multiplexer (304) and to deliver data to an output multiplexer (306).
5. Method (400) for digital data transfer according to claim 1 or 2, wherein at least one of the two or more buffer space parts (212, 214) comprises a random access memory module.
6. Apparatus (100) for digital data transfer comprising two or more mass memory devices (114) and a buffer space encompassing two or more buffer space parts (212, 214), one of the buffer space parts being allocated, as a current buffer space part, to a data path (216) of each of the mass memory devices (114), the apparatus comprising:
- memory means (208) for storing an association of current buffer space parts to mass memory devices, as well as information about unused ones of the buffer space parts; - re-routing means (210) for re-routing the data path of an allocated mass memory device to a next unused buffer space part when a buffer space part approaches or reaches a full state; and
- memory means (208) for storing, for each mass memory device, the sequence of buffer space parts successively allocated to the mass memory device.
7. Apparatus (100) for digital data transfer according to claim 6, wherein a data transfer is initiated by setting or inscribing values into a register, the values indicating at least one of a cluster size, a cluster start address or a related command.
8. Apparatus (100) for digital data transfer according to claim 6 or 7, wherein at least one of the two or more buffer space parts (212, 214) comprises a first-in first-out (FIFO) memory (302).
9. Apparatus (100) for digital data transfer according to claim 8, wherein the first-in first-out (FIFO) memory (302) is adapted to receive data from an input multiplexer (304) and to deliver data to an output multiplexer (306).
10. Apparatus (100) for digital data transfer according to claim 6 or 7, wherein at least one of the two or more buffer space parts (212, 214) comprises a random access memory module.
PCT/EP2008/061456 2007-09-13 2008-09-01 Dynamic buffer allocation system and method WO2009033966A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP07116325.7 2007-09-13
EP07116325 2007-09-13

Publications (1)

Publication Number Publication Date
WO2009033966A1 true WO2009033966A1 (en) 2009-03-19

Family

ID=40040022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2008/061456 WO2009033966A1 (en) 2007-09-13 2008-09-01 Dynamic buffer allocation system and method

Country Status (1)

Country Link
WO (1) WO2009033966A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0545575A1 (en) * 1991-11-29 1993-06-09 AT&T Corp. Multiple virtual FIFO arrangement
US5765023A (en) * 1995-09-29 1998-06-09 Cirrus Logic, Inc. DMA controller having multiple channels and buffer pool having plurality of buffers accessible to each channel for buffering data transferred to and from host computer
US6092127A (en) * 1998-05-15 2000-07-18 Hewlett-Packard Company Dynamic allocation and reallocation of buffers in links of chained DMA operations by receiving notification of buffer full and maintaining a queue of buffers available
EP1645967A1 (en) * 2004-10-11 2006-04-12 Texas Instruments Incorporated Multi-channel DMA with shared FIFO buffer
US20070150683A1 (en) * 2005-12-28 2007-06-28 Intel Corporation Dynamic memory buffer allocation method and system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08803439

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08803439

Country of ref document: EP

Kind code of ref document: A1