US5359568A - FIFO memory system - Google Patents

FIFO memory system

Info

Publication number
US5359568A
US5359568A
Authority
US
United States
Prior art keywords
memory
fifo
blocks
block
management means
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US08/072,643
Inventor
Aviel Livay
Ricardo Berger
Alexander Joffe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NXP USA Inc
Original Assignee
Motorola Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola Inc filed Critical Motorola Inc
Assigned to MOTOROLA, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JOFFE, ALEXANDER; BERGER, RICARDO; LIVAY, AVIEL
Application granted granted Critical
Publication of US5359568A publication Critical patent/US5359568A/en
Assigned to FREESCALE SEMICONDUCTOR, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOTOROLA, INC.
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00: Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06: Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • G06F 5/065: Partitioned buffers, e.g. allowing multiple independent queues, bidirectional FIFO's
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2205/00: Indexing scheme relating to group G06F5/00; Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 2205/06: Indexing scheme relating to groups G06F5/06 - G06F5/16
    • G06F 2205/064: Linked list, i.e. structure using pointers, e.g. allowing non-contiguous address segments in one logical buffer or dynamic buffer space allocation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2205/00: Indexing scheme relating to group G06F5/00; Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 2205/06: Indexing scheme relating to groups G06F5/06 - G06F5/16
    • G06F 2205/066: User-programmable number or size of buffers, i.e. number of separate buffers or their size can be allocated freely

Abstract

This invention relates to a FIFO memory system (10) comprising a plurality of FIFO memories (20) for handling transmission queues in a serial digital communication system. The memory system comprises a plurality of blocks of memory (20a-c, 21a-e), each of the plurality of FIFO memories being assigned a block (20a) of the plurality of blocks of memory, the unassigned blocks of memory forming a block pool (21a-e). The memory system further comprises memory management means (LLT, PT) for adding at least one of the unassigned blocks of memory from the block pool to a FIFO memory on writing to the FIFO memory whereby the size of the FIFO memory is selectably variable, and for returning a block of memory from a FIFO memory to the block pool once the contents of the block of memory have been read.

Description

BACKGROUND OF THE INVENTION
This invention relates to a FIFO memory system and more particularly to a FIFO memory system for use in a serial digital communication system.
In digital communication systems in which two or more processes run concurrently, transmission queues are used so that the processes can efficiently transmit data to a communication line through a serial channel in the system. A common method of managing the transmission queues through the serial channel is to map each queue into one of a plurality of First-In-First-Out (FIFO) memories. The FIFOs are written to or filled by the system and emptied or read from by the communication process, or vice versa.
A problem with this method is implementing a plurality of FIFOs in a limited area. The preferred solution is to utilise RAM based FIFO memories since they appear to require the least area.
The filling rate of the FIFO (i.e. the rate at which data is written to the FIFO) should normally be greater than the emptying rate of the FIFO (i.e. the rate at which data is read from the FIFO) onto the communication line. Typically, the FIFO issues Data Requests to the system any time a danger of underrun exists: underrun means carrying out a read operation from the FIFO when it is empty.
The latency of the system bus carrying the data to be written must be considered in order to determine the minimum size of each FIFO. The latency of the bus is defined as the maximum period of time required by the system to supply the first data to the FIFO after a Data Request has been generated. For limited size memory FIFOs, the maximum latency required is a critical parameter of the system configuration.
If Ls is the system latency (in units of time), Ft is the rate at which the FIFO is emptied by the communication process, and WM is the minimum FIFO size below which Data Requests are generated, then, assuming the FIFO is full when the first data is read by the communication process,
WM=Ls*Ft                                                   (1)
With a FIFO having a size WM, the Data Requests stop once the FIFO is filled to its full size. However, Data Requests will be asserted again as soon as the first data is read by the communication process, in order to avoid an underrun state. Data Requests would therefore be issued virtually all the time.
In order to avoid this situation, a FIFO having a size WM+Delta must be implemented. When a Data Request is sensed by the system, the system will fill the FIFO to its maximum size (WM+Delta). However, the Data Request will be asserted again only after the FIFO is emptied below the WM level.
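As a worked illustration of equation (1) and of the WM+Delta sizing, the short C fragment below computes the request threshold and the resulting FIFO size; the latency and emptying-rate figures are assumptions chosen for the example and are not taken from this specification.

/* Illustrative FIFO sizing from equation (1): WM = Ls * Ft, plus the
 * extra Delta needed so that Data Requests are not issued continuously.
 * All numeric values are assumptions for the example only. */
#include <stdio.h>

int main(void)
{
    double Ls    = 2.0e-6;   /* assumed system (bus) latency: 2 microseconds   */
    double Ft    = 12.5e6;   /* assumed emptying rate: 12.5 Mbytes per second  */
    double WM    = Ls * Ft;  /* equation (1): data drained during the latency  */
    double Delta = WM;       /* the description later takes Delta = WM         */

    printf("WM (request threshold) = %.0f bytes\n", WM);
    printf("FIFO size (WM + Delta) = %.0f bytes\n", WM + Delta);
    return 0;
}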
A queuing system may comprise n different FIFOs for n different queues so that the total memory size MS must be greater than:
MS=(WM1+Delta1)+(WM2+Delta2)+...+(WMn+Deltan)              (2)
Assuming the FIFOs are n similar FIFOs and Delta=WM, equation 2 becomes
MS=(2)(n)(WM)
Therefore,
WM=(MS)/(2n)                                               (3)
From equation (1),
Ls*Ft=(MS)/(2n)                                            (4)
And,
Ls=(MS)/(2n*Ft)                                            (5)
Where Ls is the maximum latency acceptable to the memory system.
Thus, in order to account for the latency of the system bus and to avoid continuous Data Requests being generated, the above solution requires additional FIFO memory: that is, WM+Delta for each FIFO. The serial communication process can access only one FIFO at a time, and only this one FIFO needs to fill up to the size WM+Delta. Since the other FIFOs also occupy WM+Delta of memory when they require only WM, the above solution leaves (n-1)*Delta bytes of memory unused. Thus, large areas of memory are required but only a portion of the memory is used at any one time.
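For example, with n = 4 FIFOs and Delta = WM = 25 bytes (illustrative values only), equation (2) gives MS = 4*(25+25) = 200 bytes, of which (n-1)*Delta = 75 bytes lie idle at any one time.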
SUMMARY OF THE INVENTION
In accordance with the present invention there is provided a FIFO memory system comprising a plurality of FIFO memories for handling transmission queues in a serial digital communication system, the memory system comprising:
a plurality of blocks of memory, each of the plurality of FIFO memories being assigned a block of the plurality of blocks of memory, the unassigned blocks of memory forming a block pool; and
memory management means for adding at least one of the unassigned blocks of memory from the block pool to a FIFO memory on writing to the FIFO memory whereby the size of the FIFO memory is selectably variable, and for returning a block of memory from a FIFO memory to the block pool once the contents of the block of memory have been read.
Thus, in the FIFO memory system in accordance with the present invention, a FIFO has additional memory only when it needs it: small blocks of memory are dynamically allocated to the FIFOs on request.
The present invention therefore provides a method and apparatus by which the size of each of the plurality of FIFOs can be dynamically varied on writing to the FIFO. An advantage of this arrangement is that the latency of the bus system is accounted for and the continual issuance of Data Requests is avoided, while the available memory area is efficiently utilized: only the FIFO which is transmitting uses WM+Delta of memory.
BRIEF DESCRIPTION OF THE DRAWINGS
A FIFO memory system in accordance with the present invention will now be described by way of example only with reference to the accompanying drawings in which:
FIG. 1 shows a communication system incorporating a FIFO memory system in accordance with the present invention;
FIG. 2 shows part of a FIFO memory system in accordance with the present invention;
FIG. 3 shows part of the FIFO memory system in accordance with the present invention during a write operation; and
FIG. 4 shows part of the FIFO memory system in accordance with the present invention during a read operation.
DETAILED DESCRIPTION
A communication system 200 incorporating a FIFO memory system 202 in accordance with a preferred embodiment of the present invention is shown in FIG. 1. Transmission sources 204-207 transmit data onto a system bus 208 which is coupled to the FIFO memory system 202. The FIFO memory system 202 transmits the data to transmission media (not shown) via a serial communication channel 210. The FIFO memory system 202 comprises internal memory 212, comprising a plurality of FIFOs for handling the transmission queues between the transmission sources 204-207 and the transmission media, and a memory management block 214 for managing the data transmission queues.
Referring now also to FIG. 2, the internal memory of the FIFO memory system (only part 10 of which is shown in FIG. 2) comprises a plurality of small blocks of memory: only eight, 1-8, are shown in FIG. 2.
Initially, each one of the plurality of FIFOs is assigned a small block of memory: for example FIFO A of FIG. 2 is assigned block 2 and FIFO B is assigned block 1. In a FIFO memory system having n FIFOs, a predetermined number of the small blocks determined by n will be assigned to the plurality of FIFOs. The remaining blocks 3-8 form a `block pool` of memory from which all the plurality of FIFOs can `borrow` when the FIFO requires additional memory.
As described in the introduction, in order to avoid the continuous issuance of Data Requests during a read operation (that is, when the FIFO is transmitting queued data), the FIFO should have a size of WM+Delta. The present invention allows a transmitting FIFO to borrow the additional memory (Delta) from the `block pool` during a write operation. For example, assuming in this case that Delta is equal to WM, a queued FIFO having a size of WM transmits data and, during the subsequent write operation, the transmitting FIFO increases its size to 2*WM by taking additional blocks from the `block pool`. A FIFO that has been written to but is still queued occupies memory having a size WM.
Blocks are returned to the `block pool` when the block has been emptied during a read operation. The `block pool` thus provides means by which the size of each of the plurality of FIFOs can be dynamically varied.
Preferably, each FIFO is implemented as a linked list of blocks in which each block in the list points to the next block of the FIFO. A Link List Table (LLT) is implemented in the memory management block 214. The Link List Table LLT contains the same number of entries as the number of blocks in the memory and each entry stores the address of the next linked block in memory. Table 1 represents the link list table for the eight blocks 1-8 shown in FIG. 2. The lines on FIG. 2 also represent which blocks are linked.
              TABLE 1                                                     
______________________________________                                    
Block       Next linked block                                             
______________________________________                                    
1           3                                                             
2           4                                                             
3           6                                                             
4           5                                                             
5           8                                                             
6           7                                                             
7           X                                                             
8           X                                                             
______________________________________                                    
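As an illustrative sketch only, the link list table of Table 1 can be pictured as an array indexed by block number, each entry holding the number of the next linked block; the identifier, the unused index 0 and the use of 0 for the `X` (unlinked) entries are assumptions of this sketch rather than details of the specification.

/* The Link List Table of Table 1 as an array: llt_fig2[i] holds the block
 * linked after block i.  Index 0 is unused so that indices match FIG. 2,
 * and 0 stands in for the `X' (no link) entries.  Illustrative only. */
static const unsigned char llt_fig2[9] = {
    0,    /* (index 0 unused)    */
    3,    /* block 1 -> block 3  */
    4,    /* block 2 -> block 4  */
    6,    /* block 3 -> block 6  */
    5,    /* block 4 -> block 5  */
    8,    /* block 5 -> block 8  */
    7,    /* block 6 -> block 7  */
    0,    /* block 7 -> X        */
    0     /* block 8 -> X        */
};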
For each one of the plurality of FIFOs, a Read Pointer (RP) and a Write Pointer (WP) are defined. The location of the Read Pointer and Write Pointer is stored in a pointer table PT in the memory management block 214. The pointer table PT is updated depending on the contents of the link list table.
Each time a block of memory is written to or read from, the corresponding entry in the link list table is read so as to determine the address of the next linked block. The Read Pointer RP or Write Pointer WP is then re-defined according to the address of the next linked block. Thus, the logical connection between memory blocks is implemented via the Link List Table LLT and the pointer table PT which are both controlled by a controller 216.
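In the same illustrative spirit, a pointer table entry reduces to a Read Pointer and a Write Pointer per FIFO, each identifying a block and an offset within it; the structure below is a sketch under that assumption, not the patent's own layout.

/* Illustrative pointer-table (PT) entry: one Read Pointer and one Write
 * Pointer per FIFO.  When a block is filled or emptied, the corresponding
 * LLT entry is read and the pointer's block number is replaced by the
 * address of the next linked block. */
struct block_ptr {
    unsigned char block;    /* number of the block currently pointed to */
    unsigned char offset;   /* position within that block               */
};

struct pt_entry {
    struct block_ptr rp;    /* Read Pointer  (RP) */
    struct block_ptr wp;    /* Write Pointer (WP) */
};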
Read and write operations for one of the plurality of FIFOs in accordance with the present invention will now be described with reference to FIGS. 3 and 4.
A FIFO 20 is initially assigned a block 20a of memory. The Read and Write Pointers for FIFO 20 are defined according to the address of block 20a.
During a write operation, the FIFO block 20a is written to first and once this block has been filled the Write Pointer WP is updated so that it points to the next linked block whose address is stored in the entry for block 20a in the link list table. In the example shown in FIG. 3, the next block is block 20b. Blocks 20b and 20c are written to in an identical manner. Memory blocks 21a-e form part of the block pool.
Once block 20c has been filled, FIFO 20 borrows a block from the block pool according to the entry for 20c in the link list table, whereby the Write Pointer WP points to block 21a of the block pool.
Preferably, the block pool is also implemented as a linked list of blocks having a stack structure as shown in FIG. 3. Thus, any block returned to the pool will be the first block available to a FIFO requiring it during a write operation. NBA (Next Block Available) indicates the top of the stack.
If FIFO 20 requires additional blocks, data will be written to blocks 21b-e in an order which depends on the link list table entries for these blocks.
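The write path and the borrowing of a pool block can be sketched in C as follows. This is a minimal illustration only: the block size, the NO_LINK marker, all identifiers and the simplification of clearing a borrowed block's link are assumptions of the sketch, not details taken from the specification (which, for example, lets the FIFO's tail chain directly into the pool's linked list).

/* Minimal sketch of the write path; not the patent's implementation. */
#include <stdint.h>

#define NUM_BLOCKS 8
#define BLOCK_SIZE 16
#define NO_LINK    0xFF

static uint8_t ram[NUM_BLOCKS][BLOCK_SIZE]; /* the small blocks of memory      */
static uint8_t llt[NUM_BLOCKS];             /* Link List Table: next block     */
static uint8_t nba;                         /* Next Block Available (pool top) */

struct ptr  { uint8_t blk, off; };          /* block number + offset in block  */
struct fifo { struct ptr rp, wp; };         /* one pointer-table (PT) entry    */

/* Pop the top of the block-pool stack and link it after block `tail'. */
static uint8_t borrow_block(uint8_t tail)
{
    uint8_t blk = nba;      /* NBA indicates the top of the stack            */
    nba = llt[blk];         /* the pool now starts at the next pool block    */
    llt[blk]  = NO_LINK;    /* the borrowed block has no successor yet       */
    llt[tail] = blk;        /* link it behind the FIFO's current last block  */
    return blk;
}

/* Write one byte; when the current block fills, follow the linked list,
 * borrowing a block from the pool if the FIFO has run out of links. */
static void fifo_write(struct fifo *f, uint8_t data)
{
    ram[f->wp.blk][f->wp.off++] = data;
    if (f->wp.off == BLOCK_SIZE) {                 /* block 20a, 20b, ... is full  */
        if (llt[f->wp.blk] == NO_LINK)
            f->wp.blk = borrow_block(f->wp.blk);   /* take block 21a from the pool */
        else
            f->wp.blk = llt[f->wp.blk];            /* WP moves to the next linked  */
        f->wp.off = 0;                             /* block given by the LLT entry */
    }
}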
Each one of the plurality of FIFOs uses a predetermined number of the small blocks of memory so as to occupy an area of memory having a size WM which is defined by the user. As discussed above the WM size determines the minimum level below which Data Requests are issued and depends on the latency of the system.
Referring now also to FIG. 4 (like components to those of FIG. 3 are referred to by the same reference numeral plus a hundred), during a read operation data is read from the blocks according to where the Read Pointer is pointing. Data is thus read from block 120a and once this block has been emptied the Read Pointer RP is updated so that it points to the next block of the FIFO according to the entry for block 120a in the link list table. In the example shown in FIG. 4 the next block is block 120b. Data is then read from block 120b. Once block 120a has been emptied, the block becomes part of the block pool and is placed at the top of the stack as indicated by NBA. The entry in the link list table for block 120a is then updated so that its next linked block is the next available block in the pool: that is, block 21b. Thus, block 120a will be the first block from the block pool to be written to during a following write operation.
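The read path, and the return of an emptied block to the top of the pool stack, follows the same pattern; the fragment below continues the write sketch above (same ram, llt, nba and struct fifo) and is equally illustrative.

/* Continuation of the write sketch: read one byte; when a block empties,
 * the Read Pointer follows the LLT and the emptied block becomes the new
 * top of the block-pool stack (NBA). */
static uint8_t fifo_read(struct fifo *f)
{
    uint8_t data = ram[f->rp.blk][f->rp.off++];
    if (f->rp.off == BLOCK_SIZE) {        /* this block has been emptied        */
        uint8_t emptied = f->rp.blk;
        f->rp.blk = llt[emptied];         /* RP moves to the next linked block  */
        f->rp.off = 0;
        llt[emptied] = nba;               /* emptied block now links to the old */
        nba = emptied;                    /* pool top and becomes the new NBA   */
    }
    return data;
}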
The invention recognizes that the communication process can only read one FIFO at a specific time and so only one FIFO at any time requires additional memory of size Delta in order to avoid underrun. Thus, for n FIFOs, the total memory size MS is given by
MS=(WM1+WM2+WM3+...+WMn)+Delta                             (6)
Assuming the FIFOs are n similar FIFOs and Delta=WM, equation 6 becomes
MS=nWM+WM=WM(n+1)                                          (7)
Substituting WM from equation 7 into equation 1 and rearranging gives
Ls=(MS)/[(n+1)*Ft]                                         (8)
Where Ls is the maximum latency acceptable to the memory system.
Thus, it is clear from comparing equations (8) and (5) that, for n>1, the maximum latency supported by the FIFO memory system in accordance with the present invention is greater than the maximum latency supported by the conventional solution described in the introduction. Equivalently, if the latency Ls of the two systems is the same, the total memory size required by the memory system in accordance with the present invention is reduced. Thus, the memory system in accordance with the present invention utilizes the available memory more efficiently.
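For example, with MS = 4096 bytes, Ft = 12.5 Mbytes per second and n = 30 (illustrative values only), equation (5) gives Ls of about 5.5 microseconds for the conventional solution, whereas equation (8) gives about 10.6 microseconds for the present memory system.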
It will be appreciated that the present invention provides a FIFO memory system which optimizes the latency of the bus system. Furthermore, the memory system in accordance with the invention can be relatively easily adapted to system buses having different latencies.
The plurality of FIFOs, the link list table and the pointer table are preferably all implemented in RAM.
Accessing a FIFO (i.e. a write or read operation) requires only one read from the link list table (LLT) and one write to the LLT. The same applies for the pointer table (PT). The LLT and PT are preferably implemented using dual ported RAMs whereby one read and one write can be done during the same memory cycle. This means that the preferred FIFO memory system is capable of updating the tables LLT and PT during the same cycle in which a FIFO is accessed. Thus, managing the FIFOs does not require any wait states and the memory system can be accessed each cycle.
A preferred embodiment of the present invention has been implemented in a FDDI (Fibre Distributed Data Interface) system interface. In this implementation, a memory array comprising 256 blocks of memory supported 30 FIFOs.
Those skilled in the art will recognize that modifications and variations can be made without departing from the spirit of the invention. Therefore, it is intended that this invention encompass all such variations and modifications as fall within the scope of the appended claims.

Claims (8)

We claim:
1. A first-in, first-out (FIFO) memory system comprising a plurality of FIFO memories for handling a predetermined number of transmission queues in a serial communication system, only one of said plurality of FIFO memories transmitting data stored therein at a time, the memory system comprising:
a plurality of blocks of memory, each of the plurality of blocks of memory having an information content, and each of the plurality of FIFO memories being assigned a respective, different block of the plurality of blocks of memory, unassigned blocks of memory forming a block pool; and
memory management means for adding at least one of the unassigned blocks of memory from the block pool to a FIFO memory on writing to said FIFO memory, each of said plurality of FIFO memories having a respective size which is selectively variable when not transmitting queued data up to a predetermined maximum size, and for returning the at least one of the unassigned blocks of memory from said FIFO memory to said block pool once all of said information content of said at least one of the unassigned blocks of memory has been transmitted, said memory management means, in response to determining that any of said plurality of FIFO memories is transmitting queued data, increasing the size of a transmitting FIFO memory to be more than said predetermined maximum size by adding at least one of said unassigned blocks of memory from said block pool to said transmitting FIFO memory in response to writing thereto.
2. The FIFO memory system of claim 1 further comprising m linked blocks of memory, m being an integer, and wherein the memory management means comprises a link table having m entries, each of the m entries being associated with a respective one of said m linked blocks of memory and holding the address of a next block of memory to be linked to the respective one of said m linked blocks of memory.
3. The FIFO memory system of claim 2 wherein the memory management means further comprises a pointer table having a read pointer entry and a write pointer entry for each one of said plurality of FIFO memories, said read pointer entry indicating a next block of the respective FIFO memory which is to be read from and said write pointer entry indicating a next block of the respective FIFO memory which is to be written to, the memory management means updating the read or write pointer entries in dependence on respective block entries in said link table after respectively reading or writing.
4. The FIFO memory system of claim 1 wherein said plurality of FIFO memories and said plurality of blocks of memory are implemented as random access memory (RAM).
5. A communication system comprising:
a plurality of transmission sources;
a bus coupled to receive data transmitted from the plurality of transmission sources;
a first-in, first-out (FIFO) memory system for handling the transmitted data, the FIFO memory system comprising a plurality of FIFO memories for queuing the transmitted data; and
a serial communication channel for serially transmitting data queued in said FIFO memory system, only one of said plurality of FIFO memories transmits queued data to the serial communication channel at a time, the memory system comprising:
a plurality of blocks of memory, each of said plurality of FIFO memories being assigned a respective, different block of said plurality of blocks of memory, unassigned blocks of memory forming a block pool, and
memory management means for adding at least one of said unassigned blocks of memory from said block pool to a FIFO memory on writing data to said FIFO memory whereby the size of each of said plurality of FIFO memories is selectively variable up to a predetermined maximum size, and for returning said at least one of said unassigned blocks of memory from said FIFO memory to the block pool once information content of all of said at least one of said unassigned blocks of memory has been transmitted, said memory management means, in response to determining that any of said plurality of FIFO memories is transmitting queued data, increases the size of a transmitting FIFO memory to be more than said predetermined maximum size by adding at least one of said unassigned blocks of memory from said block pool to said transmitting FIFO memory in response to writing thereto.
6. The communication system of claim 5 further comprising m linked blocks of memory, where m is an integer, and wherein said memory management means comprises a table having m entries, each of said m entries being associated with a respective one of said m linked blocks of memory and holding an address of a next block of memory to be linked to the respective one of said m linked blocks of memory.
7. The communication system of claim 6 wherein said memory management means further comprises a pointer table having a read pointer entry and write pointer entry for each one of the said plurality of FIFO memories, said read pointer entry indicating a next block of a respective FIFO memory which is to be read from and said write pointer entry indicating a next block of a respective FIFO memory which is to be written to, said memory management means updating the read or write pointer entries in dependence on respective block entries in the link table after respectively reading or writing.
8. The communication system of claim 5 wherein said plurality of FIFO memories and said plurality of blocks of memory are implemented as random access memory (RAM).
US08/072,643 1992-06-06 1993-06-03 FIFO memory system Expired - Fee Related US5359568A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9212065A GB2267588B (en) 1992-06-06 1992-06-06 FIFO memory system
GB9212065.8 1992-06-06

Publications (1)

Publication Number Publication Date
US5359568A true US5359568A (en) 1994-10-25

Family

ID=10716694

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/072,643 Expired - Fee Related US5359568A (en) 1992-06-06 1993-06-03 FIFO memory system

Country Status (2)

Country Link
US (1) US5359568A (en)
GB (1) GB2267588B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5563761A (en) * 1995-08-11 1996-10-08 The Whitaker Corporation Transient voltage surge protection assembly for telecommunications lines
US5682494A (en) * 1993-07-05 1997-10-28 Nec Corporation Memory management system and method
US5904731A (en) * 1994-07-28 1999-05-18 Fujitsu Limited Product-sum device suitable for IIR and FIR operations
US5930525A (en) * 1997-04-30 1999-07-27 Adaptec, Inc. Method and apparatus for network interface fetching initial and data burst blocks and segmenting blocks and scheduling blocks compatible for transmission over multiple virtual circuits
US5978893A (en) * 1996-06-19 1999-11-02 Apple Computer, Inc. Method and system for memory management
US6049802A (en) * 1994-06-27 2000-04-11 Lockheed Martin Corporation System and method for generating a linked list in a computer memory
US6230249B1 (en) 1996-06-17 2001-05-08 Integrated Device Technology, Inc. Methods and apparatus for providing logical cell available information in a memory
US6256707B1 (en) * 1995-07-03 2001-07-03 Mitsubishi Denki Kabushiki Kaisha Semiconductor memory device having cache function
US6442646B1 (en) * 1997-04-02 2002-08-27 Matsushita Electric Industrial Co., Ltd. First-in-first-out (FIFO) memory device for inputting/outputting data with variable lengths
WO2003055156A1 (en) * 2001-12-21 2003-07-03 Axiowave Networks Inc, Adressing sequential data packets
US6822967B1 (en) * 1999-04-16 2004-11-23 Fujitsu Limited Relay unit and frame tracing method
US20060026368A1 (en) * 2004-07-30 2006-02-02 Fujitsu Limited Storage device
US20070192576A1 (en) * 2006-02-16 2007-08-16 Moore Charles H Circular register arrays of a computer
US20080270648A1 (en) * 2007-04-27 2008-10-30 Technology Properties Limited System and method for multi-port read and write operations
US20100023730A1 (en) * 2008-07-24 2010-01-28 Vns Portfolio Llc Circular Register Arrays of a Computer
US7904615B2 (en) 2006-02-16 2011-03-08 Vns Portfolio Llc Asynchronous computer communication
US7937557B2 (en) 2004-03-16 2011-05-03 Vns Portfolio Llc System and method for intercommunication between computers in an array
US7966481B2 (en) 2006-02-16 2011-06-21 Vns Portfolio Llc Computer system and method for executing port communications without interrupting the receiving computer

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9725367D0 (en) * 1997-11-28 1998-01-28 3Com Ireland Dynamic memory allocation
US6715001B1 (en) 1999-09-15 2004-03-30 Koninklijke Philips Electronics N.V. Can microcontroller that employs reconfigurable message buffers
GB2382898B (en) 2000-12-29 2005-06-29 Zarlink Semiconductor Ltd A method of managing data
GB0031761D0 (en) * 2000-12-29 2001-02-07 Mitel Semiconductor Ltd Data queues

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4652874A (en) * 1984-12-24 1987-03-24 Motorola, Inc. Serial communication interface for a local network controller
EP0415862A2 (en) * 1989-08-31 1991-03-06 International Business Machines Corporation Optimized I/O buffers
US5047917A (en) * 1985-07-12 1991-09-10 The California Institute Of Technology Apparatus for intrasystem communications within a binary n-cube including buffer lock bit
US5233701A (en) * 1988-03-29 1993-08-03 Nec Corporation System for managing interprocessor common memory

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2676845B1 (en) * 1991-05-23 1993-09-24 Sextant Avionique DEVICE FOR MANAGING MULTIPLE INDEPENDENT WAITING FILES IN A COMMON AND BANALIZED MEMORY SPACE.
EP0522224B1 (en) * 1991-07-10 1998-10-21 International Business Machines Corporation High speed buffer management
US5426639A (en) * 1991-11-29 1995-06-20 At&T Corp. Multiple virtual FIFO arrangement

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4652874A (en) * 1984-12-24 1987-03-24 Motorola, Inc. Serial communication interface for a local network controller
US5047917A (en) * 1985-07-12 1991-09-10 The California Institute Of Technology Apparatus for intrasystem communications within a binary n-cube including buffer lock bit
US5233701A (en) * 1988-03-29 1993-08-03 Nec Corporation System for managing interprocessor common memory
EP0415862A2 (en) * 1989-08-31 1991-03-06 International Business Machines Corporation Optimized I/O buffers

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5682494A (en) * 1993-07-05 1997-10-28 Nec Corporation Memory management system and method
US6049802A (en) * 1994-06-27 2000-04-11 Lockheed Martin Corporation System and method for generating a linked list in a computer memory
US5904731A (en) * 1994-07-28 1999-05-18 Fujitsu Limited Product-sum device suitable for IIR and FIR operations
US6601141B2 (en) 1995-07-03 2003-07-29 Mitsubishi Denki Kabushiki Kaisha Semiconductor memory device having cache function
US6256707B1 (en) * 1995-07-03 2001-07-03 Mitsubishi Denki Kabushiki Kaisha Semiconductor memory device having cache function
US5563761A (en) * 1995-08-11 1996-10-08 The Whitaker Corporation Transient voltage surge protection assembly for telecommunications lines
US6230249B1 (en) 1996-06-17 2001-05-08 Integrated Device Technology, Inc. Methods and apparatus for providing logical cell available information in a memory
US5978893A (en) * 1996-06-19 1999-11-02 Apple Computer, Inc. Method and system for memory management
US6442646B1 (en) * 1997-04-02 2002-08-27 Matsushita Electric Industrial Co., Ltd. First-in-first-out (FIFO) memory device for inputting/outputting data with variable lengths
US5930525A (en) * 1997-04-30 1999-07-27 Adaptec, Inc. Method and apparatus for network interface fetching initial and data burst blocks and segmenting blocks and scheduling blocks compatible for transmission over multiple virtual circuits
US20050025168A1 (en) * 1999-04-16 2005-02-03 Fujitsu Limited Relay unit and frame tracing method
US6822967B1 (en) * 1999-04-16 2004-11-23 Fujitsu Limited Relay unit and frame tracing method
WO2003055156A1 (en) * 2001-12-21 2003-07-03 Axiowave Networks Inc, Adressing sequential data packets
US7937557B2 (en) 2004-03-16 2011-05-03 Vns Portfolio Llc System and method for intercommunication between computers in an array
US20060026368A1 (en) * 2004-07-30 2006-02-02 Fujitsu Limited Storage device
US7353344B2 (en) * 2004-07-30 2008-04-01 Fujitsu Limited Storage device
US7904615B2 (en) 2006-02-16 2011-03-08 Vns Portfolio Llc Asynchronous computer communication
US7617383B2 (en) * 2006-02-16 2009-11-10 Vns Portfolio Llc Circular register arrays of a computer
US20070192576A1 (en) * 2006-02-16 2007-08-16 Moore Charles H Circular register arrays of a computer
US7966481B2 (en) 2006-02-16 2011-06-21 Vns Portfolio Llc Computer system and method for executing port communications without interrupting the receiving computer
US20110185088A1 (en) * 2006-02-16 2011-07-28 Moore Charles H Asynchronous computer communication
US8825924B2 (en) 2006-02-16 2014-09-02 Array Portfolio Llc Asynchronous computer communication
US7555637B2 (en) 2007-04-27 2009-06-30 Vns Portfolio Llc Multi-port read/write operations based on register bits set for indicating select ports and transfer directions
US20080270648A1 (en) * 2007-04-27 2008-10-30 Technology Properties Limited System and method for multi-port read and write operations
US20100023730A1 (en) * 2008-07-24 2010-01-28 Vns Portfolio Llc Circular Register Arrays of a Computer

Also Published As

Publication number Publication date
GB9212065D0 (en) 1992-07-22
GB2267588B (en) 1996-03-20
GB2267588A (en) 1993-12-08

Similar Documents

Publication Publication Date Title
US5359568A (en) FIFO memory system
US5673416A (en) Memory request and control unit including a mechanism for issuing and removing requests for memory access
US5696940A (en) Apparatus and method for sharing first-in first-out memory space between two streams of data
US6470415B1 (en) Queue system involving SRAM head, SRAM tail and DRAM body
EP0273083B1 (en) Non-locking queueing mechanism
US7603496B2 (en) Buffering data during data transfer through a plurality of channels
CA1290073C (en) Move-out queue buffer
US5568443A (en) Combination dual-port random access memory and multiple first-in-first-out (FIFO) buffer memories
EP0840202B1 (en) Dynamic peripheral control of I/O buffers in peripherals with modular I/O
CA2223890A1 (en) Split buffer architecture
EP1006451B1 (en) A DMA transfer device capable of high-speed consecutive access to pages in a memory
US4742446A (en) Computer system using cache buffer storage unit and independent storage buffer device for store through operation
US20060047874A1 (en) Resource management apparatus
EP0374338A1 (en) Shared intelligent memory for the interconnection of distributed micro processors
GB2259592A (en) A selectable width, burstable FIFO
US5339442A (en) Improved system of resolving conflicting data processing memory access requests
CN112311696B (en) Network packet receiving device and method
US6622186B1 (en) Buffer associated with multiple data communication channels
US8108873B1 (en) System for extending an addressable range of memory
US20050041510A1 (en) Method and apparatus for providing interprocessor communications using shared memory
EP0418447B1 (en) Device for controlling the enqueuing and dequeuing operations of messages in a memory
US4878197A (en) Data communication apparatus
CN100530078C (en) Management method of stack buffer area
US5430846A (en) List-based buffering mechanism for buffering data between processes in a data processing system
KR100236517B1 (en) Memory architecture for assigning dual port random access memory

Legal Events

Date Code Title Description
AS Assignment

Owner name: MOTOROLA, INC.

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIVAY, AVIEL;BERGER, RICARDO;JOFFE, ALEXANDER;REEL/FRAME:006591/0203;SIGNING DATES FROM 19930511 TO 19930524

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: FREESCALE SEMICONDUCTOR, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA, INC.;REEL/FRAME:015698/0657

Effective date: 20040404

Owner name: FREESCALE SEMICONDUCTOR, INC.,TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOTOROLA, INC.;REEL/FRAME:015698/0657

Effective date: 20040404

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20061025