EP0886989A1 - Management of data structures - Google Patents

Management of data structures (Gestion de structures de données)

Info

Publication number
EP0886989A1
Authority
EP
European Patent Office
Prior art keywords
list
memory
elements
read
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP97950321A
Other languages
German (de)
English (en)
Inventor
Simon Daniel Brueckheimer
David John Stacey
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nortel Networks Ltd
Original Assignee
Northern Telecom Ltd
Nortel Networks Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GBGB9626752.1A external-priority patent/GB9626752D0/en
Application filed by Northern Telecom Ltd, Nortel Networks Corp filed Critical Northern Telecom Ltd
Publication of EP0886989A1 publication Critical patent/EP0886989A1/fr
Withdrawn legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3081 ATM peripheral units, e.g. policing, insertion or extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/22 Arrangements for sorting or merging computer data on continuous record carriers, e.g. tape, drum, disc
    • G06F 7/24 Sorting, i.e. extracting data from one or more carriers, rearranging the data in numerical or other ordered sequence, and rerecording the sorted data on the original carrier or on a different carrier or set of carriers (sorting methods in general)
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/20 Support for services
    • H04L 49/201 Multicast operation; Broadcast operation
    • H04L 49/203 ATM switching fabrics with multicast or broadcast capabilities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/04 Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q 11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q 11/0478 Provisions for broadband connections

Definitions

  • This invention relates to data structures, and in particular to the management of dynamic data lists within a real-time environment.
  • One particular application of the invention is in updating connection maps in telecommunications systems.
  • a linked list enables the dynamic storage of uniform or non-uniform data items.
  • a number of related data items can be contained within a single list, with each item being linked to the next in the list by a pointer field.
  • the pointer field enables the list to be traversed and the data accessed.
  • new items are always added to the bottom of the list and old items are always removed from the top of the list.
  • a common example of this is a FIFO device. Updating this type of list requires the operations of list concatenation and item removal.
  • One example of an application in which this type of list is used is in the queuing of tasks within an operating system, with tasks being scheduled purely according to their arrival time.
  • a more complex use of a linked list allows for the ordering of all or part of the data information within the list. Updating this type of list requires the ability to add new items or delete existing items at arbitrary positions within the list. Examples of applications in which this type of list is used include the configuration of routing tables, call records, and translation tables in telecommunications systems and the scheduling of tasks, by priority, in computer operating systems.
  • US Patent US 5,274,768 (Traw et al.) describes an interface for Asynchronous Transfer Mode (ATM) networks which uses this kind of linked list manager.
  • International Patent Application WO 95/32596 A1 (Northern Telecom Limited) describes an SDH/ATM interface which uses a chained structure to link the timeslots for a particular ATM virtual circuit.
  • a memory 30 has a plurality of memory locations, which are numbered 1 to 20.
  • a reserved part of the memory, such as location 1, holds a pointer start_ptr to the memory location of the first element in the list.
  • the first element in the list occupies memory location 4. It could however occupy any one of the memory locations.
  • An enlargement shows the contents of memory location 4 in more detail.
  • a data item DATA and a pointer ptr_next are stored at this memory location.
  • the pointer points to the memory location of the next element in the linked list. This has a value "16" which is the memory location where the second element in the list is stored.
  • the memory holds a number of similar elements which are linked to one another in this manner.
  • Figure 1A shows two elements, one stored at memory location 4, and a second at memory location 9.
  • the pointer of the element at location 4 points to the memory location 9.
  • a problem associated with linked lists is that in order to find a particular data item the list must be searched in a systematic manner, beginning at the first element in the list and continuing through the list until the item is found. This makes the search process slow.
  • doubly linked lists are also commonly used. In a doubly linked list, for each data item, pointers are used to indicate both the next element in the list and the previous item in the list. In this manner the list may be traversed in either direction.
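  • As an illustration only (not part of the patent text), the prior-art structure of figures 1 and 1A can be modelled as records that hold a data item and the location of the next element. The Python sketch below assumes this model; only the names start_ptr and ptr_next and the location values follow the figure.

      # Sketch of the prior-art linked list of figure 1 (assumed model).
      # Each occupied memory location holds a data item and ptr_next, the
      # location of the next element in the list.
      memory = {}                     # memory location -> (DATA, ptr_next)
      memory[4] = ("DATA_A", 16)      # first element, at location 4
      memory[16] = ("DATA_B", None)   # second element; None marks the list end
      start_ptr = 4                   # reserved location holding the list start

      def traverse(start):
          # Walk the list from start_ptr, following ptr_next at each element.
          loc = start
          while loc is not None:
              data, ptr_next = memory[loc]
              yield loc, data
              loc = ptr_next

      print(list(traverse(start_ptr)))   # [(4, 'DATA_A'), (16, 'DATA_B')]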
  • the use of linked lists and doubly-linked lists has a number of problems. Firstly the process of configuring and maintaining the lists is complex and thus necessitates its implementation using software. This precludes their use for real-time, high throughput applications. Secondly, the task of updating the linked lists usually requires that the process of accessing the data is interrupted.
  • One known method for updating data lists within real-time hardware systems involves the use of a shadow memory. Firstly, an updated list is configured within the shadow memory and then, secondly, this is swapped with the original memory to complete the update process.
  • Using a shadow memory has a disadvantage of requiring a doubling of the overall memory requirement. It is therefore a particularly expensive option for very high speed real-time applications which demand expensive, high-performance memory devices.
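  • For contrast, the shadow-memory technique described above can be sketched as follows (an assumed model, not taken from the patent): the updated list is built in a second memory of the same size and the two memories are then swapped, which is why the overall memory requirement doubles.

      # Sketch of the prior-art shadow-memory update (assumed model).
      active = [10, 20, 30, 40]        # the list currently being read
      shadow = list(active)            # second, equally sized memory: the cost

      shadow.insert(2, 25)             # build the updated list off-line
      active, shadow = shadow, active  # swap: readers now see the updated list
      print(active)                    # [10, 20, 25, 30, 40]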
  • the present invention seeks to provide an improved method of updating a data table or list which minimises the disadvantages of the prior art methods.
  • a method of reading and updating at least one list of data elements stored in a predetermined sequence of locations of a memory device comprising: - providing a read pointer and a write pointer which can point to respective memory locations;
  • the written elements comprising: elements read by the reading step; new elements to be inserted into the list; the updating having the effect of moving a portion of the list through the sequence of memory locations as the list is read.
  • the moving process avoids the need for a large additional high speed memory, as used in known shadow memory techniques.
  • the method is both simple and requires a minimum of apparatus for its implementation.
  • Performing updates during the normal accessing of the list removes the necessity to interrupt the system during an update period.
  • This technique is ideally suited to cyclic applications where the complete set of data is read sequentially once per cycle of operation. Connection tables for telecommunications applications are one example where this cyclic reading occurs.
  • the order in which the elements of the list are maintained may be significant.
  • the step of updating the list to insert a new element into the list at a particular position comprises moving data elements which follow that position forward through the sequence of memory locations to open a space to fit the new element, and inserting the new element into the space.
  • the step of updating the list to delete an existing item at a particular position in the list comprises moving data elements which follow that position backwards through the sequence of memory locations to overwrite the existing item.
  • the predetermined sequence of memory locations in which the data elements are stored is a sequence of contiguous memory locations. This simplifies the process of stepping between data elements in the list.
  • the technique proposed requires a minimum of control overheads compared to a traditional linked list and its simplicity of operation ensures that very high access and maintenance rates may be achieved in a system which is entirely hardware, or a combination of hardware and a minimum of software. Updating the data can be completely transparent to the normal operation of the memory, i.e. it is not necessary to interrupt the reading access process whilst an update occurs.
  • a set of instructions for updating the data list is stored, the set being arranged according to the order in which the list is accessed. This allows a complete list of updates for the list, which may include the addition and deletion of multiple data items, to be accomplished in a single cycle.
  • the method further comprises:
  • the second memory preferably comprises a FIFO device. It can alternatively comprise a reserved part of the main memory configured to work as a FIFO.
  • the step of deleting an item at a particular position in the list preferably comprises setting the write pointer to lag the read pointer whereby to move the data elements following the delete position backwards through the sequence of memory locations to overwrite the desired data element.
  • the step of inserting a new data element into the list at a particular memory location preferably comprises: reading the data element stored at that location into a second memory; and writing the new element into the memory location and on subsequent accesses reading the data element at the next location into the second memory and writing the oldest data element in the second memory into that next location whereby to move data elements which follow the new element forward through the sequence of memory locations.
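  • The read-pointer/write-pointer behaviour described in the two preceding paragraphs can be sketched in Python as follows. This is an assumed software model of the method, not the patent's hardware implementation: update positions are given here directly as read-pointer locations, whereas the real system locates them with the comparator/logic function by list identity or data value, and the new element is modelled as passing through the FIFO rather than being written directly.

      from collections import deque

      def memory_cycle(mem, updates, eol="EOL"):
          # One complete sequential scan of `mem` (the predetermined sequence of
          # memory locations), applying the ordered updates in place.
          # `updates` maps a read-pointer location to ("insert", new_element) or
          # ("delete",). One read and at most one write occur per access cycle.
          fifo = deque()                # small FIFO for displaced elements
          write = 0                     # write pointer
          for read in range(len(mem)):  # one access cycle per location
              item = mem[read]          # the single read access (done first)
              op = updates.get(read)
              if op and op[0] == "insert":
                  fifo.append(op[1])    # new element goes ahead of the element
                  fifo.append(item)     # just read, which is displaced
              elif op and op[0] == "delete":
                  pass                  # element just read is simply not rewritten
              else:
                  fifo.append(item)
              if fifo:
                  mem[write] = fifo.popleft()   # the single write access
                  write += 1            # after a delete the write pointer lags
          while write < len(mem):
              mem[write] = eol          # locations freed by net deletions
              write += 1                # (the real system uses a spare write)
          return mem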
  • a method for managing a connection map for a telecommunications switch comprising a list of connection instructions for the switch stored in a predetermined sequence of locations within a memory, the method comprising: - providing a read pointer and a write pointer which can point to respective memory locations;
  • the written elements comprising: instructions read by the reading step; new instructions to be inserted into the list; the updating having the effect of moving a portion of the list through the sequence of memory locations as the list is read.
  • the method for managing a connection map forms part of a method of operating a telecommunications switch comprising:
  • the connection instructions define into which payload the buffered data is assembled.
  • the connection map is ordered so as to assemble the payloads in the order in which they will be disassembled at their destination. This avoids the need for a frame store at the destination and avoids the transmission delay which this frame store incurs.
  • the received data can comprise narrowband channels, and the payload can comprise the payload of an ATM cell.
  • the method for managing a connection map can further comprise: - receiving a request for a multicast connection between a source and multiple destinations;
  • the ability to easily configure and manage the connectivity required for multicast narrowband telephony calls is a particularly desirable feature for future networks supporting advanced features such as multi-media and conferencing services.
  • the method for managing a connection map can further comprise:
  • ordering the connection map in this manner makes it easy to compute the size of the payload for each destination.
  • a further aspect of the invention provides an apparatus for managing at least one list of data elements, the apparatus comprising:
  • - a storage device for storing the data elements in a predetermined sequence of memory locations
  • the written elements comprising: elements read by the reading means; new elements to be inserted into the list; the means for updating having the effect of moving a portion of the list through the sequence of memory locations as the list is read.
  • the apparatus can be implemented in hardware alone.
  • a further aspect of the invention provides a telecommunications switch comprising: - an input for receiving data from a plurality of sources;
  • - an output for transmitting the payloads to a plurality of destinations; - a memory for storing a map of connection instructions as at least one list of data elements in a predetermined sequence of locations within the memory;
  • a manager for managing the map comprising:
  • the written elements comprising: elements read by the reading means; new elements to be inserted into the list; the means for updating having the effect of moving a portion of the list through the sequence of memory locations as the list is read.
  • the method is particularly suited to updating connection maps and routing tables used in telecommunications systems and data networks. These applications are described in detail. However, it will be appreciated that the method has other applications, such as in event and process scheduling in realtime operating systems.
  • Figure 1 shows a known method of implementing a linked list
  • Figures 1A to 1C show how lists using the structure of figure 1 are updated
  • Figure 2 schematically shows an arrangement of linked lists within a memory
  • Figure 3 shows the updating of a set of linked lists in memory
  • Figure 4 shows a system for implementing the method shown in figure 3;
  • Figures 5 to 5B show the system of figure 4 in use to update linked lists
  • Figure 6 shows a flow-chart of the method for updating the linked lists
  • Figure 7 shows a telecommunications system in which the method is used
  • Figure 8 shows one way of achieving cross-connectivity across the network of figure 7;
  • Figure 9 shows an alternative way of achieving cross-connectivity across the network of figure 7;
  • Figure 10 shows a system to achieve cross-connectivity in the manner of figure 9;
  • Figures 11 and 11A show the process of updating the connection table used in the system of figure 10;
  • Figure 12 illustrates the concept of multicasting
  • Figures 13 and 13A show the process of updating the connection table used in the system of figure 10 to achieve multicasting.
  • the basic configuration of a set of linked lists in memory is shown in figure 2.
  • the memory can be used to maintain a single data list or can contain multiple linked lists.
  • the total size of the memory defines the maximum limit for the combined length of all data lists.
  • Each list is stored in a predetermined sequence of memory locations.
  • the predetermined sequence of memory locations is a sequence of contiguous memory locations. This avoids fragmentation of the list and obviates the need for the control logic which is required to traverse a traditional linked list structure.
  • all unassigned locations within the memory occur contiguously as a group at the bottom locations of the memory.
  • One means to distinguish the boundary between the linked lists and unassigned memory is to maintain an End-Of-Lists marker EOL at the first free memory location.
  • the additional field could represent an item of information which would otherwise need to be stored, and therefore need not add to the memory requirement.
  • the list number represents the identity of the virtual channel into which each octet is assembled, an item of information which needs to be stored in any case.
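  • The layout described above can be sketched as follows (an assumed encoding; the patent does not prescribe a particular field format). Each occupied location carries its list number alongside the data, and the End-Of-Lists marker sits at the first free location, so the length of every list can be recovered in a single scan.

      # Sketch of several lists held contiguously in one memory (assumed encoding).
      # Each occupied location stores (list_number, data); the list number doubles
      # as the virtual-channel identity, so it costs no extra storage.
      EOL = None                                 # End-Of-Lists marker
      memory = [(1, 334), (1, 17), (2, 8), (2, 91), (3, 5), EOL, EOL, EOL]

      def list_sizes(memory):
          # Count the elements of each list up to the End-Of-Lists marker.
          sizes = {}
          for entry in memory:
              if entry is EOL:
                  break                          # first free location reached
              list_no, _data = entry
              sizes[list_no] = sizes.get(list_no, 0) + 1
          return sizes

      print(list_sizes(memory))                  # {1: 2, 2: 2, 3: 1}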
  • Figure 3 illustrates the processes for updating the lists.
  • the memory is sequentially traversed in the normal manner until the item to be deleted or the position of the item to be inserted is located. An insert or delete operation will occur at this point.
  • the data which follows the insert or delete position is moved one location down in memory to make room for an inserted item, or one location up in memory to close the gap left by a deleted item.
  • in this manner a full update of the linked list structure can be accomplished - the individual linked lists remain contiguous and null locations are still found at the bottom of the memory.
  • the updated data lists are valid for use from the following memory cycle onwards.
  • Block 40 shows the original state of the memory.
  • Block 41 shows the state of the memory following a memory cycle during which a delete operation is performed to remove the item "334" from list 1.
  • Block 42 shows the state of the memory following a further memory cycle during which an additional item "45" is added to linked list 2.
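  • Reusing the memory_cycle sketch given earlier and the assumed memory contents from the previous sketch (the actual contents of block 40 are not reproduced in this text), the two cycles of figure 3 would run as follows; every value other than "334" and "45" is invented for illustration.

      # Two memory cycles in the style of figure 3 (assumed contents).
      memory = [(1, 334), (1, 17), (2, 8), (2, 91), (3, 5), None, None, None]

      # Cycle 1 (block 40 -> 41): delete the item "334" from list 1.
      memory_cycle(memory, {0: ("delete",)}, eol=None)
      print(memory)  # [(1, 17), (2, 8), (2, 91), (3, 5), None, None, None, None]

      # Cycle 2 (block 41 -> 42): add the item "45" to list 2 (here at its end).
      memory_cycle(memory, {3: ("insert", (2, 45))}, eol=None)
      print(memory)  # [(1, 17), (2, 8), (2, 91), (2, 45), (3, 5), None, None, None]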
  • the system comprises a general control entity 54; a memory 52 to store the linked lists (its size equal to the combined maximum lengths of all lists); a memory access mechanism that can perform a single read and a single write operation during each access of the memory (the positions of the read and write pointers may differ); a small FIFO 53 used for intermediate data storage during the shuffle-down operation; and a comparator/logic function 51 to locate the position of each update operation within the memory.
  • the size of FIFO 53 is arbitrary, but places an upper limit on the total number of inserts that can be achieved during each memory cycle. This is because the process of inserting items to the memory temporarily displaces some elements, as will be more fully described later.
  • the memory can conveniently be a standard RAM device. These parts can be conveniently integrated in an ASIC.
  • To perform an update, the control entity must provide the process with the identity of the list (and, for an insert, the new data item). In the simplest case - where new data is always added to the top of a list and old items are always deleted from the bottom of the list - this is sufficient information for the update process to be performed. However, the technique is not limited to the pushing and popping of data onto and off the memory: data items can be inserted or deleted at arbitrary positions within an individual list. In such cases the control entity must also provide the process with sufficient information to allow the position of the insertion/deletion point to be located. This information can take the form of either the relative position within a list (e.g. the 3rd item in list 4) or the value of a data item (the item to be deleted, or the data item currently located at the insertion point).
  • a memory cycle represents one complete sequential scan of the memory contents during which each memory location is inspected and updates are made.
  • an access cycle represents one inspection of one memory location during one such memory cycle.
  • the memory is sequentially accessed in the normal manner until the insertion point is found.
  • the contents of this memory location are transferred to the FIFO for temporary storage before the new data item is written to the memory at this point.
  • the data item in the next memory location is placed into the FIFO before it is overwritten (in the memory) by the data held within the FIFO.
  • This process continues to the end of the memory cycle and at this point all of the data items located after the insertion point will have been shuffled down one position in the memory.
  • the process can be extended to enable multiple inserts to be performed during a single memory cycle.
  • the delete process operates in a similar manner.
  • the memory is sequentially accessed until the item to be deleted is found. At this position the write pointer is set to lag the read pointer by a single location.
  • the data read from the subsequent memory location is written into the previous location (overwriting the item to be deleted). This process is repeated until the end of the memory cycle and at this point all data immediately following the deletion position will be shuffled one location up in memory. It is also possible to extend the process to accommodate multiple deletes within a single memory cycle - for each deletion the write pointer is further decremented from the read pointer by one location thus increasing the shuffle length.
  • insert and delete processes are mutually compatible and it is therefore possible to perform multiple inserts and deletes within one cycle - the only proviso being that the control entity orders the updates according to their insert/delete positions.
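  • As an illustration of combined updates in one cycle (all data values assumed), the memory_cycle sketch above accepts any mix of inserts and deletes, provided the control entity has ordered them by their positions in the scan:

      # One delete and two inserts performed in a single memory cycle
      # (data values assumed for illustration).
      memory = [(1, 3), (1, 7), (2, 4), (2, 9), (3, 6), None, None, None]
      memory_cycle(memory, {1: ("delete",),
                            2: ("insert", (2, 2)),
                            4: ("insert", (3, 1))}, eol=None)
      print(memory)  # [(1, 3), (2, 2), (2, 4), (2, 9), (3, 1), (3, 6), None, None]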
  • A detailed example of the update process is shown in figures 5 to 5B.
  • three linked lists are maintained in a sixteen element memory.
  • three insertions and two deletions are performed. It will be apparent that other combinations of insertions and deletions can be performed.
  • Figure 5 shows the situation at the start of the memory cycle, before any updates have occurred, with the set 50 of update instructions for that cycle arranged in the order in which the updates need to be performed.
  • Figure 5A shows the situation part-way through the cycle, with three remaining updates still to be performed.
  • Figure 5B shows the situation at the end of the cycle.
  • Table 1 describes each step of the example in detail, with the read and write operations and FIFO contents for each of the 16 accesses of the complete memory cycle. In describing the read and write operations the following notation is used:
  • 'list number/data value from memory location', e.g. '1/3 from 1' means that data having the value '3' in list 1 is read from memory location 1.
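  • A per-access trace in the spirit of Table 1 can be produced with the following sketch (the trace format is assumed; memory locations are 0-indexed here and the 'list/data' notation just described is used for the stored values).

      from collections import deque

      def traced_cycle(mem, updates, eol=None):
          # Same shuffle logic as the memory_cycle sketch, with a per-access trace.
          fifo, write = deque(), 0
          for read in range(len(mem)):
              item = mem[read]                      # the single read access
              op = updates.get(read)
              if op and op[0] == "insert":
                  fifo.append(op[1])
                  fifo.append(item)
              elif not (op and op[0] == "delete"):
                  fifo.append(item)
              line = f"access {read}: read {item} from {read}"
              if fifo:
                  value = fifo.popleft()
                  mem[write] = value                # the single write access
                  line += f", write {value} to {write}"
                  write += 1
              print(line + f", FIFO={list(fifo)}")
          while write < len(mem):                   # locations freed by deletions
              mem[write] = eol
              write += 1

      traced_cycle(["1/3", "1/7", "2/4", "2/9", "EOL", "EOL"],
                   {1: ("delete",), 3: ("insert", "2/5")}, eol="EOL")
      # e.g. prints: access 3: read 2/9 from 3, write 2/5 to 2, FIFO=['2/9']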
  • Table 1: Status of the update process during each access cycle
  • One way of repositioning the End-Of-Lists marker after a net deletion is to insert the marker using a spare write access in the following memory cycle. This is needed because the write pointer always lags the read pointer.
  • the control entity ensures that this write can be incorporated into any update due to be performed in the next cycle. In the very rare situation where the immediately following update utilises all available write accesses (i.e. a full shuffle of the memory), the control entity will need to delay this new update by a single cycle.
  • a housekeeping period can be incorporated between successive memory cycles in which 'holdover' write cycles can be performed.
  • where no shuffle is in progress the write access need not be performed and may be used for any other housekeeping task, such as performing any holdover writes from the preceding frame cycle that could occur due to the net delete scenario previously described.
  • the synchronous narrowband traffic (POTS, ISDN BA, ISDN PRA) is carried over a 64 kb/s fabric 70. This terminates at a first inter-working unit (IWU) 71.
  • IWU 71 adapts the narrowband traffic into ATM cells which are then transported across the ATM network 72 to the far-end IWU 73.
  • the far-end IWU de-adapts the narrowband traffic back to the synchronous domain for transport over a further synchronous 64 kb/s fabric.
  • AVJ: Adaptive Virtual Junctor
  • a minimum cell assembly delay can be guaranteed such that there is no need for echo cancellers on each voice circuit. This results in considerable savings in cost and complexity within the network.
  • a second key feature of the AVJ is that bandwidth is allocated to a destination according to its instantaneous traffic load. Therefore, the size of a PVC will be allowed to rise or fall to match the number of voice channels currently connected to the relevant trunk destination. If the traffic falls below the minimum defined channel size padding will be used to maintain the guaranteed cell assembly duration.
  • the ingress function of the AVJ is responsible for the adaptation of narrow-band voice channels into cells prior to transporting the cells across the ATM network 72.
  • the egress function of the AVJ is responsible for the de-adaptation of the narrow-band voice channels from the ATM cells back to the synchronous domain.
  • there needs to be a means for the AVJ to maintain a dynamic connection table to enable the narrow-band, frame-synchronised voice traffic to be packed into and unpacked from the ATM cell payloads.
  • the connection map must be continuously scanned, with each complete scan occurring in a 125 µs frame interval. This precludes the use of a traditional (software-based) linked list structure.
  • Each switching stage consists of a frame store 80, 82, with access to the frame store being controlled by connection maps 81, 83.
  • the frame store performs a time switching function, i.e. the order of loading and unloading can differ. In this manner the ATM cells can be packed and unpacked in an arbitrary order achieving full cross-connectivity whilst enabling the connection tables to be ordered in an arbitrary manner.
  • a problem with this approach is that each switching stage incurs a significant time delay (of 125 µs) and the net effect of double switching is to exceed the stringent narrowband delay budgets specified in today's telecommunication networks.
  • VC: Virtual Channel
  • the payload for VC A is loaded in ascending order - the data in ingress timeslot 1 from frame store 80, followed by the data in ingress timeslot 2, followed by the data in ingress timeslot 6.
  • this data must be delivered to timeslots 2, 1 and 4 respectively.
  • data is not ordered in the payload with respect to the required positions on the egress timeslot structure.
  • a frame store is required at the egress side to provide a level of indirection between the disassembly of the payload and the egress synchronous timeslot structure.
  • the data from ingress timeslot 1 is read from VC payload A and written into the egress frame store 82 at position 2, data from ingress timeslot 2 is written to egress frame store position 1 and finally data from ingress timeslot 6 is written to egress frame store position 4.
  • the frame store constructed in this cycle can be written sequentially to the synchronous backplane structure.
  • the delay can be reduced, and the narrowband delay budget met, by eliminating one of the switching stages.
  • the egress switching stage can be eliminated by assembling the ATM cells (at the ingress function) in the order that they need to be unpacked at the egress function.
  • the narrowband channels can then be directly written to the synchronous backplane without the need for an egress frame store stage 82.
  • the payload for Virtual Channel (VC) 'A' is assembled in the order of the data from ingress timeslots 2, 1, 6. At the egress side there is no requirement for a further frame store and the payloads can be disassembled directly to the synchronous backplane.
  • the egress map thus merely states the VC from which to retrieve the relevant data octet which is to be written to the synchronous backplane.
  • in egress timeslot 1 the data octet from ingress timeslot 2 is read from the payload and written to the backplane; in egress timeslot 2 the data octet from ingress timeslot 1 is read from the payload and written to the backplane; and in egress timeslot 4 the data octet from ingress timeslot 6 is read from the payload and written to the backplane.
  • the ordering of the ingress connection map is therefore dictated by the egress function.
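  • As a sketch of this ordering (data values and structures assumed), the ingress side can assemble the payload so that the egress side unpacks it straight onto the backplane, using the timeslot example of figure 9 (ingress timeslots 2, 1 and 6 delivered to egress timeslots 1, 2 and 4):

      # Sketch of assembling a payload in egress order (assumed data layout).
      ingress_frame = {1: "a", 2: "b", 6: "c"}   # ingress frame store: timeslot -> octet

      # Ingress connection list for VC 'A', already ordered for egress unpacking.
      vc_a_list = [2, 1, 6]                      # ingress timeslots, in unpacking order
      payload_a = [ingress_frame[ts] for ts in vc_a_list]

      # Egress map: for each egress timeslot, the VC to take the next octet from.
      egress_map = {1: "A", 2: "A", 4: "A"}
      cursor = {"A": 0}
      backplane = {}
      for egress_ts in sorted(egress_map):
          vc = egress_map[egress_ts]
          backplane[egress_ts] = payload_a[cursor[vc]]   # no egress frame store
          cursor[vc] += 1

      print(backplane)   # {1: 'b', 2: 'a', 4: 'c'}  (ingress 2->1, 1->2, 6->4)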
  • To set up a new connection, the egress function must therefore signal the order of the new call relative to its other connections. This can be achieved with a minimum of signalling message overhead, as a full signalling handshake is usually performed between the ingress and egress functions to ensure a reliable call set-up procedure.
  • a single new connection could now result in a requirement for a complete re-ordering of the ingress connection table. This would be impractical to achieve, without process interruption, using a conventional manner of updating the connection map.
  • the properties of the dynamic data structure manager described here make it ideal for this process.
  • the configuration and maintenance of the dynamic connection table in the ingress function is therefore one application of the invention.
  • the ingress function includes a frame store 80 into which the synchronous narrowband data from the backplane is stored.
  • a payload RAM 91 is used to store ATM cells whilst they are being constructed.
  • a further memory stores the connection map. Octets are read out of the frame store under the control of the connection map and into the payload RAM.
  • Completed ATM cells are then transported across the ATM network to the egress function where they are stored in the egress payload RAM awaiting disassembly.
  • the ATM cells are disassembled under the control of egress connection table 90.
  • the narrowband channels are directly written to the synchronous TDM backplane.
  • the connection table is configured as a series of contiguous linked lists - one list per trunk destination (identified by its PVC identifier). Each list contains the set of backplane addresses relating to the narrow-band channels currently connected to that destination. The order of the addresses within a list represents the order of unpacking on the egress side.
  • the connection table is addressed sequentially and for each non-empty memory location the address information is used to access a voice channel on the TDM backplane, stored in frame store 80, whilst the linked-list identifier identifies into which payload (channel A, B, C,...) the octet should be placed.
  • because the linked lists are maintained contiguously in memory, it is a simple process for the AVJ to compute the current channel size of each traffic destination as the ATM payload is packed.
  • the current channel size is represented by the sum of the number of connections in a list. If the instantaneous channel size falls below a pre-defined minimum the payload padding process is automatically activated so as to maintain a predetermined transport delay for each of the narrowband circuits carried over the ATM network.
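  • A sketch of one ingress packing pass is given below (all data values are assumed, as are the minimum channel size and the padding value): the connection table is scanned once per 125 µs frame, each entry pulls one octet from the frame store into its virtual channel's payload, the channel sizes fall out of the same scan because each list is contiguous, and short channels are padded.

      # Sketch of one ingress packing pass over the connection table (assumed data).
      MIN_CHANNEL_SIZE = 4        # assumed minimum channel size
      PAD = 0x55                  # assumed padding value

      frame_store = {10: 0xA1, 11: 0xB2, 12: 0xC3, 13: 0xD4}  # backplane addr -> octet
      # Contiguous lists of (VC id, backplane address); None is the EOL marker.
      connection_table = [("A", 11), ("A", 10), ("B", 13), ("B", 12), None, None]

      payloads, sizes = {}, {}
      for entry in connection_table:          # one sequential scan per 125 us frame
          if entry is None:
              break                           # End-Of-Lists marker reached
          vc, addr = entry
          payloads.setdefault(vc, []).append(frame_store[addr])
          sizes[vc] = sizes.get(vc, 0) + 1    # channel size falls out of the scan

      for vc, payload in payloads.items():    # pad short channels to keep the
          while len(payload) < MIN_CHANNEL_SIZE:   # guaranteed assembly delay
              payload.append(PAD)

      print(sizes)     # {'A': 2, 'B': 2}
      print(payloads)  # {'A': [178, 161, 85, 85], 'B': [212, 195, 85, 85]}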
  • the connection table must be updated as new calls are set up or existing calls are released.
  • To add a new call, the egress function must signal the order of the new call within the VC. This order is computed as an offset, e.g. an offset of 0 indicates that the new channel is to be the first channel in the VC.
  • the ingress control entity sorts the updates (there may be multiple requests from the multiple egress AVJs to which it connects) into VC order.
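  • A sketch of this conversion is given below (structures and names are assumed, and it is simplified to the case where the requested offsets do not interact): each request arrives as (VC, offset, new backplane address), the requests are sorted into VC order, and each one is turned into an insert position for the next frame's sequential scan.

      # Sketch of turning (VC, offset) call set-up requests into ordered inserts.
      connection_table = [("A", 11), ("A", 10), ("B", 13), ("B", 12), None, None]
      requests = [("B", 1, 14), ("A", 0, 9)]   # (VC, offset within VC, new address)

      # Sort the requests into VC order, then into offset order within each VC.
      requests.sort(key=lambda r: (r[0], r[1]))

      # Locate where each VC's contiguous list starts in the table.
      starts = {}
      for pos, entry in enumerate(connection_table):
          if entry is not None and entry[0] not in starts:
              starts[entry[0]] = pos

      updates = {}
      for vc, offset, addr in requests:
          updates[starts[vc] + offset] = ("insert", (vc, addr))

      # These positions would then drive the next frame's scan, e.g. via the
      # memory_cycle sketch given earlier.
      print(updates)   # {0: ('insert', ('A', 9)), 3: ('insert', ('B', 14))}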
  • the update of the connection table is illustrated in figures 11 and 11A. The update requests are presented to the connection map at the start of the frame. Figure 11 shows a queue of three update requests comprising two new connections and one existing connection which is to be removed. The updates are arranged in the order in which they are to be performed: two updates to virtual channel A and one to a further virtual channel.
  • As the connection table is accessed sequentially, the updated information is inserted or deleted, with the original connection information being shuffled in memory accordingly. A full update is performed in a single 125 µs frame interval, with the new connection information being valid from the following frame onwards.
  • Figure 11A shows the situation at the end of the memory cycle, with the updates completed. It can be seen that the update process is completely transparent to the normal operation of ATM cell assembly.
  • a further feature of the update process is the ability to easily connect and maintain multicast calls.
  • a multicast call comprises copying one ingress backplane voice channel to several egress backplane connections and is shown conceptually in figure 12.
  • a particular channel may be replicated within a particular payload or to a number of different payloads.
  • the contents of location '1' of the frame store (representing a particular narrowband channel) are copied twice into the payload for virtual channel A, and once into the payload for virtual channel C.
  • To set up a multicast call, the control entity must simply supply the process with the TDM backplane address and the multiple insertion points within the connection table (the call may be replicated both within a single PVC and across multiple PVCs).
  • When the connection table has been updated, each copy of the call occupies one location within the connection map.
  • the operation of providing multicast calls is completely transparent to the rest of the adaptation process.
  • Providing multicast calls by this technique also has the important advantage of not requiring complex octet replication circuitry. This is because data is not copied, but is simply read from the frame store.
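  • A sketch of this (data values assumed, following the example of figures 12 and 13): backplane location 1 appears twice in channel A's list and once in channel C's list, so its octet is read into three payload positions during the normal scan, with no copy circuitry.

      # Sketch of multicast by repeated connection-table entries (assumed data).
      frame_store = {1: 0x7E, 2: 0x12, 3: 0x34}
      connection_table = [("A", 2), ("A", 1), ("A", 1), ("C", 3), ("C", 1), None]

      payloads = {}
      for entry in connection_table:
          if entry is None:
              break                      # End-Of-Lists marker reached
          vc, addr = entry
          payloads.setdefault(vc, []).append(frame_store[addr])

      print(payloads)   # {'A': [18, 126, 126], 'C': [52, 126]}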
  • Figures 13 and 13A show the process of updating the connection table to support multicasting.
  • Figure 13 shows a queue of three update requests, to add the contents of location '1' of the frame store twice to channel A and once to channel C.
  • Figure 13A shows the resulting updated connection table.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention concerns a data structure, such as a telecommunications connection map, comprising at least one list of data elements stored in a predetermined sequence of memory locations (52). The structure is updated by moving a portion of the list through the sequence of memory locations so as to insert a new element, or to delete or change an existing element at a particular position in the structure, while maintaining the order of the list. Elements of the structure are read in the sequence in which they are stored, and the structure is updated during the accessing of the elements by the moving through the sequence of memory locations. One particular application of the present invention is in maintaining a connection map for a telecommunications switch, the map storing connection instructions which define into which ATM cell payload the received narrowband call data is assembled.
EP97950321A 1996-12-23 1997-12-22 Gestion de structures de donnees Withdrawn EP0886989A1 (fr)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GBGB9626752.1A GB9626752D0 (en) 1996-12-23 1996-12-23 Management of data structures
GB9626752 1996-12-23
US08/869,898 US6252876B1 (en) 1996-12-23 1997-06-05 Management of data structures
US869898 1997-06-05
PCT/GB1997/003535 WO1998028940A1 (fr) 1996-12-23 1997-12-22 Gestion de structures de donnees

Publications (1)

Publication Number Publication Date
EP0886989A1 true EP0886989A1 (fr) 1998-12-30

Family

ID=26310718

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97950321A Withdrawn EP0886989A1 (fr) 1996-12-23 1997-12-22 Gestion de structures de donnees

Country Status (4)

Country Link
EP (1) EP0886989A1 (fr)
JP (1) JP2000506658A (fr)
CA (1) CA2241883A1 (fr)
WO (1) WO1998028940A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040254931A1 (en) * 2003-05-29 2004-12-16 Marconi Communications, Inc. Multiple key self-sorting table
US7536428B2 (en) * 2006-06-23 2009-05-19 Microsoft Corporation Concurrent read and write access to a linked list where write process updates the linked list by swapping updated version of the linked list with internal list
JP5445147B2 (ja) * 2010-01-07 2014-03-19 富士通株式会社 リスト構造制御回路

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3515881C2 (de) * 1985-05-03 1995-05-04 Deutsche Aerospace Schaltungsanordnung zum im Vergleichstakt synchronen größenmäßigen Einsortieren einer aktuellen digitalen Wertgröße
US4751675A (en) * 1985-08-19 1988-06-14 American Telephone And Telegraph Company, At&T Bell Laboratories Memory access circuit with pointer shifting network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9828940A1 *

Also Published As

Publication number Publication date
WO1998028940A1 (fr) 1998-07-02
JP2000506658A (ja) 2000-05-30
CA2241883A1 (fr) 1998-07-02

Similar Documents

Publication Publication Date Title
US6252876B1 (en) Management of data structures
US5884297A (en) System and method for maintaining a table in content addressable memory using hole algorithms
US5592476A (en) Asynchronous transfer mode switch with multicasting ability
US6735203B1 (en) Switch arrangement
KR100313411B1 (ko) 전기통신장치와방법
EP0522224B1 (fr) Gestion de mémoire tampon à haute vitesse
Karol et al. Improving the performance of input-queued ATM packet switches
EP0680179A1 (fr) Appareil pour envoi multidestinatoire
JP3177584B2 (ja) パケット交換装置及び同制御方法
EP0471344A1 (fr) Méthode et circuit de mise en forme du trafic
US5600820A (en) Method for partitioning memory in a high speed network based on the type of service
JPH08298522A (ja) 共有メモリ非同期転送モードスイッチ内において使用されるための空間優先度を維持するための選択的押出しシステム及び方法
AU2807992A (en) Packet switch
CA2188882A1 (fr) Architecture de mode de transfert asynchrone et element de commutation
JPH05219098A (ja) フレーム変換方法及び装置
EP0586584A4 (en) A high-performance host interface for atm networks
JP2002538718A (ja) 多重atmセル・キューを管理する方法および装置
EP0609692A2 (fr) Système d'assemblage/désassemblage des cellules ATM
JPH10500545A (ja) 通信システム
JP3735396B2 (ja) Atmセルをマルチキャストする方法と装置
US5475680A (en) Asynchronous time division multiplex switching system
EP0289733A2 (fr) Procédé de commutation pour signaux de voix et de données intégrés
GB2290433A (en) ATM communications system
US5463622A (en) Control unit for the common memory of an ATM node
EP0886989A1 (fr) Gestion de structures de donnees

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB IT NL SE

17P Request for examination filed

Effective date: 19990104

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NORTEL NETWORKS CORPORATION

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NORTEL NETWORKS LIMITED

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: NORTEL NETWORKS LIMITED

17Q First examination report despatched

Effective date: 20030113

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20051126