EP0775346A1 - A network switch

A network switch

Info

Publication number
EP0775346A1
Authority
EP
European Patent Office
Prior art keywords
input
output
buffer
channels
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP95930157A
Other languages
German (de)
French (fr)
Other versions
EP0775346A4 (en)
Inventor
Elon Littwitz
Gabriel Ben-David
Haim Kurz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Brun Heidi M
ORNET DATA COMMUNICATION TECHNOLOGIES Ltd
Original Assignee
Brun Heidi M
ORNET DATA COMMUNICATION TECHNOLOGIES Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brun Heidi M, ORNET DATA COMMUNICATION TECHNOLOGIES Ltd filed Critical Brun Heidi M
Publication of EP0775346A1 publication Critical patent/EP0775346A1/en
Publication of EP0775346A4 publication Critical patent/EP0775346A4/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/382Information transfer, e.g. on bus using universal interface adapter
    • G06F13/385Information transfer, e.g. on bus using universal interface adapter for adaptation of a particular data processing system to different peripheral devices

Abstract

A network switch (19) having minimal latency is provided, which includes a storage buffer (26), fixed delay input means (36, 23, 24) for transferring input data from a plurality of input channels (10) to the storage buffer and fixed delay output means (38, 25, 24) for transferring output data from the storage buffer to a plurality of output channels (10). The switch (19) also has means for associating, within the storage buffer (26), input data from one input channel with at least one output channel thereby converting the input data to output data. The network switch is connectable to a plurality of channels, each of which operates as an input and an output channel.

Description

A NETWORK SWITCH
FIELD OF THE INVENTION
The present invention relates to network switches in general and to Ethernet network switches in particular.
BACKGROUND OF THE INVENTION
Networks of computer workstations are known in the art. There are various protocols for communication on a network, of which the Ethernet protocol is a very popular one. Networks using the Ethernet protocol are known as "Ethernet" networks. Figs. 1A and 1B, to which reference is now made, illustrate two different types of Ethernet network formed of a plurality of workstations 10 connected together.
The Ethernet network of Fig. 1A is a "shared media" network in which the workstations 10 share the bandwidth of the bus 12 which connects them together. Bus 12 is typically implemented in a "hub" or "multiport repeater". In the shared media network of Fig. 1A, those workstations 10 which are actively utilizing the network share its capacity with each other. The fewer the users, the more capacity is available for those who are active. Unfortunately, the opposite is also true; when there are many users, the capacity is shared among all active workstations and thus, each one can only utilize a small portion of the capacity.
To overcome this disadvantage, "switched" Ethernet networks were designed in which a number of conversations are allowed at once. The active conversations share the available capacity and the conversations are switched so that all can get access at some time. A switched network 14 is shown in Fig. 1B in which point-to-point conversations 16 are enabled. The switched network 14 is typically implemented in a network switch.
The Ethernet protocol involves sending "frames" of data, which include destination information therein, from one workstation, for example workstation 10a, to the entire network. Since all workstations continually listen to the network, the destination workstation, for example workstation 10b, can pick up the frame sent to it. If the destination workstation 10b is already talking with a second workstation, such as workstation 10c, workstations 10a and 10b will send "collision" messages to inform the network about their collision. The sending workstation 10a will then resend the message at some later time.
Prior art network switches utilize a central processing unit (CPU) to direct frames to a memory device for storage and to forward the frames to their destination workstation at the appropriate time. The received frame is loaded into the memory and, only once the frame has been completely stored, is it transmitted to the output port. Thus, the time a frame takes from input to output (known in the art as its "latency") is a function of the length of the frame (shorter frames take less time to store in memory) and of the load on the CPU, which can vary considerably.
Two other parameters of a network switch have been defined: the throughput, which is the amount of data which can be carried at one time and is measured in frames/second, and the bandwidth, which is the number of sessions per unit time which can be passed concurrently through the network switch. Bandwidth is a function of the processing power of the CPU for short frames and of the internal architecture for long frames. Throughput is a function of the CPU power.
U.S. Patent 5,274,631 to Bhardwaj describes a network switch which is connected to a plurality of ports. The network switch includes a multiplexer to connect a source port with a destination port. If the destination port is not known, the CPU searches for the destination port.
SUMMARY OF THE PRESENT INVENTION
It is an object of the present invention to provide an improved network switch which has minimal latency.
There is therefore provided, in accordance with a preferred embodiment of the present invention, a network switch which includes a storage buffer, apparatus having fixed delay on input to the storage buffer, apparatus having fixed delay on output from the storage buffer and association apparatus for associating data within the storage buffer whereby the network switch is connectable to a plurality of channels each of which operates as an input and an output channel. The fixed delay input apparatus transfers input data from input channels to the storage buffer with a first fixed delay. The fixed delay output apparatus transfers output data from the storage buffer to the output channels with a second fixed delay. The association apparatus associates, within the storage buffer, input data from one input channel with at least one output channel thereby converting the input data to output data.
Additionally, in accordance with a preferred embodiment of the present invention, the storage buffer includes a plurality of storage spaces and the network switch also includes apparatus for temporarily assigning each storage space to a conversation between only one input channel and at least one output channel. Additionally, there is apparatus for indicating to the fixed delay input means to place the input data from each input channel into the assigned storage space for the corresponding conversation. The storage spaces are preferably first in, first out (FIFO) buffers.
Moreover, in accordance with a preferred embodiment of the present invention, the fixed delay input apparatus includes an input buffer and an internal bus. The input buffer includes separate input buffer spaces each storing the input data from one of the input channels. The internal bus has separate time slots each receiving data from one of the input buffer spaces. The input buffer also includes a header buffer for storing routing and status information regarding the data received from each input channel. Furthermore, in accordance with a preferred embodiment of the present invention, the network switch also includes apparatus for discarding any data for which the routing information is to an unknown destination.
Additionally, in accordance with a preferred embodiment of the present invention, the fixed delay output apparatus includes separate output buffer spaces each storing the output data from one of said output channels.
Moreover, in accordance with a preferred embodiment of the present invention, the input and output data are formed of a portion of a frame.
Furthermore, in accordance with a preferred embodiment of the present invention, the network switch also includes a back pressure controller, activatable once all of the plurality of storage spaces are assigned, for providing either collisions or jam frames on all input channels attempting to start new conversations.
In accordance with a further embodiment of the present invention, a number of network switches can be combined together. In this embodiment, the internal busses are combined while the remaining elements remain separate.
In accordance with a further embodiment of the present invention, the network switch includes a two-way buffer at least having one input and one output first in, first out (FIFO) buffer per channel which is large enough to store one frame portion. The network switch also includes an internal bus which receives frame portions from the input FIFOs, a storage buffer having a multiplicity of storage FIFOs and a switch controller. The internal bus has a timing sequence having a plurality of timing periods of which one timing period is allocated to each input FIFO. The switch controller includes apparatus for temporarily assigning each storage FIFO to collect frame portions from timing periods corresponding to one conversation, of the length of a frame, between one input channel and at least one output channel, wherein not all of said output channels are active at the same time. The switch controller also includes apparatus for transferring the oldest frame portions of each active output channel to its corresponding output FIFO of the two-way buffer for later transfer out to its active output channel. The network switch is connectable to a plurality of channels each of which operates as an input and an output channel.
Furthermore, in accordance with a preferred embodiment of the present invention, the network switch also includes a back pressure controller, activatable once all of said multiplicity of storage FIFOs are active, for providing either collisions or jam frames on all input channels attempting to start new conversations.
Furthermore, in accordance with a preferred embodiment of the present invention, the two-way buffer also includes a header buffer in which is stored routing and status information regarding each frame portion received from the channels.
Additionally, the present invention incorporates the method performed by the network switch described hereinabove, which switches data among a plurality of channels each of which operates as an input and an output channel. The method includes the steps of transferring input data from the input channels to a storage buffer with a first fixed delay, transferring output data from the storage buffer to the output channels with a second fixed delay. The switching method also associates input data from one input channel with at least one output channel, within the storage buffer. In this way, input data is converted to output data.
Furthermore, in accordance with a preferred embodiment of the present invention, the storage buffer includes a plurality of storage spaces. The method also includes the steps of temporarily assigning each storage space to a conversation between only one input channel and at least one output channel and of indicating to said fixed delay input apparatus to place the input data from each input channel into the assigned storage space for the corresponding conversation. Additionally, in accordance with a preferred embodiment of the present invention, the step of transferring input data includes the steps of providing separate input buffer spaces corresponding to each of the input channels, and providing an internal bus having separate time slots each receiving data from one of the input buffer spaces.
Additionally, in accordance with a preferred embodiment of the present invention, the step of transferring output data includes the step of providing separate output buffer spaces corresponding to each of the output channels.
Furthermore, in accordance with a preferred embodiment of the present invention, the step of transferring output data also includes the step of providing collisions on all input channels attempting to start new conversations once all of said plurality of storage spaces are assigned.
Finally, in accordance with a preferred embodiment of the present invention, the step of transferring output data also includes the step of providing jam frames on all input channels attempting to start new conversations once all of said plurality of storage spaces are assigned.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
Figs. 1A and 1B are block diagram illustrations of prior art networks having a multiport repeater (Fig. 1A) and an Ethernet network switch (Fig. 1B);
Fig. 2 is a block diagram illustration of a network switch constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 3 is a timing diagram useful in understanding the operation of the switch of Fig. 2;
Fig. 4 is a schematic illustration of a storage buffer forming part of the switch of Fig. 2;
Fig. 5 is a schematic illustration of an alternative embodiment of the switch of Fig. 2 in which a few such switches are connected together; and
Fig. 6 is a timing diagram of the timing of an internal bus in the alternative switch of Fig. 5.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Reference is now made to Figs. 2 - 4 which illustrate the network switch 19 of the present invention, implemented for the Ethernet protocol. It will be appreciated that the principles of the present invention can also be implemented for other network protocols.
The switch 19 comprises two channel busses 20, each connected to a plurality of workstations 10, or channels, two arbiters 21, each operating in conjunction with one of the channel busses 20, a two-way buffer 22, an internal bus 24, a storage buffer 26 and a switch controller 28. In the example shown in Fig. 2, there are twelve channels, six per channel bus 20.
The two-way buffer 22 comprises an input portion 23 and an output portion 25. The input portion 23 comprises a plurality of input first in, first out (FIFO) buffers 30, one per channel 10, and a header buffer 32 and the output portion 25 comprises a plurality of output FIFO buffers 33, one per channel 10. Twelve input and output FIFOs 30 and 33, respectively, are shown as an example.
In accordance with the present invention, the data to be transferred is not the entire frame, Ethernet or otherwise, as in the prior art, but a portion thereof (herein called a "frame portion") of a predefined size, such as a small percentage of the frame length. It is noted that an Ethernet frame typically includes a destination address (6 bytes), a source address (6 bytes), a payload (46 to 1500 bytes), a length field (2 bytes) and a frame check sum (4 bytes).
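By way of illustration only, the following sketch (in Python) splits a frame into fixed-size frame portions and tags each portion with the kind of per-portion information the arbiters report to the switch controller (channel, position within the frame, first/last flags). The 64-byte portion size and the FramePortion field names are assumptions made for the sketch; the text specifies only that a portion is a small percentage of the frame length.

```python
from dataclasses import dataclass
from typing import List

PORTION_SIZE = 64  # assumed portion size in bytes; the text says only "a small percentage of the frame length"

@dataclass
class FramePortion:
    channel: int     # input channel the portion arrived on
    index: int       # position of this portion within its frame (0-based)
    is_first: bool   # first portion of a new frame
    is_last: bool    # last portion of the frame
    payload: bytes   # the raw bytes of this slice of the frame

def split_frame(channel: int, frame: bytes, portion_size: int = PORTION_SIZE) -> List[FramePortion]:
    """Cut one frame into fixed-size frame portions for transfer over the internal bus."""
    slices = [frame[i:i + portion_size] for i in range(0, len(frame), portion_size)]
    return [
        FramePortion(channel=channel, index=i, is_first=(i == 0),
                     is_last=(i == len(slices) - 1), payload=s)
        for i, s in enumerate(slices)
    ]

if __name__ == "__main__":
    dummy_frame = bytes(6) + bytes(6) + bytes(2) + bytes(300)  # dst, src, length, payload (dummy zero bytes)
    portions = split_frame(channel=1, frame=dummy_frame)
    print(len(portions), "portions of up to", PORTION_SIZE, "bytes")
```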
Each arbiter 21 communicates with its channels 10 in a round robin manner. At any given time, one arbiter 21 is operative for input (the "input arbiter") and one arbiter 21 is operative for output (the "output arbiter"). The input arbiter 21 transfers the frame portions from the channels 10 connected to it to the corresponding input FIFOs 30 of the two-way buffer 22. The output arbiter 21 transfers frame portions from the corresponding output FIFOs 33 of the two-way buffer 22 to the channels 10 connected to the output arbiter. On input, the input arbiter 21 provides the switch controller 28 with information about each frame of each channel. For example, the information might include the status of the channel (i.e. whether or not the channel is available and whether or not it is sending data) and, if there is a frame to be sent, the number of the frame portion within the frame (i.e. the second of 1000) including an indication of whether it is the first, last or middle, and the destination channel of the frame.
The switch controller 28 accesses the network table 39, in which is stored a map of the entire network as it is currently known, to identify whether or not the destination channels of each frame portion are known. If so, the network table 39 extracts the physical port (i.e. channel) to which each destination channel is attached. If the location of the destination is not known, the switch controller 28 will cause the frame portion to either be broadcast to all channels 10 (in a method known in the art as "negative filtering") or discarded (in a method called herein "positive filtering").
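A minimal sketch of this lookup, assuming the network table is a plain address-to-channel dictionary (the text does not specify how the table is organised): a known destination yields the physical channel it is attached to, while an unknown destination is either broadcast to every channel (negative filtering) or discarded (positive filtering).

```python
from typing import Dict, List, Optional

NetworkTable = Dict[str, int]  # assumed shape: destination address -> physical channel number

def route_destination(table: NetworkTable, dest_address: str,
                      all_channels: List[int], positive_filtering: bool) -> Optional[List[int]]:
    """Return the output channels for a destination, or None to discard the frame portion."""
    if dest_address in table:
        return [table[dest_address]]   # known destination: the single channel it is attached to
    if positive_filtering:
        return None                    # unknown destination, positive filtering: discard
    return list(all_channels)          # unknown destination, negative filtering: broadcast

# Illustrative addresses and channel numbers (not from the patent).
table = {"00:aa:bb:cc:dd:01": 3, "00:aa:bb:cc:dd:02": 5}
print(route_destination(table, "00:aa:bb:cc:dd:01", list(range(1, 13)), positive_filtering=True))
print(route_destination(table, "00:aa:bb:cc:dd:ff", list(range(1, 13)), positive_filtering=False))
```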
From the information received, the switch controller 28 then creates a header for the entire set of frame portions (all 12) in the input portion 23 and places the header into header buffer 32.
It will be appreciated that each input FIFO 30 receives input frame portions only from its corresponding channel. Thus, if a channel has no frame portions to send to the network, its corresponding input FIFO 30 will not be filled and the switch controller 28 provides a notation to that effect in the header buffer 32.
Once the two-way buffer 22 is full (i.e. it has received frame portions from each of the channels 10 from both channel busses 20), it transfers the data held therein to a transfer buffer 34 which, in turn, transfers all of the data to the internal bus 24 as follows and as shown in Fig. 3.
Fig. 3 is a timing diagram of the frame portion flow through network switch 19 of the present invention. The timing diagram is divided into three equal length periods, an input period 36, a transfer period 37 and an output period 38, each of X clock cycles, where X is typically 80. The clock cycles are indicated in Fig. 3 as the space between ticks.
During the input period 36, input frame portions are provided from the channels 10 to the input portion 23 of the two-way buffer 22, as described hereinabove. During the transfer period 37, the input frame portions and other data are transferred to the internal bus 24 via the transfer buffer 34. During the output period 38, output frame portions are provided from the output portion 25 of the two-way buffer 22 to their corresponding channels 10, as detailed hereinbelow. It will be appreciated that the three periods 36 - 38 are pipelined such that the ith input period 36 occurs at the same time as the (i-1)th transfer period 37 and the (i-2)th output period 38.
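The pipelining can be pictured with the short sketch below, in which batch i is collected from the channels while batch i-1 sits on the internal bus and batch i-2 is driven out; the stage callbacks and batch labels are stand-ins invented for the illustration.

```python
def run_pipeline(batches, input_stage, transfer_stage, output_stage):
    """Drive the three stages so that, in period i, stage k works on batch i-k."""
    in_flight = [None, None]  # [batch currently in transfer, batch currently in output]
    for i, batch in enumerate(batches + [None, None]):  # two extra periods to drain the pipe
        if in_flight[1] is not None:
            output_stage(i, in_flight[1])    # (i-2)th batch is output
        if in_flight[0] is not None:
            transfer_stage(i, in_flight[0])  # (i-1)th batch is transferred on the internal bus
        in_flight = [batch, in_flight[0]]
        if batch is not None:
            input_stage(i, batch)            # ith batch is collected from the channels

run_pipeline(
    ["A", "B", "C"],
    input_stage=lambda i, b: print(f"period {i}: input    {b}"),
    transfer_stage=lambda i, b: print(f"period {i}: transfer {b}"),
    output_stage=lambda i, b: print(f"period {i}: output   {b}"),
)
```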
During the transfer period 37, the header is provided first, during a predefined number of clock cycles, such as four. In each following clock cycle, one frame portion is provided, in channel order. Thus, the frame portion from channel 1 is provided in the first clock cycle, followed by the frame portion from channel 2, etc. It will be appreciated that the header and frame portions each have their own allocated "time slots" and that the time slots can be of any desired length. If an input FIFO 30 was empty, because its corresponding channel
10 was not sending, the corresponding clock cycle will exist but it will not contain any frame portion. In other words, the frame portion of each channel 10 is transferred to the internal bus 24 regardless of the activity of the other channels. Thus, the network switch 19 of the present invention enables many conversations to occur at the same time.
Once the transfer buffer 34 is emptied (after 16 clock cycles), there is an idle cycle after which the clock cycles of the transfer period 37 are empty.
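The busy part of one transfer period can therefore be laid out as in the sketch below, using the numbers given in the text (four header cycles, one cycle per channel for the twelve-channel example, one idle cycle); the slot labels themselves are invented for illustration.

```python
HEADER_CYCLES = 4    # four clock cycles are allocated to the header
NUM_CHANNELS = 12    # twelve channels in the example of Fig. 2
IDLE_CYCLES = 1      # one idle cycle after the transfer buffer is emptied

def transfer_period_slots():
    """Return the meaning of each busy clock cycle in one transfer period."""
    slots = ["header"] * HEADER_CYCLES
    slots += [f"channel {ch}" for ch in range(1, NUM_CHANNELS + 1)]  # one slot per channel, in channel order
    slots += ["idle"] * IDLE_CYCLES
    return slots

slots = transfer_period_slots()
print(len(slots), "busy cycles:", slots)  # 17 busy cycles; the rest of the 80-cycle period is empty
```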
Storage buffer 26 is detailed in Fig. 4 and comprises a multiplicity of "storage FIFOs" 40, where each storage FIFO 40 is operative to collect a frame of data and 10 storage FIFOs 40a - 40j are shown in Fig. 4. Each storage FIFO 40 stores frame portions of a single frame which are to be sent either to a single channel (creating a point-to-point conversation) or to many channels (creating a point-to-multi-point conversation). For example, the storage FIFOs 40a, 40b and 40c of Fig. 4 are for point-to-point conversations and storage FIFO 40d is for a point-to-multi-point conversation. In other words, each storage FIFO 40 stores a frame from one conversation.
Before entering data into the storage buffer 26, the switch controller 28 receives the header information from one transfer period 37 and from that information, determines in which storage FIFO 40 to place the frame portion provided in each clock cycle following the header clock cycles. If the header information indicates that the frame portion of a channel is the first frame portion of a new frame, the switch controller 28 places the frame portion, labeled 42, into the next empty storage FIFO 40, such as 40e. Later frame portions 42 from the same channel (received after the current transfer period
37) will be placed into the same storage FIFO 40e in order. If the header information indicates that the frame portion of a channel is in the middle of the frame, the switch controller 28 places the frame portion 42 into the storage FIFO 40 currently allocated to that channel for input. As shown in storage FIFOs 40a and 40c, more than one channel can be sending frames to the same channel. Storage FIFO 40a is allocated to channel 1 for sending a frame to channel 3 and storage FIFO 40c is allocated to channel 2 for sending a frame also to channel 3. Since storage FIFO 40a comes before storage FIFO 40c, storage FIFO 40a will be emptied, as described hereinbelow, prior to emptying storage FIFO 40c. However, both storage FIFOs 40a and 40c can be filling at the same time.
Switch controller 28 keeps track of which storage FIFO 40 belongs to which destination channel and the order in which the storage FIFOs 40 with data from the same channel were received. Thus, the switch controller 28 knows the order in which to empty the storage FIFOs 40 storing frames for the same channel. Furthermore, the switch controller 28 knows which storage FIFOs 40 store frames for broadcast conversations. For these frames, switch controller 28 keeps track of the channels to which it has already sent a copy of the broadcast frame or frame portions and only removes the data in the broadcast storage FIFO 40d once all the designated channels have received the data.
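Roughly, and only as a sketch, the controller's bookkeeping could be modelled as below: the first portion of a frame claims the next empty storage FIFO, later portions from the same channel follow it, and the channel-to-FIFO mapping is released when the last portion arrives. The class, method and field names are assumptions; returning a drained FIFO to the free list (which happens only after it has been emptied to the output side) is deliberately not modelled.

```python
from collections import deque, namedtuple

# Minimal stand-in for the per-portion header fields, to keep the sketch self-contained.
Portion = namedtuple("Portion", "channel is_first is_last payload")

class StorageBuffer:
    """Sketch of mapping conversations (one frame each) onto storage FIFOs."""

    def __init__(self, num_fifos=10):
        self.fifos = [deque() for _ in range(num_fifos)]
        self.free = list(range(num_fifos))   # indices of storage FIFOs not yet assigned to a conversation
        self.by_input_channel = {}           # input channel -> storage FIFO it is currently filling

    def place(self, portion):
        """Place one frame portion according to its header information."""
        if portion.is_first:
            if not self.free:
                return False                 # storage full: this is where back pressure would kick in
            fifo_id = self.free.pop(0)       # next empty storage FIFO
            self.by_input_channel[portion.channel] = fifo_id
        else:
            fifo_id = self.by_input_channel[portion.channel]  # FIFO already allocated to this conversation
        self.fifos[fifo_id].append(portion)
        if portion.is_last:
            del self.by_input_channel[portion.channel]        # the conversation stops filling this FIFO
        return True

buf = StorageBuffer(num_fifos=3)
buf.place(Portion(channel=1, is_first=True, is_last=False, payload=b"first portion"))
buf.place(Portion(channel=1, is_first=False, is_last=True, payload=b"last portion"))
print([len(f) for f in buf.fifos])  # [2, 0, 0]
```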
During the idle or non-data clock cycles, switch controller 28 transfers the oldest frame portion 42 in each currently active storage FIFO 40 to the output portion 25 of the two-way buffer 22, via a second transfer buffer
44. The frame portions are stored in the output FIFOs 33 corresponding to the channels 10 to which they are to be sent.
It is noted that there is only one active storage FIFO 40 per output channel 10. For the example of Fig. 4, the oldest frame portions 42 in storage FIFOs 40a, 40b and 40d only would be taken. Those from storage FIFOs 40a and 40b would be placed into the output FIFO 33 corresponding to channels 3 and 5, respectively. The oldest frame portion 42 from storage FIFO 40d (which is a broadcast FIFO) would be provided to the remaining channels 1, 2, 4 and 6-12. Later on, copies of the data in storage FIFO 40d will also be sent to channels 3 and 5.
During the output period 38, the arbiters 21 access each output FIFO 33, in order, and transfer the frame portions stored therein, if any, to the corresponding channel 10 for output. During this time, the network table 39 also learns the addresses of the source channels from the source address fields present within the frame. Once the output period 38 has ended, the next input period 36 begins during which the arbiters 21 transfer frame portions from the channels 10 to the input portion 23 of the two-way buffer 22.
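The output side of the cycle can be sketched as follows: per output channel there is an ordered queue of waiting storage FIFOs, only the head of the queue is active, and one oldest portion is moved per period into that channel's output FIFO. The data structures are assumptions, broadcast bookkeeping is omitted, and as a simplification a storage FIFO is retired as soon as it is empty rather than when its whole frame has passed through.

```python
from collections import deque

def drain_to_output(storage_fifos, active_for_channel, output_fifos):
    """Move the oldest portion of each output channel's active storage FIFO to its output FIFO.

    storage_fifos      : list of deques of frame portions (the storage buffer)
    active_for_channel : dict, output channel -> deque of storage FIFO ids, oldest conversation first
    output_fifos       : dict, output channel -> deque (the output side of the two-way buffer)
    """
    for channel, waiting in active_for_channel.items():
        if not waiting:
            continue
        fifo = storage_fifos[waiting[0]]                  # only one storage FIFO is active per channel
        if fifo:
            output_fifos[channel].append(fifo.popleft())  # oldest frame portion first
        if not fifo:
            waiting.popleft()                             # drained: the next waiting conversation becomes active

# Illustrative setup: two conversations queued for channel 3, one for channel 5.
storage = [deque(["a1", "a2"]), deque(["b1"]), deque(["c1", "c2"])]
active = {3: deque([0, 2]), 5: deque([1])}
out = {3: deque(), 5: deque()}
drain_to_output(storage, active, out)
print(out)  # channel 3 receives "a1", channel 5 receives "b1"
```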
It is noted that, since the arbiters 21 alternately transfer for input and for output, each arbiter 21 works half of each input and output period. This feature enables the present invention to implement a full duplex Ethernet protocol in which, during any one period, both incoming and outgoing frames can be transferred. It will be appreciated that the present invention provides a fixed delay from the input of data through its storage in the storage buffer 26 and from the storage buffer 26 out to the channels 10. The fixed input delay is provided by the fixed allocation of space per channel in the input portion 23 of the two-way buffer 22 and the fixed allocation of time per channel during the transfer period 37. The fixed output delay is provided by the fixed allocation of space per channel in the output portion 25 of the two-way buffer 22. Because the storage in the input portion 23 and the timing on the internal bus 24 are fixedly allocated, the present invention has no need for a standard input buffer which has to be managed. The present invention only needs to manage the storage buffer 26.
However, it will be appreciated that the time through the storage buffer 26 is variable. The variability is a function of the activity of the network and not of the manner in which the storage buffer 26 is designed. For example, if 11 channels all choose to send frames to the twelfth channel, each storage
FIFO 40 will store a frame, or portion thereof, destined for the twelfth channel. Since the twelfth channel can only receive one frame portion at a time, it will receive data only from the currently active storage FIFO 40.
If, on the other hand, 10 channels choose to send to the twelfth channel and one channel chooses to send to the sixth channel, most of the storage FIFOs 40 will be filled with frames, from different channels, waiting to be sent to the twelfth channel. However, one storage FIFO 40 will be filled with a frame to be sent to the sixth channel. During the output period 38, both the twelfth and sixth channels will receive a frame portion. Thus, the total throughput time for a given output channel is a function of how many workstations want to send to it. Furthermore, the backlog of one channel does not generally affect the other conversations on the network, except during times of overload, as described hereinbelow.
If the storage buffer 26 fills up completely, as will happen when one or more channels have a large backlog (the "backlogged channels"), switch controller 28 signals a back pressure controller 50 to instruct any channel newly sending to the switch to stop sending for a while.
For the Ethernet protocol implementation, for each channel sending to the backlogged channels or for any channel trying to start a new conversation (which requires another storage FIFO 40 which is currently unavailable), back pressure controller 50 then forces a collision or sends a jam frame, as are known in the Ethernet protocol, during the "round trip time" or the first 51.2 microseconds of the transmission of the frame. In the case of the collision, the sending channel will resume sending at some, randomly chosen, future time. In the case of the jam frame, as soon as transmission of the jam frame finishes, the sending channel will resume sending.
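As a toy illustration of this decision, the function below models only the simplest case, a channel trying to start a new conversation while the storage buffer is full, and assumes the collision/jam choice is a configuration flag (the text simply offers both options); the real controller acts at the channel signalling level within the round-trip time rather than returning strings.

```python
ROUND_TRIP_TIME_US = 51.2  # the first 51.2 microseconds of the transmission, per the text

def back_pressure_action(storage_full: bool, starts_new_conversation: bool, use_jam: bool) -> str:
    """Decide how the back pressure controller reacts to one incoming transmission."""
    if not (storage_full and starts_new_conversation):
        return "accept"
    if use_jam:
        return "send jam frame (the sender resumes as soon as the jam frame ends)"
    return f"force collision within {ROUND_TRIP_TIME_US} us (the sender retries after a random backoff)"

print(back_pressure_action(storage_full=True, starts_new_conversation=True, use_jam=False))
```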
In accordance with an alternative embodiment of the present invention and as described with reference to Figs. 5 and 6, more than one network switch 19, such as is described hereinabove, can be connected together on the same internal bus 24 to produce a large switch 70.
Fig. 5 illustrates a few network switches 60 of the present invention connected together to form large switch 70, wherein each is connected to its own channels 10. The network switches 60 comprise similar elements and operate in a similar manner to that described hereinabove with respect to network switch 19. However, in this embodiment, the network switches 60 are all connected to the same internal bus 24. Despite this, as will be seen hereinbelow, the fixed delays on input and from the storage buffer are maintained between all of the network switches 60.
The timing of each switch 60 is the same as described hereinabove; however, the timing of the internal bus 24 is different, as illustrated in Fig. 6. In Fig. 6, each switch 60 has its own period 62 during which it can transfer data to the internal bus 24. During period 62a, the first switch 60 provides data. During the second period 62b, the second switch 60 provides data, etc. Each period 62 includes the timing of the transfer period 37 as described hereinabove with respect to Fig. 3. In other words, four clock cycles for the header, one clock cycle per channel, and one idle cycle, per switch 60.
Since there are 80 clock cycles during an entire period and the transfer period of each is 17 cycles long, four network switches 60 can be connected together with some time remaining.
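The arithmetic can be checked with a few lines, using the figures given in the text (an 80-cycle period and 17 busy cycles per switch: four header cycles, twelve channel cycles and one idle cycle); the printed cycle offsets are illustrative, since the text does not fix where each switch's slice begins.

```python
BUS_PERIOD_CYCLES = 80           # X = 80 clock cycles per period
CYCLES_PER_SWITCH = 4 + 12 + 1   # header + one cycle per channel + idle = 17

max_switches = BUS_PERIOD_CYCLES // CYCLES_PER_SWITCH
spare = BUS_PERIOD_CYCLES - max_switches * CYCLES_PER_SWITCH
print(max_switches, "switches fit per bus period, with", spare, "cycles remaining")  # 4 switches, 12 cycles

# Each switch gets its own slice of the period (periods 62a, 62b, ... of Fig. 6).
for k in range(max_switches):
    start = k * CYCLES_PER_SWITCH
    print(f"switch {k + 1}: cycles {start}..{start + CYCLES_PER_SWITCH - 1}")
```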
It will be appreciated that, in the large switch 70, the network table 39 includes in it the switch 60 and the channel number to which each workstation of the entire network belongs. Thus, a channel might be known as the ith channel of the jth switch 60. On input, the arbiters 21 define the destination of each frame portion by the channel and switch of the destination workstation.
On output, the switch controller 28 of each network switch 60 listens to all of the traffic on the internal bus 24 but only transfers into its storage buffer 26 those frame portions which are to be output to its associated channels 10.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow:

Claims

1. A network switch comprising: a. a storage buffer; b. fixed delay input means for transferring input data from a plurality of input channels to said storage buffer with a first fixed delay; c. fixed delay output means for transferring output data from said storage buffer to a plurality of output channels with a second fixed delay; and d. means for associating, within said storage buffer, input data from one input channel with at least one output channel thereby converting said input data to output data,
whereby the network switch is connectable to said plurality of channels, each of which operates as an input and an output channel.
2. A network switch according to claim 1 wherein said storage buffer comprises a plurality of storage spaces and wherein said network switch also comprises means for temporarily assigning each storage space to a conversation between only one input channel and at least one output channel and means for indicating to said fixed delay input means to place the input data from each input channel into the assigned storage space for the corresponding conversation.
3. A network switch according to claim 2 and wherein said storage spaces are first in, first out (FIFO) buffers.
4. A network switch according to claim 1 and wherein said fixed delay input means comprises: a. an input buffer comprising separate input buffer spaces wherein each input buffer space stores said input data from one of said input channels; and b. an internal bus having separate time slots each receiving data from one of said input buffer spaces.
5. A network switch according to claim 4 and wherein said input buffer also comprises a header buffer for storing routing and status information regarding the data received from each of said input channels.
6. A network switch according to claim 4 and also comprising means for discarding any data for which the routing information is to an unknown destination.
7. A network switch according to claim 1 and wherein said fixed delay output means comprises separate output buffer spaces each storing said output data for one of said output channels.
8. A network switch according to claim 1 and wherein said data is formed of a portion of a frame.
9. A network switch according to claim 2 and also comprising a back pressure controller, activatable once all of said plurality of storage spaces are assigned, for providing collisions on all input channels attempting to start new conversations.
10. A network switch according to claim 2 and also comprising a back pressure controller, activatable once all of said plurality of storage spaces are assigned, for providing jam frames on all input channels attempting to start new conversations.
11. A network switch comprising: a. a two-way buffer at least having one input and one output first in, first out (FIFO) buffer per channel large enough to store one frame portion; b. an internal bus which receives said frame portions from said input FIFOs, wherein said internal bus has a timing sequence having a plurality of timing periods of which one timing period is allocated to each input FIFO; c. a storage buffer comprising a multiplicity of storage FIFOs; and d. a switch controller comprising: i. means for temporarily assigning each storage FIFO to collect frame portions from timing periods corresponding to one conversation, of the length of a frame, between one of a plurality of input channels and at least one of a plurality of output channels, wherein not all of said output channels are active at the same time; and ii. means for transferring the oldest frame portions of each active output channel to its corresponding output FIFO of said two-way buffer for later transfer out to its active output channel,
whereby the network switch is connectable to said plurality of channels, each of which operates as an input and an output channel.
12. A network switch according to claim 11 and wherein said two-way buffer also comprises a header buffer in which is stored routing and status information regarding each frame portion received from said channels.
13. A network switch according to claim 11 and also comprising a back pressure controller, activatable once all of said multiplicity of storage FIFOs are active, for providing collisions on all input channels attempting to start new conversations.
14. A network switch according to claim 11 and also comprising a back pressure controller, activatable once all of said multiplicity of storage
FIFOs are active, for providing jam frames on all input channels attempting to start new conversations.
15. A method of switching data comprising the steps of: a. transferring input data from a plurality of input channels to a storage buffer with a first fixed delay; b. transferring output data from said storage buffer to a plurality of output channels with a second fixed delay; and c. associating, within said storage buffer, input data from one input channel with at least one output channel thereby converting said input data to output data,
whereby each of said plurality of channels operates as an input and an output channel.
16. A method according to claim 15 wherein said storage buffer comprises a plurality of storage spaces and wherein said method also comprises the steps of temporarily assigning each storage space to a conversation between only one input channel and at least one output channel and of placing the input data from each input channel into the assigned storage space for the corresponding conversation.
17. A method according to claim 15 and wherein said step of transferring input data comprises the steps of:
a. providing separate input buffer spaces corresponding to each of said input channels; and
b. providing an internal bus having separate time slots each receiving data from one of said input buffer spaces.
18. A method according to claim 15 and wherein said step of transferring output data comprises the step of providing separate output buffer spaces corresponding to each of said output channels.
19. A method according to claim 16 and also comprising the step of providing collisions on all input channels attempting to start new conversations once all of said plurality of storage spaces are assigned.
20. A method according to claim 16 and also comprising the step of providing jam frames on all input channels attempting to start new conversations once all of said plurality of storage spaces are assigned.
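
Claims 4 through 8 describe an input buffer made of per-channel buffer spaces that feed an internal bus with one fixed time slot per input channel, matching per-channel output buffer spaces, and data handled as frame portions. The Python sketch below models only that time-slot discipline; it is not taken from the application, and the class name TdmBus, its method names and the 64-byte frame-portion size are assumptions made for illustration.

from collections import deque

class TdmBus:
    """Toy model of an internal bus whose time slots are fixed, one per
    input channel, so every channel sees the same fixed service delay."""

    def __init__(self, num_channels, portion_size=64):
        self.portion_size = portion_size          # assumed frame-portion size
        # one input buffer space and one output buffer space per channel
        # (output spaces are shown only for symmetry with claim 7)
        self.input_spaces = [deque() for _ in range(num_channels)]
        self.output_spaces = [deque() for _ in range(num_channels)]
        self.slot = 0                              # current bus time slot

    def push(self, channel, data):
        """Store input data from a channel in its own input buffer space,
        sliced into fixed-size frame portions (claim 8)."""
        for i in range(0, len(data), self.portion_size):
            self.input_spaces[channel].append(data[i:i + self.portion_size])

    def rotate(self):
        """Serve one time slot: the slot's input buffer space may place at
        most one frame portion on the bus.  Returns (channel, portion)."""
        channel = self.slot
        self.slot = (self.slot + 1) % len(self.input_spaces)
        space = self.input_spaces[channel]
        return (channel, space.popleft()) if space else (channel, None)

bus = TdmBus(num_channels=4)
bus.push(0, b"x" * 200)                # channel 0 has received 200 bytes
for _ in range(4):                     # one full bus cycle: one slot per channel
    print(bus.rotate())

Because every channel owns its slot regardless of load, the delay between a portion entering an input buffer space and reaching the bus is bounded and fixed, which is what the "fixed delay input means" wording of claims 4 and 7 refers to.
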
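Claims 11 and 12 tie the per-channel FIFOs to a pool of storage FIFOs, each temporarily bound by the switch controller to a single conversation, with the oldest frame portion of every active output channel drained into that channel's output FIFO. The following is a rough model of that assignment and drain logic; the names SwitchController and StorageFifo and the eight-FIFO pool size are assumptions, not details taken from the application.

from collections import deque

class StorageFifo:
    """A storage FIFO temporarily assigned to one conversation
    (one input channel talking to one or more output channels)."""
    def __init__(self):
        self.portions = deque()
        self.conversation = None       # (input_channel, output_channels) or None

class SwitchController:
    def __init__(self, num_storage_fifos=8):
        self.pool = [StorageFifo() for _ in range(num_storage_fifos)]

    def assign(self, input_channel, output_channels):
        """Bind a free storage FIFO to a new conversation; None if the pool
        is exhausted (the condition that triggers claims 13 and 14)."""
        for fifo in self.pool:
            if fifo.conversation is None:
                fifo.conversation = (input_channel, tuple(output_channels))
                return fifo
        return None

    def collect(self, fifo, portion):
        """Collect a frame portion arriving in the conversation's timing period."""
        fifo.portions.append(portion)

    def drain_oldest(self, fifo, output_fifos):
        """Move the oldest stored portion to the output FIFO of every
        output channel belonging to the conversation."""
        if fifo.portions:
            oldest = fifo.portions.popleft()
            for ch in fifo.conversation[1]:
                output_fifos[ch].append(oldest)

ctrl = SwitchController()
out_fifos = {ch: deque() for ch in range(4)}
conv = ctrl.assign(input_channel=0, output_channels=[2])
ctrl.collect(conv, b"portion-1")
ctrl.drain_oldest(conv, out_fifos)
print(out_fifos[2])                    # deque([b'portion-1'])
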
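Claims 9, 10, 13 and 14 all describe a back pressure controller that acts only when every storage space or storage FIFO is already assigned, and that then either forces collisions or sends jam frames on each input channel attempting to start a new conversation. A minimal sketch of that decision follows; force_collision and send_jam_frame are placeholder action labels, not functions defined in the application.

def back_pressure(assigned_flags, requesting_channels, mode="collision"):
    """assigned_flags[i] is True when storage space i is already bound to a
    conversation.  Only when every space is taken is back pressure applied
    to the channels attempting to start new conversations."""
    if not all(assigned_flags):
        return []                      # free storage left: no back pressure
    action = "force_collision" if mode == "collision" else "send_jam_frame"
    return [(action, ch) for ch in requesting_channels]

print(back_pressure([True, True, True], requesting_channels=[1, 3]))
print(back_pressure([True, False, True], requesting_channels=[1, 3]))
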
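Read together, method claims 15 through 20 reduce to three steps: move input data into the storage buffer with one fixed delay, associate it there with a conversation, and move the resulting output data out with a second fixed delay, falling back on collisions or jam frames when no storage space is free. The toy end-to-end model below runs those steps for a single conversation; the channel count, storage size and per-cycle scheduling are assumptions made purely for illustration.

from collections import deque

NUM_CHANNELS = 4                       # assumed channel count
NUM_STORAGE_SPACES = 8                 # assumed storage-buffer size

input_buffers = {ch: deque() for ch in range(NUM_CHANNELS)}
output_buffers = {ch: deque() for ch in range(NUM_CHANNELS)}
storage = [{"conversation": None, "data": deque()}
           for _ in range(NUM_STORAGE_SPACES)]

def open_conversation(src, dests):
    """Claim 16: temporarily assign a free storage space to a conversation
    between one input channel and at least one output channel."""
    for space in storage:
        if space["conversation"] is None:
            space["conversation"] = (src, tuple(dests))
            return space
    return None                        # all spaces assigned: claims 19/20 apply

def step(space):
    """One switching cycle covering steps (a), (c) and (b) of claim 15;
    real hardware would serve each side on its own fixed schedule."""
    src, dests = space["conversation"]
    if input_buffers[src]:                         # (a) input channel -> storage
        space["data"].append(input_buffers[src].popleft())
    if space["data"]:                              # (c)+(b) storage -> output channels
        portion = space["data"].popleft()
        for d in dests:
            output_buffers[d].append(portion)

input_buffers[0].extend([b"p1", b"p2"])
conv = open_conversation(src=0, dests=[3])
for _ in range(3):
    step(conv)
print(list(output_buffers[3]))         # [b'p1', b'p2']
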
EP95930157A 1994-08-14 1995-08-11 A network switch Withdrawn EP0775346A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IL11065794 1994-08-14
IL110657A IL110657A (en) 1994-08-14 1994-08-14 Network switch
PCT/US1995/010256 WO1996005558A1 (en) 1994-08-14 1995-08-11 A network switch

Publications (2)

Publication Number Publication Date
EP0775346A1 true EP0775346A1 (en) 1997-05-28
EP0775346A4 EP0775346A4 (en) 1999-09-22

Family

ID=11066456

Family Applications (1)

Application Number Title Priority Date Filing Date
EP95930157A Withdrawn EP0775346A4 (en) 1994-08-14 1995-08-11 A network switch

Country Status (4)

Country Link
EP (1) EP0775346A4 (en)
AU (1) AU3363995A (en)
IL (1) IL110657A (en)
WO (1) WO1996005558A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5802333A (en) * 1997-01-22 1998-09-01 Hewlett-Packard Company Network inter-product stacking mechanism in which stacked products appear to the network as a single device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274631A (en) * 1991-03-11 1993-12-28 Kalpana, Inc. Computer network switching system
DE69129851T2 * 1991-09-13 1999-03-25 International Business Machines Corp., Armonk, N.Y. Configurable gigabit/s switch adapter
US5241536A (en) * 1991-10-03 1993-08-31 Northern Telecom Limited Broadband input buffered atm switch
US5291482A (en) * 1992-07-24 1994-03-01 At&T Bell Laboratories High bandwidth packet switch

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2603437A1 (en) * 1986-09-02 1988-03-04 Nippon Telegraph & Telephone PACKET SWITCH
EP0312628A1 (en) * 1987-10-20 1989-04-26 International Business Machines Corporation High-speed modular switching apparatus for circuit and packet switched traffic
US5168492A (en) * 1991-04-11 1992-12-01 Northern Telecom Limited Rotating-access ATM-STM packet switch

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO9605558A1 *

Also Published As

Publication number Publication date
IL110657A (en) 1997-07-13
WO1996005558A1 (en) 1996-02-22
AU3363995A (en) 1996-03-07
IL110657A0 (en) 1994-11-11
EP0775346A4 (en) 1999-09-22

Similar Documents

Publication Publication Date Title
JP4334760B2 (en) Networking system
US7023841B2 (en) Three-stage switch fabric with buffered crossbar devices
US7733889B2 (en) Network switching device and method dividing packets and storing divided packets in shared buffer
US7161906B2 (en) Three-stage switch fabric with input device features
EP1045558B1 (en) Very wide memory TDM switching system
JP3322195B2 (en) LAN switch
US6754222B1 (en) Packet switching apparatus and method in data network
JPH02239747A (en) Atm exchange
JPH0653996A (en) Packet switch
JP2000503828A (en) Method and apparatus for switching data packets over a data network
JPH08293877A (en) Communication system
US6697362B1 (en) Distributed switch memory architecture
US20080013548A1 (en) Data Packet Switch and Method of Operating Same
EP0716557A2 (en) Telecommunication system with detection and control of packet collisions
US20020131412A1 (en) Switch fabric with efficient spatial multicast
US20070297437A1 (en) Distributed switch memory architecture
US7151752B2 (en) Method for the broadcasting of a data packet within a switched network based on an optimized calculation of the spanning tree
JP3820272B2 (en) Exchange device
CN113422741B (en) Time-triggered Ethernet switch structure
US7142515B2 (en) Expandable self-route multi-memory packet switch with a configurable multicast mechanism
EP0775346A1 (en) A network switch
EP1065835B1 (en) Packet memory management (PACMAN) scheme
KR100429907B1 (en) Router and routing method for combined unicast and multicast traffic
JP2004023572A (en) Cell switch and cell switching method
JP2001086124A (en) Method for selecting atm cell waiting in queue and circuit device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19970206

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LI LU MC NL PT SE

A4 Supplementary search report drawn up and despatched

Effective date: 19990810

AK Designated contracting states

Kind code of ref document: A4

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LI LU MC NL PT SE

RIC1 Information provided on ipc code assigned before grant

Free format text: 6G 06F 13/12 A, 6G 06F 13/14 B, 6H 04L 12/56 B

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19991129