GB2035755A - Data communication and data processing system - Google Patents

Data communication and data processing system

Info

Publication number
GB2035755A
GB2035755A (Application GB7937039A)
Authority
GB
United Kingdom
Prior art keywords
data
coupled
processor
inputs
modules
Prior art date
Legal status
Granted
Application number
GB7937039A
Other versions
GB2035755B (en)
Current Assignee
International Standard Electric Corp
Original Assignee
International Standard Electric Corp
Priority date
Filing date
Publication date
Application filed by International Standard Electric Corp
Publication of GB2035755A
Application granted
Publication of GB2035755B
Expired


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/54 - Store-and-forward switching systems
    • H04L 12/56 - Packet switching systems


Abstract

A packet switching system comprises a plurality of nodes each with a number of modules. Each of these modules includes a processor and a memory for storing data about virtual calls in which this module is involved. For each virtual call path established between two end nodes recovery data is stored in the memories of two modules of each end node, these recovery data being transmitted by messages normally used to establish the virtual call path. If one of the modules storing the recovery data fails, means are provided for using the recovery data of the other module.

Description

SPECIFICATION

Data communication and data processing system

The present invention relates to a data communication system for switching data in a store-and-forward mode, said system including a plurality of communication nodes each having a plurality of ports and each including a plurality of switching processor modules, each of said switching processor modules of a node being able to establish virtual call paths between at least two ports of said node and to transmit data on these paths, cooperating nodes being able to establish virtual paths between at least two ports belonging to different nodes, and each switching processor module including a processor and a memory for storing data about the virtual calls in which said module is involved.
Such a data communication system is already known from US patent 4,032,899. However, that patent does not describe what happens to the established virtual call paths when a switching processor module involved in these paths becomes faulty.
An object of the present invention is to provide a data communication system of the above type, but which more particularly enables virtual call paths to remain established when a switching processor module involved in these virtual call paths and forming part of an end node becomes faulty.
According to the invention this object is achieved due to the fact that at least one end node involved in a virtual call path includes means to store recovery data about said virtual call path in the memories of at least two switching processor modules, one of these two modules being the module involved in said virtual path, and recovery means to use the recovery data stored in the memory of the other associated switching processor module so that said virtual paths remain established upon said one switching processor module becoming faulty, said recovery data being made available in said end node by the transmission of the control messages required for the establishment of said virtual call path.
The invention is based on the insight that, since control messages have in any case to be transmitted between for instance two end nodes, and more particularly between two switching processor modules, to establish a virtual call path between these nodes, recovery data can be collected from these messages in these nodes, and these messages can advantageously be used to transfer such recovery data from one node to the other. Once the recovery data have been stored in the memory of a module of an end node, they can be copied into the memory of an associated module.
It should be noted that collecting recovery data about each call and storing these data in the memories of two computers is in itself already known from the telephone system of US patent 3,557,315 (S. Kobus et al). However, in the system disclosed therein, each time recovery data are obtained by a computer they are stored in its own memory and transmitted to the memory of the other computer by means of special messages.
Another characteristic feature of the present data communication system is that said switching processor modules are divided into at most n/2 mutually exclusive sets of at least two associated modules.
In this way the recovery data for the various virtual call paths established via the node are distributed over all the switching processor modules of this node.
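The division of the switching processor modules into associated sets can be sketched as follows. This is an illustrative Python sketch only; the adjacent-pairing rule and the module names are assumptions, since the text merely requires at most n/2 mutually exclusive sets of at least two modules.

```python
def pair_modules(modules):
    # Divide the n switching processor modules into at most n/2
    # mutually exclusive sets of two associated modules.
    if len(modules) % 2:
        raise ValueError("an odd module would be left without a partner")
    return [(modules[i], modules[i + 1]) for i in range(0, len(modules), 2)]

# Recovery data held by one module of a set is mirrored in the other.
associated_sets = pair_modules(["PPM1'", "PPM2'", "PPM3'", "PPM4'"])
```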
The present invention also relates to a data processing system including three processors which simultaneously perform identical instructions and which are coupled to at least one memory through a threshold circuit.
Such a data processing system is already known from British patent 1,462,690, wherein use is made of a single memory and a single threshold circuit constituted by a majority decision circuit, so that it only provides security against errors in the processors but not against errors in the memory and in the majority decision circuit.
Another object of the present invention is to provide a data processing system of the above type but with an improved security.
According to the invention this object is achieved due to the fact that said three processors are coupled to the inputs of two threshold decision circuits the outputs of which are coupled to write inputs of respective ones of two memories via respective busses.
In this way a reasonable security is obtained, as two memories are employed, although only two threshold circuits are used. Such threshold circuits indeed have a relatively simple structure and perform relatively simple operations.
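The bit-per-bit decision performed by such a threshold circuit amounts to a majority vote over three equal-width words. The following is an illustrative Python sketch of that vote, not a model of the patent's hardware:

```python
def majority(a, b, c):
    # Bit-per-bit majority decision over three processor outputs:
    # a bit is 1 in the result when at least two inputs carry a 1.
    return (a & b) | (a & c) | (b & c)

# A single deviating processor output is outvoted by the other two.
voted = majority(0b1010, 0b1010, 0b0110)
```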
Finally, the present invention also relates to a data processing system with at least a first and a second processor having access to a common memory, at least said first processor having further access to its own memory.
Such a data processing system is generally known in the computer art and an object of the present invention is to provide a data processing system wherein the first processor is able to co-operate independently with its own memory whilst simultaneously the second processor is able to co-operate with the common memory.
According to the invention this object is achieved due to the fact that said second processor is coupled to said common memory via a bus whilst said first processor has access to said bus via a bus switch, and that the system further includes control means able to selectively grant the use of said bus to said first or second processor and to hereby close and open said bus switch respectively.
In this way the first processor can have access to the bus when the bus switch is closed, but can also operate independently when this bus switch is open, without then disturbing the cooperation between the second processor and the common memory.
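The control means and bus switch can be sketched as follows; the class and method names are illustrative assumptions, not taken from the patent:

```python
class BusControl:
    # Control means that selectively grants the use of the common bus
    # to the first or the second processor; the bus switch is closed
    # only while the first processor holds the grant.
    def __init__(self):
        self.bus_switch_closed = False
        self.owner = None

    def grant(self, processor):
        self.owner = processor
        self.bus_switch_closed = (processor == "first")

ctl = BusControl()
ctl.grant("second")   # switch open: the first processor may meanwhile
                      # work undisturbed with its own memory
ctl.grant("first")    # switch closed: the first processor reaches the
                      # common memory over the bus
```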
The above mentioned and other objects and features of the invention will become more apparent and the invention itself will be best understood by referring to the following description of an embodiment taken in conjunction with the accompanying drawings in which:
Figure 1 is a schematic view of a network wherein the data communication and data processing system according to the invention is used;
Figure 2 is a detailed view of the packet data satellite PDS1 shown in Figure 1;
Figure 3 is a detailed view of the line access module LAM1 represented in Figure 2;
Figure 4 is a detailed view of the packet processing module PPM1' shown in Figure 2;
Figure 5 is a detailed view of the packet switching exchange PSE1 represented in Figure 1;
Figure 6 is a detailed view of the node data base NDB shown in Figure 5;
Figures 7 and 8 are diagrams serving to explain the operation of the data communication system according to the invention.
Referring to Figure 1, the network shown therein includes a plurality of end nodes or packet data satellites PDS1 to PDS7, to each of which a plurality of users each having a data terminal equipment DTE are connected. More particularly, the packet data satellite PDS1 is connected on the one hand to the data terminal equipments DTE11 to DTE1p via lines L11 to L1p respectively, and on the other hand to data terminal equipments DTEn1 to DTEnq via lines Ln1 to Lnq respectively. The packet data satellite PDS6 is more particularly connected to the data terminal equipment DTE61 via line L61.
The shown network further includes a plurality of central nodes or packet switching exchanges PSE1, PSE2 and PSE3, to which the packet data satellites PDS1 to PDS3; PDS4, PDS5; and PDS6, PDS7 are respectively connected. These PSE1, PSE2 and PSE3 are also interconnected. For instance, PDS1 is connected to PSE1 via line L1 and PSE1 is connected to PSE3 via line L. Finally, the network includes a network operation and management centre NOMC to which the packet switching exchanges PSE1 to PSE3 are connected.
It should be noted that the above mentioned lines may have different data transmission rates. The functions of the packet data satellites and of the packet switching exchanges will become clear from the description, given later, of the functions of their constituent parts. The function of the NOMC is to perform supervision and operational management of the network, processing of billing data, etc.
Making reference to Figure 2, the packet data satellite PDS1 represented therein includes a plurality of line access modules LAM1 to LAMn which are connected to the above mentioned lines L11 to L1p, Ln1 to Lnq and L1 respectively. Each of these line access modules is further connected to two module interconnection busses MIBA and MIBB via a respective one of the busses B1 to Bn, via a respective one of the bus control units BCU1 to BCUn and via a respective one of the bus interface circuits BIA1 to BIAn and BIB1 to BIBn.
The PDS1 further includes a plurality of packet processing modules PPM1' to PPMn' which are each connected to the busses MIBA and MIBB via a respective one of the busses B1' to Bn', a respective one of the bus control units BCU1' to BCUn' and a respective one of the bus interface circuits BIA1' to BIAn' and BIB1' to BIBn'. The bus control units BCU1' to BCUn' each include a respective one of the module-out-of-service indicating bistable circuits BS1' to BSn', the 1-inputs of which are connected to the associated packet processing modules and the 1-outputs of which are the lines m1' to mn' respectively.
Finally, the PDS1 includes two bus supervision units BSUA and BSUB which are connected to the module interconnection busses MIBA and MIBB respectively. The BSUA and BSUB each have a plurality of inputs connected to the above mentioned lines m1' to mn' of the bistable circuits BS1' to BSn' respectively. In each of the BSUA and BSUB the lines m1' to mn' are connected to stages of a register RA, RB keeping record of the operative and non-operative states of the various packet processing modules. A read lead ka of the RA and a read lead kb of the RB are connected to a respective packet processing module of two such modules, e.g. to PPM1' and PPMn'. These packet processing modules are those storing a node supervision programme NSF.
The latter modules regularly read the contents of the corresponding register RA, RB to be able to detect faulty modules.
The BSUA further includes a priority circuit PA connected to the MIBA request and grant lines a1 to an of BIA1 to BIAn and a1' to an' of BIA1' to BIAn', whilst the BSUB further includes a priority circuit PB connected to the MIBB request and grant lines b1 to bn of BIB1 to BIBn and b1' to bn' of BIB1' to BIBn'. Each of these lines in fact includes a request conductor and a grant conductor. Each of the priority circuits PA, PB is able to activate a single one of the grant conductors for which the associated request conductor is activated.
It should be noted that if the BIA and BIB circuits are simultaneously informed by both the BSUA and the BSUB that a request is granted, these circuits themselves select only one of these request grants. It should also be noted that in each packet data satellite a packet processing module is associated with a line access module and vice-versa. More particularly, PPM1' to PPMn' are associated with LAM1 to LAMn respectively.
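The behaviour of a priority circuit PA, PB, which activates a single grant conductor among the active request conductors, can be sketched as follows; the fixed-priority order is an assumption, since the text only requires that a single grant be issued:

```python
def grant_one(requests):
    # Activate exactly one grant conductor: the first line whose
    # request conductor is active wins.  The fixed-priority order is
    # an illustrative assumption.
    grants = [False] * len(requests)
    for i, req in enumerate(requests):
        if req:
            grants[i] = True
            break
    return grants

grants = grant_one([False, True, True])   # only one grant is activated
```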
The line access modules LAM1 to LAMn have a similar structure and therefore only LAM1 is represented in relative detail in Figure 3. This module includes:
- a number of line units LU1 to LUp which are connected on the one hand to the above mentioned lines L11 to L1p respectively and on the other hand to the bus B1 leading to the bus control unit BCU1. Each such line unit includes a microprocessor and a direct memory access unit or DMA unit (both not shown);
- a channel buffer memory CBM which is also connected to the bus B1;
- a microprocessor MP1 and its own memory MEM1 which are both connected to the bus B1 via a bus switch BS which is controlled by the output of the bus access control logic BAL. This BAL is connected to the bus B1 request and grant lines u1 to up of the LU1 to LUp and to the line up+1 of the MP1. The BAL is able to grant one of the requests for using the bus B1 and to inform the corresponding requesting unit, LU1-LUp or MP1, that the request is granted. If a request for using B1 is granted to MP1, the bus switch, which is normally open, is closed, thus giving MP1 access to the bus B1. The microprocessor MP1 is also connected to the line units LU1 to LUp via the bus BB.
Referring to Figure 4, the PPM1' shown therein is similar to all other packet processing modules. It includes a microprocessor MP2, a memory MEM2 and timers T which are provided separately because their tasks require much time. These tasks are for instance realising a real time clock and the timing of level 3 of the X.25 protocol of the CCITT. The MEM2 includes e.g. a call block CB, part of which is a recovery call block RCB, and a data base DB.
The functions of the various circuits and units forming part of PDS1 are briefly highlighted hereinafter.
The line access module LAM1 (Figure 3)

Data packets received in one of the line units LU1 to LUp of LAM1 via the corresponding line L11 to L1p are checked under control of the microprocessor included in this line unit, e.g. LU1, and then transferred, if everything is O.K., via the bus B1 to the channel buffer memory CBM by the DMA unit also included in this line unit. This transfer is only possible after the line unit LU1 has requested the use of the bus B1 by activating its request line u1 and if this unit has subsequently been informed by the BAL, via the same line u1, that its request has been granted. The microprocessor MP1 can check via the bus BB connected to the line unit LU1 if the packet received has been stored in the CBM. As a consequence this microprocessor then activates its request line up+1 to request the use of the bus B1 and when the BAL grants this request it closes the bus switch BS. The microprocessor MP1 then reads and checks at least a portion, e.g. the frame information, of the packet stored in the CBM via the bus switch BS and the bus B1. When everything is found OK the MP1 then controls the DMA unit included in the bus control unit BCU1 to which the bus B1 is connected, and this DMA unit then transfers the packet stored in the CBM to PPM1'. This happens after this unit has previously asked the PPM1', by a special message, if this module is ready to accept this packet and has received from this module an affirmative answer.
It is clear that when the bus switch BS is open the microprocessor MP1 can have access to its own memory MEM1, whilst simultaneously transfers can take place between a line unit and the CBM or between the CBM and a packet processing module.
The above is also true for the other line access modules.
The bus control unit BCU1 (Figure 2)

This bus control unit is able to set up on the one hand DMA transfers by means of its DMA unit controlled by the microprocessor MP1, and on the other hand single message transfers.
The same is true for all the other bus control units.
The bus interface circuit BIA1 (Figure 2)

This bus interface circuit has the following functions, amongst many others:
- receiving data from BCU1 and from MIBA;
- checking the parity of these data;
- generating a parity bit for the data received from BCU1;
- requesting the use of the MIBA bus via its request and grant line a1 which is connected to the bus supervision unit BSUA;
- deciding together with BIB1 which bus is to be used.
The above is also true for the other bus interface circuits.
The bus supervision unit BSUA (Figure 2)

In the register RA the BSUA keeps record of the states of the packet processing modules PPM1' to PPMn'.
The module PPM1' regularly reads the contents of this register RA via the read line ka and executes its node supervision programme NSF when a faulty module is detected.
The priority circuit PA, to which the lines a1 to an and a1' to an' are connected, is able to grant the use of the bus MIBA to a single one of the requesting bus interface units.
The above is also true for the bus supervision unit BSUB connected to PPMn' via the read line kb.
The packet processing module PPM1' (Figure 4)

The main function of the PPM1' is to execute level 3 of the X.25 protocol and to create an interface to the network. The PPM1' is able to establish virtual paths and to control the flow of data on these paths. This module is also able to collect billing data for the calls and to send these data, at call clearing, to the packet switching exchange PSE1. Among the various programmes the PPM1' is able to execute, the following can be mentioned: the virtual call handling programme and the call recovery task.
The above is also true for the other packet processing modules. But as already mentioned above, two predetermined modules PPM1' and PPMn' are able to execute a so-called node supervision programme NSF.
Referring now to Figure 5, the packet switching exchange PSE1 shown therein is built up in a similar way as PDS1 but, moreover, includes a node data base NDB and a node supervision module NSM which are each connected to the MIBA and the MIBB via a corresponding common bus control unit BCU and an individual bus interface circuit BIA, BIB, the latter units being similar to those forming part of a packet data satellite.
The PPM1 to PPMt shown have the same structure as the PPM1' to PPMn' of the PDS1 but, contrary to the latter, they mainly have routing functions. Also, the PPM1 to PPMt are not associated with a respective one of the LAMa to LAMz; they are indeed used with a circular priority. A further difference is that the ka and kb outputs of the BSUA, BSUB are now connected to the NSM.
The node data base NDB is represented in more detail in Figure 6. Its purpose is to provide fast access to data concerning, amongst others, users coupled via PDS1 to PDS3 to PSE1. The NDB is active in the sense that in addition to storing information, it possesses processing capabilities enabling it to provide answers to questions about data. Since these data contain billing information strenuous precautions are taken to prevent loss or corruption of data in the NDB.
To that end the NDB includes three microprocessors CPU1 to CPU3 which are micro-synchronized by a same clock unit CLU. The outputs of the processors CPU1 to CPU3 are connected to each of two majority decision (or threshold) circuits MLC1 and MLC2 having their outputs connected to the write input of a respective memory of two like memories MEM3 and MEM4. The read outputs of these memories MEM3 and MEM4 are connected to the A-inputs and the B-inputs of three multiplexers MUX1 to MUX3 respectively, used as change-over devices. These 2-input/1-output multiplexers each have a control input, and all these control inputs are connected to the 1-output c of a scale-of-two counter BS having 0-output c̄. The data inputs A1 to A3 of the multiplexers MUX1 to MUX3 are each connected to a corresponding parity checking circuit PCC1 to PCC3, whilst the data inputs B1 to B3 of these multiplexers are each connected to a corresponding parity checking circuit PCC4 to PCC6. The outputs of the PCC1 to PCC3 are connected to an OR-gate M1 provided with an output a and an inverse output ā obtained through an inverter I1. Likewise the outputs of the PCC4 to PCC6 are connected to an OR-gate M2 provided with an output b and an inverse output b̄ obtained via an inverter I2. The outputs a, b̄ and c̄ are connected to the inputs of AND-gate G1, whilst the outputs ā, b and c are connected to the inputs of AND-gate G2. The outputs of these AND-gates are connected to the input of the scale-of-two counter BS via OR-gate M3.
The operation of the NDB just described is as follows.
Each instruction is performed simultaneously in the three processors CPU1 to CPU3 for security reasons.
When the data block obtained after such an operation has to be written or stored in memory, each processor adds a calculated parity bit to its own data block and supplies this block simultaneously with the data blocks of the other processors to both the majority logic circuits MLC1 and MLC2. In each of these circuits MLC1 and MLC2 the three data blocks provided by the three processors are compared on a bit per bit basis and the resultant bit is each time inscribed in the corresponding memory MEM3, MEM4.
By the use of three processors, two majority logic circuits and two memories a sufficient security is obtained. Indeed, by the use of three processors which are relatively complicated units and which execute relatively complex operations one is substantially sure that at least two of them will provide the same result.
On the other hand, two majority logic circuits and two memories are considered to be sufficient because these units are of a relatively simple structure and perform relatively simple operations, and because each majority logic circuit is connected to its associated memory via an individual bus so that a faulty condition of such a bus does not affect the operation of the other memory.
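The write path just described can be modelled as follows. This is an illustrative Python sketch; the even-parity convention and the word layout (parity bit appended as the least significant bit) are assumptions:

```python
def parity_bit(word):
    # even parity over the data bits (the parity convention is assumed)
    return bin(word).count("1") % 2

def write_cycle(out1, out2, out3):
    # Each processor appends its calculated parity bit to its data
    # block; both majority logic circuits then vote bit per bit over
    # the three extended words, and each circuit writes the result
    # into its own memory over its own individual bus.
    def extend(word):
        return (word << 1) | parity_bit(word)
    a, b, c = extend(out1), extend(out2), extend(out3)
    voted = (a & b) | (a & c) | (b & c)
    return voted, voted   # contents written to MEM3 and MEM4

# A deviating third processor (0b111) is outvoted by the other two.
mem3_word, mem4_word = write_cycle(0b101, 0b101, 0b111)
```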
When data have to be transferred from the memories MEM3 and MEM4 to the processors, both these memories supply these data in parallel to the inputs A1, A2, A3 and B1, B2, B3 of the multiplexers MUX1, MUX2 and MUX3 respectively. When it is supposed that the scale-of-two counter BS is in its 0-condition, wherein its c-output is de-activated (or at 0), only the data supplied to the inputs A1, A2, A3 will be transferred to the processors CPU1, CPU2, CPU3 respectively. However, if the c-output of BS is activated, the data supplied to the inputs B1, B2, B3 will be transferred to these processors.
At the inputs A1, A2, A3, B1, B2, B3 of the multiplexers the parity of the data supplied thereat is checked in the parity checking circuits PCC1 to PCC6 respectively, and when a parity error is detected the corresponding output is activated, so that the output a, b of the corresponding OR-gate M1, M2 is then also activated.
It is now supposed that BS is in the condition wherein c is de-activated (or c̄ activated) and that a parity error is detected on the data supplied to A1, A2 or A3, due to which the output a of M1 is activated. As a consequence, if b̄ is activated (meaning that the parity of the B inputs is correct), the scale-of-two counter BS will be brought into its 1-condition wherein its output c is activated. Due to this the inputs B1 to B3 of the multiplexers are then connected to the processors CPU1 to CPU3 respectively. By proceeding in this way one is safeguarded against errors in the memories MEM3 and MEM4.
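The read-side switch-over just described can be sketched as follows; this is an illustrative Python model, and the memory layout and names are assumptions:

```python
def parity_bit(word):
    # even parity over the data bits (the parity convention is assumed)
    return bin(word).count("1") % 2

class ReadPath:
    # Two memory copies, a parity check per copy, and a scale-of-two
    # counter c selecting which copy feeds the processors.
    def __init__(self, mem_a, mem_b):
        self.copies = [mem_a, mem_b]   # (data words, stored parity bits)
        self.c = 0                     # 0 selects the A inputs, 1 the B inputs

    def read(self, addr):
        errors = [parity_bit(data[addr]) != par[addr]
                  for data, par in self.copies]
        # Toggle only when the selected copy fails parity while the
        # other copy is correct (the role of gates G1, G2 and M3).
        if errors[self.c] and not errors[1 - self.c]:
            self.c ^= 1
        return self.copies[self.c][0][addr]

faulty = ([0b101], [1])   # stored parity disagrees: simulated memory error
good = ([0b101], [0])     # 0b101 carries two 1-bits, so even parity is 0
path = ReadPath(faulty, good)
word = path.read(0)       # the counter switches over to the good copy
```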
In the following is described on the one hand the establishment of a virtual communication path and on the other hand the recovery procedure executed when a PPM involved in this path becomes faulty.
Reference is hereby mainly made to the diagrams of Figures 7 and 8.
First it is explained how such a virtual communication path is established between a calling data terminal equipment DTE11 and a called data terminal equipment DTE61 (Figure 1).
The DTE11 initiates a call by sending a call request packet CRP to the packet data satellite PDS1 to which DTE11 is connected via the line L11. This CRP includes the following useful data:
- SLCN : the number of the logical channel used in the source, i.e. in DTE11. This number is for instance equal to 4000. Each user is e.g. able to use 4096 such channels simultaneously. The SLCN is used to identify all packets transferred over that logical channel;
- SDTE : the source address, i.e. the address of the calling DTE11. SDTE is equal to x1, s1, u1 wherein x1 is the area number, s1 is the user number and u1 is an additional number;
- DDTE : the destination address, i.e. the address of the called DTE61. DDTE is equal to x2, s2, u2 wherein x2 is the area number, s2 is the user number and u2 is an additional number.
The call request packet CRP is received in the line access module LAM1 (Figures 2, 3) of the packet data satellite PDS1 to which the DTE11 is connected via the line L11. After having been processed in the LAM1 in the way briefly described above in connection with Figures 2, 3 and 4, the CRP is transmitted from the LAM1 to the memory MEM2 of the PPM1' (Figure 4) associated with the LAM1.
In the PPM1', and under the control of the virtual call handling programme which makes use of the line number L11, the microprocessor MP2 of this PPM1' accesses the data base DB in its memory MEM2. Thus there is obtained a so-called source call block qualifier SCBBQ = b1 which indicates a memory portion reserved for the storage of all the information packets received via line L11. The microprocessor MP2 then creates in this memory area a call block CB which has a source call block number SCBN = n1 and which is reserved for the storage of packets relating to the virtual call path to be established. This call block CB comprises a first portion or recovery call block RCB for the storage of data which do not vary during a call and a remaining portion for the storage of data which are subject to variations in the course of such a call.
During the establishment of a virtual call path the recovery call block RCB of the CB is gradually filled in as soon as the information becomes available. However, as long as no virtual call path has been established this cannot yet be done for the remaining portion of the CB, and this is the reason why only the RCB is shown in Figure 7. The recovery call block RCB finally should store the data mentioned hereafter. At present, however, DNAD, DCBN, DCBBQ and DLCN are not yet known (as indicated by question marks in Figure 7):
- SLCN : the source logical channel number, i.e. the logical channel number used in DTE11. This number is equal to 4000;
- SDTE : the source DTE, i.e. the number x1, s1, u1 of DTE11;
- DDTE : the destination DTE, i.e. the number x2, s2, u2 of DTE61;
- DNAD : the destination node address, i.e. the address of PSE2 followed by that of PDS6 (Figure 1);
- DCBN : the destination call block number, i.e. the number of a call block in the memory of a packet processing module of PDS6;
- DCBBQ : the destination call block qualifier;
- DLCN : the destination logical channel number, i.e. the logical channel number used in DTE61.
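For illustration, the recovery call block can be represented by the following Python structure; only the field names come from the text, the types and string encodings are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecoveryCallBlock:
    # Data which do not vary during the call; fields still unknown
    # while the path is being set up stay at None (the question marks
    # of Figure 7).
    slcn: int                     # source logical channel number
    sdte: str                     # source DTE address x1, s1, u1
    ddte: str                     # destination DTE address x2, s2, u2
    dnad: Optional[str] = None    # destination node address
    dcbn: Optional[str] = None    # destination call block number
    dcbbq: Optional[str] = None   # destination call block qualifier
    dlcn: Optional[int] = None    # destination logical channel number

rcb = RecoveryCallBlock(slcn=4000, sdte="x1,s1,u1", ddte="x2,s2,u2")
```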
Still under the control of the virtual call handling programme, the microprocessor MP2 of the PPM1' then builds up a new packet and, as it knows the destination PSE1, it transfers this packet to the line access module LAMn (Figure 2) which via the line L1 is connected to the packet switching exchange PSE1. The LAMn then transmits this packet P1 (Figure 7) to this PSE1 via this line L1. This packet P1 comprises:
- the information already contained in the CRP;
- DNAD : the destination node address. Instead of this address, and temporarily, the immediate destination of the packet P1 is indicated. This immediate destination is the address of PSE1 followed by zeros;
- DCBN : the destination call block number. Instead of this number the line number L11 is indicated;
- SNAD : the source node address. This address comprises the address of PSE1 followed by that of PDS1;
- SCBBQ = b1;
- SCBN = n1.
When the packet P1 is received in the PSE1 via the line L1, and more particularly in the line access module LAMa (Figure 5) thereof, it is processed therein and transmitted to one of the PPM1 to PPMt selected on a cyclic basis, e.g. to the PPMt. The latter PPMt subsequently routes the packet to the node data base NDB (Figures 5, 6) via the bus MIBA or MIBB, after it has been found that the DNAD contains the address of PSE1.
The microprocessors CPU1 to CPU3 forming part of this NDB then simultaneously access in the memories MEM3 and MEM4: - a table T with the help of the SDTE; - an area table AT by using x2, part of the DDTE.
The table T permits the NDB to check that the SCBBQ = b1 and the line number L11 are really associated with the DTE11 having address SDTE. The area table AT gives the address of the packet switching exchange serving the area indicated by x2, i.e. PSE2.
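The two look-ups can be sketched as follows; the dictionary contents are hypothetical, only the checking and routing pattern follows the text:

```python
# Hypothetical contents of table T and of the area table AT.
table_t = {"x1,s1,u1": {"scbbq": "b1", "line": "L11"}}
area_table = {"x2": "PSE2"}

def check_and_route(sdte, scbbq, line, area):
    # Verify that the qualifier and line number really belong to the
    # calling DTE, and look up the exchange serving the called area.
    entry = table_t[sdte]
    ok = entry["scbbq"] == scbbq and entry["line"] == line
    return ok, area_table[area]

ok, next_exchange = check_and_route("x1,s1,u1", "b1", "L11", "x2")
```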
If everything is OK the NDB informs the PPMt thereof. Subsequently the PPMt builds up a new packet and transfers it to the line access module LAMz (Figure 5) which via the line L is connected to the packet switching exchange PSE2 which has just been indicated by the NDB. The LAMz then transmits this packet P2 (Figure 7) to this PSE2 via this line L. This packet P2 is similar to P1 and only differs therefrom by the fact that DNAD now partially indicates the new destination address of the packet, i.e. the address of PSE2 followed by zeros (because the address of PDS6 is not yet known).
In the PSE2 the operations performed are similar to those described in connection with PSE1. More particularly, the packet P2 is received in the LAMa' of this PSE2 and is subsequently transferred to a selected packet processing module PPM, e.g. to PPMt'. The latter PPMt' then reads the DNAD of P2 and concludes therefrom that the packet is to be processed in PSE2. As a consequence the PPMt' then accesses the table T of the NDB of this PSE2 by means of the DDTE which is equal to x2, s2, u2. From this table T the NDB obtains the following information:
- DNAD : this address is equal to that of PSE2 followed by that of PDS6;
- DCBBQ = b2 : this data is not used here;
- the number of the line L61 connecting PDS6 to DTE61.
The NDB transmits these additional data to the PPMt', which subsequently builds up a new packet and transmits it to the line access module LAMz' which is connected to the PDS6 via line L6. This packet P3 (Figure 8) is similar to the packet P2. It differs therefrom by the fact that DNAD is now equal to the address of PSE2 followed by that of PDS6. Moreover it contains the following additional information: DCBBQ = b2; DCBN = L61. After the packet P3 has been received in the PDS6, more particularly in the LAM1" (Figure 8) thereof, it is transmitted to the PPM1" (Figure 8) allocated to the LAM1". This PPM1" operates in a similar way as the PPM1' in PDS1 and builds up the recovery call block RCB' (Figure 8) of a call block. This recovery call block RCB' is similar to that which has been built up in the PDS1, but obviously the source and destination in PDS1 are now different from the source and destination in PDS6.
All these data are transmitted to the LAMn" (Figure 8) connected to the DTE61 via the line L61, and this LAMn" then transmits a packet P4 to this DTE61 via this line L61. This packet P4 includes DLCN, the logical channel number used in DTE61, e.g. 0001; SDTE = x1, s1, u1; DDTE = x2, s2, u2.
After having received this packet the DTE61 transmits the following, or call accepted, packet to the PDS6: SLCN = 0001; DDTE = x1, s1, u1; SDTE = x2, s2, u2.
From the PDS6 this packet is transmitted further via PSE2, PSE1 and PDS1 to the DTE11, and along its path this packet is gradually completed with the information available in these units, so that the recovery call block RCB (Figure 7) stored in the PDS1 can finally be filled in completely.
After the recovery call block RCB' has been filled in in the PDS6, the microprocessor of the PPM1" (Figure 8) starts a programme which consists in transferring the RCB' to the memory of the PPM2" (not shown) of PDS6. This happens on the bus MIBA or MIBB, and after the PPM1" has asked this PPM2" if it is ready to receive this RCB' and has received a positive answer therefrom. Likewise, after the RCB (Figure 7) has been fully filled in in the PDS1, the PPM1' thereof copies this block into the memory of the PPM2' (not shown). Thus the same recovery data are each time stored in two packet processing modules, i.e. PPM1' and PPM2' (Figure 2) and PPM1" and PPM2" (Figure 8). This is done for security reasons. Obviously, instead of using PPM2' of PDS1 and PPM2" of PDS6 for storing the recovery data of PPM1' and PPM1" respectively, one could use any other packet processing module of the corresponding packet data satellite.
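The duplicated storage just described can be sketched as a ready-then-copy exchange between two processing modules: the primary holds the active recovery call block, the associated module holds a normally inactive copy. Class and method names below are illustrative assumptions, not taken from the patent.

```python
# Sketch of storing a recovery call block (RCB') in two packet
# processing modules: the primary asks the backup whether it is ready
# (as on bus MIBA/MIBB) before copying the block into its memory.

class PPM:
    def __init__(self, name):
        self.name = name
        self.memory = {}   # call identifier -> recovery call block
        self.ready = True

    def ready_to_receive(self):
        return self.ready

def store_with_backup(primary, backup, call_id, rcb):
    primary.memory[call_id] = dict(rcb, active=True)
    if backup.ready_to_receive():
        # inactive copy: used only if the primary module fails
        backup.memory[call_id] = dict(rcb, active=False)
        return True
    return False

ppm1, ppm2 = PPM('PPM1"'), PPM('PPM2"')
ok = store_with_backup(ppm1, ppm2, call_id=7,
                       rcb={"src": "DTE61", "dst": "DTE11"})
assert ok and ppm2.memory[7]["active"] is False
```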
The purpose of the recovery data will be described hereinafter in relation to PPM1' of PDS1.
It is supposed that PPM1' (Figure 2) detects that it is faulty. Consequently it triggers the bistable circuit BS1' forming part of BCU to its 1-condition so that the output m1' thereof is activated. Thus the BSUA and the BSUB are informed via the line m1' that module PPM1' is out of service, and this fact is registered in the registers RA and RB of BSUA and BSUB respectively. At a certain moment the PPMn' will read the contents of register RB of BSUB via the lead kb and will thus find that PPM1' is out of service. As a consequence the PPMn' will start its node supervision programme NSF. Due to this the PPM2' is informed that it has to start a call recovery programme. By this programme the PPM2' makes all its normally inactive recovery call blocks active, meaning that the PPM2' now takes over the control of the virtual call paths previously handled by PPM1'. PPMn' also transmits a well known reset packet to both DTE11 and DTE61 to inform the latter that PPM2' instead of PPM1' is now involved in the virtual call path previously set up.
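The recovery sequence above reduces to three steps: a faulty module flags itself in a common register, a supervising module polls that register, and the backup module activates its normally inactive recovery call blocks. The sketch below illustrates these steps under assumed names; it is not the patented circuit.

```python
# Sketch of fault registration (bistable BS1' -> register RA/RB),
# supervision (PPMn' polling), and takeover (PPM2' activating its
# inactive recovery call blocks for the calls of the faulty PPM1').

class Node:
    def __init__(self):
        self.register = set()   # out-of-service modules (common register)
        self.blocks = {         # backup copies held by PPM2'
            "call-1": {"owner": "PPM1'", "active": False},
            "call-2": {"owner": "PPM1'", "active": False},
        }

    def report_fault(self, ppm):
        """A module marks itself out of service in the common register."""
        self.register.add(ppm)

    def supervise(self):
        """Node supervision programme: activate backups of faulty modules."""
        for faulty in self.register:
            for blk in self.blocks.values():
                if blk["owner"] == faulty:
                    blk["active"] = True   # backup takes over the call
        return self.register

node = Node()
node.report_fault("PPM1'")
node.supervise()
assert all(b["active"] for b in node.blocks.values())
```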
While the principles of the invention have been described above in connection with specific apparatus, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention.

Claims (16)

1. Data communication system for switching data in a store-and-forward mode, said system including a plurality of communication nodes each having a plurality of ports and each including a plurality of switching processor modules, each of said switching processor modules of a node being able to establish virtual call paths between at least two ports of said node and to transmit data on these paths, and cooperating nodes being able to establish virtual paths between at least two ports belonging to different nodes, each switching processor module including a processor and a memory for storing data about the virtual calls in which said module is involved, characterized in that at least one end node (PDS1) involved in a virtual call path includes means to store recovery data (RCB) about said virtual call path in the memories of at least two switching processor modules (PPM1', PPM2'), one (PPM1') of these two modules being the module involved in said virtual path, and recovery means to use the recovery data (RCB) stored in the memory of the other associated switching processor module (PPM2') so that said virtual paths remain established upon said one switching processor module (PPM1') becoming faulty, said recovery data being made available in said end node by the transmission of control messages required for the establishment of said virtual call path.
2. Data communication system according to claim 1, characterized in that each of said virtual call paths is established by transmitting a call request control message from a calling node to a called node and by transmitting subsequently a call acceptance control message from said called node to said calling node, and that said recovery data about said virtual call path are conveyed by said control messages and are available in said called and calling nodes upon receipt therein of said call request control message and of said call acceptance control message respectively.
3. Data communication system according to claim 1, characterized in that said switching processor modules (PPM1'-PPMn') are divided into at most n/2 mutually exclusive sets of at least two associated modules.
4. Data communication system according to claim 1, characterized in that said node includes at least one common register (RA, RB) for registering the correct or faulty condition of said switching processor modules (PPM1'-PPMn'), and further includes individual register means (BS1'-BSn') associated to said modules and each able to register the faulty condition of the associated module, said register means (BS1'-BSn') being coupled (m1'-mn') to said common register (RA, RB) to be able to communicate thereto the faulty condition of a module, and that at least two of said switching processor modules (PPM1', PPMn') are able to regularly read said common register (RA, RB) to supervise the condition of said modules, either one of these two supervisory switching processor modules (PPM1', PPMn') being able to initiate a recovery programme in the module associated to a faulty module, said recovery programme enabling said use of said recovery data.
5. Data communication system according to claim 4, characterized in that it includes two said common registers (RA, RB) which can be read by respective modules of said two supervisory modules (PPM1', PPMn').
6. Data communication system according to claim 1, characterized in that said switching processor modules (PPM1'-PPMn') are intercoupled via at least one first common bus (MIBA, MIBB) to which are also coupled a plurality of line access modules (LAM1 - LAMn) each having a plurality of said ports, that each of said line access modules includes a plurality of line units (LU1 - LUp) each with a first processor, a second processor (MP1) with its own memory (MEM1), and a common memory (CBM), said line units (LU1 - LUp) being directly coupled to a second common bus (B1) which is coupled to said first common bus (MIBA, MIBB) and to said common memory (CBM), and said second processor being coupled to said second common bus (B1) via a bus switch (BS), and control means (BAL) to selectively grant the use of said second common bus (B1) to said first or second (MP1) processor and to hereby open and close said bus switch (BS) respectively.
7. Data communication system according to claim 6, characterized in that said control means comprise a logic circuit (BAL) to which said processors of said line units (LU1 - LUp) and said second processor (MP1) are coupled via corresponding request and grant lines (up - sup+1), said logic circuit having an output controlling said bus switch (BS).
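The bus allocation of claims 6 and 7 can be illustrated with a small arbiter sketch: requesters raise request lines, the logic grants the common bus B1 to one of them at a time, and the bus switch BS is closed only while the grant goes to the processor that reaches B1 through that switch. The class and the first-come arbitration policy are illustrative assumptions.

```python
# Sketch of bus allocation logic (BAL): line-unit processors sit
# directly on bus B1, while MP1 reaches B1 through bus switch BS,
# which is closed only while MP1 holds the grant.

class BusAllocationLogic:
    def __init__(self, switched=("MP1",)):
        self.switched = set(switched)   # processors behind the bus switch
        self.requests = []              # pending request lines, in order
        self.bus_switch_closed = False
        self.grantee = None

    def request(self, who):
        self.requests.append(who)

    def arbitrate(self):
        """Grant B1 to the oldest requester; drive the switch accordingly."""
        if self.requests:
            self.grantee = self.requests.pop(0)
            self.bus_switch_closed = self.grantee in self.switched
        return self.grantee

bal = BusAllocationLogic()
bal.request("LU1")   # a line-unit processor, directly coupled to B1
bal.request("MP1")   # module processor, coupled via bus switch BS
assert bal.arbitrate() == "LU1" and not bal.bus_switch_closed
assert bal.arbitrate() == "MP1" and bal.bus_switch_closed
```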
8. Data communication system according to claim 1, characterized in that said nodes include a plurality of inter-coupled central nodes (PSE1 - PSE3) to each of which a plurality of said end nodes (PDS1 - PDS3; PDS6 - PDS7) are coupled, each (PSE1) of said central nodes (PSE1 - PSE3) including a plurality of switching processor modules (PPM1 - PPMt) and a data base module (NDB) all intercoupled via a common bus (MIBA, MIBB), said data base module (NDB) including at least one processor (CPU1 - CPU3) and at least one memory (MEM3, MEM4) which stores data relating to the plurality of end nodes (PDS1 - PDS3) coupled to the central node (PSE1) of which said data base module (NDB) forms part.
9. Data communication system according to claim 8, characterized in that said data base module (NDB) includes three micro-synchronized processors (CPU1 - CPU3) which are coupled to the inputs of two threshold circuits (MLC1, MLC2) the outputs of which are coupled to write inputs of respective ones of two memories (MEM3, MEM4) via respective ones of two busses.
10. Data communication system according to claim 9, characterized in that a read input of one (MEM3) of said two memories (MEM3, MEM4) is coupled to the first inputs of three switching devices (MUX1 - MUX3) each with a first input, a second input and a single output, that a read input of the other (MEM4) of said two memories (MEM3, MEM4) is coupled to the second inputs of said three switching devices (MUX1 - MUX3) the outputs of which are coupled to respective ones of said three processors (CPU1 - CPU3), and that parity check circuits (PCC1 - PCC6) are coupled to respective ones of said first and second inputs to check the parity of the data supplied thereat from the corresponding memory (MEM3, MEM4), said parity check circuits (PCC1 - PCC6) controlling the operation of said switching devices (MUX1 - MUX3) in such a way that starting from a position wherein said first (second) inputs are connected to said outputs of said switching devices, this position is changed when simultaneously a parity error is detected on at least one of said first (second) inputs and no parity error is detected on at least one of said second (first) inputs.
11. Data communication system according to claim 10, characterized in that each of said change-over switching devices comprises a multiplexer (MUX1 - MUX3) with a first input, a second input and one output.
12. Data processing system including three processors which simultaneously perform identical instructions and which are coupled to at least one memory through a threshold circuit, characterized in that said three processors are coupled to the inputs of two threshold decision circuits (MLC1, MLC2) the outputs of which are coupled to write inputs of respective ones of two memories (MEM3, MEM4) via respective busses.
13. Data processing system according to claim 12, characterized in that a read output of one (MEM3) of said two memories (MEM3, MEM4) is coupled to the first inputs of three switching devices (MUX1 - MUX3) each with a first input, a second input and a single output, that the other (MEM4) of said two memories (MEM3, MEM4) is coupled to the second inputs of said three switching devices (MUX1 - MUX3) the outputs of which are coupled to respective ones of said three processors (CPU1 - CPU3), and that parity check circuits (PCC1 - PCC6) are coupled to respective ones of said first and second inputs to check the parity of the data supplied thereat from the corresponding memory (MEM3, MEM4), said parity check circuits (PCC1 - PCC6) controlling the operation of said switching devices (MUX1 - MUX3) in such a way that starting from a position wherein said first (second) inputs are connected to said outputs of said switching devices, this position is changed when simultaneously a parity error is detected on at least one of said first (second) inputs and no parity error is detected on at least one of said second (first) inputs.
14. Data processing system according to claim 13, characterized in that each of said change-over switching devices comprises a multiplexer (MUX1 - MUX3) with two inputs and one output.
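The redundancy scheme of claims 9 to 14 combines two mechanisms: on writes, the three micro-synchronized processors feed two majority ("threshold") circuits, each of which writes the voted result to its own memory; on reads, each processor's multiplexer normally returns one memory's word and changes over only when that side shows a parity error while the other side does not. The following is a behavioural sketch under assumed names, not the circuit itself.

```python
# Sketch of the two-memory, three-processor scheme: bitwise majority
# vote on writes, parity-controlled change-over on reads.

def majority(a, b, c):
    """Bitwise two-out-of-three vote, as threshold circuits MLC1/MLC2 do."""
    return (a & b) | (a & c) | (b & c)

def select_read(word3, ok3, word4, ok4, use_mem3=True):
    """Multiplexer rule: change over only when the side in use shows a
    parity error while the other side does not.  Returns (word, side)."""
    if use_mem3 and not ok3 and ok4:
        use_mem3 = False
    elif not use_mem3 and not ok4 and ok3:
        use_mem3 = True
    return (word3 if use_mem3 else word4), use_mem3

# One processor outputs a corrupted word; the vote still produces the
# correct value, which both memories receive.
assert majority(0b1010, 0b1010, 0b0001) == 0b1010

# MEM3 delivers a parity error while MEM4 is clean: the multiplexer
# changes over to the MEM4 side.
word, use_mem3 = select_read(0b1111, False, 0b1010, True, use_mem3=True)
assert word == 0b1010 and use_mem3 is False
```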
15. Data processing system with at least a first and a second processor having access to a common memory, at least said first processor having further access to a private memory, characterized in that said second processor is coupled to said common memory (CBM) via a bus (B1) whilst said first processor (MP1) has access to said bus (B1) via a bus switch (BS), and that the system further includes control means (BAL) able to selectively grant the use of said bus (B1) to said first (MP1) or second processor and to hereby close and open said bus switch (BS) respectively.
16. Data processing system according to claim 15, characterized in that said control means comprise a logic circuit (BAL) to which said first (MP1) and second processors are coupled via request and grant lines, said logic circuit having an output controlling said bus switch (BS).
GB7937039A 1978-10-25 1979-10-25 Data communication and data processing system Expired GB2035755B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
BE2057373A BE871518A (en) 1978-10-25 1978-10-25 DATA COMMUNICATION AND DATA PROCESSING SYSTEM.
DE19792942501 DE2942501A1 (en) 1978-10-25 1979-10-20 DEVICE FOR TRANSMITTING DATA

Publications (2)

Publication Number Publication Date
GB2035755A true GB2035755A (en) 1980-06-18
GB2035755B GB2035755B (en) 1982-11-10

Family

ID=25661729

Family Applications (1)

Application Number Title Priority Date Filing Date
GB7937039A Expired GB2035755B (en) 1978-10-25 1979-10-25 Data communication and data processing system

Country Status (4)

Country Link
BE (1) BE871518A (en)
DE (1) DE2942501A1 (en)
GB (1) GB2035755B (en)
NL (1) NL7907713A (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3212031A1 (en) * 1982-03-31 1983-10-06 Siemens Ag Universal network for correctly timed transfer of information segments, i.e. speech with speech interpolation or messages in blocks

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2131252A (en) * 1982-12-02 1984-06-13 Western Electric Co System and method for controlling a multiple access data communications system
EP0234191A2 (en) * 1986-01-09 1987-09-02 Nec Corporation Packet-switched communications network with parallel virtual circuits for re-routing message packets
EP0234191A3 (en) * 1986-01-09 1989-05-31 Nec Corporation Packet-switched communications network with parallel virtual circuits for re-routing message packets
WO1991007040A1 (en) * 1989-11-06 1991-05-16 American Telephone & Telegraph Company Automatic fault recovery in a packet network

Also Published As

Publication number Publication date
DE2942501A1 (en) 1981-05-07
BE871518A (en) 1979-04-25
NL7907713A (en) 1980-04-29
GB2035755B (en) 1982-11-10


Legal Events

Date Code Title Description
732 Registration of transactions, instruments or events in the register (sect. 32/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 19921025