EP0972252A4 - System and method for telecommunications system fault diagnostics - Google Patents

System and method for telecommunications system fault diagnostics

Info

Publication number
EP0972252A4
Authority
EP
European Patent Office
Prior art keywords
fault
neural network
data
diagnostic
diagnostic system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP98911981A
Other languages
German (de)
French (fr)
Other versions
EP0972252A1 (en)
Inventor
James Austin
Ping Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cybula Ltd
Original Assignee
Porta Systems Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Porta Systems Corp filed Critical Porta Systems Corp
Publication of EP0972252A1 publication Critical patent/EP0972252A1/en
Publication of EP0972252A4 publication Critical patent/EP0972252A4/en
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • H04Q3/0062Provisions for network management
    • H04Q3/0087Network testing or monitoring arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/08Indicating faults in circuits or apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • H04M3/2254Arrangements for supervision, monitoring or testing in networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • H04M3/24Arrangements for supervision, monitoring or testing with provision for checking the normal operation
    • H04M3/247Knowledge-based maintenance systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • H04Q3/0062Provisions for network management
    • H04Q3/0075Fault management techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • H04M3/26Arrangements for supervision, monitoring or testing with means for applying test signals or for measuring
    • H04M3/28Automatic routine testing ; Fault testing; Installation testing; Test methods, test equipment or test arrangements therefor
    • H04M3/30Automatic routine testing ; Fault testing; Installation testing; Test methods, test equipment or test arrangements therefor for subscriber's lines, for the local loop
    • H04M3/301Circuit arrangements at the subscriber's side of the line
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M3/00Automatic or semi-automatic exchanges
    • H04M3/22Arrangements for supervision, monitoring or testing
    • H04M3/26Arrangements for supervision, monitoring or testing with means for applying test signals or for measuring
    • H04M3/28Automatic routine testing ; Fault testing; Installation testing; Test methods, test equipment or test arrangements therefor
    • H04M3/30Automatic routine testing ; Fault testing; Installation testing; Test methods, test equipment or test arrangements therefor for subscriber's lines, for the local loop
    • H04M3/305Automatic routine testing ; Fault testing; Installation testing; Test methods, test equipment or test arrangements therefor for subscriber's lines, for the local loop testing of physical copper line parameters, e.g. capacitance or resistance

Definitions

  • the present invention relates generally to telecommunications system fault location, and more particularly relates to a system and method for telecommunications system fault diagnostics employing a neural network.
  • Telecommunication systems are generally complex electrical systems which are subject to failure from a variety of fault modes.
  • the rapid and accurate classification and isolation of a fault within a telecommunications system is desired to minimize dispatch and repair costs associated with such faults. Therefore, it is a long standing objective within the telecommunications industry to provide a system which can use measured data to automatically diagnose one of several failure modes.
  • LTS automated line test system
  • RTU remote test unit
  • EX local exchange
  • DP distributing points
  • CA customer apparatus
  • the telecommunications system when operating normally, exhibits characteristic parameters in response to the RTU 2 test signal. These parameters include voltage values, current values, resistance values, capacitance values and the like.
  • the RTU 2 samples and evaluates these parameters through the use of software. During a fault condition, these parameters change in response to a given fault.
  • the diagnostic software 16 implements a simple heuristic algorithm.
  • the algorithm includes decision rules which compare one or more measurements with predetermined (by an engineer) threshold values to determine whether a fault exists.
  • the algorithm may compare measured resistance values between a pair of lines against a set of expected threshold values which are stored in the program to decide whether a fault exists in either an exchange 4 or customer apparatus 14.
  • the algorithm uses linear decision rules to perform these functions.
  • the LTS is also capable of recording the measured parameters in a database 18 for future reference. Additionally, the LTS has the capability of accepting manually entered data regarding each fault from an operator via a keyboard. This information may include customer fault reports and service personnel "clear off" codes indicating the actual location of a fault. In this way, a large amount of data is assembled regarding fault history and parameter values associated with various fault locations. However, the LTS is unable to use this data to improve its own operation. If desired, the data stored in database 18 may be evaluated by an engineer periodically and the decision thresholds employed by the algorithm may be manually updated. This is an extremely labor intensive, and therefore expensive, operation. Therefore, it is a long standing objective in the field of telecommunication system diagnostics to develop a system which can overcome this limitation.
  • a neural network is a data processing system largely organized in parallel.
  • the neural network includes a collection of processing elements, or neurons, which are mutually interconnected to one another.
  • the various connections are known as neuronal interconnects.
  • the network is typically formed with an input layer of neurons, an output layer of neurons and one or more hidden layers of neurons.
  • neural networks can "learn" by means of neural network training.
  • previously acquired measurement data is applied to the neural network input layer.
  • An error signal is generated at the output layer and is back propagated through the hidden layers of the network.
  • the various weights associated with each neuronal interconnect are adjusted to minimize the error signal. If sufficient data is applied to the neural network, the neural network is able to classify unknown objects according to parameters established during training.
  • a neural network is employed to perform fault detection and diagnosis for printed circuit boards.
  • the neural network disclosed in the '566 patent is used to process thermal image data from an energized printed circuit board.
  • the neural network is trained by applying data from a printed circuit board with known faults to the network. Once trained, the neural network is then able to analyze new data and classify the new data into one of a plurality of printed circuit board faults.
  • a neural network is used in connection with a method and apparatus for detecting high impedance faults in electrical power transmission systems.
  • the system disclosed in the '327 patent employs a trained neural network to evaluate fast Fourier transforms (FFT) of continuously acquired current measurements.
  • the neural network continuously monitors the FFT data and activates a fault trigger output in the event a high impedance fault is detected.
  • FFT fast Fourier transforms
  • Neural Network Technology In general, neural networks can be viewed as a powerful approach to representing complex nonlinear discriminant functions of the form y_k(x; W_k), where x is an input parameter and W_k is an optimizing parameter within the neural network.
  • One form of neural network is referred to as a multilayer perceptron
  • MLP multilayer perceptron network
  • the topology of an MLP neural network is illustrated in Figure 2.
  • the MLP network includes an input layer 24, an output layer 26 and at least one hidden layer 28. These layers are formed from a plurality of neurons 22.
  • the input layer 24 receives input parameters and distributes these parameters to each neuron 22 in the first hidden layer 28.
  • the hidden layers 28 process this data and establish probability estimates for each of a plurality of output neurons which make up the output layer 26.
  • each single neuron 22 is a discrete processing unit which performs the discriminant function by first performing a linear transformation and then a nonlinear transformation on the input variable x, of the general form:

    a = Σ_i w_i x_i + b,    z = f(a)

  • where f(·) is a nonlinear function having the sigmoid form:

    f(a) = 1 / (1 + e^(−a))
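The neuron computation described above can be sketched in a few lines. This is an illustrative sketch only: the patent does not reproduce its exact activation function here, so the sigmoid is assumed as the standard MLP choice.

```python
import math

def neuron_output(x, w, b):
    """One MLP neuron: a linear transformation of the inputs
    (weighted sum plus bias) followed by a nonlinear
    transformation (sigmoid assumed)."""
    a = sum(wi * xi for wi, xi in zip(w, x)) + b   # linear step
    return 1.0 / (1.0 + math.exp(-a))              # nonlinear step

# With zero total activation the sigmoid returns 0.5:
y = neuron_output([0.0, 0.0], [1.0, 1.0], 0.0)     # y == 0.5
```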
  • The network parameters, or weights, are estimated from this known training data. Preferably this is accomplished using a back propagation method.
  • known data is applied to the input of the neural network and is propagated forward by applying the network equation as previously stated in equation 3.
  • the input data results in output vectors for each layer of the MLP network.
  • the output vectors are evaluated for all output neurons and are propagated backward to determine errors for the hidden neurons.
  • the weights associated with each interneural link are adjusted to minimize the resultant errors. This process is iterated until the weights stabilize over the set of training data.
  • the error function within the neural network may be defined by a sum of square difference function between the desired output, o_k(n), and the network's actual output, y_k(n). This equation may be stated as:

    E = (1/2) Σ_k (o_k(n) − y_k(n))²

  • The weights are adjusted in the direction of the negative error gradient:

    Δw(t) = −η ∂E/∂w

  • where η is a small positive constant which is denoted as the learning rate.
  • the learning rate is a critical parameter within the MLP network. Selecting η to be too large may cause the network to become unstable or oscillatory. On the other hand, if η is too small, the network's learning performance will be slow. To achieve an optimal learning rate, a portion of the previous delta weight is added to the current delta weight to give the following generalized delta rule:

    Δw(t) = −η ∂E/∂w + α Δw(t−1)

  • where α is a small positive constant which is denoted as the momentum.
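The generalized delta rule with momentum can be sketched as a single update step. The default values mirror the 0.15 learning rate and 0.1 momentum reported later in the text for the example network; the function names are illustrative.

```python
def delta_rule_step(weights, grads, prev_deltas, eta=0.15, alpha=0.1):
    """One weight update under the generalized delta rule:
    delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1),
    where eta is the learning rate and alpha the momentum."""
    deltas = [-eta * g + alpha * d for g, d in zip(grads, prev_deltas)]
    new_weights = [w + d for w, d in zip(weights, deltas)]
    return new_weights, deltas

# First step has no momentum contribution (prev_deltas are zero);
# subsequent steps carry a fraction of the previous delta forward.
w, d = delta_rule_step([1.0], [2.0], [0.0], eta=0.1, alpha=0.5)
```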
  • a second neural network topology known in the prior art is referred to as a radial basis function (RBF) network.
  • RBF radial basis function
  • a typical RBF network is illustrated in Figure 3.
  • the RBF network models discrimination functions by performing a non-linear transformation on a linear combination of a set of local kernels or basis functions, of the general form:

    y_k(x) = f( Σ_i w_ki φ_i(x) ),    φ_i(x) = exp( −‖x − μ_i‖² / (2σ_i²) )

  • where μ_i are centre vectors of the network and σ_i are widths associated with the network.
  • the pictorial diagram of Figure 3 represents the above formulae graphically.
  • the output from a hidden or RBF node is determined by the distance in Equation 8 from an input vector x to a centre or pattern vector μ_i.
  • the basis functions are combined and transformed at the output layer.
  • a Moody-Darken learning method known in the art may be used for the optimization of network parameters.
  • the network training involves both unsupervised and supervised stages.
  • the centre vectors, μ_i, are determined by using an adaptive K-means clustering algorithm.
  • the widths, σ_j, are estimated based on the distances between each centre vector and its nearest neighbours.
  • the second stage supervised learning determines the weights from the hidden layer to the output layer using a gradient descent method similar to that for MLP networks discussed above.
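The RBF forward pass can be sketched as follows. This is a minimal illustration with Gaussian basis functions and a purely linear output combination; the output-layer transformation mentioned in the text is omitted, and all names are illustrative.

```python
import math

def rbf_forward(x, centres, widths, weights):
    """RBF network sketch: each hidden node responds with a Gaussian
    of the distance from input x to its centre vector mu_i with width
    sigma_i; the output combines the basis responses linearly."""
    phi = [math.exp(-sum((xi - mi) ** 2 for xi, mi in zip(x, mu))
                    / (2.0 * sigma ** 2))
           for mu, sigma in zip(centres, widths)]
    return sum(w * p for w, p in zip(weights, phi))

# An input at a centre vector gives that basis function its peak
# response of 1.0, so the output equals the associated weight.
y = rbf_forward([0.0, 0.0], [[0.0, 0.0]], [1.0], [2.0])  # y == 2.0
```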
  • a diagnostic system for locating faults within a telecommunications system, the diagnostic system comprising: a remote test unit, the remote test unit being operatively coupled to the telecommunications system and obtaining parametric data therefrom; and a neural network, the neural network being responsive to the parametric data from the remote test unit, classifying the parametric data to at least one of a plurality of fault locations, and generating an output signal indicative of the fault location.
  • a method of locating faults within a telecommunications system comprising the steps of: a) measuring a plurality of parameters associated with the telecommunications system; b) normalizing the measured parameters; and c) classifying the normalized parameters as probabilities associated with a plurality of fault locations.
  • the present invention provides an apparatus and method which achieves improved fault location in a telecommunications system.
  • the present invention typically provides a system which uses previous fault data and present measured data to diagnose faults within a telecommunications system.
  • the present invention typically provides a system which can accurately classify a fault mode in a telecommunications system.
  • the present invention also typically provides a system which can use previous fault data to alter the boundaries of fault decisions within a telecommunications diagnostic system.
  • the telecommunications fault diagnostic system is formed having a remote test unit (RTU) operatively coupled to a neural network.
  • the RTU, which is conventional in the field of telecommunications diagnostics, is operatively coupled to a telecommunications system through a local exchange.
  • the RTU generates test signals and measures system parameters such as resistance, capacitance, voltage, etc.
  • the neural network is operatively coupled to the RTU and receives the system parameter data therefrom.
  • the neural network is a trained, and dynamically trainable, processing system which is formed from a plurality of interconnected processing units or neurons.
  • the neurons are organized in one or more processing layers.
  • System parameter data is applied to a first processing layer, or input layer. From the input layer, data is distributed to one or more hidden layers of neurons. Based on weights, which are learned by the neural network during "training," each neuron makes a decision on the data which it receives.
  • the decisions from the interconnected neurons are applied to an output layer which assigns the final probability for each fault type and location. Exemplary outputs from the output layer include the respective probabilities for a fault being located in one of the exchange, the lines, or the customer apparatus within a telecommunication system.
  • the neural network is "trained" using historical fault data which is collected from an RTU and stored in a database. By evaluating many measurements, along with associated fault types and locations, the neural network is able to assign the proper weight to each neuronal interconnect within the neural network.
  • the neural network may also be easily retrained to adapt to new data.
  • Figure 1 is a block diagram of a telecommunications diagnostic system known in the prior art.
  • Figure 2 is a pictorial diagram of a generalized multi layer perceptron neural network known in the prior art.
  • Figure 3 is a pictorial diagram of a generalized radial basis function neural network known in the prior art.
  • Figure 4 is a block diagram of a telecommunications diagnostic system employing a neural network, and formed in accordance with the present invention.
  • Figure 5 is a schematic diagram illustrating an electrical model of a pair of lines used in a telecommunication system.
  • Figure 6 is a pictorial diagram of a neural network having an input layer, an output layer and two generalized hidden layers.
  • Figure 7 is a block diagram illustrating the interconnection of the neural network and data storage device used to facilitate training of the network in one embodiment of the present invention.
  • Figure 8 is a flow chart illustrating the steps involved in training a neural network in accordance with the present invention.
  • FIGS. 9 and 10 are block diagrams of integrated telecommunication diagnostic systems, formed in accordance with the present invention.
  • FIG. 4 A telecommunications diagnostic system formed in accordance with the present invention is illustrated in Figure 4.
  • the block diagram of Figure 4 includes a remote test unit (RTU) 2 which is operatively coupled to the telecommunications system via a local exchange 4.
  • the RTU 2 is substantially equivalent to that used in the LTS system known in the prior art.
  • the RTU 2 induces test signals into the telecommunications system and measures system parameters in response thereto.
  • Figure 5 illustrates a simplified electrical model of the system parameters for two adjacent telecommunication lines.
  • the two adjacent lines designated A and B, may be characterized in part by a plurality of resistance values.
  • Typical resistance measurements performed by the RTU 2 include: line A to earth ground (R_AG) 40, line A to line B (R_AB) 42, line A to battery (R_AV) 44, line B to earth ground (R_BG) 46, line B to line A (R_BA) 42, and line B to battery (R_BV) 48.
  • the RTU 2 is capable of measuring the capacitance from line A to earth ground (C_A) 50, from line B to earth ground (C_B) 52 and from line A to line B (C_AB) 54.
  • the RTU 2 also provides a ratio of capacitance C_A/C_B.
  • the RTU can also measure the voltage from line A to earth ground (V_A), from line B to earth ground (V_B) and from line A to line B (V_AB).
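The measurements above form the input vector presented to the neural network. The sketch below collects them in a fixed order; the key names are illustrative shorthand for the six resistance, three capacitance (plus ratio) and three voltage measurements, and since the description later states the input layer receives fourteen values in total, a real system would carry at least one further field.

```python
def rtu_feature_vector(m):
    """Assemble RTU measurements (a dict of named readings) into one
    ordered input vector for the neural network.  Key names are
    illustrative, not taken from the patent."""
    keys = ["R_AG", "R_AB", "R_AV", "R_BG", "R_BA", "R_BV",
            "C_A", "C_B", "C_AB", "C_ratio",
            "V_A", "V_B", "V_AB"]
    return [m[k] for k in keys]
```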
  • the RTU 2 is operatively coupled to a neural network 30.
  • the neural network 30 is a data processing system, preferably organized in parallel.
  • the neural network 30 may generally take the form of any topology known in the art.
  • a generalized topology of the neural network 30 is illustrated in Figure 6.
  • the neural network 30 includes an input layer 60 which is operatively coupled to the RTU 2 and receives the measured system parameters therefrom.
  • the neural network 30 further includes one or more hidden processing layers 62 and an output layer 64.
  • each layer is composed of a plurality of processing neurons which are interconnected via weighted links 68.
  • the output layer 64 preferably includes three probability outputs. These outputs preferably indicate the presence of a fault in either the customer apparatus 14, a telecommunications line 8 or the exchange 4 ( Figure 1).
  • the telecommunications diagnostic system of the present invention further includes a data storage device 18.
  • the data storage device 18 contains a historical database of fault data including measured system parameters, customer complaint data and fault clear-off codes.
  • the clear-off codes are entered by service personnel after a fault has been repaired and indicate the actual location of the fault.
  • the data storage device 18 is operatively coupled to the neural network 30.
  • a simplified block diagram illustrating a preferred interconnection of the data storage device 18 and neural network 30 is illustrated in Figure 7.
  • the input layer 60 of the neural network 30 receives data from the database for network training.
  • the neural network 30 may also store current fault data in the data storage device 18 to enhance the historical database for continuous adaptive learning.
  • The training process is illustrated in Figure 8. After training begins (step 70), the learning rate and momentum factors are set for the network (step 72). Each weighted link 68 (Figure 6) is then randomly set to a normalized value in the range of -1 to 1 (step 74).
  • Training is performed in an iterative, closed loop.
  • the loop, or epoch, begins by loading the first set of historical data from the data storage device 18 into the input layer 60 of the neural network (step 78). This data is processed by the network in accordance with the randomly initialized weights assigned to the links 68. This processing generates a tentative output at the output layer 64 (step 80).
  • An error detector 69 compares the initial output against the expected output (clear-off code) for the corresponding input data in step 82 and an error signal is generated. The magnitude of the error signal is compared to an acceptable error limit. If the error is acceptable, training is complete (step 84). If the error exceeds the acceptable limit, the weights 68 are adjusted (step 88). After the weights 68 are adjusted, a new epoch begins from step 78 for the next set of data in the historical database.
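The training steps described above can be sketched as a loop. The `network` object and `adjust` routine are placeholders for the MLP machinery (forward propagation and back propagation respectively), and the scalar error is an assumption of this sketch.

```python
import random

def train_network(network, dataset, error_limit, adjust, max_epochs=100):
    """Training loop sketch: weights start at random values in
    [-1, 1] (step 74); each epoch propagates a historical record
    (steps 78-80), compares the tentative output with its expected
    clear-off code (step 82), and adjusts weights while the error
    exceeds the acceptable limit (step 88), stopping once all
    records are within the limit (step 84)."""
    network.weights = [random.uniform(-1.0, 1.0)
                       for _ in network.weights]          # step 74
    for _ in range(max_epochs):
        worst = 0.0
        for inputs, expected in dataset:                  # step 78
            output = network.forward(inputs)              # step 80
            error = abs(expected - output)                # step 82
            worst = max(worst, error)
            if error > error_limit:
                adjust(network, inputs, expected)         # step 88
        if worst <= error_limit:                          # step 84
            return network
    return network
```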
  • the historical fault data is preferably pre-processed before training begins. Because the fault data is measured data from a real system, the data is subject to noise and other sources of unreliability. Pre-processing the data serves to eliminate these erroneous data sets which would adversely affect network 30 training.
  • the neural network 30 is capable of processing measured data and classifying the data into probabilities of fault locations. This information is available on an output layer of the neural network 30.
  • Preferred fault classifications include customer apparatus (P_CA), line (P_LN) and exchange (P_EX).
  • the telecommunications fault diagnostic system of the present invention further includes a customer service processing unit (CSP) 32.
  • the CSP 32 receives the fault classification data from the neural network 30 and provides this information on a user interface console, such as a personal computer or data terminal.
  • the CSP 32 preferably includes an input device, such as a keyboard. From the input device, an operator may enter appropriate customer complaint information. This information is then added to the historical database of the data storage device 18.
  • the telecommunications fault diagnostic system preferably includes a repair dispatch processing unit (RDP) 34.
  • the RDP 34 receives the fault location data from the neural network 30 and provides this information on a user interface console, such as a personal computer or data terminal. From this information, service personnel are dispatched in accordance with the indicated fault location.
  • the RDP 34 preferably includes an input device, such as a keyboard. From the input device, the operator may enter appropriate information regarding the resolution of the detected fault (clear-off code). This information is then added to the historical database of the data storage device 18. In many cases, the RDP 34 and CSP 32 may be integrated into a single station.
  • the present invention addresses the problem of fault location in a telecommunications system as a classification problem.
  • the topology of neural network 30 is chosen to optimize the classification of the various fault parameters to one of a fixed number of possible fault locations.
  • In a preferred embodiment, the neural network 30 takes the form of the MLP network of Figure 2. The MLP network of Figure 2 includes two hidden layers and an output layer of processing neurons. Implementation of this network requires defining the system parameters to be input to the network, the fault classes to be output from the network, and the topologies of the hidden layers between the input and output layers.
  • the input layer 60 preferably receives fourteen measured values from the RTU 2 as inputs.
  • the output layer 66 of the neural network represents probabilities of a fault being located in the customer apparatus (P_CA), the telecommunications lines (P_LN) and the communications exchange (P_EX).
  • the neural network classifies data as probability functions. Therefore, it is desirable for the input data to be normalized in the range of minus one (-1) to one (1).
  • the measured input data of resistance and capacitance vary over a significant range. Resistance values vary in the telecommunications system from a few ohms
  • Capacitance values vary from a few nanofarads to 10⁴ nanofarads.
  • the service clear-off codes take the form of discrete values in the range of 0-30. Fault reports from customers are input as symbolic data.
  • Normalization requires coding the symbolic data into a discrete numeric representation and scaling the measured data into the range of -1 to 1.
  • the data is normalized with respect to both mean and variance values of individual measurements.
  • other normalization methods such as simple normalization with respect to minimum and maximum values, or normalizing the data with respect to both mean and covariance matrices of all measurements may also be used.
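Two of the normalization options named above can be sketched directly; the function names are illustrative.

```python
def normalize_minmax(values, lo, hi):
    """Simple normalization with respect to minimum and maximum
    values, mapping [lo, hi] linearly onto [-1, 1]."""
    return [2.0 * (v - lo) / (hi - lo) - 1.0 for v in values]

def normalize_mean_std(values, mean, std):
    """Normalization with respect to the mean and variance (via the
    standard deviation) of an individual measurement, the method the
    text states is preferred."""
    return [(v - mean) / std for v in values]

# min-max maps the endpoints to -1 and 1 and the midpoint to 0:
scaled = normalize_minmax([0.0, 5.0, 10.0], 0.0, 10.0)  # [-1.0, 0.0, 1.0]
```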
  • the topology of the hidden layers 62 is selected to strike a balance between the classification performance, the training time and the available processing power of the neural network.
  • an MLP network may be formed in accordance with the present invention having a first hidden layer with 75 neurons and a second hidden layer with 20 neurons. This topology is denoted as 75:20.
  • three parameters are adjusted to optimize the classification performance. These parameters are the learning rate η, the momentum and the decaying factor. For this network, it was determined that a learning rate of about 0.15 and a momentum factor of about 0.1 achieved optimal performance. If η is made large, e.g., 0.3-0.4, the network becomes unstable. On the other hand, small values of η, e.g., 0.05, resulted in longer learning times.
  • the MLP network was modeled using Neural Works Professional II/Plus (TM) and Neural Works Explorer (TM) software (manufactured by Neural Works, Inc. of Pittsburgh, Pennsylvania) running on a personal computer. Over a sampling of 18,962 sets of fault data, the present invention was able to classify 76.4% of the fault cases correctly. This compares to a 68.3% overall correct classification rate by the LTS known in the prior art. This represents a net improvement of 8.1%.
  • a telecommunications fault diagnostic system is illustrated with a neural network 30 operating in parallel with conventional LTS diagnostic software 16. Both processing systems simultaneously receive the measured system data from an RTU 2 and generate fault diagnostic output data.
  • the neural network 30 and diagnostic software 16 each generate a fault location output signal.
  • the fault location output signal from each processing system is received by a post processor 100.
  • the post processor 100 selectively routes the fault location signal from either the neural network 30 or the LTS diagnostic software 16 to a fault location output.
  • the LTS diagnostic software 16 also calculates a fault type signal. This signal is presented directly on a fault type output. This method of integration improves the diagnostic performance of the MLP neural network by approximately 0.4%.
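The parallel topology of Figure 9 can be sketched as a simple routing function. The selection flag `prefer_nn` is an assumption of this sketch; the text does not specify the rule the post processor uses to choose between the two sources.

```python
def post_process(nn_location, lts_location, lts_fault_type, prefer_nn):
    """Post-processor sketch for the Figure 9 topology: the fault
    location is routed from either the neural network or the LTS
    diagnostic software, while the fault type is always taken
    directly from the LTS software."""
    location = nn_location if prefer_nn else lts_location
    return location, lts_fault_type

# Routing from the neural network keeps the LTS fault type:
out = post_process("line", "exchange", "disconnection", True)
# out == ("line", "disconnection")
```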
  • An alternate embodiment of an integrated telecommunications diagnostic system formed in accordance with the present invention is illustrated in Figure 10. As with Figure 9, this topology features the parallel operation of a neural network 30 and conventional LTS diagnostic software 16. However, rather than employing a post processor 100, the input layer 60 of the neural network 30 is expanded to accept the fault location and fault type output signals from the LTS diagnostic software 16. The fault location output is derived from the neural network output layer 66. The fault type output is received directly from the LTS diagnostic software 16. This configuration requires a more complex neural network 30 but eliminates the post processor 100. This integration topology yielded a 0.5% improvement in classification rate over the MLP neural network 30 standing alone.
  • The embodiments of Figures 9 and 10 preferably include a data storage device 18, a customer service processor 32 and/or a repair dispatch processor 34, as discussed in connection with Figure 4.

Abstract

A telecommunications fault location and diagnostic system employs a remote test unit (RTU) (2) to collect system parameter data. The RTU (2) is operatively coupled to a trained neural network (30) which receives the system parameter data from the RTU (2). The neural network (30) is trained using pre-screened historical fault data which is stored in a database. Once trained, the neural network classifies the RTU data into one of a predetermined number of fault probabilities.

Description

System and Method for Telecommunications System Fault Diagnostics
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates generally to telecommunications system fault location, and more particularly relates to a system and method for telecommunications system fault diagnostics employing a neural network.
Telecommunication systems are generally complex electrical systems which are subject to failure from a variety of fault modes. The rapid and accurate classification and isolation of a fault within a telecommunications system is desired to minimize dispatch and repair costs associated with such faults. Therefore, it is a long standing objective within the telecommunications industry to provide a system which can use measured data to automatically diagnose one of several failure modes.
The accurate diagnosis of faults within a telecommunications system is hampered by the limited accessibility of test points within the system as well as the complex relationships between faults and measurable system parameters. An automated line test system (LTS) that is currently used to perform this function is illustrated in Figure 1. In the LTS of Figure 1, a remote test unit (RTU) 2 is employed at each local exchange (EX) 4 within the telecommunication system. The RTU 2 is a hardware device which generates test signals. These test signals are introduced into the system through the EX 4. The test signals propagate through a main distribution frame (MDF) 6 and into the telecommunications lines 8. The signals typically pass through a cross connect switch 10, to one or more distributing points (DP) 12. Ultimately the signals reach various customer apparatus (CA) 14 such as a modem, facsimile machine, telephone handset and the like. The telecommunications system, when operating normally, exhibits characteristic parameters in response to the RTU 2 test signal. These parameters include voltage values, current values, resistance values, capacitance values and the like. The RTU 2 samples and evaluates these parameters through the use of software. During a fault condition, these parameters change in response to a given fault.
The diagnostic software 16 implements a simple heuristic algorithm. The algorithm includes decision rules which compare one or more measurements with predetermined (by an engineer) threshold values to determine whether a fault exists. As an example, the algorithm may compare measured resistance values between a pair of lines against a set of expected threshold values which are stored in the program to decide whether a fault exists in either an exchange 4 or customer apparatus 14. The algorithm uses linear decision rules to perform these functions.
The LTS is also capable of recording the measured parameters in a database 18 for future reference. Additionally, the LTS has the capability of accepting manually entered data regarding each fault from an operator via a keyboard. This information may include customer fault reports and service personnel "clear off" codes indicating the actual location of a fault. In this way, a large amount of data is assembled regarding fault history and parameter values associated with various fault locations. However, the LTS is unable to use this data to improve its own operation. If desired, the data stored in database 18 may be evaluated by an engineer periodically and the decision thresholds employed by the algorithm may be manually updated. This is an extremely labor intensive, and therefore expensive, operation. Therefore, it is a long standing objective in the field of telecommunication system diagnostics to develop a system which can overcome this limitation.
In diagnostic and fault location systems unrelated to telecommunications systems, neural networks have been employed to improve system performance. A neural network is a data processing system largely organized in parallel. The neural network includes a collection of processing elements, or neurons, which are interconnected with one another. The various connections are known as neuronal interconnects. The network is typically formed with an input layer of neurons, an output layer of neurons and one or more hidden layers of neurons.
An important characteristic of neural networks is that they can "learn" by means of neural network training. During training, previously acquired measurement data is applied to the neural network input layer. An error signal is generated at the output layer and is back propagated through the hidden layers of the network. During this operation, the various weights associated with each neuronal interconnect are adjusted to minimize the error signal. If sufficient data is applied to the neural network, the neural network is able to classify unknown objects according to parameters established during training.
In U.S. Patent No. 5,440,566 to Spence et al., a neural network is employed to perform fault detection and diagnosis for printed circuit boards. The neural network disclosed in the '566 patent is used to process thermal image data from an energized printed circuit board. The neural network is trained by applying data from a printed circuit board with known faults to the network. Once trained, the neural network is then able to analyze new data and classify the new data into one of a plurality of printed circuit board faults.
In U.S. Patent No. 5,537,327 to Snow et al., a neural network is used in connection with a method and apparatus for detecting high impedance faults in electrical power transmission systems. The system disclosed in the '327 patent employs a trained neural network to evaluate fast Fourier transforms (FFT) of continuously acquired current measurements. The neural network continuously monitors the FFT data and activates a fault trigger output in the event a high impedance fault is detected.
Neural Network Technology
In general, neural networks can be viewed as a powerful approach to representing complex nonlinear discriminant functions in the form y_k(x; W_k), where x is an input parameter and W_k is an optimizing parameter within the neural network. One form of neural network is referred to as a multilayer perceptron
(MLP) network. The topology of an MLP neural network is illustrated in Figure 2. The MLP network includes an input layer 24, an output layer 26 and at least one hidden layer 28. These layers are formed from a plurality of neurons 22. The input layer 24 receives input parameters and distributes these parameters to each neuron 22 in the first hidden layer 28. The hidden layers 28 process this data and establish probability estimates for each of a plurality of output neurons which make up the output layer 26.
Within the MLP network, each single neuron 22 is a discrete processing unit which performs the discriminant function by first performing a linear transformation and then a nonlinear transformation on the input variable x as follows:
"k = [Wk X + Wk0) = φ ∑ wkj +w kO Eq. 1 J = ι
Where φ is a nonlinear function having the form:
9(v)= - l-— Eq. 2 l + exp(-v)
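The single-neuron computation of Equations 1 and 2 can be sketched as follows (a minimal illustration; the input vector and weight values shown are hypothetical, not values from the patent):

```python
import math

def sigmoid(v):
    # Equation 2: phi(v) = 1 / (1 + exp(-v))
    return 1.0 / (1.0 + math.exp(-v))

def neuron_output(x, w, w0):
    # Equation 1: a linear combination of the inputs followed by
    # the nonlinear sigmoid transformation.
    activation = sum(wj * xj for wj, xj in zip(w, x)) + w0
    return sigmoid(activation)

# Hypothetical input vector and weights, for illustration only.
y = neuron_output([0.5, -0.2, 0.8], [0.4, 0.1, -0.3], 0.05)
print(y)  # a value strictly between 0 and 1
```

Because φ is bounded on (0, 1), the neuron's output can be read directly as a soft decision on its inputs.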
The general network function for the MLP neural network of Figure 2 is as follows:
y_k(x) = φ( Σ_j w_kj φ( Σ_i w_ji x_i + w_j0 ) + w_k0 )   Eq. 3
Once a network topology is established, it is necessary to "train" the network by applying previously collected training data to the input layer 24 and output layer 26 of the neural network. Optimal network parameters, or interneural
weights, are estimated from this known training data. Preferably this is accomplished using a back propagation method. In this process, known data is applied to the input of the neural network and is propagated forward by applying the network equation as previously stated in equation 3. The input data results in output vectors for each layer of the MLP network. The output vectors are evaluated for all output neurons and are propagated backward to determine errors for the hidden neurons. During this process, the weights associated with each interneural link are adjusted to minimize the resultant errors. This process is iterated until the weights stabilize over the set of training data.
The error function within the neural network may be defined by a sum of square difference function between the desired output, o_k(n), and the network's actual output, y_k(n). This equation may be stated as:
E(W) = (1/2N) Σ_{n=1..N} Σ_{k=1..c} [o_k(n) - y_k(n)]²   Eq. 4
To minimize this error function, a gradient descent method well known in the art may be used. In applying the gradient descent method, an adjustment which is made to a weight (ΔW) at iteration n+1 is proportional in magnitude, yet opposite in direction, to the partial derivative of the error function with respect to the weight at the previous (n) iteration. This can be stated as:
ΔW(n+1) = -η ∂E/∂W   Eq. 5
where η is a small positive constant which is denoted as the learning rate. The learning rate is a critical parameter within the MLP network. Selecting η to be too large may cause the network to become unstable or oscillatory. On the other hand, if η is too small, the network's learning performance will be slow. To achieve an optimal learning rate, a portion of the previous delta weight is added to the current delta weight to give the following generalized delta rule:
ΔW(n+1) = -η ∂E/∂W + α ΔW(n)   Eq. 6
where α is a small positive constant which is denoted as the momentum.
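The generalized delta rule can be sketched as a single update step (an illustrative sketch; the one-dimensional error function and the default η and α values below are assumptions chosen for demonstration, though they match the values the description later reports as near-optimal):

```python
def delta_rule_update(w, grad, prev_delta, eta=0.15, alpha=0.1):
    # Generalized delta rule: the new weight change is the negative
    # gradient scaled by the learning rate eta, plus a fraction alpha
    # (the momentum) of the previous weight change.
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

# Illustrative single-weight example: E(w) = w**2, so dE/dw = 2*w.
w, prev = 1.0, 0.0
for _ in range(50):
    w, prev = delta_rule_update(w, 2.0 * w, prev)
print(abs(w) < 1e-3)  # gradient descent drives the weight toward the minimum
```

The momentum term smooths successive updates, which is what lets a moderate η converge without oscillating.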
A second neural network topology known in the prior art is referred to as a radial basis function (RBF) network. A typical RBF network is illustrated in Figure 3. The RBF network models discrimination functions by performing a non-linear transformation on a linear combination of a set of local kernels or basis functions as follows:
y_k(x) = φ( Σ_j w_kj g_j(x) + w_k0 )   Eq. 7
where φ is the same as that in Equation 2 and g_j is a Gaussian basis function of the form:
g_j(x) = exp( -||x - μ_j||² / (2σ_j²) )   Eq. 8
where the μ_j are centre vectors of the network and the σ_j are widths associated with the network.
The pictorial diagram of Figure 3 represents the above formulae graphically. The output from a hidden or RBF node is determined by the distance, given in Equation 8, from an input vector x to a centre or pattern vector μ_j. The basis functions are combined and transformed at the output layer.
For computational efficiency, a Moody-Darken learning method known in the art may be used for the optimization of network parameters. In this method, the network training involves both unsupervised and supervised stages. In the unsupervised learning stage, the centre vectors, μ_j, are determined by using an adaptive K-means clustering algorithm. The widths, σ_j, are estimated based on the distances between each centre vector and its nearest neighbours. The second, supervised learning stage determines the weights from the hidden layer to the output layer using a gradient descent method similar to that for MLP networks discussed above.
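The unsupervised stage of the Moody-Darken procedure can be sketched as follows (an illustrative outline, not the implementation used in the patent: the clustering shown is a simple batch K-means rather than the adaptive variant, at least two centres are assumed, and the supervised stage is omitted since it is the same gradient descent discussed for MLP networks):

```python
import math
import random

def kmeans(data, k, iters=20):
    # Unsupervised stage: estimate the centre vectors mu_j by K-means.
    centres = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            # Assign each point to its nearest centre.
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, centres[j])))
            clusters[j].append(x)
        for j, members in enumerate(clusters):
            if members:  # move each centre to the mean of its cluster
                centres[j] = [sum(col) / len(members) for col in zip(*members)]
    return centres

def widths(centres):
    # Each width sigma_j is estimated from the distance between the
    # centre and its nearest neighbouring centre.
    return [min(math.dist(c, other)
                for i, other in enumerate(centres) if i != j)
            for j, c in enumerate(centres)]

def rbf_activations(x, centres, sigmas):
    # Equation 8: Gaussian basis functions evaluated at input x.
    return [math.exp(-math.dist(x, c) ** 2 / (2 * s ** 2))
            for c, s in zip(centres, sigmas)]
```

Each hidden-node activation lies in (0, 1] and decays with distance from its centre, which is the locality property that distinguishes the RBF network from the MLP.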
In accordance with a first aspect of the present invention, there is provided a diagnostic system for locating faults within a telecommunications system, the diagnostic system comprising: a remote test unit, the remote test unit being operatively coupled to the telecommunications system and obtaining parametric data therefrom; and a neural network, the neural network being responsive to the parametric data from the remote test unit, classifying the parametric data to at least one of a plurality of fault locations, and generating an output signal indicative of the fault location.
In accordance with a second aspect of the present invention, there is provided a method of locating faults within a telecommunications system, the method comprising the steps of: a) measuring a plurality of parameters associated with the telecommunications system; b) normalizing the measured parameters; and c) classifying the normalized parameters as probabilities associated with a plurality of fault locations. The present invention provides an apparatus and method which achieves improved fault location in a telecommunications system.
The present invention typically provides a system which uses previous fault data and present measured data to diagnose faults within a telecommunications system.
The present invention typically provides a system which can accurately classify a fault mode in a telecommunications system. The present invention also typically provides a system which can use previous fault data to alter the boundaries of fault decisions within a telecommunications diagnostic system. The telecommunications fault diagnostic system is formed having a remote test unit (RTU) operatively coupled to a neural network. The RTU, which is conventional in the field of telecommunications diagnostics, is operatively coupled to a telecommunications system through a local exchange. The RTU generates test signals and measures system parameters such as resistance, capacitance, voltage, etc.
The neural network is operatively coupled to the RTU and receives the system parameter data therefrom. The neural network is a trained, and dynamically trainable, processing system which is formed from a plurality of interconnected
processing units, or neurons. The neurons are organized in one or more processing layers. System parameter data is applied to a first processing layer, or input layer. From the input layer, data is distributed to one or more hidden layers of neurons. Based on weights, which are learned by the neural network during "training," each neuron makes a decision on the data which it receives. The decisions from the interconnected neurons are applied to an output layer which assigns the final probability for each fault type and location. Exemplary outputs from the output layer include the respective probabilities for a fault being located in one of the exchange, the lines, or the customer apparatus within a telecommunication system.
The neural network is "trained" using historical fault data which is collected from an RTU and stored in a database. By evaluating many measurements, along with associated fault types and locations, the neural network is able to assign the proper weights to attribute to each neuronal interconnect within the neural network. The neural network may also be easily retrained to adapt to new data.
These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram of a telecommunications diagnostic system known in the prior art.
Figure 2 is a pictorial diagram of a generalized multi layer perceptron neural network known in the prior art.
Figure 3 is a pictorial diagram of a generalized radial basis function neural network known in the prior art. Figure 4 is a block diagram of a telecommunications diagnostic system employing a neural network, and formed in accordance with the present invention.
Figure 5 is a schematic diagram illustrating an electrical model of a pair of lines used in a telecommunication system.
Figure 6 is a pictorial diagram of a neural network having an input layer, an output layer and two generalized hidden layers.
Figure 7 is a block diagram illustrating the interconnection of the neural network and data storage device used to facilitate training of the network in one embodiment of the present invention.
Figure 8 is a flow chart illustrating the steps involved in training a neural network in accordance with the present invention.
Figures 9 and 10 are block diagrams of integrated telecommunication diagnostic systems, formed in accordance with the present invention.
A telecommunications diagnostic system formed in accordance with the present invention is illustrated in Figure 4. The block diagram of Figure 4 includes a remote test unit (RTU) 2 which is operatively coupled to the telecommunications system via a local exchange 4. The RTU 2 is substantially equivalent to that used in the LTS system known in the prior art. The RTU 2 induces test signals into the telecommunications system and measures system parameters in response thereto.
Figure 5 illustrates a simplified electrical model of the system parameters for two adjacent telecommunication lines. Referring to Figure 5, the two adjacent lines, designated A and B, may be characterized in part by a plurality of resistance values. Typical resistance measurements performed by the RTU 2 include: line A to earth ground (RAG) 40, line A to line B (RAB) 42, line A to battery (RAV) 44, line B to earth ground (RBG) 46, line B to line A (RBA) 42, and line B to battery (RBV) 48.
The capacitance values associated with lines A and B are also illustrated in
Figure 5. The RTU 2 is capable of measuring the capacitance from line A to earth ground (CA) 50, from line B to earth ground (CB) 52 and from line A to line B (CAB) 54. The RTU 2 also provides a ratio of capacitance CA/CB. In addition, the RTU can also measure the voltage from line A to earth ground (VA), from line B to earth ground (VB) and from line A to line B (VAB).
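For illustration, the measurements described above can be collected into a single parameter vector for presentation to the diagnostic system. The field names below are our own shorthand, not identifiers from the patent, and the exact composition of the fourteen inputs mentioned later in the description is an assumption (here: six resistances, three capacitances, three voltages, a termination signal, plus the derived CA/CB ratio):

```python
from dataclasses import dataclass, astuple

@dataclass
class LineMeasurements:
    # Resistance measurements (ohms)
    r_ag: float  # line A to earth ground
    r_ab: float  # line A to line B
    r_av: float  # line A to battery
    r_bg: float  # line B to earth ground
    r_ba: float  # line B to line A
    r_bv: float  # line B to battery
    # Capacitance measurements (nanofarads)
    c_a: float   # line A to earth ground
    c_b: float   # line B to earth ground
    c_ab: float  # line A to line B
    # Voltage measurements (volts)
    v_a: float
    v_b: float
    v_ab: float
    termination: float  # termination signal (encoding assumed)

    def feature_vector(self):
        # The capacitance ratio CA/CB reported by the RTU is derived here,
        # giving fourteen values in all.
        return list(astuple(self)) + [self.c_a / self.c_b]
```

Grouping the raw measurements this way keeps the ordering of the network's input layer fixed, which matters because each input neuron corresponds to one specific measurement.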
Returning to Figure 4, the RTU 2 is operatively coupled to a neural network 30. The neural network 30 is a data processing system, preferably organized in parallel. The neural network 30 may generally take the form of any topology known in the art.
A generalized topology of the neural network 30 is illustrated in Figure 6.
The neural network 30 includes an input layer 60 which is operatively coupled to the RTU 2 and receives the measured system parameters therefrom. The neural network 30 further includes one or more hidden processing layers 62 and an output layer 64. As in the prior art, each layer is composed of a plurality of processing neurons which are interconnected via weighted links 68. The output layer 64 preferably includes three probability outputs. These outputs preferably indicate the presence of a fault in either the customer apparatus 14, a telecommunications line 8 or the exchange 4 (Figure 1).
Referring to Figure 4, the telecommunications diagnostic system of the present invention further includes a data storage device 18. The data storage device 18 contains a historical database of fault data including measured system parameters, customer complaint data and fault clear-off codes. The clear-off codes are entered by service personnel after a fault has been repaired and indicate the actual location of the fault.
The data storage device 18 is operatively coupled to the neural network 30. A simplified block diagram illustrating a preferred interconnection of the data storage device 18 and neural network 30 is illustrated in Figure 7. The input layer
60 of the neural network 30 receives data from the data base for network training. The neural network 30 may also store current fault data in the data storage device 18 to enhance the historical data base for continuous adaptive learning.
The process of training the generalized neural network 30 is described in connection with the block diagram of Figure 7 and the flow chart illustrated in
Figure 8. After training begins (step 70), the learning rate and momentum factors are set for the network (step 72). Each weighted link 68 (Figure 6) is then randomly set to a normalized value in the range of -1 to 1 (step 74).
Training is performed in an iterative, closed loop. The loop, or epoch, begins by loading the first set of historical data from the data storage device 18 into the input layer 60 of the neural network (step 78). This data is processed by the network in accordance with the randomly initialized weights assigned to the links 68. This processing generates a tentative output at the output layer 64 (step 80). An error detector 69 compares the initial output against the expected output (clear-off code) for the corresponding input data in step 82 and an error signal is generated. The magnitude of the error signal is compared to an acceptable error limit. If the error is acceptable, training is complete (step 84). If the error exceeds the acceptable limit, the weights 68 are adjusted (step 88). After the weights 68 are adjusted, a new epoch begins from step 78 for the next set of data in the historical data base.
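The training loop of Figure 8 can be outlined in code. The sketch below follows the flow-chart steps for a single sigmoid unit rather than the full multi-layer network (the toy network, learning rate and error limit are illustrative assumptions, not the patent's configuration):

```python
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train(dataset, eta=0.15, error_limit=0.01, max_epochs=5000):
    # Step 74: weights are randomly initialised in the range -1 to 1.
    random.seed(1)
    w = [random.uniform(-1, 1) for _ in range(len(dataset[0][0]))]
    w0 = random.uniform(-1, 1)
    for _ in range(max_epochs):
        total_error = 0.0
        for x, target in dataset:       # Step 78: load a historical data set
            # Step 80: process the data to produce a tentative output.
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w0)
            # Step 82: compare with the expected output (clear-off code).
            err = target - y
            total_error += err ** 2
            # Step 88: adjust the weights to reduce the error.
            grad = err * y * (1.0 - y)
            w = [wi + eta * grad * xi for wi, xi in zip(w, x)]
            w0 += eta * grad
        # Step 84: training is complete once the error is acceptable.
        if total_error / len(dataset) <= error_limit:
            break
    return w, w0
```

Each pass over the stored historical data corresponds to one epoch; the loop exits either when the error falls below the acceptable limit or when the epoch budget is exhausted.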
To achieve optimal training of the neural network 30, the historical fault data is preferably pre-processed before training begins. Because the fault data is measured data from a real system, the data is subject to noise and other sources of unreliability. Pre-processing the data serves to eliminate these erroneous data sets which would adversely affect network 30 training.
Once trained, the neural network 30 is capable of processing measured data and classifying the data into probabilities of fault locations. This information is available on an output layer of the neural network 30. Preferred fault classifications include customer apparatus (PCA), line (PLN) and exchange (PEX).
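Reading a single fault location off the three output probabilities amounts to selecting the largest one (a trivial sketch; the label strings are our shorthand for the PCA, PLN and PEX classifications):

```python
def classify(probabilities):
    # Map the three output-layer probabilities (PCA, PLN, PEX) to the
    # most likely fault location.
    labels = ("customer_apparatus", "line", "exchange")
    return max(zip(probabilities, labels))[1]

print(classify([0.1, 0.7, 0.2]))  # prints "line"
```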
Returning to Figure 4, the telecommunications fault diagnostic system of the present invention further includes a customer service processing unit (CSP) 32. The CSP 32 receives the fault classification data from the neural network 30 and provides this information on a user interface console, such as a personal computer or data terminal. In addition to facilitating the display of the probable fault location, the CSP 32 preferably includes an input device, such as a keyboard. From the input device, an operator may enter appropriate customer complaint information. This information is then added to the historical database of the data storage device 18.
The telecommunications fault diagnostic system preferably includes a repair dispatch processing unit (RDP) 34. The RDP 34 receives the fault location data from the neural network 30 and provides this information on a user interface console, such as a personal computer or data terminal. From this information, service personnel are dispatched in accordance with the indicated fault location. In addition to displaying fault data, the RDP 34 preferably includes an input device, such as a keyboard. From the input device, the operator may enter appropriate information regarding the resolution of the detected fault (clear-off code). This information is then added to the historical database of the data storage device 18. In many cases, the RDP 34 and CSP 32 may be integrated into a single station.
Neural Network Topology
The present invention addresses the problem of fault location in a telecommunications system as a classification problem. As such, the topology of neural network 30 is chosen to optimize the classification of the various fault parameters to one of a fixed number of possible fault locations. The MLP (Figure
2) and RBF (Figure 3) network topologies are preferred for implementing the present invention.
The MLP network of Figure 2 includes two hidden layers and an output layer of processing neurons. Implementation of this network requires defining system parameters to be input to the network, the fault classes to be output from the network, and the topologies of the hidden layers between the input and output layers. Referring to the generalized neural network model of Figure 6, the input layer 60 preferably receives fourteen measured values from the RTU 2 as inputs. The output layer 64 of the neural network represents probabilities of a fault being located in the customer apparatus (PCA), the telecommunications lines (PLN) and the communications exchange (PEX).
The neural network classifies data as probability functions. Therefore, it is desirable for the input data to be normalized in the range of minus one (-1) to one (1). The measured input data of resistance and capacitance vary over a significant range. Resistance values vary in the telecommunications system from a few ohms (Ω) to several megohms (MΩ). Capacitance values vary from a few nanofarads to 10^4 nanofarads. The service clear-off codes take the form of discrete values in the range of 0-30. Fault reports from customers are input as symbolic data.
Normalization requires coding the symbolic data into a discrete numeric representation and scaling the measured data into the range of -1 to 1. Preferably, the data is normalized with respect to both mean and variance values of individual measurements. However, other normalization methods such as simple normalization with respect to minimum and maximum values, or normalizing the data with respect to both mean and covariance matrices of all measurements may also be used.
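The preferred normalization, scaling each measurement with respect to its own mean and variance, can be sketched as follows. This is illustrative only: the patent does not specify how an unbounded z-score is mapped into the -1 to 1 target range, so the tanh squashing used here is our assumption; the min-max alternative mentioned in the text is also shown.

```python
import math

def zscore_normalize(columns):
    # Normalize each measurement channel with respect to its own mean
    # and standard deviation, then squash into (-1, 1) with tanh so
    # extreme outliers cannot leave the target range (squashing assumed).
    normalized = []
    for col in columns:
        mean = sum(col) / len(col)
        var = sum((v - mean) ** 2 for v in col) / len(col)
        std = math.sqrt(var) or 1.0  # guard against a constant channel
        normalized.append([math.tanh((v - mean) / std) for v in col])
    return normalized

def minmax_normalize(col):
    # Alternative mentioned in the text: simple normalization with
    # respect to minimum and maximum values, mapped onto [-1, 1].
    lo, hi = min(col), max(col)
    span = (hi - lo) or 1.0
    return [2.0 * (v - lo) / span - 1.0 for v in col]
```

Per-channel scaling matters here because resistances span ohms to megohms while capacitances span only a few decades; a single global scale would crush the smaller channels to zero.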
The topology of the hidden layers 62 is selected to strike a balance between the classification performance, the training time and the available processing power of the neural network. As an example, an MLP network may be formed in accordance with the present invention having a first hidden layer with 75 neurons and a second hidden layer with 20 neurons. This topology is denoted as 75:20. In evaluating this network, three parameters are adjusted to optimize the classification performance. These parameters are the learning rate η, the momentum and the decaying factor. For this network, it was determined that a learning rate of about 0.15 and a momentum factor of about 0.1 achieved optimal performance. If η is made large, e.g., 0.3-0.4, the network becomes unstable. On the other hand, small values of η, e.g., 0.05, result in longer learning times.
Applying these parameters to an MLP network results in significant improvements in fault classification over the prior art LTS. The MLP network was modeled using Neural Works Professional II/Plus (TM) and Neural Works Explorer (TM) software (manufactured by Neural Works, Inc. of Pittsburgh, Pennsylvania) running on a personal computer. Over a sampling of 18,962 sets of fault data, the present invention was able to classify 76.4% of the fault cases correctly. This compares to a 68.3% overall correct classification rate by the LTS known in the prior art. This represents a net improvement of 8.1%.
While the MLP network alone achieved an 8.1% improvement over a conventional LTS, it has been found that some of the heuristic decision rules used in prior art LTS systems perform very well. Therefore, by forming a telecommunications diagnostic system which integrates the neural network of Figure 6 with a conventional LTS (Figure 1), even higher performance is achieved. Such integrated systems are illustrated in Figures 9 and 10.
Referring to Figure 9, a telecommunications fault diagnostic system is illustrated with a neural network 30 operating in parallel with conventional LTS diagnostic software 16. Both processing systems simultaneously receive the measured system data from an RTU 2 and generate fault diagnostic output data. The neural network 30 and diagnostic software 16 each generate a fault location output signal. The fault location output signal from each processing system is received by a post processor 100. The post processor 100 selectively routes the fault location signal from either the neural network 30 or LTS diagnostic software
16 to a fault location output. The LTS diagnostic software 16 also calculates a fault type signal. This signal is presented directly on a fault type output. This method of integration improves the diagnostic performance of the MLP neural network by approximately 0.4%.
An alternate embodiment of an integrated telecommunications diagnostic system formed in accordance with the present invention is illustrated in Figure 10. As with Figure 9, this topology features the parallel operation of a neural network 30 and conventional LTS diagnostic software 16. However, rather than employing a post processor 100, the input layer 60 of the neural network 30 is expanded to accept the fault location and fault type output signals from the LTS diagnostic software 16. The fault location output is derived from the neural network output layer 64. The fault type output is received directly from the LTS diagnostic software 16. This configuration requires a more complex neural network 30 but eliminates the post processor 100. This integration topology yielded a 0.5% improvement in classification rate over the MLP neural network 30 standing alone.
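The expanded input arrangement of Figure 10 amounts to concatenating the LTS outputs onto the measured parameter vector before it is presented to the neural network. This is a schematic sketch only: the patent does not specify how the discrete LTS fault-location and fault-type signals are encoded as network inputs, so the one-hot encoding below (three locations, fault types taken as codes 0-30) is our assumption.

```python
def expanded_input(measurements, lts_fault_location, lts_fault_type,
                   num_locations=3, num_types=31):
    # One-hot encode the discrete LTS outputs so they can be mixed
    # with the continuous measured parameters (encoding assumed).
    loc = [0.0] * num_locations
    loc[lts_fault_location] = 1.0
    typ = [0.0] * num_types
    typ[lts_fault_type] = 1.0
    return list(measurements) + loc + typ

x = expanded_input([0.2, -0.7, 0.5], lts_fault_location=1, lts_fault_type=7)
print(len(x))  # 3 measured values + 3 location flags + 31 type flags = 37
```

The neural network then learns when to trust the LTS heuristics and when to override them, which is what removes the need for a separate post processor 100.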
While not illustrated, it should be understood that the embodiments illustrated in Figures 9 and 10 preferably include a data storage device 18, a customer service processor 32 and/or a repair dispatch processor 34 as discussed in connection with Figure 4.
Although illustrative embodiments of the present invention have been described herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention.

CLAIMS:
1. A diagnostic system for locating faults within a telecommunications system, the diagnostic system comprising:
a remote test unit, the remote test unit being operatively coupled to the telecommunications system and obtaining parametric data therefrom; and
a neural network, the neural network being responsive to the parametric data from the remote test unit, classifying the parametric data to at least one of a plurality of fault locations, and generating an output signal indicative of the fault location.
2. A diagnostic system as defined by Claim 1, further comprising: a linear decision diagnostic processor (LDDP), the LDDP being responsive to the parametric data and generating a fault location signal and fault type signal therefrom; a post processor, the post processor receiving the output signal from the neural network and the fault location signal and fault type signal from the LDDP and generating a diagnostic fault location output signal therefrom; and an output device, the output device receiving the diagnostic fault location output signal from the post processor and the fault type signal from the LDDP, the output device indicating the fault location and type, whereby service personnel may be efficiently dispatched to service a detected fault.
3. A diagnostic system as defined by Claim 1, further comprising: a linear decision diagnostic processor (LDDP), the LDDP being responsive to the parametric data and calculating a fault location signal and fault type signal therefrom; the neural network being responsive to the parametric data from the remote test unit and the fault location and fault type signals from the LDDP, the neural network classifying the parametric data and received signals to at least one of a plurality of fault locations and generating an output signal indicative of the at least one fault location; and an output device, the output device receiving the output signal from the neural network and the fault type signal from the LDDP, the output device displaying the fault location and fault type, whereby service personnel may be efficiently dispatched to service a detected fault.
4. A diagnostic system as defined by Claims 1, 2, or 3 wherein the parametric data includes a plurality of resistance, capacitance and voltage measurements and a termination signal.
5. A diagnostic system as defined by Claim 4, wherein the plurality of fault locations indicate a fault in at least one of a telecommunication line, a telecommunication exchange, and a customer apparatus.
6. A diagnostic system as defined by Claim 5, wherein the neural network comprises a multi layer perceptron formed from a plurality of processing neurons.
7. A diagnostic system as defined by Claim 6, wherein the multi layer perceptron includes a first hidden processing layer and a second hidden processing layer.
8. A diagnostic system as defined by Claim 7, wherein the first hidden processing layer comprises a number of neurons substantially equal to 75 and the second hidden processing layer comprises a number of neurons substantially equal to 20.
9. A diagnostic system as defined by Claim 8, further comprising a data storage device, the data storage device containing a data base of historical fault data, the neural network being operatively coupled to the data storage device and receiving the historical fault data during a network training operation.
10. A method of locating faults within a telecommunications system, the method comprising the steps of:
a) measuring a plurality of parameters associated with the telecommunications system; b) normalizing the measured parameters; and c) classifying the normalized parameters as probabilities associated with a plurality of fault locations.
11. A method as defined by Claim 10, wherein the plurality of parameters in step a) include a plurality of resistance values, a plurality of capacitance values, a plurality of voltage values, and a termination signal.
12. The method as defined by Claim 11, wherein the plurality of fault locations include a telecommunication line, exchange and customer apparatus.
EP98911981A 1997-04-01 1998-03-23 System and method for telecommunications system fault diagnostics Ceased EP0972252A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB9706521 1997-04-01
GB9706521A GB2327553B (en) 1997-04-01 1997-04-01 System and method for telecommunications system fault diagnostics
PCT/US1998/005736 WO1998044428A1 (en) 1997-04-01 1998-03-23 System and method for telecommunications system fault diagnostics

Publications (2)

Publication Number Publication Date
EP0972252A1 EP0972252A1 (en) 2000-01-19
EP0972252A4 true EP0972252A4 (en) 2001-02-21

Family

ID=10810061

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98911981A Ceased EP0972252A4 (en) 1997-04-01 1998-03-23 System and method for telecommunications system fault diagnostics

Country Status (5)

Country Link
EP (1) EP0972252A4 (en)
AU (1) AU6580798A (en)
CA (1) CA2285239A1 (en)
GB (1) GB2327553B (en)
WO (1) WO1998044428A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222830A (en) * 2019-06-13 2019-09-10 中国人民解放军空军工程大学 A kind of depth feedforward network method for diagnosing faults based on self-adapted genetic algorithm optimization

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6895081B1 (en) 1999-04-20 2005-05-17 Teradyne, Inc. Predicting performance of telephone lines for data services
US7127506B1 (en) 1999-05-28 2006-10-24 Teradyne, Inc. PC configuration fault analysis
US6654914B1 (en) 1999-05-28 2003-11-25 Teradyne, Inc. Network fault isolation
GB2355361B (en) 1999-06-23 2004-04-14 Teradyne Inc Qualifying telephone lines for data transmission
US6687336B1 (en) * 1999-09-30 2004-02-03 Teradyne, Inc. Line qualification with neural networks
GB0005227D0 (en) 2000-03-03 2000-04-26 Teradyne Inc Technique for estimatio of insertion loss
GB0007836D0 (en) 2000-03-31 2000-05-17 British Telecomm Telecommunications line parameter estimation
GB2365253B (en) 2000-07-19 2004-06-16 Teradyne Inc Method of performing insertion loss estimation
EP1191803A1 (en) * 2000-09-20 2002-03-27 Lucent Technologies Inc. Method and system for detecting network states of a hierarchically structured network comprising network elements on at least two layers
WO2002058369A2 (en) 2000-10-19 2002-07-25 Teradyne, Inc. Method and apparatus for bridged tap impact analysis
US6914961B2 (en) 2002-09-30 2005-07-05 Teradyne, Inc. Speed binning by neural network
GB0307115D0 (en) * 2003-03-27 2003-04-30 British Telecomm Line testing apparatus and method
US7386039B2 (en) 2003-09-26 2008-06-10 Tollgrade Communications, Inc. Method and apparatus for identifying faults in a broadband network
CN104469832B (en) * 2014-12-19 2018-03-02 武汉虹信通信技术有限责任公司 Mobile communications network accident analysis locating assist system
CN109284830A (en) * 2018-11-27 2019-01-29 广东电网有限责任公司 A combined single-ended traveling-wave fault location algorithm
CN109510738B (en) * 2018-12-14 2022-02-22 平安壹钱包电子商务有限公司 Communication link test method and device
CN111539516A (en) * 2020-04-22 2020-08-14 谭雄向 Power grid fault diagnosis system and method based on big data processing
CN111505424A (en) * 2020-05-06 2020-08-07 哈尔滨工业大学 Large experimental device power equipment fault diagnosis method based on deep convolutional neural network
CN112766396A (en) * 2021-01-27 2021-05-07 昆仑数智科技有限责任公司 System, method, computer device and medium for detecting device abnormality
CN112820321A (en) * 2021-03-05 2021-05-18 河北雄安友平科技有限公司 Remote intelligent audio diagnosis system, method, equipment and medium for oil pumping unit
CN113238144A (en) * 2021-06-17 2021-08-10 哈尔滨理工大学 Fault diagnosis system of nonlinear analog circuit based on multi-tone signal
CN113740667B (en) * 2021-08-30 2022-06-14 华北电力大学 Power grid fault diagnosis method integrating self-encoder and convolutional neural network
CN116070151B (en) * 2023-03-17 2023-06-20 国网安徽省电力有限公司超高压分公司 Ultra-high voltage direct current transmission line fault detection method based on generalized regression neural network


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1335836C (en) * 1988-07-07 1995-06-06 Ichiro Iida Adaptive routing system
US5440566A (en) 1991-09-23 1995-08-08 Southwest Research Institute Fault detection and diagnosis for printed circuit boards
US5293323A (en) 1991-10-24 1994-03-08 General Electric Company Method for fault diagnosis by assessment of confidence measure
US5442555A (en) 1992-05-18 1995-08-15 Argonne National Laboratory Combined expert system/neural networks method for process fault diagnosis
US5465321A (en) 1993-04-07 1995-11-07 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Hidden markov models for fault detection in dynamic systems
US5537327A (en) * 1993-10-22 1996-07-16 New York State Electric & Gas Corporation Method and apparatus for detecting high-impedance faults in electrical power systems
US5544308A (en) * 1994-08-02 1996-08-06 Giordano Automation Corp. Method for automating the development and execution of diagnostic reasoning software in products and processes
US5778184A (en) 1996-06-28 1998-07-07 Mci Communications Corporation System method and computer program product for processing faults in a hierarchial network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994019888A1 (en) * 1993-02-26 1994-09-01 Cabletron Systems, Inc. Method and apparatus for resolving faults in communications networks
WO1995009463A1 (en) * 1993-09-27 1995-04-06 Siemens Aktiengesellschaft Method of generating a fault-indication signal
WO1996010890A2 (en) * 1994-09-26 1996-04-11 Teradyne, Inc. Method and apparatus for fault segmentation in a telephone network
EP0712227A2 (en) * 1994-11-14 1996-05-15 Harris Corporation Trouble-shooting system for telephone system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of WO9844428A1 *
TOTTON K A E: "EXPERIENCE IN USING NEURAL NETWORKS FOR ELECTRONIC DIAGNOSIS", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS, 18 November 1991 (1991-11-18), XP002910910 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222830A (en) * 2019-06-13 2019-09-10 中国人民解放军空军工程大学 Deep feed-forward network fault diagnosis method based on adaptive genetic algorithm optimization
CN110222830B (en) * 2019-06-13 2023-10-31 中国人民解放军空军工程大学 Deep feed-forward network fault diagnosis method based on adaptive genetic algorithm optimization

Also Published As

Publication number Publication date
AU6580798A (en) 1998-10-22
EP0972252A1 (en) 2000-01-19
GB9706521D0 (en) 1997-05-21
GB2327553A (en) 1999-01-27
WO1998044428A1 (en) 1998-10-08
GB2327553B (en) 2002-08-21
CA2285239A1 (en) 1998-10-08

Similar Documents

Publication Publication Date Title
EP0972252A1 (en) System and method for telecommunications system fault diagnostics
US6636841B1 (en) System and method for telecommunications system fault diagnostics
US10955456B2 (en) Method and apparatus for automatic localization of a fault
CN111833583B (en) Training method, device, equipment and medium for power data anomaly detection model
EP1416438A2 (en) A method for performing an empirical test for the presence of bi-modal data
Castro et al. An interpretation of neural networks as inference engines with application to transformer failure diagnosis
CN110702966B (en) Fault arc detection method, device and system based on probabilistic neural network
US20030204368A1 (en) Adaptive sequential detection network
CN117061322A (en) Internet of things flow pool management method and system
CN109389313B (en) Fault classification diagnosis method based on weighted neighbor decision
Abbasy et al. Power system state estimation: ANN application to bad data detection and identification
JPH10143343A (en) Association type plant abnormality diagnosis device
CN111079348B (en) Method and device for detecting slowly-varying signal
EP4120575A1 (en) A method and apparatus for determining the location of impairments on a line of a wired network
CN113807462A (en) AI-based network equipment fault reason positioning method and system
CN111833173A (en) LSTM-based third-party platform payment fraud online detection method
EP3863269B1 (en) Method and apparatus for monitoring a communication line
Gelenbe et al. Learning neural networks for detection and classification of synchronous recurrent transient signals
CN111123884B (en) Testability evaluation method and system based on fuzzy neural network
Ghazali et al. Cable fault classification in ADSL copper access network using machine learning
CN117354053B (en) Network security protection method based on big data
CN111160454B (en) Quick change signal detection method and device
CN116702010A (en) Power distribution network abnormal event identification method, device, equipment and medium
Patel et al. Interactive voice response field classifiers
CN117370742A (en) Bearing residual life prediction method under data loss

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19991028

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

A4 Supplementary search report drawn up and despatched

Effective date: 20010105

AK Designated contracting states

Kind code of ref document: A4

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

17Q First examination report despatched

Effective date: 20030612

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: CYBULA LIMITED

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20091203