WO2022056594A1 - Method of managing a system

Method of managing a system

Info

Publication number
WO2022056594A1
Authority
WO
WIPO (PCT)
Prior art keywords
issue
determined
model
series data
components
Prior art date
Application number
PCT/AU2021/051075
Other languages
English (en)
Inventor
Alastair Lockey
Kumarini Nadisha Seneviratne
Abhyuday Bhartia
Original Assignee
Waterwerx Technology Pty Ltd
Priority claimed from AU2020903357A0
Application filed by Waterwerx Technology Pty Ltd
Publication of WO2022056594A1


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B21/00 Systems involving sampling of the variable controlled
    • G05B21/02 Systems involving sampling of the variable controlled electric
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0221 Preprocessing measurements, e.g. data collection rate adjustment; Standardization of measurements; Time series or signal analysis, e.g. frequency analysis or wavelets; Trustworthiness of measurements; Indexes therefor; Measurements using easily measured parameters to estimate parameters difficult to measure; Virtual sensor creation; De-noising; Sensor fusion; Unconventional preprocessing inherently present in specific fault detection methods like PCA-based methods
    • G05B23/0224 Process history based detection method, e.g. whereby history implies the availability of large amounts of data
    • G05B23/024 Quantitative history assessment, e.g. mathematical relationships between available data; Functions therefor; Principal component analysis [PCA]; Partial least square [PLS]; Statistical classifiers, e.g. Bayesian networks, linear regression or correlation analysis; Neural networks
    • G05B23/0243 Model based detection method, e.g. first-principles knowledge model
    • G05B23/0259 Electric testing or monitoring characterized by the response to fault detection
    • G05B23/0283 Predictive maintenance, e.g. involving the monitoring of a system and, based on the monitoring results, taking decisions on the maintenance schedule of the monitored system; Estimating remaining useful life [RUL]
    • G05B23/0286 Modifications to the monitored process, e.g. stopping operation or adapting control
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/24 Pc safety
    • G05B2219/24001 Maintenance, repair
    • G05B2219/24019 Computer assisted maintenance
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N20/20 Ensemble learning
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/30 Administration of product recycling or disposal
    • C CHEMISTRY; METALLURGY
    • C02 TREATMENT OF WATER, WASTE WATER, SEWAGE, OR SLUDGE
    • C02F TREATMENT OF WATER, WASTE WATER, SEWAGE, OR SLUDGE
    • C02F1/00 Treatment of water, waste water, or sewage
    • C02F1/008 Control or steering systems not provided for elsewhere in subclass C02F
    • C02F1/24 Treatment of water, waste water, or sewage by flotation
    • C02F1/52 Treatment of water, waste water, or sewage by flocculation or precipitation of suspended impurities
    • C02F1/5209 Regulation methods for flocculation or precipitation
    • C02F1/66 Treatment of water, waste water, or sewage by neutralisation; pH adjustment
    • C02F9/00 Multistage treatment of water, waste water or sewage
    • C02F2209/00 Controlling or monitoring parameters in water treatment
    • C02F2209/02 Temperature
    • C02F2209/06 pH
    • C02F2209/10 Solids, e.g. total solids [TS], total suspended solids [TSS] or volatile solids [VS]
    • C02F2209/40 Liquid flow rate

Definitions

  • The present invention relates generally to managing a system and, in particular, to detecting issues/potential future issues of the system and recommending solutions to resolve the detected issues/potential future issues.
  • A wastewater treatment system removes contaminants from wastewater. The removed contaminants are then broken down into harmless materials. Further, any unremoved contaminants should be minimal and comply with government regulations.
  • An example of a wastewater treatment system is shown in Fig. 1.
  • A wastewater treatment system is complicated, where a less than optimal operation of a component of the system may affect other components of the system and ultimately the overall efficiency of the system.
  • An issue occurring at a component is detected only when the system loses efficiency and, by that time, multiple components would already have issues. It is therefore difficult to identify the real issue affecting such a complex system.
  • Trained technical personnel may adjust certain parameters of certain components to resolve an issue or increase the efficiency of the system. Although the adjusted parameters may increase the system efficiency, such an adjustment may not resolve the original issue. This action may therefore create other issues.
  • a method of monitoring a system having components comprising: generating time-series data of two or more of the components of the system; and determining, by an issue identification model, a current issue or a potential issue relating to one or more of the components of the system based on the generated time-series data, wherein the issue identification model comprises a first machine learning model or a deterministic tracing algorithm.
  • a method of monitoring a system having components comprising: generating time-series data of two or more of the components of the system; receiving second parameters detected by second sensors, the second sensors detecting events relating to the system; determining, by a correlation model, a correlation between the generated time-series data and the second parameters to generate predicted time-series data, wherein the correlation model comprises a third machine learning model; and determining, using an issue identification model, a potential issue relating to one or more of the components of the system based on the predicted time-series data, wherein the issue identification model comprises a first machine learning model or a deterministic tracing algorithm.
  • a computer program product including a computer readable medium having recorded thereon a computer program for implementing any one of the methods described above.
  • Fig. 1 shows an example wastewater treatment system;
  • Fig. 2 is a digital representation of the system of Fig. 1;
  • Fig. 3 is a method of setting up the models for identifying issues/potential future issues of a system to be monitored and for resolving the identified issues/potential future issues;
  • Fig. 4 is a method of identifying issues/potential future issues and recommending solutions to resolve the identified issues/potential future issues of the monitored system;
  • Fig. 5 is a method of correlating events with time-series data of the monitored system;
  • Figs. 6A and 6B form a schematic block diagram of a general purpose computer system upon which arrangements described can be practiced;
  • Fig. 7 shows an example neural network model to perform any one of the methods shown in Figs. 3 to 5;
  • Fig. 8 shows a deterministic tracing algorithm method of identifying a current issue or a potential future issue; and
  • Fig. 9 shows an example networked graph of nodes on which the method of Fig. 8 can be performed.
  • FIG. 1 shows an example wastewater treatment system 100A that can be monitored by the present invention.
  • Although a wastewater treatment system is used as an example, the present invention can be used to monitor other systems.
  • the wastewater treatment system 100A includes an acid tank 102A, a coagulant tank 104A, a caustic tank 106A, a neat flocculant tank 108A, a flocculant make down 111A, a dilute flocculant tank 112A, a balance tank 110A, a coagulation tank 114A, a first dissolved air flotation (DAF) 116A, a second DAF 118A, a sludge tank 120A, a decanter 122A, a sludge skip 124A, and a discharge point 126A.
  • A problem associated with the components 102A to 126A of the system 100A is resolved manually.
  • The primary objective of the DAF 116A is to lower the level of TSS, so a common problem is that the TSS of the water leaving the DAF 116A is above an acceptable level, which is usually defined by the governing water authority.
  • To determine the root cause of the DAF outlet TSS being higher than acceptable, it must be determined whether there is a malfunction in the wastewater treatment process, or whether there is a change in the system's production process. Assuming that there is a problem with the wastewater treatment process, the technical personnel managing the system 100A would then progressively look into the subcomponents of the system 100A.
  • the main subcomponents of a DAF are the aeration system, the scrapers, and the dosing systems for acid, base, coagulant, and flocculant.
  • Each of these sub-components has associated data that would help determine if any of the sub-components are faulty.
  • Each chemical dosing sub-component has a set of chemical flow rates, pump speeds, and tank levels, which affect the data collected by the relevant sensors. The technical personnel may identify that the pump speed controlling the water flow into the DAF is too fast and consequently lower the pump speed to rectify the problem of the TSS of the DAF being too high. However, the root cause of this problem may be that the components preceding the DAF are malfunctioning, resulting in excess water flowing through the pump.
  • the disclosure of the present specification aims to identify the root cause of a problem and to propose actions to rectify the root cause of the problem.
  • FIG. 2 shows a digital representation 100B of the system 100A.
  • Each digital component representing the physical component is identified by the letter B following the corresponding number.
  • a first DAF 116A is identified as a first DAF 116B in the digital representation 100B.
  • Each of the components 102A to 126A includes sub-components (not shown in Figs. 1 and 2).
  • the sub-components can also be represented in the digital representation 100B.
  • Each of the digital components 102B to 126B and sub-components has a model to simulate the operation of the component or sub-component.
  • the first DAF 116B has a model to simulate the liquid flowing through the inlet of the first DAF 116A, the liquid being processed in the DAF 116A, and the processed liquid flowing out through the outlet of the first DAF 116A.
  • the digital representation 100B is updated in real-time as data is received from each of the components 102A to 126A. Therefore, the digital representation 100B enables a user to monitor the operation of the system 100A in real-time.
  • the digital representation 100B is used as a simulation model of the system 100A.
  • the digital representation 100B can then be used to simulate the operation of the system 100A when certain settings of components are modified.
  • Each of the components 102B to 126B of the system 100B is implemented as one or more computer application programs 1333 (see Fig. 6A and associated description below) that are executable within the computer system 1300 (see Fig. 6A and associated description below).
  • Fig. 3 shows a flowchart of a method 300 of setting up the present invention on a system to be monitored.
  • the system 100A is used as an example.
  • the method 300 commences at step 310 by disposing a first sensor on a component of the system 100A to be monitored.
  • the first sensor includes a liquid level sensor, a pH sensor, a flow measurement sensor, a total suspended solids (TSS) sensor and the like.
  • the first sensor monitors parameters of each component 102A to 126A or subcomponent, where the parameters vary with time.
  • An example of such a parameter is the level of TSS in the first DAF 116A.
  • the first sensor monitoring TSS is a TSS sensor for this example.
  • the method 300 then proceeds from step 310 to step 320.
  • a second sensor is disposed on the system 100A to be monitored.
  • the second sensor includes a camera, a pressure sensor, and the like.
  • the second sensor monitors parameters (e.g., water level, manual dose setting, manually changed settings, etc.) of each component 102A to 126A and sub-components that are adjustable by technical personnel.
  • Second sensors are disposed on or near the components 102A to 126A and sub-components that require monitoring.
  • An example of a parameter that is monitored by a second sensor is the changing of concentrations of chemicals in the caustic tank 106A.
  • the second sensor in this example is a camera disposed nearby the caustic tank 106A to capture technical personnel changing the chemical concentrations and to automatically log the change with a time stamp.
  • pressure sensors are located on a touchpad of the caustic tank 106A to capture the change of chemical concentrations to the caustic tank 106A.
  • the second sensor is a user interface at which a user can log changes to the components or subcomponents of the system.
  • a camera disposed nearby the first DAF 116A is able to capture the fouling of a TSS sensor using a machine learning model.
  • a level sensor disposed at the caustic tank 106A is able to capture that the tank has been emptied.
  • a vibration sensor is disposed at a pump preceding coagulation tank 114A and is able to capture the deteriorating state of its mechanical elements.
  • the second sensor therefore enables changes to the components 102A to 126A and sub-components of the system 100A to be captured.
  • the method 300 then proceeds from step 320 to step 330.
  • In step 330, an issue identification model and an issue resolution model, which are suitable for the system to be monitored, are selected.
  • an issue identification model for a DAF is selected for the digital representation of the DAF 116B.
  • Certain settings of the DAF are dependent on the system on which the DAF is installed.
  • the DAF has properties such as retention volume, recirculation rate, incoming pH, outgoing pH, TSS, temperature, and flow rates of added chemicals.
  • the flocculant flow rate of the DAF may be a better predictor of the outgoing TSS than the coagulant flow rate.
  • the opposite may be true.
  • a user may select the prioritization of certain components based on the system on which the components are installed.
  • Therefore, the issue identification model and the issue resolution model are selected based on the system to be monitored. Optionally, a correlation model to correlate events with the monitored parameters of the components 102A to 126A is also selected.
  • the issue identification model and the issue resolution model are used in method 400 (see Fig. 4).
  • the correlation model is used in method 500 (see Fig. 5).
  • the method 300 concludes after selecting the models. Further sensors and models may be established after the completion of method 300.
  • method 400 is performed for monitoring the operation of a system.
  • Method 400 also identifies issues/potential future issues relating to one or more components or sub-components of the system. Once the issues/potential future issues are identified, method 400 also provides candidate solutions for resolving the issues/potential future issues.
  • method 500 can be performed to correlate certain events (e.g., maintenance events, calibration, etc.) to a potential future issue of one or more components or subcomponents.
  • System 100A is used to illustrate the operation of methods 400 and 500.
  • Method 400 is implemented as one or more computer application programs 1333 (see Fig. 6A and associated description below) that are executable within the computer system 1300 (see Fig. 6A and associated description below).
  • Method 400 commences with step 410 by receiving first parameters detected by the first sensors during ongoing operation of the system 100A to be monitored.
  • the first sensors are disposed on the components or sub-components of the system 100A, as discussed above in relation to method 300.
  • the first sensors are disposed on the first DAF 116A of the system 100A to monitor TSS.
  • the frequency at which detected first parameters are sent by the first sensors is dependent on the components being monitored. For a component or subcomponent that changes rapidly, the frequency may be set at 0.1 second, 0.5 second, and the like. On the other hand, for a component that changes slowly, the frequency may be set at 1 second, 5 seconds, 1 minute, and the like.
  • Method 400 proceeds from step 410 to step 420.
  • the received first parameters are processed to generate time-series data for each monitored component or sub-component.
  • the time-series data is the data of one of the first sensors over a period of time.
  • time-series data of the TSS level of DAF 116A is generated for a period of 1 hour with discrete data at intervals set by the frequency at which the first parameters are received.
  • the length of the time-series data in this example is 1 hour with discrete data at intervals determined by the frequency (e.g., 5 seconds).
  • the frequency at which the time-series data is generated depends on whether the component or sub-component being monitored rapidly changes.
  • For a component or sub-component that changes rapidly, the frequency may be set at 1 minute, 5 minutes, and the like. On the other hand, for a component that changes slowly, the frequency may be set at 10 minutes, 20 minutes, and the like. Accordingly, time-series data of different components and sub-components are generated at the completion of step 420. Method 400 proceeds from step 420 to step 430.
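  • A minimal sketch of how the received first parameters (step 410) might be turned into fixed-frequency time-series windows (step 420); the pandas-based resampling, sensor identifiers, 5-second interval, and 1-hour window below are illustrative assumptions rather than details from the patent.

```python
import pandas as pd

def build_time_series(readings, interval="5s", window="1h"):
    """Convert raw sensor readings (timestamp, sensor_id, value) into
    fixed-frequency time-series windows, one column per monitored sensor."""
    df = pd.DataFrame(readings, columns=["timestamp", "sensor_id", "value"])
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    # Pivot so each component/sub-component sensor becomes a column, then
    # resample to the configured frequency (e.g. 5 s for fast-changing components).
    wide = df.pivot_table(index="timestamp", columns="sensor_id", values="value")
    resampled = wide.resample(interval).mean().interpolate()
    # Keep only the most recent window (e.g. the last hour of data).
    return resampled.last(window)

# Hypothetical TSS readings from the outlet of the DAF 116A.
readings = [
    ("2021-09-16 10:00:00", "daf116_outlet_tss", 48.0),
    ("2021-09-16 10:00:05", "daf116_outlet_tss", 49.5),
    ("2021-09-16 10:00:12", "daf116_outlet_tss", 51.2),
]
print(build_time_series(readings))
```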
  • In step 430, method 400 determines, using the issue identification model, whether the time-series data of the components and sub-components are associated with a current issue or a potential future issue.
  • the issue identification model is a machine learning model that has been trained to identify such an issue or potential future issue.
  • the training is performed by adjusting the parameters of the machine learning model using a training dataset.
  • the training dataset is a dataset that contains similar time-series data and the associated issue/potential future issue that has been verified. Examples of the machine learning model are discussed below in relation to Fig. 7.
  • the time-series data of the components and sub-components are provided to the issue identification model.
  • the issue identification model then processes the received time-series data through the layers of the machine learning model and produces candidate issues (i.e., either current issues or potential future issues).
  • Each candidate issue is provided with a confidence measure value, indicating the likelihood that the candidate issue is the actual (existing/potential) issue (i.e., an indication whether the determined candidate issue is accurate).
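  • A minimal sketch of how candidate issues and their confidence measure values might be produced by a trained classifier in the machine-learning arrangement of step 430, with the threshold check of step 435 shown alongside; the scikit-learn-style predict_proba interface, the issue labels, and the threshold value are assumptions made for illustration.

```python
import numpy as np

THRESHOLD = 0.6  # predetermined threshold used in step 435 (illustrative value)

def candidate_issues(model, window_features):
    """Step 430 (machine-learning arrangement): rank candidate issues by confidence."""
    probs = model.predict_proba(np.asarray(window_features).reshape(1, -1))[0]
    return sorted(zip(model.classes_, probs), key=lambda pair: pair[1], reverse=True)

def issues_above_threshold(ranked):
    """Step 435: keep only candidates whose confidence exceeds the threshold."""
    return [(issue, conf) for issue, conf in ranked if conf > THRESHOLD]
```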
  • the issue identification model is a deterministic tracing algorithm.
  • the deterministic tracing algorithm will be discussed hereinafter in relation to Figs. 8 and 9.
  • the deterministic tracing algorithm is capable of identifying a current issue or potential future issue.
  • the time-series data of the components and sub-components are provided to the issue identification model.
  • the issue identification model then processes the received time-series data through the deterministic tracing algorithm and produces candidate issues (i.e., either current issues or potential future issues).
  • Each candidate issue is provided with a confidence measure value, indicating the likelihood that the candidate issue is the actual (existing/potential) issue (i.e., an indication whether the determined candidate issue is accurate).
  • The issue identification model performs the determination of step 430 each time time-series data is generated. The time-series data is generated at a particular frequency and, accordingly, issues/potential issues are detected at that particular frequency.
  • the issue identification model processes the time-series data to obtain scattering transforms.
  • the scattering transforms simplify the time-series data into patterns that can be used to identify candidate issues.
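  • The patent does not detail the scattering transform it uses; the sketch below is a minimal first-order, scattering-style feature extractor (band-pass filtering, modulus, local averaging) intended only to illustrate how time-series data can be simplified into shift-stable patterns. The filter shape, frequencies, and pooling size are assumptions.

```python
import numpy as np

def morlet(length, freq):
    """Complex Morlet-like band-pass filter at a normalised frequency."""
    t = np.arange(length) - length // 2
    return np.exp(2j * np.pi * freq * t) * np.exp(-0.5 * (t / (length / 6)) ** 2)

def first_order_scattering(x, freqs=(0.05, 0.1, 0.2), filt_len=64, pool=32):
    """Band-pass filter -> modulus -> local averaging, per frequency band.
    The local averaging makes the features stable to small time shifts."""
    feats = []
    for f in freqs:
        band = np.abs(np.convolve(x, morlet(filt_len, f), mode="same"))
        trimmed = band[: len(band) // pool * pool]
        feats.append(trimmed.reshape(-1, pool).mean(axis=1))
    return np.concatenate(feats)

# Hypothetical one-hour TSS window sampled every 5 seconds (720 points).
window = np.sin(np.linspace(0, 30, 720)) + np.random.normal(0, 0.1, 720)
print(first_order_scattering(window).shape)  # compact pattern fed to the issue identification model
```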
  • Method 400 then proceeds from step 430 to step 435.
  • In step 435, method 400 determines whether the confidence measure value of any of the identified issues exceeds a predetermined threshold value. If NO, then method 400 concludes. If YES, method 400 proceeds from step 435 to step 440.
  • In step 440, method 400 determines, using the issue resolution model, candidate actions for resolving the candidate issues identified in step 430.
  • the issue resolution model is a machine learning model that has been trained to identify candidate actions to resolve candidate issues. One or more candidate actions are presented for each candidate issue.
  • the training of the issue resolution model is performed by adjusting the parameters of the machine learning model using a training dataset.
  • the training dataset is a dataset that contains similar candidate issues and associated candidate actions that have been verified. Examples of the machine learning model are discussed below in relation to Fig. 7.
  • the candidate issues are provided to the issue resolution model.
  • the issue resolution model then processes the received candidate issues through the layers of the machine learning model and produces candidate actions.
  • Each candidate action is provided with a confidence measure value, indicating the likelihood of resolving the candidate issue using the candidate action.
  • Method 400 then proceeds from step 440 to step 450. Steps 450 and 460 are optional. In one arrangement, method 400 concludes at the conclusion of step 440 and presents the candidate actions to the user.
  • In step 450, the candidate actions produced by the issue resolution model are provided to the digital representation 100B.
  • the digital representation 100B then implements each candidate action and simulates the effects of implementing the candidate action on the components, sub-components, and the overall system.
  • the effect on the performance of each component, sub-component, and the overall system of each candidate action is output at the conclusion of step 450.
  • Method 400 proceeds from step 450 to step 460.
  • In step 460, the effect on the performance of each component and sub-component and the overall system is used to update (i.e., modify) the confidence measure value of each candidate action.
  • Method 400 then concludes.
  • the computer system 1300 executing the computer application program 1333 to run method 400 is capable of receiving input from a user to indicate the actual issue and the candidate action used to resolve the issue/potential issue.
  • the verification from the user is then used to update the training dataset used to train the issue identification model and the issue resolution model. Therefore, when the issue identification model and the issue resolution model are re-trained, the re-training includes the user verification of an action resolving an issue for the system 100A being monitored. As more and more issues and actions are verified, the issue identification model and issue resolution model for the system 100A improve after each retraining as the training dataset includes the verified issue and action of this particular system 100A.
  • the verification also includes false positives, where an identified issue or candidate action is confirmed to be incorrect. As the training dataset includes false positives, the models can be re-trained to exclude these false positives.
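  • A minimal sketch of how user verification, including false positives, might be appended to the training dataset before re-training; the feature dimensionality, label names, and the scikit-learn SVC classifier are assumptions rather than details from the patent.

```python
import numpy as np
from sklearn.svm import SVC

# Existing training data: feature vectors derived from time-series windows and
# verified issue labels ("no_issue" is used for confirmed false positives).
X_train = np.random.rand(200, 16)
y_train = np.random.choice(["coagulant_dosing", "sensor_fouling", "no_issue"], size=200)

def retrain_with_feedback(X_train, y_train, verified_examples):
    """Append user-verified (features, label) pairs, including false positives
    relabelled as 'no_issue', and re-fit the issue identification model."""
    new_X = np.array([features for features, _ in verified_examples])
    new_y = np.array([label for _, label in verified_examples])
    X = np.vstack([X_train, new_X])
    y = np.concatenate([y_train, new_y])
    model = SVC(probability=True).fit(X, y)
    return model, X, y

# A technician confirms one detection and flags another as a false positive.
feedback = [
    (np.random.rand(16), "coagulant_dosing"),  # confirmed root cause
    (np.random.rand(16), "no_issue"),          # false positive
]
model, X_train, y_train = retrain_with_feedback(X_train, y_train, feedback)
```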
  • Method 500 correlates certain events (e.g., maintenance events, calibration, etc.) to a potential future issue of one or more components or sub-components.
  • Method 500 is implemented as one or more computer application programs 1333 (see Fig. 6A and associated description below) that are executable within the computer system 1300 (see Fig. 6A and associated description below).
  • Method 500 commences with step 510 by receiving first parameters detected by the first sensors during ongoing operation of the system to be monitored.
  • the first sensors are disposed on the components or sub-components of the system, as discussed above in relation to method 300.
  • the first sensors are disposed on the first DAF 116A of the system 100A to monitor TSS.
  • Step 510 is similar to step 410 and, in one arrangement, the first parameters received can be used simultaneously by method 400 and method 500.
  • the frequency at which detected first parameters are sent by the first sensors is dependent on the components being monitored. For a component or sub-component that changes rapidly, the frequency may be set at 0.1 second, 0.5 second, and the like. On the other hand, for a component that changes slowly, the frequency may be set at 1 second, 5 seconds, 1 minute, and the like.
  • Method 500 proceeds from step 510 to step 520.
  • the received first parameters are processed to generate time-series data for each monitored component (or sub-component).
  • the time-series data is the data of one of the first sensors over a period of time. For example, time-series data of the TSS level of DAF 116A is generated for a period of 1 hour.
  • the frequency at which the time-series data is generated depends on whether the component or sub-component being monitored rapidly changes. For a component or sub-component that changes rapidly, the frequency may be set at 1 minute, 5 minutes, and the like. On the other hand, for a component that changes slowly, the frequency may be set at 10 minutes, 20 minutes, and the like.
  • Step 520 is similar to step 420 and, in one arrangement, generating the time-series data can be performed simultaneously for method 400 and method 500.
  • Method 500 proceeds from step 520 to step 530.
  • In step 530, method 500 receives second parameters detected by the second sensors (see step 320 discussed above).
  • the second parameters include manual inputs on settings, calibration, maintenance events, control signals between components, and the like.
  • the second parameters are manual adjustments, which are manually entered by a user, on changes to settings on any one of the components and sub-components.
  • the second parameters are derived by the second sensors (e.g., a camera) to determine changes to settings of a component or sub-component. In other words, the second sensors detect events relating to the system 100A.
  • Method 500 proceeds from step 530 to step 540.
  • In step 540, method 500 determines a correlation between the time-series data (generated at step 520) and the second parameters (received at step 530).
  • the determination of such a correlation is performed by a correlation model.
  • the correlation model is a machine learning model that has been trained to identify such correlations between the second parameter and time-series data.
  • the training is performed by adjusting the parameters of the machine learning model using a training dataset.
  • the training dataset is a dataset that contains similar events and time-series data, the correlation of which has been verified. Examples of the machine learning model are discussed below in relation to Fig. 7.
  • the second parameters and time-series data of the components and sub-components are provided to the correlation model.
  • the correlation model then processes the received time-series data and second parameters through the layers of the machine learning model and produces respective predicted time-series data for the components and sub-components.
  • Each predicted time-series data is provided with a confidence measure value, indicating the likelihood that the predicted time-series data would be accurate.
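  • One possible shape for the correlation model of step 540 is a regressor that maps a window of recent time-series values plus event flags (the second parameters) onto a future value of the series; the lag length, prediction horizon, toy data, and the scikit-learn GradientBoostingRegressor are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_supervised(series, events, lags=12, horizon=6):
    """Build (X, y) pairs: `lags` past samples plus event flags -> value `horizon` steps ahead."""
    X, y = [], []
    for t in range(lags, len(series) - horizon):
        X.append(np.concatenate([series[t - lags:t], events[t]]))
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Toy data: a TSS series and one-hot event flags (e.g. calibration, chemical top-up).
series = np.sin(np.linspace(0, 20, 300)) + np.random.normal(0, 0.05, 300)
events = np.random.randint(0, 2, size=(300, 2))

X, y = make_supervised(series, events)
correlation_model = GradientBoostingRegressor().fit(X, y)

# Predicted time-series value 6 steps ahead, given the latest window and event flags.
latest = np.concatenate([series[-12:], events[-1]])
print(correlation_model.predict(latest.reshape(1, -1)))
```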
  • Method 500 then proceeds from step 540 to step 550.
  • In step 550, method 500 determines, using the issue identification model, whether the predicted time-series data of the components and sub-components are associated with a potential future issue.
  • the issue identification model is the same machine learning model or deterministic tracing algorithm discussed in relation to step 430.
  • the predicted time-series data of the components and sub-components are provided to the issue identification model.
  • the issue identification model then processes the predicted time-series data through the layers and produces candidate issues.
  • Each candidate issue is provided with a confidence measure value, indicating the likelihood that the candidate issue is a future potential issue.
  • Method 500 then proceeds from step 550 to step 560.
  • In step 560, method 500 determines whether the confidence measure value of any of the identified potential future issues exceeds a predetermined threshold value. If NO, then method 500 concludes. If YES, method 500 proceeds from step 560 to step 570.
  • In step 570, method 500 determines, using the issue resolution model, candidate actions for resolving the candidate issues identified in step 550.
  • the issue resolution model is the same machine learning model discussed in relation to step 440 above.
  • the candidate issues are provided to the issue resolution model.
  • the issue resolution model then processes the received candidate issues through the layers and produces candidate actions.
  • Each candidate action is provided with a confidence measure value, indicating the likelihood of resolving the candidate issue using the candidate action.
  • Method 500 then concludes at the conclusion of step 570.
  • the candidate actions produced at step 570 can be simulated in the digital representation 100B to modify the confidence measure value of the candidate actions, per steps 450 and 460 of method 400.
  • method 500 is shown separately to method 400, method 500 can be incorporated into method 400 and can be performed in parallel to method 400.
  • Method 500 enables a maintenance schedule to be generated or modified based on the predicted future potential issue.
  • a predetermined maintenance schedule is amended by adding a maintenance event if method 500 determines that there is a predicted potential issue and there is no upcoming maintenance event.
  • a predetermined maintenance schedule is amended by removing a maintenance event if a maintenance event exists but method 500 determines that there is no predicted potential issue, and by adding a maintenance event at a later time in the maintenance schedule. Accordingly, a maintenance schedule can be modified to reduce potential costs resulting from having to rectify the system, reduce costs from performing unnecessary maintenance on the system, and the like.
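  • A minimal sketch of the maintenance-schedule adjustment described above; the schedule data structure, the two-day and thirty-day offsets, and the component names are assumptions made for illustration.

```python
from datetime import datetime, timedelta

def adjust_schedule(schedule, component, predicted_issue, now=None):
    """Add a maintenance event when an issue is predicted and none is scheduled;
    push the next event later when no issue is predicted."""
    now = now or datetime.now()
    upcoming = [e for e in schedule if e["component"] == component and e["when"] > now]
    if predicted_issue and not upcoming:
        schedule.append({"component": component, "when": now + timedelta(days=2),
                         "reason": predicted_issue})
    elif not predicted_issue and upcoming:
        # No predicted issue: defer the nearest planned event to avoid unnecessary work.
        nearest = min(upcoming, key=lambda e: e["when"])
        nearest["when"] += timedelta(days=30)
    return schedule

schedule = [{"component": "DAF 116A", "when": datetime(2021, 10, 1), "reason": "routine"}]
schedule = adjust_schedule(schedule, "coagulant pump", "bearing wear predicted")
```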
  • Fig. 7 shows the machine learning model that can be used for any one of the issue identification model, the issue resolution model, and the correlation model.
  • the machine learning model includes inputting any input (e.g., time-series data, second parameters) into a machine learning model.
  • the machine learning model is a deep learning model 700 (shown in Fig. 7).
  • the machine learning model is a support vector machine (SVM) algorithm or model.
  • Deep learning model 700 is a machine learning model that excels at analysing unstructured data, including the time-series data and second parameters. Deep learning model 700 employs algorithms that combine feature construction, modelling, and prediction into a single end-to-end system, and thus reduces unstructured data to an information-dense representation that is optimized for prediction.
  • One technique used in deep learning model 700 in the methods 400 and 500 is a convolutional neural network (CNN) 71.
  • the CNN 71 employs a multilayer neural network.
  • the layers of the CNN 71 include an input layer 72, hidden layers 74, and an output layer 76.
  • the hidden layers include multiple convolutional layers, pooling layers, fully connected layers and normalization layers.
  • Time-series data and/or second parameters are fed into the input layer 72 (as determined by the model used).
  • the model then generates the predictions (see steps 430, 440, 540, 550, and 570 discussed above) as required.
  • the CNN 71 is trained with the training dataset to provide effective predictions.
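  • A minimal 1-D CNN of the kind described (input layer 72, convolutional and pooling hidden layers 74, and a softmax output layer 76 giving a confidence per candidate issue), sketched with Keras; the window length, channel count, layer sizes, and training call are illustrative assumptions.

```python
import tensorflow as tf

WINDOW = 720      # e.g. 1 hour of 5-second samples
CHANNELS = 4      # number of monitored sensors fed to the model
N_ISSUES = 6      # number of candidate issue classes

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),       # input layer 72
    tf.keras.layers.Conv1D(32, 7, activation="relu"),       # hidden convolutional layers 74
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),            # fully connected layer
    tf.keras.layers.Dense(N_ISSUES, activation="softmax"),   # output layer 76: confidence per issue
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# cnn.fit(train_windows, train_issue_labels, epochs=20)  # trained on the verified training dataset
```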
  • the SVM is an alternative machine learning model that includes locating a hyperplane that classifies data points.
  • the SVM includes generating predictions (see steps 430, 440, 540, 550, and 570 discussed above) as required.
  • the SVM is trained with the training dataset to provide effective predictions.
  • the DAF 116A has a problem with the level of the TSS.
  • the technical personnel managing the system 100A identified that the problem was the pump speed controlling the water flow into the DAF 116A.
  • the system 100A is being monitored and first and second sensors are disposed on the components 102A to 126A (see method 300).
  • the first parameters detected by the first sensors are received by the computer system 1300 (which executes method 400) at step 410.
  • the first parameters are then converted to timeseries data spanning a period of time.
  • the time-series data of the components 102A to 126A and their sub-components are then provided to the issue identification model at step 430.
  • the issue identification model then processes the time-series data to determine the root cause of the TSS level of the DAF 116A being too high.
  • The issue identification model analyses the time-series data of the entire system 100A (which includes the components 102A to 126A and their sub-components) and then identifies, for example, that the problem (i.e., the root cause) may be the chemical concentration in the coagulant tank 104A, fouling of a TSS sensor, malfunction of an air pressure supply, unavailability of coagulant in the coagulant tank 104A, a failing bearing on the coagulant pump, and the like.
  • Each of the potential problems has a confidence measure value indicating the likelihood that the potential problem is the actual (existing/potential) problem (i.e., the root cause).
  • Step 435 determines whether any of the confidence measure values exceed a predetermined threshold value. If yes, then method 400 proceeds to step 440 and provides the potential identified problems to the issue resolution model.
  • the issue resolution model then provides candidate actions to resolve the root cause of the problem of the TSS of the DAF 116A being too high.
  • the candidate actions may be simulated on the digital representation 100B to simulate the results of performing the candidate actions.
  • the technical personnel can implement the action and provide feedback as to which potential identified problems and candidate actions are correct.
  • the issue identification model does not identify the pump speed controlling the water flow into the DAF 116A as the problem, contrary to the conventional methods. Instead, the issue identification model identifies the potential root cause of the problem. Accordingly, the issue resolution model does not recommend adjusting the pump speed controlling the water flow into the DAF 116A, contrary to the conventional methods. Rather, the issue resolution model recommends actions relating to the identified potential root cause of the problem.
  • the present invention identifies the potential root cause of a problem and candidate actions to resolve the root cause.
  • the present invention accordingly prevents temporary fixes that may cause further harm to the system.
  • Fig. 8 shows a flowchart of a method 800 of identifying a current issue or a potential future issue.
  • the method 800 is referred to as the deterministic tracing algorithm used for the issue identification model.
  • the method 800 receives input (e.g., time-series data, predicted time-series data) and processes the input to determine a current issue or a potential future issue.
  • the method 800 uses a networked graph 900 (shown in Fig. 9) having nodes 1010 to 1030 and edges 1080. Each of the nodes 1010 to 1030 is connected to one or more of the other nodes 1010 to 1030 via the edges 1080. The nodes 1010 to 1030 represent issues. The edges 1080 represent a relationship (or a causality) between the nodes 1010 to 1030.
  • the networked graph 900 is built on a causation relationship between issues. In one arrangement, the networked graph 900 is manually developed. In another arrangement, the networked graph 900 is developed using the machine learning model shown in Fig. 7.
  • the method 800 commences at step 810 by determining a current issue or a potential future issue associated with the time-series data (obtained at step 420) or predicted time-series data (obtained at step 540). For example, a detected leak in a flocculant dosing line indicates that the relevant Dissolved Air Flotation system is not or will not be functioning optimally with regards to Total Suspended Solids removal.
  • Step 810 determines one or more components as having a current issue or potential future issue (represented by the nodes 1010 and 1012).
  • Each of the nodes 1010 and 1012 has an initial probability value indicating the probability that the node is the issue causing the problem shown in the time-series data.
  • the initial probability values of the nodes 1010 and 1012 are obtained through empirical data.
  • the probability values are also known as the confidence measure values (as used hereinbefore).
  • Although Fig. 9 shows two candidate issues, there may be only one, or more than two, candidate issues.
  • The method 800 then proceeds from step 810 to step 820. In step 820, the method 800 determines issues related to the determined issue. Step 820 is performed using the networked graph 900.
  • Each of the nodes 1010 and 1012 relates to nodes 1014, 1016, and 1018 as these nodes 1010 to 1018 are connected by the edges 1080. Therefore, in step 820, the nodes 1014, 1016, and 1018 are determined to be the related issues. Similar to the nodes 1010 and 1012, each of the nodes 1014, 1016, and 1018 has an initial probability value indicating the probability that the node is the cause of the determined issue (i.e., nodes 1010 and 1012). The initial probability values of the nodes 1014, 1016, and 1018 are obtained through empirical data. The method 800 proceeds from step 820 to step 830.
  • In step 830, the method 800 calculates respective cost functions of the related issues.
  • the cost function is a function to amend the initial probability value of a related issue based on the preceding issues. Referring to the networked graph 900, the cost function of each of the nodes 1014, 1016, and 1018 is calculated.
  • the cost function is an average of the initial probability value of a node and the highest probability value of any one of the preceding nodes.
  • node 1014 has an initial probability value of 0.8, meaning the issue represented by node 1014 has an 80% chance of causing the determined issues represented by nodes 1010 and 1012.
  • Node 1010 has an initial probability value of 0.9 and node 1012 has an initial probability value of 0.7.
  • This cost function arrangement calculates the average of the initial probability value of node 1014 and the highest probability value (i.e., 0.9) of the preceding nodes (i.e., nodes 1010 and 1012), yielding an average of 0.85.
  • This cost function is an example only. There are other cost functions, such as a weighted average function, where the initial probability value is multiplied by the highest probability value of the preceding nodes. The result of the multiplication is then averaged with the initial probability value of the node.
  • Another example is a cost function whose output is the higher of the initial probability value and the value of the function a*b/(a+b), where a is the initial probability value and b is the highest probability value of the preceding nodes.
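  • The three cost-function arrangements described above can be written out directly, as in the following sketch; the function names are invented for illustration and the example call reproduces the 0.85 figure from the worked example.

```python
def cost_average(p_node, preceding_probs):
    """First arrangement: average of the node's initial probability and the
    highest probability of its preceding nodes, e.g. (0.8 + 0.9) / 2 = 0.85."""
    return (p_node + max(preceding_probs)) / 2

def cost_weighted(p_node, preceding_probs):
    """Weighted-average arrangement: multiply by the best preceding probability,
    then average the product with the node's initial probability."""
    return (p_node * max(preceding_probs) + p_node) / 2

def cost_bounded(p_node, preceding_probs):
    """Third arrangement: the higher of the initial probability and a*b/(a+b)."""
    a, b = p_node, max(preceding_probs)
    return max(a, a * b / (a + b))

print(cost_average(0.8, [0.9, 0.7]))  # 0.85, matching the worked example above
```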
  • the method 800 proceeds from step 830 to step 835.
  • In step 835, the method 800 updates the probability values of the related issues (i.e., nodes 1014 to 1018).
  • The update is based on the calculated cost function such that the value of the calculated cost function becomes the probability value of the node. For example, the cost function of node 1014 is 0.85.
  • In step 835, the probability value of node 1014 is therefore updated to 0.85.
  • the method 800 proceeds from step 835 to step 840.
  • In step 840, the method 800 determines whether there are other related issues associated with the related issues (determined at step 820 initially or at step 840 subsequently).
  • nodes 1014 to 1018 are connected to further related nodes 1026 to 1030. If there are further related issues (YES), the method 800 proceeds from step 840 to 830. In the subsequent steps 830 and 835, the method 800 calculates the cost function and updates the probability values of nodes 1026 to 1030.
  • If there are no further related issues (NO), the method 800 proceeds from step 840 to step 850.
  • In step 850, the method 800 determines the last related issue with the highest probability value.
  • In this example, the last related issues are nodes 1026 to 1030.
  • The one of the nodes 1026 to 1030 with the highest probability value after the update (i.e., step 835) is determined.
  • node 1026 has an updated probability value of 0.5
  • node 1028 has an updated probability value of 0.6
  • node 1030 has an updated probability value of 0.7.
  • Step 850 determines that the last related issue with the highest probability value is node 1030 with an updated probability value of 0.7.
  • the method 800 proceeds from step 850 to step 860.
  • In step 860, the method 800 determines the most likely issues related to the time-series data.
  • the most likely issues are determined by proceeding from the last related issue with the highest probability value (identified at step 850) to the preceding issue with the highest cost function value until arriving at the initial determined current or potential issue (identified at step 810) with the highest cost function value.
  • The highest cost function is determined by proceeding from the latest node (identified at step 850) to the preceding node with the highest probability value. This is repeated until arriving at the initial determined current or potential issue (identified at step 810). For example, node 1030 with the probability value of 0.7 proceeds to node 1022 with the highest probability value of 0.9. In turn, node 1022 proceeds to node 1016 with the highest probability value of 0.8. Then node 1016 proceeds to node 1012. Accordingly, the most likely issues are the issues represented by nodes 1030, 1022, 1016, and 1012.
  • In one optional arrangement, a threshold value is predetermined to better identify a likely issue.
  • a threshold value is predetermined to be 0.85.
  • the above arrangement stops at node 1022 as the probability value of node 1016 is below the threshold value. Accordingly, the most likely issues are represented by nodes 1030 and 1022.
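  • Steps 830 to 860 can be sketched as a forward pass that rewrites each related node's probability with the averaging cost function, followed by a backward walk from the best last node through the highest-probability predecessors, optionally stopping at a threshold. The graph encoding, node names, and use of the simple averaging cost function in the sketch below are assumptions made for illustration.

```python
def trace_issues(graph, probs, start_nodes, threshold=None):
    """graph: {node: [related successor nodes]} ordered from the initially
    determined issues (step 810) towards the last related issues;
    probs: initial probability of each node being the cause."""
    updated = dict(probs)
    parents = {}
    frontier = list(start_nodes)
    # Forward pass (steps 820-835): rewrite each successor's probability with
    # the averaging cost function (initial value + best preceding value) / 2.
    while frontier:
        children = []
        for node in frontier:
            for child in graph.get(node, []):
                parents.setdefault(child, []).append(node)
                children.append(child)
        for child in set(children):
            best_parent = max(updated[p] for p in parents[child])
            updated[child] = (probs[child] + best_parent) / 2
        frontier = list(set(children))
    # Step 850: last related issue (no successors) with the highest updated probability.
    last_nodes = [n for n in updated if not graph.get(n)]
    current = max(last_nodes, key=lambda n: updated[n])
    # Step 860: walk back through the highest-probability predecessors,
    # optionally stopping once a predecessor falls below the threshold.
    path = [current]
    while current in parents:
        current = max(parents[current], key=lambda n: updated[n])
        if threshold is not None and updated[current] < threshold:
            break
        path.append(current)
    return path[::-1]

# Toy chain of hypothetical nodes (names and values are illustrative only).
graph = {"n1012": ["n1016"], "n1016": ["n1022"], "n1022": ["n1030"], "n1030": []}
probs = {"n1012": 0.7, "n1016": 0.8, "n1022": 0.9, "n1030": 0.7}
print(trace_issues(graph, probs, ["n1012"]))  # most likely chain of issues
```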
  • the probability values of the nodes in each path are added and the path with the highest total probability values is the highest cost function.
  • one path has nodes 1030, 1022, 1016, and 1012 with a total probability value of 2.3.
  • Another path has nodes 1030, 1024, 1014, and 1010 with a total probability value of 2.5.
  • the highest cost function is the path having nodes 1030, 1024, 1014, and 1010.
  • the cost function is determined using Bayes' theorem, P(A|B) = P(B|A) * P(A) / P(B), where A represents a child node (e.g., node 1024), B represents a parent node (e.g., node 1028), P(A|B) is the probability of A given that B is true, P(B|A) is the probability of B given that A is true, and P(A) and P(B) are the independent probability values of the nodes.
  • In this arrangement, a node (e.g., 1026, 1028, 1030) proceeds to a preceding node (e.g., 1020, 1022, 1024) dependent on which connection has the highest P(A|B). This is repeated until arriving at the initial determined current or potential issue (identified at step 810).
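  • A one-line sketch of the Bayes' theorem cost described above; the probability values passed in are invented for illustration.

```python
def bayes_cost(p_child, p_parent, p_parent_given_child):
    """Bayes' theorem cost: P(child | parent) = P(parent | child) * P(child) / P(parent)."""
    return p_parent_given_child * p_child / p_parent

# Hypothetical values: choose the parent connection with the highest P(child | parent).
print(bayes_cost(p_child=0.6, p_parent=0.8, p_parent_given_child=0.9))  # 0.675
```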
  • Optionally, a threshold value is predetermined to better identify a likely issue (similar to one of the above arrangements), enabling the traversal to stop at nodes whose probability values fall below the threshold.
  • the method 800 concludes at the conclusion of step 860.
  • FIGs. 6A and 6B depict a general-purpose computer system 1300, upon which the various arrangements described can be practiced.
  • the computer system 1300 includes: a computer module 1301; input devices such as a keyboard 1302, a mouse pointer device 1303, a scanner 1326, a camera 1327, and a microphone 1380; and output devices including a printer 1315, a display device 1314 and loudspeakers 1317.
  • An external Modulator-Demodulator (Modem) transceiver device 1316 may be used by the computer module 1301 for communicating to and from a communications network 1320 via a connection 1321.
  • the communications network 1320 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN.
  • the modem 1316 may be a traditional “dial-up” modem.
  • the modem 1316 may be a broadband modem.
  • a wireless modem may also be used for wireless connection to the communications network 1320.
  • the computer module 1301 typically includes at least one processor unit 1305, and a memory unit 1306.
  • the memory unit 1306 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
  • the computer module 1301 also includes a number of input/output (I/O) interfaces including: an audio-video interface 1307 that couples to the video display 1314, loudspeakers 1317 and microphone 1380; an I/O interface 1313 that couples to the keyboard 1302, mouse 1303, scanner 1326, camera 1327 and optionally a joystick or other human interface device (not illustrated); and an interface 1308 for the external modem 1316 and printer 1315.
  • the modem 1316 may be incorporated within the computer module 1301, for example within the interface 1308.
  • the computer module 1301 also has a local network interface 1311, which permits coupling of the computer system 1300 via a connection 1323 to a local-area communications network 1322, known as a Local Area Network (LAN).
  • the local communications network 1322 may also couple to the wide network 1320 via a connection 1324, which would typically include a so-called “firewall” device or device of similar functionality.
  • the local network interface 1311 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 1311.
  • the I/O interfaces 1308 and 1313 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated).
  • Storage devices 1309 are provided and typically include a hard disk drive (HDD) 1310. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used.
  • An optical disk drive 1312 is typically provided to act as a non-volatile source of data.
  • Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 1300.
  • the components 1305 to 1313 of the computer module 1301 typically communicate via an interconnected bus 1304 and in a manner that results in a conventional mode of operation of the computer system 1300 known to those in the relevant art.
  • the processor 1305 is coupled to the system bus 1304 using a connection 1318.
  • the memory 1306 and optical disk drive 1312 are coupled to the system bus 1304 by connections 1319. Examples of computers on which the described arrangements can be practiced include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
  • the method of identifying issues/potential future issues, recommending candidate actions, and correlating events to potential future issues may be implemented using the computer system 1300 wherein the processes of Figs. 4 and 5, described above, may be implemented as one or more software application programs 1333 executable within the computer system 1300.
  • the steps of methods 400 and 500 are effected by instructions 1331 (see Fig. 6B) in the software 1333 that are carried out within the computer system 1300.
  • the software instructions 1331 may be formed as one or more code modules, each for performing one or more particular tasks.
  • the software may also be divided into two separate parts, in which a first part and the corresponding code modules perform methods 400 and 500, and a second part and the corresponding code modules manage a user interface between the first part and the user.
  • the software may be stored in a computer readable medium, including the storage devices described below, for example.
  • the software is loaded into the computer system 1300 from the computer readable medium, and then executed by the computer system 1300.
  • a computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product.
  • the use of the computer program product in the computer system 1300 preferably effects an advantageous apparatus for identifying issues/potential future issues, recommending candidate actions, and correlating events to potential future issues.
  • the software 1333 is typically stored in the HDD 1310 or the memory 1306.
  • the software is loaded into the computer system 1300 from a computer readable medium, and executed by the computer system 1300.
  • the software 1333 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 1325 that is read by the optical disk drive 1312.
  • a computer readable medium having such software or computer program recorded on it is a computer program product.
  • the use of the computer program product in the computer system 1300 preferably effects an apparatus for identifying issues/potential future issues, recommending candidate actions, and correlating events to potential future issues.
  • the application programs 1333 may be supplied to the user encoded on one or more CD-ROMs 1325 and read via the corresponding drive 1312, or alternatively may be read by the user from the networks 1320 or 1322. Still further, the software can also be loaded into the computer system 1300 from other computer readable media.
  • Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 1300 for execution and/or processing.
  • Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 1301.
  • Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 1301 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
  • the application programs 1333 may include one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 1314.
  • a user of the computer system 1300 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s).
  • Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 1317 and user voice commands input via the microphone 1380.
  • Fig. 6B is a detailed schematic block diagram of the processor 1305 and a “memory” 1334.
  • the memory 1334 represents a logical aggregation of all the memory modules (including the HDD 1309 and semiconductor memory 1306) that can be accessed by the computer module 1301 in Fig. 6A.
  • when the computer module 1301 is initially powered up, a power-on self-test (POST) program 1350 executes.
  • the POST program 1350 is typically stored in a ROM 1349 of the semiconductor memory 1306 of Fig. 6A.
  • a hardware device such as the ROM 1349 storing software is sometimes referred to as firmware.
  • the POST program 1350 examines hardware within the computer module 1301 to ensure proper functioning and typically checks the processor 1305, the memory 1334 (1309, 1306), and a basic input-output systems software (BIOS) module 1351, also typically stored in the ROM 1349, for correct operation. Once the POST program 1350 has run successfully, the BIOS 1351 activates the hard disk drive 1310 of Fig. 6A.
  • Activation of the hard disk drive 1310 causes a bootstrap loader program 1352 that is resident on the hard disk drive 1310 to execute via the processor 1305. Execution of the bootstrap loader program 1352 typically loads an operating system 1353 into the memory 1306, upon which the operating system 1353 commences operation.
  • the operating system 1353 is a system level application, executable by the processor 1305, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
  • the operating system 1353 manages the memory 1334 (1309, 1306) to ensure that each process or application running on the computer module 1301 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 1300 of Fig. 6A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 1334 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 1300 and how such is used.
  • the processor 1305 includes a number of functional modules including a control unit 1339, an arithmetic logic unit (ALU) 1340, and a local or internal memory 1348, sometimes called a cache memory.
  • the cache memory 1348 typically includes a number of storage registers 1344 - 1346 in a register section.
  • One or more internal busses 1341 functionally interconnect these functional modules.
  • the processor 1305 typically also has one or more interfaces 1342 for communicating with external devices via the system bus 1304, using a connection 1318.
  • the memory 1334 is coupled to the bus 1304 using a connection 1319.
  • the application program 1333 includes a sequence of instructions 1331 that may include conditional branch and loop instructions.
  • the program 1333 may also include data 1332 which is used in execution of the program 1333.
  • the instructions 1331 and the data 1332 are stored in memory locations 1328, 1329, 1330 and 1335, 1336, 1337, respectively.
  • a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 1330.
  • an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 1328 and 1329.
  • the processor 1305 is given a set of instructions which are executed therein.
  • the processor 1305 waits for a subsequent input, to which the processor 1305 reacts by executing another set of instructions.
  • Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 1302, 1303, data received from an external source across one of the networks 1320, 1322, data retrieved from one of the storage devices 1306, 1309 or data retrieved from a storage medium 1325 inserted into the corresponding reader 1312, all depicted in Fig. 6A.
  • the execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 1334.
  • the disclosed system management arrangements use input variables 1354, which are stored in the memory 1334 in corresponding memory locations 1355, 1356, 1357.
  • the system management arrangements produce output variables 1361, which are stored in the memory 1334 in corresponding memory locations 1362, 1363, 1364.
  • Intermediate variables 1358 may be stored in memory locations 1359, 1360, 1366 and 1367.
  • each fetch, decode, and execute cycle comprises:
  • a fetch operation, which fetches or reads an instruction 1331 from a memory location 1328, 1329, 1330;
  • a decode operation, in which the control unit 1339 determines which instruction has been fetched; and
  • an execute operation, in which the control unit 1339 and/or the ALU 1340 executes the instruction.
  • a further fetch, decode, and execute cycle for the next instruction may be executed.
  • a store cycle may be performed by which the control unit 1339 stores or writes a value to a memory location 1332.
  • Each step or sub-process in the processes of Figs. 4 and 5 is associated with one or more segments of the program 1333 and is performed by the register section 1344, 1345, 1347, the ALU 1340, and the control unit 1339 in the processor 1305 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 1333.
  • the method of identifying issues/potential future issues, recommending candidate actions, and correlating events to potential future issues may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of methods 400 and 500.
  • dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
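By way of illustration only, the following Python fragment sketches the threshold-based identification of a likely issue referred to earlier in this list. It is a minimal sketch under stated assumptions: the function name identify_likely_issue, the component names pump_pressure and tank_level, the window length and the threshold values are all introduced for the example and are not taken from the specification; the simple rolling mean stands in for the richer issue identification model described elsewhere in the disclosure.

    # Illustrative only: threshold-based flagging of a likely issue from
    # component time-series data. Names, window length and thresholds are
    # assumptions for this sketch, not values from the specification.
    from statistics import mean

    def identify_likely_issue(series, threshold, window=3):
        """Return sample indices at which the rolling mean of a component's
        time-series data exceeds a predetermined threshold value."""
        flagged = []
        for i in range(window, len(series) + 1):
            if mean(series[i - window:i]) > threshold:
                flagged.append(i - 1)  # most recent sample in the window
        return flagged

    if __name__ == "__main__":
        # Hypothetical time-series data for two system components.
        time_series = {
            "pump_pressure": [1.0, 1.1, 1.0, 1.2, 1.9, 2.4, 2.6, 2.7],
            "tank_level": [0.5, 0.5, 0.6, 0.5, 0.6, 0.5, 0.6, 0.5],
        }
        thresholds = {"pump_pressure": 2.0, "tank_level": 0.9}
        for component, series in time_series.items():
            hits = identify_likely_issue(series, thresholds[component])
            if hits:
                print(f"likely issue for {component} at samples {hits}")

In practice the predetermined threshold could equally be applied to a model output rather than to raw sensor values; the sketch only fixes the control flow.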

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Automation & Control Theory (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Hydrology & Water Resources (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Water Supply & Treatment (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Tourism & Hospitality (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Organic Chemistry (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Game Theory and Decision Science (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Sustainable Development (AREA)

Abstract

The present disclosure relates to a method of managing a system having components. The method comprises generating time series data for at least two of the components of the system; and determining, by an issue identification model, a current issue or a potential issue relating to one or more of the components of the system based on the generated time series data, the issue identification model comprising a first machine learning model or a deterministic tracing algorithm.
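For orientation only, the following Python fragment sketches the flow summarised in the abstract: time series data are generated for at least two components and passed to an issue identification model that reports current or potential issues. The stand-in rules, the component names dosing_pump and aeration_blower, and the limits used are assumptions made for this sketch; they do not reproduce the first machine learning model or the deterministic tracing algorithm of the claims.

    # Illustrative sketch of the abstract's flow; all names and rules are
    # assumptions and do not reproduce the claimed models.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Issue:
        component: str
        kind: str      # "current" or "potential"
        detail: str

    def generate_time_series(components: Dict[str, Callable[[int], float]],
                             samples: int = 10) -> Dict[str, List[float]]:
        """Generate time-series data for two or more system components."""
        return {name: [read(t) for t in range(samples)]
                for name, read in components.items()}

    def issue_identification_model(series: Dict[str, List[float]]) -> List[Issue]:
        """Stand-in model: a current issue when the latest sample breaches a
        limit, a potential issue when the window shows a rising trend."""
        issues: List[Issue] = []
        for name, values in series.items():
            if values[-1] > 1.0:
                issues.append(Issue(name, "current",
                                    f"latest value {values[-1]:.2f} exceeds limit"))
            elif values[-1] - values[0] > 0.5:
                issues.append(Issue(name, "potential", "rising trend over the window"))
        return issues

    if __name__ == "__main__":
        readings = generate_time_series({
            "dosing_pump": lambda t: 0.2 + 0.07 * t,  # slowly rising: potential issue
            "aeration_blower": lambda t: 1.3,         # above limit: current issue
        })
        for issue in issue_identification_model(readings):
            print(issue)

A trained model or a deterministic tracing algorithm could be substituted for issue_identification_model without changing the surrounding flow.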
PCT/AU2021/051075 2020-09-18 2021-09-17 Method of managing a system WO2022056594A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2020903357 2020-09-18
AU2020903357A AU2020903357A0 (en) 2020-09-18 Method of managing a system

Publications (1)

Publication Number Publication Date
WO2022056594A1 true WO2022056594A1 (fr) 2022-03-24

Family

ID=80777183

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2021/051075 WO2022056594A1 (fr) 2020-09-18 2021-09-17 Method of managing a system

Country Status (1)

Country Link
WO (1) WO2022056594A1 (fr)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3435184A1 (fr) * 2017-07-28 2019-01-30 Siemens Aktiengesellschaft System, method and control unit for controlling a technical system
US10354196B2 (en) * 2016-12-16 2019-07-16 Palantir Technologies Inc. Machine fault modelling
WO2019162648A1 (fr) * 2018-02-20 2019-08-29 Centrica Hive Limited Control system diagnostics
US20190302710A1 (en) * 2018-03-30 2019-10-03 General Electric Company System and method for mechanical transmission control
WO2019216975A1 (fr) * 2018-05-07 2019-11-14 Strong Force Iot Portfolio 2016, Llc Methods and systems for the collection, learning and streaming of machine signals for analysis and maintenance using the industrial internet of things
WO2020046371A1 (fr) * 2018-08-31 2020-03-05 Siemens Aktiengesellschaft Process control systems and devices resistant to digital intrusion and erroneous commands
US20200166921A1 (en) * 2018-11-27 2020-05-28 Presenso, Ltd. System and method for proactive repair of suboptimal operation of a machine
US20200231466A1 (en) * 2017-10-09 2020-07-23 Zijun Xia Intelligent systems and methods for process and asset health diagnosis, anomoly detection and control in wastewater treatment plants or drinking water plants
US20200272139A1 (en) * 2019-02-21 2020-08-27 Abb Schweiz Ag Method and System for Data Driven Machine Diagnostics

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115403222A (zh) * 2022-09-16 2022-11-29 广东海洋大学 Aquaculture tailwater treatment system and method
CN115403222B (zh) * 2022-09-16 2023-11-14 广东海洋大学 Aquaculture tailwater treatment system and method

Similar Documents

Publication Publication Date Title
US20200231466A1 (en) Intelligent systems and methods for process and asset health diagnosis, anomoly detection and control in wastewater treatment plants or drinking water plants
US9146800B2 (en) Method for detecting anomalies in a time series data with trajectory and stochastic components
WO2018230645A1 (fr) Anomaly detection device, anomaly detection method, and program
KR102270347B1 (ko) Apparatus and method for detecting abnormal situations using a deep learning ensemble model
CN110909807A (zh) Deep learning-based network verification code recognition method and apparatus, and computer device
US9772895B2 (en) Identifying intervals of unusual activity in information technology systems
CN110672323B (zh) Neural network-based bearing health state assessment method and apparatus
CN111650922A (zh) Smart home anomaly detection method and apparatus
CN113449703B (zh) Quality control method and apparatus for online environmental monitoring data, storage medium and device
WO2022056594A1 (fr) Method of managing a system
EP4133346A1 (fr) Method and system for transmitting an alert relating to anomaly scores assigned to input data
Alelaumi et al. A predictive abnormality detection model using ensemble learning in stencil printing process
CN114662602A (zh) Outlier detection method and apparatus, electronic device and storage medium
CN115980050A (zh) Water quality detection method and apparatus for a drain outlet, computer device and storage medium
CN113448807B (zh) Alarm monitoring method and system, electronic device and computer-readable storage medium
KR102132077B1 (ko) Method for evaluating the degree of abnormality of equipment data
CN112185382B (zh) Wake-up model generation and updating method, apparatus, device and medium
US20210080924A1 (en) Diagnosis Method and Diagnosis System for a Processing Engineering Plant and Training Method
CN109214318A (zh) Method for finding weak spikes in non-stationary time series
CN109274562A (zh) Voice command execution method and apparatus, smart home appliance and medium
US10641133B2 (en) Managing water-supply pumping for an electricity production plant circuit
EP4009142A1 (fr) Apparatus, method and program for adjusting the parameters of a display
CN110334244B (zh) Data processing method and apparatus, and electronic device
US20230212032A1 (en) Apparatus and method for controlling output for chemical dosing optimization for water treatment plant
US20200311629A1 (en) Systems and Methods for a Workflow Tolerance Designer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21867951

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21867951

Country of ref document: EP

Kind code of ref document: A1