WO2023041458A2 - Computer-implemented methods, modules and system for detecting anomalies in industrial manufacturing processes - Google Patents

Computer-implemented methods, modules and system for detecting anomalies in industrial manufacturing processes

Info

Publication number
WO2023041458A2
WO2023041458A2 (PCT/EP2022/075214)
Authority
WO
WIPO (PCT)
Prior art keywords
anomaly
data
analysis model
module
expert
Prior art date
Application number
PCT/EP2022/075214
Other languages
German (de)
English (en)
Other versions
WO2023041458A3 (fr)
Inventor
Georg Schneider
Nicolas Thewes
Original Assignee
Zf Friedrichshafen Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zf Friedrichshafen Ag filed Critical Zf Friedrichshafen Ag
Publication of WO2023041458A2
Publication of WO2023041458A3


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0243 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
    • G05B23/0254 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks

Definitions

  • the invention relates to computer-implemented methods, modules and a system for anomaly detection in industrial manufacturing processes.
  • the object of the invention was how industrial manufacturing processes and/or individual steps of an industrial manufacturing process can be monitored in order to better detect deviations or anomalies.
  • the invention provides a computer-implemented method for obtaining an analysis model for anomaly detection in an industrial manufacturing process.
  • the procedure includes the steps
  • the invention also provides a computer program for obtaining an analysis model for anomaly detection in an industrial manufacturing process.
  • the computer program includes instructions that cause a computer to carry out the steps of the inventive method for obtaining an analysis model for anomaly detection in an industrial manufacturing process when the computer program runs on the computer.
  • annotation requests are sent to the validation entity.
  • the data is enriched with annotations received from the reviewer.
  • the distribution of normal states is determined as a reference and/or patterns are determined in the data based on historical data, which are classified at least into the classes normal and abnormal by corresponding annotations.
  • the reference is determined as follows:
  • the data from each of N states is ordered into h × k positions, such that each entry of one of the N states at one of the h × k positions is comparable with the entry of another of the N states at the same position.
  • the h × k positions correspond to a matrix of times and frequencies.
  • N training states are thus obtained in the form of an N × h × k arrangement.
  • At least one reference image is applied to the training states, with a value of a position in a reference state being calculated from the values of that position across the training states according to a predetermined calculation rule. For example, for each position in the reference state, the mean over all corresponding positions in the training states is calculated as a first reference image, and the standard deviation as a second reference image.
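  • The reference computation described above can be illustrated with a minimal sketch (not part of the claimed subject-matter), assuming the N training states are available as an N × h × k NumPy array:

        import numpy as np

        def compute_reference(training_states: np.ndarray):
            """Compute the two reference images from an (N, h, k) array of training states.

            Illustrative sketch only: the first reference image is the per-position mean,
            the second the per-position standard deviation, as described above.
            """
            assert training_states.ndim == 3          # (N, h, k): N states, h x k positions
            mean_ref = training_states.mean(axis=0)   # first reference image, shape (h, k)
            std_ref = training_states.std(axis=0)     # second reference image, shape (h, k)
            return mean_ref, std_ref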
  • the accuracy of the trained analysis model is increased by removing anomalies that were not recognized in the past from historical data that were initially fed into the analysis model.
  • the training is monitored and/or further analysis models are released based on an achieved accuracy.
  • the steps of the method according to the invention are carried out iteratively and analysis models are retrained and/or newly trained.
  • the invention provides a first module for obtaining an analysis model for anomaly detection in an industrial manufacturing process.
  • the first module includes
  • a data processing module designed to receive data from states of a process, a component and/or a production machine in at least one process step of the industrial manufacturing process to be monitored at a specific point in time or in a specific time interval, from similar process steps and/or from downstream processes, comprising sensorially measured data and/or data obtained from simulating the process;
  • a first data memory which stores the data and, based on the data, a state definition and data extensions in the form of annotations of a checking instance;
  • a training module that trains at least one analysis model for anomaly detection, the analysis model determining a distribution of the states on the basis of the state definition and, based on the distribution, classifying as anomalies those states that are rare and/or deviate from the other states according to the data;
  • a second data store storing the trained analysis model and an evaluation of the analysis model based on a test data set or on an evaluation by the verification authority.
  • the first module further comprises
  • a monitoring module for increasing the accuracy of the trained analysis model, in which historical data that the training module initially fed into the analysis model is cleaned of anomalies that were not recognized in the past, and/or
  • a control module for monitoring the training and/or for enabling further analysis models based on an achieved accuracy.
  • the computer-implemented method, the computer program and the first module can obtain analysis models for anomaly detection in the case of a failing/damaged production machine (scenario 1) and/or when testing components (scenario 2).
  • the invention provides a computer-implemented method for determining at least one anomaly value in an industrial manufacturing process.
  • the procedure includes the steps
  • the invention also provides a computer program for determining at least one anomaly value in an industrial manufacturing process.
  • the computer program includes instructions that cause a computer to carry out the steps of the method according to the invention for determining at least one anomaly value in an industrial manufacturing process when the computer program runs on the computer.
  • the analysis model has been obtained according to the method for obtaining an analysis model according to the invention or by means of a first module according to the invention.
  • the anomaly value checked by the checking instance flows into a training of the analysis model.
  • the invention provides a second module for determining at least one anomaly value in an industrial manufacturing process.
  • the second module includes
  • a data input/control module designed to:
    o receive sensor-measured data from states of a process, a component and/or a production machine in at least one process step to be monitored of the industrial manufacturing process, and
    o receive at least one analysis model for anomaly detection that is based on historical data, on data from similar process steps and/or on data from downstream processes, comprising data measured by sensors and/or data obtained from simulating the process;
    o input the data into the at least one analysis model to obtain at least one anomaly score;
  • the second module further comprises a unifier module for combining an anomaly value obtained from an annotation-based analysis model and an anomaly value obtained from the reference-based analysis model.
  • the second module is designed to determine anomaly values according to the method according to the invention for determining at least one anomaly value.
  • the computer-implemented method, the computer program and the second module can determine an anomaly value in scenario 1 and/or in scenario 2.
  • the invention provides a computer-implemented method for providing anomalies detected in an industrial manufacturing process to a verification authority.
  • the procedure includes the steps
  • Whether a state classified as normal is sent to the checking instance can be made dependent on the degree of normality/abnormality, for example, which is represented by the level of the calculated anomaly value.
  • the invention also provides a computer program for providing anomalies detected in an industrial manufacturing process to a verification authority.
  • the computer program includes instructions that cause a computer to carry out the steps of the method for providing anomalies detected in an industrial manufacturing process to a checking instance when the computer program is running on the computer.
  • the state is annotated as a function of metadata comprising the identity of the human reviewer, the quality of annotations already made by that reviewer, the time of day, the day of the week and/or the degree of anomaly.
  • annotation requests relating to states are sent to the verification authority and data from the states are supplemented with annotations received from the verification authority.
  • a state that has already been provided to a verification authority is provided again at a different time for the same verification authority and/or is provided to further verification authorities, and the annotations are compared and/or evaluated.
  • the anomaly value is obtained according to the method according to the invention for determining at least one anomaly value or by means of the second module according to the invention.
  • the invention provides a third module for providing anomalies detected in an industrial manufacturing process to a verification authority.
  • the third module is designed to carry out the method according to the invention for providing anomalies detected in an industrial manufacturing process to a checking authority.
  • the computer-implemented method, the computer program and the third module can provide detected anomalies to the verification authority in scenario 1 and/or in scenario 2.
  • the invention provides a computer-implemented method for controlling anomalies in industrial manufacturing processes, comprising the steps
  • the invention also provides a computer program for controlling anomalies in industrial manufacturing processes.
  • the computer program comprises instructions which cause a computer to carry out the steps of the method according to the invention for checking anomalies in industrial manufacturing processes when the computer program runs on the computer.
  • the analysis model is trained according to the method according to the invention for obtaining an analysis model for anomaly detection, an anomaly is recognized according to the method according to the invention for determining at least one anomaly value, and/or the checking instance is integrated according to the method according to the invention for providing anomalies recognized in an industrial manufacturing process to a checking instance according to one of Claims 15 to 19.
  • the invention provides a system for controlling anomalies in industrial manufacturing processes.
  • the system includes a first module according to the invention, a second module according to the invention and a third module according to the invention.
  • the system is designed to carry out the inventive method for checking anomalies.
  • the computer-implemented method, computer program and system can control anomalies in scenario 1 and/or in scenario 2.
  • the claimed objects and the present disclosure can monitor all steps of an industrial manufacturing process, as well as component quality tests carried out during/after production, with the help of recorded data, detect deviations from the normal/target state of a sub-process or of a test, so-called anomalies, and send them to a checking instance, including a human reviewer.
  • the system according to the invention for checking anomalies in industrial manufacturing processes is an overall system for detecting anomalies.
  • the analysis models, or anomaly detection models, form the brain of the overall system.
  • the anomaly detection models assess a condition based on the data and calculate an anomaly score.
  • the anomaly value reflects the degree of the anomaly.
  • anomalies in industrial manufacturing processes are thus determined in a data-driven manner, in particular by ordered, numerical data.
  • For functions that are performed exclusively by a human reviewer, the term human reviewer is used.
  • The term checking instance is used for all other functions, which a reviewing computer program could perform in addition to or as an alternative to a human reviewer.
  • the reviewer will assess the anomaly and report back his assessment, for example in the form of an annotation. This achieves a continuous improvement during use of the claimed objects as well as an adaptation to changing processes and components.
  • annotations are binary annotations, for example "good" and "bad", which are identified in the form of labels.
  • an annotation is divided into several (detail) levels, for example three levels.
  • a first level includes a binary classification of the reviewer. If, for example, the second module detects an anomaly (positive) and the verifier comes to the conclusion that it is in fact not an anomaly, he enters a no/false on the first level, for example. This is a false positive event.
  • the inspector can, for example, specify the affected component in more detail.
  • the reviewer can, for example, provide more details about his review/result.
  • the annotations are reported back online in the form of a table/file, for example.
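  • For illustration only, a multi-level annotation of the kind described above could be reported back as a simple tabular record; all field names below are hypothetical and not taken from the disclosure:

        import csv

        # Hypothetical three-level annotation record as it might be reported back in
        # tabular form; the field names are illustrative.
        annotation = {
            "state_id": "Z-2022-000123",
            "level1_is_anomaly": False,          # binary verdict -> a false positive here
            "level2_affected_component": "",     # optional detail: the affected component
            "level3_comment": "vibration peak caused by the fixture, not by the part",
            "reviewer_id": "expert-07",
            "confidence": "certain",             # e.g. very certain / certain / unsure
        }

        with open("annotations.csv", "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=annotation.keys())
            writer.writerow(annotation)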
  • the invention uses sensors and/or sensor models that are used in industrial processes.
  • Sensor models simulate real sensors.
  • the term sensor includes real sensors and sensor models.
  • One way to categorize these sensors is based on their physical measurand, e.g. temperature sensors, vibration sensors, force and pressure sensors, optical sensors including camera, infrared, lidar and radar sensors, measuring the size of a component including its geometry, current and voltage sensors and others. Which sensors are used in a specific case depends on the respective process step.
  • functions of a machine and also the properties of a component are monitored by means of sensors, including the aforementioned sensors.
  • the invention includes the following sensor configurations:
  • Machine-related measurement: vibrations and temperatures of one or more components
  • the sensors use Internet of Things technology to transmit data, for example to one another, to individual components/modules of the solution according to the invention and/or to a cloud infrastructure. Automated or autonomous anomaly detection can thus be implemented.
  • the cloud infrastructure includes cloud-based data storage.
  • the cloud infrastructure is, for example, a public, private or hybrid cloud infrastructure.
  • the definition of what constitutes an anomaly can be based on the definition of what is to be understood by a state in the industrial manufacturing process.
  • Part of the solution according to the invention is based on the definition of the status of the production step or component to be monitored with regard to the data describing the status.
  • the status definition can form the basis for further steps up to the detection of anomalies by defining the data basis for the training of models and the use of the solution according to the invention.
  • the definition of both a state and the associated anomalies are not static concepts, but can be subject to change over time, for example because new sensors are available, quality requirements are changed, or materials involved in the process or product are changed.
  • A state describes the condition/properties of a process, a component and/or a (production) machine at a specific point in time or in a specific time interval.
  • This state is detected by measurements using suitable sensors as described above. Which sensors are suitable depends on the definition of the state as well as the system to be described (component, machine, ). Conversely, the availability of appropriate sensors also influences the definition of the state.
  • a state that is not accessible by appropriate sensors is not a meaningful definition of a state.
  • a state can have different characteristics in the industrial manufacturing process, the invention includes the following state definitions: • The state of a machine can be defined by all of the sensor data within a time interval, for example the last 10 seconds, and also by component-specific parameters.
  • the condition of a component can be defined by the results of a test, as well as by other component parameters, including different component variants.
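  • As a sketch of such a data-based state definition, a machine state could be represented by a small data structure; the concrete fields are assumptions for illustration only:

        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class MachineState:
            """Illustrative state definition: all sensor data of the last time window
            plus component-specific parameters (field names are hypothetical)."""
            timestamp: float
            window_s: float                   # e.g. the last 10 seconds
            vibration: np.ndarray             # raw vibration samples within the window
            temperatures: np.ndarray          # temperatures of the monitored components
            component_params: dict = field(default_factory=dict)  # e.g. variant, geometry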
  • a distribution of the states is determined depending on the respective state definition.
  • An anomaly is a condition, specified by corresponding (sensor) data, that is both rare and, based on the recorded data, deviates from almost all other conditions.
  • An anomaly is rare and different. Nevertheless, it can often be difficult to draw a precise distinction between normal and abnormal, and in many cases even not possible with the available data, so that ultimately a condition is only abnormal with a certain probability. In order to accommodate this fact, the human reviewer is proposed as part of the solution according to the invention.
  • the modules according to the solution according to the invention include hardware and/or software modules, including hardware and/or software modules for regulating and/or controlling industrial manufacturing processes and/or for anomaly detection.
  • the hardware modules include electronic units, integrated circuits, embedded systems, microcontrollers, multiprocessor systems-on-chip, central processors and/or hardware accelerators, e.g. graphics processors, data storage units and connectivity elements, e.g. WLAN modules, RFID modules, Bluetooth modules, NFC modules.
  • the anomaly detection is implemented as functional software in the cloud infrastructure.
  • the commands of the computer programs according to the invention include machine instructions, source text or object code written in assembly language, an object-oriented programming language such as C++, or a procedural programming language such as C.
  • According to one aspect of the invention, the computer programs are hardware-independent application programs that are provided, for example, via a data carrier or a data carrier signal, using software-over-the-air technology.
  • FIG. 2 shows a first exemplary embodiment of a system according to the invention for industrial anomaly detection
  • FIG. 3 shows a second exemplary embodiment of a system according to the invention for industrial anomaly detection
  • FIG. 4 shows an exemplary embodiment of a first module according to the invention of the system according to the invention
  • FIG. 6 shows an exemplary embodiment of a false positive rate after the selection of a limit value based on the anomaly scores from FIG. 5,
  • FIG. 7 shows an exemplary embodiment of an optimal limit value for the costs of incorrect classification as an anomaly based on FIG. 6,
  • FIG. 8 shows an exemplary embodiment of a second module according to the invention of the system according to the invention
  • FIG. 9 shows an exemplary embodiment of a method according to the invention for obtaining an analysis model for anomaly detection
  • FIG. 10 shows an exemplary embodiment of a method according to the invention for determining at least one anomaly value
  • FIG. 11 shows an exemplary embodiment of a method according to the invention for providing anomalies detected in an industrial manufacturing process to a checking authority
  • FIG. 12 shows an exemplary embodiment of a method according to the invention for checking anomalies.
  • a production line has a certain production capacity, for example 100 products per day. Production is also scaled up by using several production lines of the same type, in which the same product is then manufactured. In addition to a main line, there are secondary lines in which individual components of the main product are manufactured. A single production step can have different characteristics, for example a component can be screwed together, a component can be milled or a quality inspection can be carried out on a component.
  • FIG. 1 shows an industrial manufacturing process IF comprising a main line and two secondary lines.
  • the main line includes the production steps PS1, PS2, PS3.
  • One of the secondary lines also includes three production steps PS1, PS2, PS3 and relates, for example, to the production step PS2 of the main line.
  • the other secondary line also includes, for example, three production steps PS1, PS2, PS3 and relates, for example, to the production step PS3 of the main line.
  • an end-of-line test EOL is carried out.
  • the system IF-Anom according to the invention monitors, for example, the production steps PS, PS1, PS2, PS3 and/or the respectively associated production machines PM with regard to any anomalies that occur, see FIG. 2 (scenario 1).
  • the system IF-Anom according to the invention carries out quality checks with regard to any anomalies that occur, see FIG. 3 (scenario 2).
  • the IF-Anom system integrates the following steps or modules: data-based definition of the state Z of the system to be monitored; data storage and processing of all necessary data Data1-Data5; training of an anomaly detection based on historical data Data1; integration of a human reviewer Expert to check the detected anomalies NOK and to generate further annotations for the targeted expansion of the training database Data1; detection of anomalies; processing of the evaluation by the human reviewer Expert and, based on this, continuous adjustment and improvement of the anomaly detection.
  • the basic functionalities, such as saving and expanding the database Data, Data1-Data5, training models Ref, Anno, setting up configurations of trained models Config-Model and requesting annotations to expand the data basis Data, Data1-Data5, are handled by a first module IF-Anom Core, see Fig. 4.
  • the task of detecting potential anomalies NOK is carried out by a second module IF-Anom Anomaly Detector, see FIG. 8.
  • the response from IF-Anom then contains a classification of the state Z as "normal/abnormal", optionally with confidence estimation, and is sent to a human expert for verification via a third module IF-Anom annotation manager.
  • Whether IF-Anom's suggestions are actually verified by a human depends both on the use case and on how far the training of IF-Anom has progressed, i.e. how high the expected accuracy of IF-Anom's assessments is.
  • the IF-Anom system includes the option of integrating further data sources into a production step PS.
  • This can, for example, be an additional vibration sensor coupled to a special excitation.
  • the data generated in this way can be transmitted to the IF-Anom system during operation and, as soon as a sufficient amount of data has been generated, serve as an additional data source for model training.
  • Scenario 1 shown in FIG. 2 could, for example, relate to the case of a broken production machine. So far, in such a case, a repair has only been carried out after the production machine has failed and costs for a production downtime have thus arisen. If the production machine does not fail immediately, faulty components can still be produced unnoticed, which can be further processed and then lead to problems in downstream production steps PS, PS1, PS2, PS3 or products.
  • Scenario 2 shown in FIG. 3 could relate to the testing of components, for example.
  • Component also refers to finished products.
  • Scenario 2 shown in FIG. 3 thus also includes, in particular, the end-of-production inspection EOL of a product.
  • EOL end-of-production inspection
  • This plays a central role in the industrial manufacturing process IF and ensures consistent, high quality of the components produced.
  • the challenge here is to quantify the requirements for a good component, which leads to initial difficulties in testing, especially for new components. If the corresponding requirements are not defined precisely enough, there is a risk of too many rejects on the one hand, and on the other hand there is a risk that defective components will be further processed and/or delivered.
  • the system IF-Anom according to the invention can carry out the method according to the invention for checking anomalies NOK.
  • the penetration of the industrial manufacturing process IF and/or of the tests with sensors, and the corresponding availability of data Data2, enables the intelligent system IF-Anom and/or its respective modules IF-Anom Core, IF-Anom Anomaly Detector and IF-Anom Annotation Manager, using artificial intelligence among other things, to record the state Z of a production step PS, PS1, PS2, PS3 or of a test on the basis of data and subsequently to detect and report abnormal behavior.
  • this enables, on the one hand, an early reaction to changes that occur, ideally before a process comes to a standstill or a defective component continues in the process chain, and, on the other hand, a better identification of the underlying error.
  • Machine learning is a technology that teaches computers and other data processing devices to perform tasks by learning from data rather than being programmed to do the tasks.
  • the solution according to the invention corresponds to a data-based application, for which machine learning can advantageously be used.
  • the individual computer-implemented methods and modules according to the invention and the system according to the invention execute machine learning algorithms.
  • the analysis models for anomaly detection are based on machine learning algorithms.
  • Examples of machine learning algorithms covered by the invention are artificial neural networks, for example convolutional networks, support vector machines and random forest models.
  • the term process step is used as a general term for a production step as well as for a test.
  • the system IF-Anom monitors a production step PS or a production machine PM, for example. This could be, for example, a lathe machining a component or a component being screwed together by a human being.
  • the procedure for using the IF-Anom system is, for example, as follows. After the process step to be monitored has been determined, the relevant database Data2 is assessed. Based on this, a data-based definition of the state Z to be monitored is established. Then the initial training data Data1 for the IF-Anom system is selected.
  • the main data source of the training data Data1 regularly consists of the historical data of the process in question and of the upstream processes, insofar as these are relevant for the assessment of the state Z. This can be, for example, geometry information from the components involved.
  • data from comparable processes can be used as a second data source for the training data Data1.
  • synthetic data can be used as a third data source of the training data Data3. In this case, data is artificially generated/simulated, for example via a physical model of the process in question.
  • a transfer learning can be used, for example.
  • the processing of the data Data1-Data4 and the model training are taken over by the first module IF-Anom Core, see Fig. 4.
  • data Data2 is continuously sent from the monitored process to the second IF-Anom anomaly detector module.
  • the second module IF-Anom anomaly detector receives the current models from the first module IF-Anom Core, as well as an application-specific configuration Config-Model, in which, for example, the necessary pre-processing of the incoming data Data2 is specified. Using these models, the current state Z is then assessed by the second module IF-Anom anomaly detector.
  • This assessment is passed on to the third module IF-Anom annotation manager.
  • the third module IF-Anom annotation manager takes over the task of deciding whether and when a state Z is given to a human expert, and to which one, for assessment. Roughly speaking, detected anomalies are, for example, almost always propagated, and states detected as normal only under certain circumstances.
  • the evaluations by the human expert go back to the third module IF-Anom Annotation Manager, which decides in what form annotations go back to the first module IF-Anom Core in order to expand the database there accordingly.
  • the third module IF-Anom annotation manager has access to meta information regarding the assessment, for example an anonymous identification of the assessing expert, especially if necessary for data protection reasons, as well as a time stamp when the assessment was written. Details of the third module IF-Anom Annotation Manager are described below.
  • the data sent to the second module IF-Anom Anomaly Detector are passed on to the first module IF-Anom Core Module for processing and data storage.
  • the first module IF-Anom Core regularly retrains or newly trains anomaly detection models.
  • the first module IF-Anom Core also has the possibility to send annotation requests to the third module IF-Anom Annotation Manager.
  • If an anomaly NOK is confirmed as such by the expert reviewer, or if the detected anomaly NOK is released without checking, two reactions are set in motion, for example.
  • the monitored production step PS is examined more closely based on the state Z recognized as abnormal, in order to decide whether/when maintenance/repair must be carried out.
  • the components that were processed during and after the occurrence of the NOK anomaly can be sorted out as a precaution and/or examined separately.
  • An exemplary application of the fourth module IF-Anom Data for additional data generation in this scenario is a specially designed test program that generates machine states which do not occur under standard production conditions but contain a high level of information about the health of the components of the production machine PM to be monitored.
  • Fig. 3 shows the application of the IF-Anom system to industrial test methods, for example the end-of-production test EOL of gears. While there are hardly any differences between the scenarios shown in FIGS. 2, 3 at the level of algorithmic anomaly detection, there are significant differences with regard to the data.
  • Test methods carry out an annotation into OK and not-OK (NOK).
  • the main area of application of the IF-Anom system in this scenario is to search for anomalies NOK within the OK components that are not recognized as NOK by the current test procedure. These are also often referred to as unknown anomalies, in contrast to the known anomalies, the NOK.
  • the use of the IF-Anom system can lead to the replacement of the existing test procedure, which would then mean that all components would be assessed by the IF-Anom system.
  • the fourth data source is data Data4 from the later use of products, so-called field data. If problems or failures occur during the use of products, this is a possible indication of NOK anomalies that are not detected by the current test procedure and thus an opportunity to test and train anomaly detection algorithms.
  • An important step in data generation here is the generation of special excitations within test procedures by the fourth module IF-Anom Data, for example driving a transmission through different gears over a certain period of time in different load states.
  • the data generated under this excitation often has a much higher information content about the process step to be analyzed than data generated without this targeted excitation.
  • a recognized, confirmed anomaly NOK leads to the sorting out and repair or dismantling of the component in question.
  • the first module IF-Anom Core includes two essential functions of the IF-Anom system: on the one hand the processing and storage of incoming data Data1-Data4 using a data processing module, on the other hand the training and evaluation of models for anomaly detection using a training module Modul2. Both functions are subject to a temporal aspect, i.e. at the beginning of the use of the IF-Anom system, existing data Data2 from the process to be monitored is accessed first, and/or data from comparable processes or synthetically generated data. Based on this, a state definition is developed. Data and state definition are brought into a form suitable for the subsequent steps and stored.
  • a first data memory Mem1 stores the data and/or configurations of the data processing module.
  • the training module Modul2 reads in the data from the first data memory Mem1.
  • the existing data is used for the initial training of anomaly detection models.
  • a suitable separation of the existing data into training, validation and test data is assumed here as a standard procedure and is therefore not mentioned explicitly.
  • the IF-Anom system has two different approaches to anomaly detection.
  • the normal state of a process step PS is recorded. This is based on the assumption that abnormal states in a process step PS are rare. Based on this assumption, a quantitative characterization of the distribution of the training states, i.e. almost exclusively normal states, is calculated. This characterization of normal states is called the reference state, or reference for short. The degree of deviation between a questionable state and the reference is then calculated for anomaly detection.
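  • A minimal sketch of this reference-based scoring, reusing the mean/standard-deviation reference images from the earlier sketch, could look as follows (the averaging over positions is an assumption for illustration):

        import numpy as np

        def reference_anomaly_score(state: np.ndarray,
                                    mean_ref: np.ndarray,
                                    std_ref: np.ndarray,
                                    eps: float = 1e-9) -> float:
            """Degree of deviation of a questionable (h, k) state from the reference.

            Sketch only: a standardized deviation per position, averaged over all
            h x k positions; larger values indicate a more abnormal state.
            """
            z = np.abs(state - mean_ref) / (std_ref + eps)
            return float(z.mean())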
  • For the annotation-based approach, conspicuous, i.e. potentially abnormal, behavior from the past must be known and corresponding data must be available, i.e. historical data must at least be divided into the classes normal/abnormal.
  • the IF-Anom system learns to recognize patterns that are associated with conspicuous or inconspicuous behavior in the data.
  • patterns are then sought that indicate abnormal behavior and their occurrence is quantified.
  • the more detailed the available annotations, the more accurate the output. If only annotations of the form normal/abnormal are present, the IF-Anom system will only classify states into these classes, as in the reference-based methodology.
  • the advantage of the reference-based methodology is a potentially more generic anomaly detection, which can be particularly beneficial when detecting unknown anomalies.
  • Benefits of the annotation-based methodology are a higher level of detail in anomaly detection and potentially a better compromise between false-positive and false-negative predictions on known anomalies.
  • the system IF-Anom can combine the results of a reference-based and an annotation-based approach Ref, Anno to train meta models Meta.
  • the type of combination depends on the application, the quality of the available models and the current data.
  • Various types of combination are possible, from a simple weighted sum to specially developed models, which are also trained on historical data/anomaly predictions and are used by the second module IF-Anom Anomaly Detector.
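  • The simplest of the combinations mentioned above, a weighted sum of the two anomaly scores, could be sketched as follows; the weight is an assumed parameter that would in practice depend on the application and on the historically observed quality of the two models:

        def combine_scores(as1: float, as2: float, w_ref: float = 0.5) -> float:
            """Weighted sum of the reference-based score AS1 and the annotation-based
            score AS2; a trained meta model would be the more elaborate alternative."""
            return w_ref * as1 + (1.0 - w_ref) * as2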
  • the trained reference models, annotation-based models, the combined meta-models and/or the results of the model evaluation are stored or temporarily stored in a second data memory Mem2 of the training module Module2.
  • the released models are reloaded for use in the second module IF-Anom anomaly detector.
  • the IF-Anom system can be put into operation. This opens up further sources of information for the IF-Anom system.
  • the current process step or test data which is sent to the second module IF-Anom Anomaly Detector, is made available to the system.
  • the IF-Anom system receives the manual evaluations of the potential anomalies, which are carried out at least statistically.
  • the IF-Anom system can send annotation requests to the third module IF-Anom Annotation Manager during the training of models, in order to expand the database with annotations, for example in the context of active learning.
  • the IF-Anom system thus builds up an annotated database during operation. This contains at least the classes normal/abnormal, but can also have a higher level of detail if the human reviewer Expert provides appropriate annotations. As a result, the IF-Anom system will always have an annotated dataset available over time.
  • the database, which is expanded during operation, is used to regularly train new models. Even if only reference-based models Ref were used at the beginning, due to the absence of an annotated data set, annotation-based models Anno can now also be used. These can then replace or supplement the models currently used in operation. In addition, successfully trained and tested models can be used in an iterative process to clean the historical data, which was used for the initial training of an often reference-based model Ref, of anomalies that were not recognized in the past. By improving the quality of the initial training set, an additional improvement in model accuracy can be achieved.
  • the monitoring of the model training and the release of new models for productive anomaly detection can be automated using the monitoring module and/or the control module Modul3, or carried out manually based on the accuracy achieved, and can be controlled by the control module Modul3 via a user interface.
  • the control module Modul3 is also used to display the accuracy of the anomaly detection achieved in the past, for example over the last x days, including the proportion of actual anomalies among the proposed anomalies, and thus fulfils a monitoring function.
  • the monitoring module reads the data from the first data memory Mem1 and can exchange data with the control module Modul3.
  • Employed models can make errors in two directions, a condition recognized as abnormal is in fact normal, which corresponds to a false positive result. Or a condition that is actually abnormal has been classified as normal, which corresponds to a false negative result. In most cases there is competition between the two errors, i.e. a lower number of false negative predictions is accompanied by a higher number of false positive predictions and vice versa.
  • the challenge at this point is that for a meaningful assessment of the usability of a model, the cost/cost function that an undetected anomaly or a normal state that is recognized as an anomaly entails must be taken into account.
  • the IF-Anom system offers quantitative support for this.
  • the IF-Anom system offers the possibility of a detailed examination, in which the respective costs of the possible errors can be taken into account and thus an optimal decision can be made under the given circumstances, see Fig. 5.
  • the first module IF-Anom Core performs the method of obtaining analysis models according to the invention.
  • a model, here a logistic regression, assigns an anomaly score AS1, AS2, AS3 to test examples.
  • the number of states is plotted on the abscissa.
  • the anomaly score is plotted on the ordinate.
  • the points marked as normal/abnormal represent ground truth.
  • the normal/abnormal classification would now be based on a limit value or threshold related to the anomaly score AS1, AS2, AS3. For example, if the cutoff is set to 0.8, all anomaly scores determined by the model that are greater than 0.8 correspond to an anomaly. That is, the points marked as normal above 0.8 correspond to false positive events.
  • FIG. 6 shows a threshold optimization curve, also known as a receiver operating characteristic.
  • the AUC value, i.e. the size of the area under the curve, is a quality measure for the model. If the costs, including material costs, of an unrecognized anomaly, i.e. a false negative, and of a wrong classification as an anomaly, i.e. a false positive, are known, an optimal limit value, i.e. an optimal model, can be calculated for productive use, see the dashed line in FIG. 7.
  • If the limit value is set too small, all components whose obtained anomaly value is above the limit value are sorted out, including many incorrectly classified components. This leads to high costs. If the limit value is set too large, many components will be retained, including many components with undetected anomalies. The optimal limit value obtained by optimizing the cost function balances these two trends.
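  • A sketch of such a cost-based choice of the limit value, assuming annotated test scores and known costs per false positive and per false negative, could be:

        import numpy as np

        def optimal_threshold(scores: np.ndarray, labels: np.ndarray,
                              cost_fp: float, cost_fn: float) -> float:
            """Pick the limit value on the anomaly score that minimizes the total cost.

            Sketch only: `labels` is 1 for (ground-truth) abnormal and 0 for normal test
            states, `cost_fp` is the cost of wrongly sorting out a good part, `cost_fn`
            the cost of an undetected anomaly.
            """
            best_t, best_cost = 0.0, np.inf
            for t in np.unique(scores):
                predicted_anomaly = scores > t
                fp = np.sum(predicted_anomaly & (labels == 0))
                fn = np.sum(~predicted_anomaly & (labels == 1))
                cost = cost_fp * fp + cost_fn * fn
                if cost < best_cost:
                    best_t, best_cost = float(t), cost
            return best_t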
  • Logistic regression is just one example of an analysis model here.
  • the solution according to the invention is not limited to a specific analysis model.
  • Further analysis models for carrying out the invention include, for example, isolation trees, also known as isolation forests, autoencoders, generative adversarial networks, convolutional networks or support vector machines.
  • Isolation Forest denotes an anomaly detection algorithm that identifies anomalies through isolation. Anomalies are isolated using binary trees. Isolation forest works well in situations where the training set contains no or few anomalies. This means that the isolation forest is an advantageous analysis model for the solution according to the invention.
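  • For illustration, an isolation forest could be trained on flattened state vectors with scikit-learn; the library, the placeholder data and all parameters below are assumptions and are not prescribed by the disclosure:

        import numpy as np
        from sklearn.ensemble import IsolationForest

        # X_train holds (mostly normal) historical states flattened into feature
        # vectors, e.g. of shape (N, h*k); random placeholder data is used here.
        X_train = np.random.default_rng(0).normal(size=(1000, 64))

        model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
        model.fit(X_train)

        # score_samples is higher for more normal samples, so its negation can be used
        # directly as an anomaly score (higher = more abnormal).
        anomaly_scores = -model.score_samples(X_train)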
  • the second challenge is usually that there are few or, in the extreme case of unknown anomalies, no test anomalies that can be used for the evaluation. This means at least that the evaluation of different models suffers from a statistical significance problem, and in extreme cases it is not even possible to evaluate models at first.
  • the IF-Anom system solves this problem by linking it to a human expert evaluator. If there are not enough test anomalies at the beginning, the various models are evaluated by manually assessing the anomalies they find. As a result, an initial evaluation of models takes place, by means of which a model can be selected for the first productive use.
  • an initial set of annotated examples is generated, which can be used to evaluate models in the further course of using the IF-Anom system.
  • the exchange between examples for annotation and the corresponding assessments by a human expert reviewer is carried out by the third module IF-Anom annotation manager.
  • the second module IF-Anom anomaly detector takes on the task of processing incoming data Data2 according to the existing state definition and data pre-processing routines.
  • the incoming data is processed in the same way as the historical data Data1 used for model training.
  • An assessment is then carried out using the available anomaly detection models.
  • the output of each model includes an anomaly score AS1, AS2, AS3.
  • the anomaly score AS1, AS2, AS3 indicates how abnormal the evaluated state Z is; usually, but not necessarily, the higher the score, the more abnormal the state. Which methodology and which models are used depends on the application and on the availability of models of sufficient quality.
  • the anomaly value calculated by this model is sent to the third module IF-Anom annotation manager.
  • this value is passed through by the unifier module Modul5. This case usually occurs at the beginning of the use of the IF-Anom system, when not enough annotated data is available and only a reference-based model is used.
  • a data input/control module Modul4 determines fifth data Data5 from the data of the first module IF-Anom Core and/or from the configuration of trained models Config-Model.
  • the fifth data Data5 includes the status data and models.
  • the second module IF-Anom anomaly detector carries out a reference-based anomaly detection Ref-anomaly and/or an annotation-based anomaly detection Anno-anomaly.
  • a first anomaly score AS1 is determined in the reference-based anomaly detection Ref-anomaly.
  • a second anomaly score AS2 is determined in the annotation-based anomaly detection Anno-anomaly.
  • the unifier module Modul5 takes on the task of calculating a final, third anomaly score AS3 from the first and second anomaly scores AS1, AS2.
  • the following procedure can be used:
  • If the annotation-based model detects an anomaly, i.e. the anomaly score AS2 belonging to an anomaly class is greater than a set limit value, then this assessment is given to the third module IF-Anom annotation manager.
  • the unifier module Modul5 simply passes on the result.
  • If the annotation-based model does not detect an anomaly, the unifier module also receives the assessment of the reference-based model. There are then two possible approaches:
  • the unifier module takes over the assessment of the reference-based model and gives the calculated first anomaly score AS1 together with a classification based on a limit value as normal/abnormal to the third module IF-Anom annotation manager.
  • Alternatively, the unifier module takes the assessments of the annotation-based and the reference-based approach and generates a meta-prediction from them.
  • a separate model Meta can be trained for this purpose.
  • meta-information about the state Z in question as well as historical test accuracies of the models used can also be included in this model.
  • an anomaly score, here the third anomaly score AS3, as well as a threshold-based assessment normal/abnormal, is passed to the third module IF-Anom annotation manager.
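  • The unifier logic described in the preceding bullets could be sketched as follows; the threshold values and the optional meta model are illustrative assumptions:

        def unify(as1: float, as2: float, t_anno: float, t_ref: float,
                  meta_model=None, meta_features=None):
            """Sketch of the unifier module Modul5: returns (anomaly score, is_abnormal)."""
            if as2 > t_anno:
                # the annotation-based model detected an anomaly: pass its result through
                return as2, True
            if meta_model is None:
                # fall back to the assessment of the reference-based model
                return as1, as1 > t_ref
            # otherwise combine both assessments (and optional meta information)
            as3 = float(meta_model.predict([meta_features])[0])
            return as3, as3 > t_ref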
  • the communication between the IF-Anom system and the manual check of the predicted anomaly values/anomaly classes is taken over by the third module IF-Anom annotation manager.
  • the data pre-processed by the second IF-Anom anomaly detector module are sent to the first IF-Anom Core module for data storage and further processing.
  • the second module IF-Anom anomaly detector carries out the method according to the invention for determining at least one anomaly value AS1, AS2, AS3.
  • the third module IF-Anom Annotation Manager takes over the tasks related to the generation of annotations, including:
  • the third module IF-Anom annotation manager carries out the method according to the invention for providing recognized anomalies to the checking instance Expert.
  • the models of the second module IF-Anom anomaly detector assign an anomaly value AS1, AS2, AS3 to a state Z to be examined, as well as a classification based on a limit value, at least into the classes normal/abnormal; in the case of training data with a higher level of detail, into the correspondingly more detailed classes. Based on this assessment, the third module IF-Anom annotation manager decides how to proceed.
  • If a state Z has been assessed as abnormal, a decision is made as to whether a check by a human expert takes place or, usually if the expected accuracy is high, whether the state Z is reported as an anomaly NOK without further checking.
  • the state Z can still be sent to an Expert for verification, flagged as a potential anomaly, in order to subject the modeling carried out by the IF-Anom system to a statistical control and at the same time to ensure the attention of the Expert verifiers.
  • the probability of whether a normal state is sent to a checker Expert can be made dependent on the calculated anomaly value AS1, AS2, AS3, for example the more abnormal, the more likely a check will take place.
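  • A sketch of such a score-dependent sampling of normal states for review, with an assumed base rate and an assumed shape of the probability curve, could be:

        import random

        def send_normal_state_for_review(anomaly_score: float, threshold: float,
                                         base_rate: float = 0.02) -> bool:
            """Decide whether a state classified as normal is still sent to a reviewer.

            Sketch only: the closer the score is to the anomaly threshold, the more
            likely a manual check; a small base rate keeps up statistical control.
            """
            closeness = max(0.0, min(1.0, anomaly_score / threshold))
            probability = base_rate + (1.0 - base_rate) * closeness ** 4
            return random.random() < probability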
  • the continuous improvement of the IF-Anom system is achieved by using the annotations of the reviewers Expert for further training of models, and thus the knowledge of the human experts Expert flows into the anomaly detection.
  • a characteristic of many of the models used here, especially those from the field of artificial intelligence, is that a certain number of examples of a class must be present for the class to be successfully learned and consequently recognized. In the operation of the IF-Anom system, this can lead to a human reviewer Expert very often having to classify very similar states Z as a true-positive or false-positive prediction before a model is able to reliably recognize a comparable state Z as sufficiently abnormal or as not abnormal.
  • Before a potential anomaly is given to a verifying Expert, the third module IF-Anom annotation manager can calculate the similarity of a state Z to states that have already been recognized. If there is a high level of agreement, the state Z in question can then be given the annotation of the already known, similar state Z without further checking.
  • the degree of the necessary similarity is defined via a limit value to be set.
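  • A sketch of the similarity check described above, here using cosine similarity and an assumed limit value, could be:

        import numpy as np

        def reuse_annotation(state_vec: np.ndarray, known_states: np.ndarray,
                             known_annotations: list, similarity_limit: float = 0.98):
            """Return a known annotation if the new state is sufficiently similar.

            Sketch only: cosine similarity against already annotated states (one row per
            state); the similarity measure and the limit value are illustrative.
            """
            norms = np.linalg.norm(known_states, axis=1) * np.linalg.norm(state_vec)
            sims = known_states @ state_vec / np.maximum(norms, 1e-12)
            best = int(np.argmax(sims))
            if sims[best] >= similarity_limit:
                return known_annotations[best]   # reuse the known annotation
            return None                          # send to a human reviewer instead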
  • the first module can send annotation requests to receive annotations to improve the models during training, for example in the context of active learning.
  • For example, a model calculates that the annotation of certain training states would particularly increase the model quality; these states Z are then given to a verifying Expert by the third module IF-Anom annotation manager in order to specifically expand the annotated database.
  • the first module IF-Anom Core can also send annotation requests to the third module IF-Anom Annotation Manager for the purpose of model evaluation.
  • the final evaluation of a trained model requires a test data set. If this is incomplete or too small, the first module IF-Anom Core can make specific annotation requests to evaluate a model.
  • These are then sent by the third module IF-Anom annotation manager to a verifier Expert. This case plays a major role in particular at the beginning of the use of the IF-Anom system or after process changes, if not enough suitable test states Z are available.
  • Another functionality of the IF-Anom system, more precisely of the third module IF-Anom annotation manager, is that different human reviewers Expert are systematically tested.
  • a status Z that has already been checked is given again to an expert for annotation.
  • This can be the same Verifier Expert, for example at a different time of day, or it can be a different Verifier Expert.
  • This essentially serves to weight the annotations given by the Expert reviewers accordingly and, if necessary, to continuously improve the annotation quality and thus the accuracy of the IF-Anom system by asking other Expert reviewers and/or at other times.
  • a basic goal when performing the annotations described above is to improve the existing database and thus the accuracy of the anomaly detection models.
  • the IF-Anom system performs an evaluation of the incoming annotations. This evaluation is based firstly on an assessment of the accuracy of an annotation by the expert reviewer himself, i.e. an expert reviewer states how certain he is of an annotation, for example in the three levels very certain, certain, unsure.
  • the IF-Anom system performs its own assessment of the quality of an annotation based on metadata such as the ID of the Verifier Expert and the quality of the annotations made by this Verifier Expert in the past, including time of day, day of the week, degree of anomaly, self-assessment and other relevant variables as well as the assessment by the reviewer Expert associated with an annotation.
  • the IF-Anom system decides whether an annotation is trustworthy enough to immediately send it to the first IF-Anom module Core to expand the training data set there, or whether the state Z is sent to a human reviewer Expert for further annotation. This usually involves selecting a different person, but it can also select the same person at a different time of day.
  • If the annotations contradict one another, a third annotation is requested, for example. If there is a majority, the annotation can be sent to the first module IF-Anom Core, or one of the next steps can also be carried out.
  • a state Z is not necessarily assigned a single annotation 100%; rather, a state Z can have several, even contradictory, annotations, with each annotation being weighted, i.e. a state is, for example, 80% not okay and 20% okay.
  • This weight can be assigned by the IF-Anom system based on its assessment of the annotation quality.
  • the weighted annotations are then sent to the first module IF-Anom Core to extend the training data set.
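  • For illustration, such a weighted, possibly contradictory annotation of a state Z could be stored as a record like the following; the structure and field names are hypothetical:

        # Weighted (possibly contradictory) annotation for one state Z; the weights
        # reflect the system's trust in the individual reviewer verdicts.
        weighted_annotation = {
            "state_id": "Z-2022-000123",
            "labels": {"NOK": 0.8, "OK": 0.2},   # e.g. 80 % not okay, 20 % okay
            "sources": [
                {"reviewer_id": "expert-07", "label": "NOK", "weight": 0.8},
                {"reviewer_id": "expert-12", "label": "OK",  "weight": 0.2},
            ],
        }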
  • whether a finished transmission is in a flawless state Z is determined as part of an end-of-line test EOL.
  • This test includes a functional and acoustic test.
  • a speed ramp is driven and, in the meantime, the resulting structure-borne noise is measured on the gearbox housing.
  • the established assessment of the transmission condition then occurs in the following manner.
  • the time signal is converted into a sonogram using a Fast Fourier Transform.
  • In the sonogram, areas can be assigned to specific transmission components and provided with limit values that must not be exceeded. Based on these areas, so-called characteristics, and the associated limit values, the state Z of a transmission is determined as OK if no limit value is exceeded and as not OK (NOK) if at least one limit value is exceeded.
  • NOK Not OK
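  • The established limit-value test described above could be sketched as follows; the representation of the characteristics is an assumption for illustration:

        import numpy as np

        def established_eol_check(sonogram: np.ndarray, characteristics: list) -> str:
            """Established OK/NOK decision as described above.

            `sonogram` is a (time, frequency) matrix; each characteristic is a dict with
            index ranges and a limit value (all names are illustrative).
            """
            for c in characteristics:
                region = sonogram[c["t_lo"]:c["t_hi"], c["f_lo"]:c["f_hi"]]
                if region.max() > c["limit"]:
                    return "NOK"     # at least one limit value exceeded
            return "OK"              # no limit value exceeded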
  • the established transmission acoustic test is expanded to include anomaly detection using the IF-Anom system according to the invention.
  • the data basis of the IF-Anom system consists of the time signals that are recorded during the acoustic test and further meta information, including the variant of the transmission, the ID of the testing test station, the ID of the production line, and the result of the established test, NOK or OK. All of this data then also defines the state Z of a transmission, i.e. the state Z of a transmission in the sense of the IF-Anom system is defined by the acoustic signal from the EOL test, the result of the EOL test NOK/OK, the transmission variant, the checking test station and the production line.
  • the historical acoustic test data of various transmission variants, test stations and production lines are available as training data Data1, as well as the annotation into OK and NOK according to the established test procedure described above.
  • the training data set Data1 for reference-based models can thus be optimized to the extent that the known NOK are already removed.
  • the existing annotations can also be used to train annotation-based models. However, these are then only applied to OK cases in order to find unrecognized NOK.
  • Complaints from the operation of the gearboxes are used as data Data4, a further data source for checking the anomaly detection. It is checked to what extent the anomalies recognized in the historical data are related to complaints in the field.
  • the data pre-processing includes a transformation of the acoustic time signal into the frequency domain by means of a Fourier transform or a wavelet transform. Furthermore, normalization steps, for example to the speed, can be added.
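  • A sketch of this pre-processing, here using a short-time Fourier transform via SciPy (a wavelet transform would be an equally valid choice; window lengths and the normalization are assumptions), could be:

        import numpy as np
        from scipy.signal import spectrogram

        def preprocess(time_signal: np.ndarray, fs: float) -> np.ndarray:
            """Transform the acoustic time signal into a time-frequency representation."""
            f, t, Sxx = spectrogram(time_signal, fs=fs, nperseg=1024, noverlap=512)
            Sxx = 10 * np.log10(Sxx + 1e-12)                # to a dB scale
            Sxx = (Sxx - Sxx.mean()) / (Sxx.std() + 1e-9)   # simple normalization
            return Sxx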
  • the anomaly detection models described below are trained with the initially available data.
  • a model that is based on the statistical recording of the distribution of the training data and calculates the deviation from the statistically observed normal state for anomaly detection. This model thus falls into the category of reference-based models.
  • An autoencoder is a model with a two-part neural network architecture. One part of the model learns to create a low-dimensional representation of a state, for example a sonogram, and the second part of the model learns to restore the original state from it. The error made in doing so is larger for infrequently occurring states than for the majority of the occurring states Z. This model can thus be used for anomaly detection. The anomaly score AS1 then results from the difference between the original sonogram and the restored sonogram calculated by the model.
  • This model is also a reference-based model.
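The following PyTorch sketch illustrates the two-part autoencoder idea and the derivation of the anomaly score AS1 from the reconstruction error; the layer sizes and the flattened sonogram input of length 1024 are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SonogramAutoencoder(nn.Module):
    def __init__(self, n_in=1024, n_latent=32):
        super().__init__()
        # first part: low-dimensional representation of the state
        self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        # second part: restores the original state from the representation
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score_as1(model, sonogram_flat):
    """AS1 = reconstruction error between original and restored sonogram."""
    with torch.no_grad():
        recon = model(sonogram_flat)
    return torch.mean((sonogram_flat - recon) ** 2).item()

model = SonogramAutoencoder()
print(anomaly_score_as1(model, torch.randn(1024)))
```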
  • a convolutional network can be trained to differentiate between OK and NOK states.
  • the trained model can only be used for states Z that have been classified as OK by the established method.
  • the model looks for patterns in the OK states that actually belong to NOK cases.
  • the probability calculated by the model that the condition in question belongs to the NOK class then serves as the anomaly score AS2.
  • This model can be classified in the class of annotation-based models.
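A minimal sketch of such an annotation-based model: a small convolutional network whose predicted NOK probability serves as anomaly score AS2. The architecture and the 64x64 sonogram patch size are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class OkNokClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, 1)   # logit for the NOK class

    def forward(self, x):                        # x: (batch, 1, 64, 64)
        z = self.features(x).flatten(1)
        return self.head(z)

def anomaly_score_as2(model, sonogram):
    """AS2 = predicted probability that the state belongs to the NOK class."""
    with torch.no_grad():
        return torch.sigmoid(model(sonogram)).item()

print(anomaly_score_as2(OkNokClassifier(), torch.randn(1, 1, 64, 64)))
```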
  • the models described are trained at the level of transmission variants and production lines, i.e. there is one version of each model for each combination of transmission variant and production line. This restriction is recorded in the associated configuration Config-Model, so that the model belonging to the transmission in question is always loaded later on.
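One simple way to organise this restriction is a lookup keyed by transmission variant and production line; the structure and field names of the Config-Model entries below are assumptions.

```python
MODEL_REGISTRY = {
    # (transmission_variant, production_line) -> hypothetical Config-Model entry
    ("GearboxA", "Line1"): {"model_path": "models/a_line1.pt", "threshold": 0.95},
    ("GearboxA", "Line2"): {"model_path": "models/a_line2.pt", "threshold": 0.95},
}

def load_config_model(variant, line):
    """Return the Config-Model entry for the transmission under test."""
    try:
        return MODEL_REGISTRY[(variant, line)]
    except KeyError:
        raise KeyError(f"No trained model for variant={variant}, line={line}")

print(load_config_model("GearboxA", "Line1"))
```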
  • the accuracy of the developed models is tested in two ways. As a quick test, the accuracy with which a model would have found the known NOK of a test data set is calculated. If the result is promising, annotation requests are sent to the human reviewer Expert to analyse the accuracy with which previously unknown anomalies are detected.
  • the model for the unifier module Module5 is kept simple here: if an annotation-based model does not detect an anomaly, the assessment of the reference-based model is used, i.e. in this case no additional model training for the unifier module Module5 is required.
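Written out as code, this unifier rule could look like the following sketch; the threshold parameters are assumptions.

```python
def unify(as_reference, as_annotation, anno_threshold, ref_threshold):
    """If the annotation-based model does not flag an anomaly, fall back to the
    reference-based assessment; no trained unifier model is needed."""
    if as_annotation >= anno_threshold:
        return "anomaly", as_annotation
    if as_reference >= ref_threshold:
        return "anomaly", as_reference
    return "normal", max(as_reference, as_annotation)

print(unify(as_reference=0.2, as_annotation=0.1, anno_threshold=0.9, ref_threshold=0.9))
```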
  • the threshold values used to decide whether an anomaly score leads to classification as abnormal are set very conservatively in this case, for the following reasons:
  • the IF-Anom system is only used as an additional safety net.
  • Verifying the potential anomalies is laborious, so it is beneficial to focus on the most clearly anomalous states Z.
  • for each EOL-tested transmission, the data belonging to its state Z are sent to the second module IF-Anom anomaly detector.
  • the statistical model is used as the reference-based model and, in addition, the annotation-based model described above is used.
  • the final anomaly prediction is then sent to the third module IF-Anom annotation manager and, if an anomaly is detected, is always forwarded to the human reviewer Expert, since the thresholds are set correspondingly high.
  • a gearbox classified as not abnormal will be sent for expert checking with a certain probability. The probability depends on the distance to the set anomaly score limit.
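A possible realisation of this spot-check is sketched below; the description above only states that the probability depends on the distance to the set limit, so the specific mapping used here is an assumption.

```python
import random

def send_to_expert(anomaly_score, threshold, base_rate=0.01):
    """Decide whether a state is routed to the Expert for checking."""
    if anomaly_score >= threshold:
        return True                                  # detected anomaly: always checked
    closeness = max(0.0, anomaly_score / threshold)  # 0..1, how close to the limit
    # probability grows as the score approaches the anomaly score limit
    return random.random() < base_rate + (1.0 - base_rate) * closeness ** 4

print(send_to_expert(anomaly_score=0.7, threshold=0.9))
```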
  • an explainability output can be generated for the models used. This makes it transparent for the expert reviewer which areas of the sonogram were relevant for the normal/abnormal decision.
  • a transmission that is actually abnormal is then sorted out and repaired in a rework process.
  • two accuracy figures are monitored; they can be calculated from the manually annotated states Z requested by IF-Anom.
  • these are, on the one hand, the rate of false positives among the states Z classified as abnormal and, on the other hand, the rate of missed anomalies, i.e. false negatives, among the states Z classified as normal.
  • an increase in the false positive rate and/or the false negative rate can be counteracted either by changing the set limit values or by retraining the models, preferably on more recent data.
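For illustration, the two rates could be computed from the expert annotations as in the sketch below; the record format with "predicted" and "expert" fields is an assumption.

```python
def monitor_rates(records):
    """records: iterable of dicts with 'predicted' and 'expert' in {'normal', 'abnormal'}."""
    flagged = [r for r in records if r["predicted"] == "abnormal"]
    passed  = [r for r in records if r["predicted"] == "normal"]
    # false positives among the states classified as abnormal
    fp_rate = (sum(r["expert"] == "normal" for r in flagged) / len(flagged)) if flagged else 0.0
    # missed anomalies (false negatives) among the states classified as normal
    fn_rate = (sum(r["expert"] == "abnormal" for r in passed) / len(passed)) if passed else 0.0
    return fp_rate, fn_rate

demo = [{"predicted": "abnormal", "expert": "normal"},
        {"predicted": "abnormal", "expert": "abnormal"},
        {"predicted": "normal",   "expert": "normal"}]
print(monitor_rates(demo))   # rising rates suggest adjusting limits or retraining
```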
  • FIG. 9 schematically shows the computer-implemented method according to the invention for obtaining an analysis model for anomaly detection in an industrial manufacturing process IF.
  • data Data1, Data2, Data3, Data4 are obtained from states Z, for example of a production machine PM, in, for example, a process step PS to be monitored.
  • a state definition is determined from the data in a method step C2.
  • the data and the state definition are stored in a first data memory Mem1.
  • an analysis model for anomaly detection in the form of an artificial intelligence K1, for example an artificial neural network, is trained on the data Data1, Data2, Data3, Data4.
  • the analysis model learns to classify states Z that are rare and/or deviate from other states Z as anomalies.
  • the trained analysis model is evaluated by the checking instance Expert using an annotation of the anomalies found.
  • the data Data1, Data2, Data3, Data4 are expanded with the annotations in a method step C6 and stored.
  • the trained analysis model and its evaluation are stored in a method step C7.
  • the first module IF-Anom Core is a software module, a hardware module or a combination of software and hardware module.
  • the first module IF-Anom Core executes the method steps C1 -C7, for example.
  • FIG. 10 schematically shows the computer-implemented method according to the invention for determining at least one anomaly value AS1, AS2, AS3 in an industrial manufacturing process IF.
  • sensor-measured data from states Z, for example of the production machine PM, are obtained.
  • a reference-based analysis model Ref-Anomaly is obtained, for example one that was trained on historical data for anomaly detection.
  • the data is fed into the Ref-Anomaly analysis model.
  • the analysis model Ref-Anomaly determines the anomaly value AS1, AS2, AS3.
  • the anomaly value AS1, AS2, AS3 is made available to the checking instance Expert for further checking.
  • the second module IF-Anom anomaly detector is a software module, a hardware module or a combination of software and hardware module.
  • the second module IF-Anom anomaly detector carries out the method steps E1-E4, for example.
  • FIG. 11 schematically shows the computer-implemented method according to the invention for providing recognized anomalies to the checking instance Expert.
  • in a method step M1, an anomaly value AS1, AS2, AS3 is obtained. If the received anomaly value AS1, AS2, AS3 classifies an abnormal state with high accuracy, the state is reported as an anomaly in a method step M2 without further checking. If the received anomaly value AS1, AS2, AS3 classifies an abnormal state that is to be checked further, a similarity to a state already recognized as an anomaly is determined in a method step M3. In the case of high similarity, the annotation of the previously recognized and annotated state is adopted in a method step M4. If the previously recognized state was a false positive, this annotation is of course also adopted.
  • in a method step M5, the state is made available to the checking instance Expert, which then annotates the state accordingly. States classified as normal can also occasionally be reported to the checking instance Expert for the purpose of statistical control of the anomaly detection algorithms and/or to maintain the attention of the checking instance.
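A compact sketch of the decision path of method steps M1 to M5 is given below; the similarity measure (cosine similarity between sonogram feature vectors) and the two thresholds are assumptions, since the description above leaves the similarity determination open.

```python
import numpy as np

def handle_anomaly(state_vec, anomaly_score, known_anomalies,
                   hard_threshold=0.99, sim_threshold=0.95):
    """known_anomalies: list of (state_vec, annotation) already checked by the Expert."""
    if anomaly_score >= hard_threshold:
        return "report_as_anomaly"                       # M2: no further checking needed
    for ref_vec, annotation in known_anomalies:          # M3: similarity to known anomalies
        sim = float(np.dot(state_vec, ref_vec) /
                    (np.linalg.norm(state_vec) * np.linalg.norm(ref_vec) + 1e-12))
        if sim >= sim_threshold:
            return f"adopt_annotation:{annotation}"      # M4: also covers earlier false positives
    return "send_to_expert"                              # M5: Expert annotates the state

print(handle_anomaly(np.ones(8), 0.7, [(np.ones(8), "NOK_bearing")]))
```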
  • the third module IF-Anom annotation manager is a software module, a hardware module or a combination of software and hardware module.
  • the third module IF-Anom annotation manager executes the method steps M1-M5, for example.
  • a data-based state Z, for example of the process step PS to be monitored, is defined.
  • the data are stored and processed.
  • an analysis model for anomaly detection is trained based on at least historical data, for example using the first module IF-Anom Core.
  • the checking instance Expert is integrated to check detected anomalies and to generate annotations for expanding the database, for example using the third module IF-Anom annotation manager.
  • the anomalies are detected, for example by means of the second module IF-Anom anomaly detector.
  • the monitored process step PS is checked if an anomaly is detected. For example, the monitored process step PS is examined more closely based on the detected anomaly in order to decide whether and/or when maintenance or repairs need to be carried out. A component that was processed during or after the occurrence of an anomaly can, for example, be sorted out and/or examined separately.
  • the method is carried out, for example, by the system IF-Anom according to the invention for checking anomalies NOK in industrial manufacturing processes IF.
  • Reference sign

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Factory Administration (AREA)
  • Debugging And Monitoring (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract

The invention relates to a computer-implemented method for checking anomalies (NOK) in industrial manufacturing processes (IF), the method comprising the following steps: defining, on a data basis, a state (Z) of a process step (PS, PS1, PS2, PS3) to be monitored of the industrial manufacturing process (IF) or of a system (A1); storing and processing data that are measured by sensors according to the data-based state definition and/or obtained from simulations (A2); training an analysis model (Ref, Anno) for anomaly detection on the basis of at least historical data (A3); integrating a checking instance (Expert) for checking detected anomalies and generating annotations to expand the data set (A4); detecting anomalies (A5); checking the monitored process step (PS, PS1, PS2, PS3) or the system if an anomaly is detected (A6).
PCT/EP2022/075214 2021-09-14 2022-09-12 Procédés mis en œuvre par ordinateur, modules et système de détection d'anomalies dans des processus de fabrication industriels WO2023041458A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021210107.0A DE102021210107A1 (de) 2021-09-14 2021-09-14 Computerimplementierte Verfahren, Module und System zur Anomalieerkennung in industriellen Fertigungsprozessen
DE102021210107.0 2021-09-14

Publications (2)

Publication Number Publication Date
WO2023041458A2 true WO2023041458A2 (fr) 2023-03-23
WO2023041458A3 WO2023041458A3 (fr) 2023-05-11

Family

ID=83322495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/075214 WO2023041458A2 (fr) 2021-09-14 2022-09-12 Procédés mis en œuvre par ordinateur, modules et système de détection d'anomalies dans des processus de fabrication industriels

Country Status (2)

Country Link
DE (1) DE102021210107A1 (fr)
WO (1) WO2023041458A2 (fr)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022203475A1 (de) 2022-04-07 2023-10-12 Zf Friedrichshafen Ag System zum Erzeugen einer von einem Menschen wahrnehmbaren Erklärungsausgabe für eine von einem Anomalieerkennungsmodul vorhergesagte Anomalie auf hochfrequenten Sensordaten oder davon abgeleiteten Größen eines industriellen Fertigungsprozesses, Verfahren und Computerprogramm zur Überwachung einer auf künstlicher Intelligenz basierenden Anomalieerkennung in hochfrequenten Sensordaten oder davon abgeleiteten Größen eines industriellen Fertigungsprozesses und Verfahren und Computerprogramm zur Überwachung einer auf künstlicher Intelligenz basierenden Anomalieerkennung bei einer End-of-Line Akustikprüfung eines Getriebes
DE102023201383A1 (de) 2023-02-17 2024-08-22 Robert Bosch Gesellschaft mit beschränkter Haftung Computerimplementiertes Verfahren zum Optimieren einer Detektionsschwelle eines Vorhersagemodells

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10685159B2 (en) * 2018-06-27 2020-06-16 Intel Corporation Analog functional safety with anomaly detection
DE102019108268A1 (de) * 2019-03-29 2020-10-01 Festo Ag & Co. Kg Anomaliedetektion in einem pneumatischen System
DE102019110721A1 (de) * 2019-04-25 2020-10-29 Carl Zeiss Industrielle Messtechnik Gmbh Workflow zum trainieren eines klassifikators für die qualitätsprüfung in der messtechnik
US11448570B2 (en) * 2019-06-04 2022-09-20 Palo Alto Research Center Incorporated Method and system for unsupervised anomaly detection and accountability with majority voting for high-dimensional sensor data
CN114503132A (zh) * 2019-09-30 2022-05-13 亚马逊科技公司 机器学习模型训练的调试和剖析
DE202019005395U1 (de) * 2019-12-20 2020-07-02 Trumpf Werkzeugmaschinen Gmbh + Co. Kg Früherkennung und Reaktion auf Fehler in einer Maschine

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116596336A (zh) * 2023-05-16 2023-08-15 合肥联宝信息技术有限公司 电子设备的状态评估方法、装置、电子设备及存储介质
CN116596336B (zh) * 2023-05-16 2023-10-31 合肥联宝信息技术有限公司 电子设备的状态评估方法、装置、电子设备及存储介质
CN117007135A (zh) * 2023-10-07 2023-11-07 东莞百舜机器人技术有限公司 一种基于物联网数据的液压风扇自动组装线监测系统
CN117007135B (zh) * 2023-10-07 2023-12-12 东莞百舜机器人技术有限公司 一种基于物联网数据的液压风扇自动组装线监测系统

Also Published As

Publication number Publication date
WO2023041458A3 (fr) 2023-05-11
DE102021210107A1 (de) 2023-03-16

Similar Documents

Publication Publication Date Title
WO2023041458A2 (fr) Procédés mis en œuvre par ordinateur, modules et système de détection d'anomalies dans des processus de fabrication industriels
DE102018128158A1 (de) Vorrichtung zur inspektion des erscheinungsbilds
DE102019217613A1 (de) Verfahren zur diagnose eines motorzustands und diagnostisches modellierungsverfahren dafür
DE102010052998A1 (de) Software-zentrierte Methodik für die Überprüfung und Bestätigung von Fehlermodellen
EP3767403B1 (fr) Mesure de forme et de surface assistée par apprentissage automatique destinée à la surveillance de production
WO2018087343A1 (fr) Procédé de commande d'un système de moyens de transport, système de traitement de données
EP3591482B1 (fr) Surveillance d'une installation technique
EP4258179A1 (fr) Système de génération d'une sortie d'explication perceptible par un être humain pour une anomalie prévue par un module de détection d'anomalie haute fréquence ou grandeurs dérivées d'un processus de fabrication industriel, et procédé de surveillance d'anomalie artificielle associés
WO2023041459A1 (fr) Procédé et système mis en œuvre par ordinateur pour détecter des anomalies, et procédé pour détecter des anomalies pendant un test acoustique final d'une transmission
EP3077878A1 (fr) Procédé informatisé et système de surveillance et détermination d'état automatiques de sections entières d'une unité de processus
EP3014372B1 (fr) Système diagnostique d'un atelier
DE102018209108A1 (de) Schnelle Fehleranalyse für technische Vorrichtungen mit maschinellem Lernen
WO2022195050A1 (fr) Procédé et système de prédiction du fonctionnement d'une installation technique
DE102011086352A1 (de) Verfahren und Diagnosesystem zur Unterstützung der geführten Fehlersuche in technischen Systemen
DE102019120696A1 (de) Vorrichtung und Verfahren zur Reifenprüfung
DE102008032885A1 (de) Verfahren und Vorrichtung zur Überprüfung und Feststellung von Zuständen eines Sensors
EP3340250B1 (fr) L'identification des composants dans le traitement des erreurs des dispositifs médicaux
WO2018177526A1 (fr) Analyse de robustesse sur des véhicules
DE102022132047A1 (de) Systeme und verfahren zum detektieren von herstellungsanomalien
DE112022002094T5 (de) Verifizierungskomponente zum Verifizieren eines Modells künstlicher Intelligenz (KI)
DE102021211610A1 (de) Verfahren zum Trainieren eines neuronalen Lernmodells zum Detektieren von Produktionsfehlern
EP3056994B1 (fr) Dispositif et procede de detection, de verification et de memorisation de donnees de processus provenant au moins de deux etapes de processus
DE102021109129A1 (de) Verfahren zum Testen eines Produkts
EP3686697A1 (fr) Optimisation du régulateur pour un système de commande d'une installation technique
EP3553679A1 (fr) Procédé de diagnostic de panne assisté par ordinateur pour un système technique

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770027

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 22770027

Country of ref document: EP

Kind code of ref document: A2