CN117332834A - Management of a process that is temporally spread out into the past by means of a neural network - Google Patents

Management of a process that is temporally spread out into the past by means of a neural network

Info

Publication number
CN117332834A
Authority
CN
China
Prior art keywords
neural network
time
stopwatch
input data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310802249.3A
Other languages
Chinese (zh)
Inventor
M·托克
A·冯伯宁根
N·科尔沃
M·比肖夫
D·格罗森巴赫尔
M·莱波德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens AG
Original Assignee
Siemens AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens AG filed Critical Siemens AG
Publication of CN117332834A publication Critical patent/CN117332834A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/067 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065 Analogue means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083 Shipping

Abstract

The management of processes that extend temporally into the past, in particular of processes running simultaneously in industrial facilities, by means of a neural network. Taking as an example a logistics system comprising a plurality of parallel conveying sections for individual goods, which merge in the transport direction into a combined unit, it is described how the extremely complex control of such industrial facilities in terms of time and space can be reconstructed by means of a neural network in such a way that the neural network can also reliably recognize temporal and spatial dependencies. This is achieved by a digital stopwatch which is applied to the neural network in addition to the sensor data of the logistics system and which is reset to an initial value whenever a motion detector indicates that a package is passing.

Description

Management of a process that is temporally spread out into the past by means of a neural network
Background
Neural networks are computer-implemented artificial constructs from the fields of machine learning and artificial intelligence. Those skilled in the digital arts understand the technical term "neural network" as a computer-implemented artificial network of elements (referred to as "nodes" or "neurons") having a precisely determined shape and structure. The computer-implemented artificial neural network has input nodes and output nodes and, depending on the design, internal nodes located between the input and output nodes. The nodes are connected to one another in a clearly defined manner by so-called edges. Each node has a receiving side and a transmitting side. On the receiving side, values are received from upstream nodes and combined in the node on the basis of local weights, for example by multiplying each received value by a weight specific to its edge, optionally adding a predetermined value (a bias), and then summing all values weighted in this way. If the resulting sum exceeds a specific value (defined, for example, by a so-called "activation function"), the node activates itself and outputs a determined value on the transmitting side (e.g. to downstream nodes). An inactive node outputs, for example, zero or no value at all.
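The receiving and transmitting behaviour of a single node described above can be sketched as follows (a minimal illustration; the function name, the bias argument and the simple threshold activation are assumptions made for the sketch, not taken from this document):

```python
# Minimal sketch of one node: weight each received value, add a
# predetermined value (bias), and activate only above a threshold.
def node_output(received, weights, bias=0.0, threshold=0.0):
    weighted_sum = sum(v * w for v, w in zip(received, weights)) + bias
    # The node activates itself only if the sum exceeds the threshold;
    # an inactive node outputs zero.
    return weighted_sum if weighted_sum > threshold else 0.0
```

For example, `node_output([1.0, 2.0], [0.5, 0.5])` yields 1.5, while a negative weighted sum leaves the node inactive and yields 0.0.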
In operation, input data is applied to the input node, which will be processed by the network as described above, resulting in output data being output at the output node. This manner of operation of the neural network is referred to as "prediction" or "inference".
The local weights in the nodes are configured by so-called "training" or "learning". Since it is not known in advance which combination of weights will enable the neural network to solve the task set for it, the starting point of the learning process is chosen at random by assigning random numbers to the weights at the beginning. A portion (e.g. 90%) of the training data is then fed into the neural network in succession, and the network's output is observed as a prediction. This step is called the "forward pass": the training data runs forward through the network. In each step, the output of the network is compared with the expected result that the network should output for the corresponding training data. The degree of error of the neural network is then determined by forming the difference. The error calculated in this way is now used, by a learning method called "backpropagation", to change the weights stored in the neural network so that the result becomes slightly better. For this purpose, the error is propagated from the output nodes in the opposite direction (i.e. towards the input nodes) along the structure of the neural network, whereby a node-specific desired result is formed for each node. With these node-specific desired results, the weights in the nodes are adjusted slightly upward or downward. The further a weight is from its node-specific desired result, the more strongly it is adjusted. The learning method thus determines which part of the network caused the most severe error and counteracts that error by readjusting the weights accordingly. This is the so-called "backward pass", because the error flows backward through the network from the output to the input. The network's output then comes slightly closer to the expected result, and the whole process is repeated.
This must be repeated many times: for simple tasks such as recognizing images, tens or hundreds of thousands of times; for difficult tasks such as recognizing pedestrians for autonomously driving vehicles, even billions of times. If the error for the current training data is small enough, training proceeds to the next training data, and if the error is small enough for all training data, training stops. Other sequences of using the training data and other termination criteria for training are also possible.
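The forward pass / backward pass cycle described above can be reduced, for illustration, to a network with a single weight (a hedged sketch; the learning rate, epoch count and function names are assumptions):

```python
# Sketch of the training cycle: forward pass, error, weight adjustment.
def train(samples, weight=0.3, learning_rate=0.1, epochs=200):
    for _ in range(epochs):
        for x, expected in samples:
            prediction = weight * x              # forward pass
            error = prediction - expected        # compare with expected result
            weight -= learning_rate * error * x  # backward pass: readjust weight
    return weight

# Learning the mapping y = 2*x from two training pairs:
learned = train([(1.0, 2.0), (2.0, 4.0)])
```

After training, `learned` lies very close to 2.0, the weight that reproduces the training data.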
The remaining training data (e.g. 10%) that has not been used for training is now used to check the quality of the training. This data has never been seen by the network and therefore cannot have been "remembered", i.e. stored in the weights through training.
If the network is able to provide a desired result that matches it based on an acceptable amount of such training data, the training of the neural network is likely to be successful. Training may end and the operational run of the neural network may begin. Here, the neural network may continue to be readjusted during run-time through training (e.g., if the user selects on which images a particular queried object is seen, but the neural network has not yet identified the presence of the queried object).
However, if the network is unable to provide matching desired results for an acceptable amount of such training data, one faces the mostly extremely demanding technical task of finding the cause and/or a solution with which the network does provide matching desired results for an acceptable amount of such training data.
Describing the behavior of a complex neural network can be very challenging due to its complexity and multidimensional structure. Terms borrowed from mathematics can serve as well-functioning descriptive tools for this purpose. A state vector, in which an entry is provided for each node, can thus be used to capture the state of a neural network at a particular point in time. Likewise, the totality of all learned weights can be represented by a matrix. The dynamic behavior of the neural network over time can then be described by multiplying the state vector by the weight matrix.
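A single time step of this description, i.e. applying the weight matrix to the state vector, can be sketched as follows (function and variable names are assumptions):

```python
# One time step of the dynamic behaviour: multiply the weight matrix
# (one row per node) by the current state vector (one entry per node).
def step(state, weight_matrix):
    return [sum(w * s for w, s in zip(row, state)) for row in weight_matrix]
```

For a two-node network whose matrix simply swaps the node states, `step([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]])` yields `[0.0, 1.0]`.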
Neural networks are sometimes regarded as examples of computational models and algorithms that are themselves supposed to have abstract mathematical properties, a view that perhaps benefits from the good mathematical describability of neural networks. However, this view creates linguistic and practical difficulties. Linguistically, a network is a network and a model is a model: an item does not become another item simply because it is claimed or intended to be viewed as such. In fact, neural networks are artificial "networks" of computer-implemented nodes, consistent, for those skilled in the art of digital technology, with the words described at the outset. However attractive, and perhaps even understandable, the use of mathematical means of description may be, the fact that a neural network can be well described in terms borrowed from mathematics does not allow it to be equated with mathematics.
This important distinction is also reflected in decision T 1924/17 of Board of Appeal 3.5.07 of the European Patent Office, dated 29 July 2019, according to which mathematics, and in particular mathematics "as such", is to be understood only as:
- the abstract science of number, quantity and space,
- the derivation and proof of mathematical theorems from a set of mathematical axioms,
- a game played on paper with meaningless marks according to certain simple rules.
Thus, neural networks are not mathematical. In contrast, neural networks form an independent technical field, with technical problems specific to neural networks and a large number of possible technical solutions to these problems.
Disclosure of Invention
As with each technical field, the object here is to identify at least one technical problem and to specify at least one technical teaching in order to solve the technical problem.
This object is achieved by this document, and in particular by the following description and drawings, whose technical teachings are claimed herein as the invention.
While neural networks, as an independent technical field, are generally universally applicable and not limited to a particular application, their specific problems and solutions may still be better understood when described on the basis of a specific application. The management, by means of a neural network, of processes that extend temporally into the past, in particular of processes running simultaneously in an industrial facility, in particular an industrial facility consisting of individually controllable pipeline sections, is therefore described below. As previously mentioned, however, this serves only for better illustration and should not be construed as limiting the application in any way.
Industrial facilities consisting of individually controllable pipeline sections are commonly used for internal logistics or in general for production environments. For example, a parcel sorting facility is comprised of one or more fingers that in turn comprise a plurality of serially connected pipeline sections. The package sorting facility is particularly for separating packages of different or the same length. Another example is a packaging machine, in which the product has to be accelerated/decelerated on a conveying section to be transported to a bag making and bagging machine at the end of the conveying section, matching the clock frequency.
A "dynamic gap device" is here a controlled drive application used in internal logistics. In this application, packages normally transported in one direction on two or more parallel conveying sections are to be merged onto a single outlet conveying section. Packages located at undefined distances from one another on the two or more parallel conveying sections are to be sorted by a combined unit onto the outlet conveying section and, in the process, placed at a defined distance from one another.
The merging of packages and the creation of defined distances is achieved by means of a plurality of sub-conveyors in the respective conveying sections. The individual sub-conveyor belts are driven by gear motors and controlled by frequency converters. The speed target value of the corresponding sub-conveying section is preset by the controller. The position of the packages on the sub-conveying sections is detected by sensors and processed in a control unit. The distance between packages is typically influenced by PI controllers acting between each pair of packages. Based on the information currently available, the controller accelerates and decelerates the sub-conveying sections differently.
Since there are a plurality of parallel conveying sections and each conveying section has a plurality of sub-conveyors which should accelerate and decelerate independently of one another, the control is complex in terms of regulation technology. Due to the different package sizes and package characteristics, the optimization is very complex. Adaptation of the adjustment process is enormous in the case of different mechanical design types. For example, if the number of conveying segments or the length of the individual sub-conveying segments varies, the adjustment process can be significantly affected and adaptation in the controller is required.
Furthermore, if there is no physical model of the facility (e.g., simulation), determining the optimal speed of each pipeline at any point in time and under any conditions (number, size, and location of each parcel) is technically very challenging.
Digital twinning in the form of a physically faithful simulation model can be used as a basis for solving this task. Such models can be created by means of digital tools such as NX MCD (Mechatronic Concept Design) from Siemens Digital Industries Software (formerly Siemens Product Lifecycle Management Software, Siemens PLM Software) or Unity Simulation Pro from Unity Software (company name of Unity Technologies).
The use of NX MCD for physically faithful simulation, for example, allows a kinematic assessment of models in an environment in which physical effects (e.g. forces, inertia, acceleration) play a role in addition to the modeling of the geometry. Verification is supported by reuse libraries, from which components can be added to the functional model. These components contain further information such as geometry, parameters or kinematics. In this way, a physics-based interactive simulation is created step by step to verify future machine operation.
It has been recognized that simulation faithful to the physics can sometimes also present difficulties and problems:
a) Physical simulation mostly requires an expert familiar with the system or a set of individual components that have been simulated (e.g., different drive motor models). If these are not available, then these simulations may not be created.
b) Simulation faithful to physics may sometimes be inaccurate when packages are passed between pipelines.
c) Modeling of complex facilities faithfully to physics can be very time consuming and therefore costly.
d) If the controller of a product is not accessible, it may not be possible to reconstruct the behavior of that controller by means of physically faithful simulation, for lack of knowledge of the controller's details. This may occur, for example, if a competitor's product is to be simulated but its controller cannot be accessed; in the case of a computer-implemented controller, this arises, for instance, when only the machine-readable code, not the source code, is available and decompilation of the machine-readable code is not possible for some reason (e.g. contractual prohibitions, technical barriers).
In order to solve at least one of these problems, it is proposed to implement digital twinning by means of a neural network.
Here, the neural network is connected to an existing sensor with which the actual state of the conveyor belt is detected. The term "sensor" is to be understood broadly herein and may also include, for example, currently existing control commands. The input data of the neural network may then for example comprise the following data:
- the status of the grating, with which it is detected whether a package is present at a specific location,
- the actual speed of the drive motors of the conveyor belts,
- the target speed of the drive motors of the conveyor belts, which can be derived, for example, from control commands transmitted by the controller to the drive motors.
The task of the neural network is then to predict what values these parameters are expected to have at a particular point in time in the future based on these data.
In solving this task, additional, unexpected technical problems arise. These must first be identified and their causes found; a solution must then be sought.
For example, a neural network can be trained with sufficient accuracy only when sufficient training data is present. If there is only little training data, the training data cannot cover the entire state space of the neural network densely enough. This results in the neural network having to interpolate a large amount in the gaps between the training data. This can lead to significant inaccuracy, overfitting and poor generalization.
It has been recognized that a possible solution to this technical problem consists in pursuing as narrow a simulation clock as possible, of about 10 ms (or less), in order thereby to generate as much input data as possible, and in particular data very close together in time, for operation and training.
In this case, the neural network's task is to predict what values these parameters are expected to have in the next clock of the pursued narrow clock of 10ms based on the input data.
Surprisingly, if multiple sub-delivery segments have to be coordinated with respect to each other, the neural network cannot reliably make the desired predictions based on the input data described above.
For this purpose, at least two, up to (typically) all, of the sensors of the sub-conveying sections to be correlated are connected to the neural network, and the weights of the neural network are trained by means of the actual states of the sensors, detected, as proposed, every 10 ms for example. The sensor data at time t form the input data of the neural network, from which the expected sensor data at time t+1 are derived as output data; the interval between t and t+1 can be chosen arbitrarily and, as proposed, lies very close to 10 ms. The output data are compared with the sensor data actually present at time t+1.
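The construction of training pairs from the scanned sensor data, with the input at time t and the expected output at time t+1, can be sketched as follows (a hedged illustration; the list-based representation of a sensor scan is an assumption):

```python
# Build (input, target) training pairs from a log of sensor scans taken
# every 10 ms: the scan at time t is the input, the scan at t+1 the target.
def make_training_pairs(sensor_log):
    return [(sensor_log[t], sensor_log[t + 1])
            for t in range(len(sensor_log) - 1)]
```

For a log of three scans, this yields two pairs, each coupling one scan with its successor.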
The degree of agreement between prediction and reality forms one quality parameter of the neural network: like many other accuracy-related measures, it can distinguish, for example, a high-quality neural network that makes particularly accurate predictions from a neural network of lower quality.
Furthermore, this deviation is used during training to configure the neural network, mostly by adapting the weights of the neural network.
The processing speed of the neural network may be another quality parameter that is independent of the prediction accuracy, but is particularly important in the case of the proposed narrow clock of 10 ms. As a rough rule of thumb, processing speed is often secondary for small amounts of input data and/or slow running processes, and may become more important as complexity increases.
It is now surprising that in this arrangement and in this process the neural network cannot reliably create a sufficiently accurate prediction of the predicted sensor data at time point t+1 based on the sensor data at time point t.
According to one finding, this is due in particular to another technical problem of neural networks, namely that it is very difficult for a neural network to handle a process that unfolds dynamically over time. The neural network maps input data to output data according to its structure and the weights learned through training. As long as the structure and weights of the network do not change, this mapping is in principle always the same for the same input data. It has been recognized that a neural network of the structure described at the outset cannot identify, from the input data described above, when a parcel was detected on a sub-conveying section in the past and how far the parcel has moved since then.
The same applies if the neural network is to distinguish temporally between a plurality of sub-conveying sections connected in series and/or in parallel in order to determine which sub-conveying sections have to accelerate and which have to decelerate, for example to prevent collisions in the combined unit between parcels transported in parallel at the same height and/or with parcels decelerated by downstream sub-conveying sections. In this case, contradictory situations can arise for the neural network, in which, for identical input data, a particular sub-conveying section must sometimes accelerate, sometimes decelerate, and sometimes continue operating unchanged.
According to one finding, this depends on the conditions of the upstream, downstream or parallel sub-conveying sections. The slightest differences in the transport progress of individual packages are decisive for the controller in deciding which packages to accelerate, decelerate, or transport onward unchanged.
The neural network cannot recognize these differences when considering a particular sub-conveying section alone. Nor can it recognize them when considering all sub-conveying sections together at a narrow clock of 10 ms. On the basis of its input data, the network always sees only the respective current state at the point in time at which the sensor data is scanned. How long this state has already lasted, however, cannot be identified from the detected sensor data.
It is therefore not easy to implement a neural network with the Markov property, in which, for a sequence of temporally successive sensor data detected at intervals of, for example, 10 ms, the probability of the next state to be determined by the neural network depends only on the immediately preceding state and not on earlier states.
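Expressed as a formula, the Markov property described here states, with S_t denoting the sensor data detected at time step t (notation assumed for illustration):

```latex
P(S_{t+1} \mid S_t, S_{t-1}, \ldots, S_0) = P(S_{t+1} \mid S_t)
```

That is, the probability of the next sensor state, conditioned on the entire history, equals the probability conditioned on the immediately preceding state alone.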
Finally, these dynamics contradict the paradigm that the predictions of a neural network for the same input data are in principle always the same as long as the structure and weights of the network do not change. During training, the neural network repeatedly encounters partially diametrically opposed training specifications for the same input data: for identical input data, the same sub-conveying section must sometimes be accelerated, sometimes decelerated, sometimes left unchanged. The configuration of the weights then depends on random parameters, such as the order or selection of the training data.
Figuratively, one can imagine that the internal structure of the neural network begins to "oscillate" during training: depending on random factors (e.g. how the training data is structured), it sometimes tends in one direction, sometimes in another, and may even sway back and forth between different directions.
This diffuseness persists after training is completed. During operation, situations repeatedly arise in which the prediction of the neural network deviates not only slightly but significantly from the actual behavior of the controller.
The task now is to find a solution to the identified problem. This technical task is particularly challenging when a narrow clock of about 10 ms is pursued, since in that case the information has to be carried over a large number of time steps, whereby the simulation model can become very complex.
To solve the time problem, it is proposed, for the parcel sorting facility serving as an example application of the neural network, to provide the neural network with a "stopwatch" as further input data for each sensor that checks for the presence of a parcel (for example by means of a grating). The stopwatch is reset to an initial value (usually "zero" in the case of a stopwatch) whenever the assigned sensor indicates a change in its state. In the case of a grating, this sensor state is typically binary, i.e. it has exactly two states:
(1) "grating interrupted" = "a parcel is currently present";
(2) "grating not interrupted" = "no parcel is currently present".
As this application example shows, the solution can be reduced to a single sensor designed as a motion detector and a single stopwatch assigned to that sensor. In this case, the stopwatch is reset whenever the sensor detects motion.
After the reset, the stopwatch restarts and measures the passage of time. The current stopwatch value present at each prediction instant can then indicate how far in the past the sensor was triggered. Through this technical teaching, this information can be carried over many time steps, and the neural network can comprehend the temporal extent of the process.
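The behaviour of such a stopwatch can be sketched as a small state machine (an illustration only; the class name is an assumption, and a reset-to-zero, plus-one-per-prediction-period design at the proposed 10 ms clock is used):

```python
class Stopwatch:
    """Digital stopwatch fed to the neural network as additional input:
    reset to its initial value whenever the assigned motion detector
    fires, otherwise incremented once per 10 ms prediction period."""
    def __init__(self, initial=0):
        self.initial = initial
        self.value = initial

    def tick(self, motion_detected):
        if motion_detected:
            self.value = self.initial  # sensor triggered: reset
        else:
            self.value += 1            # one prediction period has elapsed
        return self.value

# The stopwatch value tells the network how far in the past the sensor
# was last triggered:
sw = Stopwatch()
trace = [sw.tick(m) for m in [True, False, False, True, False]]
```

Here `trace` is `[0, 1, 2, 0, 1]`: the value grows with every period and falls back to the initial value whenever motion is detected.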
A direct correspondence to the stopwatch can be achieved by applying a sawtooth curve as an additional input value, which restarts after each reset and, with the proposed narrow clock of 10 ms, changes linearly by a constant value in each prediction period. In this way, the time information can be carried over many time steps at constant cost. Depending on the sign of the change value, this yields a linearly rising or linearly falling sawtooth curve. One variant is a curve that is reset to "0" and changes by "+1" in each period.
Alternative designs are conceivable in which the stopwatch varies over time other than linearly. The stopwatch can, for example, be coupled to the rising/falling edge of the grating signal and modeled not only as a linear sawtooth function but also as an exponential sawtooth function. Alternatively, the stopwatch can be changed on the basis of the difference between the conveyor belt position at the falling or rising edge and at the current point in time. The neural network can thereby identify different distances between packages, which is important especially when the controller to be reconstructed by the neural network as a digital twin also takes these distances into account.
Likewise, the stopwatch may stop after a certain time has elapsed. In this case, it is to be considered, for example, in the case of a parcel sorting installation, that the control unit decides, in particular shortly after the detection of the movement, whether the parcel is to be transported more quickly, more slowly or at the same speed than the other parcels. Once this decision is made, it remains mostly unchanged for the respective transport section until correction is needed as necessary due to triggering of other motion detectors. However, other stopwatches are responsible for identifying this condition. For example, in a large system with a large number of stopwatches, stopping of the stopwatch may have considerable advantages in terms of processing speed of the neural network and the real-time capabilities that may be required.
Furthermore, more than one stopwatch can be provided for a sensor configured, for example, as a motion detector. In the case of a parcel sorting facility, for example, it is advantageous to provide two stopwatches, starting a first stopwatch when a parcel is detected and a second stopwatch when the parcel has passed the sensor. The neural network can thereby identify the size of the parcel, for example by correlating the difference between the two stopwatches, if necessary, with the speed of the sub-conveyor (derived, for example, from the speed of the sub-conveyor's gear motor). This knowledge enables the neural network, for example, to reconstruct the behavior of a controller that takes into account specific, defined values of the distance between packages, such as in front of the combined unit or in front of the bag-making machine of a packaging machine.
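The parcel-size estimate from two such stopwatches can be sketched as follows (hypothetical names and units; a 10 ms clock is assumed, and the belt speed would in practice be derived from the gear motor speed):

```python
# Estimate parcel length from two stopwatches: the first started when the
# parcel's leading edge was detected, the second when its trailing edge
# passed the sensor. Their difference is the dwell time in periods.
def parcel_length(first_ticks, second_ticks, belt_speed_m_s, clock_s=0.010):
    dwell_time_s = (first_ticks - second_ticks) * clock_s
    return dwell_time_s * belt_speed_m_s
```

A parcel that occupied the sensor for 40 periods (0.4 s) on a belt moving at 0.5 m/s would thus be estimated at about 0.2 m in length.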
The digital twinning formed using such a neural network may be used, inter alia, for the following purposes and/or has the following advantages:
-simulating a standard operation of the facility to ensure compliance with the specifications;
-determining the throughput of the facility;
-simulating special conditions in the operation of the facility without investing in special hardware or without potentially damaging facility components;
-running the simulation model on the edge device in parallel with the facility running and in the process identifying an abnormal running state;
Optimizing the facility structure (e.g. the number of fingers, i.e. pipelines);
-determining optimal control of the facility using reinforcement learning. It is particularly advantageous for optimal control by means of reinforcement learning if a neural network with markov properties is used for this purpose.
The neural-network-based predictive model can be purely data-based, i.e. it requires no detailed knowledge of the system behavior (e.g. of the drive motors), but only data from the past or from identically structured facilities, whereas a physical simulation model presupposes accurate knowledge of the facility components and their environment. Creating a physical simulation model of a complex facility with a program such as NX MCD or Unity therefore typically requires an expert to spend a great deal of time mapping the precise behavior of the facility, while the creation of the predictive model can be automated by training the neural network by means of supervised learning. This also makes its creation more cost-effective, since no expert is required.
If the accuracy of the model is no longer satisfactory, the predictive model created by training the neural network by means of supervised learning can be retrained at any time.
The predictive model created by training the neural network is based on real data and also learns the transitions between conveyor belt sections, which are hard to simulate.
If a facility component (e.g. a drive motor) for which no physical simulation model (e.g. in NX MCD or Unity) yet exists is to be replaced, a purely data-based predictive model can be created significantly faster and thus more cost-effectively.
By using a neural network, facility components from different manufacturers can be installed and a predictive model can then be created for them, even when no physical simulation model (e.g. in NX MCD or Unity) exists for these components and none can be created because the manufacturer data is inaccessible. An optimal controller can thus be developed even for (competing) facilities for which no physical simulation model exists.
When used on an edge device, the neural network can run in parallel with facility operation and identify abnormal operating states, which may indicate, for example, incomplete data in the predictive model implemented by the neural network or a defect in a facility component (e.g. a drive motor). Alarm data can also be generated if reality deviates significantly from the prediction.
By using a recurrent neural network (RNN) with long short-term memory (LSTM) modules, the vanishing of gradients caused by the long unrolling into the past can be counteracted (e.g. an unrolling over 4 seconds with a clock of 10 ms corresponds to an unrolling over 400 prediction periods).
The long unrolling of the RNN into the past yields a very accurate temporal resolution.
By taking into account the conveyor belt length, a generic model of all facilities can be obtained, irrespective of the conveyor belt length of these facilities. Generalization can thus be achieved.
A particular advantage of neural networks is that the prediction system learns by training to map accurately the transitions between conveyor belts, which tend to be inaccurate in a physical simulation model because there they result from linking two separate physical individual models.
Further advantages that can be achieved by automatically creating the predictive model implemented by a neural network are:
- automatic exploration of different facility designs;
- rapid adaptation of the model to new modes of operation, facility components or changed conditions;
- transfer learning between different facilities.
The implementation of the neural network is advantageously performed by a computer program with program code, which can be stored, for example, on a non-volatile machine-readable carrier or in a cloud on the internet. When the program code is executed on a computer, the computer program implements the embodiments described above.
Drawings
A better understanding of the proposed technical teachings will be obtained from the following detailed description in conjunction with the accompanying drawings. Here:
FIG. 1 shows a schematic diagram of a logistics system having a computing unit for executing a method for computer-implemented configuration of a controlled drive application implemented in the computing unit in accordance with the present invention;
fig. 2 shows a schematic diagram of method steps performed by a computing unit;
FIG. 3 shows a design of a computing unit;
FIG. 4 shows another design of a computing unit;
FIG. 5 shows the dynamic behavior of a logistics system predicted by a neural network with a plurality of linear stopwatches;
FIG. 6 illustrates the dynamic behavior of a recurrent neural network;
FIG. 7 illustrates the dynamic behavior of another recurrent neural network;
fig. 8 shows an alternative dynamic behavior of a stopwatch of a neural network.
Detailed Description
Fig. 1 shows a schematic view of a logistics system 1 with a controlled drive application. The logistics system 1 comprises, by way of example, three conveying sections 10, 20, 30 extending parallel to one another, on each of which individual goods, in particular packages, can be conveyed in a conveying direction FR running from right to left. Each conveying section 10, 20, 30, which in this example has the same length (but could also have a different length) and is also called a finger, comprises a plurality of sub-conveying sections 11-13, 21-23, 31-33. In the present embodiment the number of sub-conveying sections per conveying section 10, 20, 30 is the same (this too is not mandatory and may differ). The sub-conveying sections 11-13, 21-23, 31-33 of a given conveying section 10, 20, 30 may likewise have the same length or different lengths.
Each sub-conveyor section 11-13, 21-23, 31-33 is assigned a respective drive 11A-13A, 21A-23A, 31A-33A. By corresponding actuation of the drives 11A-13A, 21A-23A, 31A-33A by means of the computing unit 60, the sub-conveyor sections 11-13, 21-23, 31-33 can be individually accelerated or decelerated.
At the end of the conveying sections 10, 20, 30 in the conveying direction FR, a combination unit 40 is arranged, to which the last sub-conveying sections 13, 23, 33 transfer the individual goods transported by them. At the outlet 41 of the combination unit 40, a single outlet conveying section 50 is arranged. The outlet conveying section may consist of one or more sub-conveying sections 51. The sub-conveying section(s) 51 are in turn driven by a drive 51A under the control of the computing unit 60.
The acceleration and deceleration of the respective sub-conveyor sections by means of control signals suitable for the drives 11A-13A, 21A-23A, 31A-33A makes it possible to transport the individual goods transported on the parallel conveyor sections 10, 20, 30 to the combination unit 40 with a time offset. The combination unit 40 is thereby able to feed individual goods onto the outlet conveyor section 50 such that every two temporally successive individual goods have a predetermined, defined distance from one another.
In order for the computing unit 60 to be able to output control signals suitable for the acceleration and deceleration drives 11A-13A, 21A-23A, 31A-33A, the respective sub-conveying sections 11-13, 21-23, 31-33 are provided with a number of respective sensors 11S-13S, 21S-23S, 31S-33S. The sensors 11S-13S, 21S-23S, 31S-33S comprise in particular gratings for determining the respective transport speed, length and/or position of the individual goods and/or the deviation of the individual goods from the intended position. The sensors optionally include, for example, rotational speed sensors for detecting rotational speeds of the drivers 11A-13A, 21A-23A, 31A-33A, current sensors for detecting motor currents of the drivers 11A-13A, 21A-23A, 31A-33A, and the like.
The individual goods are fed onto the conveyor sections 10, 20, 30 via the respective transfer units 18, 28, 38, the transfer units 18, 28, 38 also being configured, for example, as sub-conveyor sections. The transfer units 18, 28, 38 also have corresponding drivers (but not explicitly shown here) and a number of corresponding sensors 18S, 28S, 38S. These transfer units may be segments independent of the actual conveying sections 10, 20, 30. However, the transfer units 18, 28, 38 may also represent respective sub-conveying sections of the assigned conveying sections 10, 20, 30.
For simplicity, in fig. 1 only the transfer units 18, 28, 38 are drawn with their corresponding sensors 18S, 28S, 38S. The corresponding measurement signals are supplied to the computing unit 60 for further processing and are indicated by dashed lines. For simplicity, not all measurement signals or the signal lines required for their transmission are shown.
The actuating signals with which the drives 11A-13A, 21A-23A, 31A-33A, 51A assigned to the sub-conveying sections 11-13, 21-23, 31-33, 51 are controlled are likewise indicated by dashed lines. For simplicity, not all actuating signals or the control lines required for their transmission are shown.
In fig. 1 the controlled driving application of the logistics system 1 is configured in a computer-implemented manner by a computing unit 60. However, these steps may also be performed on a computing unit that is independent of the final control of the logistics system 1.
The control logic of this control performed by the computing unit 60 can also be learned, and subsequently predicted/simulated, by the system 200 shown schematically in fig. 2, which comprises at least one neural network NN and at least one stopwatch SU. This process is shown schematically in fig. 2.
In a first step S1, a system model of the logistics system 1 is determined based on the operational data BD of the logistics system. The operating data BD are present at a plurality of points in time of the operation of the logistics system 1 and comprise, for each point in time, measured values of the sensors 11S-13S, 21S-23S, 31S-33S, 18S-38S, for example, raster signals, motor currents, the position of the individual goods on the respective sub-conveyor sections 11-13, 21-23, 31-33, 18-38, the rotational speeds of the drives 11A-13A, 21A-23A, 31A-33A and the speeds of the sub-conveyor sections 11-13, 21-23, 31-33. In principle, it is possible here to process not only the operating data BD of the currently examined logistics system 1, but also the operating data BD from other logistics systems (preferably similar logistics systems).
Furthermore, in step S1, manipulated variable changes are determined and processed for each point in time, including, for example, speed changes or rotational speed changes of the drives 11A-13A, 21A-23A, 31A-33A, 18A-38A.
In addition, at least one stopwatch SU is used in step S1. A stopwatch is always reset to its initial value when the value of the operating data BD assigned to it changes in a specific way, for example from "0" to "1" or vice versa. The operating data BD in question is preferably one of the measured values of the sensors 11S-13S, 21S-23S, 31S-33S, 18S-38S, in particular a grating signal for determining the position of individual goods on the respective sub-conveying sections 11-13, 21-23, 31-33. A stopwatch SU may be provided for several gratings or even for all gratings. The number of stopwatches SU provided per item of operating data BD is also variable. This flexibility in the use of the stopwatches SU is indicated by the index 1..n on the reference sign SU. Where no specific stopwatch is meant, these indices are omitted; the general reference SU is not to be understood as limited to a single stopwatch, even where, for simplicity, only the behavior of one particular stopwatch is described.
Here, it is clear to the person skilled in the art that, in the context of this computer-implemented technical teaching, the stopwatch is configured as a computer-implemented digital stopwatch. In addition to the sensor data of the logistics system, the stopwatch values are applied to the neural network NN, and a stopwatch is reset to its initial value, for example, whenever the motion detector indicates a passing package.
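Such a computer-implemented digital stopwatch, clocked with the prediction period (e.g. 10 ms) and reset on an assigned edge of a binary sensor signal, can be sketched as follows (class and parameter names are illustrative assumptions):

```python
class DigitalStopwatch:
    """Counts clock ticks since the last triggering edge of a binary signal."""

    def __init__(self, trigger: str = "falling"):
        self.trigger = trigger   # "falling": 1 -> 0 resets; "rising": 0 -> 1 resets
        self.value = 0           # elapsed ticks since the last triggering edge
        self._last = None        # previous signal sample

    def tick(self, signal: int) -> int:
        """Advance one clock period (e.g. 10 ms) and return the current value."""
        if self._last is not None:
            falling = self._last == 1 and signal == 0
            rising = self._last == 0 and signal == 1
            if (self.trigger == "falling" and falling) or \
               (self.trigger == "rising" and rising):
                self.value = 0   # reset to the initial value on the assigned edge
            else:
                self.value += 1
        self._last = signal
        return self.value

# example: sawtooth that restarts whenever the grating signal rises 0 -> 1
sw = DigitalStopwatch(trigger="rising")
course = [sw.tick(s) for s in [0, 0, 1, 1, 1, 0, 0]]
```

The returned values form the rising sawtooth curve that is supplied to the neural network NN as additional input data.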
The system model is determined by means of at least one (preferably recurrent) neural network NN. Here, it is clear to a person skilled in the art that the technical term is used in the art to denote a computer-implemented artificial neural network NN.
For determining the system model, the neural network NN is configured by means of training/learning, in particular by supervised learning, wherein in addition to the operating data BD the stopwatch values SU are supplied to the neural network NN as (additional) input data. Since the procedure in this respect is known (see in particular also the remarks on neural network training at the beginning of the description), it will not be repeated here. As a result of this training, the correlations between the operating data BD and the corresponding courses of the stopwatches SU are additionally stored in the weights of the neural network NN configured and structured in this way.
This procedure is particularly effective when the input data at a given point in time would be identical without a stopwatch SU but differ in the temporal context of the previously input data. The temporal development of the past behavior of the simultaneously operating sub-conveying sections 11-13, 21-23, 31-33, 18-38 is highly relevant to the control behavior of the computing unit 60 and largely determines which of the sub-conveying sections 11-13, 21-23, 31-33, 18-38 are accelerated, decelerated or kept running at constant speed.
Owing to the additional input of the stopwatches SU, input data that would be identical without the stopwatches SU now differ from one another through the different combinations of current stopwatch values. In this way, the training "oscillations" and prediction "oscillations" of the trained neural network NN described at the outset can be avoided.
In a second step S2, the system 200 determines the expected regulating function REGF of the logistics system 1. The regulating function REGF comprises, for example, configuration data KD of the drives 11A-13A, 21A-23A, 31A-33A, i.e. motor currents and/or rotational speeds etc., with which the assigned sub-conveying sections 11-13, 21-23, 31-33 can be accelerated or decelerated in a suitable manner. The system 200 can predict all of the configuration data KD or some part of it. It is also conceivable to train and/or design the system 200 such that it predicts when a package will arrive at the next grating in the conveying direction FR (prediction 1) and/or when it will have passed completely through that grating (prediction 2). These values can be output, for example, as decreasing sawtooth curves. The width of each step is determined by the frequency with which the system 200 predicts. With the 10 ms clock proposed at the outset, such a step function would have, for example, 100 steps in a period of 1 second.
The determination of the regulating function REGF on the basis of the system model configured in step S1 can be applied very generally, in that at least one predicted regulation is predicted in the system model for one or more given objectives. For example, one or more of the following parameters may serve as objectives:
- the average throughput of individual goods at the outlet 41 of the combination unit 40;
- the distance between two directly successively conveyed individual goods, i.e. the gap distance, in particular the minimum distance;
- collision detection in the combination unit 40, in particular at its outlet 41;
- a distance uniformity metric characterizing the deviation from equidistance of the distance between each two directly successively conveyed individual goods, i.e. the uniformity of the gap distances;
- the operating speeds of the individual conveying sections or of the conveying sections as a whole, in order to achieve, for example, wear optimization.
By varying typical model input variables, such as the size of the individual goods, the mass of the individual goods, the friction coefficients etc., near-realistic variations of the operating data BD available so far can be produced. This makes the predicted regulating function REGF derived from them and the configuration data KD highly robust.
Fig. 3 shows a computer-implemented example of the technical teachings described herein, the computer-implemented comprising:
(301) Computer system
(302) Processor and method for controlling the same
(303) Memory device
(304) Computer program (product)
(305) A user interface.
In this embodiment, the computer program product 304 includes program instructions for performing the present invention. The computer program 304 is stored in the memory 303, which in particular makes the memory and/or the associated computer system 301 a providing device for the computer program product 304. The system 301 may perform the present invention by executing program instructions of the computer program 304 by the processor 302. The results of the present invention may be displayed on the user interface 305. Alternatively, these results may be stored in memory 303 or other suitable means for storing data.
Fig. 4 shows another embodiment of a computer implementation, comprising:
(401) Providing apparatus
(402) Computer program (product)
(403) Computer network/internet
(404) Computer system
(405) Mobile device/smart phone.
In this embodiment, the providing device 401 stores a computer program 402 containing program instructions for executing the present invention. The device 401 is provided such that the computer program 402 is available via a computer network/internet 403. For example, computer system 404 or mobile device/smartphone 405 may load computer program 402 and execute the present invention by executing the program instructions of computer program 402.
Fig. 5 shows, by way of example, the prediction of the system 200, taking as an example the two sub-conveying sections 31 and 32 of fig. 1 arranged one behind the other in the conveying direction FR, with the aim of predicting when a package arrives at the second grating 32S after passing the first grating 31S (prediction 1) and when it has passed completely through the second grating 32S (prediction 2). In this example the two values are output as decreasing sawtooth curves.
In this minimal embodiment, the data of two conveyor belt sections 31, 32 connected one behind the other are shown. One goal is to predict the rising/falling edges of the grating 32S of the conveyor 32 downstream of the conveyor 31 on the basis of the rising/falling edges of the grating 31S of the input conveyor 31. The grating provides a binary signal, where "0" = no package passes the grating and "1" = a package passes the grating. The goal is to generate the rising/falling edge signals Pec_32_up_t(NN) and Pec_32_down_t(NN), which describe, for each time point t, the time until the next switching point (from 0 to 1 or from 1 to 0).
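The training targets Pec_32_up_t and Pec_32_down_t, i.e. the number of steps from each time point t to the next switching point, can be derived from a recorded binary grating signal; a minimal Python sketch (the function name is an assumption):

```python
def steps_to_next_edge(signal, kind):
    """For each time step t, count the steps until the next edge of `kind`.

    signal: list of 0/1 grating samples, one per clock period (e.g. 10 ms).
    kind:   "rising" (0 -> 1) or "falling" (1 -> 0).
    Returns a decreasing sawtooth; None where no further edge occurs.
    """
    n = len(signal)
    out = [None] * n
    next_edge = None
    for t in range(n - 1, -1, -1):       # scan backwards, tracking nearest edge
        if t + 1 < n:
            a, b = signal[t], signal[t + 1]
            if (kind == "rising" and a == 0 and b == 1) or \
               (kind == "falling" and a == 1 and b == 0):
                next_edge = t + 1        # the edge occurs at sample t + 1
        out[t] = None if next_edge is None else next_edge - t
    return out

# example: a package passes (signal 1) and the next one arrives at the end
sig = [0, 0, 1, 1, 0, 0, 1]
up_t = steps_to_next_edge(sig, "rising")
down_t = steps_to_next_edge(sig, "falling")
```

The resulting countdown curves restart at every edge, matching the decreasing sawtooth output described above.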
In this implementation, a simulation model (also referred to as a predictive model) implemented by the neural network NN is first trained by supervised learning using historical data of a facility to be optimized or the like.
In this example, the preconditions for training the predictive model are:
the gratings 31S, 32S at the start of the conveyor are always mounted in the same position. This may be relevant, for example, if different conveyor belt lengths should be simulated in the scope of design optimization.
The lengths of the sub-conveyors 11-13, 21-23, 31-33, 18-38 are known and supplied as part of the input data to the predictive model. This has the advantage that the model can cope with different conveyor belt lengths and even interpolate between different known conveyor belt lengths.
The overall facility can be simulated analogously, i.e. the behavior of the gratings of several downstream conveyor belts or of conveyor belt arms running in parallel. The implementation in this example stands, pars pro toto, for the whole.
Very small time steps are simulated, with (t+1) − t ≈ 10 ms. For the predictive model, this requires a very long unrolling into the past so that the time points of the rising/falling edges of the downstream grating 32S caused by a passing package can be determined as precisely as possible from the rising/falling edges of the input grating 31S. If, for example, n = 400 steps from the past are used for this, this corresponds to a real interaction time of 4 seconds on the conveyor belts 31, 32.
Graphs 501-508 show a selection of parameters from real operation of the facility of fig. 1 over a period of roughly 16 seconds. The time is plotted on the x-axes in milliseconds.
Within these 16 seconds, four packages were transported in their entirety over the two conveyors 31, 32 (two further packages, cut off at the beginning and at the end, were transported in part). The curve Pec_31(data) in graph 503 shows the passage of a package through the grating 31S, and the curve Pec_32(data) in graph 504 shows the time-offset passage of the same package through the grating 32S. Each pair of adjacent state changes of the gratings 31S, 32S from "1" to "0" (the current package has passed) and from "0" back to "1" (the subsequent package is detected) is caused by the gap between two consecutive packages. It can be clearly seen that in this example the gaps have variable sizes, roughly in the range of 0.1-0.5 seconds, whereas the dwell time of a package on the conveyor belts 31, 32 is significantly longer, roughly estimated in the range of about 2.5-3.5 seconds. Each package therefore stays on each of the conveyors 31, 32 for less than 4 seconds, so that in this example a past of n = 400 steps suffices to cover the complete passage of a package over each of the conveyors 31, 32. At the same time, the past does not have to be chosen (even) larger.
In graphs 507, 508, the two curves Target_31A(data), Target_32A(data) show the manipulated variables, output by the computing unit 60, for the speeds or rotational speeds of the two drives 31A, 32A of the two conveyor belts 31, 32. The purpose of these manipulated variables is to achieve the predefined control objectives by accelerating and decelerating the packages, i.e., for example, to prevent collisions in the combination unit 40, or to make optimal use of the conveying capacity by minimizing the spacing between packages, or to create a spacing such that the packages transported by the facility 1 arrive at a bagging machine (not shown) located at the end of the conveying sections at a frequency matched to its clock.
By correlating the time interval between two adjacent state changes of the gratings 31S, 32S from "1" to "0" (the current package has passed) and from "0" back to "1" (the subsequent package is detected) with the manipulated variables of the respectively assigned drives 31A, 32A, the horizontal distance between successive packages can be derived quite accurately; this is important information for achieving the desired distance between packages. By applying this information as input data to the neural network NN, the neural network NN can therefore internalize this correlation in its weights through the training in step S1 and, in step S2, derive from given input data a prediction of the future behavior of a controller that also takes the horizontal distance between packages reliably into account.
The horizontal size of a package can be derived in a similar way. Decisive here is the time interval between two adjacent state changes of the gratings 31S, 32S from "1" to "0" (the current package is detected) and from "0" back to "1" (the current package has passed). This correlation is more complex, however, because it extends over significantly longer periods of time, into which several different speeds of the drives 31A, 32A may fall. The horizontal size of a package is therefore harder for the neural network NN to learn and identify than the distance between two packages, since it extends significantly further into the past.
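The relationship described here, package length as belt travel accumulated while the package occludes the grating under varying drive speeds, can be sketched as follows (an illustrative Python sketch; the occlusion convention `grating == 1` and all names are assumptions):

```python
def integrated_parcel_length_mm(grating, speed_mm_per_s, dt_s=0.01):
    """Integrate the belt speed over the samples during which the package
    occludes the grating (assumed convention: grating == 1 while a package
    is present), yielding its horizontal length even when the drive speed
    changes several times within the dwell period.

    grating:        list of 0/1 samples, one per clock period dt_s
    speed_mm_per_s: belt speed sample per clock period (may vary)
    """
    return sum(v * dt_s for g, v in zip(grating, speed_mm_per_s) if g == 1)

# example: three occluded 10 ms samples at 200, 300 and 400 mm/s -> 9 mm
length = integrated_parcel_length_mm([0, 1, 1, 1, 0],
                                     [100.0, 200.0, 300.0, 400.0, 500.0])
```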
The predictive model of the system 200 masters these and other challenges by using the following input features, measured at the last n time steps, as input data to the neural network NN at time point t (restricted here to the two conveyors 31, 32 for ease of understanding):
length of conveyor belt sections 31, 32
Raster state Pec _31 (data), pec _32 (data)
- time since the last change in grating state, as a sawtooth curve rising with time (represented in graphs 501, 502 by the current values of two stopwatches SU_31_up and SU_31_down, where the first stopwatch SU_31_up is reset to the initial value "0" on the falling edge of the curve Pec_31(data), which indicates a package entering the grating 31S, and the second stopwatch SU_31_down is reset to the initial value "0" on the rising edge of the curve Pec_31(data), which indicates the package leaving the grating 31S). This need not be a piecewise linear function such as a sawtooth curve; it may also be, for example, an exponential function. The resetting of the assigned stopwatch SU to its initial value, triggered by a change in grating state, is indicated in fig. 5 by exemplary arrows between graphs 501 and 502; these arrows originate in graph 502 and take effect in graph 501, in that the stopwatch SU shown there is reset to the initial value "0" in this example;
- actual and/or target speeds of the drive motors 31A, 32A (in graphs 507, 508 the target speeds are shown as the curves Target_31A(data) and Target_32A(data)).
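The input features listed above can be assembled into the network's input vector u(t); a minimal Python sketch (the function name, feature ordering and example values are illustrative assumptions):

```python
def input_features(t, belt_lengths, pec_31, pec_32, su_31_up, su_31_down,
                   target_31a, target_32a):
    """Assemble the input vector u(t) from the feature streams named in the
    text, restricted to the two conveyors 31, 32."""
    return [
        belt_lengths[0], belt_lengths[1],   # conveyor section lengths (static)
        pec_31[t], pec_32[t],               # binary grating states
        su_31_up[t], su_31_down[t],         # stopwatch values (sawtooth)
        target_31a[t], target_32a[t],       # drive target speeds
    ]

# example: feature vector for time step t = 1
u_t = input_features(
    t=1,
    belt_lengths=(1500.0, 1500.0),
    pec_31=[1, 0], pec_32=[1, 1],
    su_31_up=[5, 6], su_31_down=[2, 0],
    target_31a=[300.0, 300.0], target_32a=[250.0, 250.0],
)
```

In training, one such vector is built for each of the last n time steps and the n vectors are presented to the network together.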
Fig. 6 and 7 show different variants of possible predictive models with neural networks NN.
Here, for a specific time point t:
- s(t) denotes the current state of the facility 1,
- u(t) denotes the input data of the system 200,
- y(t) denotes the output data of the system 200.
Taking the sub-conveying section 31 as an example, in both variants shown the input data at time point t may be:
- the length of the sub-conveying section 31,
- the state of the grating 31S,
- the values of the stopwatches SU_31_up, SU_31_down,
- the actual speeds of the drives 31A, 32A,
- the target speeds of the drives 31A, 32A.
Recurrent neural networks NN, as known from the literature, are very well suited to implementing the predictive model, since they can inherently map temporal relationships. This has the particular advantage that the temporal behavior of the underlying dynamic system (e.g. the facility 1) can be learned with a single matrix A and a comparatively small state s(t) at each time point t. With many past time steps to be considered, such as n = 400 here, the matrix would become very large for a standard MLP (multi-layer perceptron) and would thus be prone to overfitting. In both variants shown, the matrices A/B/C are "shared" weight matrices, so-called shared weights, which are trained, for example, by means of backpropagation through time (BPTT) in such a way that the average of all n error gradients is used for one gradient descent step.
The variant shown in fig. 6 is a recurrent neural network NN whose aim is to predict, as output data of the system 200 at time point t, the number of time steps, measured from time point t, after which the rising/falling edges of the grating 32S of the downstream conveyor belt 32 are expected (for example as a countdown counter).
This variant solves this task with the following state transition equations and by pursuing the following optimization target:

State transition equations:

y_t = C·s_t   (1)

s_t = tanh(B·u_t + A·s_{t-1})   (2)

Optimization target: the deviation of the predicted outputs y_t from the measured outputs y_t^d is minimized over the shared weight matrices, for example as Σ_t (y_t − y_t^d)² → min over A, B, C.
the time-dependent course of the output data achieved in this way is shown in graph 506 in fig. 5 as a falling-edge curve Pec32_down_t (NN) and in graph 505 as a rising-edge curve Pec32_up_t (NN). By superimposing the two prediction curves with the data of the actual measurement, it can be seen that the predictions at each time point t are almost completely consistent with the curves representing reality, pec _32_up_t (data) in graph 505 and Pec _32_down_t (data) in graph 506. In fig. 5, it is shown by way of example that the neural network NN very accurately predicts the actual occurrence of edges by the arrows between graphs 504 and 505.
Advantageously, with this variant the expected behavior of the downstream grating 32S can be predicted over a longer time horizon in a single step, so that many (intermediate) computation steps are omitted in a computer implementation. This is a particular advantage when the control commands to the drives (target speeds of the drive motors) are to be optimized over several steps, for example in order to avert an impending collision early (observed across several arms), e.g. during package separation on the output line.
However, a one-step dynamics of the actual speed of the drive motors cannot be mapped in this way. "One-step dynamics" means that the subsequent state at time point t+1 depends only on the state at time point t and the input data (Markov property).
The variant shown in fig. 7 shows how such a one-step-dynamics recurrent neural network NN can be implemented. The goal of this variant is to predict the following output data at time point t:
- the expected binary state of the grating 32S of the downstream conveyor belt at time point t+1, and
- the expected actual speeds of the drive motors 31A, 32A at time point t+1, based on the target speeds of the drive motors 31A, 32A up to time point t.
This variant solves this task with the following state transition equations and by pursuing the following optimization target:

State transition equations:

y_{t+1} = C·s_t   (1)

s_t = tanh(B·u_t + A·s_{t-1})   (2)

Optimization target: the deviation of the predicted outputs y_{t+1} from the measured outputs y_{t+1}^d is minimized over the shared weight matrices, for example as Σ_t (y_{t+1} − y_{t+1}^d)² → min over A, B, C.
advantageously, with this variant, it is possible to simulate the predicted actual speed of the drive motor (inertia of the drive) at the point in time t+1, in addition to the predicted state of the downstream grating.
A particular advantage is that the output of the neural network NN at time t can be used directly as its input at time t+1, so that in principle the one-step dynamics can be simulated over any number of steps into the future. This enables, among other things, the tracking of packages on the conveyor belt.
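This closed-loop use of the fig. 7 variant, feeding the prediction y(t) back into the input u(t+1), can be sketched as follows (plain Python; toy dimensions, random untrained weights and the input layout are illustrative assumptions):

```python
import math
import random

def closed_loop_rollout(u0, A, B, C, n_steps, feedback):
    """Roll the one-step-dynamics network forward over n_steps future steps.
    After each step, `feedback(u_prev, y)` builds the next input from the
    previous input and the prediction (Markov property)."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    s = [0.0] * len(A)
    u = list(u0)
    ys = []
    for _ in range(n_steps):
        s = [math.tanh(b + a) for b, a in zip(matvec(B, u), matvec(A, s))]
        y = matvec(C, s)      # e.g. [grating state at t+1, actual speed at t+1]
        ys.append(y)
        u = feedback(u, y)    # prediction becomes part of the next input
    return ys

random.seed(1)
A = [[random.gauss(0, 0.1) for _ in range(4)] for _ in range(4)]
B = [[random.gauss(0, 0.1) for _ in range(3)] for _ in range(4)]
C = [[random.gauss(0, 0.1) for _ in range(4)] for _ in range(2)]

def feedback(u_prev, y):
    # overwrite the first two input components with the prediction (assumed layout)
    return [y[0], y[1], u_prev[2]]

ys = closed_loop_rollout([0.0, 0.0, 1.0], A, B, C, n_steps=50, feedback=feedback)
```

The rollout length (here 50 steps) can be chosen freely, which is what allows a package to be tracked along the conveyor belt arbitrarily far into the future.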
Comparing the two variants, the computer implementation of the variant shown in fig. 6 may require slightly less computational effort, since in the variant shown in fig. 7 every computation step, including the target speed to be achieved, is simulated individually.
Fig. 8 shows an alternative design of the digital stopwatch SU. In this example, the five curves sgnl_edge_up_x_0 to sgnl_edge_up_x_4 in graphs 800-804 show the dynamic course of five digital stopwatches SU as a function of the rising/falling edges of the gratings assigned to them. In this example, the curve of a digital stopwatch SU is implemented by a sawtooth function that does not grow linearly with time but is based on the difference between the conveyor belt position at the time of the falling or rising edge of the grating assigned to the stopwatch and the conveyor belt position at the current time point.
For simplicity reasons, only the curves of the stopwatch SU that are reset to the initial value at the rising edge of the grating are shown in graphs 800-804. A similar curve (not shown) may also be provided for the stopwatch SU which is reset to an initial value on the falling edge of the grating.
In this example, the curves of the stopwatches SU show, for each time point 1200-1700, the cumulative movement of the conveyor belt (measured in mm) since the last rising grating edge in the curves sensordata_0 through sensordata_4.
Here, the conveyor belts shown in the graphs 800 and 804 move at a constant speed, so that a linearly rising curve is derived for the two curves sgnl_edge_up_x_0 and sgnl_edge_up_x_4.
In contrast, the conveyor belts shown in the diagrams 801 to 803 move at a variable speed, thereby producing a non-linear, monotonically rising behavior of the curves sgnl_edge_up_x_1 to sgnl_edge_up_x_3 in this example.
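The position-based stopwatch variant of fig. 8 can be sketched as follows (an illustrative Python sketch; class and parameter names are assumptions, and only the rising-edge reset shown in graphs 800-804 is implemented):

```python
class PositionStopwatch:
    """Stopwatch variant of fig. 8: instead of elapsed clock ticks it returns
    the belt travel (in mm) accumulated since the last triggering edge, i.e.
    the difference between the belt position now and at that edge."""

    def __init__(self):
        self._position_mm = 0.0     # integrated belt position
        self._edge_position = 0.0   # belt position at the last triggering edge
        self._last_signal = None

    def tick(self, signal: int, speed_mm_per_s: float, dt_s: float = 0.01) -> float:
        self._position_mm += speed_mm_per_s * dt_s   # integrate belt movement
        if self._last_signal == 0 and signal == 1:   # rising edge resets
            self._edge_position = self._position_mm
        self._last_signal = signal
        return self._position_mm - self._edge_position

# example: constant 100 mm/s belt, rising grating edge at the second sample
sw = PositionStopwatch()
course = [sw.tick(sig, speed_mm_per_s=100.0) for sig in [0, 1, 1, 1]]
```

At constant belt speed the returned curve rises linearly, as in graphs 800 and 804; at variable speed it rises monotonically but nonlinearly, as in graphs 801 to 803.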
While the above considerations have been described in part in great detail, in particular with regard to the highly complex, time-dependent correlations between the sub-conveyors of a complex package-sorting facility with several parallel conveyors, they are not limited to the examples disclosed, and the person skilled in the art can derive further variants from them without departing from the scope of protection of the invention as defined by the claims. Many of the considerations apply equally to any system whose dynamic behavior can be described at least by a series of measurements of measurable system parameters collected over a period of time. Many dynamic systems that interact with their environment satisfy this premise. Without claiming completeness, such a dynamic system may be, for example, a mechanical structure, an electrical network, a machine, a conveying or sorting facility for arbitrary objects (e.g. suitcases, letters or materials), a production line (e.g. one used in automobile manufacturing), a dosing facility known from process engineering, or a biological process. The dynamic behavior of control engineering systems can also be measured and thus predicted by the invention. The person skilled in the art can derive further applications and fields of application of the invention from these application examples without departing from the scope of the invention.
This broad range of applications can be explained in particular by the fact that the technical teaching solving the technical problem is already accomplished by the provision of the innovatively connected neural network NN together with its arrangement and functional role. A specific application to a particular system, for example influencing the system by generating, outputting and/or applying control signals to it, is not absolutely necessary for a complete implementation of the technical teaching. Such a downstream application to the system is possible, but optional. Likewise, a complete implementation does not require a direct connection to sensors that collect measurements directly during system operation. The measurements may also have been collected at an earlier point in time, or artificially generated, for instance when the behaviour of a hypothetical system is to be studied that is merely planned but not yet built. Obviously, considerable resources can be saved by avoiding prototypes in this way.
It is likewise immaterial whether the control signals generated by the neural network NN are applied to the system automatically by a machine control device or manually by human intervention, since this is merely a step downstream of the technical teaching; depending on the implementation, this step can represent an independent technical teaching on its own or in interaction with the upstream neural network NN. Such an application of the neural network NN to the system may, illustratively, directly affect the package-sorting facility. However, the prediction and control signals created by the innovative neural networks NN may exert many other effects of the same kind on the system, carried out either automatically by a machine or manually by a person following corresponding guidance. To name just a few examples from many others: such an effect may be influencing the speed of a conveyor belt of the parcel-sorting facility to counteract a predicted future jam, as described in detail above; in an automotive production line, it may be responding early to a predicted future parts shortage in order to avert a production stoppage; in a biological process, it may be a change in composition to counteract, early and above all in time, a predicted future development that is considered disadvantageous. As with many dynamic systems, this time component is particularly important, because the system reacts to changes in its parameters with an inherent latency, and especially because an adverse development can often only be counteracted preventively: once the damage has occurred, correction is no longer possible.

Claims (10)

1. A Neural Network (NN) for managing a process that extends temporally into the past, the neural network having input data (BD) describing the state of the process and output data (REGF) derived from the input data by the neural network, wherein at least one digital Stopwatch (SU) is provided,
a. the value of which is supplied to the neural network as further input data, and
b. which is set to an initial value when a specific change occurs in one of the input data and thereafter indicates to the neural network, by the course of its rising or falling value, the increasing distance from that point in time.
2. The neural network of claim 1, wherein the course of the stopwatch value
a. rises or falls linearly,
b. rises or falls exponentially, or
c. rises or falls based on the difference between the value of at least one of the input data at the point in time at which the stopwatch was last reset to its initial value and the value of that input data at a subsequent point in time.
3. A neural network according to any one of the preceding claims, wherein the stopwatch stops after a certain time has elapsed.
4. A neural network according to any one of the preceding claims, wherein at least one input datum is the state of a light barrier which, over the course of the temporal extension, alternately indicates by means of a binary value the presence and absence of a single piece of cargo passing through it, and wherein two stopwatches (SU31_up, SU31_down) are provided, one of which is reset at the start of a presence indicated by a change of the binary value and the other of which is reset at the start of an absence indicated by a change of the binary value.
5. A method for configuring a neural network constructed in accordance with any one of the preceding claims, wherein the configuration is achieved by data-based training of the network with input data from the past of the process.
6. A neural network trained in accordance with the foregoing method.
7. Use of a neural network constructed according to any one of the preceding claims for predicting at least one future behavior of an industrial installation (1) in which at least two processes run simultaneously, depend on each other, and are controlled in relation to each other at least partly as a function of their development over time.
8. A computer program having program code which, when executed by a computer, causes the computer to implement the neural network according to any one of the preceding claims.
9. A computing unit (60) comprising a computer program according to claim 8.
10. A logistics system (1) having one or more parallel conveying sections (10, 20, 30) for individual goods, which merge in a conveying direction (FR) into a combining unit (40), wherein each conveying section (10, 20, 30) consists of a plurality of sub-conveying sections (11-13, 21-23, 31-33), each of which is accelerated or decelerated by an assigned drive (11A-13A, 21A-23A, 31A-33A) under the control of a computing unit (60) constructed according to claim 9, so that the combining unit (40) can combine the individual goods at a defined distance onto a single outlet conveying section (50).
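The dual-stopwatch arrangement of claim 4 can be sketched as follows, assuming the linearly rising course of claim 2a. The function and variable names (`dual_stopwatches`, `signal`) are illustrative assumptions and do not appear in the claims:

```python
def dual_stopwatches(signal):
    """Two linearly rising stopwatches for a binary light-barrier signal:
    su_up is reset on each rising edge (start of a presence) and su_down
    on each falling edge (start of an absence), per claim 4.

    Illustrative sketch; names and the time-step unit are assumptions.
    """
    su_up, su_down = [], []
    up = down = 0
    prev = signal[0]
    for s in signal:
        if prev == 0 and s == 1:
            up = 0          # rising edge: reset the "presence" stopwatch
        elif prev == 1 and s == 0:
            down = 0        # falling edge: reset the "absence" stopwatch
        su_up.append(up)
        su_down.append(down)
        up += 1             # both stopwatches rise linearly per time step
        down += 1
        prev = s
    return su_up, su_down

print(dual_stopwatches([0, 0, 1, 1, 0, 1]))
# -> ([0, 1, 0, 1, 2, 0], [0, 1, 2, 3, 0, 1])
```

Both stopwatch values would then be supplied to the neural network as further input data, alongside the binary light-barrier state itself.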
CN202310802249.3A 2022-06-30 2023-06-30 Management of a process that is temporally spread out into the past by means of a neural network Pending CN117332834A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP22182422.0 2022-06-30
EP22182422.0A EP4300362A1 (en) 2022-06-30 2022-06-30 Management of processes with temporal deployment in the past, in particular of concurrent processes in industrial plants, using neural networks

Publications (1)

Publication Number Publication Date
CN117332834A true CN117332834A (en) 2024-01-02

Family

ID=82494081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310802249.3A Pending CN117332834A (en) 2022-06-30 2023-06-30 Management of a process that is temporally spread out into the past by means of a neural network

Country Status (4)

Country Link
US (1) US20240005149A1 (en)
EP (1) EP4300362A1 (en)
CN (1) CN117332834A (en)
DE (1) DE102023206106A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015117241B4 (en) * 2015-10-09 2018-11-15 Deutsche Post Ag Control of a conveyor system
EP3825263A1 (en) * 2019-11-20 2021-05-26 Siemens Aktiengesellschaft Method for the computer-implemented configuration of a controlled drive application of a logistics system

Also Published As

Publication number Publication date
US20240005149A1 (en) 2024-01-04
DE102023206106A1 (en) 2024-01-04
EP4300362A1 (en) 2024-01-03

Similar Documents

Publication Publication Date Title
US11275345B2 (en) Machine learning Method and machine learning device for learning fault conditions, and fault prediction device and fault prediction system including the machine learning device
US10203666B2 (en) Cell controller for finding cause of abnormality in manufacturing machine
Hofmann et al. Implementation of an IoT-and cloud-based digital twin for real-time decision support in port operations
CN109399122B (en) Control device and machine learning device
CN106409120B (en) Machine learning method, machine learning device, and failure prediction device and system
US11040719B2 (en) Vehicle system for recognizing objects
US8565912B2 (en) Pick and place
US8843230B2 (en) Machining time predicting apparatus of numerically controlled machine tool
US20090089030A1 (en) Distributed simulation and synchronization
CN108803499B (en) Control device and machine learning device
Krumeich et al. Advanced planning and control of manufacturing processes in steel industry through big data analytics: Case study and architecture proposal
EP4083866A1 (en) Information processing device, method, and program
Schuh et al. Cyber-physical production management
US20220269248A1 (en) Method and device for automatically determining an optimized process configuration of a process for manufacturing or processing products
US11579000B2 (en) Measurement operation parameter adjustment apparatus, machine learning device, and system
Saadallah et al. Explainable predictive quality inspection using deep learning in electronics manufacturing
CN117332834A (en) Management of a process that is temporally spread out into the past by means of a neural network
CN114206599A (en) Machine learning for joint improvement
US10577191B2 (en) Conveyor article management system
CN114650958A (en) Method for configuring controlled drive applications of a logistics system in a computer-implemented manner
US11059171B2 (en) Method and apparatus for optimizing a target working line
EP2770465A1 (en) Event-based data processing
Vaisi et al. Multiobjective optimal model for task scheduling and allocation in a two machines robotic cell considering breakdowns
CN112296460A (en) Prediction device
Ghosh Delay-Aware Control for Autonomous Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination