WO2023046653A1 - Method for determining whether a predetermined item to be transported is arranged in a monitoring area - Google Patents
Method for determining whether a predetermined item to be transported is arranged in a monitoring area
- Publication number
- WO2023046653A1 (PCT/EP2022/076020)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- layer
- training
- image
- transported
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- the invention relates to a method for determining whether a given item to be transported is located in a monitored area.
- the invention relates to a computer device for determining whether a specified item to be transported is located in a monitored area and a device with such a computer device.
- the invention also relates to a computer program product, a data carrier and a data carrier signal.
- Different configurations of production lines are known from the prior art.
- a given item to be transported is examined at least once along a production route.
- devices are provided by means of which the quality of the transported goods is checked.
- the devices usually have an image acquisition device, by means of which an area of the production line is monitored.
- the image acquisition device is connected to the production line in terms of data technology.
- sensor values that relate to the position of the goods to be transported are transmitted to the image acquisition device.
- This information is necessary so that at least one image of the monitored area is recorded when the item to be transported is actually located in the monitored area.
- the data can be transmitted via a data line or wirelessly.
- a disadvantage of the known image capturing devices is that the structure of the image capturing device is very complex.
- a high installation effort is required so that the image capturing device can capture image signals of a surveillance area. Accordingly, it is very time-consuming and costly, or even impossible, to place the image capture device at a different point in the production line.
- the object of the invention is to specify a method in which the disadvantages mentioned above do not occur.
- the object is achieved by a method for determining whether at least one specified item to be transported is arranged in a monitored area, wherein an image signal of the monitored area, through which a transport route of an object runs, is recorded; wherein the image signal is fed to another artificial neural network, which determines on the basis of the image signal whether at least part of an object is located in the monitored area; wherein the image signal is fed to an artificial neural network when the other neural network determines that at least part of the object is arranged in the monitored area; wherein the artificial neural network determines on the basis of the image signal whether the determined at least one part of the object corresponds to at least one part of the at least one specified item to be transported; and wherein, if the artificial neural network determines that at least a part of the at least one specified item to be transported is arranged in the monitored area, an image of the monitored area is generated.
- a further object consists in providing a computer device by means of which the disadvantages mentioned above can be avoided.
- the object is achieved by a computer device for determining whether at least one specified item to be transported is located in a monitoring area, with a transport item detection module that has an artificial neural network, and a filter module that has another artificial neural network and is connected upstream of the transport item detection module, wherein the computer device is configured such that the other artificial neural network of the filter module determines on the basis of the image signal whether at least part of an object is arranged in the monitored area; that the image signal is fed to the neural network of the transport item detection module when the other neural network determines that at least a part of the object is arranged in the monitored area; that the artificial neural network determines on the basis of the image signal whether the determined at least one part of the object corresponds to at least a part of the at least one specified item to be transported; and that the computer device causes an image of the monitored area to be generated when the artificial neural network determines that at least part of the at least one specified item to be transported is arranged in the monitored area.
- the computer device can determine, independently of the data determined by the transport device, whether at least a part of the item to be transported is located in the monitored area and whether the specific item to be transported is a predetermined item to be transported. This makes it possible for the computer device to be placed at any point along the transport route without the settings of the transport device having to be changed.
- the method can be carried out with almost any image acquisition device, so that no complex structures are required.
- the filter module with the other neural network offers the advantage that not all of the image signals are fed to the transported goods identification module. This is advantageous because the filter module can be used to filter out image signals that do not contain any specified goods to be transported. This means that they do not have to be analyzed by the transported goods identification module. Since the processing of the image signal by the transported goods detection module takes longer than by the filter module, the provision of the filter module ensures rapid data processing.
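The two-stage behaviour described above can be sketched as follows. This is a minimal illustration with stand-in functions, not the claimed implementation: `filter_module` and `detection_module` here are cheap placeholder heuristics standing in for the two neural networks, and all frame data is invented.

```python
# Sketch of the two-stage pipeline: a lightweight filter network screens
# every image signal, and only frames containing at least part of an object
# are passed on to the slower transport goods detection network.

def filter_module(frame):
    """Cheap check: does the frame contain any object at all?"""
    # Stand-in heuristic: any nonzero pixel counts as "part of an object".
    return any(px != 0 for px in frame)

def detection_module(frame, target):
    """Expensive check: does the object match the specified transport item?"""
    # Stand-in heuristic: the frame must contain the target pattern value.
    return target in frame

def process(frames, target):
    """Return indices of frames for which an image would be generated."""
    recorded = []
    for i, frame in enumerate(frames):
        if not filter_module(frame):      # filtered out early; detector never runs
            continue
        if detection_module(frame, target):
            recorded.append(i)            # trigger image generation
    return recorded

frames = [
    [0, 0, 0, 0],   # empty monitoring area -> filtered out
    [0, 7, 0, 0],   # object present, but not the specified item
    [0, 5, 5, 0],   # specified transport item (pattern value 5)
]
print(process(frames, target=5))  # -> [2]
```

The point of the structure is visible in the control flow: for the empty frame, the expensive detector is never invoked at all.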
- the filter module is configured to detect whether at least part of an object is located in the surveillance area. This also includes the filter module recognizing that the entire object is arranged in the surveillance area. However, the filter module does not recognize which object it is. In this respect, the filter module does not recognize whether the object is the specified item to be transported.
- the item to be transported identification module is designed in such a way that it detects whether the at least one part of the object that is determined corresponds to at least one part of the specified item to be transported. This also includes recognizing whether the completely determined object corresponds to the specified transport goods.
- the specified item to be transported is an object that is to be transported by means of the transport device and is therefore of interest. During operation, however, it can happen that objects other than the specified goods to be transported are arranged on the conveyor belt and/or the image signal contains other objects, such as people or moving components of the production line, which are not of interest. These objects must therefore be recognized as irrelevant objects and not taken into account.
- an object that is subjected to a spatial change of location by means of the transport device is referred to as transported goods.
- the item to be transported is of interest to the user, so the method is intended to determine whether the item to be transported is located in the monitored area.
- the transport device is used to transport the goods to be transported and can be a conveyor belt, for example. Alternatively, other devices are also conceivable that act as a transport device.
- “at least one specified item to be transported” is understood to mean that the computer device can determine a single specified item to be transported or a plurality of specified items to be transported. If there are several specified transport goods, they can be goods of the same type. Alternatively, the transported goods can differ in type. In this case, the training process described below must be carried out for each type of transported goods so that they are recognized by the computer device during operation as “predetermined transported goods”.
- the transport path is understood to mean the path along which the transport device transports the specified goods to be transported or another object.
- the monitoring area is selected in such a way that it includes part of the transport route. It is thus ensured in a simple manner that the specified transport goods always pass through the monitoring area during transport.
- Image generation or image recording is understood as a process in which the image signal is stored at a specific point in time in an electrical memory, in particular for repeated and/or permanent use.
- the electrical memory can be a hard disk of the computer device and/or the image acquisition device. Based on the stored image signal, the image can be generated, which can be displayed, for example.
- the image signal includes a plurality of pixels which as a whole form the aforementioned image.
- the computer device is a data-processing electrical unit, by means of which the image signal is processed and/or evaluated.
- the computer device can be a processor or have a processor.
- the light of the image signal can be light visible to the human eye.
- the light can have a wavelength in the range between 380 nm and 780 nm (nanometers). This offers the advantage that no complex and expensive image acquisition device has to be used to generate the image.
- an electrical device such as a mobile phone, a tablet, a camera, etc. can be used to generate the image.
- the artificial neural network and/or the other neural network may have been trained prior to operation.
- the data determined during operation of the computer device can also be used to train the artificial neural network and/or the other neural network.
- the training processes are described in more detail below.
- the neural network may be a network having at least an input layer and a decision layer.
- the neural network can have at least one layer.
- the neural network can be a deep neural network.
- a deep neural network is understood to mean an artificial neural network that has a large number of layers.
- the neural network has the input layer and the decision layer. In between, the neural network has at least one layer that is connected to the input layer and the decision layer in terms of data technology.
- the layer or layers located between the input layer and the decision layer are also referred to as the hidden layer or layers. Each of the aforementioned layers has neurons.
- Neurons in one layer are connected to neurons in another layer.
- the connection can be such that a neuron of one layer is connected to all neurons of another layer.
- a neuron of one layer can be connected to only a part of neurons of the other layer.
- a connection in the neural network can be realized in which one or more outputs of the input layer are fed to the layer.
- one or more outputs of the layer are fed to the decision layer.
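The connection patterns just described (every neuron of one layer connected to every neuron of the next, or only to a part of them) determine the number of connections between two layers. A small illustrative sketch; the neuron counts are invented:

```python
# In a fully connected arrangement, every neuron of one layer is connected
# to every neuron of the next layer, giving n_from * n_to connections.
# A partial connection, as also mentioned in the text, uses fewer.

def full_connections(n_from, n_to):
    return n_from * n_to

# e.g. 100 neurons fully connected to a 50-neuron layer:
print(full_connections(100, 50))  # -> 5000
```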
- the neural network can have a neural convolution network, which is also referred to as a convolutional neural network.
- Image signals can be examined particularly well by means of a neural convolution network.
- patterns in image signals can be recognized well by means of a neural convolution network.
- the image signal captured by the image capture device can be supplied to an input layer of the neural convolution network.
- the number of neurons in an input layer of the neural convolution network can correspond to a number of pixels in the image signal.
- the input layer contains information about image height and image width.
- the input layer of the convolutional neural network can be three-dimensional.
- the input layer can contain information about the image height, image width and image information.
- the image information can be color information, for example. If the color information is limited to the colors red, yellow and blue, the input layer has three sub-layers.
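Reading the points above together (one input neuron per pixel, one sub-layer per colour), the size of the three-dimensional input layer follows from image height, image width and the number of colour sub-layers. A sketch under that reading; the image dimensions are invented:

```python
# Neuron count of a three-dimensional input layer: height x width x
# number of colour sub-layers (one sub-layer per colour channel).

def input_layer_neurons(height, width, channels):
    return height * width * channels

# A 640x480 image with three colour sub-layers:
print(input_layer_neurons(480, 640, 3))  # -> 921600
```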
- the convolutional neural network may have one or more layers.
- the layer may be a convolutional neural layer and/or downstream of the input layer.
- the layer receives the output data of the input layer.
- a layer can have several sublayers.
- a sublayer has a multiplicity of neurons which are arranged in a plane, the planes being arranged offset from one another.
- a layer can thus be viewed as a multi-dimensional matrix that outputs information at a different level of abstraction.
- a first layer can thus recognize and output information about edges. The first layer may be downstream of the input layer.
- a second layer downstream of the first layer can recognize and output different shapes based on the edges.
- a third layer downstream of the second layer can in turn recognize and output objects based on the different shapes.
- a fourth layer downstream of the third layer can recognize and output structures based on the objects.
- the convolutional neural network may have multiple layers, each having one or more sub-layers.
- the neural network can have a first layer and a second layer downstream of the first layer, with the second layer being generated by using a filter, in particular a one-dimensional or multi-dimensional filter.
- the filter is configured in such a way that when applied to the first layer it produces one, in particular a single, sub-layer of the second layer.
- the filter is thus matched to the first layer, in particular with regard to the number of sub-layers of the first layer.
- the number of sub-layers of the second layer may correspond to the number of filters applied to the first layer.
- the first layer can be downstream of the input layer and can be generated by applying a filter, in particular a one-dimensional or multi-dimensional filter.
- the filter is configured in such a way that when applied to the input layer it produces one, in particular a single, sub-layer of the first layer.
- the filter is matched to the input layer, particularly with regard to the number of sublayers of the first layer.
- the number of sublayers of the first layer depends on the number of filters applied to the input layer.
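The correspondence stated above, that each applied filter produces exactly one sub-layer of the next layer, can be illustrated with array shapes. The dimensions below are invented, and a "valid" convolution (no padding, stride 1) is assumed:

```python
import numpy as np

# Each filter spans all sub-layers of its input but produces a single
# feature map, so the sub-layer count of a layer equals the number of
# filters applied to its predecessor.

height, width, in_sublayers = 32, 32, 3     # e.g. an input layer with 3 colour sub-layers
num_filters = 8                             # filters applied to the input layer
kernel = 5                                  # 5x5 filter, "valid" convolution

out_h = height - kernel + 1
out_w = width - kernel + 1
first_layer = np.zeros((out_h, out_w, num_filters))

print(first_layer.shape)  # -> (28, 28, 8): sub-layer count equals filter count
```

Note that the input's own sub-layer count (3 here) does not appear in the output shape; only the number of filters does.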
- the decision layer of the neural network can be connected to at least two layers. This means that the outputs from at least two layers are fed directly to the decision layer. In particular, the decision layer can receive the outputs from each layer directly. Alternatively, the decision layer can be connected directly to the input layer. This is the case when the neural network has no layer between the input layer and the decision layer.
- the artificial neural network can have an unsupervised machine algorithm, in particular a learning algorithm.
- the unsupervised algorithm is used to accurately identify whether the determined at least one part of the object corresponds to at least one part of the specified transport goods.
- An unsupervised algorithm is understood to be an algorithm that determines structures and relationships in the input data on the basis of inputs. Two types of unsupervised learning algorithms can be distinguished. A “cluster” algorithm attempts to find clusters of observations in a dataset that are similar to each other. An “association” algorithm tries to find rules with which associations can be drawn. It is particularly advantageous if the unsupervised algorithm is a proximity search algorithm, in particular a nearest neighbor algorithm.
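A minimal sketch of the proximity search (nearest neighbour) favoured above. The feature vectors are invented; in the context of the application they could, for instance, be parameter pairs determined from data elements:

```python
# 1-nearest-neighbour search: return the index of the stored point
# closest to the query under squared Euclidean distance.

def nearest_neighbour(query, points):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(points)), key=lambda i: dist2(query, points[i]))

# Feature vectors seen in training (hypothetical values):
training = [(0.1, 0.2), (0.9, 0.8), (0.5, 0.5)]
print(nearest_neighbour((0.85, 0.75), training))  # -> 1
```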
- a decision layer of the convolutional neural network may include the unsupervised learning algorithm.
- the unsupervised learning algorithm is supplied with the output data of the layer or the input layer.
- the decision layer and hence the unsupervised algorithm can be fully connected to the upstream layer or the input layer.
- all neurons of the decision layer are connected to all neurons of the preceding further layer or the input layer.
- the convolutional neural network can have only a single layer, in particular the decision layer, which is completely connected to the previous layer.
- the preceding layer is understood to mean the layer that is arranged before the decision layer in an information flow from the input layer to the decision layer.
- the decision layer can have a multiplicity of neurons which extend, in particular exclusively, in one direction. Such a one-dimensional decision layer enables faster processing of the data.
- a data element of the image signal that is fed to the decision layer can be checked to see whether it contains part of the specified transport item.
- the check can contain the determination of at least one parameter of the supplied data element.
- two parameters can be determined.
- the parameters can differ from each other in type.
- one parameter can be the variance of an item of image information contained in the data element and/or another parameter can be the expected value of an item of image information contained in the data element.
- using the unsupervised learning algorithm and the at least one specific parameter, in particular the two specific parameters it can be determined whether the data element contains part of the specified transport goods.
- the unsupervised learning algorithm may use a training result to determine whether the data element contains a part of the specified transport goods.
- the training result can be at least one parameter range.
- the training result can be two parameter ranges. This is relevant for the case when the unsupervised learning algorithm determines two parameters.
- the parameter range determined in the training is referred to below as the pre-trained parameter range.
- the unsupervised learning algorithm can determine that the supplied data element contains part of the specified transport goods if the determined at least one parameter lies in the at least one pre-trained parameter range. In the event that two parameters are determined, the unsupervised learning algorithm determines that the supplied data element contains part of the specified transport goods if a first determined parameter, such as the variance, lies in a first pre-trained parameter range and a second determined parameter, such as the expected value, lies in a second pre-trained parameter range. It has been recognized that by using the unsupervised learning algorithm it is possible to determine whether the objects contained in the image signal correspond to the specified transport goods. Therefore, the neural network can be used flexibly. In particular, the neural network can also be used if the image signal contains objects that are unknown to the neural network. This is possible because the unsupervised learning algorithm is based on the at least one parameter and uses it to determine whether or not the image signal contains the specified item to be transported.
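The two-parameter decision rule described above can be sketched as follows. The pre-trained ranges and the data element are invented for illustration; only the structure of the rule (both parameters must fall into their respective pre-trained ranges) is taken from the text:

```python
# A data element is taken to contain part of the specified transport item
# only if its variance lies in a first pre-trained range AND its expected
# value (mean) lies in a second pre-trained range.

def mean_and_variance(values):
    m = sum(values) / len(values)
    v = sum((x - m) ** 2 for x in values) / len(values)
    return m, v

def contains_transport_item(values, mean_range, var_range):
    m, v = mean_and_variance(values)
    return mean_range[0] <= m <= mean_range[1] and var_range[0] <= v <= var_range[1]

# Hypothetical training result (pre-trained parameter ranges):
MEAN_RANGE = (0.4, 0.6)
VAR_RANGE = (0.0, 0.05)

element = [0.45, 0.5, 0.55, 0.5]
print(contains_transport_item(element, MEAN_RANGE, VAR_RANGE))  # -> True
```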
- the unsupervised learning algorithm can be trained in such a way that it also determines another predefined item to be transported.
- the method works in the same way if a number of predefined goods to be transported are to be determined.
- the other item to be transported differs, for example, in type from the specified item to be transported.
- the two types of goods to be transported can be arranged in the monitoring area at the same time.
- the types of goods to be transported can be arranged in the monitoring area at different points in time. In this case, as part of the training process, several parameter ranges of the same type have been determined, which are assigned to the respective goods to be transported.
- the computer device can determine the specified transport item assigned to the specified parameter range.
- the neural network can be used to identify in a simple manner whether a specified item to be transported or a plurality of specified types of item to be transported are located in the monitoring area.
- the unsupervised algorithm may be configured to output a bounding box that encloses the part of the specified transport goods.
- a bounding box may be created when it has been determined that a part of the specified transport goods is located within the surveillance area.
- the bounding box is determined based on the analysis of the data items described above. Thus, after the analysis of the data elements, those data elements are known which contain part of the goods to be transported.
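One way to read the bounding box construction above: once the data elements (modelled here as grid cells) containing part of the transport goods are known, the box is the smallest rectangle enclosing all of them. A sketch under that assumption; the grid coordinates are invented:

```python
# Derive a bounding box from the analysed data elements: the smallest
# rectangle enclosing every flagged cell.

def bounding_box(flagged_cells):
    """flagged_cells: (row, col) indices of data elements containing the item."""
    rows = [r for r, _ in flagged_cells]
    cols = [c for _, c in flagged_cells]
    return (min(rows), min(cols), max(rows), max(cols))  # top, left, bottom, right

cells = [(2, 3), (2, 4), (3, 3)]
print(bounding_box(cells))  # -> (2, 3, 3, 4)
```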
- a neural convolution network which has an unsupervised learning algorithm, offers the advantage that it can be recognized particularly well whether the specified item to be transported is arranged in the monitored area.
- the computer device can have a triggering module that determines a recording time.
- the triggering module can be connected downstream of the transported goods identification module. This means that the triggering module receives the output data from the transported goods identification module as input data. Alternatively, it is possible for the triggering module to work in parallel with the item-of-transport identification module.
- the recording time corresponds to the time at which the picture was taken.
- the recording time can be offset by a period of time from a determination time at which the other artificial neural network has determined that at least part of the object is in the surveillance area.
- alternatively, the recording time is not offset from the determination time, and thus an image is recorded at the same time as it is determined that the object is arranged in the surveillance area. In this case, the other artificial neural network has determined that the object, in particular the transported goods, is arranged completely in the monitored area.
- the time period can be selected such that the entire object, in particular the transported goods, is arranged in the monitored area at the time of recording. This means that the image is not recorded until the object, in particular the transported goods, has moved from its position at the time of determination to a position in which it is completely arranged in the monitoring area.
- the triggering module makes a prediction as to when the item to be transported is completely arranged in the monitoring area. It can thus be achieved in a simple manner that only images that can actually be processed in later processing are recorded. For this it is necessary that the complete transported goods are arranged in the monitored area.
- the time of recording can be determined and/or the image can only be generated when the neural network of the item to be transported identification module determines that the specific object corresponds to the at least one specified item to be transported.
- An algorithm of the triggering module can be used to determine when the object, in particular transport goods, is completely arranged in the monitored area.
- the triggering module can have a linear quadratic estimation algorithm, by means of which it is determined when the transported goods are completely arranged in the surveillance area.
- the triggering module thus takes into account the fact that the image capturing device requires a certain time from receipt of a recording signal in order to capture an image. Accordingly, the triggering module ensures that the image capturing device captures the image neither too early nor too late, but always when the specified item to be transported is completely arranged in the monitored area.
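The timing logic described above can be sketched with a much simpler constant-velocity model (the application names a linear quadratic estimator; the simplification, function names and all numbers below are ours): predict when the item reaches the position at which it is fully inside the monitored area, then subtract the camera's capture latency.

```python
# Compute how long to wait before sending the capture signal so that the
# image is taken exactly when the item is fully inside the monitored area,
# compensating the camera's signal-to-exposure latency.

def recording_delay(position, target_position, velocity, capture_latency):
    """Seconds to wait before sending the capture signal."""
    travel_time = (target_position - position) / velocity
    # send the signal early so the exposure happens on arrival;
    # never negative (if it is already too late, trigger immediately)
    return max(0.0, travel_time - capture_latency)

# Item at 1.0 m, fully inside the area at 2.0 m, belt speed 0.5 m/s,
# camera needs 0.5 s from signal to exposure:
print(recording_delay(1.0, 2.0, 0.5, 0.5))  # -> 1.5
```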
- the recorded image can be processed in such a way that the quality of the goods to be transported is assessed in a subsequent step. In particular, it can be assessed whether the transported goods are not damaged and/or have other undesirable properties.
- an image can be made available in a simple manner and without a data connection to the transport device that can be used to assess the quality of the transported goods. This means that no electrical data, such as sensor data, relating to a state and/or a property of the transport device and/or the transported goods are transmitted to the computer device.
- the determination of the transport quality can thus be based solely on the recorded optical image signal.
- the computer device has a filter module.
- the filter module is configured in such a way that it checks the image signal before supplying it to the artificial neural network as to whether at least a part of the object is arranged in the surveillance area.
- the filter module is provided in the computer device in such a way that it first receives the image signals recorded by the image recording device.
- the filter module can thus filter out image signals, so that only a part of the recorded image signals is forwarded to the transport goods detection module. This is advantageous because the filter module requires less computing capacity than the modules connected downstream of it. As a result, the required computing capacity can be kept low when no object is arranged in the monitored area.
- the filter module may include another artificial neural network.
- the other artificial neural network can determine whether an object is located in the monitoring area based on the supplied image signal.
- the other artificial neural network may be a deep neural network. In this case, the other artificial network can have fewer layers than the artificial neural network. This allows the other artificial network to process the image signal faster than the artificial neural network.
- the other neural network may be another convolution neural network.
- the other convolutional neural network has the advantage that image patterns can be recognized and it is therefore advantageous for the analysis of image signals.
- the other artificial neural network differs from the artificial neural network in that it is optimized to process the image signals quickly and to recognize that at least a part of the object is located in the monitoring area. However, it usually does not recognize, or at least not exactly, which object is arranged in the monitored area. In contrast, the artificial neural network is optimized in such a way that it recognizes whether the object located in the monitored area corresponds to the specified transport goods.
- the image signal captured by the image capture device can be supplied to an input layer of the other artificial neural network.
- the number of neurons in the input layer of the other neural convolution network can correspond to a number of pixels in the image signal.
- the input layer contains information about image height and image width.
- the input layer of the other convolutional neural network can be three-dimensional.
- the input layer can contain information about the image height, image width and image information.
- the image information can be color information, for example. If color information is limited to the colors red, yellow and blue, the input layer has three sublayers. As a result, the input layer of the other neural network can be identical to the input layer of the neural network described above.
- the other convolutional neural network can have one or more further layers.
- the further layer can be a neuronal convolution layer and/or can be connected downstream of the input layer.
- the further layer receives the output data of the input layer.
- at least one filter of the further layer, with a predetermined pixel size, analyzes the received data and outputs an output matrix.
- the number of output matrices depends on the number of filters.
- the size of the output matrix depends on the filter size and other factors such as padding and stride.
- the output of the further layer can be simplified, in particular reduced, by pooling.
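The dependence of the output matrix size on filter size, padding and stride, and its subsequent reduction by pooling, follows the standard convolution arithmetic. A sketch with invented layer dimensions:

```python
# Standard convolution output arithmetic:
#   out = (in + 2*padding - kernel) // stride + 1
# A pooling step then reduces the output further.

def conv_output_size(in_size, kernel, padding=0, stride=1):
    return (in_size + 2 * padding - kernel) // stride + 1

def pool_output_size(in_size, pool, stride=None):
    stride = stride or pool                  # default: non-overlapping pooling
    return (in_size - pool) // stride + 1

size = conv_output_size(28, kernel=5)        # 5x5 filter, no padding, stride 1 -> 24
size = pool_output_size(size, pool=2)        # 2x2 pooling halves it -> 12
print(size)  # -> 12
```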
- the number of further layers of the other convolutional neural network is smaller than the number of layers of the convolutional neural network.
- the other convolutional neural network may have only one input layer and one output layer.
- the other neural network may have one or more layers between the input layer and the output layer.
- the other neural network can thus have 10 or fewer further layers.
- the convolutional neural network can have 20 or more further layers.
- the other artificial neural network can have another unsupervised machine algorithm, in particular a learning algorithm.
- the other unsupervised algorithm is used to detect whether at least part of an object is located in the surveillance area.
- the other unsupervised algorithm also recognizes a hitherto unknown object that is located in the monitored area. It is particularly advantageous if the other unsupervised algorithm is a proximity search algorithm, in particular a nearest neighbor algorithm.
- a decision layer of the other convolutional neural network may include the other unsupervised learning algorithm.
- the output data of a further layer or the input layer are supplied to the other unsupervised learning algorithm.
- the decision layer, and hence the other unsupervised algorithm, may be fully connected to the preceding further layer or to the input layer.
- all neurons of the decision layer are connected to all neurons of the preceding further layer or the input layer.
- the other convolutional neural network can have only a single layer, in particular the decision layer, which is completely connected to the previous layer or the input layer.
- the preceding layer is understood to mean the layer that is arranged before the decision layer in an information flow from the input layer to the decision layer.
- the other unsupervised learning algorithm may be configured to output information as to whether or not a part of the object is located in the monitored area. If the unsupervised learning algorithm determines that part of the object is located in the surveillance area, the image is fed to the neural network. As a result, filtering of the image signals is achieved in a simple manner, so that only the image signals in which at least part of the object is arranged in the monitored area are fed to the neural network. The neural network can then determine, in the manner described above, whether the specific object is the specified item to be transported and/or generate the bounding box.
- the other unsupervised learning algorithm can use a training result of the other neural network to determine whether the data element of the image signal contains a part of the object. To this end, the other unsupervised learning algorithm can determine another parameter based on the image signal. The unsupervised learning algorithm can then determine, depending on the other parameter, whether at least part of the object is located in the surveillance area. In particular, another parameter or another parameter range pre-trained in the training process can be used to assess whether the data element contains a part of the object or not. By comparing the other parameter determined in real operation to the other parameter or other parameter range pre-trained in the training process, it can be determined whether the data element contains at least part of the object.
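The comparison of a parameter determined in real operation against a pre-trained parameter range can be sketched as follows; the range bounds and measured values are hypothetical:

```python
def contains_object_part(parameter, pretrained_range):
    """Decide whether a data element contains part of an object by
    checking whether the parameter determined in real operation lies
    within the parameter range pre-trained in the training process."""
    low, high = pretrained_range
    return low <= parameter <= high

# Hypothetical pre-trained range and values measured in real operation
pretrained = (0.4, 0.7)
print(contains_object_part(0.55, pretrained))  # True
print(contains_object_part(0.90, pretrained))  # False
```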
- the other neural network recognizes that at least part of an object is arranged in the monitored area.
- the other neural network works like the neural network, in that in both cases it is checked whether another parameter determined in real operation lies within a pre-trained other parameter range, with the output of the respective network depending on the test result. Therefore, reference is also made to the above statements on the neural network.
- the two networks may differ in the number of layers.
- the other neural network can have fewer layers than the neural network, so that the other neural network cannot recognize exactly which object it is.
- the other neural network can accurately recognize whether at least part of the object is located in the surveillance area.
- the neural network outputs whether the object is the specified item to be transported or part of the item to be transported, while the other neural network outputs whether any part of an object is located in the monitoring area at all.
- the structure, in particular the connections between the different layers, of the other artificial neural network can be determined by a neural architecture search system. This can also be done in the training process.
- the structure of the network is optimized for speed and not for the precise recognition of patterns.
- a system constructed in this way offers the advantage that the examination of the image signal can take place particularly quickly.
- every neuron in one layer is connected to every neuron in another layer.
- the search system is designed in such a way that it recognizes which connections between the neurons are actually necessary and removes the unnecessary connections. The search system therefore reduces the number of connections between the neurons, which allows the artificial neural network to process the image signal quickly. Since the decision layer is fully connected to the previous layer, the optimization only occurs in the layers preceding the decision layer.
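The removal of unnecessary connections can be illustrated by a simple magnitude-based pruning sketch. The weight matrix, threshold and layer sizes below are assumptions for illustration, not details of the architecture search system described in the application:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))  # dense connections between two small layers

# Remove connections whose weight is negligible: the masked weights no
# longer contribute, so fewer multiplications are needed per input.
threshold = 0.5
mask = np.abs(weights) >= threshold
pruned = weights * mask

kept = int(mask.sum())
print(f"kept {kept} of {weights.size} connections")
```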
- the convolutional neural network is trained in a training process before it is used.
- the training of the neural network can have a first training phase and a second training phase, in particular downstream of the first training phase.
- the neural network can be trained in the second training phase using the neural network trained in the first training phase. This is explained in more detail below.
- the neural network can be modified in comparison to the neural network used in real operation such that the neural network to be trained has a different decision layer.
- the decision layer of the network to be trained does not have an unsupervised learning algorithm. This means that in the first training phase, the layers of the neural network to be trained that precede the decision layer are trained.
- the training takes place in the first training phase with training images.
- the number of training images supplied to the neural network to be trained in the first training phase is greater than the number of images supplied to the neural network to be trained in the second training phase.
- the images fed to the neural network to be trained are labeled.
- the first training phase can be understood as a basic training of the neural network. This means that the training is not geared towards the specific application, that is to say the at least one specified item to be transported, but rather the aim is for the neural network to learn a large number of different objects.
- the image signal contains information about the image height, image width and other image information such as color.
- the image signal contains information about the objects represented in the images.
- the image signal contains information about what the objects are, e.g. screw, chair, pen, etc.
- the training images supplied as part of the first training phase can contain the specified goods to be transported.
- the designated or classified objects are at least partially enclosed by a bounding box, so that the convolutional neural network recognizes where the specified item to be transported is located in the image signal.
- the neural network to be trained is supplied with a large number of images, in particular millions of images, which, as explained above, contain information about the object and the position of the object.
- images can show a large number of different objects, in which case the goods to be transported can be included in the images, but do not have to be included.
- the first training phase can preferably only be carried out once.
- the neural network is only trained according to the second training phase for each new application.
- alternatively, the first training phase can be carried out before every second training phase, in particular each time the neural network is used in a new application.
- the decision layer of the neural network to be trained features the unsupervised learning algorithm. This means that the structure and functionality of the neural network to be trained in the second training phase corresponds to the neural network that is used in real operation.
- the neural network trained in the first training phase is used in the second training phase. This means that in the second training phase, the layer or layers preceding the decision layer have already been trained.
- the second training phase is used to train the neural network for the special application, i.e. for the case in which the neural network is to recognize the at least one transport item.
- the neural network to be trained can be supplied with training images which contain the specified item to be transported and optionally one or more other objects, and training images which contain no object and therefore no specified item to be transported. However, it is advantageous if 20-100%, in particular 80-95%, preferably 90-95%, of the supplied training images show the specified item to be transported.
- the same training images can be supplied in the second training phase as in the first training phase. Alternatively, different training images can be supplied. At least one training image, in particular a large number of training images, can be supplied to the neural network in the second training phase. Some of the training images can be labeled and the others unlabeled; alternatively, all images can be unlabeled. In particular, all training images that contain an object are labeled, while training images that do not contain an object are not labeled.
- the neural network is trained for the at least one specified item to be transported. This means that the neural network recognizes very precisely whether the object in the monitored area is the specified item to be transported. Accordingly, the output of the neural network is whether the object located in the monitored area is the specified item to be transported.
- At least one parameter is determined for at least one training data element supplied to a neuron of the decision layer.
- the training data item comes from a layer upstream of the decision layer.
- the training data element contains image information, in particular an image intensity, and/or represents an image area of a training image.
- a large number of training data elements, each representing an image area, are fed to the decision layer.
- the decision layer thus receives the complete image in the form of training data elements. It is clear that the training data elements are fed to the decision layer for each training image fed to the neural network to be trained.
- At least one parameter is determined in the decision layer, in particular for each training data element of a training image.
- a variance of the image information contained in the training data element is determined as the first parameter and/or an expected value, in particular of a normal distribution, of the image information contained in the training data element is determined as the second parameter.
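The two parameters named above, the expected value and the variance of the image information of a training data element, can be sketched as follows; the intensity values of the image area are hypothetical:

```python
import statistics

def element_parameters(intensities):
    """For one training data element (an image area), determine the
    expected value (mean) and the variance of its image intensities,
    i.e. the two parameters described above."""
    mean = statistics.fmean(intensities)
    var = statistics.pvariance(intensities, mu=mean)
    return mean, var

# Hypothetical intensities of a small image area
mean, var = element_parameters([120, 124, 118, 122])
print(mean, var)  # 121.0 5.0
```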
- a parameter range can be determined in which the training data elements contain a part of the transported item.
- the parameter range can be determined because a large number of training images are used for training. Accordingly, many parameters are determined so that a range of parameters can be determined.
- two parameter ranges, which are assigned to the specified item to be transported, are determined.
- the training data element contains at least part of the item to be transported if both determined parameters lie in the respective parameter range.
- the layers trained in the first training phase can determine very precisely whether the image signal contains an object. It is therefore possible in the second training phase that at least one cluster, in particular parameter ranges, can be formed.
- the unsupervised algorithm is not adapted; instead, the above-mentioned parameter or parameters are determined.
- the parameter values or parameter value ranges are known, which define a cluster area in which training data elements of the training images that contain a part of the transported goods are arranged. With knowledge of the parameter values or the parameter value ranges, a decision can be made in real operation as to whether the image signal contains at least part of the transported goods.
- the other neural network, in particular the other convolutional neural network, is trained in a training process before it is used.
- the training can take place identically to the neural network.
- the other neural network can also be trained in two training phases.
- Analogous to the neural network, at least one other parameter can be determined during training for a training image supplied to the decision layer of the other neural network.
- the other unsupervised learning algorithm can determine the other parameter or another range of parameters.
- the determined pre-trained other parameter or other range of parameters may characterize whether at least part of the object is located in the surveillance area.
- the determined other parameter can be compared with the other pre-trained parameter or other pre-trained parameter range determined in the training process, and depending on the comparison it can be determined whether at least part of the object is arranged in the monitored area. In particular, it can be determined that at least a part of the object is arranged in the surveillance area if the determined other parameter lies in the pre-trained other parameter range. In contrast to the neural network, the other neural network is trained such that the output of the other neural network is whether or not at least part of the object is located in the surveillance area.
- a device is advantageous which has an image capture device for capturing an image signal emanating from a monitored area and a computer device according to the invention that is connected to the image capture device for data transmission.
- the image capture device and the computer device are connected in such a way that the image signals captured by the image capture device are fed to the computer device.
- the image acquisition device and the computer device can be integrated in the same device. So the device can be a mobile phone or camera or tablet or the like. Alternatively, the image acquisition device and the computer device can be designed separately from one another.
- the computing device can thus be part of a computer which is connected to the image acquisition device in terms of data technology.
- the computer device only has to communicate with the image acquisition device and not with the transport device.
- the image capturing device can have a lens. In this case, the same image capturing device can capture the image signal emanating from the surveillance area and record and thus generate the image. Alternatively, an image capture device can capture the image signal emanating from the monitored area and another image capture device can record the image and thus generate it.
- the image capture device can continuously monitor the surveillance area, so that image signals are continuously captured by the image capture device. Accordingly, image signals are continuously evaluated by the computer device.
- the image acquisition device can be designed in such a way that it can acquire image signals that are in the wavelength range that is visible to the human eye. Alternatively or additionally, the image acquisition device can be designed in such a way that it can process image signals that are outside the wavelength range that is visible to the human eye, in particular in the infrared or hyperspectral range.
- the computer device can output a recording signal and transmit it to the image acquisition device.
- the image capturing device can record the image and thus generate it.
- the recording signal can contain information about the recording time or is output at the recording time.
- a computer program which includes instructions which, when the program is executed by a computer, cause the latter to carry out the method according to the invention.
- a data carrier is advantageous on which the computer program according to the invention is saved.
- a data carrier signal is advantageous which transmits a computer program according to the invention.
- FIG. 1 shows a schematic representation of a device according to the invention at a point in time at which a part of a specified item to be transported is completely arranged in a monitoring area
- FIG. 4 shows a flowchart for training the artificial neural network from FIG. 2,
- FIG. 6 shows a flow chart of the method according to the invention.
- a device 9 shown in FIG. 1 has an image acquisition device 10 and a computer device 3 . At least part of the image acquisition device 10 and the computer device 3 are arranged in a cavity of the device 9 that is enclosed by a housing 12 .
- the image acquisition device 10 and the computer device 3 are connected in terms of data technology. In this way, optical image signals captured by the image capturing device 10 are transmitted to the computer device 3 .
- the device 9 can be a mobile phone.
- the device 9 is not connected to the transport device 11 in terms of data technology. This means that no data exchange takes place between the transport device 11 and the device 9 . In particular, no data is transmitted from the transport device 11 to the device 9 or vice versa.
- the image capture device 10 monitors a surveillance area 2 from which the image signal emanates.
- the transport device 11 can be a conveyor belt or the like.
- the transport device 11 serves to transport objects.
- the object can be a given transport item 1, which is further analyzed.
- the object can be another transport item 1a that is not relevant and should therefore not be analyzed further.
- the image capturing device 10 is placed in such a way that the monitored area 2 includes an area of the transport device 11 .
- the monitoring area 2 includes a transport route of the objects, so that it is ensured that each specified transport item 1 and each other, non-relevant transport item 1a must pass through the monitoring area 2.
- the image acquisition device 10 can record an image of the monitored area 2 after receiving a recording signal 13 output by the computer device 3 .
- the computer device 3 includes a filter module 6 to which the image signal captured by the image capture device 10 is supplied.
- the computer device 3 has a transported goods identification module 4 to which an output signal from the filter module 6 can be supplied.
- the transported goods identification module 4 has an artificial neural network, the structure of which is shown in FIG. 2.
- the filter module 6 has another artificial neural network, the structure of which is shown in FIG. 3.
- the computer device 3 also has a triggering module 5 . The output signal of the filter module 6 is fed to the transport goods identification module 4 and the triggering module 5 .
- the transported goods identification module 4 and the triggering module 5 generate the recording signal 13 which is fed to the image acquisition device 10 .
- After receiving the recording signal 13, the image recording device 10 records an image of the monitored area 2. The image is saved and can be used for a subsequent determination of the quality of the specified item 1 to be transported.
- the triggering module 5 is connected downstream of the goods identification module 4 .
- Fig. 2 shows a schematic structure of an artificial neural network 7 of the transported goods identification module 4.
- the neural network 7 has an input layer 15 and a decision layer 16 and a large number of layers 17, 17'. Although two layers are shown in FIG. 2, namely a first layer 17 and a second layer 17', the neural network 7 has more than two layers 17, 17'. The flow of information from the input layer 15 to the decision layer 16 takes place in the direction of the arrow shown.
- the neural network 7 can have more than 100 layers.
- the image signal received by the filter module 6 from the image acquisition device 10 is transmitted to the transported goods detection module 4 when it is determined in the filter module 6 that at least part of the specified transported goods 1 is located in the monitoring area 2.
- the image signal is fed to the input layer 15 of the neural network 7 .
- the input layer is three-dimensional.
- the extent of the input layer 15 in the width direction B corresponds to an image width and the extent of the input layer 15 in the height direction H corresponds to the image height.
- the number of neurons of the input layer 15, not shown in FIG. 2, can correspond in the width direction B to the number of pixels contained in the image signal in the width direction B and in the height direction H to the number of pixels contained in the image signal in the height direction H.
- the input layer 15 can contain further image information in the depth direction T. In this way, the input layer 15 can contain color information in the depth direction T. If the focus is only on the colors red, yellow and blue, the input layer has three sub-layers in the depth direction T, not shown in FIG. 2, namely one sub-layer for each of the aforementioned colors. However, the image information is not limited to color.
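The three-dimensional input layer described above can be sketched as an array with one neuron per pixel and one sub-layer per color in the depth direction T; the image dimensions below are assumed for illustration only:

```python
import numpy as np

# Hypothetical image dimensions; the input layer then has one neuron
# per pixel in the height and width directions and, in the depth
# direction, one sub-layer per color.
image_height, image_width = 480, 640
colors = ("red", "yellow", "blue")

input_layer = np.zeros((image_height, image_width, len(colors)))
print(input_layer.shape)  # (480, 640, 3)
```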
- the layers 17 are convolutional layers.
- the layers 17 each have a plurality of filters, which leads to a greater extension of the layers 17 in the depth direction T. This means that the layers 17 have a larger number of sub-layers in the depth direction T than the input layer 15 and/or than the respectively preceding layer 17, 17'.
- the filters are selected in such a way that the number of neurons is reduced, at least in the width direction B. The number of neurons in the width direction B from layer 17, 17' to another layer 17, 17' in the direction of the decision layer 16 can continue to decrease.
- the neural network 7 is designed in such a way that the decision layer 16 is one-dimensional. In the case shown in Figure 2, the decision layer 16 thus extends in the vertical direction H.
- the decision layer 16 includes an unsupervised learning algorithm.
- the unsupervised learning algorithm can be a proximity search algorithm, in particular a nearest neighbor algorithm.
- the decision layer 16 is fully connected to the previous layer 17. This means that all neurons of the previous layer 17 are connected to all neurons of the decision layer 16.
- the convolutional neural network 7 has only one layer, namely the decision layer 16, which is completely connected to a preceding layer.
- the unsupervised learning algorithm is configured in such a way that its output 18 contains information as to whether at least part of the object, in particular item to be transported 1 , 1a in the monitoring area 2 corresponds to at least part of the specified item to be transported 1 .
- the unsupervised learning algorithm creates a bounding box that encloses the cargo or a portion of the cargo.
- Fig. 3 shows a schematic structure of another artificial neural network 19 of the filter module 6.
- the other neural network 19 has an input layer 20 and a decision layer 22 and a large number of further layers 21. Although one further layer 21 is shown in FIG. 3, the other neural network 19 can have more than one further layer 21. However, the other neural network 19 has fewer further layers 21 than the neural network 7 shown in FIG. 2. The flow of information from the input layer 20 to the decision layer 22 takes place in the indicated direction of the arrow.
- the image signal captured by the image capture device 10 is supplied to the input layer 20 .
- the input layer is three-dimensional and identical to the input layer 15 of the neural network 7 . Therefore, reference is made to the statements above.
- the decision layer 22 includes an unsupervised learning algorithm.
- the unsupervised learning algorithm can be a proximity search algorithm, in particular a nearest neighbor algorithm.
- the unsupervised learning algorithm can be configured in such a way that its output 23 contains information as to whether at least a part of the object is arranged in the monitored area 2 .
- the filter module does not differentiate whether the specific object is the specified item to be transported or not.
- the filter module 6 only determines that an item to be transported 1 or another item to be transported 1a or another object is located in the monitored area.
- FIG. 4 shows a flow chart of training of the neural network 7 shown in FIG. 2.
- the training has two training phases T1, T2, with the second training phase T2 following the first training phase T1.
- a neural network that is based on the neural network shown in FIG. 2 is trained.
- the neural network to be trained differs from the neural network shown in FIG. 2 in the decision layer 16.
- the decision layer 16 of the neural network to be trained does not have an unsupervised learning algorithm, but outputs the result determined in the preceding layers.
- the output can contain the information as to whether part of the specified item to be transported 1 is located in the monitored area 2 and/or a bounding box which encloses the part of the item to be transported 1 located in the monitored area.
- a large number of training images are supplied to the input layer.
- the training images are labeled.
- the images contain information as to whether an object is arranged and/or where the object is arranged.
- the training images can contain the specified item to be transported 1 .
- the training images can contain other objects in addition to the specified goods to be transported. It is also possible that the training images do not contain the specified transport item 1. As a result, the training images contain a large number of different objects.
- a large number of training images is supplied to the neural network 7 to be trained.
- the aim of the first training phase is basic training, after which the neural network recognizes a large number of objects which can also contain the at least one specified item to be transported.
- the layers 17, 17' of the neural network can already recognize precisely whether a part of an object is arranged in the image signal.
- the second training phase T2 is initiated.
- the second training phase is based on the neural network trained in the first training phase T1, which is symbolized by the dashed arrow in FIG. 4.
- the second training phase serves to train the neural network for the specific application. This means that after the end of the second training phase, the neural network can easily recognize whether the object in the monitored area is the specified item to be transported.
- the neural network to be trained in the second training phase T2 does not differ in structure and functioning from the neural network used in real operation, which is shown in FIG.
- the neural network to be trained in the second training phase T2 has a decision layer 16 with the unsupervised learning algorithm.
- the neural network to be trained in the second training phase T2 has the layers 17, 17' trained in the first training phase T1.
- In a first training step T21, the neural network 7 to be trained is supplied with images which contain a specified item to be transported and images which do not contain any specified item to be transported. In contrast to the first training phase T1, at least some of the images, or even all of the images, are not labeled.
- In a second training step T22, several parameters are determined in the decision layer 16 for each training data element of the training image supplied to the decision layer.
- the training data element represents part of the training image and contains at least one piece of image information, such as the light intensity.
- a normal distribution is then determined for the image information contained in the training data element. In particular, a variance and/or an expected value of the normal distribution is determined.
- the parameters are determined for each training data element of a training image.
- the process is repeated for each training image supplied to the neural network to be trained, in particular the training data elements of the training image.
- In a third training step T23, based on the ascertained parameter values, at least one parameter range is determined in which the training data elements that contain at least a part of the transported item 1 are located. Accordingly, it is also known at which parameter values the training data elements do not contain any part of the specified item to be transported. Use is made of the fact that the layers 17, 17' can already determine precisely whether an image signal contains a part of the specified transport item 1. Since the image information of a training data element, and thus the parameters, depend on whether it contains part of the specified item to be transported, the training data elements can be classified using the determined parameters as to whether they contain part of the specified item to be transported 1 or not.
- FIG. 5 shows a diagram with training data elements.
- the vertical axis shows a parameter, such as the variance of the image information contained in the training data element, and the horizontal axis shows another parameter, such as the expected value of the image information contained in the training data element.
- the training data elements shown in FIG. 5 each represent the same image section in each of the training images fed to the neural network 7. In the present case it was assumed that 20 training images are supplied to the neural network 7, with 10 training images containing a first specified transport item and 10 training images containing a second specified transport item.
- a first cluster area is defined by a first expectation area E1 and a first variance area V1.
- a second cluster area is defined by a second expectation area E2 and a second variance area V2.
- the cluster areas are determined by the determined parameter values of the first specified item to be transported and by the determined parameter values of the second specified item to be transported.
- the first cluster area is selected in such a way that the parameter values determined by the neural network for the first specified item to be transported lie within the cluster area.
- the second cluster area is selected in such a way that the parameter values determined by the neural network for the second specified item to be transported lie within the second cluster area C2.
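The assignment of a data element to one of the cluster areas, which requires both determined parameters to lie in the respective ranges (E1/V1 or E2/V2), can be sketched as follows; the numeric ranges and parameter values are hypothetical:

```python
def in_cluster(expected_value, variance, e_range, v_range):
    """A data element belongs to a cluster area only if both its
    expected value and its variance lie in the respective ranges."""
    return (e_range[0] <= expected_value <= e_range[1]
            and v_range[0] <= variance <= v_range[1])

# Hypothetical cluster areas for two specified transport items
E1, V1 = (100, 130), (0, 10)
E2, V2 = (200, 230), (5, 20)

print(in_cluster(121, 5.0, E1, V1))  # True  -> first transport item
print(in_cluster(121, 5.0, E2, V2))  # False
```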
- the other neural network 19 shown in FIG. 3 is trained in the same way as the neural network 7. In this respect, reference is made to the above explanations for FIGS. In contrast to the neural network 7, however, the other neural network 19 outputs information as to whether at least part of the object is arranged in the monitored area 2.
- FIG. 6 shows a flow chart of the method according to the invention.
- In a first method step S1, an image signal emanating from the monitored area 2 is captured by the image capturing device 10.
- the captured image signal is fed to the filter module 6 .
- the filter module 6 is used to determine whether at least a part of the transported goods 1 is arranged in the monitored area 2 .
- the filter module 6 has the other artificial neural network 19, which may be a convolutional neural network.
- the convolutional neural network has already been trained before use.
- the other artificial network 19 determines, on the basis of the image signal received from the image acquisition device 10, whether at least part of an object is arranged in the monitored area 2. In particular, it is determined whether the determined other parameter value is within the other pre-trained parameter range. If it is determined in the filter module 6 that no part of the object is arranged in the monitored area 2, the processing is terminated and the process sequence begins again. This is the case when the determined other parameter value is not within the other pre-trained parameter range. This is symbolized by the dashed arrow. Since the filter module 6 continuously receives image signals from the image capture device 10, a new image signal is examined in the filter module 6 as described above.
- the output signal is transmitted by the filter module 6 to the transported goods identification module 4 in a third method step S3 .
- in a fourth method step S4, the image signal coming from the filter module 6 is processed in the transported goods identification module 4 in the layers 17, 17'.
- several data elements of the image signal are transmitted to the decision layer 16.
- the parameters mentioned above, such as variance and expected value, are determined for each data element.
- each data element represents a part of the image signal.
- the decision layer 16 is thus supplied with the image signal in the form of data elements.
- using the determined parameter values, the unsupervised learning algorithm determines for each data element whether the data element lies in the first cluster area C1 or in the second cluster area C2. If so, the transported goods identification module 4 determines that the data element contains the predefined item to be transported assigned to the respective cluster area C1, C2, or at least a part of that item. This process is repeated for all data elements of the image signal.
- a bounding box is generated which encloses the item to be transported or the part of the item to be transported.
- the bounding box can be created from the analyzed data elements: after the analysis it is known which data elements contain a part of the transported goods.
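The per-element analysis and the subsequent bounding box can be sketched roughly as follows. Patch size, the cluster centres C1/C2 in (expected value, variance) space, and the distance threshold are illustrative assumptions, not values taken from the description.

```python
import numpy as np

# assumed cluster centres in (expected value, variance) space
CENTERS = np.array([[0.8, 0.01],   # cluster area C1
                    [0.2, 0.01]])  # cluster area C2

def patch_stats(patch: np.ndarray) -> np.ndarray:
    """Expected value and variance of one data element (image patch)."""
    return np.array([patch.mean(), patch.var()])

def assign_cluster(stats: np.ndarray, radius: float = 0.1):
    """Index of the nearest cluster area, or None if the parameter
    values lie in neither area (radius is an assumed threshold)."""
    d = np.linalg.norm(CENTERS - stats, axis=1)
    i = int(d.argmin())
    return i if d[i] < radius else None

def bounding_box(image: np.ndarray, patch: int = 4):
    """Scan the image patch-wise and enclose all data elements that
    fall into a cluster area in one box (x0, y0, x1, y1)."""
    hits = []
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            stats = patch_stats(image[y:y + patch, x:x + patch])
            if assign_cluster(stats) is not None:
                hits.append((x, y))
    if not hits:
        return None
    xs = [x for x, _ in hits]
    ys = [y for _, y in hits]
    return (min(xs), min(ys), max(xs) + patch, max(ys) + patch)
```

Each patch plays the role of one data element; the box is simply the axis-aligned hull of all patches assigned to a cluster area.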
- in a fifth method step S5, the output signal is transmitted from the transported goods identification module 4 to the triggering module 5.
- a recording time is determined in the triggering module 5 on the basis of the output signal from the transported goods identification module 4.
- a point in time at which the item to be transported 1 is completely arranged in the monitoring area 2 is determined in the triggering module 5.
- a recording signal is generated and transmitted to the image acquisition device 10 on the basis of the outputs of the transported goods identification module 4 and/or the triggering module 5.
- the recording signal can contain the information that the image acquisition device 10 is to record an image and the information as to when this is to take place.
- the image acquisition device 10 records the image of the monitored area 2 and stores it. An assessment of the quality of the transported goods, which is not presented in more detail here, can be carried out on the basis of the recorded image.
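The behaviour of the triggering module 5, i.e. deriving a recording time from the stream of bounding boxes, could look like the following sketch. The monitoring-area coordinates and the hypothetical box stream are assumptions; the description leaves the concrete trigger criterion open.

```python
def fully_inside(bbox, area):
    """True once the bounding box of the transported goods lies
    completely within the monitoring area (both as x0, y0, x1, y1)."""
    bx0, by0, bx1, by1 = bbox
    ax0, ay0, ax1, ay1 = area
    return bx0 >= ax0 and by0 >= ay0 and bx1 <= ax1 and by1 <= ay1

def triggering_module(bbox_stream, area=(0, 0, 100, 100)):
    """Emit a recording signal for the first frame in which the item
    to be transported is completely arranged in the monitoring area;
    None entries mean no goods were identified in that frame."""
    for frame, bbox in enumerate(bbox_stream):
        if bbox is not None and fully_inside(bbox, area):
            return {"record": True, "frame": frame}
    return {"record": False, "frame": None}
```

The recording signal here is just a flag plus a frame index; in the method above it would be transmitted to the image acquisition device 10.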
- V1 first range of variance
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2024541125A JP2024533867A (ja) | 2021-09-21 | 2022-09-20 | Method for determining whether a predetermined transported item is arranged in a monitoring area |
EP22790299.6A EP4405908A1 (de) | 2021-09-21 | 2022-09-20 | Method for determining whether a predefined item to be transported is arranged in a monitoring area |
CN202280063854.7A CN117980964A (zh) | 2021-09-21 | 2022-09-20 | Method for determining whether a predetermined transported item is arranged in a monitoring area |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102021124348.3A DE102021124348A1 (de) | 2021-09-21 | 2021-09-21 | Method for determining whether an item to be transported is arranged in a monitoring area |
DE102021124348.3 | 2021-09-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023046653A1 (de) | 2023-03-30 |
Family
ID=83898260
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/076020 WO2023046653A1 (de) | 2022-09-20 | Method for determining whether a predefined item to be transported is arranged in a monitoring area |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP4405908A1 (de) |
JP (1) | JP2024533867A (de) |
CN (1) | CN117980964A (de) |
DE (1) | DE102021124348A1 (de) |
WO (1) | WO2023046653A1 (de) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170206431A1 (en) * | 2016-01-20 | 2017-07-20 | Microsoft Technology Licensing, Llc | Object detection and classification in images |
CN108038843A (zh) * | 2017-11-29 | 2018-05-15 | 英特尔产品(成都)有限公司 | Method, apparatus and device for defect detection |
JP2021012107A (ja) * | 2019-07-05 | 2021-02-04 | 株式会社イシダ | Inspection device and learning device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3704626A1 (de) | 2017-11-02 | 2020-09-09 | Amp Robotics Corporation | Systems and methods for optical material characterization of waste materials using machine learning |
EP3816857A1 (de) | 2019-11-04 | 2021-05-05 | TOMRA Sorting GmbH | Neural network for bulk sorting |
2021
- 2021-09-21: DE application DE102021124348.3A (published as DE102021124348A1), status: active, pending
2022
- 2022-09-20: EP application EP22790299.6A (published as EP4405908A1), status: active, pending
- 2022-09-20: JP application JP2024541125A (published as JP2024533867A), status: active, pending
- 2022-09-20: CN application CN202280063854.7A (published as CN117980964A), status: active, pending
- 2022-09-20: PCT application PCT/EP2022/076020 (published as WO2023046653A1), status: active, application filing
Non-Patent Citations (1)
Title |
---|
EBRAHIMPOUR MOHAMMAD K ET AL: "WW-Nets: Dual Neural Networks for Object Detection", 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), IEEE, 19 July 2020 (2020-07-19), pages 1 - 8, XP033831735, DOI: 10.1109/IJCNN48605.2020.9207407 * |
Also Published As
Publication number | Publication date |
---|---|
DE102021124348A1 (de) | 2023-03-23 |
EP4405908A1 (de) | 2024-07-31 |
CN117980964A (zh) | 2024-05-03 |
JP2024533867A (ja) | 2024-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE68928895T2 (de) | Method and apparatus for universal adaptively learning image measurement and recognition | |
DE69637172T2 (de) | High-speed sorting apparatus for bulk food transport for optical inspection and sorting of high-volume foods | |
DE69024537T2 (de) | Infrared monitoring system | |
DE102017116017A1 (de) | Motor vehicle sensor device with a plurality of sensor units and a plurality of neural networks for generating a combined representation of an environment | |
DE102018128531A1 (de) | System and method for analyzing a three-dimensional environment represented by a point cloud through deep learning | |
WO2020049154A1 (de) | Method and device for classifying objects | |
EP2034461A2 (de) | Method for detecting and/or tracking moving objects in a surveillance scene with interferers, device and computer program | |
EP0523407A2 (de) | Method for classifying signals | |
DE102018109276A1 (de) | Image background subtraction for dynamic lighting scenarios | |
EP3767403A1 (de) | Machine-learning-supported shape and surface measurement for production monitoring | |
EP3468727B1 (de) | Sorting device and corresponding sorting method | |
DE102016100134A1 (de) | Method and device for examining an object using machine vision | |
DE69821225T2 (de) | Method for inspecting the surface of a moving material web with pre-classification of detected irregularities | |
EP3779790A1 (de) | Optical quality control | |
EP3482348A1 (de) | Method and device for categorizing a fracture surface of a component | |
EP2359308B1 (de) | Device for generating and/or processing an object signature, monitoring device, method and computer program | |
DE102020209080A1 (de) | Image processing system | |
DE102023107476A1 (de) | Ultrasonic defect detection and classification system using machine learning | |
WO2023046653A1 (de) | Method for determining whether a predefined item to be transported is arranged in a monitoring area | |
EP4007990A1 (de) | Method for analyzing image information with associated scalar values | |
DE19612465C2 (de) | Automatic optimization of object recognition systems | |
DE102022107144A1 (de) | Component inspection system and method at production speed | |
DE102021204040A1 (de) | Method, device and computer program for creating training data in the vehicle | |
EP0693200B1 (de) | Method for classifying objects | |
EP0469315B1 (de) | Method for visual inspection of two- or three-dimensional images | |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22790299; Country of ref document: EP; Kind code of ref document: A1 |
WD | Withdrawal of designations after international publication | Designated state(s): DE |
ENP | Entry into the national phase | Ref document number: 2024541125; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 18693699; Country of ref document: US |
WWE | Wipo information: entry into national phase | Ref document number: 202280063854.7; Country of ref document: CN |
WWE | Wipo information: entry into national phase | Ref document number: 2022790299; Country of ref document: EP |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 2022790299; Country of ref document: EP; Effective date: 20240422 |