WO2022129562A1 - Verfahren zur klassifizierung von bildern und verfahren zur optischen prüfung eines objekts - Google Patents
Verfahren zur Klassifizierung von Bildern und Verfahren zur optischen Prüfung eines Objekts (Method for classifying images and method for optically inspecting an object)
- Publication number
- WO2022129562A1 (PCT/EP2021/086565)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- good
- bad
- neural network
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 58
- 238000013528 artificial neural network Methods 0.000 claims abstract description 60
- 238000012549 training Methods 0.000 claims abstract description 46
- 230000003287 optical effect Effects 0.000 claims description 39
- 230000007547 defect Effects 0.000 claims description 33
- 238000007689 inspection Methods 0.000 claims description 19
- 230000002950 deficient Effects 0.000 claims description 15
- 230000006870 function Effects 0.000 claims description 13
- 238000009826 distribution Methods 0.000 claims description 7
- 238000013527 convolutional neural network Methods 0.000 claims description 6
- 238000005457 optimization Methods 0.000 claims description 6
- 230000006978 adaptation Effects 0.000 claims description 4
- 230000003190 augmentative effect Effects 0.000 claims description 4
- 238000000502 dialysis Methods 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 4
- 230000008569 process Effects 0.000 description 7
- 238000012360 testing method Methods 0.000 description 5
- 238000013459 approach Methods 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 4
- 238000011179 visual inspection Methods 0.000 description 4
- 230000004913 activation Effects 0.000 description 3
- 238000004140 cleaning Methods 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000004438 eyesight Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000003908 quality control method Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- the present invention relates to a method for classifying images and a method for optically inspecting an object using the method for classifying images.
- before delivery, manufactured objects are typically subjected to a final inspection as part of quality control, which may include a visual inspection or an optical final inspection.
- in a visual final inspection, depending on the condition of the object determined by the inspection, it is decided whether the inspected object is in a condition in which it can be delivered to the customer, or whether the product, the component or the object still needs to be improved before delivery.
- such a final optical inspection can be used to check, for example, whether the object, the finally assembled device or a component of the device is labeled correctly according to a specification, is configured according to customer-specific requirements, and whether the object has one or more optical defects.
- a surface or surfaces of the object can be checked to see whether they have dents, scratches or stains, which may have been insufficiently removed during a final cleaning of the object.
- the check can be carried out by human inspectors using defined evaluation criteria. In this process, however, minor defects can be overlooked by the human inspectors, which means that the quality of the delivered products or objects, especially the finally assembled devices, can fluctuate.
- manual inspection is, however, a task that exhausts the concentration and eyesight of the inspectors.
- optical inspection systems with a camera for recording an image of the object to be inspected and a freely available open source software product whose parameters can be individually adapted to the respective object to be inspected can also be used.
- the parameters for the resolution and enlargement of the image can be set in the camera and/or software settings, and the fixed points or features to be found by the software, which are characteristic of the object to be checked, can be set in the software settings.
- for such inspection tasks, however, optical test systems of this kind are not suitable.
- a method for classifying images, in which the images are classified according to good images and bad images, has the following steps according to one embodiment:
- each bad image of at least a subset of the plurality of bad images of the training data corresponds to a respective good image of at least a subset of the plurality of good images of the training data into which at least one image defect has been introduced, and the artificial neural network is trained using respective pairs of a respective good image from the subset of the plurality of good images and a respective bad image from the subset of the plurality of bad images, wherein a respective bad image corresponds to the good image belonging to the same pair with the at least one image error inserted.
- each bad image from at least one subset of the plurality of bad images in the training data corresponds to a respective good image from at least one subset of the plurality of good images in the training data, in which at least one image error is inserted.
- each bad image of the subset of the plurality of bad images of the training data is generated from a good image of the subset of the plurality of good images of the training data into which the at least one image error is inserted.
- any number of bad images can be provided for the training data in this way. This is particularly advantageous when only a small number of bad images is available, for example when the images to be classified are images of an object, such as a medical device or a component thereof, on which an optical final inspection is to be carried out before the object is delivered to a customer, since the proportion of optically flawless objects presented for optical final inspection is considerably larger than the proportion of optically defective objects. Furthermore, the possibility of providing any number of bad images for the training data is advantageous when potential optical anomalies of the objects cannot be covered by corresponding training data or the variety of possible errors is very large.
- the at least one image error is preferably selected in such a way that it corresponds to or is at least similar to an image error that is actually to be expected and which occurs in an image of the object as a result of an optical defect of an object to be checked.
- the plurality of bad images can also contain bad images that were not generated from good images.
- the plurality of bad images of the training data can thus also contain bad images that were captured directly by camera recordings rather than generated.
- the generated bad images, i.e. the subset of the plurality of bad images, can make up the majority of the plurality of bad images of the training data, preferably over 60%, more preferably over 70% or 80%.
- the method of classifying images may comprise the following steps: acquiring image data of an image, and classifying the image as a good image or a bad image, the classification being performed using an artificial neural network trained by supervised learning using training data from good images and bad images, each bad image of the training data corresponding to a respective good image of the training data into which at least one image defect is introduced, and the artificial neural network being trained using respective pairs of a respective good image and a respective bad image, each bad image corresponding to the good image belonging to the same pair with the at least one image error inserted.
- the result of the classification can be output by means of an output device, for example a display device.
- using attention heatmaps, the decisive areas can be highlighted by optically superimposing a colour-coded calculation result on the original image.
- the artificial neural network can be trained by a respective adaptation of parameters of the artificial neural network after a respective input of the image data from a respective pair of a respective good image and a respective bad image. This advantageously allows the artificial neural network to distinguish the typical flaws of a bad image from typical features of a good image, which is hardly possible when using a different approach to input training data.
- the at least one image error is a randomized pixel error, a line of pixel errors or an area error, and/or is generated by distorting, blurring or deforming an image section of the good image, by an affine image transformation of the good image, or by augmented spots or circular, elliptical or rectangular shapes, which can be completely or only partially filled with color or shades of gray.
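As a minimal sketch, the insertion of such synthetic image errors into a good image could look as follows; grayscale NumPy arrays are assumed, and the function names are illustrative, not taken from the patent:

```python
import numpy as np

def add_pixel_errors(good, n=20, rng=None):
    """Insert n randomized single-pixel errors into a copy of a good image."""
    rng = rng if rng is not None else np.random.default_rng(0)
    bad = good.copy()
    ys = rng.integers(0, good.shape[0], n)
    xs = rng.integers(0, good.shape[1], n)
    bad[ys, xs] = rng.integers(0, 256, n)   # random gray values at random positions
    return bad

def add_rect_spot(good, top, left, h, w, value=0):
    """Insert a filled rectangular gray-value spot (a simple 'area error')."""
    bad = good.copy()
    bad[top:top + h, left:left + w] = value
    return bad
```

Analogous helpers for lines of pixel errors, blurring, or affine transformations would follow the same pattern: copy the good image, perturb it, and keep the pair for training.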
- the artificial neural network is preferably designed as a convolutional neural network, which has an input layer, an output layer and a number of hidden layers arranged in between, wherein during training of the artificial neural network a regularization in all hidden layers is combined with a loss function.
- an output from the last layer of the neural network can be converted into a probability distribution by a softmax function, and the classification can be carried out on the basis of the probability distribution.
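The conversion of the raw output of the last layer into a probability distribution by a softmax function can be sketched generically (this is a standard illustration, not code from the patent):

```python
import numpy as np

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    z = logits - np.max(logits)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Two output neurons, e.g. "good image" vs. "bad image"
probs = softmax(np.array([2.0, 0.5]))
label = "good" if probs[0] > probs[1] else "bad"
```

The classification then simply picks the class with the higher probability.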
- the artificial neural network can be trained using a self-adaptive optimization method, preferably a rectified adam method.
- a method, in particular a computer-implemented method, for the optical inspection of an object has the following steps:
- capturing image data of an image includes capturing image data of the at least one image of the object
- the method for optically inspecting an object can be used, for example, as part of an optical final inspection, in order to inspect an object produced by a manufacturing process, such as a medical device, for optical defects on a surface of the object prior to delivery to a customer, and to deliver the object to the customer only if the method determines that the object is free of defects, and otherwise to arrange for the object to be cleaned or touched up.
- the method further includes the following steps:
- the method also has a step of displaying, if it is determined that the object is faulty, by means of an output device designed as a display device, the at least one image of the object and a mask which is generated based on an output of the artificial neural network, the mask overlaying the at least one image of the object and indicating a defect of the object and its position as output by the artificial neural network.
- an inspector can use the information displayed by the mask to visually inspect the object in the next step and decide whether the object or the manufactured machine can be shipped, whether the object has to go through the cleaning process again, or whether it may be put on hold for further improvements.
- capturing image data of at least one image of the object includes capturing image data of a plurality of images of the object at a plurality of different angles relative to the object, wherein
- the object is determined to be error-free if each of the plurality of images of the object is classified as a good image, or
- the acquisition of image data of a plurality of images of the object at a plurality of different angles relative to the object can comprise the following steps:
- the image capturing device is designed to be movable instead of the rotatable platform.
- the image capturing device can be moved around the object via a rail system, for example.
- the image acquisition device is moved around the object by a drive device.
- the artificial neural network is preferably trained using training data from a plurality of good images and a plurality of bad images, the good images each being images of at least one section of a medical device, preferably a dialysis machine.
- the at least one image defect corresponds to an optical defect of a surface of the object, preferably a scratch or a dent in the surface of the object or a spot on the surface of the object, or at least similar to it.
- the images can also be divided into smaller sections, and the sections can be processed in parallel.
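Splitting a high-resolution image into smaller sections for parallel processing could be sketched as follows; NumPy arrays are assumed, and the tile size of 512 x 512 is an illustrative choice, not specified in the patent:

```python
import numpy as np

def split_into_tiles(image, tile_h, tile_w):
    """Split an image into non-overlapping tiles; trailing partial tiles are kept."""
    tiles = []
    for top in range(0, image.shape[0], tile_h):
        for left in range(0, image.shape[1], tile_w):
            tiles.append(image[top:top + tile_h, left:left + tile_w])
    return tiles

# Image resolution of 5496 x 3672 pixels as mentioned in the description
img = np.zeros((3672, 5496), dtype=np.uint8)
tiles = split_into_tiles(img, 512, 512)
```

Each tile could then be passed to a separate worker, with the node weights of the network adapted per object section as the description suggests.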
- the same architecture of the neural network can be used here, with the weighting of the nodes being adapted depending on the examined object section.
- FIG. 1 schematically shows a device for classifying images and, if necessary, for optically inspecting an object according to one embodiment
- Figures 2A-C schematically show a good image of an object, a bad image of another object and a difference image generated from the good image and the bad image
- Figures 3A-C schematically show a good image of an object, a bad image of the object and a difference image generated from the good image and the bad image.
- FIG. 4 shows a flowchart to illustrate a method for classifying images according to an embodiment
- FIG. 5 shows a flow chart to illustrate a method for the optical inspection of an object.
- the device 100 for classifying images and, if necessary, for the optical inspection of an object 10 has a chamber 106 or an inspection space 106 which is partially or completely shielded from extraneous light.
- a rotatable platform 101 is provided in test room 106, on which an object 10 to be tested, for example a medical device such as a dialysis machine, is arranged for testing.
- an image capturing device 102 can also be moved or rotated around the object 10 .
- the image capture device 102, for example one or more single-image cameras such as four area-scan cameras, or a video camera, is provided in the chamber 106 or the test room 106 and is set up to capture images of the object 10, in one embodiment high-resolution images, for example with a size of 5496 x 3672 pixels.
- an illumination device 108, for example an LED panel or several LED panels, is also provided within the chamber 106 or the test space 106 and is set up to illuminate the object 10.
- a drive device 107 for rotating the rotatable platform 101 and the image acquisition device 102 are connected to a control device 103, which is set up to control the inspection process by actuating the drive device 107 to rotate the rotatable platform 101 and by actuating the image acquisition device 102 to capture a series of images of the object 10 placed on the platform 101 during the rotation of the platform 101.
- this configuration makes it possible to capture a large number of images of the object 10 to be inspected from different perspectives using the image acquisition device 102 during the inspection process, and thus preferably to capture images of the entire exposed surface of the object 10, so that the entire exposed surface can be subjected to an optical inspection for visual defects.
- the control device 103 is also connected to a memory device 104 and a display device 105 .
- the images captured by the image capturing device 102 or the corresponding image data can be stored in the memory device 104 .
- a program for classifying the images of the object 10 captured by the image capturing device 102 is stored in the memory device 104 and can be executed by the control device 103 .
- the control device 103 and/or the storage device 104 can be arranged locally or remotely or can be distributed.
- a cloud-based architecture can be used.
- the program is set up to classify the images captured by the image capturing device 102 as good images GB or bad images SB.
- the program has a software component designed as an artificial neural network.
- the artificial neural network is trained by supervised learning using training data having a plurality of good images GB and a plurality of bad images SB.
- the plurality of good images GB are formed by images actually captured from different angles of surfaces of an object 10 that does not have optical defects such as dents, scratches or stains which may have been insufficiently removed during final cleaning, and/or that is labeled correctly according to a specification and/or configured according to customer-specific wishes.
- a respective bad image SB of at least one subset of the plurality of bad images SB of the training data corresponds to a respective good image GB of at least one subset of the plurality of good images GB of the training data, in which at least one image error 11 was artificially inserted.
- the at least one image error 11 is preferably selected in such a way that it corresponds to or is at least similar to an image error or optical error that is actually to be expected and which occurs in the image of the object 10 as a result of an optical defect in the object 10 .
- the artificial neural network is trained in particular using respective pairs formed from a respective good image GB from the subset of the plurality of good images GB and a respective bad image SB from the subset of bad images SB, where a respective bad image SB corresponds to the good image GB belonging to the same pair with the at least one image error 11 inserted.
- the at least one image error 11 can be generated from good images GB, or from the corresponding image data, for example by randomized pixel errors, lines of pixel errors or area errors, and/or by distorting, blurring or deforming at least one image section of the good image GB, and/or by affine image transformations, or by augmented spots or circular, elliptical or rectangular shapes, which are preferably at least partially filled with color or shades of gray. Any number of bad images SB can be generated in this way, as a result of which a large number of optical errors can be simulated.
- the artificial neural network is trained by a respective adaptation of parameters of the artificial neural network after a respective input of the image data from a respective pair of a respective good image GB and a respective bad image SB.
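The pairwise presentation of a good image and its derived bad image, with a parameter adaptation after each input, can be illustrated with a deliberately simplified stand-in model: a single-layer logistic classifier in NumPy rather than the patent's convolutional network. All names, sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: "good" images are nearly flat gray; each "bad" image is the same
# good image with a bright spot inserted (the artificial image error).
goods = [np.full((8, 8), 0.5) + rng.normal(0, 0.01, (8, 8)) for _ in range(16)]
bads = []
for g in goods:
    b = g.copy()
    b[2:4, 2:4] = 1.0            # inserted image error
    bads.append(b)

w = np.zeros(64)
bias = 0.0
lr = 0.1

def predict(img):
    """Probability that an image is a bad image."""
    z = img.ravel() @ w + bias
    return 1.0 / (1.0 + np.exp(-z))

# Parameters are adapted after each input from a good/bad pair.
for _ in range(300):
    for g, b in zip(goods, bads):
        for img, y in ((g, 0.0), (b, 1.0)):
            p = predict(img)
            grad = p - y                     # cross-entropy gradient w.r.t. z
            w -= lr * grad * img.ravel()
            bias -= lr * grad
```

Because each bad image differs from its paired good image only at the inserted defect, the weights concentrate on exactly those pixels, which mirrors the patent's argument that pairwise training isolates the defect features.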
- the artificial neural network can be designed, for example, as a shallow convolutional neural network that has an input layer, an output layer and several hidden layers provided in between, preferably a total of at least three, preferably six, hidden layers, and two hidden classification layers for preprocessing the output.
- the training algorithm used to train the artificial neural network, and in particular the loss function used therein, is adapted to the particular choice of training data, namely the pairs of good images GB from the subset of the plurality of good images GB and bad images SB from the subset of the plurality of bad images SB.
- this problem is solved by a combination of regularization in all network layers and the loss function, a final, normalizing softmax layer and a modern self-adaptive optimization method, for example a “rectified adam” method.
- the filter depth of the convolutional layers is reduced overall, starting with a filter depth of 50 and the subsequent depths being 40, 30, 20, 20, 10, for example.
- the L2 norm, for example, can be used as a regularization function, acting as a penalty term on the activation signals.
- pooling can be performed, for example, by a MaxPooling layer with a 2x2 kernel.
- the subsequent dense layers can be activated by the sigmoid function.
- the Softmax activation function is used for the output layer itself.
- the loss function is implemented by a so-called categorical cross-entropy in order to finally make the assignment to a good or bad image via the probability distribution.
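The layer stack described above (six convolutional layers with filter depths 50, 40, 30, 20, 20, 10, each followed by 2x2 max pooling) can be sanity-checked with a small shape-and-parameter calculator; the 3x3 kernel size, 'same' padding and stride 1 are assumptions not stated in the text:

```python
# Assumed: 3x3 kernels, 'same' padding, stride 1, 2x2 max pooling after each conv.
def conv_pool_shapes(h, w, in_ch, depths):
    """Trace (height, width, channels) and weight counts through the conv stack."""
    shapes, params = [], []
    for out_ch in depths:
        params.append(3 * 3 * in_ch * out_ch + out_ch)  # kernel weights + biases
        h, w = h // 2, w // 2                            # effect of 2x2 max pooling
        in_ch = out_ch
        shapes.append((h, w, out_ch))
    return shapes, params

# 5496 x 3672 pixel input with 3 color channels, as in the description
shapes, params = conv_pool_shapes(3672, 5496, 3, [50, 40, 30, 20, 20, 10])
```

Under these assumptions the spatial resolution shrinks from 3672 x 5496 to 57 x 85 after six stages, which illustrates why so few hidden layers suffice for small, local defects on high-resolution images.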
- the classification of the training data is preferably verified using error feedback, in which the traced neuron activity, from which the external (human) teacher can infer the cause of the "bad image" classification made by the artificial neural network, can be visualized in the corresponding image on the display device 105.
- the small number of hidden layers is sufficient to detect small, local optical errors, which enables the pixel-precise processing of high-resolution image material with cheap resources and in a few seconds.
- Fig. 2A shows a good image GB of an object 10, or a surface of the object 10, actually captured by the image capture device 102, and Fig. 2B shows a bad image SB2 of another object 10 with a minimal actual optical error 12, which is caused, for example, by a dent, a scratch or a stain, the good image GB and the bad image SB2 being recorded with minimally different positioning of the object 10 and the other object 10.
- Fig. 2C schematically shows an image DB2 which was generated by forming the difference between the intensities of the good image GB and the bad image SB2.
- the representation of the object 10 in dashed form illustrates that at least parts of the features of the object 10, optionally with a changed color, can be taken from the difference image DB2.
- the image features of the minimal actual optical error 12 contained in the bad image SB2 are almost completely lost in the difference image DB2 when the artificial neural network weights the features during feature extraction, since there are too many differences between the two different image recordings. Accordingly, in such a case, the features relevant to an optical error cannot be significantly trained and weighted.
- Fig. 3A shows a good image GB of an object 10 actually captured by the image capture device 102, which has no optical error
- Fig. 3B shows a bad image SB generated based on the good image GB, which was created by inserting a minimal image error 11 into the good image GB
- Fig. 3C shows an image which was generated by forming the difference between the intensities of the good image GB and the bad image SB.
- the image error 11 is clearly recognizable in the image DB generated by subtraction, so that the features relevant to an optical error can be significantly trained and weighted as a result.
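Forming the difference between the intensities of a good image and the bad image generated from it, as in Fig. 3C, can be sketched as follows (NumPy, illustrative values; a signed dtype is used to avoid unsigned wraparound):

```python
import numpy as np

good = np.full((16, 16), 200, dtype=np.int16)  # flat good image
bad = good.copy()
bad[5:8, 5:8] = 40                             # inserted image error 11

# Everything except the inserted error cancels out exactly, because the bad
# image was generated from this very good image (unlike Fig. 2C, where two
# different recordings leave residual differences everywhere).
diff = np.abs(good - bad)
ys, xs = np.nonzero(diff)
```

The nonzero region of `diff` localizes the inserted defect pixel-precisely, which is what makes the defect features trainable.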
- the control device 103 is set up, after image data of an image has been acquired by the image acquisition device 102, to classify the acquired image as a good image or a bad image by means of the program stored in the memory device 104 for classifying the images acquired by the image acquisition device 102, and to output the result of the classification on the display device 105.
- a program for the optical inspection of an object is also stored in the memory device 104 , which program uses the program for classifying the images of the object 10 captured by the image capture device 102 .
- the control device 103 is set up, using the program for inspecting the object stored in the memory device 104, to cause the image capture device 102 to capture image data of at least one image of the object 10, to classify the at least one image of the object 10 as a good image or a bad image using the program for classifying the images captured by the image capture device 102, and to determine that the object 10 is free of defects if the at least one image of the object 10 is classified as a good image, or to determine that the object is defective if the at least one image of the object 10 is classified as a bad image.
- the control device 103 is also set up to use the program stored in the memory device 104 for the visual inspection of an object to cause the display device 105 to output information about the fact that the object 10 is free of defects if it is determined that the object 10 is free of defects, or to output information that the object is defective when it is determined that the object is defective.
- control device 103 is set up to use the program stored in the memory device 104 for the visual inspection of an object to cause the display device 105 to display the at least one image of the object 10 and a mask, which is generated based on an output of the artificial neural network , wherein the mask overlays the at least one image of the object and indicates an error of the object 10 and its position output by the artificial neural network.
- FIG. 4 shows a flowchart to illustrate a method according to an embodiment for classifying images into good images and bad images.
- Image data of an image are captured in step S40, wherein the image data can be captured, for example, by means of the image capturing device 102 and can be image data of an image of the object 10.
- in step S41, the image is classified as a good image GB or a bad image SB using an artificial neural network as described above, which is trained by supervised learning using training data from a plurality of good images GB and a plurality of bad images SB, wherein each bad image SB of at least one subset of the plurality of bad images SB of the training data corresponds to a respective good image GB of at least one subset of the plurality of good images GB of the training data into which at least one image error 11 is inserted.
- the artificial neural network is trained using respective pairs of a respective good image GB from the subset of the plurality of good images GB and a respective bad image SB from the subset of the plurality of bad images SB, with a respective bad image SB corresponding to the good image GB belonging to the same pair with the at least one image defect 11 inserted.
- the artificial neural network can be trained by a respective adjustment of parameters of the artificial neural network after a respective input of the image data from a respective pair of a respective good image GB and a respective bad image SB.
- the at least one image error 11 can be a randomized pixel error, a line of pixel errors or an area error, and/or can be created by distorting, blurring or deforming an image section of the good image GB, by an affine image transformation of the good image GB, or by augmented spots or circular, elliptical or rectangular shapes, which are preferably at least partially filled with color or shades of gray.
- the artificial neural network can be embodied as a convolutional neural network, which has an input layer, an output layer and a number of hidden layers arranged in between, a combination of regularization in all hidden layers with a loss function taking place when the artificial neural network is trained.
- the artificial neural network can be set up to convert an output of the last layer of the artificial neural network into a probability distribution using a softmax function, with the classification taking place on the basis of the probability distribution.
- the artificial neural network can be trained using a self-adaptive optimization method, preferably a rectified adam method.
- FIG. 5 shows a flowchart to illustrate a method according to an embodiment for the optical inspection of an object.
- in step S50, image data of at least one image of the object 10 is captured, for example using the image capture device 102.
- in step S51, the at least one image of the object 10 is then classified as a good image or as a bad image using the method described with reference to Fig. 4, wherein the capture of image data of an image includes the capture of image data of the at least one image of the object 10.
- in step S52, it is determined that the object 10 is flawless if the at least one image of the object 10 is classified as a good image in step S51, or it is determined that the object 10 is flawed if the at least one image of the object 10 is classified as a bad image in step S51.
- in step S53, information that the object 10 is free of defects is output, for example by means of the display device 105, if it is determined in step S52 that the object 10 is free of defects, or information that the object 10 is defective is output if it is determined in step S52 that the object 10 is defective.
- step S54 if it is determined in step S52 that the object 10 is defective, the display device 105 displays the at least one image of the object 10 and a mask, which is generated based on an output of the artificial neural network, the mask which superimposes at least one image of the object 10 and displays an error of the object 10 and its position output from the artificial neural network.
- capturing image data of at least one image of the object 10 may include capturing image data of a plurality of images of the object 10 at a plurality of different angles relative to the object 10, wherein in step S52 the object 10 is determined to be defect-free if each of the plurality of images of the object 10 is classified as a good image in step S51, or the object 10 is determined to be defective if at least one of the plurality of images of the object 10 is classified as a bad image in step S51.
- capturing image data of the plurality of images of the object 10 at the plurality of different angles relative to the object 10 can include arranging the object 10 on the rotatable platform 101, controlling the drive device 107 of the rotatable platform 101 by means of the control device 103 in order to rotate the rotatable platform 101, and capturing, by means of the image capture device 102, the image data of the plurality of images of the object 10 at the plurality of different angles relative to the object 10 while the rotatable platform 101 is rotated by the drive device 107.
- alternatively, capturing image data of a plurality of images of the object 10 at a plurality of different angles relative to the object 10 can include arranging the object 10 on a platform, controlling a drive device of an image capture device in order to move the image capture device around the object 10, and capturing, by means of the image capture device, the image data of the plurality of images of the object 10 at the plurality of different angles relative to the object 10 while the image capture device is moved around the object 10 by the drive device.
- the artificial neural network can be trained using training data from good images GB and bad images SB, the good images GB being images of at least one section of a medical device, preferably a dialysis machine.
- the at least one image defect 11 can correspond to an optical defect of a surface of the object 10, preferably a scratch or a dent in the surface of the object 10 or a stain on the surface of the object 10.
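The decision rule of steps S51 and S52, including the multi-view variant in which every captured image must be a good image, can be sketched as follows; the function name is hypothetical, and the classifier is passed in as a stand-in for the trained network:

```python
from typing import Callable, Iterable

def inspect_object(images: Iterable, classify: Callable[[object], str]) -> str:
    """Steps S51/S52 as a decision rule: the object is defect-free only if
    every captured view is classified as a good image; a single bad image
    makes the whole object defective."""
    for image in images:
        if classify(image) != "good":
            return "defective"
    return "defect-free"
```

With views captured at many angles (e.g. while the platform 101 rotates), this rule ensures that a defect visible from only one angle still marks the object as defective.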
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Image Processing (AREA)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21840880.5A EP4264541A1 (de) | 2020-12-18 | 2021-12-17 | Verfahren zur klassifizierung von bildern und verfahren zur optischen prüfung eines objekts |
JP2023535708A JP2023554337A (ja) | 2020-12-18 | 2021-12-17 | 画像分類方法及び物体の光学検査方法 |
MX2023007166A MX2023007166A (es) | 2020-12-18 | 2021-12-17 | Metodo para clasificar imagenes y metodo para examinar opticamente un objeto. |
CN202180084167.9A CN116601665A (zh) | 2020-12-18 | 2021-12-17 | 用于对图像进行分类的方法和用于对物体进行光学检查的方法 |
US18/039,493 US20240096059A1 (en) | 2020-12-18 | 2021-12-17 | Method for classifying images and method for optically examining an object |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102020216289.1 | 2020-12-18 | ||
DE102020216289.1A DE102020216289A1 (de) | 2020-12-18 | 2020-12-18 | Verfahren zur klassifizierung von bildern und verfahren zur optischen prüfung eines objekts |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022129562A1 true WO2022129562A1 (de) | 2022-06-23 |
Family
ID=79425430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2021/086565 WO2022129562A1 (de) | 2020-12-18 | 2021-12-17 | Verfahren zur klassifizierung von bildern und verfahren zur optischen prüfung eines objekts |
Country Status (7)
Country | Link |
---|---|
US (1) | US20240096059A1 (de) |
EP (1) | EP4264541A1 (de) |
JP (1) | JP2023554337A (de) |
CN (1) | CN116601665A (de) |
DE (1) | DE102020216289A1 (de) |
MX (1) | MX2023007166A (de) |
WO (1) | WO2022129562A1 (de) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3399466A1 (de) * | 2017-05-02 | 2018-11-07 | General Electric Company | Neural network training image generation system |
WO2019183153A1 (en) * | 2018-03-21 | 2019-09-26 | Kla-Tencor Corporation | Training a machine learning model with synthetic images |
EP3660491A1 (de) * | 2017-07-26 | 2020-06-03 | The Yokohama Rubber Co., Ltd. | Defect inspection method and defect inspection device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI787296B (zh) | 2018-06-29 | 2022-12-21 | 由田新技股份有限公司 | 光學檢測方法、光學檢測裝置及光學檢測系統 |
EP3857508A1 (de) | 2018-11-16 | 2021-08-04 | Align Technology, Inc. | Maschinenbasierte dreidimensionale (3d)-objektfehlerdetektion |
- 2020
- 2020-12-18 DE DE102020216289.1A patent/DE102020216289A1/de active Pending
- 2021
- 2021-12-17 MX MX2023007166A patent/MX2023007166A/es unknown
- 2021-12-17 US US18/039,493 patent/US20240096059A1/en active Pending
- 2021-12-17 WO PCT/EP2021/086565 patent/WO2022129562A1/de active Application Filing
- 2021-12-17 EP EP21840880.5A patent/EP4264541A1/de active Pending
- 2021-12-17 CN CN202180084167.9A patent/CN116601665A/zh active Pending
- 2021-12-17 JP JP2023535708A patent/JP2023554337A/ja active Pending
Also Published As
Publication number | Publication date |
---|---|
DE102020216289A1 (de) | 2022-06-23 |
US20240096059A1 (en) | 2024-03-21 |
CN116601665A (zh) | 2023-08-15 |
MX2023007166A (es) | 2023-06-29 |
JP2023554337A (ja) | 2023-12-27 |
EP4264541A1 (de) | 2023-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102018128158A1 (de) | Appearance inspection device | |
DE10011200A1 (de) | Method for evaluating structural defects on a wafer surface | |
EP3807838A2 (de) | Material testing of optical specimens | |
DE102013001808A1 (de) | Method for the non-destructive testing of the volume of a test object, and testing device configured to carry out such a method | |
DE112019005951T5 (de) | Centralized analysis of multiple visual inspection devices | |
DE102010032241A1 (de) | Method and device for detecting surface defects | |
DE102021100496A1 (de) | Intelligent production line monitoring system and monitoring method | |
DE10041354A1 (de) | Method for checking for foreign particles or defects, and corresponding device | |
DE102019120696A1 (de) | Device and method for tire inspection | |
WO2022135787A1 (de) | Method and device for optical quality control during the production of printed circuit boards | |
WO2021115734A1 (de) | Method and assistance system for checking patterns for defectiveness | |
WO2022129562A1 (de) | Method for classifying images and method for optically examining an object | |
DE102022130393A1 (de) | Method and device for automatic detection of defects | |
DE102018133092B3 (de) | Computer-implemented method for analyzing measurement data from a measurement of an object | |
DE102021211610A1 (de) | Method for training a neural learning model for detecting production defects | |
WO2018068775A1 (de) | Method and system for determining the defect area of at least one flaw on at least one functional surface of a component or test specimen | |
DE112019004583T5 (de) | Streamlining an automated visual inspection process | |
DE112020004812T5 (de) | Movement in images used in a visual inspection process | |
AT507939B1 (de) | Method for the automated detection of a defect on a surface of a molded part | |
DE19527446A1 (de) | Method and device for optical surface inspection of workpieces | |
DE19834718A1 (de) | Digital image processing for a quality control system | |
DE102018207933A1 (de) | Method and inspection system for inspecting a manufactured transmission assembly | |
DE102022204406B4 (de) | Method for classifying and/or regressing input signals with the aid of a Gram matrix variant | |
DE102020120257A1 (de) | Method and analysis device for generating training or correction data for an artificial neural network | |
DE10013137A1 (de) | Method for image-controlled inspection and processing of products |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21840880 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 18039493 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2023535708 Country of ref document: JP |
WWE | Wipo information: entry into national phase |
Ref document number: 202180084167.9 Country of ref document: CN |
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2023/007166 Country of ref document: MX |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2021840880 Country of ref document: EP Effective date: 20230718 |