EP4194107A1 - Apparatus and method for classifying material objects - Google Patents

Apparatus and method for classifying material objects

Info

Publication number
EP4194107A1
EP4194107A1
Authority
EP
European Patent Office
Prior art keywords
deep learning
material object
training
classification
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22152672.6A
Other languages
English (en)
French (fr)
Inventor
Steffen RÜGER
Jann GOSCHENHOFER
Alexander Ennen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP4194107A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • B07C5/3416 Sorting according to other particular properties according to radiation transmissivity, e.g. for light, x-rays, particle radiation

Definitions

  • the present application concerns the field of classifying material objects, more specifically deep learning-based material object classification for use in, for instance, deep learning-based sorting on dual energy X-ray transmission data.
  • Embodiments relate to an apparatus and a method for classifying material objects.
  • Fig. 17 shows exemplarily a flow chart showing the sequence of an X-ray sorting system and the individual processes through which a material stream passes until it is sorted.
  • BMD is a method for the characterization of material samples based on two X-ray spectra, e.g., a dual energy X-ray image.
  • the inventors of the present application realized that one problem encountered when trying to classify material objects stems from the fact that present methods need to be parameterized by human experts. According to the first aspect of the present application, this difficulty is overcome by using a deep learning model. Concurrently, the deep learning model is trained in a supervised manner and subsequently yields strong predictive performance, e.g., in terms of quality and robustness of the classification of the material objects. Even further, the usage of the deep learning model increases the flexibility in classifying material objects and allows an efficient adaptation in response to new and/or additional material objects to be classified.
  • an apparatus for classifying material objects comprises a deep learning model.
  • the apparatus is configured to, in an initialization phase, subject the deep learning model to supervised learning (in the following also understood as supervised training or supervised model training or training in a supervised manner).
  • the initialization phase might be at an initialization of the apparatus, i.e. an initialization phase of the apparatus.
  • the supervised model training is based on training data obtained from, for each material object of a training set of material objects, a respective pair of sensor data and label information.
  • the training data is obtained from a plurality of pairs of sensor data and label information, wherein each pair is associated with one of the material objects of the training set of material objects (e.g., a one-to-one correspondence, i.e. a bijective correspondence).
  • the respective sensor data is obtained by a measurement of the respective material object and the respective label information is associating the respective material object with a target classification.
  • the apparatus is configured to, using the deep learning model, classify a predetermined material object based on sensor data obtained by a measurement of the predetermined material object.
  • the material objects are aluminum pieces.
  • the material objects are aluminum trash pieces and the classification discriminates between high-grade pure aluminum and a low-grade residual.
  • the apparatus might be configured to classify a predetermined material object as a high-grade pure aluminum or a low-grade residual.
  • the aluminum trash pieces comprise aluminum flakes and flakes of one or more other materials, wherein the aluminum flakes, for example, should be classified as high-grade pure aluminum and the flakes of the one or more other materials, for example, should be classified as low-grade residual.
  • the usage of the deep learning model for the classification improves a sorting quality in a traceable way.
  • the apparatus comprises a measurement sensor configured to perform the measurement of the training set of material objects and the predetermined material object.
  • one and the same measurement sensor is used for performing a measurement of each material object of the training set of material objects and for performing a measurement of the predetermined material object.
  • the measurements of the material objects of the training set of material objects may be used by the apparatus for the supervised model training of the deep learning model and the measurement of the predetermined material object may be used by the apparatus for classifying the predetermined material object.
  • the inventors found that it is advantageous that the deep learning model is trained based on measurement data received from the same measurement sensor as the measurement data of the predetermined material object, which has to be classified by the apparatus. This feature increases the performance of the deep learning model and the quality of the classification.
  • the measurement sensor comprises a dual energy X-ray sensor and, optionally, a conveyor belt for passing the training set of material objects and the predetermined material object by the dual energy X-ray sensor so as to be scanned by the dual energy X-ray sensor.
  • the conveyor belt is configured to transport the material objects of the training set of material objects and the predetermined material object.
  • the conveyor belt leads the material objects of the training set of material objects and the predetermined material object past the dual energy X-ray sensor.
  • Multiple material objects may be distributed on the conveyor belt and the dual energy X-ray sensor might be configured to perform a measurement of each of the multiple material objects.
  • the usage of the dual energy X-ray sensor is advantageous since it provides meaningful information on the materials of the material objects and can therefore increase the quality of the classification.
  • the conveyor belt is advantageous in terms of efficiency, since it makes it possible to classify multiple material objects in a short time, enables the apparatus to be integrated/incorporated into a processing line with additional systems, and serves for a better handling of the material flow.
  • the sensor data comprises dual energy X-ray images.
  • the apparatus comprises a sorting stage configured to subject the predetermined material object to sorting according to the classification.
  • the apparatus can achieve a low error rate at the sorting, since the apparatus achieves a high accuracy at the classification by using the deep learning model.
  • the label information for the respective material object further comprises a confidence value for the indication of the target classification.
  • the confidence value may indicate a probability for the respective target classification being correct.
  • the target classification may represent a mean of two or more preliminary target classifications associated with the respective material object.
  • the confidence value may indicate whether the two or more preliminary target classifications differ among each other or whether they are consistent.
  • the apparatus can be configured so that the supervised training of the deep learning model is less sensitive to sensor data of material objects whose label information comprises lower confidence values compared to sensor data of material objects whose label information comprises higher confidence values. This is based on the finding that labeling noise/annotation noise can be reduced by defining the sensitivity of the apparatus for the sensor data.
  • a confidence value, for example, may be regarded as a high confidence value if it is equal to or greater than a predetermined threshold.
  • the predetermined threshold may indicate a confidence of 60 %, 66 %, 75 %, 80 %, 85 % or 90 %.
  • the apparatus is configured so that the label information comprises, for the respective material object, at least two labels each indicating a target class for the respective material object.
  • the at least two labels are each indicating additionally a confidence value for the indication of the target class.
  • the apparatus is configured so that the supervised training of the deep learning model is less sensitive to sensor data of material objects whose label information comprises labels indicative of different target classes and/or lower confidence values compared to sensor data of material objects whose label information comprises labels indicative of the same target classes and/or higher confidence values. This is based on the finding that labeling noise/annotation noise can be reduced by defining the sensitivity of the apparatus for the sensor data.
  • the model performance can be improved by defining the sensitivity of the apparatus, so that it is only sensitive to sensor data of material objects whose label information comprises labels indicative of the same target classes and/or higher confidence values compared to sensor data of material objects whose label information comprises labels indicative of different target classes and/or lower confidence values.
  • the usage of two or more labels and/or of the confidence values results in more stable supervised training of the deep learning model, since the training data has a higher quality.
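  • purely as an illustration of how such confidence-dependent sensitivity could be realized (the patent does not prescribe an implementation), the per-sample training loss may be scaled by the label confidence; the following PyTorch sketch assumes this weighting scheme, and all names are made up:

```python
import torch.nn.functional as F

# Hypothetical sketch: down-weight low-confidence labels in the training loss.
def confidence_weighted_loss(logits, targets, confidences):
    """Cross-entropy where each sample is scaled by its label confidence (e.g., Z/N)."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")  # shape: [batch]
    return (confidences * per_sample).mean()

# Assumed usage inside a training step (model, optimizer, batch tensors are placeholders):
# logits = model(x)                                 # x: [batch, 2, 224, 224] dual-energy images
# loss = confidence_weighted_loss(logits, y, conf)  # conf in [0, 1], e.g., 2/3 for a 2-of-3 vote
# loss.backward(); optimizer.step()
```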
  • the apparatus further comprises a user interface.
  • the apparatus may be configured to obtain, for the respective material object, the at least two labels via the user interface.
  • the respective label information might be provided via the user interface.
  • the label information might be acquired only once, e.g. during the supervised model training or at an initialization phase of the supervised model training.
  • a user might assign one of the two labels to the respective material object and provide same to the apparatus via the user interface.
  • the apparatus may be configured so that the classification of the predetermined material object depends on a performance metric input via the user interface.
  • the performance metric being indicative of a scalar metric differently measuring misclassifications of different classes according to class-specific weights; e.g. the user inputs weights associated with the different error types (type I/II errors or, differently speaking, false positives/false negatives) for one or more classes.
  • a class-specific weight associated with a class may weight a precision of the classification of material objects associated with the class.
  • the performance metric may indicate different precisions for different classes.
  • the performance metric may indicate that a misclassification of material objects belonging to a first class has less influence, e.g., on a sorting of the material objects according to their assigned class, than a misclassification of material objects belonging to a second class.
  • the performance metric can be indicative of a trade-off between false positives and false negatives. The inventors found that, in the field of recycling, the monetary gain and the accuracy of sorting the material objects according to their respective classification can be improved if the performance metric indicates favoring false negatives over false positives, given that aluminum corresponds to the positive class.
  • the residual material objects may correspond to the negative class.
  • the dependency of the classification on the performance metric is advantageous in terms of an improvement of a quality of the classification of the predetermined material object by the apparatus.
  • the dependence on the performance metric in particular makes it possible to adapt the classification performed by the apparatus to the needs of a user. Therefore, a highly adjustable and accurate classification can be achieved.
  • the performance metric is indicative of misclassification costs associated with misclassifications of one or more classes discriminated by the classification of the deep learning model.
  • the apparatus is configured so that the classification of the predetermined material object depends on the performance metric by controlling the supervised model training so that the deep learning model meets, with respect to the training data, the performance metric, or is optimized with respect to the performance metric.
  • the apparatus is configured to, in subjecting the deep learning model to the supervised model training based on the training data in the initialization phase, take the performance metric into account. Therefore the performance metric should be defined at a start of the supervised model training, so that the apparatus is configured to perform the classification of the predetermined material object dependent on the performance metric using the trained deep learning model. This enables an efficient and accurate classification of material objects according to the user needs by the apparatus, since the performance metric is already considered by the deep learning model.
  • the apparatus is configured so that the classification of the predetermined material object depends on the performance metric by subjecting a set of candidate deep learning models to the supervised model training based on the obtained training data and by selecting one candidate deep learning model out of the set of candidate deep learning models which meets, or is best in terms of, the performance metric.
  • These candidate models may differ in the configurations of their respective hyperparameters, e.g., a learning rate, a dropout rate or properties of the architecture.
  • the apparatus is configured to, in subjecting the deep learning model to the supervised model training based on the training data in the initialization phase, take the performance metric into account.
  • the ability to select the candidate deep learning model out of a set allows the apparatus to select, based on predictive performance and model inference time, the best deep learning model to be trained by the supervised model training, e.g., depending on the material objects to be classified and/or on the performance metric.
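  • a minimal sketch of how such a selection over trained candidates might look, assuming each candidate has already been trained and a scalar performance metric (higher is better) can be evaluated on held-out validation data; all names and the hyperparameter examples are illustrative:

```python
# Illustrative sketch: pick the candidate deep learning model that is best
# in terms of a user-defined performance metric on validation data.
# train_fn, evaluate_metric and candidate_configs are assumed placeholders.
def select_best_candidate(candidate_configs, train_fn, evaluate_metric, train_data, val_data):
    best_model, best_score = None, float("-inf")
    for config in candidate_configs:              # e.g., different learning rates, dropout rates
        model = train_fn(config, train_data)      # supervised model training of this candidate
        score = evaluate_metric(model, val_data)  # e.g., a precision-weighted scalar metric
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score
```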
  • the apparatus further comprises a user interface, e.g., the interface described above.
  • the apparatus is configured to obtain, via the user interface, a threshold value, and compare the threshold value with an output of the deep learning model applied to the sensor data of the predetermined material object so as to decide on whether the predetermined material object is attributed to a predetermined class.
  • the deep learning model comprises a convolutional neural network.
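  • the patent does not specify a particular network architecture; purely as an illustration, a small convolutional network for 2-channel 224x224 dual-energy inputs and a class-logit output could look as follows (PyTorch; all layer sizes are assumptions, not the patented design):

```python
import torch.nn as nn

# Hypothetical minimal CNN for 2-channel (dual energy) 224x224 inputs.
class FlakeClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 x 112 x 112
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 x 56 x 56
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                                  # 64 x 1 x 1
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                    # x: [batch, 2, 224, 224]
        h = self.features(x).flatten(1)      # [batch, 64]
        return self.classifier(h)            # class logits
```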
  • the apparatus is configured to, intermittently, subject the deep learning model to semi-supervised model training.
  • the apparatus may be configured to obtain training data for the semi-supervised model training based on sensor data without associated label information, i.e., unlabeled data, and based on pairs of sensor data and label information, i.e., labeled data.
  • the semi-supervised model training can be performed based on labeled and unlabeled data.
  • the sensor data for the semi-supervised model training may be obtained by measurements of material objects of a further training set of material objects, wherein only for some of the material objects also label information is provided additionally to the respective sensor data.
  • the labeled data for the semi-supervised model training may be obtained from pairs of sensor data and label information of material objects of the training set of material objects for the supervised model training and the unlabeled data may be obtained for the material objects of the further training set of material objects.
  • with the semi-supervised model training it is possible to increase the amount of training data and therefore increase the performance of the deep learning model.
  • the semi-supervised model training results in reduced time and costs involved with obtaining the label information, since it is not necessary to provide the respective label information for all material objects based on which the training data for the semi-supervised model training is obtained.
  • the apparatus is configured to, intermittently, subject the deep learning model to unsupervised model training.
  • the advantage of this feature is that it is not necessary to provide label information for each of a training set of material objects used for the unsupervised model training, increasing an efficiency in obtaining a trained deep learning model. The time consuming and costly labeling can be reduced. Additionally, the amount of training data can be easily increased, since it is possible to add a further training set of material objects without label information for the unsupervised model training. This increase in training data improves the performance of the deep learning model. Especially, the usage of both, the supervised model training and the unsupervised model training, achieves a good compromise between model performance and efficiency in the training of the deep learning model.
  • the apparatus is configured to obtain the training data from the pairs of sensor data and label information by means of augmentation to artificially increase the variety of the training database for model training. This has a regularizing effect on the model while training, reducing overfitting and thus improving model performance.
  • a further embodiment relates to a sorting system including one of the apparatus described herein.
  • a further embodiment relates to a method performed by any of the above apparatuses and systems.
  • a further embodiment relates to a method for classifying material objects, comprising in an initialization phase, subjecting a deep learning model to supervised model training based on training data obtained from, for each of a training set of material objects, a pair of sensor data obtained by a measurement of the respective material object and label information associating the respective material object with a target classification. Additionally, the method comprises, using the deep learning model, classifying a predetermined material object based on sensor data obtained by a measurement of the predetermined material object.
  • the method as described above is based on the same considerations as the above-described apparatus.
  • the method can, by the way, be completed with all features and functionalities, which are also described with regard to the apparatus.
  • a further embodiment relates to a computer program which, when executed on a computer, instructs the computer to perform the herein described method.
  • Fig. 1 shows an apparatus 100 for classifying one or more material objects 110 (e.g., see 110 0 and 110 1 to 110 n ).
  • the apparatus 100 comprises a deep learning model 120.
  • the deep learning model 120 represents or comprises a convolutional neural network.
  • the apparatus 100 is configured to, using the deep learning model 120, classify 130 a predetermined material object 110 0 based on sensor data 140 0 obtained by a measurement of the predetermined material object 110 0 .
  • the sensor data 140 0 e.g., is obtained by a measurement using a dual energy X-ray sensor or using a camera or using a measurement system for detecting one or more properties of the predetermined material object 110 0 .
  • the apparatus 100 is configured to subject the deep learning model 120 to supervised model training 150.
  • the supervised model training 150 of the deep learning model 120 for example, is performed only once at a set-up or a first start of the apparatus 100.
  • the initialization phase represents a phase initialized in response to a change in settings or requirements for the classification 130, e.g., in response to new or additional classes of material objects 110, in response to a change in a measurement system for obtaining the sensor data 140 or a change in performance metrics.
  • the deep learning model 120 for example, is adapted to new settings or requirements for the classification 130.
  • the supervised model training 150 is performed to achieve a high performance deep learning model 120 for a high accuracy at the classification 130 of the material objects 110.
  • the supervised model training 150 is based on training data 152 obtained from, for each of a training set 154 of material objects 110 1 to 110 n , a pair 156 (see 156 1 to 156 n ) of sensor data 140 (see 140 1 to 140 n ) obtained by a measurement of the respective material object 110, i.e. one of 110 1 to 110 n , and label information 158 (see 158 1 to 158 n ) associating the respective material object 110 with a target classification.
  • Each material object 110 1 to 110 n of the training set 154 is associated with object-individual sensor data 140 and label information 158.
  • a measurement of each material object 110 1 to 110 n of the training set 154 is performed, e.g., by an external device or by the apparatus 100, to obtain the sensor data 140 1 to 140 n .
  • the sensor data 140 1 to 140 n might be obtained the same way as the sensor data 140 0 for the predetermined material object 110 0 .
  • the respective sensor data 140 might represent dual energy X-ray transmission data of the respective material object 110. However, it is also possible that the respective sensor data 140 represents any other conceivable data domain, like an RGB image.
  • the label information 158 of the respective material object 110 might be obtained based on the sensor data 140 of the respective material object 110.
  • the target classification might indicate for the respective material object 110 a class with which the material object 110 is associated.
  • the target classification might indicate one out of two or more different classes.
  • Fig. 1 shows exemplarily two different material classes, i.e. material 1 and material 2.
  • the target classification might indicate the material or main material of the respective material object 110.
  • the respective label information 158 is provided by a user of the apparatus 100, e.g., via a user interface of the apparatus 100.
  • the user might analyze the sensor data 140 of the respective material object 110 for determining the target classification of the respective material object 110.
  • the label information 158 comprises two or more labels, each indicating a target classification, wherein each of the two or more labels is provided by another user via the user interface, e.g., see Fig. 2 .
  • the supervised model training 150 enables the deep learning model 120 to determine, based on the sensor data 140 0 of the predetermined material object 110 0 , a classification 160 of the predetermined material object 110 0 with high accuracy.
  • the classification 160 indicates for the predetermined material object 110 0 one out of two or more different classes.
  • the two or more different classes should be the same classes as the ones selectable for the target classification.
  • Fig. 1 shows exemplarily that the apparatus 100 classifies 130 the predetermined material object 110 0 as material 2.
  • the apparatus 100 may be configured to obtain the training data 152 from the pairs 156 of sensor data 140 and label information 158 by means of augmentation. Augmentation artificially creates training data through different ways of processing the sensor data 140 or through combinations of multiple processing steps applied to the sensor data 140. This makes it possible to increase the amount of training data 152 and thereby the performance of the deep learning model 120 trained by supervised model training with the training data 152.
  • the material objects 110 comprise aluminum pieces.
  • the label information 158 may indicate for such material objects 110 aluminum as target classification.
  • the classification 160 of such material objects 110 should be aluminum, e.g., in case the apparatus 100 correctly classifies 130 the respective predetermined material object 110.
  • the material objects 110 are aluminum trash pieces and the classification 130 discriminates between high-grade pure aluminum and a low-grade residual.
  • the aluminum trash pieces comprise, e.g., flakes of different materials, wherein the material objects 110 made of aluminum, i.e. high-grade pure aluminum, are of interest and the material objects 110 made of other materials, i.e. low-grade residual, are not of interest. This type of classification makes it possible to identify the material objects 110 of interest.
  • the apparatus 100 can comprise one or more of the features and/or functionalities described with regard to Figs. 2 and 3 .
  • Fig. 2 shows an apparatus 100 for classifying material objects 110.
  • the apparatus 100 comprises a deep learning model 120, and is configured to, using the deep learning model 120, classify 130 a predetermined material object 110 0 based on sensor data 140 0 obtained by a measurement of the predetermined material object 110 0 .
  • the classifying 130 of the predetermined material object 110 0 results in a classification information 160 for the predetermined material object 110 0 .
  • the deep learning model is applied onto the sensor data 140 0 to obtain a probability score.
  • the apparatus can be configured to map the probability score to a class and that class can represent the classification 160.
  • the apparatus 100 is configured to, in an initialization phase, subject 122 the deep learning model 120 to supervised model training 150 based on training data 152 obtained from, for each of a training set 154 of material objects 110 1 to 110 n , a pair of sensor data 140 (see 140 1 to 140 n ) obtained by a measurement of the respective material object 110 and label information 158 (see 158 1 to 158 n ) associating the respective material object 110 with a target classification, e.g., a user individual target classification 157 1 /157 2 or a mean target classification 157 0 .
  • the apparatus 100 may be configured so that the respective label information 158 comprises, for the respective material object 110, at least two labels 158a and 158b each indicating a target class 157 1 /157 2 for the respective material object 110.
  • the at least two labels 158a and 158b are provided by different users. Therefore, a respective target class 157 1 /157 2 for the respective material object 110 may represent a user individual target classification.
  • the apparatus 100 may be configured so that the respective label information 158 comprises a mean target classification 157 0 for the respective material object 110 and a confidence value 159 for the indication of the mean target classification 157 0 .
  • the respective mean target classification 157 0 may represent a mean of two or more user individual target classifications 157 1 and 157 2 associated with the respective material object 110.
  • the respective confidence value 159 may indicate a confidence in selecting the respective mean target classification 157 0 based on the respective user individual target classifications 157 1 and 157 2 .
  • the apparatus 100 may be configured to determine, for each material object 110 of the training set 154, the respective confidence value based on a statistic over all respective user individual target classifications 157 1 /157 2 .
  • in case the respective user individual target classifications 157 1 /157 2 indicate the same class, the confidence value 159 is 1.0 or 100 %, and in case they indicate different classes, e.g., see the label information 158 3 , the confidence value 159 is smaller than 1.0 or 100 %; e.g., for N labels comprised by the respective label information 158, the confidence value may be Z/N, wherein Z represents the number of user individual target classifications 157 1 /157 2 indicating the same target class as the mean target classification 157 0 .
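  • a small sketch of how the mean target classification and the Z/N confidence value could be derived from N annotator labels (an assumed majority-vote aggregation; names are illustrative, not from the patent):

```python
from collections import Counter

# Illustrative majority-vote aggregation of N annotator labels into a mean target
# classification and a confidence value Z/N, as described above.
def aggregate_labels(labels):
    """labels: list of class ids from N annotators, e.g. [0, 0, 1]."""
    counts = Counter(labels)
    mean_target, z = counts.most_common(1)[0]   # most frequent class and its count Z
    confidence = z / len(labels)                # e.g. 2/3 for [0, 0, 1], 1.0 for [1, 1, 1]
    return mean_target, confidence
```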
  • the respective label information 158 comprises all labels associated with the respective material object, or one label out of two or more labels associated with the respective material object, or one label indicating the mean target classification 157 0 .
  • the usage of only one label out of two or more labels associated with the respective material object may reduce the quality of the respective label information 158, e.g., provide limitations with regard to the certainty of the respective label information 158.
  • the apparatus 100 is configured so that the supervised model training 150 is less sensitive to sensor data 140 of material objects 110 whose label information 158 comprises labels 158a and 158b indicative of different target classes 157 and/or lower confidence values 159 compared to sensor data 140 of material objects 110 whose label information 158 comprises labels 158a and 158b indicative of the same target classes 157 and/or higher confidence values 159.
  • the apparatus 100 might prefer the sensor data 140 1 , 140 2 and 140 n over the sensor data 140 3 for the supervised model training 150.
  • the apparatus 100 may be configured to obtain the training data 152 only from pairs of sensor data 140 and label information 158, which indicate the same target classes 157 in both labels 158a and 158b of the respective label information 158.
  • the usage of two or more labels per label information 158 and the selection of the training data 152 based on the information provided by the two labels 158a and 158b, i.e. the respective target classification 157 and/or the respective confidence value 159 improves the supervised model training.
  • the confidence value 159 may indicate the reliability of the target class provided by the respective label.
  • the supervised model training 150 can be improved. This is based on the idea that sensor data with unsure or possibly false target classifications 157 are not considered, or are considered only to a small extent, in the training data 152. This increases the accuracy of the deep learning model 120.
  • the apparatus may comprise a measurement sensor 170 configured to perform the measurement of the training set 154 of material objects 110 1 to 110 n and of the predetermined material object 110 0 .
  • the measurement sensor 170 may comprise or represent a dual energy X-ray sensor for obtaining a dual energy X-ray image as the sensor data 140, an X-ray sensor for obtaining a spectral X-ray image as the sensor data 140, a multi-energy X-ray sensor for obtaining a multi-energy X-ray image as the sensor data 140, a camera for obtaining an RGB image as the sensor data 140 or any combination thereof.
  • the apparatus 100 comprises a conveyor belt 175 for passing the training set 154 of material objects 110 1 to 110 n and the predetermined material object 110 0 by the measurement sensor 170 so that the material objects 110 1 to 110 n and the predetermined material object 110 0 can be scanned by the measurement sensor 170.
  • the sensor data 140 of the scanned material object 110 may comprise one or more images, e.g., dual energy X-ray images, spectral X-ray images, multi-energy X-ray images and/or RGB-images, of the respective material object 110.
  • the apparatus 100 may comprise a sorting stage 180 configured to subject the predetermined material object 110 0 to sorting according to the classification information 160.
  • the sorting stage 180 sorts the predetermined material object 110 0 to material objects 110 of a first class 182 or to material objects 110 of a second class 184 dependent on the classification information 160.
  • the classification information 160 can indicate for the predetermined material object 110 0 one class out of two or more classes.
  • a class might be associated with a certain material, e.g., a metal, plastic, glass, wood or a textile.
  • the apparatus 100 may comprise a user interface 190 via which, for example, a user may provide information to the apparatus 100.
  • the user interface 190 can be used for various options.
  • the apparatus 100 can be configured to obtain, for the respective material object 110, the at least two labels 158a and 158b via the user interface 190.
  • the at least two labels 158a and 158b for the respective material object 110 can be provided by different users.
  • a first user may provide a first label 158a and a second user may provide a second label 158b. This can also be the reason for a discrepancy between the target classifications of the at least two labels 158a and 158b. If the class of the respective material object 110 cannot be clearly determined based on the respective sensor data 140, different users may provide different labels for the same sensor data 140.
  • the apparatus is configured to obtain, via the user interface 190, a threshold value 132, and compare the threshold value 132 with an output of the deep learning model 120 applied to the sensor data 140 0 of the predetermined material object 110 0 so as to decide on whether the predetermined material object 110 0 is attributed to a predetermined class, e.g., 182 or 184.
  • the deep learning model 120 is configured to output a probability score indicating a likelihood of the predetermined material object 110 0 belonging to the predetermined class based on the sensor data 140 0 , e.g., the classification yields a model prediction probability score [0; 1].
  • the threshold value 132 can be used to map probability scores to hard class assignments as a post-hoc step, e.g., the threshold value 132 can be used to cut the probability score, i.e. the classification 160, into a binary decision.
  • the model output from the sorting stage 180 may be this binary decision.
  • This threshold value 132 could be adapted by a user after the supervised model training of the deep learning model 120 to calibrate model predictions further to their needs depending on their requirements regarding the accuracy of the classification 130 for a predetermined class of material objects 110.
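  • for illustration, the post-hoc mapping of a probability score in [0; 1] to a hard binary decision via an adjustable threshold could look like the following sketch (the default of 0.5 is taken from the later description; the function name is an assumption):

```python
# Sketch: post-hoc mapping of the model's probability score to a hard binary decision.
# The threshold (default 0.5 in the description) can be adapted by the user to tune purity.
def to_binary_class(probability_score: float, threshold: float = 0.5) -> int:
    """Returns 1 if the score reaches the threshold, else 0."""
    return 1 if probability_score >= threshold else 0

# Raising the threshold makes positive assignments more conservative, e.g.:
# to_binary_class(0.62, threshold=0.5) -> 1, but to_binary_class(0.62, threshold=0.8) -> 0
```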
  • the apparatus 100 is configured so that the classification 130 of the predetermined material object 110 0 depends on a performance metric 134 input via the user interface 190.
  • the performance metric 134 can be indicative of misclassification costs associated with misclassifications of one or more classes discriminated by the classification 130 using the deep learning model 120. For example, a misclassification in one class can induce higher costs than a misclassification in another class. Therefore, the performance metric 134 may indicate whether the deep learning model 120 has to be more accurate in predicting a first class than in predicting a second class, e.g., the first class might only be determined by the deep learning model 120 at a high probability.
  • the deep learning model has to be very sure that the predetermined material object 110 0 belongs to the first class in order to select the first class for the predetermined material object 110 0 .
  • the apparatus 100 can be configured to control the supervised model training 150 so that the deep learning model 120 meets, with respect to the training data 152, the performance metric 134, or is optimized with respect to the performance metric 134.
  • the deep learning model 120 assesses the sensor data 140 0 of the predetermined material object 110 0 according to the performance metric 134.
  • the classification 130 of the predetermined material object 110 0 depends on the performance metric 134.
  • the apparatus 100 is configured to subject a set 124 of candidate deep learning models (DLM) to the supervised model training 150.
  • the apparatus 100 may select the candidate deep learning models for the set 124 based on the obtained training data 152.
  • the apparatus 100 can be configured to select one candidate deep learning model, e.g., DLM 2, out of the set 124 of candidate deep learning models which meets or is best in terms of the performance metric 134.
  • the deep learning model 120 is chosen according to the performance metric 134.
  • the classification 130 of the predetermined material object 110 0 depends on the performance metric 134, since the deep learning model 120 is applied onto the sensor data 140 0 of the predetermined material object 110 0 at the classification 130.
  • Fig. 3 shows an embodiment for a training of a deep learning model 120.
  • the shown training can be performed by one of the herein described apparatuses 100.
  • the apparatus 100 may be configured to decide 400 between three different training methods, e.g., the supervised model training 150, the unsupervised model training 151 and the semi-supervised model training 350, for training the deep learning model 120.
  • the apparatus 100 may be configured to subject the deep learning model 120 to supervised model training 150.
  • the apparatus 100 can be configured to, e.g., intermittently, subject the deep learning model 120 to unsupervised model training 151 and/or semi-supervised model training 350.
  • the training set 154 of material objects 110 can differ among supervised model training 150, semi-supervised model training 350 and unsupervised model training 151.
  • Unsupervised model training 151 may be performed using training data 152 without any label information 158 for each material object 110 of the training set of material objects 110 for the unsupervised model training 151.
  • the semi-supervised model training 350 may be performed using training data 152 comprising labeled data of some material objects 110 of the training set of material objects for the semi-supervised model training 350 and unlabeled data of the remaining material objects 110 of the training set of material objects 110 for the semi-supervised model training 350.
  • Fig. 4 shows a block diagram of a method 200 for classifying material objects.
  • the method 200 comprises, in an initialization phase, subjecting 210 a deep learning model to supervised model training 150 based on training data obtained from, for each of a training set of material objects, a pair of sensor data obtained by a measurement of the respective material object and label information associating the respective material object with a target classification. Additionally, the method 200 comprises, using the deep learning model, classifying 220 a predetermined material object based on sensor data obtained by a measurement of the predetermined material object.
  • the apparatus 100 may acquire an application-oriented database from shredded MHA, e.g., the training set 154 of material objects 110 1 to 110 n , on a conveyor belt 175 with an integrated dual energy X-ray scanner 170.
  • a convolutional neural network model, i.e. the deep learning model 120, may be trained on dual energy X-ray images, e.g., comprised by the sensor data 140.
  • the apparatus 100 may consider a customized performance metric 134 to account for the specific purity requirements that are present in such recycling tasks.
  • the proposed solution involves the interaction of multiple processes to pursue the overall goal of reducing the need for expert knowledge within the classification of material objects.
  • the advantages are illustrated, by way of example, with respect to the sorting of scrap materials.
  • a deep learning based approach to increase the flexibility of sorting system operators and to improve the sorting quality in a traceable way is proposed.
  • the flowchart shown in Fig. 5 outlines a sorting system 300 and its individual processes.
  • the flowchart of the proposed deep learning based X-ray sorting system comprises the individual sub-processes through which a material stream passes until it is sorted.
  • the system 300 comprises or includes one of the herein described apparatuses 100 and a measurement system 170, e.g., an X-ray system, for obtaining sensor data 140 of a material object 110.
  • the apparatus 100 may comprise a machine learning (ML) model or a deep learning (DL) model to be used for classifying the material objects 110.
  • the apparatus 100 may be configured to receive from the measurement system 170 the sensor data 140 of one or more material objects and output for each of the one or more material objects 110 a classification 160, e.g., a decision to which class the respective material object belongs or a probability score indicating to which class the respective material object may belong.
  • system 300 may comprise a sorting system 180 configured to sort the respective material object 110 based on its classification 160, e.g., map the probability score 160 onto a binary class 181.
  • apparatus 100 may comprise the measurement system 170 and/or the sorting system 180.
  • the developed deep learning (DL) model 120 is integrated into the flow of the sorting system 300 and replaces previous rule-based solutions, which base their sorting decision on pre-parameterized rule-based methods. This enables the learning of relevant features for a sorting decision directly from the raw data without human guidance.
  • the input provided to the DL model 120 is composed of the underlying material stream and the annotation of data samples (e.g., referred to as 'labeling'; e.g., comprised by the label information 158) that is provided by human experts once to enable the model training, e.g., the supervised model training 150.
  • the system 300 may comprise a user interface 190.
  • the apparatus 100 may comprise the user interface 190.
  • the 'user interaction' module, i.e. the user interface 190, may provide a performance metric 134 that is provided by the human expert and which reflects the requirements on the trained model 120.
  • the performance metric can reflect, for instance, the degree of purity required for specific sorting classes.
  • the input data, i.e. the sensor data 140, for the DL model 120 is acquired in the first process step by an X-ray system 170 that provides dual energy X-ray transmission data (XRT data) from the raw material samples, i.e. the material objects 110, on a conveyor belt 175.
  • a supervised DL model architecture i.e. the deep learning model 120, is trained to distinguish the different classes according to the provided performance metric 134. This method allows material separation to be performed with little prior knowledge of the underlying XRT data.
  • Fig. 6 illustrates exemplary submodules inside the Deep Learning Model 120.
  • Fig. 6 shows the different subprocesses of the DL model 120, starting with the preprocessing 126 (see Fig. 7 for a description) of the XRT data 140 to a machine-readable input.
  • individual objects 110 are collected from the continuous flow of material using standard computer vision techniques such as morphological operations and filtering, e.g., in step 126a.
  • the flakes extracted in this way, i.e. the material objects 110, are then centered on an equally sized grid of 224x224 pixels for each of the two dual energy X-ray dimensions, e.g., in step 126b.
  • Fig. 7 illustrates the pre-processing 126 of the scrap samples, i.e. the material objects 110: individual samples 110 are located and cut out from the material flow, e.g., in step 126a, and then stored as individual samples 140a and 140b according to the two dual energy channels, e.g., in step 126b. These samples 140a and 140b, e.g., comprised by the sensor data 140, are then fed to the DL model 120, e.g. as training data 152 for the supervised model training 150 of the DL model 120.
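  • as a rough illustration of such a preprocessing step (assuming OpenCV and NumPy; the threshold value, the background convention and the function name are assumptions, not taken from the patent), individual flakes could be located via morphological filtering and connected components and centered on a 224x224 grid per energy channel:

```python
import cv2
import numpy as np

# Illustrative sketch: locate flakes in a low/high energy image pair and center each
# on a 2 x 224 x 224 grid. Threshold and background-is-bright assumption are made up.
def extract_flakes(low_img, high_img, background_level=0.95, grid=224):
    mask = (low_img < background_level).astype(np.uint8)                          # flakes attenuate the beam
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))      # remove speckle
    num, labels = cv2.connectedComponents(mask)
    samples = []
    for i in range(1, num):                                                        # label 0 is background
        ys, xs = np.where(labels == i)
        h, w = ys.max() - ys.min() + 1, xs.max() - xs.min() + 1
        if h > grid or w > grid:                                                   # filter overly large flakes
            continue
        sample = np.zeros((2, grid, grid), dtype=np.float32)
        y0, x0 = (grid - h) // 2, (grid - w) // 2                                  # center the cut-out
        for c, img in enumerate((low_img, high_img)):
            crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            sample[c, y0:y0 + h, x0:x0 + w] = crop
        samples.append(sample)
    return samples
```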
  • the human expert provides annotations (label information 158) for representative material samples, e.g. for the material objects 110 1 to 110 n .
  • annotation tools such as the proprietary "DE-Kit (dual-energy-kit)" could be used to collect these annotations 158.
  • the human expert selects an appropriate metric 134 according to which the DL model 120 is optimized in alignment with model requirements. This performance metric 134 guides the model selection, e.g., the selection out of the set 124 of candidate deep learning models, and could reflect the different costs associated with different types of errors (e.g. False Positives vs. False Negatives) for instance.
  • Fig. 8 illustrates an embodiment of the DL model training process, e.g., a supervised model training 150, from data input to the final model 120 2 .
  • the model 120 1 is subsequently trained 150 on this annotated training data set 152 and the model hyperparameters are tuned via a valid model validation strategy (such as a nested holdout-5-fold cross validation over separately split validation and test data sets) and with an appropriate criterion that is reflected by the selected performance metric 134.
  • the final model 120 2 is integrated into the sorting system 300 and the resulting model performance (estimated generalization error) and other statistical metrics are provided to the user.
  • the DL model 120 outputs probability scores [0; 1] for each of the classes which are translated into hard class predictions via a threshold 132.
  • this threshold is by default set to 0.5.
  • the proposed system 300 or apparatus 100 enables the end user to interact with this system 300 or apparatus 100 by adapting this threshold 132 and thereby calibrate the decision making, i.e. the classification 130. Practically this implies that the user can control the purity of the resulting sorting after model training and deployment.
  • a representative database, e.g., representative training data 152, is crucial for tackling a specific task. Further, for the observation and evaluation of machine learning models 120 and of how they perform in an application oriented field, realistic samples, i.e. realistic material objects 110, are mandatory. Therefore, data 152 should be selected with a focus on two circumstances. First, the use case to be tackled should be of interest in the specific domain, e.g., here the recycling industry. Second, the data 152 should exist in that domain with regard to the acquisition system, i.e. the measurement system 170, sample shape and variety.
  • X-ray systems, e.g. comprised by the measurement system 170, can handle the tough and dirty surroundings which are predominant in the recycling sector.
  • those X-ray systems can be integrated in a processing line with additional systems.
  • systems which are built for crushing and shredding used materials to a certain size are upstream processing steps. This serves for a better handling of the material flow and aims at sorting with a higher purity.
  • the measurement system 170 may comprise an X-ray source, e.g., set to 160 keV and 2 mA, a dual energy line detector, e.g., with 896 pixels and a pitch of 1.6 mm, and a conveyor belt 175 on which the material stream is transported.
  • the two pieces of energy information per pixel can be processed with conventional image processing methods, e.g. filtering, morphological operations, etc.
  • the dual energy information can be processed taking into consideration the physical properties of the underlying measurement setup and the law of attenuation of X-rays. This makes it possible to utilize a technique called basis material decomposition (BMD), which is used to distinguish between different basis materials [1].
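  • for orientation, the attenuation law and the basis material decomposition can be written in a standard textbook form (a general formulation, not a quotation from the patent; I_0, mu_1, mu_2, a_1 and a_2 denote the incident intensity, the basis materials' attenuation coefficients and their effective thicknesses):

```latex
% X-ray attenuation law per energy channel E, with incident intensity I_0(E)
% and attenuation coefficient \mu(E, s) along the beam path:
I(E) = I_0(E)\,\exp\!\left(-\int \mu(E, s)\,\mathrm{d}s\right)
% Basis material decomposition (BMD): the measured attenuation is modeled as a
% combination of two basis materials with coefficients \mu_1, \mu_2 and thicknesses a_1, a_2:
-\ln\frac{I(E)}{I_0(E)} \approx a_1\,\mu_1(E) + a_2\,\mu_2(E)
% Measuring at the two energies E_{low} and E_{high} yields two such equations,
% from which a_1 and a_2 can be solved per pixel.
```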
  • a deep learning based approach for processing the dual energy information is proposed, for which, for example, a few preprocessing steps are applied to the raw dual energy information; these are described in the following section.
  • Fig. 9 shows an overview of the applied steps to preprocess the measured dual energy information.
  • step 1 shows the low energy information of a few pixel lines.
  • step 2 illustrates a part of the pixel lines merged into an image (only the low energy information is shown as a grayscale image).
  • in step 3 (126a), the region of interest in which the relevant information is present is marked for each individual flake 110.
  • the last step 126b shows the extracted flake 110 and its location in a 2x224x224 shaped dual energy data sample.
  • the herein discussed sensor data 140 may comprise two individual samples 140a and 140b according to the two dual energy channels.
  • each sample (flake) 110 is captured in one individual resulting data sample 140a and 140b. This holds for each energy channel. This is necessary because the data stream acquired from the dual energy detector, e.g., comprised by the measurement system 170, is a single line per energy channel and the proposed deep learning model 120 operates in the image domain.
  • the respective energy information per pixel for each flake 110 is centered in a 224x224 sized image, see 140a and 140b. This size is chosen such that the extracted flakes 110, whose height and width lie, for example, between 10 and 120 mm after shredding, do not exceed the image. Further, a filter may check whether the extracted flakes 110 exceed the image dimensions, e.g., 224 pixels in height or width, and remove overly large flakes 110 accordingly.
  • single dual energy pixel lines are processed into a data stream and individual flake images 140a and 140b are extracted with related spatial information, e.g., as the sensor data 140, before feeding them into a deep learning pipeline, e.g., the supervised model training 150.
  • Fig. 10 shows an exemplary result 142 of the base material decomposition (BMD) 144 and the visualization of the output in a blue to greenish color scheme.
  • the agreement level between annotators, e.g., the confidence value 159, is quantified by a measure (Cohen's Kappa, see below) for which a value of 1 represents complete agreement and a value smaller than zero indicates zero agreement of the annotators.
  • Fig. 11 shows a table of the Cohen's Kappa scores between two experts in the field of recycling and X-ray physics and three non-expert annotators.
  • the Kappa agreements between the annotators shown in the table indicate that substantial agreement is achieved when comparing the experts with each other. The same holds for two of the three non-expert human annotators with respect to their agreement with expert number one.
  • the remaining Kappa scores among experts and non-experts show a moderate agreement level.
  • an acceptable agreement lies at 0.41 [4].
  • the inventors conclude that non-experts are able to annotate after instruction by experts.
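  • as an aside, such pairwise agreement scores could be computed, for example, with scikit-learn (an illustration; the label vectors are made-up placeholders, not data from the patent):

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative computation of the pairwise Cohen's Kappa between two annotators.
annotator_expert_1 = [0, 0, 1, 1, 0, 1, 0, 1]
annotator_non_expert = [0, 0, 1, 0, 0, 1, 1, 1]

kappa = cohen_kappa_score(annotator_expert_1, annotator_non_expert)
print(f"Cohen's Kappa: {kappa:.2f}")   # 1.0 = complete agreement, <= 0 = no agreement
```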
  • 3000 randomly chosen flake samples 110 are annotated, i.e. the respective label information 158 is provided.
  • the apparatus 100 may decide whether to perform an additional split of the labeled samples into one fraction in which all three annotators agree with their votes and the rest of the labeled samples. According to an embodiment, this may result in 2172 certainly labeled samples and 828 noisily labeled samples.
  • the apparatus 100 may be configured so that the supervised model training 150 is less sensitive to sensor data 140 of certain labeled samples compared to sensor data 140 of noisy labeled samples.
  • the available data consist of 7346 single flake samples X , i.e. material objects 110, that come, e.g., from major household appliance recycling scrap.
  • the appliances are shredded beforehand, which, for example, leads to resulting flake sizes between 10 and 120 mm.
  • 3000 flakes X l are randomly selected and annotated with the herein proposed annotation scheme into two classes Y ∈ {0, 1}, e.g., the target classification 0 represents pure aluminum and the target classification 1 represents the rest.
  • This annotation can be provided as the label information 158.
  • a distribution, e.g., 1:3, between material objects 110 associated with a first target classification 157 and material objects 110 associated with a second target classification 157, e.g., of pure aluminum flakes and rest flakes, may exist among the annotated samples X l , e.g., the sensor data 140 1 to 140 n and the label information 158 1 to 158 n associated with the material objects 110 1 to 110 n of the training set 154 of material objects 110 1 to 110 n .
  • the apparatus 100 may be configured to split the annotated samples into certain ( X l train,cert , X l val , X l test ) and noisy ( X l train,noisy ) labeled samples.
  • the following sections (architectures, data augmentation strategies, annotation noise and performance metric) describe features and/or functionalities that a herein described apparatus 100 may comprise to improve the performance of the deep learning model 120.
  • Fig. 13 shows exemplary four data augmentation procedures usable within model training, e.g., to increase the amount of the training data 152.
  • it helps to regularize the model 120 and avoids overfitting.
  • It is proposed to use one or more of the set of data augmentation procedures shown in Fig. 13 and to wrap them via the RandAugment procedure [7] to facilitate tuning towards the herein discussed use case, e.g., the classification of material objects 110.
  • the following data augmentation procedures can be applied to the individual flakes 110 within the 224x224 grid, i.e. to the individual samples 140a and 140b of the sensor data 140: 1) rotation, 2) flipping, 3) additive Gaussian noise, and 4) salt and pepper noise.
  • Each of these procedures should be parametrized with a magnitude parameter mag ∈ [0; 1] that controls the strength of the augmentation.
  • For 1), mag controls the degree of the rotation; for 2), it refers to the probability of a flip; for 3), it refers to the standard deviation of the Gaussian noise distribution; and for 4), it controls the ratio of salt and pepper pixels (white and black) as well as the amount of those noisy pixels.
  • the RandAugment strategy [7] can serve as a wrapper for different data augmentation procedures to allow an elegant tuning of those individual procedures towards the task by introducing two hyperparameters n and mag. Therein, n controls the number of procedures that are randomly selected at each training step and mag controls their strength as described above.
  • RandAugment follows the rationale that potentially detrimental data augmentation procedures would be averaged out in the training process, reducing their potential harm.
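  • to make the wrapping idea concrete, a simplified RandAugment-style wrapper over the four procedures could look as follows (a NumPy/SciPy sketch; the exact scaling of mag to angles, probabilities and noise levels is an assumption, not the reference RandAugment implementation):

```python
import random
import numpy as np
from scipy.ndimage import rotate as nd_rotate

# Simplified RandAugment-style wrapper: at each training step, n of the four
# augmentation procedures are chosen at random and applied with strength mag in [0, 1].
def rotate(x, mag):                                     # x: [2, 224, 224] dual-energy sample
    angle = mag * 180.0 * random.choice([-1, 1])        # mag scales the rotation angle
    return nd_rotate(x, angle, axes=(1, 2), reshape=False, order=1)

def flip(x, mag):
    return x[:, :, ::-1].copy() if random.random() < mag else x

def gaussian_noise(x, mag):
    return x + np.random.normal(0.0, mag * 0.1, size=x.shape)

def salt_and_pepper(x, mag):
    noisy = x.copy()
    m = np.random.random(x.shape) < mag * 0.05          # fraction of noisy pixels
    noisy[m] = np.random.choice([0.0, 1.0], size=m.sum())
    return noisy

PROCEDURES = [rotate, flip, gaussian_noise, salt_and_pepper]

def rand_augment(x, n=2, mag=0.5):
    for op in random.sample(PROCEDURES, k=n):           # n randomly selected procedures
        x = op(x, mag)
    return x
```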
  • annotations, e.g., the target classifications 157, may be obtained from two or more, e.g., from three, human annotators in the data labeling process.
  • for each annotated sample i there are thus three annotations 157 y i 1 , y i 2 and y i 3 with y ij ∈ {0, 1}.
  • This allows for different schemes to aggregate the individual annotations 157 to one global annotation y i * used as target in model training.
  • the apparatus 100 may be configured to aggregate, for each of the training set 154 of material objects 110 1 to 110 n , the two or more labels, e.g., 158a and 158b, to one global label.
  • the apparatus 100 may be configured to obtain the training data 152 according to one of several aggregation procedures, for example the scheme sketched below.
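By way of illustration, one common aggregation scheme is a majority vote over the individual annotations, combined with the unanimity-based split into certain and noisy labeled samples described above. The sketch below is an assumption and not necessarily one of the procedures contemplated herein.

```python
# One plausible aggregation scheme: majority vote plus a unanimity-based certainty flag.
import numpy as np

def aggregate_annotations(votes):
    """votes: (num_samples, num_annotators) array with entries in {0, 1}.

    Returns the aggregated label y* per sample and a boolean mask that is True
    where all annotators agree ("certain") and False otherwise ("noisy").
    """
    votes = np.asarray(votes)
    y_star = (votes.mean(axis=1) >= 0.5).astype(int)     # majority vote
    is_certain = (votes == votes[:, :1]).all(axis=1)     # unanimous agreement
    return y_star, is_certain

# Example with three annotators (class 0: aluminum, class 1: residual):
votes = np.array([[0, 0, 0],    # unanimous -> certain aluminum sample
                  [0, 1, 0],    # disagreement -> noisy sample, majority says aluminum
                  [1, 1, 1]])   # unanimous -> certain residual sample
y_star, is_certain = aggregate_annotations(votes)
```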
  • the main objective of this work is the training 150 of a model 120 that allows the separation of aluminum and residual flakes where aluminum recyclate is more valuable than residual recyclate.
  • a performance metric 134 may therefore be used that reflects this purity requirement.
  • this performance metric 134 has to reflect the interest in a high precision at the potential cost of a low recall for the aluminum class.
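A common way to encode a preference for high precision over recall is an F-beta score with beta < 1 computed for the aluminum class. Whether this is the concrete form of the performance metric 134 is not stated here, so the following sketch, including the beta value, is an assumption.

```python
# Sketch of a precision-weighted metric: F-beta with beta < 1 for the aluminum class.
import numpy as np

def aluminum_fbeta(y_true, y_pred, beta=0.5, aluminum_class=0):
    """F-beta score for the aluminum class; beta < 1 emphasizes precision over recall."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == aluminum_class) & (y_true == aluminum_class))
    fp = np.sum((y_pred == aluminum_class) & (y_true != aluminum_class))
    fn = np.sum((y_pred != aluminum_class) & (y_true == aluminum_class))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```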
  • dual energy X-ray transmission data derived from shredded household appliances (Major Household Appliance MHA).
  • This material flow was shredded and passed onto a conveyor belt 175 on which an integrated dual energy X-ray system, e.g., comprised by the measurements system 170, was applied to create dual-energy scans of the material objects 110 as subsequent measurement.
  • This continuous material stream may then be preprocessed, e.g., into 7346 individual flakes 110, as data input and stored in machine-readable tensor data, i.e. the sensor data 140.
  • the final material stream thus may consist of 7346 individual flakes 110.
  • 3000 flakes 110 may be randomly selected and presented to three human annotators for data annotation, e.g., to obtain the label information 158. These 3000 flakes 110 may be transformed via a Basis Material Decomposition 144 and then presented to the annotators via the proprietary "DE-Kit". In early experiments with domain experts it was found that the decision making about the label, e.g., the target classification 157, is ambiguous, especially in hard corner cases, due to the complexity of the task. It is therefore proposed to mine data annotations from three different human annotators per sample, i.e. per material object 110 of the training set 154 of material objects 110 1 to 110 n , to create a high-quality training database, i.e.
  • the training data 152, which guides the model training process, e.g., the supervised model training 150, and to increase the robustness of the resulting data annotations, e.g., the label information 158 or the target classification 157 comprised by the label information 158.
  • for the unsupervised model training 151, the use of two machine learning based clustering methods trained on feature-engineered dual energy XRT data has been investigated.
  • the investigated unsupervised model training techniques 151 were Gaussian Mixture Models and k-means clustering, and two different human-engineered feature extraction methods were used.
  • As a first feature extraction method, histograms were calculated for each energy channel. These histograms are binned over the range from the minimum to the maximum value present in the data, i.e. the binning is derived from these two values.
  • the second feature extraction method yields statistical measures (the arithmetic mean and the standard deviation) as features, computed individually for each energy channel.
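A minimal sketch of these two feature extraction variants, fed into k-means clustering or a Gaussian mixture model, is given below. The bin count, the number of clusters and the variable flakes are assumptions for illustration.

```python
# Sketch of the two hand-engineered feature variants and the two clustering methods.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def histogram_features(flake, bins=16):
    """Per-channel histogram, binned over that channel's own min-max range."""
    feats = []
    for c in range(flake.shape[-1]):                     # e.g. two energy channels
        channel = flake[..., c].ravel()
        hist, _ = np.histogram(channel, bins=bins,
                               range=(channel.min(), channel.max()))
        feats.append(hist / hist.sum())                  # normalize counts to fractions
    return np.concatenate(feats)

def statistical_features(flake):
    """Arithmetic mean and standard deviation, computed per energy channel."""
    return np.array([stat(flake[..., c])
                     for c in range(flake.shape[-1])
                     for stat in (np.mean, np.std)])

# flakes: hypothetical list of (H, W, 2) dual energy arrays
# X = np.stack([histogram_features(f) for f in flakes])
# labels_kmeans = KMeans(n_clusters=2, n_init=10).fit_predict(X)
# labels_gmm = GaussianMixture(n_components=2).fit(X).predict(X)
```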
  • this annotated data, i.e. the pairs 156 of sensor data 140 and label information 158, may be split into five different folds, i.e.
  • the training data 152 may comprise the five different folds of training, validation and testing data.
  • the models, e.g., the deep learning model 120, may be trained via standard Gradient Descent Backpropagation using a standard Cross-Entropy loss function.
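A minimal training-loop sketch follows; the optimizer choice (SGD with momentum), learning rate, number of epochs and the two-channel 224x224 input layout are assumptions, as only gradient descent backpropagation with a cross-entropy loss is specified above.

```python
# Minimal sketch of supervised training with cross-entropy and gradient descent.
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3, device="cpu"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:                  # x: (B, 2, 224, 224), y: (B,) class labels
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)    # cross-entropy on the class logits
            loss.backward()                  # backpropagation
            optimizer.step()                 # gradient descent update
    return model
```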
  • a performance metric 134 may be defined for the model selection step, e.g., for the selection of the deep learning model 120 out of the set 124 of candidate deep learning models.
  • this system is expected to distinguish new unseen aluminum flakes 110 from residual flakes 110 with a Precision of 0.9308, i.e., out of 1000 flakes 110 selected as aluminum by the model 120, 69 would be False Positives whilst 931 would correctly correspond to the aluminum class.
  • a threshold 132 of 0.5 was used to map the probability scores [0; 1] predicted by the model 120 to hard binary class predictions (class 0: Aluminum, class 1: Residual fraction).
  • This threshold 132 could be adapted by the end user after deployment of the model 120 to calibrate model predictions further to their needs depending on their requirements regarding the purity of the resulting sorting.
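Mapping the probability scores to hard class predictions, and re-calibrating the threshold 132 after deployment, can be sketched as follows; it is assumed here that the score denotes the predicted probability of the residual class 1.

```python
# Sketch of threshold-based mapping from probability scores to hard predictions.
import numpy as np

def to_hard_labels(prob_residual, threshold=0.5):
    """prob_residual: scores in [0, 1] for class 1 (residual fraction).

    Returns 1 (residual) where the score reaches the threshold, else 0 (aluminum).
    Lowering the threshold keeps fewer, but purer, aluminum predictions.
    """
    return (np.asarray(prob_residual) >= threshold).astype(int)

# Example: a stricter purity requirement via a lower threshold
scores = np.array([0.05, 0.35, 0.55, 0.90])
default_pred = to_hard_labels(scores)                 # -> [0, 0, 1, 1]
purer_pred = to_hard_labels(scores, threshold=0.3)    # -> [0, 1, 1, 1]
```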
  • Fig. 14 shows a confusion matrix of the final model 120 on the unseen test data. Again, a threshold 132 of 0.5 was used to map the probability scores [0; 1] predicted by the model to hard binary class predictions (class 0: Aluminum, class 1: Residual fraction).
  • the apparatus 100 is configured to classify a plurality of predetermined material objects 110 0 , based on the respective sensor data 140 0 associated with the respective predetermined material object 110 0 .
  • the inventors used a 5-fold Cross Validation scheme as an outer loop in the model validation step [10]. Further, an inner loop with a holdout validation split has been included to tune the hyperparameters of the different model architectures and backbones.
  • Fig. 15 illustrates the evaluation scheme. A test and validation ratio of 20% each was applied, the tuning budget was set to 100 GPU hours, and the performance metric 134 described above was used. Fig. 15 shows a model selection and validation scheme. GE stands for Generalization Error, HPC for Hyperparameter Configuration and HPC* is the optimal HPC per fold. Mean and standard deviation of model performance across folds are reported in the experimental results section below.
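A structural sketch of this nested evaluation (outer 5-fold cross validation, inner 20% holdout for hyperparameter tuning) is given below; the callables fit_and_score and test_metric as well as the random seeds are hypothetical placeholders.

```python
# Sketch of 5-fold outer cross validation with an inner holdout split for tuning.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def nested_evaluation(X, y, candidate_hpcs, fit_and_score, test_metric):
    """X, y: numpy arrays; candidate_hpcs: list of hyperparameter configurations."""
    outer = KFold(n_splits=5, shuffle=True, random_state=0)
    generalization_errors = []
    for train_idx, test_idx in outer.split(X):
        # inner holdout: 80% tuning-train / 20% validation of the outer training fold
        tr_idx, val_idx = train_test_split(train_idx, test_size=0.2, random_state=0)
        best_hpc = max(candidate_hpcs,
                       key=lambda hpc: fit_and_score(hpc, X[tr_idx], y[tr_idx],
                                                     X[val_idx], y[val_idx]))
        # refit with the selected configuration and evaluate on the outer test fold
        generalization_errors.append(test_metric(best_hpc,
                                                 X[train_idx], y[train_idx],
                                                 X[test_idx], y[test_idx]))
    return np.mean(generalization_errors), np.std(generalization_errors)
```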
  • Fig. 16 shows a table with the final results of the herein proposed methods and their experimental setups.
  • the results are structured according to the above proposed methods: 3.1 Architectures, 3.2 Data Augmentation Strategies and 3.3 Annotation Noise.
  • a small amount of labeled data 156 also renders model evaluation a difficult problem.
  • a 5-fold nested cross validation scheme with a hold-out validation set for hyperparameter tuning has been employed to ensure a realistic and comparable performance estimate of the final model.
  • the final deployment of such deep learning approaches depends on the acceptance by the practitioners and their trust in such models. Therefore, local explainable AI methods have been employed to open these black box models and provide insights into how the model 120 makes its predictions over different flakes 110. This can foster trust amongst practitioners, and it is an important step towards the final deployment of such automated sorting systems 300.
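As one example of a local explanation method, an occlusion-sensitivity map can be computed per flake; the embodiments do not commit to this particular technique, so the sketch below, including the patch size and baseline value, is illustrative only.

```python
# Generic occlusion-sensitivity sketch for explaining a single prediction.
import torch

def occlusion_map(model, x, target_class, patch=16, baseline=0.0):
    """x: (1, C, 224, 224) input tensor for a single flake.

    Returns a coarse map of how much the predicted probability of target_class
    drops when each image patch is replaced by the baseline value.
    """
    model.eval()
    with torch.no_grad():
        base_prob = torch.softmax(model(x), dim=1)[0, target_class]
        h, w = x.shape[-2:]
        heat = torch.zeros(h // patch, w // patch)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = x.clone()
                occluded[..., i:i + patch, j:j + patch] = baseline
                prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                heat[i // patch, j // patch] = base_prob - prob   # importance of this patch
    return heat
```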
  • the human operator of the system has the possibility to interact with the DL model 120 by integrating his or her prior knowledge about the material stream to be sorted into the system. Potential options:
  • BMD: Basis Material Decomposition
  • the proposed solution was developed for the sorting of EoL aluminum scrap in the context of white goods and MHA in the recycling industry.
  • the herein described approach is not limited to the use in the recycling industry, but is also suitable for other domains for which sorting is relevant, such as the mining industry.

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Image Analysis (AREA)
EP22152672.6A 2021-12-10 2022-01-21 Vorrichtung und verfahren zum klassifizieren von materialobjekten Pending EP4194107A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21213912 2021-12-10

Publications (1)

Publication Number Publication Date
EP4194107A1 true EP4194107A1 (de) 2023-06-14

Family

ID=79231071

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22152672.6A Pending EP4194107A1 (de) 2021-12-10 2022-01-21 Vorrichtung und verfahren zum klassifizieren von materialobjekten

Country Status (1)

Country Link
EP (1) EP4194107A1 (de)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116957475A (zh) * 2023-08-09 2023-10-27 南京沃德睿医疗科技有限公司 基于云计算的口腔诊所库房管理方法、系统及装置
CN118122658A (zh) * 2024-05-10 2024-06-04 保定市佳宇软件科技有限公司 一种基于数据深度学习的智能干选系统

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210229133A1 (en) * 2015-07-16 2021-07-29 Sortera Alloys, Inc. Sorting between metal alloys

Non-Patent Citations (11)

* Cited by examiner, † Cited by third party
Title
ARTSTEIN, R., POESIO, M.: "Inter-coder agreement for computational linguistics", COMPUTATIONAL LINGUISTICS, vol. 34, 2008, pages 555 - 596, XP058245377, DOI: 10.1162/coli.07-034-R2
COHEN, J.: "A coefficient of agreement for nominal scales", EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, vol. 20, 1960, pages 37 - 46
CUBUK, E. D., ZOPH, B., SHLENS, J., LE, Q. V.: "RandAugment: Practical automated data augmentation with a reduced search space", NEURAL INFORMATION PROCESSING SYSTEMS, vol. 34, 2020, pages 18613 - 18624
FIRSCHING, M., NACHTRAB, F., UHLMANN, N., HANKE, R.: "Multi-Energy X-ray Imaging as a Quantitative Method for Materials Characterization", ADVANCED MATERIALS, vol. 23, 2011, pages 2655 - 2656
HASTIE, T. A.: "The elements of statistical learning", SPRINGER SERIES IN STATISTICS, 2009
HE, K., ZHANG, X., REN, S., SUN, J.: "Deep residual learning for image recognition", IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2016, pages 770 - 778, XP055536240, DOI: 10.1109/CVPR.2016.90
LI, L. A.: "Hyperband: A novel bandit-based approach to hyperparameter optimization", THE JOURNAL OF MACHINE LEARNING RESEARCH, 2017, pages 6765 - 6816
MCHUGH, M. L.: "Interrater reliability: the kappa statistic", BIOCHEMIA MEDICA, vol. 22, 2012, pages 276 - 282
SONG, H. A.-G.: "Learning from noisy labels with deep neural networks: A survey", ARXIV, 2020
TAN, M., LE, Q.: "EfficientNet: Rethinking model scaling for convolutional neural networks", INTERNATIONAL CONFERENCE ON MACHINE LEARNING, vol. 97, 2019, pages 6105 - 6114
TANNO, R., SAEEDI, A., SANKARANARAYANAN, D., ALEXANDER, D. C., SILBERMAN, N.: "Learning From Noisy Labels by Regularized Estimation of Annotator Confusion", IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2019, pages 11236 - 11245, XP033686493, DOI: 10.1109/CVPR.2019.01150

Similar Documents

Publication Publication Date Title
Kukreja et al. A Deep Neural Network based disease detection scheme for Citrus fruits
KR102137184B1 (ko) 자동 및 수동 결함 분류의 통합
EP4194107A1 (de) Vorrichtung und verfahren zum klassifizieren von materialobjekten
US9715723B2 (en) Optimization of unknown defect rejection for automatic defect classification
US9607233B2 (en) Classifier readiness and maintenance in automatic defect classification
Moses et al. Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset
JP6445127B2 (ja) 貨物の検査方法およびそのシステム
Chopra et al. Efficient fruit grading system using spectrophotometry and machine learning approaches
JP2015506023A (ja) 画像内の特徴を自動的に検出する方法および装置、並びに装置をトレーニングする方法
EP3896602A1 (de) Verfahren und system zum trainieren eines maschinenlernmodells zur klassifizierung von komponenten in einem materialstrom
US20230023641A1 (en) Automated detection of chemical component of moving object
Deulkar et al. An automated tomato quality grading using clustering based support vector machine
WO2010059679A2 (en) Constructing enhanced hybrid classifiers from parametric classifier families using receiver operating characteristics
KR102158967B1 (ko) 영상 분석 장치, 영상 분석 방법 및 기록 매체
KR20230063147A (ko) 다단계 특징 분석을 사용한 전립선 조직의 효율적인 경량 cnn과 앙상블 머신 러닝 분류 방법 및 시스템
Alotaibi Germination quality prognosis: Classifying spectroscopic images of the seed samples
Richter et al. Optical filter selection for automatic visual inspection
CN113807259B (zh) 一种基于多尺度特征融合的染色体分裂相定位与排序的方法
Aravapalli An automatic inspection approach for remanufacturing components using object detection
Millan-Arias et al. Anomaly Detection in Conveyor Belt Using a Deep Learning Model
Palmquist Detecting defects on cheese using hyperspectral image analysis
Srinivasaiah et al. Analysis and prediction of seed quality using machine learning.
Dhali et al. An Automatic Brick Grading System Using Convolutional Neural Network: Bangladesh Perspective
Duberg Anomaly Detection in Snus Manufacturing: A machine learning approach for quality assurance
González et al. Region selection and image classification methodology using a non-conformity measure

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20231214

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR