EP3838427A1 - A method for sorting objects travelling on a conveyor belt - Google Patents

A method for sorting objects travelling on a conveyor belt

Info

Publication number
EP3838427A1
Authority
EP
European Patent Office
Prior art keywords
conveyor belt
anyone
sorting
travelling
product
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP19218995.9A
Other languages
German (de)
French (fr)
Inventor
Lars Mensal
Jesper Stemann Andersen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ihp Systems AS
Original Assignee
Ihp Systems AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ihp Systems AS
Priority to EP19218995.9A
Priority to EP20215996.8A
Publication of EP3838427A1
Legal status: Withdrawn

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • B07C5/342 Sorting according to other particular properties according to optical properties, e.g. colour
    • B07C5/3422 Sorting according to other particular properties according to optical properties, e.g. colour using video scanning devices, e.g. TV-cameras
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00 Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34 Sorting according to other particular properties
    • B07C5/3412 Sorting according to other particular properties according to a code applied to the object which indicates a property of the object, e.g. quality class, contents or incorrect indication
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B07 SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07C POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C2501/00 Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0054 Sorting of waste or refuse


Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Sorting Of Articles (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for sorting objects. The method employs at least one imaging sensor and a controller comprising a processor and a memory storage, wherein the controller receives image data captured by the at least one imaging sensor, and at least one sorting robot coupled to the controller, wherein the at least one sorting robot is configured to receive an actuation signal from the controller. The processor executes an object identification module configured to detect objects travelling on a conveyor belt, to recognize at least one target item travelling on the conveyor belt by processing the image data, and to determine an expected time when the at least one target item will be located within a diversion path of the sorting robot; and the controller selectively generates the actuation signal based on whether a sensed object detected in the image data comprises the at least one target item.

Description

  • The present invention relates to a method for sorting objects travelling on a conveyor belt, where image data is captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt and where the imaging sensor provides color image data.
  • BACKGROUND ART
  • In many recycling centers that receive recyclable materials, sortation of materials may be done by hand or by machines. For example, a stream of materials may be carried by a conveyor belt, and the operator of the recycling center may need to direct a certain fraction of the material into a bin or otherwise off the current conveyor. These conventional sorting systems are large in size and lack flexibility due to their large size. Moreover, they lack the ability to be used in recycling facilities that handle various types of items such as plastic bottles, aluminum cans, cardboard cartons, and other recyclable items, or to be readily updated to handle new or different materials. It is also known to use automated solutions using sensors or cameras to identify materials carried on a conveyor belt, which via a controller may activate a sorting mechanism. However, these newer solutions do not always function perfectly.
  • The conventional plastic sorting solutions are based on near-infrared / short-wave-infrared (NIR/SWIR) spectrometry, where e.g. a NIR/SWIR reflection spectrum is collected for each plastic object and the spectrum identifies the material type of the plastic object - which determines the sorting.
    The NIR/SWIR-spectrometric sorting systems are unable to handle dark and black plastics, as all dark and black plastics return the same flat spectrum in the NIR/SWIR range regardless of the material type. Moreover, NIR/SWIR systems also cannot discriminate properly between white and transparent plastics, which is important for proper recycling. Another drawback of the spectrometric systems is that they cannot sort waste by application - e.g. they cannot sort food from non-food plastics.
  • Finally, spectrometric systems are also challenged by composite plastic objects, e.g. a bottle with a bottle cap and a foil covering the bottle - the spectrometric system might sort the object based on the foil.
  • DISCLOSURE OF THE INVENTION
  • An object of the present invention is to provide a method for identifying and sorting waste material in a more precise manner.
  • A further object is to provide a cost-effective and effective method of identifying and sorting waste material, in particular waste material comprising plastic.
  • Normally, when waste and garbage are collected, an initial sorting into different material categories is performed. The categories may e.g. be glass, metal, plastic, cardboard, paper and biological waste. When the waste then reaches the recycling center, each material fraction is normally sorted into even finer fractions. The metal fraction may be sorted into aluminium and iron fractions, and plastic into fractions based on different plastic types such as PE and PP, or into fractions with soft and hard plastic.
  • The present invention relates to a method for sorting objects travelling on a conveyor belt,
    the method comprising:
    • receiving image data captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt, said imaging sensor providing color image data with a spatial resolution of at least 0.4 px/mm;
    • executing a product detection and recognition module on a processor, the product detection and recognition module being configured to detect characteristics of the at least one object travelling on the conveyor belt by processing the image data;
    • determining an expected time when the at least one object will be located within a sorting area of at least one sorting device; and
    • selectively generating a robot control signal to operate the at least one sorting device based on whether the at least one object comprises a target object (a code sketch of this control flow follows the list).
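  • The control flow above can be summarized in code. The following is a minimal, non-authoritative sketch only: the helper objects (camera, detector, belt, sorter) are hypothetical stand-ins for the imaging sensor, the product detection and recognition module, the conveyor and the sorting device, and are not part of this disclosure.

```python
import time

def sorting_loop(camera, detector, belt, sorter, target_labels):
    """Hypothetical control loop for the claimed method."""
    while True:
        image = camera.capture()                # color image data, >= 0.4 px/mm
        for obj in detector.detect(image):      # detected characteristics per object
            if obj.label in target_labels:      # only target objects trigger the device
                # expected time at which the object reaches the sorting area
                eta = time.time() + belt.distance_to_sorting_area_mm / belt.speed_mm_per_s
                sorter.actuate_at(eta, obj.position)   # robot control signal
```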
  • In this context the term "sorting device" includes robots, mechanical actuators, solenoid-based actuators, air jet nozzles, etc.
  • The terms "object", "item" and "product" and their plural forms are used interchangeably in this text.
  • The imaging sensor is preferably a camera which is able to provide color images in environments with low light intensity, e.g. light intensities around 500 lumen. Preferably, the camera operates at light intensities around 1000 lumen or more, such as 1500 lumen or more.
  • In an embodiment the target object is guided to a collection device in the sorting area by means of the sorting device. The sorting robot may control e.g. a pusher device or air jet nozzles which are suitable for guiding the target object to a collection device.
  • In an embodiment of the method according to the invention, the characteristics of the at least one object travelling on the conveyor belt are the physical appearance or shape of the object. Thus, the method is capable of identifying objects based on their design features.
  • In an embodiment of the method according to the invention, the characteristics of the at least one object travelling on the conveyor belt are the color and/or transparency of the object. Thus, the method is also suitable for detecting objects based on their color or transparency.
  • In an embodiment the characteristics of the at least one object travelling on the conveyor belt are selected from vendor names, brand names, product names, trademarks, logos, symbols, slogans or a combination of two or more of the characteristics. The product detection and recognition module may interact with one or more databases comprising information about vendor names, brand names, product names, trademarks, and slogans, and retrieve information from these databases to identify objects.
  • In respect of the three above-mentioned embodiments, it is clear that their features may be combined in any desirable manner.
  • For the purpose of obtaining a more precise identification the product detection and recognition module may apply two or more characteristics in the product detection and recognition process.
  • In an embodiment the imaging sensor has a spatial resolution of at least 2 px/mm (pixel/mm). With such a spatial resolution the imaging sensor is able to provide very detailed images.
  • In an embodiment the spatial resolution is at least 4 px/mm. When the spatial resolution is about 4 px/mm or more, the imaging sensor is able to detect very small-scale details, such as logos with an extent of about 5 mm or less (at 4 px/mm, a 5 mm logo spans roughly 20 pixels).
  • In an embodiment the method is adapted for detecting and recognizing objects used as packaging or containers for food items, such as bottles and trays. The objects may e.g. be bottles for juice and soft drinks made from plastic, such as transparent plastic. The object may also be a tray used for e.g. meat or biscuits. The trays may e.g. be made from plastic material in any desired colors. The trays may be marked with a "fork and knife" logo indicating the tray is for use with foodstuff.
  • In an embodiment the method is adapted for detecting and recognizing black objects. Black objects are difficult to detect due to the low reflection from the material; however, the method according to the invention has proven to be surprisingly efficient in detecting and recognizing black objects. The black object may e.g. be made from plastic, which it is desirable to sort properly. Preferably the black object is a tray for food, such as a plastic tray for meat.
  • In one aspect of the method, the detection and recognition of objects are based on the detection and recognition module's interaction with one or more databases, such as databases comprising information about e.g. specific products (such as materials used in the product), vendor names, brand names, product names, trademarks, and slogans.
  • The method may also apply a convolutional neural network.
  • Thus, in an embodiment of the method according to the invention, the product detection and recognition involves a convolutional neural network.
  • For the convolutional neural network to be used for identification of items/objects learned during training operations, the method proceeds with an inference process where, during operation, the neural network parameters are loaded into a computer processor (such as the processor mentioned above) in a neural network program that implements the convolutional neural network. During operation, the processor may then receive images from the imaging sensor and pass each image through the convolutional neural network program. The convolutional neural network then outputs a decision, indicating, for example, the type of object present in the image with the highest likelihood.
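  • As an illustration of such an inference process, the sketch below assumes a trained PyTorch model saved as "cnn_sorter.pt" (an illustrative file name) whose output neurons correspond to the object categories; the patent itself does not prescribe a framework.

```python
import torch

model = torch.load("cnn_sorter.pt")   # learned neural network parameters (assumed file)
model.eval()

@torch.no_grad()
def classify(image_tensor):
    """image_tensor: float tensor of shape (3, H, W) from the imaging sensor."""
    logits = model(image_tensor.unsqueeze(0))   # pass the image through the network
    probs = torch.softmax(logits, dim=1)
    confidence, index = probs.max(dim=1)        # object type with highest likelihood
    return index.item(), confidence.item()
```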
  • In a training operation, the labeled data is used by a training algorithm (which may be performed by a training processor) to optimize the convolutional neural network to identify the object in the captured images with the greatest feasible accuracy. As would be readily appreciated by one of ordinary skill in the art, a number of algorithms may be utilized to perform this optimization, such as Stochastic Gradient Descent, Nesterov's Accelerated Gradient Method, the Adam optimization algorithm, or other well-known methods. In Stochastic Gradient Descent, a random collection of the labeled images is fed through the network. The error of the output neurons is used to construct an error gradient for all the neuron parameters in the network. The parameters are then adjusted using this gradient, by subtracting the gradient multiplied by a small constant called the "learning rate". These new parameters may then be used for the next step of Stochastic Gradient Descent, and the process repeated.
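  • A bare-bones illustration of the Stochastic Gradient Descent step described above follows; `network.gradients` and `network.params` are hypothetical stand-ins for a real framework's API, and a production system would normally use a built-in optimizer instead.

```python
import random

def stochastic_gradient_descent(network, labeled_images, steps=1000,
                                batch_size=32, learning_rate=0.01):
    for _ in range(steps):
        # a random collection of the labeled images is fed through the network
        batch = random.sample(labeled_images, batch_size)
        grads = network.gradients(batch)   # error gradient for all neuron parameters
        # adjust the parameters by subtracting the gradient times the learning rate
        network.params = [p - learning_rate * g
                          for p, g in zip(network.params, grads)]
```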
  • The result of the optimization includes a set of convolutional neural network parameters (which are stored in a memory) that allow the convolutional neural network to determine the presence of an object in an image. During operation, the neural network parameters may be stored on digital media. In an example of implementation, the training process may be performed by creating a collection of images of items, with each image labeled with the category of the items appearing in the image. Each of the categories can be associated with a number; for instance, the conveyor belt might be 0, a carton 1, a transparent plastic bottle 2, etc. The convolutional neural network would then comprise a series of output neurons, with each neuron associated with one of the categories. Thus, neuron 0 represents the presence of the conveyor belt, neuron 1 represents the presence of a carton, neuron 2 represents the presence of a transparent plastic bottle, and so forth for other categories.
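  • The category numbering can be expressed directly, as in this small sketch (only the three categories named above are listed; further categories would follow the same pattern):

```python
CATEGORIES = {
    0: "conveyor belt",
    1: "carton",
    2: "transparent plastic bottle",
    # ... one entry per output neuron
}

def decode(output_activations):
    """Map the strongest output neuron to its category name."""
    index = max(range(len(output_activations)), key=output_activations.__getitem__)
    return CATEGORIES.get(index, "unknown")
```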
  • The method may be designed to detect and recognize waste objects using very specific, product-specific categories, i.e. to classify each waste object as belonging to a specific vendor, brand, product and/or application (food, cosmetics, other). This may be enabled by e.g. using an application/shape/color hierarchical ordering (a code sketch of such a labelling follows the list below):
    • Food
      • ∘ Bottle
        • ▪ Transparent
        • ▪ White
        • ▪ Black
        • ▪ Blue
        • ▪ Green
        • ▪ Red
        • ▪ Other
      • ∘ Tray
        • ▪ Transparent
        • ▪ White
        • ▪ Black
        • ▪ Blue
        • ▪ Green
        • ▪ Red
        • ▪ Other
      • ∘ Other
        • ▪ Transparent
        • ▪ White
        • ▪ Black
        • ▪ Blue
        • ▪ Green
        • ▪ Red
        • ▪ Other
    • Cosmetics
      • ∘ Bottle
        • ▪ Transparent
        • ▪ White
        • ▪ Black
        • ▪ Blue
        • ▪ Green
        • ▪ Red
        • ▪ Other
      • ∘ Other
        • ▪ Transparent
        • ▪ White
        • ▪ Black
        • ▪ Blue
        • ▪ Green
        • ▪ Red
        • ▪ Other
    • Other
      • ▪ Transparent
      • ▪ White
      • ▪ Black
      • ▪ Blue
      • ▪ Green
      • ▪ Red
      • ▪ Other
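  • One way to realize this hierarchy as labels for a classifier is sketched below. For brevity the sketch takes the full application x shape x color cross product, whereas the listing above prunes some branches (e.g. cosmetics has no tray level); it is illustrative only.

```python
from itertools import product

APPLICATIONS = ["food", "cosmetics", "other"]
SHAPES = ["bottle", "tray", "other"]
COLORS = ["transparent", "white", "black", "blue", "green", "red", "other"]

# every (application, shape, color) triple becomes one flat class label
LABELS = [f"{a}/{s}/{c}" for a, s, c in product(APPLICATIONS, SHAPES, COLORS)]
LABEL_TO_ID = {label: i for i, label in enumerate(LABELS)}
```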
  • For the convolutional neural network to be used for identification of items/materials learned during training operations, the method proceeds with an inference process where the neural network parameters are loaded into a computer processor (such as the processor mentioned above) in a neural network program that implements the convolutional neural network. During operation, the processor may then receive images from the imaging sensor and pass each image through the convolutional neural network program. The neural network then outputs a decision, indicating, for example, the type of item/material present in the image with the highest likelihood.
  • In an embodiment of the method, the method further comprises interaction with a product database. The product database may contain information about an identified object, such as which material or materials the object is manufactured from. Such information is very useful in a sorting process.
  • In an embodiment the object is a plastic object. The object may be made from plastic material such as e.g. PE, PP, PS, PET, PVC, PVA or ABS. Large amounts of plastic are used today, which generates large amounts of plastic waste, and the present invention provides a method for efficient sorting of plastic material.
  • The invention also provides a system for sorting objects, the system comprising:
    • at least one imaging sensor;
    • a controller comprising a processor and a memory storage, wherein the controller receives image data captured by the at least one imaging sensor; and
    • at least one sorting robot coupled to the controller, wherein the at least one sorting robot is configured to receive an actuation signal from the controller;
    • wherein the processor executes an object identification module configured to detect objects travelling on a conveyor belt, recognize at least one target item travelling on the conveyor belt by processing the image data, and determine an expected time when the at least one target item will be located within a diversion path of the sorting robot; and
    • wherein the controller selectively generates the actuation signal based on whether a sensed object detected in the image data comprises the at least one target item.
    DETAILED DESCRIPTION OF THE INVENTION
  • The invention will now be described in further detail with reference to the drawings, in which:
  • Figure 1:
    shows an embodiment with a conveyor and a robot;
    Figure 2:
    shows an embodiment with just a conveyor;
    Figure 3:
    shows an embodiment without a conveyor (or robot);
    Figure 4:
    shows a detailed view of the invention;
    Figure 5:
    shows a method for logo/symbol detection;
    Figure 6:
    shows the principles of text detection and recognition;
    Figure 7:
    illustrates the principles of neural network object detection;
    Figure 8:
    illustrates the principles of two-stage neural network object detection;
    Figure 9:
    shows an embodiment linking high resolution with a neural network; and
    Figure 10:
    shows examples of symbols, which can be detected by the method.
  • The figures are only intended to illustrate the principles of the invention and may not be accurate in every detail. Moreover, parts which do not form part of the invention may be omitted. The same reference numbers are used for the same parts.
  • Figure 1 is a diagram showing the principles of the invention. Reference number 1 indicates the conveyor belt. Box 2 illustrates the "scene" on the conveyor belt 1, i.e. the conveyor belt with one or a number of items. The scene 2 reflects light, which is registered by the camera 3 and transformed into an image. The image is processed in a product detection and recognition module 4 to identify the item or items present in the scene 2. The information from the product detection and recognition module 4 is sent to the sorting control 5, which may obtain further information about the identified items from the product database 6.
  • The sorting control 5 communicates with a robot controller 7 which controls a robot 8, which is physically able to intervene in scene 2b in a sorting area on the conveyor belt 1 and sort the item or items into specific categories of waste material.
  • The speed of the conveyor belt 1 is monitored, and an encoder 9 sends information about the speed of the conveyor belt 1 to a synchronizer 10. The synchronizer sends signals to the camera 3 and determines how many images the camera 3 should take per second. The synchronizer also sends signals to the robot controller 7 with information about when the scene 2b reaches the sorting area. The encoder 9 may also send signals directly to the robot controller 7.
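  • The synchronizer logic lends itself to a short sketch: from the belt speed reported by the encoder 9 one can derive both the camera trigger rate and the time at which a captured scene reaches the sorting area. All names and numbers below are assumptions for illustration.

```python
def camera_trigger_rate(belt_speed_mm_s, scene_length_mm, overlap=0.2):
    """Images per second so that consecutive scenes overlap slightly."""
    return belt_speed_mm_s / (scene_length_mm * (1.0 - overlap))

def arrival_time(t_captured, camera_to_sorting_area_mm, belt_speed_mm_s):
    """When scene 2a, captured at t_captured, becomes scene 2b in the sorting area."""
    return t_captured + camera_to_sorting_area_mm / belt_speed_mm_s
```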
  • Scene 2a and scene 2b are in principle identical, and the reference numbers only indicate that the conveyor belt has moved the scene a distance from the point where scene 2a was registered by the camera 3.
  • Figure 2 illustrates the principles of the conveyor belt information system. The speed of the conveyor belt is monitored, and the information about the speed is transformed by the encoder 9 and sent as an encoder signal to the synchronizer 10. The synchronizer 10 sends a signal to the camera 3 when an image of the scene 2a needs to be provided. Depending on the actual speed of the conveyor belt, the camera may provide several images of the scene 2a per second. However, if the speed of the conveyor belt is slow, the camera 3 only needs to provide a few images per minute.
  • The images from the camera 3 are sent to the product detection and recognition module 4 to be processed and the items in the image identified. The information about the identified items is then sent to the visualization and statistics module 5a for further processing to display or otherwise provide the information that can be extracted or accumulated from the detection system. The visualization and statistics module 5a is integrated with the sorting control 5.
  • The visualization and statistics module 5a communicates with the product database 6 to obtain more detailed information about product properties for an identified item. The information about product properties may e.g. be information about material.
  • Based on the information available, the sorting control sends commands to the robot controller (not shown in figure 2), which will activate the robot to perform the desired sorting motions and actuations when the scene 2a reaches the sorting area (scene 2b).
  • Figure 3 illustrates the principles of the information system. The information system includes the camera 3, the product detection and recognition module 4, the visualization and statistics module 5a and the product database 6.
  • The images from the camera 3 are sent to the product detection and recognition module 4, where the items on the images (appearing in the scene 2a) are identified.
  • The camera 3, the lighting and the conveyor speed must be adjusted to provide images which meet the requirements, e.g. images with sufficient lighting and with little motion blur.
  • The information about the identified items is then sent to the visualization and statistics module 5a for further processing. The visualization and statistics module 5a is integrated with the sorting control 5.
  • The visualization and statistics module 5a communicates with the product database 6. The visualization and statistics module 5a can search the product database 6 and obtain more detailed information about product properties for an identified item. The information about product properties may e.g. be information about material.
  • Based on the information available, the sorting control sends commands to the robot controller, which will activate the robot to perform the desired sorting motions and actuations. As a result, the items appearing in scene 2a on the conveyor belt will be sorted into the desired fractions.
  • Figure 4 shows the principles of product detection and recognition. The image distributor 21 receives an image and distributes the image to a neural network object detection module 22, a logo detection module 23, a symbol detection module 24, and a text detection and text+font recognition module 25.
  • The information which is deduced from the neural network object detection module 22, the logo detection module 23, and the symbol detection module 24 is sent to the product recognition module 4a for further processing.
  • The information from the text detection and text+font recognition module 25 is further processed in the vendor name recognition module 26, the brand name recognition module 27, the product name recognition module 28, the slogan recognition module 29, and the product description recognition module 30, before the information is sent to the product recognition module 4a for further processing.
  • The product recognition module 4a is integrated in the product detection and recognition module 4.
  • Figure 5 illustrates a method for logo and symbol detection as shown in figure 4.
  • In the logo detection module and the symbol detection module the overall detection principles are generally the same. When the modules receive an image from the image distributor, the image is first processed in a feature extraction module 40, which extracts local features. The information is sent to a feature description module 41, which describes the local features and sends the information to a matching module 42. The matching module 42 interacts with a feature descriptor database 44, which can provide further information about the features. From the matching module 42, matched local feature descriptors are sent to a clustering module 43 before the information is provided to the product recognition module for further processing.
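  • A minimal local-feature pipeline in the spirit of modules 40-44 is sketched below, using OpenCV's ORB features as an assumed implementation; the patent does not name a specific feature type, and the match threshold is arbitrary.

```python
import cv2

orb = cv2.ORB_create()                                     # feature extraction + description (40/41)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_logo(image, logo_descriptors):
    """`logo_descriptors` plays the role of the feature descriptor database 44."""
    _, descriptors = orb.detectAndCompute(image, None)
    if descriptors is None:
        return []
    matches = matcher.match(descriptors, logo_descriptors)  # matching module 42
    # keep only strong matches; clustering (module 43) would then group them spatially
    return [m for m in matches if m.distance < 40]
```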
  • Figure 6 illustrates in more details the principles of text detection and recognition carried out in the text detection and text+font recognition module 25.
  • When the text detection and text+font recognition module receives an image from the image distributor, the image is first processed in a convolutional neural network 50, which sends a compressed image representation to a text detection module 25a, which in turn sends text boxes to a text recognition module 25b and a font recognition module 25c. The text recognition module 25b and the font recognition module 25c provide information about text and font to the modules 26-30 in figure 4. After processing in the modules 26-30, text information is provided to the product recognition module.
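  • Schematically, the pipeline of figure 6 can be written as below. `cnn`, `text_detector`, `text_recognizer` and `font_recognizer` are hypothetical callables standing in for modules 50 and 25a-25c; no specific library is implied.

```python
def process_text(image, cnn, text_detector, text_recognizer, font_recognizer):
    features = cnn(image)                      # compressed image representation (50)
    results = []
    for box in text_detector(features):        # text boxes (25a)
        text = text_recognizer(features, box)  # text recognition (25b)
        font = font_recognizer(features, box)  # font recognition (25c)
        results.append((box, text, font))
    return results                             # handed on to modules 26-30
```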
  • During the processing of the image, the convolutional neural network 50, the text detection module 25a, and the text recognition module 25b interact with an images and annotations database 51. The images and annotations database 51 is a training database which supports the convolutional neural network 50. Neural network parameters are learned in the training phase from images and annotations; it is this learned model, extracted from the images and annotations, that is used during operation/processing.
  • Figure 7 illustrates the general principles of neural network object detection. The image is sent to the convolutional neural network 50 for processing, and the convolutional neural network 50 sends a compressed image representation to an object detection module 52, which detects the objects.
  • During the process the convolutional neural network 50 and the object detection module 52 interact with the images and annotations database 51. Neural network parameters are learned in the training phase from images and annotations; it is this learned model, extracted from the images and annotations, that is used during operation/processing.
  • Figure 8 illustrates the general principles of two-stage neural network object detection.
  • An image is distributed from the image distributor module 21. The image is sent to the convolutional neural network 50 and the object recognition module 53. The convolutional neural network 50 sends a compressed image representation to the object detection module 52, which detects the objects and sends the information to the object recognition module 53, which recognizes the objects.
  • The convolutional neural network 50, the object detection module 52, and the object recognition module 53 interact with the images and annotations database 51 during the detection and recognition process. The neural network parameters are learned in the training phase from images and annotations; it is this learned model, extracted from the images and annotations, that is used during operation/processing.
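  • The two-stage scheme can be sketched as follows, with `cnn`, `detector` and `recognizer` as hypothetical stand-ins for modules 50, 52 and 53: stage one localizes objects, stage two classifies each cropped region.

```python
def two_stage_detection(image, cnn, detector, recognizer):
    features = cnn(image)                  # convolutional neural network 50
    labeled = []
    for box in detector(features):         # object detection module 52
        crop = image[box.y0:box.y1, box.x0:box.x1]   # image region for the detection
        labeled.append((box, recognizer(crop)))      # object recognition module 53
    return labeled
```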
  • Figure 9 illustrates an embodiment where an image with high resolution is linked to a neural network for object detection. The architecture of the network is adapted to the high resolution of the images by neural network layers 50a, 50b and 50c at the beginning of the network. The embodiment corresponds to the embodiment shown in figure 7, but is adapted for images with high resolution.
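  • One plausible reading of layers 50a-50c is a stack of strided convolutions that progressively downsample the high-resolution input before the main network; the channel counts and strides below are assumptions, not disclosed values.

```python
import torch.nn as nn

high_res_stem = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 50a: halve resolution
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 50b: halve again
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 50c: halve again
    nn.ReLU(),
)
```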
  • Figure 10 illustrates examples of symbols which can be detected by the method according to the invention.

Claims (14)

  1. A method for sorting objects travelling on a conveyor belt,
    the method comprising:
    receiving image data captured by at least one imaging sensor for an image comprising at least one object travelling on the conveyor belt, said imaging sensor providing color image data with a spatial resolution of at least 0.4 px/mm;
    executing a product detection and recognition module on a processor, the product detection and recognition module being configured to detect characteristics of the at least one object travelling on the conveyor belt by processing the image data;
    determining an expected time when the at least one object will be located within a sorting area of at least one sorting device; and
    selectively generating a device control signal to operate the at least one sorting device based on whether the at least one object comprises a target object.
  2. A method according to claim 1, wherein the target object is guided to a collection device in the sorting area by means of the sorting device.
  3. A method according to claim 1 or 2, wherein the characteristics of the at least one object travelling on the conveyor belt are the physical appearance or shape of the object.
  4. A method according to any one of the preceding claims, wherein the characteristics of the at least one object travelling on the conveyor belt are the color or colors and/or transparency of the object.
  5. A method according to any one of the preceding claims, wherein the characteristics of the at least one object travelling on the conveyor belt are selected from vendor names, brand names, product names, trademarks, logos, symbols, slogans or a combination of two or more of the characteristics.
  6. A method according to any one of the preceding claims, wherein the product detection and recognition module applies two or more characteristics in the product detection and recognition.
  7. A method according to any one of the preceding claims, wherein said spatial resolution is at least 2 px/mm.
  8. A method according to any one of the preceding claims, wherein said spatial resolution is at least 4 px/mm.
  9. A method according to any one of the preceding claims, wherein product detection and recognition involves a convolutional neural network.
  10. A method according to any one of the preceding claims, wherein the method further comprises interaction with a product database.
  11. A method according to any one of the preceding claims, wherein the object is a plastic object.
  12. A method according to any one of the preceding claims, wherein the method is adapted for detecting and recognizing objects used as packaging or containers for food items, such as bottles and trays.
  13. A method according to any one of the preceding claims, wherein the method is adapted for detecting and recognizing black objects.
  14. A method according to claim 13, wherein the black object is a tray for food.
EP19218995.9A 2019-12-20 2019-12-20 A method for sorting objects travelling on a conveyor belt Withdrawn EP3838427A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19218995.9A EP3838427A1 (en) 2019-12-20 2019-12-20 A method for sorting objects travelling on a conveyor belt
EP20215996.8A EP3865222A1 (en) 2019-12-20 2020-12-21 A method for sorting consumer packaging objects travelling on a conveyor belt

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP19218995.9A EP3838427A1 (en) 2019-12-20 2019-12-20 A method for sorting objects travelling on a conveyor belt

Publications (1)

Publication Number Publication Date
EP3838427A1 true EP3838427A1 (en) 2021-06-23

Family

ID=69061108

Family Applications (2)

Application Number Title Priority Date Filing Date
EP19218995.9A Withdrawn EP3838427A1 (en) 2019-12-20 2019-12-20 A method for sorting objects travelling on a conveyor belt
EP20215996.8A Withdrawn EP3865222A1 (en) 2019-12-20 2020-12-21 A method for sorting consumer packaging objects travelling on a conveyor belt

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP20215996.8A Withdrawn EP3865222A1 (en) 2019-12-20 2020-12-21 A method for sorting consumer packaging objects travelling on a conveyor belt

Country Status (1)

Country Link
EP (2) EP3838427A1 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017176855A1 (en) * 2016-04-06 2017-10-12 Waste Repurposing International, Inc. Waste identification systems and methods

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190247891A1 (en) * 2015-07-16 2019-08-15 UHV Technologies, Inc. Sorting Cast and Wrought Aluminum

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11878327B2 (en) 2019-03-13 2024-01-23 Digimarc Corporation Methods and arrangements for sorting items, useful in recycling
US11741733B2 (en) 2020-03-26 2023-08-29 Digimarc Corporation Arrangements for digital marking and reading of items, useful in recycling
CN113420819A (en) * 2021-06-25 2021-09-21 西北工业大学 Lightweight underwater target detection method based on CenterNet
CN114887927A (en) * 2022-05-10 2022-08-12 浙江工业大学 Automatic conveying quality detection and sorting system based on industrial robot
CN114887927B (en) * 2022-05-10 2024-02-13 浙江工业大学 Automatic conveying quality detection sorting system based on industrial robot
CN114800533A (en) * 2022-06-28 2022-07-29 诺伯特智能装备(山东)有限公司 Sorting control method and system for industrial robot
CN114800533B (en) * 2022-06-28 2022-09-02 诺伯特智能装备(山东)有限公司 Sorting control method and system for industrial robot
AT526401A1 (en) * 2022-08-11 2024-02-15 Brantner Env Group Gmbh Method for sorting material to be sorted
CN115311241A (en) * 2022-08-16 2022-11-08 天地(常州)自动化股份有限公司 Coal mine down-hole person detection method based on image fusion and feature enhancement
CN115311241B (en) * 2022-08-16 2024-04-23 天地(常州)自动化股份有限公司 Underground coal mine pedestrian detection method based on image fusion and feature enhancement
WO2024037408A1 (en) * 2022-08-16 2024-02-22 天地(常州)自动化股份有限公司 Underground coal mine pedestrian detection method based on image fusion and feature enhancement
CN116475081B (en) * 2023-06-26 2023-08-15 工业富联(佛山)创新中心有限公司 Industrial product sorting control method, device and system based on cloud edge cooperation
CN116475081A (en) * 2023-06-26 2023-07-25 工业富联(佛山)创新中心有限公司 Industrial product sorting control method, device and system based on cloud edge cooperation
CN117472015A (en) * 2023-12-28 2024-01-30 承德石油高等专科学校 Industrial processing control method based on machine vision
CN117472015B (en) * 2023-12-28 2024-03-22 承德石油高等专科学校 Industrial processing control method based on machine vision

Also Published As

Publication number Publication date
EP3865222A1 (en) 2021-08-18

Similar Documents

Publication Publication Date Title
EP3838427A1 (en) A method for sorting objects travelling on a conveyor belt
US11527072B2 (en) Systems and methods for detecting waste receptacles using convolutional neural networks
US7449655B2 (en) Apparatus for, and method of, classifying objects in a waste stream
US10625304B2 (en) Recycling coins from scrap
US9156628B2 (en) Sort systems and methods
CN114600169A (en) Neural network for stockpile sorting
KR20180103898A (en) Waste collection system and method
WO2017051278A1 (en) System and method for automatic identification of products
US20230192418A1 (en) Object path planning in a sorting facility
CN112543680A (en) Recovery of coins from waste
KR101921858B1 (en) Classification system for used clothes
Moirogiorgou et al. Intelligent robotic system for urban waste recycling
EP4301524A1 (en) Material detector
US20240139778A1 (en) Methods, apparatuses, and systems for automatically performing sorting operations
Calaiaro AI Takes a Dumpster Dive: Computer-vision systems sort your recyclables at superhuman speed
US20230196132A1 (en) Object material type identification using multiple types of sensors
JP2024522545A (en) Continuous and rapid metal sorting with machine-readable marking
Koganti et al. Deep Learning based Automated Waste Segregation System based on degradability
McDonnell et al. Using style-transfer to understand material classification for robotic sorting of recycled beverage containers
CN112046970A (en) Kitchen waste classification and identification method
US20240149305A1 (en) Air sorting unit
Dering et al. A computer vision approach for automatically mining and classifying end of life products and components
KR102578920B1 (en) Apparatus for PET sorting based on artificial intelligence
KR102578919B1 (en) Automatic Sorting Separation System for Recycled PET
US20230196188A1 (en) Maintaining a data structure corresponding to a target object

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20211224