WO2022254202A1 - Method for automated waste composition reporting - Google Patents

Method for automated waste composition reporting

Info

Publication number
WO2022254202A1
Authority
WO
WIPO (PCT)
Prior art keywords
waste
image
container
images
type
Prior art date
Application number
PCT/GB2022/051385
Other languages
French (fr)
Inventor
Richard Hankins
Hujun Yin
Paul EAGLETON
Original Assignee
Kenny Waste Management Ltd
Priority date
Filing date
Publication date
Application filed by Kenny Waste Management Ltd
Publication of WO2022254202A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/30 - Administration of product recycling or disposal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects

Abstract

Broadly speaking, embodiments of the present techniques provide methods and systems for automated waste composition reporting, and in particular machine learning methods for estimating a composition of waste from a waste container including a plurality of waste items of different types. The waste container may contain construction and demolition waste, for example. The estimation methods analyse images of the waste items to determine the composition of the waste. Advantageously, the present techniques automate the process of waste composition reporting, which may make it quicker to produce waste composition reports than existing techniques which rely on a human operative to inspect the waste. This may enable more accurate waste composition reporting across the waste industry.

Description

Method for Automated Waste Composition Reporting
Field
The present techniques generally relate to methods and systems for automated waste composition reporting, and in particular to methods for estimating a composition of waste from a waste container including a plurality of waste items of different types. The waste container may contain construction and demolition waste, or municipal wastes (industrial, commercial, and household wastes), for example. It will be understood that these are non-limiting types of waste that could be analysed using the present techniques. The waste items from a waste container may be tipped out to be analysed. The estimation methods analyse images of the tipped waste to determine the composition of the waste.
Background
Generally speaking, in the waste management industry in the UK and Europe, waste reporting requires waste to be identified and classified before it is sent for recycling or disposal. The classification may be according to EWC (European Waste Catalogue) codes, where each waste type is represented by a six-digit code. For example, concrete waste is represented by code 17-01-01, untreated wood is represented by code 17-02-01, and aluminium is represented by code 17-04-02. It will be understood that the classification may use any waste codes defined by any jurisdiction/region or organisation. Waste management companies may be restricted by what type of codes they can process, and must report the waste type, weight and geographical location of all incoming waste and outgoing waste every quarter to the Environment Agency (EA).
Certain codes, such as 17-09-04, represent mixed construction and demolition waste. However, increasing environmental concerns regarding the management of waste means that waste producers are keen to obtain a more detailed breakdown of the constituents of the mixed waste. Often, when waste management companies perform a further breakdown of the constituent materials of their mixed waste, the analysis is performed retrospectively based on average waste amounts determined over a period of time. Thus, the specific quantities of different materials in a waste container of mixed waste (e.g. a 'skip' of construction waste) may not be known, which may lead to inaccuracies in the overall reporting over time. Furthermore, waste management companies may lack weighbridge facilities and therefore, may use averages when reporting weights of specific waste materials.
Currently, waste management companies may rely on manual estimates of the volume and/or weight of different materials in a waste container. This typically involves a specially trained human operative estimating the percentage of a volume of a waste container that contains a particular material type. The waste from a waste container is tipped onto the ground and the human operative inspects the waste on the ground to make their volume estimates. The human operative may then use specific software applications to convert their volume estimates to weight or mass estimates using average known densities of different materials. This provides an estimate of the waste composition of a waste container, which may enable waste producers to better understand their processes.
Automated waste sorting systems may be able to perform waste composition analysis and other waste reporting. These automated systems typically use an array of sensors to identify and sort different categories of waste (such as construction and demolition waste, commercial and industrial, metal, plastics and inert). However, the composition analysis is obtained as a consequence of (i.e. after) the sorting process.
The present applicant has therefore identified the need for improved techniques for estimating the composition of waste by waste producers and waste management companies, thereby enabling more accurate waste reporting.
Summary
In a first approach of the present techniques, there is provided a computer-implemented method of estimating a composition of waste including a plurality of waste items, the method comprising: obtaining at least one image of waste from a waste container, the waste including a plurality of waste items; applying a first machine learning, ML, algorithm for material classification to the at least one image of waste, to classify regions of the image of waste according to material type; applying a second machine learning, ML, algorithm for object classification to the at least one image of waste, to classify regions of the image of waste according to object type; combining, for the at least one image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and applying a third machine learning, ML, algorithm to the image segmentation map, to quantify each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container.
The phrase "waste from a waste container" is used herein to mean waste which has been tipped out from a waster container onto a surface (e.g. the ground). The term "waste container" is used herein to mean any container used to hold and/or transport waste. An example of a waste container is a British 'skip', but this is a non-limiting example type of waste container.
Advantageously, the present techniques automate the process of waste composition reporting. The present techniques may make it quicker to produce waste composition reports than the existing techniques which rely on a human operative to inspect the waste. Furthermore, the present techniques enable more accurate waste reporting on a skip-by-skip basis (i.e. on a per waste container basis), which does not require additional waste processing (e.g. sorting).
The present techniques may also advantageously be implemented by any waste producer or waste management company because the present techniques do not require specialist equipment to sense and sort the waste content. Thus, this may enable more accurate waste composition reporting across the waste industry. In turn, this may have an environmental impact as it may encourage or enable other industries, such as the construction industry, to consider the quantities of materials they use in projects to assist with reducing waste. In addition, the construction industry may be inclined to use more sustainable building materials based on the analysis. Furthermore, from a governance perspective the present techniques could provide valuable insight into the types and amounts of waste within the construction and demolition industries.

The present techniques may be used to analyse waste content of waste containers of any type. That is, the waste container may contain construction and demolition waste, or municipal wastes (industrial, commercial, and household wastes), for example. It will be understood that these are example, non-limiting types of waste that could be analysed using the present techniques.

Further advantageously, the present techniques could provide other useful capabilities/functionalities, such as being able to locate the presence of foreign or hazardous materials or objects in the waste, determining whether waste content has been classified correctly/incorrectly, and determining whether the contents and quantities of a waste container (e.g. a skip) deviate from its documentation (i.e. checking whether waste producers are correctly documenting their waste).
The present techniques are beneficial because not only do they automate the process of waste composition reporting, but they do not require specialist equipment to do so. The present techniques involve analysing images captured of the tipped waste content of a waste container (such as a British 'skip'), and predicting or estimating the amount of different types of waste material in the waste content. The quantification of the different types of waste material may take a number of different forms, and may depend on whether additional information about the waste content is available.
For example, where the images of the tipped waste content are available, the quantification may be, for each material or object type:
• A percentage indicating how much of the total area of the waste content, as determined from the image, is formed of that specific material/object type. Area may be considered analogous to volume. (The percentage may be determined by estimating the total area of the waste content from the image); and/or
• A percentage indicating how much of the total weight of the waste content is formed of that specific material/object type. (The percentage may be determined by estimating the total area of the waste content from the image).
In another example, where the images of the tipped waste content are available together with a volume of the waste container which contained the waste content, the quantification may be, for each material or object type:
• A volume value for each specific material/object type. The volume quantification uses the percentage area (as determined using the above-mentioned technique) and the known volume of the waste container to determine a volume of each specific material/object type in the waste content; and/or
• A weight or mass value for each specific material/object type. The weight value may be determined by using the volume value (mentioned above) and density.
In another example, where the images of the tipped waste content are available together with a total weight/mass of the waste content, the quantification may be, for each material or object type:
• A weight or mass value for each specific material/object type. The weight value is determined by using a percentage indicating how much of the total weight of the waste content is formed of that specific material/object type, and the known total weight of the waste content. Alternatively, the total weight of the waste content may be estimated from the image. The percentage may be determined by estimating the total area of the waste content from the image.
• A weight or mass value and a volume value for each specific material/object type. The weight value may be determined as above, while the volume may be determined by using the mass and density.
In another example, where the images of the waste content are available together with a total weight or mass of the waste content and a volume of the waste container which contained the waste content, the quantification may be, for each material or object type:
• A weight or mass value and/or a volume value for each specific material/object type.
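By way of illustration, the arithmetic in the examples above can be sketched as follows. This is a minimal sketch, assuming a known container volume and average material densities; the function names and density values are illustrative assumptions and do not form part of the claimed method.

```python
# Illustrative sketch of the quantification arithmetic described above.
# The density table below is hypothetical and for illustration only.

MATERIAL_DENSITIES_KG_PER_M3 = {  # assumed average densities
    "cardboard": 50.0,
    "wood": 400.0,
    "plasterboard": 700.0,
}

def volumes_from_area_percentages(area_pct: dict[str, float],
                                  container_volume_m3: float) -> dict[str, float]:
    """Scale per-class area percentages by the known container volume."""
    return {cls: (pct / 100.0) * container_volume_m3 for cls, pct in area_pct.items()}

def weights_from_volumes(volumes_m3: dict[str, float]) -> dict[str, float]:
    """Convert per-class volumes to masses using average densities."""
    return {cls: v * MATERIAL_DENSITIES_KG_PER_M3[cls] for cls, v in volumes_m3.items()}

# Example: an 8 m3 skip whose segmented image yields these area percentages.
area_pct = {"cardboard": 25.0, "wood": 60.0, "plasterboard": 15.0}
volumes = volumes_from_area_percentages(area_pct, container_volume_m3=8.0)
weights = weights_from_volumes(volumes)
print(volumes)  # {'cardboard': 2.0, 'wood': 4.8, 'plasterboard': 1.2}
print(weights)  # {'cardboard': 100.0, 'wood': 1920.0, 'plasterboard': 840.0}
```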
The waste content of a waste container is tipped out onto the ground, and images of the waste content from the waste container are captured once the waste content is on the ground. This is so that more of the waste content can be seen, relative to simply capturing an image of the waste content while it is still in the waste container. The images may be captured using any suitable image capture device, such as a smartphone camera or digital camera. The images may be photographs, or may be individual frames of a video captured using the image capture device. In some cases, the images may be captured by a human operative who is on the ground or near to the waste content. Additionally or alternatively, the images may be captured by a mounted or robotic device that is on the ground or near to the waste content, and which has an image capture device. Additionally or alternatively, the images of the waste content may be captured from above/aerially, by, for example, a drone that has an image capture device. The images may be captured as part of performing the method of estimating a composition of waste (e.g. as part of an app used to perform the method), or may be captured separately and input when required (e.g. input into an app used to perform the method).

The images are analysed by two different ML algorithms - one trained specifically to identify different material types within images, and another trained to identify different object types within images. The results of the analysis of the two ML algorithms are used to generate image segmentation maps that show the location of each material type and each object type in the images. The image segmentation maps are then analysed by a third ML algorithm to quantify each material type and object type in order to produce a waste composition report for the waste from that waste container.
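The following is a minimal sketch of this three-stage data flow. The callables are placeholders for the trained models and the map-fusion step (a concrete fusion sketch is given in the detailed description); nothing here is prescribed by the present techniques beyond the order of operations.

```python
# Illustrative data flow only: material_model, object_model, fuse and
# quantifier stand in for the first, second and third ML models and the
# map-combination step described above.

def estimate_composition(images, material_model, object_model, fuse, quantifier):
    """Classify materials and objects per image, fuse the results into
    segmentation maps, then quantify each material/object type."""
    seg_maps = []
    for image in images:
        material_map = material_model(image)  # first ML model: per-region materials
        object_boxes = object_model(image)    # second ML model: object detections
        seg_maps.append(fuse(material_map, object_boxes))
    return quantifier(seg_maps)               # third ML model: composition estimate
```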
The first machine learning algorithm for material classification is able to classify regions of the image of waste according to material type based on being trained to recognise specific material types. For example, the first ML algorithm may be trained to recognise any one or more of the following material types: cardboard, paper, carpet, carpet tiles, soft floor coverings, co-mingled fine material, co-mingled granular material, electrical cables or wires, glass, green waste, insulation, mixed dense material, metal, wood, non-rigid plastic, other mixed waste, plasterboard, gypsum, polystyrene, rigid plastic, and roofing felt. It will be understood that this is a non-limiting and non-exhaustive list of example material types that the first ML algorithm may be trained to recognise. The first ML algorithm may learn to recognise different material types based on, for example, the colours and/or textures associated with different material types.

The second machine learning algorithm for object classification is able to classify regions of the image of waste according to object type based on being trained to recognise specific object types. For example, the second ML algorithm may be trained to recognise any one or more of the following object types: bagged waste, black bag waste, fridge, freezer, fridge-freezer, gas bottles or cylinders, fire extinguishers, mattresses, sofas or chairs, household or office furniture, television, display monitors, laptops or computers, tyres, machine tracks, and waste electrical and electronic equipment (WEEE). It will be understood that this is a non-limiting and non-exhaustive list of example object types that the second ML algorithm may be trained to recognise.

It will be understood that both the first and second machine learning algorithms may be trained to recognise material and object types that are hazardous or may contain hazardous substances. One or both of the first and second ML algorithms may be trained to recognise additional material and object types that are hazardous or may contain hazardous substances, such as asbestos, treated wood, paints, varnishes, adhesives, sealants, and so on.
The present techniques may be faster at analysing the composition of waste containers, and may be at least as accurate as a human operative.
The quantification of each material type and object type may be a volume amount (e.g. volume in m3). In this case, the third ML algorithm may quantify each material type and object type by determining a volume of each material type and object type. The method may further comprise outputting an estimated volume distribution for each material type and object type.
Additionally or alternatively, the quantification of each material and object type may be a weight or mass amount (e.g. mass in kg). In this case, the third ML algorithm may quantify each material and object type by determining a mass of each material and object type. The method may further comprise outputting an estimated mass distribution for each material and object type.
Knowing the total volume and/or mass of the waste content may enable the third ML algorithm to more accurately determine the composition of the waste (i.e. the volume and/or mass of each material and object type found in the waste content).
Obtaining at least one image of waste may comprise obtaining a plurality of images of the waste content of the waste container, each image captured from a different viewpoint. This may be useful because the set of images may enable some material types in the waste content, or some objects within the waste content, to be classified more easily. The first, second and third ML algorithms may analyse each image of the plurality of images and combine the analysis.
The first ML algorithm may be implemented using a convolutional neural network, CNN.
The second ML algorithm may be implemented using a region-based convolutional neural network, R-CNN.
The at least one image of waste may be pre-processed to separate the waste content from any background or foreground materials or objects in the image that are not from the waste container. The method may further comprise: inputting the estimated composition of the waste from the waste container into a fourth machine learning algorithm to obtain a refined estimated composition of the waste.
Obtaining at least one image of waste from a waste container may comprise: tipping out the waste and spreading-out the waste over a surface; and capturing at least one image of the waste.
Thus far, the method of estimating a composition of waste items using a set of trained ML models, which is performed at run-time or inference time, has been described. The method used to train the set of ML models is now described.
In a second approach of the present techniques, there is provided a computer-implemented method for training a set of machine learning, ML, models to estimate a composition of waste including a plurality of waste items, the method comprising: obtaining a training data set comprising a plurality of images of waste, each image of waste depicting a plurality of waste items from tipped waste; inputting the plurality of images of waste into a first machine learning, ML, model for material classification, and training the first ML model to identify a plurality of material types; inputting the plurality of images of waste into a second machine learning, ML, model for object classification, and training the second ML model to identify object types; combining, for each image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of objects of each material type and object type in each image of waste; and inputting the image segmentation map into a third machine learning, ML, model, and training the third ML model to quantify each material type and object type in the image of waste, and thereby determine an estimated composition of the waste from the tipped waste.
Advantageously, the ML models may be trained centrally, and the trained models may be accessible to waste producers or waste management companies for use. Thus, individual waste companies do not need to undertake the training process. The models may be accessible via a software (mobile or web) application, for example, which may be deployed on a remote or cloud server.
The step of inputting the plurality of images of waste into the first machine learning, ML, model for material classification may comprise processing the images first. For example, the step of inputting the images of the training data set may comprise: using a keypoint analysis to identify locations of features in each image of waste; generating, for each image of waste, a further image comprising a material type based on the identified locations of features in each image of waste; and inputting the further images into the first ML model. Thus, the first ML model may not be trained using the original images in the training data set, but may be trained on pre-processed versions of the original images. These pre-processed images (i.e. "further images") are extracted from keypoint locations in the original, full size images. For example, to train the first ML model to recognise cardboard, some of the original images in the training data set may contain representations of cardboard as well as other objects and materials. The keypoint analysis enables the locations of representations of cardboard to be identified within the original images and smaller, further images to be generated that contain the representation of cardboard. (This depends on crop size. For a small crop size, the crop may only contain cardboard, while for a larger crop size, other objects or context may be present in the cropped image. However, the representations present in the cropped image will generally be far fewer than in the original image.) Thus, the further images may be thought of as cropped versions of the original images which may depict a single material type (such as cardboard).
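A minimal sketch of this cropping step is given below, assuming keypoints are supplied as (x, y, label) tuples and a fixed square crop size; both the annotation format and the crop size are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of building the "further images" from keypoint annotations.
# The crop size and annotation format are assumptions; the text only states
# that crops are extracted at keypoint locations from the full-size images.

def extract_material_crops(image: np.ndarray, keypoints, crop: int = 128):
    """Cut a square patch centred on each annotated keypoint.

    image: (H, W, 3) array; keypoints: iterable of (x, y, material_label).
    Returns (patch, material_label) training pairs for the first ML model.
    """
    half = crop // 2
    pairs = []
    for x, y, label in keypoints:
        x0, y0 = max(0, x - half), max(0, y - half)
        patch = image[y0:y0 + crop, x0:x0 + crop]
        if patch.shape[:2] == (crop, crop):  # skip truncated border patches
            pairs.append((patch, label))
    return pairs
```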
In contrast, the second ML model may be trained using the original images in the training data set, annotated with the object types concerned. Training the second ML model to identify object types may comprise: using a bounding box analysis to extract features from each image of waste; and generating, for each image of waste, an image comprising bounding box annotations.
Training the third ML model to quantify each material type and object type in each image of waste may comprise: using the image segmentation map to estimate a percentage of the waste content that is formed of each material type and object type.
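For example, the percentage estimate can be read directly off a segmentation map by counting pixels per class, as in the following sketch (integer class IDs and a background ID of 0 for non-waste pixels are assumptions for illustration):

```python
import numpy as np

# Minimal sketch of estimating per-class area percentages from an image
# segmentation map of integer class IDs.

def area_percentages(seg_map: np.ndarray, background: int = 0) -> dict[int, float]:
    """Fraction of the waste area covered by each material/object class."""
    waste = seg_map[seg_map != background]        # ignore non-waste pixels
    classes, counts = np.unique(waste, return_counts=True)
    total = counts.sum()
    return {int(c): 100.0 * n / total for c, n in zip(classes, counts)}
```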
As mentioned above, volume data may be used to help quantify each material type and object type in the image of waste. Thus, in cases where the training data set further comprises a volume of the waste container which contained the waste content shown in each image, training the third ML algorithm may comprise: using the image segmentation map and the total volume to estimate a volume of each material type and object type.
Similarly, as mentioned above, weight data may be used to help quantify each material type and object type in the image of waste. Thus, in cases where the training data set further comprises a total weight of the waste, training the third ML algorithm may comprise: using the image segmentation map and the total weight to estimate a weight of each material type and object type.
The first ML algorithm may be implemented using a convolutional neural network, CNN.
The second ML algorithm may be implemented using a region-based convolutional neural network, R-CNN.
The set of ML models may comprise a fourth ML model. The training dataset may comprise at least one further image of the waste from the waste container depicting the waste spread-out over a surface. The at least one further image may be processed by the first, second and third ML models as described above. In this way, two outputs may be obtained from the third ML model - a first output for a first image (or images) of waste that has not been spread-out, and a second output for a second image (or images) of the same waste that has been spread-out over a surface. These two outputs may be used to train the fourth ML model to learn a map between the first output and second output. This may be useful because the second output may be more accurate, but it may not always be possible to obtain images of spread-out waste (because it might not always be possible to spread-out the waste over a surface). The fourth ML model may enable a more accurate waste composition estimate to be obtained when only the first output is available. Alternatively, one or both of the first and second outputs may be human outputs rather than from the third ML model. Further alternatively, one or both of the first and second outputs may be any form of ground truth suitable for supervised learning. The outputs themselves may be waste composition estimates, waste composition volume estimates, waste composition weight estimates, and so on.
Thus, the method may further comprise: obtaining a first output for a first set of images (where the set may contain a single image) of waste from the waste container; obtaining a second output for a second set of images (where the set may contain a single image) of waste from the same waste container where the waste has been spread-out over a large area; and training the fourth ML model to learn a mapping from the first output (i.e. estimated composition) to the second output (i.e. estimated composition of the spread-out waste). In some cases, the first output and/or second output may be obtained from the third ML model.
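A minimal sketch of such a fourth model is shown below, assuming the two outputs are fixed-length composition vectors; the architecture (a small fully connected network), the class count and the hyperparameters are illustrative assumptions rather than the claimed model.

```python
import torch
from torch import nn

# Hedged sketch of the fourth ML model: a small regressor mapping the
# composition estimate for un-spread waste to the (more accurate) estimate
# for the same waste once spread out. Everything below is an assumption.

n_classes = 20  # number of material/object fractions (illustrative)

refiner = nn.Sequential(
    nn.Linear(n_classes, 64), nn.ReLU(),
    nn.Linear(64, n_classes), nn.Softmax(dim=-1),  # fractions sum to 1
)

optimiser = torch.optim.SGD(refiner.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()

def train_step(first_output: torch.Tensor, second_output: torch.Tensor) -> float:
    """One supervised step: map first_output to second_output (ground truth)."""
    optimiser.zero_grad()
    loss = loss_fn(refiner(first_output), second_output)
    loss.backward()
    optimiser.step()
    return loss.item()
```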
The set of ML models may comprise a further (fifth) ML model. Thus, the training method may further comprise: training the further ML model to separate the waste content from any background or foreground materials or objects in the images of the training dataset that are not from the waste container.
In a further approach of the present techniques, there is provided a computer-implemented method of estimating a composition of waste including a plurality of waste items, the method comprising: obtaining at least one image of waste from a waste container, the waste including a plurality of waste items; applying at least one machine learning, ML, algorithm to the at least one image of waste to: classify regions of the image of waste according to material type; classify regions of the image of waste according to object type; combine the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and quantify, using the image segmentation map, each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container. In other words, separate ML models/algorithms may not be required to perform the composition estimation. Some or all of the above-described ML models/algorithms may be combined.
The features described above with respect to the second approach apply equally to this further approach, and are therefore not repeated.
In a third approach of the present techniques, there is provided a system for estimating a composition of waste including a plurality of waste items, the system comprising: at least one processor, coupled to memory, arranged to: obtain at least one image of tipped waste from a waste container, the waste including a plurality of waste items; apply a first machine learning, ML, algorithm for material classification to the at least one image of waste, to classify regions of the image of waste according to material type; apply a second machine learning, ML, algorithm for object classification to the at least one image of waste, to classify regions of the image of waste according to object type; combine, for the at least one image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and apply a third machine learning, ML, algorithm to the image segmentation map, to quantify each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container. The system may further comprise: an image capture device for obtaining the at least one image of waste content of a waste container. The image capture device may be any suitable image capture device, such as a smartphone camera or digital camera. In some cases, the images may be captured by a human operative who is on the ground or near to the waste content. Additionally or alternatively, the images may be captured by a robotic device that is on the ground or near to the waste content, and which has an image capture device. Additionally or alternatively, the images of the waste content may be captured from above/aerially, by, for example, a drone that has an image capture device. The images may be captured as part of performing the method of estimating a composition of waste (e.g. as part of an app used to perform the method), or may be captured separately and input when required (e.g. input into an app used to perform the method). The image capture device may obtain a plurality of images of the waste content of the waste container, each image captured from a different viewpoint.
Optionally, the system may comprise a weighbridge for determining a mass or weight of the waste content of the waste container, wherein the determined mass or weight is provided by the at least one processor to the third ML algorithm to estimate a weight of each material type and object type in the image of waste.
Optionally, volume data may be input into the third ML algorithm to estimate a volume of each material type and object type in the image of waste. Typically, waste containers such as the British 'skip' have defined volumes. Thus, by knowing the volume of the waste container, the approximate volume of the waste content from the waste container is known. This can be used, as mentioned above, to estimate the volume of each material and object type in the waste content.
The system may further comprise: an imaging area for obtaining images of waste, wherein the imaging area comprises: a surface upon which waste content of the waste container is spread out; and a mechanism for spreading the waste content over the surface. The mechanism for spreading out the waste content may be a mechanical spreading or grabbing tool. Additionally or alternatively, the mechanism for spreading out the waste content may be a mechanism for causing the surface to vibrate or shake. Walls may be employed on the sides of the surface in order to retain the waste content. The surface and walls may be fully or partially transparent.

In a related approach of the present techniques, there is provided a non-transitory data carrier carrying processor control code to implement any of the methods, processes and techniques described herein.
As will be appreciated by one skilled in the art, the present techniques may be embodied as a system, method or computer program product. Accordingly, present techniques may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Furthermore, the present techniques may take the form of a computer program product embodied in a computer readable medium having computer readable program code embodied thereon. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present techniques may be written in any combination of one or more programming languages, including object oriented programming languages and conventional procedural programming languages. Code components may be embodied as procedures, methods or the like, and may comprise sub-components which may take the form of instructions or sequences of instructions at any of the levels of abstraction, from the direct machine instructions of a native instruction set to high-level compiled or interpreted language constructs.
Embodiments of the present techniques also provide a non-transitory data carrier carrying code which, when implemented on a processor, causes the processor to carry out any of the methods described herein.
The techniques further provide processor control code to implement the above-described methods, for example on a general purpose computer system or on a digital signal processor (DSP). The techniques also provide a carrier carrying processor control code to, when running, implement any of the above methods, in particular on a non-transitory data carrier. The code may be provided on a carrier such as a disk, a microprocessor, CD- or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (firmware), or on a data carrier such as an optical or electrical signal carrier. Code (and/or data) to implement embodiments of the techniques described herein may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code, code for setting up or controlling an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array), or code for a hardware description language such as Verilog (RTM) or VHDL (Very high speed integrated circuit Hardware Description Language). As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another. The techniques may comprise a controller which includes a microprocessor, working memory and program memory coupled to one or more of the components of the system.
It will also be clear to one of skill in the art that all or part of a logical method according to embodiments of the present techniques may suitably be embodied in a logic apparatus comprising logic elements to perform the steps of the above-described methods, and that such logic elements may comprise components such as logic gates in, for example a programmable logic array or application-specific integrated circuit. Such a logic arrangement may further be embodied in enabling elements for temporarily or permanently establishing logic structures in such an array or circuit using, for example, a virtual hardware descriptor language, which may be stored and transmitted using fixed or transmittable carrier media.
In an embodiment, the present techniques may be implemented using multiple processors or control circuits. The present techniques may be adapted to run on, or integrated into, the operating system of an apparatus.
In an embodiment, the present techniques may be realised in the form of a data carrier having functional data thereon, said functional data comprising functional computer data structures to, when loaded into a computer system or network and operated upon thereby, enable said computer system to perform all the steps of the above-described method.
Brief description of drawings
Implementations of the present techniques will now be described, by way of example only, with reference to the accompanying drawings, in which: Figure 1 shows a schematic diagram of computer-implemented methods for training a set of machine learning, ML, models and using the trained models to estimate a composition of waste;
Figure 2 shows an example image segmentation, produced using segpoint annotations, which is used as ground truth when training the third ML model;
Figure 3 shows a schematic diagram of how the volume and weight image datasets are collected;
Figures 4 and 5 show example images captured of the waste content from two different waste containers;
Figure 6 shows positions within an image of further, cropped images, based on a keypoint analysis;
Figure 7 shows positions of bounding boxes based on a bounding box analysis;
Figure 8 shows an image captured of the waste content of a waste container, and Figure 9 shows material classification of the image of Figure 8 as performed by the first ML model;
Figure 10 shows a flowchart of example steps to estimate a composition of waste using a set of trained ML models;
Figure 11 shows a flowchart of example steps to train a set of ML models to estimate a composition of waste;
Figure 12 shows a system for estimating a composition of waste;
Figure 13 shows a test image used to test the trained ML models of the present techniques;
Figures 14 and 15 show, respectively, an image segmentation map and an estimated volume distribution of material types generated from the test image of Figure 13 using the trained ML models;
Figure 16 shows a schematic diagram of an example imaging area for obtaining images of waste;
Figure 17 is a flowchart of example steps to further refine the estimate of the composition of waste.
Detailed description of drawings
Broadly speaking, embodiments of the present techniques provide methods and systems for automated waste composition reporting, and in particular machine learning methods for estimating a composition of waste from a waste container including a plurality of waste items of different types. The waste container may contain construction and demolition waste, for example. The estimation methods analyse images of the waste items to determine the composition of the waste. Advantageously, the present techniques automate the process of waste composition reporting. The present techniques may make it quicker to produce waste composition reports than the existing techniques which rely on a human operative to inspect the waste. This may enable more accurate waste composition reporting across the waste industry.

Figure 1 shows a schematic diagram of computer-implemented methods for training a set of machine learning, ML, models and for using the trained models to estimate a composition of waste. The method for estimating a composition of waste may use three ML models. (The term "machine learning model" is used interchangeably herein with the term "machine learning algorithm".) Each method (whether for training or inference) may take, as an input, at least one image of waste from a waste container, the waste including a plurality of waste items.
A first ML model (such as a convolutional neural network, CNN, or fully convolutional neural network, FCN), may be used to classify regions of the image of waste according to material type. The first ML model may be trained using a VGG-16 CNN model (16 weight layers), optimised with stochastic gradient descent (SGD) with momentum and L2 penalty. The first ML model may be pretrained using ImageNet, and then further trained on MINC-2500 (a materials dataset available under a Creative Commons license). During training, the whole network may be updated, batch normalisation may be applied and morphological augmentation (rotation, scaling, mirroring, shearing) may be performed. Inference by the first ML model may be performed using an FCN, fully convolutional network. The first ML model may be trained using the pytorch deep learning library or tensorflow.
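Using the torchvision library, the described setup (ImageNet-pretrained VGG-16, SGD with momentum and an L2 penalty via weight decay) might be assembled as follows; the class count and hyperparameter values are illustrative assumptions, not values taken from this disclosure.

```python
import torch
from torch import nn
from torchvision import models

# Sketch of the first ML model's training setup: VGG-16 pretrained on
# ImageNet, fine-tuned for waste material classes with SGD + momentum and
# weight decay (L2 penalty). Requires torchvision >= 0.13 for the weights API.

n_materials = 21  # e.g. cardboard, wood, plasterboard, ... (illustrative count)

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, n_materials)  # replace the final layer

optimiser = torch.optim.SGD(
    model.parameters(), lr=1e-3, momentum=0.9, weight_decay=5e-4  # L2 penalty
)
criterion = nn.CrossEntropyLoss()
```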
A second ML model (region-based convolutional neural network, R-CNN, or a faster R-CNN), may be used to classify regions of the image of waste according to object type. The second ML model may be trained using a Faster R-CNN model using a ResNet-50-FPN (50 weight layers) backbone, optimised with SGD with momentum and L2 penalty. The model may be pretrained on COCO. During training, the whole network may be updated, and morphological augmentation (mirroring) may be performed. The second ML model may be trained using the pytorch deep learning library or tensorflow.
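Similarly, a COCO-pretrained Faster R-CNN with a ResNet-50-FPN backbone can be instantiated from torchvision and given a new box predictor for the waste object classes; again, the class count and hyperparameters below are assumptions for illustration.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Sketch of the second ML model's setup: Faster R-CNN (ResNet-50-FPN
# backbone) pretrained on COCO, fine-tuned with SGD + momentum and weight decay.

n_objects = 15  # e.g. gas bottle, mattress, fridge, ... (illustrative count)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, n_objects + 1)  # +1 for background

optimiser = torch.optim.SGD(
    model.parameters(), lr=5e-3, momentum=0.9, weight_decay=5e-4
)
```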
Other libraries that may be used to train any of the three ML models include numpy, pandas, scipy, matplotlib. It will be understood that this is a non-limiting and non-exhaustive list of libraries that may be used.
The method of estimating waste composition comprises combining, for the at least one image of waste, the material classification and the object classification produced by the first and second ML models, respectively, to generate an image segmentation map indicating the location of each material and object type in the image of waste. At inference time, both the material classification (first) and the object classification (second) models are used to predict the probability distribution over the waste fractions/classes for multiple spatial regions for both materials and objects. The material (first) CNN model may be converted to a Fully Convolutional Network (FCN), so as to produce output maps for whole image segmentation of the materials (as shown in Figure 9). The R-CNN is unchanged at inference time, and predicts the location of objects using bounding boxes. The bounding box outputs are combined with the material segmentation map to produce an overall image segmentation map.
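The combination step might look like the following sketch, in which detected objects are written over the material map; the class-ID convention, the precedence rule (objects overwrite materials) and the confidence threshold are assumptions for illustration.

```python
import numpy as np

# Hedged sketch of combining the FCN material map with R-CNN boxes.

def combine_maps(material_map: np.ndarray, detections) -> np.ndarray:
    """Write each detected object's class ID over the material segmentation.

    material_map: (H, W) integer array of material class IDs.
    detections: iterable of ((x0, y0, x1, y1), object_class_id, score).
    """
    seg = material_map.copy()
    for (x0, y0, x1, y1), obj_cls, score in detections:
        if score < 0.5:  # assumed confidence threshold
            continue
        seg[int(y0):int(y1), int(x0):int(x1)] = obj_cls  # objects take precedence
    return seg
```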
A third ML model is used to analyse the image segmentation map, to quantify each material and object type in the image of waste and thereby determine an estimated composition of the waste. The quantification of each material and object type may be a volume amount (e.g. volume in m3). In this case, the third ML algorithm may quantify each material and object type by determining a volume of each material and object type. The method may further comprise outputting an estimated volume distribution for each material and object type. Additionally or alternatively, the quantification of each material and object type may be a weight or mass amount (e.g. mass in kg). In this case, the third ML algorithm may quantify each material and object type by determining a mass of each material and object type. The method may further comprise outputting an estimated mass distribution for each material and object type.
The quantification of the different types of waste material may take a number of different forms, and may depend on whether additional information about the waste content is available.
For example, where the images of the waste content are available, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a percentage of an area of the waste content that is formed of each material type and object type. The percentage may be determined by estimating the total area of the waste content from the image. (Area may be considered analogous to volume). Additionally or alternatively, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a percentage of a weight of the waste content that is formed of each material type and object type. The percentage may be determined by estimating the total area of the waste content from the image.
In another example, the images of the waste content may be available together with a volume of the waste container which contained the waste content. In this case the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a volume of each material type and object type. The volume quantification may use the percentage area (as determined using the above-mentioned technique) and the known volume of the waste container to determine a volume of each specific material/object type in the waste content. Additionally or alternatively, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a weight or mass of each material type and object type. The weight or mass may be determined by using the volume value (mentioned above) and density.
In another example, the images of the waste content may be available together with a total weight/mass of the waste content. In this case, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a weight of each material type and object type. The weight value is determined by using a percentage indicating how much of the total weight of the waste content is formed of that specific material/object type, and the known total weight of the waste content. The percentage may be determined by estimating the total area of the waste content from the image. Additionally or alternatively, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a volume of each material type and object type. The weight value may be determined as above, while the volume may be determined by using mass and density information.
In another example, the images of the waste content may be available together with a total weight or mass of the waste content and a volume of the waste container which contained the waste content. In this case, the third ML algorithm may determine an estimated composition of the waste from the waste container by estimating a weight or mass value and/or a volume value for each specific material/object type.
The image of waste may depict waste content of a waste container, captured after the contents of the waste container have been tipped onto the ground.
It will be understood that although three ML algorithms/models are described, the functionality of some or all of the models may be combined such that fewer algorithms/models are required. Thus, although separate models are described herein for ease, it will be understood that the functionality of the models may be combined.
As shown in Figure 1, there may be a "Rescale/Refine" step. During this step, the output of the waste composition from the ML models may be compared to human analysis of the same waste. This comparison may be used to improve the training of the models. Other techniques may be used instead or in addition to the human analysis to refine or improve the waste composition estimates of the models. For example, any form of ground truth suitable for supervised learning may be used to refine or improve the model estimates.
Figure 2 shows an example image segmentation (which shows the location of each material and object type in an image of waste), produced using segpoint annotations, which is used as ground truth when training the third ML model. Segpoints (dense keypoints) are used to obtain a rough segmentation of the percentage area of each material and object type or class. The estimated image segmentation has been produced by applying 2D Gaussians to each segpoint location for each class. The resultant class images are combined into a final segmentation image by selecting the corresponding class value at each pixel location for the class image that is maximum over all class images. From this segmentation, class area percentages are calculated. These percentages may be converted into volumes by considering the overall volume of each skip.
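A minimal sketch of this ground-truth construction is given below: a 2D Gaussian is placed at each segpoint, one response map is accumulated per class, and the per-pixel argmax yields the segmentation. The Gaussian width (sigma) and the data layout are assumed parameters for illustration.

```python
import numpy as np

# Hedged sketch of segpoint-based ground-truth segmentation and the class
# area percentages derived from it.

def segpoints_to_segmentation(shape, segpoints, n_classes, sigma=25.0):
    """segpoints: iterable of (x, y, class_id); returns an (H, W) class map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    responses = np.zeros((n_classes, h, w))
    for x, y, cls in segpoints:
        responses[cls] += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return responses.argmax(axis=0)  # class with maximum response at each pixel

def class_area_percentages(class_map, n_classes):
    """Per-class share of the image area, in percent."""
    counts = np.bincount(class_map.ravel(), minlength=n_classes)
    return 100.0 * counts / counts.sum()
```

These percentages can then be scaled by the known skip volume, as described above, to give per-class volume targets.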
Figure 3 shows a schematic diagram of how the volume and weight image datasets are collected. In addition to images, datasets can also be constructed when volume and weight data is available. To enable this, waste containers may be weighed using weighbridges to obtain a weight (mass) of the combination of the waste container and waste content contained therein (see image 1 of Figure 3). Most waste containers have a known weight, and therefore, the weight of the waste content may be easily determined. After weighing, the content of the waste container may be tipped onto the ground, and at least one image of the waste content may be obtained/captured (see image 2 of Figure 3). The volume image dataset is then formed of the total weight of the waste content, the volume of the waste container and at least one image of the waste. The collected images need to be additionally annotated using segpoint annotations and processed, as above, to produce corresponding volume percentages for each class. For the weight image dataset, which may increase accuracy further, the waste needs to be manually separated into its constituent fractions/classes and weighed (see image 3 of Figure 3). The weight image dataset is then formed of the total weight of the waste content, the volume of the waste container, fraction/class weights and at least one image of the waste.
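By way of illustration, records in the two datasets described above might be laid out as follows; the field names and types are assumptions for illustration, not a schema taken from this disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical record layouts for the volume and weight image datasets.

@dataclass
class VolumeImageRecord:
    image_paths: list[str]      # at least one image of the tipped waste
    container_volume_m3: float  # known volume of the waste container
    total_weight_kg: float      # weighbridge reading minus the container's tare
    segpoints: list[tuple] = field(default_factory=list)  # (x, y, class_id)

@dataclass
class WeightImageRecord(VolumeImageRecord):
    # Weights of the manually separated fractions/classes (image 3 of Figure 3).
    class_weights_kg: dict = field(default_factory=dict)
```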
Figures 4 and 5 show example images captured of the waste content from two different waste containers, after the waste content of the waste containers has been emptied out/tipped out onto the ground. It can be seen that the waste content of the two containers is quite different and contains lots of different materials and object types.
Figure 6 shows positions within an image of further, cropped images, based on a keypoint analysis. The step of inputting the plurality of images of waste into the first machine learning, ML, model for material classification may comprise processing the images first. For example, the step of inputting the images of the training data set may comprise: using a keypoint analysis to identify locations of features in each image of waste; generating, for each image of waste, a further image comprising a material type based on the identified locations of features in each image of waste; and inputting the further images into the first ML model. Thus, the first ML model may not be trained using the original images in the training data set, but may be trained on pre-processed versions of the original images. These pre-processed images (i.e. "further images") are extracted from keypoint locations in the original, full size images. As can be seen in Figure 6, keypoint locations (boxes) indicate where particular material types are located in the original image. For example, to train the first ML model to recognise cardboard, some of the original images in the training data set may contain representations of cardboard as well as other objects and materials. The keypoint analysis enables the locations of representations of cardboard to be identified within the original images and smaller, further images to be generated that contain the representation of cardboard. Thus, the further images may be thought of as cropped versions of the original images which may depict a single material type (such as cardboard). The cropped images (further images) are cropped from the original images based on the keypoint locations.
The first ML model may be pretrained on suitable images from commonly available image data sets, such as ImageNet. Thereafter, the first ML model may be trained using a materials database, in order to recognise materials of different types.
Figure 7 shows the position of a bounding box based on a bounding box analysis. The second ML model may be trained using the original images in the training data set. Training the second ML model to identify object types may comprise: using a bounding box analysis to extract features from each image of waste; and generating, for each image of waste, an image comprising bounding box annotations, each bounding box annotation indicating the location of a particular object type. Here, the bounding box is used to indicate a particular object type, i.e. an object that appears to be a gas bottle or cylinder.
Figure 8 shows an image captured of the waste content of a waste container, and Figure 9 shows material classification of the image of Figure 8 as performed by the first ML model. It can be seen that the first ML model has identified a number of different material types in the image, such as cardboard and mixed wood.
Figure 10 shows a flowchart of example steps to estimate a composition of waste. The method comprises obtaining at least one image of waste from a waste container, the waste including a plurality of waste items (step S100). The method comprises applying a first machine learning, ML, algorithm for material classification to the at least one image of waste, to classify regions of the image of waste according to material type (step S102). The method comprises applying a second machine learning, ML, algorithm for object classification to the at least one image of waste, to classify regions of the image of waste according to object type (step S104). As shown in Figure 10, steps S102 and S104 may occur simultaneously. It will be understood that, alternatively, step S102 could be performed before step S104, or vice versa.
The method comprises combining, for the at least one image of waste, the material classification and the object classification produced by the first and second ML models, respectively, to generate an image segmentation map (of the type shown in Figure 9) indicating the location of each material and object type in the image of waste (step S106). The method may comprise applying a third machine learning, ML, algorithm to the image segmentation map, to quantify each material and object type in the image of waste and thereby determine an estimated composition of the waste (step S108). As noted above, the quantification of each material and object type may be a volume amount (e.g. volume in m3), and the method may further comprise outputting an estimated volume distribution for each material and object type. Additionally or alternatively, the quantification of each material and object type may be a weight or mass amount (e.g. mass in kg), and the method may further comprise outputting an estimated mass distribution for each material and object type.
As mentioned above, the image of waste that is obtained at step S100 may depict waste content of a waste container, captured after the contents of the waste container have been tipped onto the ground. Optionally, the method may comprise applying a fourth machine learning, ML, model to refine the estimated composition of the waste (step S110). This may be useful if, for example, the waste container is very large such that when the contents of the waste container have been tipped onto the ground, the contents are in a pile. Items which are located within the pile will not be easily seen, or not be seen at all, and therefore, the estimated composition of the waste may be determined only for the waste contents that are visible, i.e. on top of the pile. That is, images of the visible contents of the waste may not be representative of the hidden contents of the waste. The fourth ML model may be used to refine the estimated composition of the waste based on the waste contents that are visible. The fourth ML model is explained in more detail below with reference to Figure 17.
Alternatively, the method may comprise obtaining further images of the waste after the waste has been spread out so that the contents can be more easily seen, and fewer contents are hidden. This is explained in more detail below with respect to Figure 16.
Figure 11 shows a flowchart of example steps to train a set of ML models to estimate a composition of waste. The method may comprise obtaining a training data set comprising a plurality of images of waste, each image of waste depicting a plurality of waste items from a waste container (step S200). As noted above, the training data set may include annotated images of waste from waste containers. The method may comprise inputting the plurality of images of waste into a first machine learning, ML, model for material classification, and training the first ML model to identify a plurality of material types (step S202).
The method may comprise inputting the plurality of images of waste into a second machine learning, ML, model for object classification, and training the second ML model to identify object types (step S204).
As shown in Figure 11, steps S202 and S204 may occur simultaneously. It will be understood that, alternatively, step S202 could be performed before step S204, or vice versa.
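As a concrete illustration of how the second ML model of step S204 might be trained, the sketch below fine-tunes an off-the-shelf region-based CNN on bounding-box annotations, consistent with the R-CNN implementation mentioned in the claims. It assumes a recent PyTorch/torchvision installation; the class count, optimiser settings and helper name are placeholder assumptions, not part of the disclosure.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_OBJECT_CLASSES = 10  # placeholder: background + 9 waste object types

# Start from a detector pre-trained on COCO and replace its classification
# head so that it predicts waste-object classes instead (transfer learning).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_OBJECT_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

def train_step(images, targets):
    """One illustrative training step. `images` is a list of CHW float
    tensors; `targets` is a list of dicts with "boxes" and "labels"
    (the bounding box annotations described for the second ML model)."""
    loss_dict = model(images, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

The first ML model of step S202 could be trained analogously, with a semantic segmentation network in place of the detector.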
The method may comprise combining, for each image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material and object type in each image of waste (step S206).
The method may comprise inputting the image segmentation map into a third machine learning, ML, model, and training the third ML model to quantify each material and object type in the image of waste, and thereby determine an estimated composition of the waste (step S208). The third ML model may be provided with hand-annotated images which include human-estimated volumes, with segpoint-based volume data for each image, with weight data corresponding to each image, or with some combination of these. This information may be used to train the third ML model to quantify each material and object type in the images of waste.
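One way to picture the input to this quantification stage is as per-class pixel fractions computed from the segmentation map. The helper below is a sketch under that assumption; the specification does not mandate this exact feature set.

```python
import numpy as np

def pixel_fraction_features(seg_map: np.ndarray, num_classes: int) -> np.ndarray:
    """Turn an (H, W) integer segmentation map into a feature vector:
    the fraction of image pixels assigned to each material/object class."""
    counts = np.bincount(seg_map.ravel(), minlength=num_classes)
    return counts / counts.sum()
```

Features of this kind could then be regressed against the human-estimated volumes or the weight data mentioned above.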
Advantageously, the ML models may be trained centrally, and the trained models may be accessible to waste producers or waste management companies for use. Thus, individual waste companies do not need to undertake the training process. The models may be accessible via a software (mobile or web) application, for example, which may be deployed on a remote or cloud server.
Figure 12 shows a system 10 for estimating a composition of waste including a plurality of waste items.
The system 10 may comprise an image capture device 102 for obtaining the at least one image of waste content of a waste container. The image capture device 102 may be any suitable image capture device, such as a smartphone camera or digital camera. In some cases, the images may be captured by a human operative who is on the ground or near to the waste content. Additionally or alternatively, the images may be captured by a robotic device that is on the ground or near to the waste content and which has an image capture device. Additionally or alternatively, the images of the waste content may be captured from above (aerially), for example by a drone that has an image capture device. The images may be captured as part of performing the method of estimating a composition of waste (e.g. as part of an app used to perform the method), or may be captured separately and input when required (e.g. input into an app used to perform the method).
The image capture device 102 may obtain a plurality of images of the waste content of the waste container, each image captured from a different viewpoint.
The images captured by the image capture device 102 may be obtained by a user apparatus 100. The user apparatus 100 may be any suitable computing device, such as a smartphone, a PC, a laptop, a tablet, and so on. The user apparatus 100 may comprise a display 104 for viewing the images obtained from the image capture device 102 and for displaying a waste composition report. The image capture device 102 may be part of the apparatus 100 or external to the apparatus 100.
The system 10 may comprise a remote or cloud server 106. The remote or cloud server 106 may comprise at least one processor 108, coupled to memory 110, arranged to: obtain at least one image of waste (from the user apparatus 100), the waste including a plurality of waste items. The remote server 106 may comprise a set of ML models 112 that have been trained to estimate a composition of waste in images of waste.
The processor 108 may be arranged to: apply a first machine learning, ML, model (of the set of ML models 112) for material classification to the at least one image of waste, to classify regions of the image of waste according to material type; apply a second machine learning, ML, model (of the set of ML models 112) for object classification to the at least one image of waste, to classify regions of the image of waste according to object type; combine, for the at least one image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material and object type in the image of waste; and apply a third machine learning, ML, model (of the set of ML models 112) to the image segmentation map, to quantify each material and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container. The remote server 106 may transmit the estimated composition of the waste back to the user apparatus 100 for display on display 104.
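As a minimal sketch of such a deployment, the endpoint below accepts an uploaded image and returns an estimated composition. The route name, the `run_pipeline` helper and the returned fields are illustrative assumptions, not part of the disclosed system.

```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def run_pipeline(image_bytes: bytes) -> dict:
    """Stand-in for the set of ML models 112: a real deployment would
    apply the first, second and third models in sequence here."""
    return {"mixed_wood": 0.4, "cardboard": 0.35, "rigid_plastic": 0.25}

@app.post("/estimate")  # hypothetical route
async def estimate_composition(image: UploadFile = File(...)):
    data = await image.read()
    return {"composition": run_pipeline(data)}
```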
Optionally, the system 10 may comprise a weighbridge 114 for determining a mass or weight of the waste content of the waste container. A vehicle carrying a waste container (such as a truck carrying a skip) may drive onto the weighbridge 114, and the combined weight of the truck, waste container and waste content may be determined by the weighbridge 114. The tare weight of the truck with an empty waste container may be known (e.g. may be stored). This can then be used to determine the weight of the waste content alone. The mass data from the weighbridge 114 may be transmitted to the user apparatus 100, and the user apparatus 100 may transmit this data to the remote server 106 together with the at least one image captured of the waste content. Alternatively, the mass data may be noted by a user of the user apparatus, and the user may input the mass data together with the at least one image. The mass data may be provided by the at least one processor to the third ML model to estimate a weight of each material and object type in the image of waste.
Optionally, volume data may be input into the third ML algorithm to estimate a volume of each material type and object type in the image of waste. Typically, waste containers such as the British 'skip' have defined volumes. Thus, by knowing the volume of the waste container, the approximate volume of the waste content from the waste container is known. This can be used, as mentioned above, to estimate the volume of each material and object type in the waste content.
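The arithmetic described in the two preceding paragraphs is simple to sketch: subtract the stored tare weight to obtain the net mass, then spread a known total (container volume or net mass) across the estimated fractions. The function names and the example figures are illustrative only.

```python
def net_waste_mass(gross_kg: float, tare_kg: float) -> float:
    """Weighbridge reading minus the stored tare weight of the truck with
    an empty waste container gives the mass of the waste content alone."""
    return gross_kg - tare_kg

def scale_composition(fractions: dict, total: float) -> dict:
    """Spread a known total (container volume in m3, or net mass in kg)
    across material and object types using the estimated fractions."""
    return {cls: frac * total for cls, frac in fractions.items()}

# Example: an 8-yard skip holds roughly 6.1 m3 (8 cubic yards).
volumes_m3 = scale_composition({"mixed_wood": 0.4, "cardboard": 0.6}, 6.1)
masses_kg = scale_composition({"mixed_wood": 0.4, "cardboard": 0.6},
                              net_waste_mass(gross_kg=12500.0, tare_kg=11800.0))
```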
The waste composition estimates may be used to identify seasonal trends or long-term trends in wasting. For example, the estimates may enable moves towards using more recyclable or environmentally-friendly materials to be identified.
Figure 13 shows a test image used to test the trained ML models of the present techniques, and Figures 14 and 15 show an image segmentation map and an estimated volume distribution of material types, respectively, generated from the test image of Figure 13 using the trained ML models. It can be seen from Figure 13 that the waste contains a number of different material types. It can be seen from Figure 14 that the different materials, and their locations within the image, have been identified - e.g. it can be seen that the image contains cardboard, mixed wood, rigid plastic, and so on. Figure 15 shows an estimated volume distribution of material types for waste content, which is generated by the third ML model. During training of the third ML model, the third ML model may be provided with a hand-annotated volume dataset. For each image in this dataset, the volumes (in the form of percentages) for each material and object class are manually estimated by a human (as per current manual visual inspection methods). These percentages are used as a target distribution for regression, as shown by the lower graph in Figure 15.
Prior to training the regression network, the resultant outputs from the spatial analysis for each image of the target skip undergo a feature extraction step followed by an encoding step to form a new volume feature dataset. The volume features are used to train a regression network to predict the target volume distributions. Currently, this is performed using a linear model which is fitted using the scikit-learn machine learning library. Other libraries used include numpy, pandas, scipy, and matplotlib.
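A minimal sketch of this regression stage follows, with synthetic placeholder data standing in for the encoded volume features and the hand-annotated target distributions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((200, 32))            # encoded volume features, one row per image
Y = rng.random((200, 12))
Y /= Y.sum(axis=1, keepdims=True)    # target volume distributions (percentages)

reg = LinearRegression().fit(X, Y)   # the linear model fitted with scikit-learn

pred = reg.predict(X[:1])[0]
# A plain linear model is not constrained to output a valid distribution,
# so predictions may be clipped and renormalised before reporting.
pred = np.clip(pred, 0.0, None)
pred /= pred.sum()
```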
At inference time, the model predicts the volume distribution over the target fractions/classes, as shown by the upper graph in Figure 15. The present techniques may be faster at analysing the composition of waste from waste containers than manual inspection, and may be at least as accurate as a human operative.
The waste composition estimates may be provided to a human operative for checking when the estimates/predictions are associated with a low confidence level. That is, the predictions may be compared with predictions made by a human operative on the same waste content. The analysis performed by the human operative may determine whether a low confidence level is a one-off event or occurs multiple times. In the latter case, the trained models may be retrained periodically based on new data or new analysis techniques, in order to improve the confidence level or maintain a certain confidence level over time.
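The specification does not fix how a confidence level is computed; one simple proxy, sketched below under that assumption, is to flag any prediction whose top class probability falls under an operating threshold.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # assumed operating point, tuned in practice

def needs_human_review(class_probs: np.ndarray) -> bool:
    """Flag a prediction for manual checking when the model's highest
    class probability is low (one confidence proxy among many)."""
    return float(np.max(class_probs)) < CONFIDENCE_THRESHOLD
```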
Figure 16 shows a schematic diagram of an example imaging area 20 for obtaining images of waste. The imaging area 20 comprises a floor or surface 26 upon which the contents 28 of a waste container are tipped out and spread. The imaging area 20 comprises a plurality of image capture devices 24, such as cameras. In one example, the floor 26 may be provided between at least two walls 22, which function to retain the waste contents 28 on the floor and in locations where the waste contents can be imaged. In this example, image capture devices 24 may be provided on the walls 22 so that images of the waste contents 28 can be captured from above. In another example, the image capture devices 24 may be mounted on poles that are positioned to enable images of the waste contents to be captured. The image capture devices 24 may be fixed in position or may be moveable.
A spreading or grabbing tool (not shown) may be used to spread out the waste contents 28 over the floor 26, so that the waste contents 28 are not piled-up. Different spreading protocols may be used for different sizes/volumes of waste container. For example, waste from a 40-yard British 'skip' may need to be spread out over a larger area than waste from an 8-yard British 'skip', in order for the waste to be better imaged. Furthermore, once the waste content has been spread out, more images may need to be captured of waste from a larger container than a smaller container, because the waste content will cover a larger area.
Alternatively, the floor 26 may be able to vibrate or shake, such that vibrations cause the waste contents 28 to spread out over the floor.
The floor 26 and/or walls 22 may be fully or partially transparent. This may be advantageous as it would enable images of the waste content to be captured from below or from the side by a suitably located image capture device 24.
The step mentioned above of obtaining at least one image of waste from a waste container may comprise: tipping out the waste contents from a waste container onto the floor 26, capturing at least one image of the waste content, spreading out the waste contents over the floor 26, and capturing at least one image of the spread-out waste content.
Figure 17 is a flowchart of example steps to further refine the estimate of the composition of waste. The method begins by obtaining a first output for a first image (or set of images) of the waste content (step S300), and then obtaining a second output for a second image (or set of images) showing the same waste content spread out over an area, i.e. over floor 26 (step S302). In some cases, the first output and/or second output may be obtained from the third ML model. In this case, the process to obtain an estimate of the waste (i.e. steps S202 to S208 of Figure 11) is repeated using the images showing the spread-out waste contents. Alternatively, the estimation may be human-estimated volumes, segpoint-based volume data for each image, weight data corresponding to each image, or some combination of the above. The output of the third ML model and the estimation of the waste from the spread-out waste are then used to train a fourth ML model to learn a mapping between the output obtained when the waste contents are piled up, and the estimation obtained when the waste contents are spread out (step S304). This mapping may be possible because the waste contents of waste containers often contain similar types of waste depending on the origin of the waste container. For example, waste containers obtained from a construction site will contain similar types of waste (concrete, rubble, metal parts, bricks, pipes, etc.), while waste containers obtained from a garden landscaping project will contain other similar types of waste (soil, gravel, wood, etc.). Thus, once the fourth ML model has been trained, the fourth ML model can be used to provide a refined composition estimate (step S110 in Figure 10) when additional images of the waste spread out over an area cannot be obtained. It may not always be possible, or cost effective, to obtain these additional images, but customers may still want a refined composition estimate - thus the fourth ML model may enable this refinement. In other words, the fourth ML model allows a refined composition estimate to be obtained for waste content, particularly waste content from very large containers, without having to spread out the waste content. Additionally, knowing the total volume and/or mass of the waste content may enable the fourth ML model to more accurately determine the composition of the waste.
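A minimal sketch of the fourth ML model follows, assuming it is realised as a regression from the 'visible surface' composition to the spread-out composition; the choice of ridge regression and the synthetic training pairs are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder training pairs: X holds compositions estimated from images of
# piled-up waste, Y the compositions estimated after the same loads were
# spread out (per steps S300/S302).
rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(12), size=150)
Y = rng.dirichlet(np.ones(12), size=150)

mapper = Ridge(alpha=1.0).fit(X, Y)  # step S304: learn the mapping

def refine_composition(piled_estimate: np.ndarray) -> np.ndarray:
    """Step S110: map a 'visible surface' estimate to a refined whole-load
    estimate without having to spread out the waste content."""
    refined = mapper.predict(piled_estimate.reshape(1, -1))[0]
    refined = np.clip(refined, 0.0, None)
    return refined / refined.sum()
```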
In all of the techniques described above, the at least one image of waste from a waste container may also depict other items. For example, there may be other materials and objects in the background or foreground, or in the vicinity of the waste. It may be desirable to separate, in the image, the waste content of the waste container being analysed from other materials and objects which are not from the waste container. This ensures that the waste composition estimate is accurate. Thus, the image(s) of waste may be pre-processed to separate the waste of interest from other materials and objects. This pre-processing step may be performed using another ML model.
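Such a pre-processing step might, for example, apply a foreground mask produced by the separate ML model before the composition pipeline runs. The sketch below assumes a binary mask is already available; the masking rule is illustrative.

```python
import numpy as np

def isolate_waste(image: np.ndarray, waste_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels that a separately trained model marked as not
    belonging to the tipped waste content, so that background or nearby
    materials and objects do not skew the composition estimate.

    image: (H, W, 3) array; waste_mask: (H, W) boolean array."""
    return image * waste_mask[..., None]
```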
Those skilled in the art will appreciate that while the foregoing has described what is considered to be the best mode and, where appropriate, other modes of performing the present techniques, the present techniques should not be limited to the specific configurations and methods disclosed in this description of the preferred embodiment. Those skilled in the art will recognise that the present techniques have a broad range of applications, and that the embodiments may take a wide range of modifications without departing from any inventive concept as defined in the appended claims.

Claims

1. A computer-implemented method of estimating a composition of waste including a plurality of waste items, the method comprising: obtaining at least one image of waste from a waste container, the waste including a plurality of waste items; applying a first machine learning, ML, algorithm for material classification to the at least one image of waste, to classify regions of the image of waste according to material type; applying a second machine learning, ML, algorithm for object classification to the at least one image of waste, to classify regions of the image of waste according to object type; combining, for the at least one image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and applying a third machine learning, ML, algorithm to the image segmentation map, to quantify each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container.
2. The method as claimed in claim 1 wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a percentage of an area of the waste content that is formed of each material type and object type.
3. The method as claimed in claim 1 or 2 wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a percentage of a weight of the waste content that is formed of each material type and object type.
4. The method as claimed in claim 1, 2 or 3, further comprising obtaining a volume of the waste container which contained the waste content shown in the at least one image.
5. The method as claimed in claim 4, wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a volume of each material type and object type.
6. The method as claimed in claim 4 or 5, wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a weight of each material type and object type.
7. The method as claimed in claim 5 or 6 further comprising: outputting an estimated volume distribution for each material type and object type.
8. The method as claimed in any preceding claim further comprising obtaining a total weight of the waste content shown in the at least one image.
9. The method as claimed in claim 8, wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a weight of each material type and object type.
10. The method as claimed in claim 8 or 9, wherein the third ML algorithm determines an estimated composition of the waste from the waste container by estimating a volume of each material type and object type.
11. The method as claimed in claim 9 or 10 further comprising: outputting an estimated weight distribution for each material type and object type.
12. The method as claimed in any preceding claim wherein obtaining at least one image of waste comprises obtaining a plurality of images of the waste content of the waste container, each image captured from a different viewpoint.
13. The method as claimed in any preceding claim wherein the first ML algorithm is implemented using a convolutional neural network, CNN.
14. The method as claimed in any preceding claim wherein the second ML algorithm is implemented using a region-based convolutional neural network, R-CNN.
15. The method as claimed in any preceding claim wherein the at least one image of waste is pre-processed to separate the waste content from any background or foreground materials or objects in the image that are not from the waste container.
16. The method as claimed in any preceding claim further comprising: inputting the estimated composition of the waste from the waste container into a fourth machine learning algorithm to obtain a refined estimated composition of the waste.
17. The method as claimed in any preceding claim wherein obtaining at least one image of waste from a waste container comprises: tipping out the waste and spreading-out the waste over a surface; and capturing at least one image of the waste.
18. A computer-implemented method for training a set of machine learning, ML, models to estimate a composition of waste including a plurality of waste items, the method comprising: obtaining a training data set comprising a plurality of images of waste, each image of waste depicting a plurality of waste items from a waste container; inputting the plurality of images of waste into a first machine learning, ML, model for material classification, and training the first ML model to identify a plurality of material types; inputting the plurality of images of waste into a second machine learning, ML, model for object classification, and training the second ML model to identify object types; combining, for each image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in each image of waste; and inputting the image segmentation map into a third machine learning, ML, model, and training the third ML model to quantify each material type and object type in the image of waste, and thereby determine an estimated composition of the waste from the waste container.
19. The method of claim 18 wherein inputting the plurality of images of waste into the first machine learning, ML, model for material classification comprises: using a keypoint analysis to identify locations of features in each image of waste; generating, for each image of waste, a further image comprising a material type based on the identified locations of features in each image of waste; and inputting the further images into the first ML model.
20. The method of claim 18 or 19 wherein training the second ML model to identify object types comprises: using a bounding box analysis to extract features from each image of waste; and generating, for each image of waste, an image comprising bounding box annotations.
21. The method of claim 18, 19 or 20 wherein training the third ML model to quantify each material type and object type in each image of waste comprises: using the image segmentation map to estimate a percentage of a total area of the waste content that is formed of each material type and object type.
22. The method as claimed in any of claims 18 to 21 wherein the training data set further comprises a volume of the waste container which contained the waste content shown in each image, and wherein training the third ML algorithm comprises: using the image segmentation map and the volume to estimate a volume of each material type and object type.
23. The method as claimed in any of claims 18 to 22 wherein the training data set further comprises a total weight of the waste content shown in each image, and wherein training the third ML algorithm comprises: using the image segmentation map and the total weight to estimate a weight of each material type and object type.
24. The method as claimed in any of claims 18 to 23 further comprising: training a further ML model to separate the waste content from any background or foreground materials or objects in the images of the training dataset that are not from the waste container.
25. The method as claimed in any of claims 18 to 24 wherein the training dataset comprises a first set of images of the waste from the waste container and a second set of images of the waste from the waste container depicting the waste spread-out over a surface.
26. The method as claimed in claim 25 further comprising: inputting into a fourth machine learning model: a first output obtained for the first set of images, a second output obtained for the second set of images; and training the fourth ML model to learn a map from the first output to the second output.
27. The method as claimed in claim 26 wherein the first output and the second output are any of: an estimated composition of the waste obtained from the third ML model, and an estimated composition of the waste obtained from a human.
28. A computer-implemented method of estimating a composition of waste including a plurality of waste items, the method comprising: obtaining at least one image of waste from a waste container, the waste including a plurality of waste items; applying at least one machine learning, ML, algorithm to the at least one image of waste to: classify regions of the image of waste according to material type; classify regions of the image of waste according to object type; combine the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and quantify, using the image segmentation map, each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container.
29. A non-transitory data carrier carrying code which, when implemented on a processor, causes the processor to carry out the method of any of claims 1 to 28.
30. A system for estimating a composition of waste including a plurality of waste items, the system comprising: at least one processor, coupled to memory, arranged to: obtain at least one image of waste from a waste container, the waste including a plurality of waste items; apply a first machine learning, ML, algorithm for material classification to the at least one image of waste, to classify regions of the image of waste according to material type; apply a second machine learning, ML, algorithm for object classification to the at least one image of waste, to classify regions of the image of waste according to object type; combine, for the at least one image of waste, the material classification and the object classification to generate an image segmentation map indicating the location of each material type and object type in the image of waste; and apply a third machine learning, ML, algorithm to the image segmentation map, to quantify each material type and object type in the image of waste and thereby determine an estimated composition of the waste from the waste container.
31. The system as claimed in claim 30 further comprising: an image capture device for obtaining the at least one image of waste content of a waste container.
32. The system as claimed in claim 31 wherein the image capture device obtains a plurality of images of the waste content of the waste container, each image captured from a different viewpoint.
33. The system as claimed in claim 30, 31 or 32 further comprising: a weighbridge for determining a weight of the waste content of the waste container, wherein the determined weight is provided by the at least one processor to the third ML algorithm to estimate a weight of each material type and object type of the waste from the waste container.
34. The system as claimed in any of claims 30 to 33 further comprising: an imaging area for obtaining images of waste, wherein the imaging area comprises: a surface upon which waste content of the waste container is spread out; and a mechanism for spreading the waste content over the surface.
35. The system as claimed in claim 34 wherein the imaging area may comprise two or more walls to retain waste content on the surface.
36. The system as claimed in claim 34 or 35 wherein the mechanism for spreading out the waste content is a mechanical spreading or grabbing tool.
37. The system as claimed in claim 34 or 35 wherein the mechanism for spreading out the waste content is a mechanism for causing the surface to vibrate.
38. The system as claimed in any of claims 34 to 37 wherein the surface and/or walls are fully or partially transparent.

Applications Claiming Priority (2)

Application GB2107938.9A (published as GB2607583B), priority date 2021-06-03, filing date 2021-06-03: Method for automated waste composition reporting
Application GB2107938.9, priority date 2021-06-03

Publications (1)

Publication WO2022254202A1 (en), publication date 2022-12-08

Family

ID=76838868

Family Applications (1)

Application PCT/GB2022/051385 (WO2022254202A1), priority date 2021-06-03, filing date 2022-06-01: Method for automated waste composition reporting

Country Status (2)

GB: GB2607583B (en)
WO: WO2022254202A1 (en)

Patent Citations (2)

(* Cited by examiner, † Cited by third party)

US20140379588A1 *: System and method for waste management (Compology, Inc.), priority date 2013-03-15, publication date 2014-12-25
US20200222949A1 *: Systems and methods for waste item detection and recognition (Intuitive Robotics, Inc.), priority date 2017-09-19, publication date 2020-07-16

Family Cites Families (2)

(* Cited by examiner, † Cited by third party)

WO2018200866A1 *: Material sorting using a vision system (UHV Technologies, Inc.), priority date 2017-04-26, publication date 2018-11-01
US11335086B2 *: Methods and electronic devices for automated waste management (Fidelity Ag, Inc.), priority date 2020-03-21, publication date 2022-05-17


Also Published As

GB2607583B (en), publication date 2023-12-20
GB2607583A (en), publication date 2022-12-14
GB202107938D0 (en), publication date 2021-07-21


Legal Events

121: EP - the EPO has been informed by WIPO that EP was designated in this application (ref document number: 22731773; country of ref document: EP; kind code of ref document: A1)
NENP: Non-entry into the national phase (ref country code: DE)