WO2019049060A1 - Froth segmentation in flotation cells - Google Patents

Froth segmentation in flotation cells

Info

Publication number
WO2019049060A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
bubble
froth
segmentation
dnn
Prior art date
Application number
PCT/IB2018/056802
Other languages
French (fr)
Inventor
Shaun George IRWIN
Kristo BOTHA
Leendert VAN DER BIJL
Original Assignee
Stone Three Mining Solutions (Pty) Ltd
Priority date
Filing date
Publication date
Application filed by Stone Three Mining Solutions (Pty) Ltd
Priority to AU2018327270A priority Critical patent/AU2018327270A1/en
Priority to BR112020004522-5A priority patent/BR112020004522B1/en
Publication of WO2019049060A1 publication Critical patent/WO2019049060A1/en
Priority to ZA2020/01710A priority patent/ZA202001710B/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B03 - SEPARATION OF SOLID MATERIALS USING LIQUIDS OR USING PNEUMATIC TABLES OR JIGS; MAGNETIC OR ELECTROSTATIC SEPARATION OF SOLID MATERIALS FROM SOLID MATERIALS OR FLUIDS; SEPARATION BY HIGH-VOLTAGE ELECTRIC FIELDS
    • B03D - FLOTATION; DIFFERENTIAL SEDIMENTATION
    • B03D1/00 - Flotation
    • B03D1/02 - Froth-flotation processes
    • B03D1/028 - Control and monitoring of flotation processes; computer models therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • G06F18/24143 - Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Definitions

  • This invention relates to a method, apparatus and system for analysing the froth phase of a flotation cell. More specifically, but not exclusively, the invention relates to the real-time measuring of the froth phase of a flotation cell with a view to estimating and/or predicting grade and recovery of the flotation process.
  • Froth flotation is a process for separating minerals from gangue by taking advantage of differences in their hydrophobicity. Hydrophobicity differences between valuable minerals and waste gangue are increased through the use of surfactants and wetting agents.
  • The flotation process typically takes place in an open cell and consists of the pulp phase, which can be described as the 'reactor', and the froth phase, which can be termed the 'separator'.
  • In the pulp phase, hydrophobic particles preferentially attach to rising air bubbles which form a froth at the top of the pulp phase and are recovered as concentrate.
  • Sub-processes such as bubble-particle collision, attachment, detachment and entrainment dominate. These sub-processes have an overall effect of transporting particles, mostly hydrophobic particles, to the froth phase.
  • In the froth phase, the process of froth formation and transport determines the kind of sub-processes that take place.
  • Froth phase sub-processes such as thinning of bubble films, bubble coalescence and froth drainage result in an increase in bubble sizes, with particles detaching from bubbles and draining back into the pulp phase.
  • Froth phase sub-processes may lead to a cleaning/separating action if there is preferential re-attachment of the draining particles to the available bubble surface area. This cleaning action determines the overall grade and recovery of the flotation process.
  • The initial separating action of the froth phase starts at the pulp-froth interface. Having an accurate model of the flotation process and obtaining real-time measurements of the froth phase can aid in predicting the grade and recovery of the flotation cell as a whole. This in turn is useful for plant optimisation.
  • One conventional way of measuring characteristics of the froth phase of a flotation cell is to place a camera above the surface of the froth phase which is configured to capture a video stream of the froth surface.
  • Image analysis techniques are then used to calculate, amongst others, froth velocity, height, stability and bubble size.
  • A conventional technique for calculating bubble size from images involves using a so-called "watershedding" algorithm to identify the boundaries of individual bubbles on the froth phase surface.
  • Various bubble size metrics may then be calculated from these measurements including mean bubble size and complete bubble size distribution.
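The step from a boundary image to size metrics can be sketched as follows. This is an illustrative sketch only and is not code from the patent: it assumes the boundary-detection stage (watershedding or a DNN) has already produced a binary boundary mask, treats each connected non-boundary region as one bubble, and uses SciPy connected-component labelling, with raw pixel counts standing in for calibrated bubble areas.

```python
import numpy as np
from scipy import ndimage

def bubble_size_metrics(boundary_mask):
    """Compute per-bubble areas from a binary boundary image.

    boundary_mask: 2-D bool array, True on bubble boundary pixels.
    Returns (mean_area, areas) with areas sorted in descending order.
    Hypothetical helper name; areas are in pixels, not calibrated units.
    """
    # Interior pixels are everything that is not a boundary.
    interior = ~boundary_mask
    # Each connected interior region is treated as one bubble.
    labels, n = ndimage.label(interior)
    areas = np.sort(ndimage.sum(interior, labels, index=np.arange(1, n + 1)))[::-1]
    return float(areas.mean()), areas

# Toy example: a 6x6 image split into two bubbles by one vertical boundary line.
mask = np.zeros((6, 6), dtype=bool)
mask[:, 3] = True
mean_area, areas = bubble_size_metrics(mask)
```

From `areas`, the complete bubble size distribution mentioned above follows directly (e.g. as a histogram of the per-bubble areas).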
  • A computer-implemented method of generating a bubble segmentation image from a digital froth image of a froth phase of a flotation cell, the method comprising: receiving the froth image; applying one or more deep learning networks to the froth image, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically; generating the bubble segmentation image utilising the deep learning networks so that the bubble segmentation image includes identified boundary data representing boundaries between bubbles present in the froth image; and outputting the bubble segmentation image.
  • Further features provide for the step of applying the deep learning networks to the froth image to include applying one or more deep neural network (DNN) algorithms to the froth image; for the method to include the step of creating the dataset of training images; for the method to include the step of training the DNN algorithms using the dataset; and for the method to include the step of determining if identified bubble boundary data includes false positives, and optionally retraining the DNN architecture if it does.
  • Still further features provide for the method to include the steps of: training the DNN algorithms with a dataset comprising a plurality of labelled, segmentation boundary training images generated utilising commercially available bubble/rock segmentation products; and/or training the DNN algorithms with a dataset comprising a plurality of training images simulated using computer animation software.
  • Yet further features provide for the DNN to be a convolutional deep neural network (CNN); and for the CNN to utilise the U-Net convolutional network architecture.
  • A system for generating a bubble segmentation image from a froth image of a froth phase of a flotation cell, including a memory for storing computer-readable program code and a processor for executing the computer-readable program code, the system comprising:
  • an image receiving component arranged to receive a digital froth image from an image capturing component;
  • a bubble boundary identification component arranged to receive the digital froth image as input, identify bubble boundaries between bubbles in the digital froth image utilising one or more deep learning networks, create the bubble segmentation image and provide it as output, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically; and
  • an image output component arranged to output the bubble segmentation image received from the bubble boundary identification component.
  • The system may include an image capturing component which may form part of a measuring unit arranged to be secured above a flotation cell.
  • Further features provide for the one or more deep learning networks to be one or more deep neural network (DNN) algorithms; and for the system to include a training component arranged to train the DNN algorithms with the training datasets comprising bubble segmentation training images generated utilising commercially available bubble/rock segmentation products and/or simulated using computer animation software.
  • Still further features provide for the DNN to be a convolutional deep neural network (CNN); and for the CNN to utilise the U-Net convolutional network architecture.
  • Figure 1 is a schematic diagram of a flotation cell installation in which a method and system for generating a bubble segmentation image from a froth image of a froth phase of the flotation cell may be utilised;
  • Figure 2 is an image of a froth phase of a flotation cell;
  • Figure 3 is a bubble segmentation image created utilising a method and system according to the disclosure.
  • Figure 4 is an image showing a bubble segmentation image created utilising a method and system according to the disclosure, superimposed over a corresponding froth image;
  • Figure 5 is a block diagram which illustrates components of an exemplary system used for implementing a method according to the disclosure.
  • Figure 6 illustrates an example of a computing device in which various aspects of the disclosure may be implemented.
  • Some representations are loosely based on interpretation of information processing and communication patterns in a biological nervous system, such as neural coding that attempts to define a relationship between various stimuli and associated neuronal responses in the brain. Research attempts to create efficient systems to learn these representations from large-scale, unlabelled data sets.
  • Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation and bioinformatics where they produced results comparable to and in some cases superior to human experts.
  • Deep learning is a class of machine learning algorithms. The algorithms may be supervised or unsupervised, and applications include pattern analysis (unsupervised) and classification (supervised).
  • Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can provide different amounts of abstraction. Deep learning exploits this idea of hierarchical explanatory factors where higher level, more abstract concepts are learned from the lower level ones. Deep learning architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features are useful for improving performance. For supervised learning tasks, deep learning methods obviate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation.
  • Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabelled data are more abundant than labelled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
  • a deep neural network is an artificial neural network (ANN) with multiple hidden layers between the input and output layers. Similar to shallow ANNs, DNNs can model complex nonlinear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modelling complex data with fewer units than a similarly performing shallow network.
  • Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.
  • DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back.
  • Recurrent neural networks in which data can flow in any direction, are used for applications such as language modelling. Long short-term memory is particularly effective for this use.
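The feedforward flow described above, with data moving from the input layer through hidden layers to the output layer without looping back, can be illustrated with a minimal sketch. The layer widths and fixed weight values below are arbitrary toy choices for illustration, not trained parameters and not part of the disclosure.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied after each hidden layer.
    return np.maximum(x, 0.0)

def feedforward(x, weights):
    """Minimal feedforward DNN pass: input -> hidden layers -> output,
    with no recurrent (looping-back) connections."""
    for W in weights[:-1]:
        x = relu(W @ x)      # hidden layers: affine map plus nonlinearity
    return weights[-1] @ x   # linear output layer

# Two hidden layers of width 3 acting on a 2-dimensional input vector.
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W2 = np.eye(3)
W3 = np.ones((1, 3))
y = feedforward(np.array([1.0, 2.0]), [W1, W2, W3])
```

Stacking several such hidden layers is what makes the network "deep"; each extra layer composes features computed by the layers below it.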
  • The present disclosure presents a method and system for conducting such digital analysis and automatically generating a bubble segmentation image (as shown in Figure 3) from a froth image (as shown in Figure 2).
  • The bubble segmentation image is essentially made up of only the boundaries between the bubbles visible in the froth image.
  • Figure 1 is a schematic diagram which illustrates a flotation cell installation in which an exemplary system (100) for conducting such digital bubble segmentation analysis is used with a flotation cell (107).
  • While the system (100) may be sold as a standalone unit capable of receiving a digital froth image captured by an external imaging device, for present purposes it will be described as part of a measuring unit (102) arranged to be positioned above the flotation cell (107).
  • The system (100) includes a digital camera (101), which could of course be a video camera, positioned at the top of a sun tube (103) which forms part of a housing (105) of the measuring unit (102).
  • The housing (105) is positioned over the flotation cell (107) with the lens of the camera (not shown) directed toward the top of the flotation cell.
  • A froth phase (109) made up of a collection of bubbles will form at the top of the flotation cell.
  • The top (111) of the froth phase defines the froth level and the bottom (113) of the froth phase represents the top of the pulp phase (115) of the flotation cell (107).
  • A laser sensor or laser pointer (117) is provided in the sun tube (103) and is also directed at an angle towards the froth phase (109). While not an essential part of the present invention, it bears mention that the laser pointer (117) may be used to measure the height of the froth.
  • The digital camera (101) is in data communication with a processor which may of course be incorporated in a suitable computer infrastructure (121).
  • The computer infrastructure (121) also includes a memory (125) for storing computer-readable program code, and the processor is arranged to execute the computer-readable program code.
  • The computer-readable program code includes one or more deep learning network models which have been trained with one or more datasets of training images. It will be appreciated by those skilled in the art that the training images may include previously generated bubble segmentation images with boundary data of boundaries between bubbles already defined and added to the images.
  • The deep learning network is a deep neural network (DNN) algorithm, more specifically a convolutional deep neural network (CNN) of the U-Net architecture, which is a deep neural network architecture that has been applied in industry to self-driving cars and medical imaging.
  • The U-Net receives a froth image as an input and outputs the bubble segmentation image.
  • A typical froth image used as input to the system (100) is shown in Figure 2 and a typical bubble segmentation image received as an output from the DNN algorithm is shown in Figure 3.
  • Figure 4 shows the bubble segmentation image superimposed over the froth image. It will be clear to those skilled in the art that the segmentation image may represent an accurate estimation of the outlines or boundaries of the bubbles in the froth image.
  • The U-Net model consists of a series of convolutional layers arranged in a "bottleneck" formation. Images, which can be considered high-dimensional data points, are "squeezed" through a narrow information bottleneck before again expanding back to the original dimensions at the output. However, instead of the output image being the same as the input image, the algorithm is configured to let the output image be the bubble segmentation boundaries. Therefore, during the training of the DNN, the model learns to identify features useful for identifying the bubble boundaries.
  • The U-Net model combines the high-level abstractions that are output from the successive convolutional layers with the lower-level features that are fed through using "skip connections" from earlier layers in the network. It will, however, be clear to those skilled in the art that any alternative or additional deep learning architectures may be used in the disclosure including, but not limited to, FusionNet and DenseNet.
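The bottleneck-and-skip structure described above can be sketched at the level of data flow alone. The sketch below is a deliberate simplification, not the patented network: it keeps only the resolution changes and one skip connection, replacing the learned convolutions of a real U-Net with fixed average pooling and nearest-neighbour upsampling.

```python
import numpy as np

def down(x):
    # 2x2 average pooling: halves the spatial resolution (an encoder step).
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    # Nearest-neighbour upsampling: doubles the spatial resolution (a decoder step).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_dataflow(img):
    """Shape-level sketch of the U-Net 'bottleneck' with one skip connection.

    A real U-Net interleaves learned convolutions at every level; this sketch
    keeps only the resolution changes and the skip concatenation.
    """
    e1 = img          # encoder feature map at full resolution
    e2 = down(e1)     # "squeeze" through the information bottleneck
    d1 = up(e2)       # expand back to the input resolution
    # Skip connection: low-level features (e1) are combined with the
    # upsampled abstractions (d1); a real network concatenates channels.
    return np.stack([e1, d1], axis=0)

out = unet_dataflow(np.ones((4, 4)))
```

In a trained network the merged features would feed a final convolution that emits the boundary image; here the point is only that the output recombines coarse, abstract information with fine, early-layer detail.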
  • The system described above may be used to implement a method in accordance with the disclosure.
  • Various components of the system (500) used to implement the method are shown in more detail in the block diagram of Figure 5.
  • The method may include the creation of the DNN training datasets by a training component (501) as described in more detail further below.
  • The method is carried out by capturing a froth image using the digital camera or another image capturing component (503).
  • The image is then communicated to the computer infrastructure where it may be received by a suitable image receiving component (505).
  • The DNN is applied to the froth image by a DNN application component (507) and boundaries between the various bubbles in the froth image are identified through the application of the DNN.
  • An image construction component (509) then constructs the bubble segmentation image, including identified boundary data representing the boundaries between the bubbles visible in the froth image.
  • The bubble segmentation image is then outputted by an output component (511) to further processing modules which fall outside the scope of this disclosure. It will, however, be appreciated by those skilled in the art that the further processing modules may use the bubble segmentation image with the identified bubble boundary data to determine the bubble sizes and bubble size distributions.
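The component flow just described (capture, receive, apply the DNN, construct the segmentation image, output it) can be sketched as a single function. The names `capture`, `segment` and `sink`, the 0.5 threshold, and the stub "DNN" below are all illustrative assumptions rather than anything specified in the disclosure; the stub simply marks one image column as a boundary.

```python
import numpy as np

def run_segmentation_pipeline(capture, segment, sink):
    """Hedged sketch of the Figure 5 component flow (503 -> 505 -> 507/509 -> 511).

    capture: callable returning a froth image (image capturing component, 503).
    segment: callable mapping an image to per-pixel boundary scores (DNN, 507).
    sink:    callable consuming the finished segmentation image (output, 511).
    """
    froth_image = capture()                # (503) acquire a froth image
    received = np.asarray(froth_image)     # (505) image receiving component
    boundary_data = segment(received)      # (507) DNN application component
    segmentation = boundary_data > 0.5     # (509) construct a boundary mask
    sink(segmentation)                     # (511) hand off to further processing
    return segmentation

# Toy run: the stand-in "DNN" marks the centre column of the image as boundary.
collected = []
fake_dnn = lambda img: (np.arange(img.shape[1]) == img.shape[1] // 2).astype(float) * np.ones_like(img)
result = run_segmentation_pipeline(lambda: np.zeros((4, 5)), fake_dnn, collected.append)
```

Structuring the components as swappable callables mirrors the block diagram: the trained U-Net can replace `fake_dnn` without changing the surrounding plumbing.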
  • The method and system for generating bubble segmentation images may be used for continuous analysis of the froth phase of the flotation cell.
  • Various parameters of the flotation cell's performance may be derived, for example the froth velocity, froth height, froth stability, bubble size and bubble size distribution.
  • The accurate measurement of these parameters in near real-time may be used to derive critical operational parameters of the flotation cell which may serve as indicators for input adjustments to be made to the flotation cell to ensure efficient operation.
  • The compiling of the DNN training datasets and the actual training of the DNN may be essential aspects required for the effective operation of the method and system disclosed.
  • The DNN typically requires substantial amounts of labelled data for training.
  • The DNN may therefore be trained with datasets of previously generated or measured training images which may typically be labelled.
  • Each training image may include defined bubble segmentation boundaries which, in combination with the other defined bubble segmentation boundary images that make up the dataset, effectively teach the DNN to identify features useful for identifying bubble boundaries in input images automatically. Since hand segmenting thousands of images is an extremely time-consuming task, it may not be practical to use this technique to gather training image data.
  • Already available, commercially used products such as the Lynxx particle size analyser (the applicant's existing bubble/rock segmentation product) may be used to provide rough segmentation data that may then be used in the training dataset to train the DNN.
  • Another technique that may be used to generate training datasets involves simulating froth images using computer animation software. Unlike the capturing of real froth images, with computer simulations the true geometry of the bubbles used to create the rendering is accessible. As such, perfect segmentation of the bubbles can automatically be generated. Since the textures used to generate the geometries and colours of the bubbles are essentially mathematical expressions, infinite varieties of froth conditions may be artificially created, which helps supplement the training datasets for froth conditions for which good segmentations are not available.
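The simulation idea can be illustrated with a toy construction: partition an image among random seed points (a Voronoi diagram standing in for rendered bubbles), so that the per-pixel label, and hence a perfect boundary image, is known by construction. The patent refers to computer animation software; this NumPy sketch is only a stand-in for that idea, not the actual rendering pipeline.

```python
import numpy as np

def simulate_froth(h, w, n_bubbles, seed=0):
    """Toy stand-in for rendered froth: a Voronoi partition of the image.

    Because the bubble geometry is generated rather than photographed, the
    exact per-pixel label -- and therefore a perfect ground-truth boundary
    image -- comes for free, which is the point made in the text.
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, [h, w], size=(n_bubbles, 2))
    yy, xx = np.mgrid[0:h, 0:w]
    # Label each pixel with its nearest seed point (its "bubble").
    d = (yy[..., None] - pts[:, 0]) ** 2 + (xx[..., None] - pts[:, 1]) ** 2
    labels = d.argmin(axis=-1)
    # Ground-truth boundaries: pixels whose right or down neighbour differs.
    boundary = np.zeros((h, w), dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return labels, boundary

labels, boundary = simulate_froth(32, 32, 5)
```

Varying the seed points, counts and (in a real renderer) textures yields unlimited training pairs of simulated froth image and exact segmentation, supplementing conditions for which good measured segmentations are unavailable.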
  • Figure 6 illustrates an example of a computing device (600) in which various aspects of the disclosure may be implemented.
  • The computing device (600) may be embodied as any form of data processing device including a personal computing device (e.g. a laptop or desktop computer), a server computer (which may be self-contained or physically distributed over a number of locations), a client computer, or a communication device such as a mobile phone (e.g. a cellular telephone), satellite phone, tablet computer, personal digital assistant or the like.
  • The computing device (600) may be suitable for storing and executing computer program code.
  • The various participants and elements in the previously described system diagrams may use any suitable number of subsystems or components of the computing device (600) to facilitate the functions described herein.
  • The computing device (600) may include subsystems or components interconnected via a communication infrastructure (605) (for example, a communications bus, a network, etc.).
  • The computing device (600) may include one or more processors (610) and at least one memory component in the form of computer-readable media.
  • The one or more processors (610) may include one or more of: CPUs, graphical processing units (GPUs), microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs) and the like.
  • A number of processors may be provided and may be arranged to carry out calculations simultaneously.
  • Various subsystems or components of the computing device (600) may be distributed over a number of physical locations (e.g. in a distributed, cluster or cloud-based computing configuration) and appropriate software units may be arranged to manage and/or process data on behalf of remote devices.
  • The memory components may include system memory (615), which may include read-only memory (ROM) and random access memory (RAM).
  • System software may be stored in the system memory (615), including operating system software.
  • The memory components may also include secondary memory (620).
  • The secondary memory (620) may include a fixed disk (621), such as a hard disk drive, and, optionally, one or more storage interfaces (622) for interfacing with storage components (623), such as removable storage components (e.g. magnetic tape, optical disk, flash memory drive, external hard drive, removable memory chip, etc.), network attached storage components (e.g. NAS drives), remote storage components (e.g. cloud-based storage) or the like.
  • The computing device (600) may include an external communications interface (630) for operation of the computing device (600) in a networked environment, enabling transfer of data between multiple computing devices (600) and/or the Internet.
  • Data transferred via the external communications interface (630) may be in the form of signals, which may be electronic, electromagnetic, optical, radio, or other types of signal.
  • The external communications interface (630) may enable communication of data between the computing device (600) and other computing devices including servers and external storage facilities. Web services may be accessible by and/or from the computing device (600) via the communications interface (630).
  • The external communications interface (630) may be configured for connection to wireless communication channels (e.g. a cellular telephone network, a wireless local area network (e.g. using Wi-Fi™), a satellite-phone network, a satellite Internet network, etc.) and may include an associated wireless transfer element, such as an antenna and associated circuitry.
  • The computer-readable media in the form of the various memory components may provide storage of computer-executable instructions, data structures, program modules, software units and other data.
  • A computer program product may be provided by a computer-readable medium having stored computer-readable program code executable by the central processor (610).
  • A computer program product may be provided by a non-transient computer-readable medium, or may be provided via a signal or other transient means via the communications interface (630).
  • Interconnection via the communication infrastructure (605) allows the one or more processors (610) to communicate with each subsystem or component and to control the execution of instructions from the memory components, as well as the exchange of information between subsystems or components.
  • Peripherals (such as printers, scanners, cameras, or the like) and input/output (I/O) devices (such as a mouse, touchpad, keyboard, microphone, touch-sensitive display, input buttons, speakers and the like) may couple to or be integrally formed with the computing device (600) either directly or via an I/O controller (635).
  • One or more displays (645) (which may be touch-sensitive displays) may be coupled to or integrally formed with the computing device (600) via a display or video adapter (640).
  • The processor for executing the functions of components described may be provided by hardware or by software units executing on the computer infrastructure.
  • The software units may be stored in a memory component and instructions may be provided to the processor to carry out the functionality of the described components.
  • Software units arranged to manage and/or process data on behalf of the computer infrastructure may be provided remotely.
  • A software unit may be implemented with a computer program product comprising a non-transient computer-readable medium containing computer program code, which can be executed by a processor for performing any or all of the steps, operations, or processes described.
  • Software units or functions described in this application may be implemented as computer program code using any suitable computer language such as, for example, Java™, C++, or Perl™ using, for example, conventional or object-oriented techniques.
  • The computer program code may be stored as a series of instructions, or commands on a non-transitory computer-readable medium, such as a random access memory (RAM), a read-only memory (ROM), a magnetic medium such as a hard-drive, or an optical medium such as a CD-ROM. Any such computer-readable medium may also reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

System (100) and method for generating a bubble segmentation image from a digital froth image of a froth phase of a flotation cell (107). The method comprises receiving the froth image, applying one or more deep learning networks to the froth image, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically. The method further comprises generating the bubble segmentation image utilizing the deep learning networks so that the bubble segmentation image includes identified boundary data representing boundaries between bubbles present in the froth image, and outputting the bubble segmentation image.

Description

FROTH SEGMENTATION IN FLOTATION CELLS
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority from South African provisional patent application number 2017/06114 filed on 8 September 2017, which is incorporated by reference herein.
FIELD OF THE INVENTION
This invention relates to a method, apparatus and system for analysing the froth phase of a flotation cell. More specifically, but not exclusively, the invention relates to the real-time measuring of the froth phase of a flotation cell with a view to estimating and/or predicting grade and recovery of the flotation process.
BACKGROUND TO THE INVENTION
Froth flotation is a process for separating minerals from gangue by taking advantage of differences in their hydrophobicity. Hydrophobicity differences between valuable minerals and waste gangue are increased through the use of surfactants and wetting agents.
The flotation process typically takes place in an open cell and consists of the pulp phase which can be described as the 'reactor' and the froth phase which can be termed the 'separator'. In the pulp phase, hydrophobic particles preferentially attach to rising air bubbles which form a froth at the top of the pulp phase and are recovered as concentrate. Sub-processes such as bubble-particle collision, attachment, detachment and entrainment dominate. These sub-processes have an overall effect of transporting particles, mostly hydrophobic particles, to the froth phase. In the froth phase, the process of froth formation and transport determines the kind of sub-processes that take place. Froth phase sub-processes such as thinning of bubble films, bubble coalescence and froth drainage result in an increase in bubble sizes, particles detaching from bubbles and draining back into the pulp phase. Froth phase sub-processes may lead to cleaning/separating action if there is preferential re-attachment of the draining particles to the available bubble surface area. This cleaning action determines the overall grade and recovery of the flotation process. The initial separating action of the froth phase starts at the pulp-froth interface. Having an accurate model of the flotation process and obtaining real-time measurements of the froth phase can aid in predicting the grade and recovery of the flotation cell as a whole. This in turn is useful for plant optimisation.
One conventional way of measuring characteristics of the froth phase of a flotation cell is to place a camera above the surface of the froth phase which is configured to capture a video stream of the froth surface. Image analysis techniques are then used to calculate, amongst others, froth velocity, height, stability and bubble size. A conventional technique for calculating bubble size from images involves using a so-called "watershedding" algorithm to identify the boundaries of individual bubbles on the froth phase surface. Various bubble size metrics may then be calculated from these measurements, including mean bubble size and complete bubble size distribution. These existing techniques, however, suffer from a number of drawbacks. Most notably, the known techniques require favourable lighting conditions in order to work well. While they are generally considered to be reasonably accurate when the froth surface is illuminated with only a single point light source placed above the froth in the absence of other light sources (including ambient light), they are not accurate in the presence of multiple light sources, such as multiple lights or reflections from a single light source, or in the presence of sunlight on the froth surface. In the latter scenario conventional imaging techniques tend to over-segment (identify a single bubble as multiple bubbles) or under-segment (identify multiple bubbles as a single bubble). Over- or under-segmenting leads to erroneous bubble size measurements which may in turn lead to a perceived bias in the froth recovery or other metrics that are being derived from the bubble size.
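By way of illustration only, and not forming part of any conventional product described above, the derivation of bubble size metrics from a completed segmentation can be sketched as a connected-component count over a boundary map. The grid values, function name and 4-connected flood fill below are illustrative choices for this sketch, not taken from any particular segmentation system:

```python
from collections import deque

def bubble_sizes(seg, boundary=1):
    """Label connected non-boundary regions of a segmentation grid
    (list of lists) via 4-connected flood fill and return each
    region's pixel count, a simple proxy for bubble size."""
    rows, cols = len(seg), len(seg[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if seg[r][c] == boundary or seen[r][c]:
                continue
            # breadth-first flood fill of one bubble region
            queue, size = deque([(r, c)]), 0
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and not seen[ny][nx]
                            and seg[ny][nx] != boundary):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            sizes.append(size)
    return sizes

# Two bubbles separated by a vertical boundary line.
grid = [
    [0, 0, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 1, 0],
]
print(sorted(bubble_sizes(grid)))  # [3, 6]
```

The returned pixel counts can then be aggregated into a mean bubble size or binned into a complete bubble size distribution.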
While it is possible to attempt to minimise the ingress of sun- or other ambient light onto the camera lens it is not practical to completely eliminate it. The height of the froth surface typically varies during the flotation process which prevents conventional sun shields from completely blocking the sun from the froth surface. Furthermore, since sunlight is a far stronger light source than most artificial light sources it tends to overpower other light sources leading to loss of control of the lighting conditions.
There is accordingly scope for improvement of bubble size measuring techniques used in flotation cell froth phase analysis.
The applicant's co-pending South African patent application numbers ZA2016/08265 and ZA2017/03892 describe methods, apparatus and systems for measuring properties of the pulp phase of a flotation cell and computer implemented methods and systems for monitoring froth flotation processes including multiple flotation cells on a plant level, respectively, and are incorporated herein in their entirety by reference.
The preceding discussion of the background to the invention is intended only to facilitate an understanding of the present invention. It should be appreciated that the discussion is not an acknowledgment or admission that any of the material referred to was part of the common general knowledge in the art as at the priority date of the application.
SUMMARY OF THE INVENTION
In accordance with an aspect of the invention there is provided a computer-implemented method of generating a bubble segmentation image from a digital froth image of a froth phase of a flotation cell, the method comprising:
receiving the froth image;
applying one or more deep learning networks to the froth image, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically;
generating the bubble segmentation image utilising the deep learning networks so that the bubble segmentation image includes identified boundary data representing boundaries between bubbles present in the froth image; and
outputting the bubble segmentation image.
Further features provide for the step of applying the deep learning networks to the froth image to include applying one or more deep neural network (DNN) algorithms to the froth image; for the method to include the step of creating the dataset of training images; for the method to include the step of training the DNN algorithms using the dataset; and for the method to include the step of determining if identified bubble boundary data includes false positives, and optionally retraining the DNN architecture if it does.
Still further features provide for the method to include the steps of: training the DNN algorithms with a dataset comprising a plurality of labelled, segmentation boundary training images generated utilising commercially available bubble/rock segmentation products; and/or training the DNN algorithms with a dataset comprising a plurality of training images simulated using computer animation software. Yet further features provide for the DNN to be a convolutional deep neural network (CNN); and for the CNN to utilise the U-Net convolutional network architecture.
In accordance with a further aspect of the invention there is provided a system for generating a bubble segmentation image from a froth image of a froth phase of a flotation cell, the system including a memory for storing computer-readable program code and a processor for executing the computer-readable program code, the system comprising:
an image receiving component arranged to receive a digital froth image from an image capturing component;
a bubble boundary identification component arranged to receive the digital froth image as input, identify bubble boundaries between bubbles in the digital froth image utilising one or more deep learning networks, create the bubble segmentation image and provide it as output, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically; and
an image output component arranged to output the bubble segmentation image received from the bubble boundary identification component.
The system may include an image capturing component which may form part of a measuring unit arranged to be secured above a flotation cell.
Further features provide for the one or more deep learning networks to be one or more deep neural network (DNN) algorithms; for the system to include a training component arranged to train the DNN algorithms with the training datasets comprising bubble segmentation training images generated utilising commercially available bubble/rock segmentation products and/or simulated using computer animation software.
Still further features provide for the DNN to be a convolutional deep neural network (CNN); and for the CNN to utilise the U-Net convolutional network architecture.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings:
Figure 1 is a schematic diagram of a flotation cell installation in which a method and system for generating a bubble segmentation image from a froth image of a froth phase of the flotation cell may be utilised;
Figure 2 is an image of a froth phase of a flotation cell;
Figure 3 is a bubble segmentation image created utilising a method and system according to the disclosure;
Figure 4 is an image showing a bubble segmentation image created utilising a method and system according to the disclosure, superimposed over a corresponding froth image;
Figure 5 is a block diagram which illustrates components of an exemplary system used for implementing a method according to the disclosure; and
Figure 6 illustrates an example of a computing device in which various aspects of the disclosure may be implemented.
DETAILED DESCRIPTION WITH REFERENCE TO THE DRAWINGS
While the area of deep learning networks is highly specialized and relatively new, those familiar with and skilled in their use have a thorough understanding of their underlying principles and practical implementation. For present purposes it is therefore unnecessary to explain them in mathematical or algorithmic terms as those skilled in the art will know how to implement the techniques relied on in this disclosure. Suffice it at this stage therefore to refer to the Wikipedia explanation of "Deep learning", as provided below, simply to set the relevant background to the description that follows.
"Deep learning", also known as "deep structured learning" or "hierarchical learning", is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, partially supervised or unsupervised.
Some representations are loosely based on interpretation of information processing and communication patterns in a biological nervous system, such as neural coding that attempts to define a relationship between various stimuli and associated neuronal responses in the brain. Research attempts to create efficient systems to learn these representations from large-scale, unlabelled data sets.
Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation and bioinformatics where they produced results comparable to and in some cases superior to human experts.
Deep learning is a class of machine learning algorithms that:
• use a cascade of many layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. The algorithms may be supervised or unsupervised and applications include pattern analysis (unsupervised) and classification (supervised);
• are based on the (unsupervised) learning of multiple levels of features or representations of the data. Higher level features are derived from lower level features to form a hierarchical representation;
• are part of the broader machine learning field of learning representations of data;
• learn multiple levels of representations that correspond to different levels of abstraction, the levels form a hierarchy of concepts; and
• use some form of gradient descent for training.
These definitions have in common (1) multiple layers of nonlinear processing units and (2) the supervised or unsupervised learning of feature representations in each layer, with the layers forming a hierarchy from low-level to high-level features. The composition of a layer of nonlinear processing units used in a deep learning algorithm depends on the problem to be solved.
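The "gradient descent" training referred to in the list above can be illustrated with a deliberately minimal sketch that fits a single weight by repeatedly stepping against the gradient of a squared-error loss. This toy example is offered only to make the principle concrete; deep learning frameworks apply the same update rule across millions of parameters:

```python
def gradient_descent(xs, ys, lr=0.01, steps=500):
    """Fit y = w * x by minimising mean squared error with plain
    gradient descent, the update rule the text refers to."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Data generated from y = 3x; the fitted weight converges to ~3.
w = gradient_descent([1.0, 2.0, 3.0], [3.0, 6.0, 9.0])
print(round(w, 3))  # 3.0
```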
The assumption underlying distributed representations is that observed data are generated by the interactions of layered factors. Deep learning adds the assumption that these layers of factors correspond to levels of abstraction or composition. Varying numbers of layers and layer sizes can provide different amounts of abstraction. Deep learning exploits this idea of hierarchical explanatory factors where higher level, more abstract concepts are learned from the lower level ones. Deep learning architectures are often constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features are useful for improving performance. For supervised learning tasks, deep learning methods obviate feature engineering, by translating the data into compact intermediate representations akin to principal components, and derive layered structures that remove redundancy in representation.
Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabelled data are more abundant than labelled data. Examples of deep structures that can be trained in an unsupervised manner are neural history compressors and deep belief networks.
A deep neural network (DNN) is an artificial neural network (ANN) with multiple hidden layers between the input and output layers. Similar to shallow ANNs, DNNs can model complex nonlinear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modelling complex data with fewer units than a similarly performing shallow network.
Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.
DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. Recurrent neural networks (RNNs), in which data can flow in any direction, are used for applications such as language modelling. Long short-term memory is particularly effective for this use.
Convolutional deep neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modelling for automatic speech recognition (ASR).
In order to automatically determine bubble sizes and bubble size distribution in the froth phase of a flotation cell it is necessary to digitally analyse images taken from the top of the froth phase of the cell. More specifically, images of the froth phase need to be analysed so as to segment the bubbles by determining the locations of the boundaries between them. The present disclosure presents a method and system for conducting such digital analysis and automatically generating a bubble segmentation image (as shown in Figure 3) from a froth image (as shown in Figure 2). The bubble segmentation image is essentially made up of only the boundaries between the bubbles visible in the froth image. Figure 1 is a schematic diagram which illustrates a flotation cell installation in which an exemplary system (100) for conducting such digital bubble segmentation analysis is used with a flotation cell (107). Although it will be appreciated that the system (100) may be sold as a standalone unit capable of receiving a digital froth image captured by an external imaging device, for present purposes it will be described as part of a measuring unit (102) arranged to be positioned above the flotation cell (107). The system (100) includes a digital camera (101), which could of course be a video camera, positioned at the top of a sun tube (103) which forms part of a housing (105) of the measuring unit (102). In use, the housing (105) is positioned over the flotation cell (107) with the lens of the camera (not shown) directed toward the top of the flotation cell. During operation, a froth phase (109) made up of a collection of bubbles will form at the top of the flotation cell. The top (111) of the froth phase defines the froth level and the bottom (113) of the froth phase represents the top of the pulp phase (115) of the flotation cell (107). A laser sensor or laser pointer (117) is provided in the sun tube (103) and is also directed at an angle towards the froth phase (109).
While not an essential part of the present invention it bears mention that the laser pointer (117) may be used to measure the height of the froth.
The digital camera (101) is in data communication with a processor which may of course be incorporated in a suitable computer infrastructure (121). The computer infrastructure (121) also includes a memory (125) for storing computer-readable program code and the processor (121) is arranged to execute the computer-readable program code. The computer-readable program code includes one or more deep learning network models which have been trained with one or more datasets of training images. It will be appreciated by those skilled in the art that the training images may include previously generated bubble segmentation images with boundary data of boundaries between bubbles already defined and added to the images.
In the present embodiment the deep learning network is a deep neural network (DNN) algorithm, more specifically a convolutional deep neural network (CNN) of the U-Net architecture, which has been applied in industry to self-driving cars and medical imaging. The U-Net receives a froth image as input and outputs the bubble segmentation image. A typical froth image used as input to the system (100) is shown in Figure 2 and a typical bubble segmentation image received as an output from the DNN algorithm is shown in Figure 3. Figure 4 shows the bubble segmentation image superimposed over the froth image. It will be clear to those skilled in the art that the segmentation image may represent an accurate estimation of the outlines or boundaries of the bubbles in the froth image. The U-Net model consists of a series of convolutional layers arranged in a "bottleneck" formation. Images, which can be considered high dimensional data points, are "squeezed" through a narrow information bottleneck, before again expanding back to the original dimensions at the output. However, instead of the output image being the same as the input image, the algorithm is configured to let the output image be the bubble segmentation boundaries. Therefore, during the training of the DNN, the model learns to identify features useful for identifying the bubble boundaries. The U-Net model combines the high level abstractions that are output from the successive convolutional layers with the lower level features that are fed through using "skip connections" from earlier layers in the network. It will, however, be clear to those skilled in the art that any alternative or additional deep learning architectures may be used in the disclosure including, but not limited to, FusionNet and DenseNet.
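The "bottleneck" and "skip connection" ideas described above can be illustrated, purely schematically, with the following sketch. It traces only the data flow (downsampling to a narrow bottleneck, upsampling back to the input resolution, and concatenating encoder features at the output); the pooling and nearest-neighbour operations are stand-ins for learned convolutional layers and are illustrative assumptions, not the actual U-Net implementation:

```python
import numpy as np

def downsample(x):
    """2x2 average pooling, standing in for a learned encoder stage."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling, standing in for a decoder stage."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_flow(image):
    """Trace the U-Net data flow: squeeze the image through a
    bottleneck, expand back to full resolution, and concatenate
    the skip connection carried over from the encoder."""
    skip = image                      # saved for the skip connection
    bottleneck = downsample(downsample(image))
    expanded = upsample(upsample(bottleneck))
    # channel-style concatenation of decoder output and skip features
    fused = np.stack([expanded, skip], axis=0)
    return bottleneck, fused

image = np.arange(16.0).reshape(4, 4)
bottleneck, fused = unet_flow(image)
print(bottleneck.shape, fused.shape)  # (1, 1) (2, 4, 4)
```

The 4x4 image is squeezed down to a single value and expanded back, with the original-resolution features rejoining the output via the skip connection, exactly the shape of information flow the text describes.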
The system described above may be used to implement a method in accordance with the disclosure. Various components of the system (500) used to implement the method are shown in more detail in the block diagram of Figure 5. The method may include the creation of the DNN training datasets by a training component (501) as described in more detail further below. Once the DNN has, however, been trained, the method is carried out by capturing a froth image using the digital camera or another image capturing component (503). The image is then communicated to the computer infrastructure where it may be received by a suitable image receiving component (505). Once received, the DNN is applied to the froth image by a DNN application component (507) and boundaries between the various bubbles in the froth image are identified through the application of the DNN. An image construction component (509) then constructs the bubble segmentation image including identified boundary data representing the boundaries between the bubbles visible in the froth image. The bubble segmentation image is then outputted by an output component (511) to further processing modules which fall outside the scope of this disclosure. It will, however, be appreciated by those skilled in the art that the further processing modules may use the bubble segmentation image with the identified bubble boundary data to determine the bubble sizes and bubble size distributions.
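A minimal sketch of how components such as those of Figure 5 might be wired together is given below. The class and method names are hypothetical stand-ins chosen for illustration only, and the "model" is a dummy thresholding callable rather than the trained DNN:

```python
class ImageReceiver:
    """Receives a froth image from a capturing component."""
    def receive(self, image):
        return image

class BoundaryIdentifier:
    """Applies a segmentation model to a froth image. The model is
    an injected callable here, not the trained DNN itself."""
    def __init__(self, model):
        self.model = model
    def segment(self, image):
        return self.model(image)

class ImageOutput:
    """Hands the segmentation image to downstream modules."""
    def __init__(self):
        self.sent = []
    def output(self, segmentation):
        self.sent.append(segmentation)
        return segmentation

def run_pipeline(image, model):
    """Wire the components in the order the text describes:
    receive -> apply DNN -> construct segmentation -> output."""
    receiver = ImageReceiver()
    identifier = BoundaryIdentifier(model)
    out = ImageOutput()
    froth = receiver.receive(image)
    seg = identifier.segment(froth)
    return out.output(seg)

# A dummy "model" that thresholds pixel values to fake a boundary map.
dummy_model = lambda img: [[1 if px > 0.5 else 0 for px in row] for row in img]
print(run_pipeline([[0.2, 0.9], [0.7, 0.1]], dummy_model))  # [[0, 1], [1, 0]]
```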
It is foreseen that the method and system for generating bubble segmentation images may be used for continuous analysis of the froth phase of the flotation cell. By analysing successive froth images in near real-time as described above, various parameters of the flotation cell's performance may be derived, for example, the froth velocity, froth height, froth stability, bubble size and bubble size distribution. It should be appreciated that the accurate measurement of these parameters in near real-time may be used to derive critical operational parameters of the flotation cell which may serve as indicators for input adjustments to be made to the flotation cell to ensure efficient operation.
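Froth velocity, one of the parameters mentioned above, is commonly estimated by measuring the displacement between successive frames. The text does not prescribe a particular method, so the FFT-based cross-correlation below is offered only as one plausible approach, assuming a constant frame rate so that displacement per frame maps directly to velocity:

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Return (dy, dx) such that frame_b is approximately frame_a
    displaced by (dy, dx); the peak of the FFT-based cross-correlation
    marks the displacement between the two frames."""
    spectrum = np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))
    corr = np.fft.ifft2(spectrum).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: a frame shifted by (3, -2) is recovered exactly.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
moved = np.roll(frame, shift=(3, -2), axis=(0, 1))
print(estimate_shift(frame, moved))  # (3, -2)
```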
As will be appreciated, the compiling of the DNN training datasets and the actual training of the DNN may be essential aspects required for the effective operation of the method and system disclosed. To ensure accurate identification by the DNN of the bubble segmentation boundaries, the DNN typically requires substantial amounts of labelled data for training. The DNN may therefore be trained with datasets of previously generated or measured training images which may typically be labelled. Each training image may include defined bubble segmentation boundaries which, in combination with other defined bubble segmentation boundary images that make up the dataset effectively teaches the DNN to identify features useful for identifying bubble boundaries in input images automatically. Since hand segmenting thousands of images is an extremely time-consuming task, it may not be practical to use this technique to gather training image data. As an alternative, already available, commercially used products such as the Lynxx particle size analyser, the applicant's existing bubble/rock segmentation product, may be used to provide rough segmentation data that may then be used in the training dataset to train the DNN.
Experimental results have shown that this technique of training the DNN is effective and in some cases even manages to provide more robust and accurate segmentations than the data used to train it.
Another technique that may be used to generate training datasets involves simulating froth images using computer animation software. Unlike the capturing of real froth images, with computer simulations the true geometry of the bubbles used to create the rendering is accessible. As such, perfect segmentation of the bubbles can automatically be generated. Since the textures used to generate the geometries and colours of the bubbles are essentially mathematical expressions, infinite varieties of froth conditions may be artificially created, which helps supplement the training datasets for froth conditions for which good segmentations are not available.
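The idea of generating perfect segmentation labels from simulated geometry can be sketched as follows. The disc-based rendering below is a deliberately crude stand-in for the computer animation software mentioned above; because the circle centres and radii are known, the exact boundary mask is obtained for free alongside the rendered image:

```python
import numpy as np

def render_froth(shape, bubbles):
    """Render a toy 'froth' image as overlapping bright discs and,
    from the same known geometry, an exact boundary mask. Each
    bubble is given as (centre_y, centre_x, radius)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    image = np.zeros(shape)
    boundary = np.zeros(shape, dtype=bool)
    for cy, cx, r in bubbles:
        dist = np.hypot(yy - cy, xx - cx)
        # brightness falls off from the bubble centre, like a highlight
        image = np.maximum(image, np.clip(1.0 - dist / r, 0.0, 1.0))
        # pixels within ~0.7 px of the disc edge form the ground truth
        boundary |= np.abs(dist - r) < 0.7
    return image, boundary

image, boundary = render_froth((24, 24), [(8, 8, 5), (14, 16, 6)])
print(image.shape, bool(boundary.any()))  # (24, 24) True
```

Varying the centres, radii and falloff expressions produces arbitrarily many froth conditions, each paired with a perfect segmentation, which is the supplementation benefit described in the text.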
Figure 6 illustrates an example of a computing device (600) in which various aspects of the disclosure may be implemented. The computing device (600) may be embodied as any form of data processing device including a personal computing device (e.g. laptop or desktop computer), a server computer (which may be self-contained or physically distributed over a number of locations), a client computer, or a communication device, such as a mobile phone (e.g. cellular telephone), satellite phone, tablet computer, personal digital assistant or the like. Different embodiments of the computing device may dictate the inclusion or exclusion of various components or subsystems described below.
The computing device (600) may be suitable for storing and executing computer program code. The various participants and elements in the previously described system diagrams may use any suitable number of subsystems or components of the computing device (600) to facilitate the functions described herein. The computing device (600) may include subsystems or components interconnected via a communication infrastructure (605) (for example, a communications bus, a network, etc.). The computing device (600) may include one or more processors (610) and at least one memory component in the form of computer-readable media. The one or more processors (610) may include one or more of: CPUs, graphical processing units (GPUs), microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs) and the like. In some configurations, a number of processors may be provided and may be arranged to carry out calculations simultaneously. In some implementations various subsystems or components of the computing device (600) may be distributed over a number of physical locations (e.g. in a distributed, cluster or cloud-based computing configuration) and appropriate software units may be arranged to manage and/or process data on behalf of remote devices.
The memory components may include system memory (615), which may include read only memory (ROM) and random access memory (RAM). A basic input/output system (BIOS) may be stored in ROM. System software may be stored in the system memory (615) including operating system software. The memory components may also include secondary memory (620). The secondary memory (620) may include a fixed disk (621), such as a hard disk drive, and, optionally, one or more storage interfaces (622) for interfacing with storage components (623), such as removable storage components (e.g. magnetic tape, optical disk, flash memory drive, external hard drive, removable memory chip, etc.), network attached storage components (e.g. NAS drives), remote storage components (e.g. cloud-based storage) or the like.
The computing device (600) may include an external communications interface (630) for operation of the computing device (600) in a networked environment enabling transfer of data between multiple computing devices (600) and/or the Internet. Data transferred via the external communications interface (630) may be in the form of signals, which may be electronic, electromagnetic, optical, radio, or other types of signal. The external communications interface (630) may enable communication of data between the computing device (600) and other computing devices including servers and external storage facilities. Web services may be accessible by and/or from the computing device (600) via the communications interface (630). The external communications interface (630) may be configured for connection to wireless communication channels (e.g., a cellular telephone network, wireless local area network (e.g. using Wi-Fi™), satellite-phone network, Satellite Internet Network, etc.) and may include an associated wireless transfer element, such as an antenna and associated circuitry.
The computer-readable media in the form of the various memory components may provide storage of computer-executable instructions, data structures, program modules, software units and other data. A computer program product may be provided by a computer-readable medium having stored computer-readable program code executable by the central processor (610). A computer program product may be provided by a non-transient computer-readable medium, or may be provided via a signal or other transient means via the communications interface (630).
Interconnection via the communication infrastructure (605) allows the one or more processors (610) to communicate with each subsystem or component and to control the execution of instructions from the memory components, as well as the exchange of information between subsystems or components. Peripherals (such as printers, scanners, cameras, or the like) and input/output (I/O) devices (such as a mouse, touchpad, keyboard, microphone, touch-sensitive display, input buttons, speakers and the like) may couple to or be integrally formed with the computing device (600) either directly or via an I/O controller (635). One or more displays (645) (which may be touch-sensitive displays) may be coupled to or integrally formed with the computing device (600) via a display or video adapter (640).
The foregoing description has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure. In particular, it is foreseen that the processor for executing the functions of components described may be provided by hardware or by software units executing on the computer infrastructure. The software units may be stored in a memory component and instructions may be provided to the processor to carry out the functionality of the described components. In some cases, for example in a cloud computing implementation, software units arranged to manage and/or process data on behalf of the computer infrastructure may be provided remotely.
Any of the steps, operations, components or processes described herein may be performed or implemented with one or more hardware or software units, alone or in combination with other devices. In one embodiment, a software unit is implemented with a computer program product comprising a non-transient computer-readable medium containing computer program code, which can be executed by a processor for performing any or all of the steps, operations, or processes described. Software units or functions described in this application may be implemented as computer program code using any suitable computer language such as, for example, Java™, C++, or Perl™ using, for example, conventional or object-oriented techniques. The computer program code may be stored as a series of instructions, or commands on a non-transitory computer-readable medium, such as a random access memory (RAM), a read-only memory (ROM), a magnetic medium such as a hard-drive, or an optical medium such as a CD-ROM. Any such computer-readable medium may also reside on or within a single computational apparatus, and may be present on or within different computational apparatuses within a system or network.
Flowchart illustrations and block diagrams of methods, systems, and computer program products according to embodiments are used herein. Each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, may provide functions which may be implemented by computer readable program instructions. In some alternative implementations, the functions identified by the blocks may take place in a different order to that shown in the flowchart illustrations.
Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. The described operations may be embodied in software, firmware, hardware, or any combinations thereof.
The language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Finally, throughout the specification and claims, unless the context requires otherwise, the word 'comprise' or variations such as 'comprises' or 'comprising' will be understood to imply the inclusion of a stated integer or group of integers but not the exclusion of any other integer or group of integers.

Claims

CLAIMS:
1. A computer-implemented method of generating a bubble segmentation image from a digital froth image of a froth phase of a flotation cell, the method comprising:
receiving the froth image;
applying one or more deep learning networks to the froth image, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically;
generating the bubble segmentation image utilising the deep learning networks so that the bubble segmentation image includes identified boundary data representing boundaries between bubbles present in the froth image; and
outputting the bubble segmentation image.
2. A method as claimed in claim 1, wherein the step of applying the deep learning networks to the froth image includes applying one or more deep neural network (DNN) algorithms to the froth image.
3. A method as claimed in claim 2, wherein the method includes the steps of creating the dataset of training images, and training the DNN algorithms using the dataset.
4. A method as claimed in claim 2 or claim 3, wherein the method includes the step of determining if identified bubble boundary data includes false positives, and optionally retraining the DNN architecture if it does.
5. A method as claimed in any one of claims 2 to 4, wherein the method includes the step of training the DNN algorithms with a dataset comprising a plurality of labelled segmentation boundary training images generated utilising commercially available bubble or rock segmentation products.
6. A method as claimed in any one of claims 2 to 5, wherein the method includes the step of training the DNN algorithms with a dataset comprising a plurality of training images simulated using computer animation software.
7. A method as claimed in any one of claims 2 to 6, wherein the DNN is a convolutional deep neural network (CNN).
8. A method as claimed in claim 7, wherein the CNN utilises the U-Net convolutional network architecture.
9. A system for generating a bubble segmentation image from a froth image of a froth phase of a flotation cell, the system including a memory for storing computer-readable program code and a processor for executing the computer-readable program code, the system comprising:
an image receiving component arranged to receive a digital froth image from an image capturing component;
a bubble boundary identification component arranged to receive the digital froth image as input, identify bubble boundaries between bubbles in the digital froth image utilising one or more deep learning networks, create the bubble segmentation image and provide it as output, the deep learning networks having been trained with one or more datasets of training images of labelled, predetermined bubble segmentation images to learn to identify features useful for identifying bubble boundaries automatically; and
an image output component arranged to output the bubble segmentation image received from the bubble boundary identification component.
10. A system as claimed in claim 9, wherein the system includes an image capturing component forming part of a measuring unit arranged to be secured above a flotation cell.
11. A system as claimed in claim 9 or claim 10, wherein the one or more deep learning networks comprise one or more deep neural network (DNN) algorithms.
12. A system as claimed in claim 11, wherein the system includes a training component arranged to train the DNN with the training datasets comprising bubble segmentation training images generated utilising commercially available bubble or rock segmentation products.
13. A system as claimed in any one of claims 9 to 12, wherein the training datasets comprise bubble segmentation training images simulated using computer animation software.
14. A system as claimed in any one of claims 9 to 13, wherein the DNN is a convolutional deep neural network (CNN).
15. A system as claimed in claim 14, wherein the CNN utilises the U-Net convolutional network architecture.
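The method of claim 1 (receive a froth image, apply a trained deep learning network, output a bubble segmentation image with identified boundary data) can be sketched as follows. This is a minimal illustration only: the trained U-Net of claims 7 and 8 is not part of this disclosure, so the hypothetical `toy_boundary_model` below stands in for it with a simple gradient-magnitude heuristic; all names in the sketch are assumptions, not the patented implementation.

```python
import numpy as np


def toy_boundary_model(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained DNN of claims 2-8: marks a
    pixel as a candidate bubble boundary where the local intensity
    gradient is large. A real implementation would run a trained U-Net."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Threshold the gradient magnitude to a binary boundary map.
    return (magnitude > magnitude.mean() + magnitude.std()).astype(np.uint8)


def generate_bubble_segmentation(froth_image: np.ndarray,
                                 model=toy_boundary_model) -> np.ndarray:
    """Sketch of the claimed pipeline: receive the froth image, apply the
    (trained) network, output the bubble segmentation image."""
    if froth_image.ndim == 3:           # collapse an RGB frame to greyscale
        froth_image = froth_image.mean(axis=2)
    boundaries = model(froth_image)     # apply the segmentation network
    return boundaries                   # same H x W grid as the input


# Synthetic 64x64 "froth" frame: two bright circular bubbles on a dark field.
yy, xx = np.mgrid[0:64, 0:64]
frame = ((xx - 20) ** 2 + (yy - 20) ** 2 < 100).astype(float)
frame += ((xx - 45) ** 2 + (yy - 45) ** 2 < 64).astype(float)

seg = generate_bubble_segmentation(frame)
print(seg.shape, seg.dtype)
```

The boundary map returned by `generate_bubble_segmentation` corresponds to the "identified boundary data" of claim 1; in the claimed system the stand-in model is replaced by DNN algorithms trained on labelled segmentation images.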
PCT/IB2018/056802, priority date 2017-09-08, filed 2018-09-06: Froth segmentation in flotation cells, published as WO2019049060A1 (en)

Priority Applications (3)

Application Number | Priority Date | Filing Date | Title
AU2018327270A (published as AU2018327270A1) | 2017-09-08 | 2018-09-06 | Froth segmentation in flotation cells
BR112020004522-5A (published as BR112020004522B1) | 2017-09-08 | 2018-09-06 | SEGMENTATION IN FLOTATION CELLS
ZA2020/01710A (published as ZA202001710B) | 2017-09-08 | 2020-03-18 | Froth segmentation in flotation cells

Applications Claiming Priority (2)

Application Number | Priority Date
ZA2017/06114 | 2017-09-08
ZA201706114 | 2017-09-08

Publications (1)

Publication Number | Publication Date
WO2019049060A1 | 2019-03-14

Family

ID=65634915

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/IB2018/056802 (published as WO2019049060A1) | Froth segmentation in flotation cells | 2017-09-08 | 2018-09-06

Country Status (4)

Country | Document
AU | AU2018327270A1
CL | CL2020000547A1
WO | WO2019049060A1
ZA | ZA202001710B

Citations (3)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
WO1997045203A1 * | 1996-05-31 | 1997-12-04 | Baker Hughes Incorporated | Method and apparatus for controlling froth flotation machines
CN101036904A * | 2007-04-30 | 2007-09-19 | 中南大学 | Flotation froth image recognition device based on machine vision and the mine concentration grade forecast method
CN104331714A * | 2014-11-28 | 2015-02-04 | 福州大学 | Image data extraction and neural network modeling-based platinum flotation grade estimation method

Non-Patent Citations (1)

ALDRICH, C. et al.: "Online monitoring and control of froth flotation systems with machine vision: A review", International Journal of Mineral Processing, vol. 96, no. 1-4, 2010, pages 1-13, XP027206989, ISSN: 0301-7516. Retrieved from the Internet: <URL:https://www.sciencedirect.com/science/article/pii/S0301751610000633> [retrieved on 2018-11-06] *

Cited By (7)

* Cited by examiner, † Cited by third party

Publication Number | Priority Date | Publication Date | Assignee | Title
CN110689020A * | 2019-10-10 | 2020-01-14 | 湖南师范大学 | Segmentation method of mineral flotation froth image and electronic equipment
CN111259972A * | 2020-01-20 | 2020-06-09 | 北矿机电科技有限责任公司 | Flotation bubble identification method based on cascade classifier
CN111259972B * | 2020-01-20 | 2023-08-11 | 北矿机电科技有限责任公司 | Flotation bubble identification method based on cascade classifier
CN111325281A * | 2020-03-05 | 2020-06-23 | 新希望六和股份有限公司 | Deep learning network training method and device, computer equipment and storage medium
CN111325281B * | 2020-03-05 | 2023-10-27 | 新希望六和股份有限公司 | Training method and device for deep learning network, computer equipment and storage medium
CN113837193A * | 2021-09-23 | 2021-12-24 | 中南大学 | Zinc flotation froth image segmentation algorithm based on improved U-Net network
CN113837193B * | 2021-09-23 | 2023-09-01 | 中南大学 | Zinc flotation froth image segmentation method based on improved U-Net network

Also Published As

Publication Number | Publication Date
AU2018327270A1 | 2020-04-09
CL2020000547A1 | 2020-09-04
ZA202001710B | 2021-04-28
BR112020004522A2 | 2020-09-08

Legal Events

Code | Event | Details
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 18854608; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112020004522; Country of ref document: BR
ENP | Entry into the national phase | Ref document number: 2018327270; Country of ref document: AU; Date of ref document: 2018-09-06; Kind code of ref document: A
ENP | Entry into the national phase | Ref document number: 112020004522; Country of ref document: BR; Kind code of ref document: A2; Effective date: 2020-03-06
122 | EP: PCT application non-entry in European phase | Ref document number: 18854608; Country of ref document: EP; Kind code of ref document: A1