WO2018154411A2 - Vending machines and methods for dispensing products - Google Patents


Info

Publication number
WO2018154411A2
Authority
WO
WIPO (PCT)
Prior art keywords
product
layers
image
neural network
package type
Prior art date
Application number
PCT/IB2018/050881
Other languages
French (fr)
Other versions
WO2018154411A3 (en)
Inventor
Neeraj RAY
Ram Prakash Hanumanthappa
Sharath PURANIK
Original Assignee
Savis Retail Private Limited
Priority date
Filing date
Publication date
Application filed by Savis Retail Private Limited
Publication of WO2018154411A2
Publication of WO2018154411A3


Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07F - COIN-FREED OR LIKE APPARATUS
    • G07F9/00Details other than those peculiar to special kinds or types of apparatus
    • G07F9/02Devices for alarm or indication, e.g. when empty; Advertising arrangements in coin-freed apparatus
    • G07F9/026Devices for alarm or indication, e.g. when empty; Advertising arrangements in coin-freed apparatus for alarm, monitoring and auditing in vending machines or means for indication, e.g. when empty

Definitions

  • the present invention relates to use of vending machines for dispensing or effecting automated sales of products.
  • the invention is directed towards vending machines, methods of dispensing products from such vending machines, and methods for effecting payment for products or articles dispensed by a vending machine.
  • vending machines typically incorporate complex product selection, payment and dispensing mechanisms to enable purchasers to view and select a product, allow users to make payment by cash or card, dispense change, and deliver the product in a way that prevents the purchaser from accessing the entire product inventory. Incorporation of each of these mechanisms into a vending machine makes the machine expensive, bulky and complicated to use. Additionally, the incorporation of multiple diverse hardware elements significantly increases the cost of manufacture of such vending machines, while simultaneously reducing available space for product inventory.
  • the invention provides a system for dispensing products from a vending machine.
  • the system comprises a vending machine cabinet including at least one interior compartment configured to accommodate products for dispensing, at least one door configured to provide access to the at least one interior compartment, and an imaging apparatus configured to acquire images of at least part of the at least one interior compartment.
  • the system may include a product recognition apparatus communicably coupled with the imaging apparatus, and configured to identify products located within the at least one interior compartment.
  • Said product recognition apparatus may comprise a first neural network and a group of neural networks.
  • the first neural network may be configured to identify within an image received from the imaging apparatus (i) locations at which one or more products are positioned within the image, and (ii) for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location.
  • the group of neural networks may comprise at least two neural networks that are distinct from the first neural network, wherein each neural network within the group of neural networks is (i) associated with one of a plurality of package types that the first neural network is configured to recognize, and (ii) configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus.
  • the product recognition apparatus may be configured such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus, (i) image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network, and (ii) recognition of a product at the specific location within the image is based on an output from the second neural network.
  • the first neural network may comprise (i) a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer, (ii) a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer, and (iii) a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer.
  • the system may be configured (i) to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus, (ii) to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers, (iii) to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers, (iv) to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus, and (v) for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
  • the system may include a set of background detection layers of network nodes.
  • the set of background detection layers may include an input layer and an output layer, wherein (i) the system is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers, and (ii) the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
  • any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers may include one or more intermediate layers of network nodes disposed between an input layer and an output layer thereof.
  • the first neural network may be configured such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
  • any one or more of the neural networks of the system comprises a convolutional neural network.
  • the invention additionally provides a method for configuring a product recognition apparatus for neural network based recognition of products located within an interior compartment of a vending machine, based on one or more images of said interior compartment acquired at an imaging apparatus.
  • the method comprises (i) configuring a first neural network to identify within an image received from an imaging apparatus (a) locations at which one or more products are positioned within the image, and (b) for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location, (ii) configuring a group of neural networks comprising at least two neural networks that are distinct from the first neural network, such that each neural network within the group of neural networks is (c) associated with one of a plurality of package types that the first neural network is configured to recognize, and (d) configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus, and (iii) configuring the product recognition apparatus such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus, (e) image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network, and (f) recognition of a product at the specific location within the image is based on an output from the second neural network.
  • the method may comprise configuring the first neural network to include (i) a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer, (ii) a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer, and (iii) a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer.
  • the product recognition apparatus may be configured (i) to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus, (ii) to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers, (iii) to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers, (iv) to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus, and (v) for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
  • the method may comprise configuring the product recognition apparatus such that the first neural network includes a set of background detection layers of network nodes, the set of background detection layers comprising an input layer and an output layer, wherein (i) the product recognition apparatus is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers, and (ii) the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
  • any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers may include one or more intermediate layers of network nodes disposed between an input layer and an output layer thereof.
  • the method may further comprise configuring the first neural network such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
  • the method includes configuring the product recognition apparatus such that any one or more of the neural networks of said product recognition apparatus comprises a convolutional neural network.
  • Configuring the product recognition apparatus may further comprise responding to detection of an unrecognizable product within an image received from the imaging apparatus with the steps of (i) in response to determining that the unrecognizable product comprises a previously unrecognizable package type, (a) generate an additional neural network within the set of product type identifier networks and uniquely associate the generated neural network with the previously unrecognizable package type, (b) generate an additional set of package type detector layers within the first neural network, said additional set of package type detector layers comprising an input layer and an output layer, and associate the generated additional set of package type detector layers with the previously unrecognizable package type, and (c) input training data corresponding to the previously unrecognizable product to one or both of the generated additional neural network and the generated additional set of package type detector layers, and (ii) in response to determining that the unrecognizable product comprises a recognizable package type, (d) identify within the set of product type identifier networks, a neural network associated with the recognizable package type, and (e) input training data corresponding to the unrecognizable product to said identified neural network.
  • the invention may additionally provide a method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine.
  • the method comprises the steps of (i) positioning a first product having a defined package type and a defined product identity at a defined first location within the interior compartment, (ii) triggering video acquisition mode at an imaging apparatus configured to acquire a video feed of the interior compartment, (iii) for the duration of the video feed, maintain the first product at the defined first location, while implementing one or more of placement, removal or movement of other products at or between various other locations within the interior compartment, (iv) extract a plurality of image frames from the acquired video feed, and (v) utilize image information from the extracted image frames as training data corresponding to the defined package type or the defined product identity.
  • the invention may also provide a method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine, comprising the steps of (i) obtaining an image of an interior compartment of a vending machine, said interior compartment having one or more products positioned therewithin, (ii) tagging a product within the image by selecting a first image segment comprising a portion of the image which contains a top-center region of the product, (iii) labeling the first image segment with a label identifying the package type or the product identity, (iv) generating one or more variant images corresponding to the identified package type or the product identity, wherein generating a variant image comprises generating a second image segment - such that the second image segment comprises (a) at least a sub-set of pixels within the first image segment, which sub-set of pixels have been used to image the top-center region of the product, and (b) either (I) the first image segment comprises at least a second sub-set of pixels that are not included within the second image segment, and / or (II) the second image segment includes a third sub-set of pixels that lie within the obtained image but are not included within the first image segment.
  • the invention may additionally provide one or more computer program products for implementing any of the above methods.
  • Said computer program product(s) may comprise a computer usable medium having a computer readable program code embodied therein, the computer readable program code comprising instructions for any one or more of the method steps described above, and in the following detailed description.
  • Figure 1 illustrates an external view of a vending machine according to the present invention.
  • Figure 2A illustrates an embodiment of a vending machine cabinet.
  • Figures 2B and 2C illustrate a horizontal partition of a vending machine.
  • Figure 3 illustrates an exemplary neural network.
  • Figure 4 is an object diagram representing a product recognition apparatus.
  • Figure 5 illustrates a method of product classification and identification.
  • Figure 6 illustrates an exemplary configuration for a first neural network.
  • Figure 7 illustrates a method of identifying package types and locations based on an input image information.
  • Figure 8 illustrates a method of re-configuring a product recognition apparatus of the present invention.
  • Figure 9 illustrates a generalized method for training a neural network to recognize package types or product types.
  • Figures 10 and 11 illustrate methods for efficiently generating training data for training neural networks.
  • Figure 12 illustrates a method for authenticating product identifications.
  • Figure 13 illustrates communication flow in operating a vending machine in accordance with the teachings of the present invention.
  • Figure 14 illustrates control components of a vending machine.
  • Figures 15A to 15C illustrate a process flow involved in dispensing products from a vending machine.
  • Figure 16 illustrates an exemplary computing system for implementing the present invention.

Detailed Description
  • the present invention provides novel and inventive vending machines and methods for configuring such vending machines, and for purchasing and dispensing articles from such vending machines.
  • the invention provides novel and inventive technologies for enabling recognition of products disposed or located within a vending machine, or removed from a vending machine to enable inventory control and customer billing.
  • the invention provides advanced image recognition techniques based on adaptive classification systems and in particular specific arrangements and configurations of neural networks for the purpose of product recognition.
  • the present invention incorporates by reference the disclosure in Indian Patent Application No. 201641034130 dated October 5, 2016.
  • FIG. 1 illustrates an external view of a vending machine 100 manufactured and configured in accordance with the teachings of the present invention.
  • Vending machine 100 comprises a cabinet 102 with an interior space for storage of articles.
  • Cabinet 102 may optionally include (i) a compressor cabinet 104 for housing a compressor or other equipment for temperature and / or climate control within the vending machine and (ii) a control equipment access panel 110, which access panel enables access to control and communication components that are disposed within vending machine 100 for the purpose of operating the vending machine.
  • Vending machine 100 is also shown with a plurality of doors 106a to 106d - each of which doors provides access to the interior space within cabinet 102.
  • cabinet 102 may be provided with either a single door or multiple doors (as illustrated), depending on the specific configuration of the vending machine. In embodiments where the vending machine is provided with multiple doors, each door may permit access to a corresponding compartment or partitioned storage space within cabinet 102.
  • Vending machine 100 may additionally include a display panel or signage panel 108 used to display signage or ads.
  • display panel 108 may comprise a CRT, LCD or plasma display.
  • FIG. 2A illustrates an embodiment of cabinet 102.
  • the vertical sidewalls and top and bottom walls of cabinet 102 define an interior compartment 112, which interior compartment may be used to house various components of vending machine 100, as well as articles that require to be dispensed by the vending machine.
  • internal surfaces of vertical sidewalls of cabinet 102 may be provided with support members (which support members may comprise brackets, slots, lugs, grooves, raceways or other members) which are located and configured to enable shelves, trays or any other partitioning/storage members to be affixed within cabinet 102.
  • Interior compartment 112 is accessible through the front side - which is an open side, and which may be configured to enable one or more doors to be mounted thereon.
  • FIG. 2A additionally illustrates cabinet 102 as having at least one (and preferably a plurality of) horizontally oriented partition(s) (horizontal partitions) 114a to 114c mounted horizontally within interior compartment 112.
  • horizontal partitions 114a to 114c may comprise one or more shelves or trays that may be used to store articles intended to be dispensed from vending machine 100, which horizontal partition(s) may optionally be configured and sized such that affixing said horizontal partition(s) within cabinet 102 serves to partition or compartmentalize cabinet 102 into a plurality of sub-compartments.
  • Embodiments of horizontal partitions 114a to 114c are described in more detail below.
  • FIG. 2B illustrates an embodiment of horizontal partitions 114a, 114b or 114c of a type that may be used for sub-compartmentalization and storage within vending machine 100.
  • the horizontal partition consists of a partition chassis, said partition chassis including at least one base plate 202 and at least one (and preferably a plurality of) partition tray(s) 204a to 204d mounted independent of (and in isolation from) each other on said base plate 202.
  • the partition chassis may in an embodiment be sized and configured such that it can be inserted or slotted into vending cabinet 102 and supported in a desired position either by support members or by virtue of one or more fasteners including without limitation screws, bolts, lugs or rivets.
  • Base plate 202 of the partition chassis may include a plurality of holes or perforations. Said holes or perforations enable circulation of air throughout cabinet 102 - which is particularly advantageous in temperature and / or climate controlled cabinets. Base plate 202 may additionally be provided with one or more holes sized to accept mounting fasteners such as bolts, screws or rivets. Likewise, each partition tray may be perforated.
  • FIG. 2C illustrates an exploded view of a partially assembled partition chassis of horizontal partition 114a - comprising partition tray 204 mounted on base plate 202.
  • partition tray 204a is mounted on bracket 206 - which bracket 206 is in turn coupled to base plate 202 by means of load cell 208.
  • partition tray 204 may be mounted on bracket 206 by means of fasteners 210 (e.g. bolts, screws or rivets) passing through one or more holes 214, 214' provided on partition tray 204, and through corresponding holes on bracket 206.
  • bracket 206 may be mounted by means of one or more fasteners 212 onto load cell 208.
  • Load cell 208 may in turn be mounted by means of fasteners 216 and holes 218, 218' onto the surface of base plate 202. While in the illustrated embodiment, partition tray 204 is mounted on base plate 202 using a single bracket and single load cell, it would be understood that other embodiments involving multiple brackets or multiple load cells are equally implementable.
  • FIGS. 2B and 2C are only exemplary, and the horizontal partitions may comprise either fewer or more parts than are illustrated in the exploded view.
  • one or more of the fasteners may be done away with, and one or more of the illustrated component parts may either be welded together or otherwise unitarily integrated with a view to mount one or more partition tray plates on a base plate by means of one or more load cells.
  • Load cell 208 may comprise any type of load cell or load sensor capable of detecting and signaling a load state of (i.e. weight / load applied to) the partition tray to which said load cell or load sensor is coupled.
  • load cell 208 may comprise load beams, strain gauges and associated electronic or analogue components for signaling a change in load.
  • the load cell or load sensor is mounted between bracket 206 and base plate 202 such that any change in load / weight placed on partition tray 204 is detected and signaled by load cell 208.
  • load cell 208 is a single point load cell.
  • each partition tray may be allocated for storing products of a single / specific product type.
  • Each partition tray may accordingly have a predetermined per-unit product weight associated therewith i.e. the per-unit weight of the product type associated with said partition tray.
  • based on (i) the weight of product(s) removed from partition trays where load state changes have been detected, (ii) the per-unit product weight associated with said partition trays, and optionally (iii) a per-unit product price for the product type associated with the concerned partition tray, the vending machine enables calculation of the number of product units that have been removed by a customer, or optionally the total price of products that have been removed by a customer.
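As an illustration of this arithmetic, the following is a minimal sketch; the function name, tolerance handling and prices are hypothetical, not taken from the patent:

```python
def units_removed(weight_delta_grams: float, per_unit_weight_grams: float,
                  tolerance: float = 0.05) -> int:
    """Estimate how many product units were removed from a partition tray.

    weight_delta_grams: decrease in load reported by the tray's load cell.
    per_unit_weight_grams: known per-unit weight of the product type
    assigned to that tray.
    tolerance: allowed fractional deviation before the reading is rejected
    (an assumed safeguard; the patent does not specify one).
    """
    units = round(weight_delta_grams / per_unit_weight_grams)
    expected = units * per_unit_weight_grams
    if units < 0 or abs(weight_delta_grams - expected) > tolerance * per_unit_weight_grams:
        raise ValueError("load change inconsistent with whole product units")
    return units

# Example: a 660 g drop on a tray holding 330 g cans -> 2 units removed.
count = units_removed(660.0, 330.0)
total_price = count * 1.50  # hypothetical per-unit price for that tray
```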
  • the present invention implements image recognition techniques based on adaptive classification systems or neural networks for the purposes of tracking product inventory within the vending machine.
  • the vending machine is provided with one or more imaging apparatuses, such as for example cameras or image sensors that are positioned and configured to monitor the interior compartment of the vending machine, and more particularly, the products positioned within the interior compartment.
  • each horizontal partition positioned within the interior compartment may have an imaging apparatus dedicated thereto, which imaging apparatus is used to monitor products located on said horizontal partition.
  • each imaging apparatus may be disposed within the interior compartment such that it provides a view of the products located on a horizontal partition from above - for example, an imaging apparatus may be located on an interior wall of the vending machine cabinet above a horizontal partition that is monitored by it, and inclined downwards towards the horizontal partition such that the imaging apparatus obtains a top perspective image feed of the horizontal partition and objects located thereon.
  • neural networks emulate higher order brain functions such as memory, learning and / or pattern perception / recognition. Such systems may be trained to model regions in which particular features or characteristics of an input signal may be distributed. By accurately modeling such regions, a neural network is capable of recognizing whether unknown data received by it belongs to a particular class represented by the modeled region. The modeling may be accomplished by presenting the neural network with a number of training signals belonging to the known classes of interest. During training, each of the training signals and the class to which a signal belongs are provided to the neural network. The neural network stores the information and generates a model of the region which includes signals of a particular class.
  • Figure 3 illustrates an exemplary neural network 300 including an input layer 302 having a first plurality of nodes 302a to 302n.
  • Each of the input layer nodes 302a to 302n receives one of a plurality of input features f_a to f_n provided thereto.
  • Intermediate layer 304 includes a second plurality of nodes 304a to 304m.
  • Each of the nodes 304a to 304m is coupled to at least one of the input layer nodes 302a to 302n.
  • An output layer 306 includes a third plurality of nodes 306a to 306l, wherein each of the nodes 306a to 306l is coupled to at least one of intermediate layer nodes 304a to 304m.
  • each of the nodes represented in the neural network may comprise a corresponding weight associated therewith, and stored in a processor memory.
  • the neural network receives an input vector representing image information, and processes the content of the input vector at the individual nodes of the neural network by applying the weights associated with said node to the vector content.
  • Each layer of nodes communicates output data to the next layer of the network, until the output layer of the network generates a probability value representing the probability that an image represented by the input vector is an image of a class that the neural network has been trained to recognize.
  • the training process in turn iteratively adapts or modifies the weights of each node so as to improve the accuracy of the neural network in identifying images within a class of images that the neural network is being trained to identify.
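To make the forward-pass and weight-update cycle concrete, here is a minimal PyTorch sketch of a network of the Figure 3 shape; the layer sizes, optimizer, learning rate and class count are illustrative assumptions, not specified by the patent:

```python
import torch
import torch.nn as nn

# Toy fully connected network mirroring Figure 3: an input layer, one
# intermediate layer, and an output layer producing one score per class.
net = nn.Sequential(
    nn.Linear(64, 32),   # input features f_a..f_n -> intermediate nodes
    nn.ReLU(),
    nn.Linear(32, 5),    # intermediate nodes -> one score per class
)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training iteration: the node weights are adjusted so the network's
# output better matches the known class of each training signal.
x = torch.randn(8, 64)            # batch of input vectors
y = torch.randint(0, 5, (8,))     # known classes of the training signals
optimizer.zero_grad()
loss = loss_fn(net(x), y)
loss.backward()
optimizer.step()

# Inference: softmax converts the output scores into class probabilities,
# i.e. the probability that the input belongs to each trained class.
probs = torch.softmax(net(x[:1]), dim=1)
```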
  • FIG. 4 illustrates an object diagram representing a product recognition apparatus 400 in accordance with the teachings of the present invention.
  • Apparatus 400 comprises an imaging apparatus 406 communicably coupled with (i) neural network NN1 (402) - which neural network is configured for identifying a package type of a product and also the location of the top-center region of said product within the vending machine and (ii) a plurality of neural networks NN2 to NNn (404a to 404n) - which plurality of neural networks are configured to identify specific products (i.e. product types) corresponding to a package type.
  • package type may be generally understood as comprising a classification corresponding to the type of packaging associated with a product - for example box, tetrapack, can, large bottle, small bottle, jar, bag, envelope, etc.
  • product type may be understood as a specific product within a package type.
  • Coke, Pepsi and Redbull may comprise specific product types within the "can" or "bottle" package type, while each different brand or flavor of potato crisps may comprise a separate product type within the "bag" package type.
  • each different packaging variation within a package type could be a separate product type of that package type.
  • each of the neural networks NN2 to NNn within the set of product type identifier networks 404 is associated with a separate or unique package type and is configured to classify specific products within such package type.
  • NN2 may be associated with the package type "can" and may be configured to classify cans as Coke cans, Pepsi cans and Redbull cans
  • NN3 may be associated with the package type bag and may be configured to identify or classify different brands or varieties of products within bags.
  • neural network NN1 may be trained to classify different package types based on specific features such as size and shape
  • neural networks NN2 to NNn may be trained to classify products within a specific package type based on features such as packaging characteristics such as color schemes, logos, patterns etc. on the packaging.
  • when a new product type is introduced, the product type identifier networks can be trained to recognize the new product type based on a significantly smaller set of training data in comparison with the training data that would be required if a single neural network were used to identify both product and package type.
  • Figure 5 comprises a flowchart briefly describing a method of product classification and identification based on the product recognition apparatus 400 of Figure 4.
  • Step 502 comprises receiving at first neural network NN1, image information representing an image feed received from imaging apparatus 406 - which imaging apparatus is positioned such that a product storage space corresponding to an interior compartment of a vending machine (for example the product storage space corresponding to a horizontal partition tray within the vending machine) is within the image capture region of the imaging apparatus.
  • the image information provided as input to the first neural network NN1 may be provided in the form of an image vector.
  • based on the output from first neural network NN1, the method identifies (i) one or more locations of products within the product storage space - which in an embodiment may comprise one or more locations at which neural network NN1 has detected top-center regions of products positioned within the product storage space, and (ii) a package type corresponding to the package disposed at each of said one or more product locations.
  • image information representing a W x H image pixel region that contains the identified product location is provided in the form of an input vector to a second neural network - which second neural network is selected from a plurality of neural networks within the plurality of product type identifier networks 404.
  • the selection of a second neural network from among the plurality of neural networks within the product type identifier networks is based on the package type corresponding to the identified product location (within the W x H pixel region) that has been identified by the first neural network.
  • the selected second neural network is a neural network (within the product type identifier networks) that has been associated with the identified package type.
  • Step 508 thereafter comprises identifying the specific product located within the W x H pixel region, based on output received from the selected second neural network. It would be understood that in the event the first neural network is unable to identify a particular package type, or the second neural network is unable to identify a specific product of a known package type, the apparatus may return an output indicating a failure to recognize an object or product located within the vending machine.
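The flow of steps 502 to 508 can be summarized in a short sketch; the interfaces of NN1 and the sub-classifiers, the crop size, and the stub networks in the usage example are all hypothetical:

```python
import numpy as np

def recognize_products(image, nn1, package_type_networks, crop_w=64, crop_h=64):
    """Two-stage recognition per Figure 5 (a sketch; interfaces assumed).

    nn1(image) is assumed to return a list of (x, y, package_type) tuples:
    the top-center location of each detected product and its package type.
    package_type_networks maps each package type to a sub-classifier that
    takes a W x H crop and returns a product type (or None if unknown).
    """
    results = []
    for x, y, package_type in nn1(image):
        sub_net = package_type_networks.get(package_type)
        if sub_net is None:
            # No network for this package type: report a recognition failure.
            results.append((x, y, package_type, "UNRECOGNIZED"))
            continue
        # Extract the W x H pixel region around the detected location.
        top, left = max(0, y - crop_h // 2), max(0, x - crop_w // 2)
        crop = image[top:top + crop_h, left:left + crop_w]
        product = sub_net(crop)
        results.append((x, y, package_type, product or "UNRECOGNIZED"))
    return results

# Toy usage with stub networks standing in for NN1 and NN2..NNn:
img = np.zeros((480, 640, 3))
dets = recognize_products(img,
                          nn1=lambda im: [(100, 50, "can")],
                          package_type_networks={"can": lambda c: "Coke"})
```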
  • Figure 6 illustrates an exemplary embodiment 600 of a configuration for first neural network NN1.
  • first neural network NN1 comprises a first set of neural network layers 602 (hereinafter referred to as the set of common layers).
  • the first set of neural network layers 602 comprises individual neural network layers 602a to 602n.
  • Each network layer 602a to 602n in turn comprises one or more neural network nodes.
  • First neural network NN1 comprises a plurality 604 of sets of package type detector layers 604a to 604n (hereinafter referred to as the sets of package type detector layers). Each set of package type detector layers in turn comprises a distinct or unique set of neural network layers.
  • package type detector layer set 604a comprises neural network layers 604a1 to 604al
  • package type detector layer set 604b comprises neural network layers 604b1 to 604bm
  • package type detector layer set 604n comprises neural network layers 604n1 to 604np.
  • Each of package type detector layer sets 604a to 604n is associated with (and iteratively trained for identifying) a specific package type - and the output from each package type detector layer set provides a likelihood or a determination regarding the presence of the corresponding package type at a pixel location or at a specific location within a particular image region.
  • First neural network NN1 additionally comprises a set of neural network layers 604x comprising neural network layers 604x1 to 604xq - which set of neural network layers 604x is iteratively trained for identifying portions of an image which contain only the background or portions of the interior of the vending machine cabinet (and which do not contain any specific package or product).
  • Said set of neural network layers 604x (hereinafter referred to as the set of background layers) is configured to provide a likelihood or a determination regarding the presence of background features (or in other words the absence of a specific product or package type) at a pixel location or at a specific location within a particular image region.
  • the implementation of a specific set of layers for recognizing background has been found to significantly improve product recognition - by reducing the likelihood that background portions of the vending machine are incorrectly categorized as an "unrecognized" product.
  • First neural network NNl further comprises a set of neural network layers 606, comprising neural network layers 606a to 606r (hereinafter referred to as the set of product center detector layers) - which set of neural network layers 606 is iteratively trained for identifying portions of an image at which top-center regions of products disposed within the vending machine are located.
  • the set of neural network layers 606 is configured to provide a likelihood or determination regarding the presence of top-center regions of any product at a pixel location or a specific location within the image.
  • first neural network NN1 is capable of providing an output identifying pixel locations within an image at which top-center regions of one or more products are located (or at which there is a high probability that such top-center regions of one or more products are located).
  • the first neural network NN1 is based on a convolutional neural network (for example a Visual Geometry Group (VGG) network) - which has been configured in accordance with the specific teachings within this disclosure.
  • a critical feature of the configuration of first neural network NN1 is that all input vectors communicated to first neural network NN1 are input to input layer 602a of the set of common layers 602.
  • a corresponding output vector generated at the output layer 602n is thereafter simultaneously communicated as input to (i) each of the plurality 604 of sets of package type detector layers 604a to 604n, (ii) the set of background layers 604x, and (iii) the set of product center detector layers 606.
  • based on the output vector generated at output layer 602n of the set of common layers 602, each of the plurality 604 of sets of package type detector layers (604a to 604n) generates a corresponding output in the form of an output vector.
  • Output from each of the plurality 604 of sets of package type detector layers 604a to 604n is used to generate a heatmap or location map corresponding to the image region represented by the input vector (originally input into the set of common layers 602).
  • Each heatmap identifies the probability of a corresponding package type (i.e. that corresponds to the generating set of package detector layers) being located at one or more pixel locations within the image region.
  • each set of package type detector layers 604a to 604n may generate output, which output is used to generate a heatmap or location map corresponding to said specific set of package type detector layers.
  • an input vector provided to the set of common layers 602 would result in a plurality of heatmaps (i.e. heatmap a to heatmap n, one per set of package type detector layers).
  • simultaneously, based on the output vector generated at output layer 602n of the set of common layers, the set of background layers 604x generates a corresponding output in the form of an output vector. Output from the set of background layers 604x is used to generate a heatmap or location map corresponding to the image region represented by the input vector (originally input into the set of common layers 602), which heatmap identifies the probability of a particular pixel location within the image region representing background of the vending machine cabinet (i.e. representing the absence of any specific product at said pixel location).
  • the set of product center detector layers 606 generates a corresponding output in the form of an output vector.
  • Output from the set of product center detector layers 606 represents for each pixel location within an image region, the likelihood or probability that a top-center region of a product is located at said pixel location. Said output is used to identify locations within the image region at which top-centers of products positioned within the vending machine cabinet are located.
  • each neural network layer within any of the layer sets 602, 604a to 604n, 604x, 606 may comprise one or more neural network nodes.
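The Figure 6 arrangement (a shared trunk of common layers fanned out to per-package-type heads, a background head, and a product-center head) can be sketched in PyTorch as follows; the convolution sizes, channel counts and head depths are illustrative assumptions rather than the patent's actual configuration:

```python
import torch
import torch.nn as nn

class FirstNeuralNetwork(nn.Module):
    """Sketch of the Figure 6 layout: a shared convolutional trunk (the
    'set of common layers' 602) feeding parallel heads - one set of
    detector layers per package type (604a..604n), a background head
    (604x), and a product-center head (606)."""

    def __init__(self, package_types):
        super().__init__()
        self.common = nn.Sequential(          # set of common layers 602
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        def head():                           # one set of detector layers
            return nn.Sequential(
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),          # per-pixel score -> heatmap
            )
        self.package_heads = nn.ModuleDict({t: head() for t in package_types})
        self.background_head = head()         # set of background layers 604x
        self.center_head = head()             # product center detector layers 606

    def forward(self, x):
        # The trunk output (output layer 602n) is fanned out simultaneously
        # to every head, as the disclosure describes.
        shared = self.common(x)
        return {
            "package": {t: h(shared) for t, h in self.package_heads.items()},
            "background": self.background_head(shared),
            "center": self.center_head(shared),
        }

model = FirstNeuralNetwork(["can", "bottle", "bag"])
out = model(torch.randn(1, 3, 128, 128))  # dict of per-pixel heatmaps
```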
  • Figure 7 illustrates a method of identifying package types and their respective locations based on processing of an input vector representing image information received from an imaging apparatus associated with the vending machine.
  • Step 702 comprises providing as input, image information to an input layer of the set of common layers 602 within first neural network NN1.
  • output from an output layer of the set of common layers 602 is communicated to (i) one or more sets of package type detector layers 604a to 604n and (ii) the set of product center detector layers 606.
  • output from the output layer of the set of common layers 602 is additionally communicated to the set of background layers 604x.
  • the method identifies one or more locations within the image region represented by the input vector, at which top-center regions of products or product packages are located. Said identification is, in an embodiment, based on output from an output layer of the set of product center detector layers 606.
  • step 708 comprises identifying a product package type located at one or more regions (and preferably each of the one or more regions) of the image region under analysis - which identification is based on output from the set of package detector layers 604a to 604n and optionally on output from the set of background layers 604x.
  • steps 702 to 708 utilize and rely on the specific features of the configuration of first neural network NN1 that have been discussed above in connection with Figure 6.
  • steps 702 to 708 described above correspond to steps 502 and 504 of Figure 5 previously described.
  • a neural network associated with such package type is selected from among the set of product type identifier networks 404 - and image information representing a W x H image pixel region (that contains the top-center location of the identified package type) is input as an input vector to the selected neural network.
  • the output from such selected neural network identifies the specific product corresponding to the package type previously identified by first neural network NN1.
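One plausible decoding of the resulting heatmaps into (location, package type) detections, continuing the model sketch above, is shown below; the sigmoid, the threshold and the comparison against the background score are assumptions, since the patent does not fix an exact decoding rule:

```python
import torch

def decode_detections(out, center_threshold=0.5):
    """Turn the head outputs of FirstNeuralNetwork into (y, x, package_type)
    detections: threshold the product-center heatmap, then at each detected
    center pick the package type whose heatmap score beats the background
    score at that pixel. A sketch only."""
    center = torch.sigmoid(out["center"])[0, 0]          # H x W
    background = out["background"][0, 0]
    types = list(out["package"].keys())
    scores = torch.stack([out["package"][t][0, 0] for t in types])  # T x H x W
    detections = []
    for y, x in torch.nonzero(center > center_threshold):
        best = scores[:, y, x].argmax()
        if scores[best, y, x] > background[y, x]:        # not background
            detections.append((int(y), int(x), types[best]))
    return detections
```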
  • Figure 8 illustrates a method of re-configuring the product recognition apparatus 400 in response to detection of a product that the neural networks within said product recognition apparatus 400 have not been trained to identify.
  • Step 802 comprises the step of determining (or arriving at the conclusion) that a specific product is not recognizable by the product recognition apparatus 400. This determination may either arise by virtue of the apparatus failing to identify a product located within the vending machine cabinet, or alternatively by a user or operator responsible for training or maintaining the product recognition apparatus 400.
  • step 804 would comprise the steps of (a) generating a new neural network NNi within the set of product type identifier networks 404 and uniquely associating said new neural network NNi with the new package type, (b) generating a new set of package type detector layers 604i within the plurality of sets of package type detector layers 604 and associating said new set of package type detector layers 604i with the new package type, and (c) providing as input, training data corresponding to the new product (i) to said new set of package type detector layers 604i within first neural network NN1 and (ii) to the generated new neural network NNi within the plurality of sets of product type identifier networks 404.
  • step 806 comprises (a) identifying within the plurality of sets of product type identifier networks 404, a neural network, associated with the known package type, and (b) providing as input to the identified neural network training data corresponding to the new product.
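Building on the model sketch above, the new-package-type branch of Figure 8 (step 804) might be realized as follows; the layer shapes, crop size and class count of the new sub-network are assumptions:

```python
import torch.nn as nn

def add_package_type(model, package_type_networks, new_type):
    """Figure 8, step 804 (a sketch): add a fresh detector head 604i to the
    first network and a fresh sub-classifier network NNi for the new
    package type, both to be trained on images of the new product while
    the existing trunk and heads keep their weights."""
    # New set of package type detector layers within first neural network.
    model.package_heads[new_type] = nn.Sequential(
        nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 1),
    )
    # New product-type identifier network NNi for the new package type;
    # a 64 x 64 crop and 3 product classes are arbitrary choices here.
    package_type_networks[new_type] = nn.Sequential(
        nn.Flatten(),
        nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
        nn.Linear(128, 3),
    )
```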
  • FIG. 9 illustrates a generalized method for training a neural network of the present invention to recognize package types or product types.
  • Step 902 of the method comprises obtaining one or more images of the interior of the vending machine cabinet (or of horizontal partitions/vending machine trays positioned within the vending machine cabinet), wherein said images capture one or more products positioned within the vending machine cabinet / on the horizontal partitions.
  • an operator selects or segments a portion of said product through a user interface and tags said selected portions of the product with a label identifying the package type and / or product type.
  • selecting or segmenting a portion of an imaged product comprises selecting or segmenting a portion of the imaged product that contains the top-center region of the imaged product.
  • Step 906 comprises inputting image information corresponding to each selected and labeled image segment as training data for one or more of (i) the first neural network NN1 (i.e. the package type and product center identifier network 402), (ii) specific one or more sets of package type detector layers 604a to 604n, and / or (iii) a neural network within the set of product type identifier networks 404.
  • a labeled image segment is submitted as training data to a set of package type detector layers within first neural network NN1 that corresponds to the same package type as the labeled image segment.
  • a labeled image segment is submitted as training data to a specific neural network within the set of product type identifier networks 404, based on determining that the specific neural network is associated with / corresponds to the same package type as the labeled image segment.
  • Step 908 comprises utilizing the training data to train the relevant neural network.
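For the product-type identifier side, steps 906 to 908 might look like the following minimal sketch; the optimizer, epoch count and tensor shapes are assumptions, as the patent does not prescribe a training procedure:

```python
import torch
import torch.nn as nn

def train_product_network(sub_net, crops, labels, epochs=5, lr=1e-3):
    """Fit one product-type identifier network on labeled W x H segments
    of its package type (a sketch). crops is an N x 3 x H x W tensor of
    labeled image segments; labels is an N-vector of product-type indices.
    sub_net is assumed to map crops to per-class logits."""
    opt = torch.optim.Adam(sub_net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(sub_net(crops), labels)  # step 908: iterative fit
        loss.backward()
        opt.step()
    return sub_net
```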
  • Figure 10 illustrates a method for efficiently generating training data for training neural networks within the product recognition apparatus 400 in accordance with the present invention.
  • Step 1002 of the method comprises positioning a first product of a defined package type and defined product type at a specified location within the vending machine cabinet or on a vending machine tray / partition.
  • video acquisition is triggered at an imaging apparatus configured to acquire a video feed of the position of the first product within the vending machine cabinet (for example, the imaging apparatus may be positioned to acquire a video feed of the first product as well as the surrounding region or of the entire tray on which the product is located).
  • Step 1006 comprises maintaining, for the duration of the video feed, the first product at the specified first location, while placing/ removing and/ or replacing other products (of the same package type and / or product type, or different types) at various other positions within the field of view of the imaging apparatus (for example in the regions surrounding the first product).
  • Step 1008 comprises extracting images from the acquired video feed.
  • image information from the extracted images is utilized as training data for one or more of (i) the first neural network NN1 (i.e. the package type and product center identifier network 402) (ii) specific one or more sets of package type detector layers 604a to 604n and/ or (iii) a neural network within the set of product type identifier networks 404.
  • image information extracted from the video feed is submitted as training data to a set of package type detector layers within first neural network NN1 that corresponds to the same package type as the package type of the first product.
  • image information extracted from the video feed is submitted as training data to a specific neural network within the set of product type identifier networks 404, responsive to determining that the specific neural network is associated with / corresponds to the same package type as the first product.
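A sketch of the Figure 10 procedure using OpenCV follows; the video path, fixed bounding box, sampling stride and label handling are all hypothetical:

```python
import cv2

def frames_to_training_data(video_path, fixed_box, label, stride=10):
    """Figure 10 (a sketch): the first product stays at a fixed location
    for the whole video while other products are placed, removed or moved
    around it, so every sampled frame yields a crop at fixed_box that can
    be labeled automatically with the product's known package/product type.
    fixed_box = (x, y, w, h)."""
    x, y, w, h = fixed_box
    examples = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:            # sample every stride-th frame
            examples.append((frame[y:y + h, x:x + w].copy(), label))
        index += 1
    cap.release()
    return examples
```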
  • Figure 11 illustrates a method for efficiently generating training data for training neural networks within the product recognition apparatus 400 in accordance with the present invention.
  • Step 1102 comprises obtaining an image of a portion of the vending machine cabinet
  • an operator selects or segments a first image segment comprising a portion of a product through a user interface, and tags said selected portions of the product with a label identifying the package type and / or product type.
  • selecting or segmenting a portion of the product comprises selecting or segmenting a portion of the imaged product that contains the top-center region of the imaged product.
  • Step 1106 comprises generating one or more variant images corresponding to the package type and / or the product type identified in the label corresponding to the first image segment.
  • Generating each variant image comprises generating a second image segment - such that the second image segment comprises (i) at least a sub-set of the pixels within the first image segment, which sub-set of pixels have been used to represent or image the top-center region of the product, and (ii) either (a) the first image segment comprises at least a second sub-set of pixels that are not included within the second image segment and / or (b) the second image segment includes a third sub-set of pixels that lie within the image obtained at step 1102, but which are not included within the first image segment.
  • generating variant images may comprise any of (i) selecting a second image segment that surrounds the first image segment, (ii) selecting a second image segment that falls entirely within the first image segment, (iii) selecting a second image segment that comprises a part of the first image segment, and further comprises certain pixel regions that adjoin the first image segment, and (iv) cropping portions of the first image segment.
  • Step 1108 thereafter comprises utilizing the first image segment and the one or more variant images as training data to train the relevant neural network(s) within the product recognition apparatus 400.
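The variant-generation rules of step 1106 could be realized, for example, by jittering the selected box, as in this sketch; the shift and scale ranges are assumptions, and boxes are taken to be smaller than the image:

```python
import random

def variant_boxes(first_box, image_w, image_h, n=8, max_shift=0.25):
    """Figure 11, step 1106 (a sketch): generate variant crops by shifting
    or resizing the first segment's box. Shifts are kept small so the
    pixels imaging the product's top-center region stay inside every
    variant, while each variant gains or loses pixels relative to the
    first segment. first_box = (x, y, w, h); parameters illustrative."""
    x, y, w, h = first_box
    variants = []
    for _ in range(n):
        dx = int(w * random.uniform(-max_shift, max_shift))
        dy = int(h * random.uniform(-max_shift, max_shift))
        scale = random.uniform(1.0 - max_shift, 1.0 + max_shift)
        nw, nh = int(w * scale), int(h * scale)
        # Clamp the variant box to the image bounds.
        nx = min(max(0, x + dx), image_w - nw)
        ny = min(max(0, y + dy), image_h - nh)
        variants.append((nx, ny, nw, nh))
    return variants
```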
  • Figure 12 illustrates a method for authenticating product identifications that have been arrived at in accordance with the teachings of the present invention.
  • vending machines in accordance with the teachings of the present invention are configured to identify specific products that are removed from the vending machine by a user / customer, which identification may be based either on the load sensing mechanisms or the image recognition apparatuses described above. It will however be understood that in certain cases the product identifications made by either the load sensors or the image recognition apparatuses may be erroneous.
  • one or the other mechanism may be prone to spoofing by a user / customer - for example, (i) the load sensing mechanism may be spoofed or misled by a customer by removing a product from the vending machine and simultaneously replacing the product with another object of equal weight, or (ii) the image recognition apparatuses may be spoofed or misled by a customer replacing a product from the vending machine with a counterfeit similar looking product (e.g. replacing a full can of an aerated drink with an empty can of the same drink).
  • the method of Figure 12 enables authentication of the determinations by either mechanism, by comparison with a corresponding authentication by the other mechanism - and may be used to detect errors, identify attempts at theft or spoofing, or to raise a maintenance alert in case of a detected malfunction of one or the other product identification mechanisms.
  • Step 1202 comprises inputting image information corresponding to an article (for example a product that has been removed by a customer from the vending machine) into the neural networks of the product recognition apparatus 400 and identifying the article based on the output from the neural networks (in accordance with the teachings of Figure 5) .
  • Step 1204 comprises obtaining weight of the removed article based on signals obtained from one or more load sensors configured to detect load state changes associated with a vending machine tray on which the article was situated.
  • Step 1206 comprises identifying the article based on the detected weight and a per- unit product weight associated with the vending machine tray or with the load sensor(s) from which the load state change signal has been received (in accordance with the teachings above).
  • step 1208 comprises generating an authentication / verification decision concerning either (i) the identity of the article as received from the product recognition apparatus 400 based on image analysis or (ii) the identity of the article as determined based on load state changes - which authentication / verification decision is based on a determination of consistency between the findings based on image analysis and the findings based on load state changes.
  • a determination of consistency between said findings results in confirmation of said findings.
  • a determination of inconsistency between said findings results in generation of an error alert, a theft or spoofing alert, or a maintenance request.
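Step 1208's consistency check reduces to a comparison of the two independent identifications; a minimal sketch, with illustrative return codes not taken from the patent:

```python
def authenticate(image_id, weight_id):
    """Figure 12, step 1208 (a sketch): cross-check the image-based and
    load-based identifications of a removed article and decide whether to
    confirm the finding or raise an error / theft / maintenance alert.
    image_id / weight_id are product identifiers, or None on failure."""
    if image_id is not None and image_id == weight_id:
        return ("CONFIRMED", image_id)
    if image_id is None or weight_id is None:
        return ("MAINTENANCE_ALERT", None)    # one mechanism failed outright
    return ("SPOOFING_OR_ERROR_ALERT", None)  # the mechanisms disagree
```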
  • FIG. 13 is a high level communication flow diagram illustrating communications involved in operating a vending machine of the type described hereinabove.
  • vending machine 1302 is communicably coupled with remote server 1306.
  • the communication link between vending machine 1302 and remote server 1306 may comprise any wired or wireless communication link.
  • Data communication over the communication link may in an embodiment be achieved by way of any one or more communication protocols, including without limitation, TCP/IP communication protocol or a UDP protocol.
  • the underlying communication network used to implement the communication protocol may include any one or more of local area network, wide area network, broadband network or a combination of the above (such as the internet).
  • data communication may be implemented by any electrical, optical or wireless transmission media or link, including by way of example, by one or more of RF, infrared, acoustic, microwave, Bluetooth or other transmission media or link.
  • a customer seeking to operate vending machine 1302 requires access to a client device 1304 - which client device 1304 may comprise any client terminal, and in a preferred embodiment is a mobile communication device (such as a tablet, smart phone, mobile phone, phablet or personal digital assistant).
  • Client device 1304 likewise may be communicably coupled with vending machine 1302 as well as with remote server 1306 over independent communication channels — wherein the communication link may once again be implemented by any electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media or link.
  • the client device 1304 may communicate with vending machine 1302 through remote server 1306 (acting as an intermediate server) using standard communication methods - for example by means of TCP/IP or UDP based protocols.
  • using client device 1304 and remote server 1306, a customer may operate the vending machine of the present invention in accordance with the methods described hereinbelow.
  • vending machine 1302 comprises a vending machine (VM) controller 1402 that controls higher level functions and operations of vending machine 1302 through communication controller 1404.
  • Communication controller 1404 comprises (i) wireless communication controller 1406 that is configured to interface, control and / or communicate with devices or components wirelessly, and (ii) wired communication control 1408 that is configured to interface, control and / or communicate with devices or components over a wired connection.
  • Wireless communication controller 1406 communicates with server interface 1410 - which server interface 1410 enables remote server 1306 to communicate with vending machine 1302.
  • Wireless communication controller 1406 additionally communicates with client device interface 1412 - which client device interface enables client device 1304 to communicate with or operate vending machine 1302 (it will be understood that the communication between client device 1304 and vending machine 1302 may in an embodiment take place through remote server 1306 by virtue of one or more conventional communication protocols such as the TCP/IP or UDP protocol).
  • Wired communication controller 1408 is communicably coupled with load sensor controls 1414 and imaging apparatus controls 1418 (and in an embodiment, with electronic lock controls) to enable VM controller 1402 to respectively receive information regarding load state changes from one or more load cells or load sensors within vending machine 1302, and image information corresponding to images captured by an imaging apparatus.
  • Wired communication controller 1408 is additionally communicably coupled with security controls 1416 such that VM controller 1402 can selectively enable and disable access (for example by engaging or disengaging door locks) to one or more doors of vending machine 1302 with a view to allow and / or terminate access to one or more horizontal partitions within vending machine 1302. It would be understood that for the purposes of the present invention, product recognition apparatus 400 may be implemented either within vending machine 1302 (for example within VM controller 1402) or within the remote server 1306.
  • Figures 15A to 15C hereafter illustrate a process flow setting out the various steps involved in dispensing products stored within a vending machine of the type contemplated by the present disclosure, and which has been configured in accordance with the disclosure set out in connection with Figure 14 above.
  • an exemplary client device 1304 may comprise a mobile communication device having an internet or wireless data connection, and having a mobile software application installed thereon - which mobile software application is configured to implement some or all of the steps discussed in connection with Figures 15A to 15C. It will however be understood that this is only an exemplary embodiment, and the steps of Figures 15A to 15C may be implemented by any client device 1304 having the minimum capabilities that have been discussed previously.
  • Step 1502 comprises receiving at a client device 1304, information identifying a specific vending machine (i.e. a selected vending machine) from which a customer seeks to obtain a product.
  • The client device may receive such information in the form of a vending machine identifier received from the vending machine 1302 in the course of wireless (e.g. Bluetooth) communication with said vending machine (or using TCP/IP / UDP protocols through a remote server), or by way of user input at client device 1304 (based on a vending machine identifier displayed on the vending machine), or by way of an RFID, bar code or other unique identification markings that are displayed on vending machine 1302 and which can be scanned by client device 1304 or by one or more peripherals connected therewith.
  • The client device 1304 may send its GPS information (or rely on other proximity techniques such as Bluetooth beacons) to remote server 1306, and remote server 1306 may respond by sending client device 1304 the vending machine identifier corresponding to a vending machine present at the identified GPS location at which the client device 1304 is located.
  • Steps 1504 and 1506 comprise communicating from client device 1304 to remote server 1306 (i) information identifying the vending machine 1302 that a customer has selected for product purchase and (ii) a product identifier of at least one product that the customer intends to purchase from vending machine 1302 (i.e. the selected product).
  • the product identifier may in an embodiment correspond to a product that the customer has selected for purchase on the mobile application software.
  • Step 1508 comprises verifying (i) customer payment credentials (e.g. one or both of payment instruments / mechanisms associated with the customer, and credit available to the customer) and (ii) availability of the selected product at the selected vending machine 1302.
  • Step 1510 comprises responding to satisfactory verification of (i) customer payment credentials and / or (ii) availability of the selected product at the selected vending machine - by sending a signal from remote server 1306 to vending machine 1302, which signal instructs vending machine 1302 to enable the customer to access a horizontal partition that stocks the selected product.
  • the instruction sent from remote server 1306 may identify one or more of (i) a specific horizontal partition (or partition tray) on which the selected product is available, (ii) a vending machine door that enables / restricts access to the identified horizontal partition (or partition tray).
  • remote server 1306 may instead send to client device one or more of (i) information identifying a specific horizontal partition (or partition tray) on which the selected product is available, (ii) information identifying a vending machine door that enables / restricts access to the identified horizontal partition (or partition tray) and (iii) one or more unlock codes necessary to instruct vending machine 1302 to unlock the relevant vending machine door.
  • client device 1304 communicates the received information onward to vending machine 1302 (either directly or through the remote server)— signaling a request to vending machine 1302 for unlocking the relevant vending machine door to enable the customer access to the product stored behind the unlocked door.
  • Unlock codes forwarded to the client device 1304 may comprise encrypted unlock codes or encrypted single-use unlock codes (a sketch of one possible single-use scheme appears after this list).
  • Vending machine 1302 receives (either directly from remote server 1306 or through client device 1304) the information discussed in connection with step 1510, analyses the information, and subject to verification that the information received is genuine, may at step 1512 unlock the relevant vending machine door and allow the customer access to horizontal partitions (or partition trays) behind the unlocked vending machine door.
  • the customer may thereafter remove one or more products stored on one or more of the partition trays (or horizontal partitions) to which said customer has been granted access by unlocking of the vending machine door(s).
  • Step 1516 comprises receiving at VM controller 1402, signals from load sensor(s) and / or the imaging apparatus, which signals communicate load state change information and / or image information corresponding to removal of product(s) from said partition trays.
  • At step 1518, based on the load state change signal received from a load sensor associated with a partition tray, and / or based on image information received from an image sensor monitoring said partition tray, product(s) removed from said partition tray may be identified in accordance with any of the methods discussed above in this specification.
  • Step 1520 thereafter comprises using (i) the determined identity of product(s) removed from a partition tray and (ii) a per-unit product price associated with said product to determine the total price of products removed from the partition tray.
  • Payment of the total price may be obtained from the customer - for example by debiting a pre-paid electronic fund account associated with the customer or by charging the customer's bank account or credit or debit card (a sketch of this pricing step also appears after this list).
  • Steps 1518 and 1520 may be implemented after closure (or closure and locking) of the vending machine door that was opened to enable a customer to access and remove products from a horizontal partition / partition tray.
  • Steps 1518 and 1520 may be implemented after VM controller 1402 (i) receives a signal from security controls 1416 that a vending machine door has been closed, (ii) dispatches an instruction to security controls 1416 to re-engage door lock(s) for one or more vending machine doors, or (iii) receives a signal from security controls 1416 that locks for one or more vending machine doors have been re-engaged.
  • a timer may be activated, wherein the process of securing payment for items removed from the vending machine is initiated after expiry of a predefined time interval from activation of the timer.
  • Activation of the timer and elapse of time from activation of the timer are communicated to the client device by way of one or more alerts.
  • Figure 16 illustrates an exemplary computing system for implementing the present invention.
  • the computing system 1602 comprises one or more processors 1604 and at least one memory 1606.
  • Processor 1604 is configured to execute program instructions - and may be a real processor or a virtual processor. It will be understood that computer system 1602 does not suggest any limitation as to scope of use or functionality of described embodiments.
  • The computer system 1602 may include, but is not limited to, one or more of a general-purpose computer, a programmed microprocessor, a micro-controller, an integrated circuit, and other devices or arrangements of devices that are capable of implementing the steps that constitute the method of the present invention.
  • Exemplary embodiments of a system 1602 in accordance with the present invention may include one or more servers, desktops, laptops, tablets, smart phones, mobile phones, mobile communication devices, phablets and personal digital assistants.
  • the memory 1606 may store software for implementing various embodiments of the present invention.
  • the computer system 1602 may have additional components.
  • the computer system 1602 may include one or more communication channels 1608, one or more input devices 1610, one or more output devices 1612, and storage 1614.
  • An interconnection mechanism such as a bus, controller, or network interconnects the components of the computer system 1602.
  • Operating system software provides an operating environment for various software executing in the computer system 1602 using a processor 1604, and manages different functionalities of the components of the computer system 1602.
  • the communication channel(s) 1608 allow communication over a communication medium to various other computing entities.
  • The communication medium conveys information, such as program instructions or other data, over communication media.
  • the communication media includes, but is not limited to, wired or wireless methodologies implemented with an electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media.
  • The input device(s) 1610 may include, but is not limited to, a touch screen, a keyboard, mouse, pen, joystick, trackball, a voice device, a scanning device, or any other device that is capable of providing input to the computer system 1602.
  • The input device(s) 1610 may be a sound card or similar device that accepts audio input in analog or digital form.
  • The output device(s) 1612 may include, but is not limited to, a user interface on CRT, LCD, LED display, or any other display associated with any of servers, desktops, laptops, tablets, smart phones, mobile phones, mobile communication devices, phablets and personal digital assistants, printer, speaker, CD/DVD writer, or any other device that provides output from the computer system 1602.
  • The storage 1614 may include, but is not limited to, magnetic disks, magnetic tapes and other computer readable storage media. The storage 1614 may contain program instructions for implementing any of the described embodiments.
  • the computer system 1602 is part of a distributed network or a part of a set of available cloud resources.
  • the present invention may be implemented in numerous ways including as a system, a method, or a computer program product such as a computer readable storage medium or a computer network wherein programming instructions are communicated from a remote location.
  • the present invention may suitably be embodied as a computer program product for use with the computer system 1602.
  • the method described herein is typically implemented as a computer program product, comprising a set of program instructions which is executed by the computer system 1602 or any other similar device.
  • The set of program instructions may be a series of computer readable codes stored on a tangible medium, such as a computer readable storage medium (storage 1614), for example, diskette, CD-ROM, ROM, flash drives or hard disk, or transmittable to the computer system 1602, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications channel(s) 1608.
  • the implementation of the invention as a computer program product may be in an intangible form using wireless techniques, including but not limited to microwave, infrared, Bluetooth or other transmission techniques. These instructions can be preloaded into a system or recorded on a storage medium such as a CD-ROM, or made available for downloading over a network such as the Internet or a mobile telephone network.
  • the series of computer readable instructions may embody all or part of the functionality previously described herein.

Abstract

The invention provides a vending machine for dispensing or effecting automated sales of products. The vending machine implements a product recognition apparatus for neural network based image recognition of products located within the vending machine. The invention additionally provides methods for configuring the product recognition apparatus, and methods for generating training data for training one or more neural networks configured for product recognition.

Description

Vending Machines and Methods for Dispensing Products
Field of the Invention
[001] The present invention relates to use of vending machines for dispensing or effecting automated sales of products. In particular, the invention is directed towards vending machines, methods of dispensing products from such vending machines, and methods for effecting payment for products or articles dispensed by a vending machine.
Background
[002] Conventional vending machines typically incorporate complex product selection, payment and dispensing mechanisms to enable purchasers to view and select a product, allow users to make payment by cash or card, dispense change, and deliver the product in a way that prevents the purchaser from accessing the entire product inventory. Incorporation of each of these mechanisms into a vending machine makes the machine expensive, bulky and complicated to use. Additionally, the incorporation of multiple diverse hardware elements significantly increases the cost of manufacture of such vending machines, while simultaneously reducing available space for product inventory.
[003] The process of identifying a product from the product display, entering a product selection, making payment and retrieving the dispensed article tends to involve multiple steps which the busy user would prefer to avoid. Additionally, users wanting to purchase multiple units of a product are typically required to go through multiple iterations of some or all of these steps - leading to non-optimal user experience.
[004] Further, purchasers are required to rely on conventional payment instruments such as cash or electronic cards - which can often enough present a barrier to impulse purchases in situations where the purchaser is carrying neither cash nor card.
[005] There is therefore a need for vending machines that simplify the user experience and present manufacturing and cost efficiencies, while simultaneously ensuring an efficient and secure payment system.
Summary
[006] The invention provides a system for dispensing products from a vending machine.
[007] In an embodiment, the system comprises a vending machine cabinet including at least one interior compartment configured to accommodate products for dispensing, at least one door configured to provide access to the at least one interior compartment, an imaging apparatus configured to acquire images of at least part of the at least one interior compartment.
[008] The system may include a product recognition apparatus communicably coupled with the imaging apparatus, and configured to identify products located within the at least one interior compartment. Said product recognition apparatus may comprise a first neural network and a group of neural networks.
[009] The first neural network may be configured to identify within an image received from the imaging apparatus (i) locations at which one or more products are positioned within the image, and (ii) for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location.
[0010] The group of neural networks may comprise at least two neural networks that are distinct from the first neural network, wherein each neural network within the group of neural networks is (i) associated with one of a plurality of package types that the first neural network is configured to recognize, and (ii) configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus.
[0011] The product recognition apparatus may be configured such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus, (i) image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network, and (ii) recognition of a product at the specific location within the image is based on an output from the second neural network.
[0012] The first neural network may comprise (i) a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer, (ii) a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer, and (iii) a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer.
[0013] The system may be configured (i) to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus, (ii) to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers, (iii) to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers, (iv) to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus, and (v) for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
[0014] The system may include a set of background detection layers of network nodes. The set of background detection layers may include an input layer and an output layer, wherein (i) the system is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers, and (ii) the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
[0015] In a system embodiment, any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers may include one or more intermediate layers of network nodes disposed between an input layer and an output layer thereof.
[0016] The first neural network may be configured such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
[0017] In an embodiment, any one or more of the neural networks of the system comprises a convolutional neural network.
[0018] The invention additionally provides a method for configuring a product recognition apparatus for neural network based recognition of products located within an interior compartment of a vending machine, based on one or more images of said interior compartment acquired at an imaging apparatus.
[0019] In an embodiment, the method comprises (i) configuring a first neural network to identify within an image received from an imaging apparatus (a) locations at which one or more products are positioned within the image, and (b) for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location, (ii) configuring a group of neural networks comprising at least two neural networks that are distinct from the first neural network, such that each neural network within the group of neural networks is (c) associated with one of a plurality of package types that the first neural network is configured to recognize, and (d) configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus, and (iii) configuring the product recognition apparatus such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus (e) image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network, and (f) recognition of a product at the specific location within the image is based on an output from the second neural network.
[0020] In an embodiment, the method may comprise configuring the first neural network to include (i) a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer, (ii) a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer, and (iii) a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer.
[0021] The product recognition apparatus may be configured (i) to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus, (ii) to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers, (iii) to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers, (iv) to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus, and (v) for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
[0022] The method may comprise configuring the product recognition apparatus such that the first neural network includes a set of background detection layers of network nodes, the set of background detection layers comprising an input layer and an output layer, wherein (i) the product recognition apparatus is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers, and (ii) the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
[0023] In an embodiment of the method, any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers may include one or more intermediate layers of network nodes disposed between an input layer and an output layer thereof.
[0024] The method may further comprise configuring the first neural network such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
[0025] In an embodiment, the method includes configuring the product recognition apparatus such that any one or more of the neural networks of said product recognition apparatus comprises a convolutional neural network.
[0026] Configuring the product recognition apparatus may further comprise responding to detection of an unrecognizable product within an image received from the imaging apparatus with the steps of (i) in response to determining that the unrecognizable product comprises a previously unrecognizable package type, (a) generating an additional neural network within the set of product type identifier networks and uniquely associating the generated neural network with the previously unrecognizable package type, (b) generating an additional set of package type detector layers within the first neural network, said additional set of package type detector layers comprising an input layer and an output layer, and associating the generated additional set of package type detector layers with the previously unrecognizable package type, and (c) inputting training data corresponding to the previously unrecognizable product to one or both of the generated additional neural network and the generated additional set of package type detector layers, and (ii) in response to determining that the unrecognizable product comprises a recognizable package type, (d) identifying within the set of product type identifier networks, a neural network associated with the recognizable package type, and (e) inputting training data corresponding to the unrecognizable product to said identified neural network.
[0027] The invention may additionally provide a method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine. The method comprises the steps of (i) positioning a first product having a defined package type and a defined product identity at a defined first location within the interior compartment, (ii) triggering video acquisition mode at an imaging apparatus configured to acquire a video feed of the interior compartment, (iii) for the duration of the video feed, maintaining the first product at the defined first location, while implementing one or more of placement, removal or movement of other products at or between various other locations within the interior compartment, (iv) extracting a plurality of image frames from the acquired video feed, and (v) utilizing image information from the extracted image frames as training data corresponding to the defined package type or the defined product identity.
[0028] The invention may also provide a method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine, comprising the steps of (i) obtaining an image of an interior compartment of a vending machine, said interior compartment having one or more products positioned therewithin, (ii) tagging a product within the image by selecting a first image segment comprising a portion of the image which contains a top-center region of the product, (iii) labeling the first image segment with a label identifying the package type or the product identity, (iv) generating one or more variant images corresponding to the identified package type or the product identity, wherein generating a variant image comprises generating a second image segment— such that the second image segment comprises (a) at least a sub-set of pixels within the first image segment, which sub-set of pixels have been used to image the top-center region of the product, and (b) either (I) the first image segment comprises at least a second sub-set of pixels that are not included within the second image segment or (II) the second image segment includes a third sub-set of pixels that are within the obtained image and that are not included within the first image segment, and (v) utilizing the first image segment and the generated variant images as training data for the one or more neural networks.
[0029] The invention may additionally provide one or more computer program products for implementing any of the above methods. Said computer program product(s) may comprise a computer usable medium having a computer readable program code embodied therein, the computer readable program code comprising instructions for any one or more of the method steps described above, and in the following detailed description.
Brief Description of the Accompanying Drawings
[0030] Figure 1 illustrates an external view of a vending machine according to the present invention.
[0031] Figure 2A illustrates an embodiment of a vending machine cabinet.
[0032] Figures 2B and 2C illustrate a horizontal partition of a vending machine.
[0033] Figure 3 illustrates an exemplary neural network.
[0034] Figure 4 is an object diagram representing a product recognition apparatus.
[0035] Figure 5 illustrates a method of product classification and identification.
[0036] Figure 6 illustrates an exemplary configuration for a first neural network.
[0037] Figure 7 illustrates a method of identifying package types and locations based on input image information.
[0038] Figure 8 illustrates a method of re-configuring a product recognition apparatus of the present invention.
[0039] Figure 9 illustrates a generalized method for training a neural network to recognize package types or product types.
[0040] Figures 10 and 11 illustrate methods for efficiently generating training data for training neural networks.
[0041] Figure 12 illustrates a method for authenticating product identifications.
[0042] Figure 13 illustrates communication flow in operating a vending machine in accordance with the teachings of the present invention.
[0043] Figure 14 illustrates control components of a vending machine.
[0044] Figures 15A to 15C illustrate a process flow involved in dispensing products from a vending machine.
[0045] Figure 16 illustrates an exemplary computing system for implementing the present invention.
Detailed Description
[0046] The present invention provides novel and inventive vending machines and methods for configuring such vending machines, and for purchasing and dispensing articles from such vending machines. In addition, the invention provides novel and inventive technologies for enabling recognition of products disposed or located within a vending machine, or removed from a vending machine to enable inventory control and customer billing. In an embodiment, the invention provides advanced image recognition techniques based on adaptive classification systems and in particular specific arrangements and configurations of neural networks for the purpose of product recognition. The present invention incorporates by reference the disclosure in Indian Patent Application No. 201641034130 dated October 5, 2016.
[0047] Figure 1 illustrates an external view of a vending machine 100 manufactured and configured in accordance with the teachings of the present invention. Vending machine 100 comprises a cabinet 102 with an interior space for storage of articles. Cabinet 102 may optionally include (i) a compressor cabinet 104 for housing a compressor or other equipment for temperature and / or climate control within the vending machine and (ii) a control equipment access panel 110, which access panel enables access to control and communication components that are disposed within vending machine 100 for the purpose of operating the vending machine. Vending machine 100 is also shown with a plurality of doors 106a to 106d - each of which doors provides access to the interior space within cabinet 102. It will be understood that cabinet 102 may consist of either a single door, or multiple doors (as illustrated) depending on the specific configuration of the vending machine. In embodiments where the vending machine is provided with multiple doors, each door may permit access to a corresponding compartment or partitioned storage space within cabinet 102. Vending machine 100 may additionally include a display panel or signage panel 108 used to display signage or advertisements. In an embodiment, display panel 108 may comprise a CRT, LCD or plasma display.
[0048] Figure 2A illustrates an embodiment of cabinet 102. The vertical sidewalls and top and bottom walls of cabinet 102 define an interior compartment 112, which interior compartment may be used to house various components of vending machine 100, as well as articles that require to be dispensed by the vending machine. In the illustrated embodiment, internal surfaces of vertical sidewalls of cabinet 102 may be provided with support members (which support members may comprise brackets, slots, lugs, grooves, raceways or other members) which are located and configured to enable shelves, trays or any other partitioning / storage members to be affixed within cabinet 102. Interior compartment 112 is accessible through the front side - which is an open side, and which may be configured to enable one or more doors to be mounted thereon.
[0049] Figure 2A additionally illustrates cabinet 102 as having at least one (and preferably a plurality of) horizontally oriented partition(s) (horizontal partitions) 114a to 114c mounted horizontally within interior compartment 112. In an embodiment of the invention, horizontal partitions 114a to 114c may comprise one or more shelves or trays that may be used to store articles intended to be dispensed from vending machine 100, which horizontal partition(s) may optionally be configured and sized such that affixing said horizontal partition(s) within cabinet 102 serves to partition or compartmentalize cabinet 102 into a plurality of sub-compartments. Embodiments of horizontal partitions 114a to 114c are described in more detail below.
[0050] Figure 2B illustrates an embodiment of horizontal partitions 114a, 114b or 114c of a type that may be used for sub-compartmentalization and storage within vending machine 100. In the illustrated embodiment the horizontal partition consists of a partition chassis, said partition chassis including at least one base plate 202 and at least one (and preferably a plurality of) partition tray(s) 204a to 204d mounted independent of (and in isolation from) each other on said base plate 202. The partition chassis may in an embodiment be sized and configured such that it can be inserted or slotted into vending cabinet 102 and supported in a desired position either by support members or by virtue of one or more fasteners including without limitation screws, bolts, lugs or rivets.
[0051] Base plate 202 of the partition chassis may include a plurality of holes or perforations. Said holes or perforations enable circulation of air throughout cabinet 102 - which is particularly advantageous in temperature and / or climate controlled cabinets. Base plate 202 may additionally be provided with one or more holes sized to accept mounting fasteners such as bolts, screws or rivets. Likewise, each partition tray may be perforated.
[0052] Figure 2C illustrates an exploded view of a partially assembled partition chassis of horizontal partition 114a - comprising partition tray 204 mounted on base plate 202. As illustrated in Figure 2C, partition tray 204a is mounted on bracket 206 - which bracket 206 is in turn coupled to base plate 202 by means of load cell 208. As illustrated in Figure 2C, partition tray 204 may be mounted on bracket 206 by means of fasteners 210 (e.g. bolts, screws or rivets) passing through one or more holes 214, 214' provided on partition tray 204, and through corresponding holes on bracket 206. Likewise bracket 206 may be mounted by means of one or more fasteners 212 onto load cell 208. Load cell 208 may in turn be mounted by means of fasteners 216 and holes 218, 218' onto the surface of base plate 202. While in the illustrated embodiment, partition tray 204 is mounted on base plate 202 using a single bracket and single load cell, it would be understood that other embodiments involving multiple brackets or multiple load cells are equally implementable.
[0053] The illustrations of Figures 2B and 2C are only exemplary, and the horizontal partitions may comprise either fewer or more parts than are illustrated in the exploded view. In some embodiments, one or more of the fasteners may be done away with, and one or more of the illustrated component parts may either be welded together or otherwise unitarily integrated with a view to mount one or more partition tray plates on a base plate by means of one or more load cells.
[0054] Load cell 208 may comprise any type of load cell or load sensor capable of detecting and signaling a load state of (i.e. weight / load applied to) the partition tray to which said load cell or load sensor is coupled. Illustrative embodiments of load cell 208 may comprise load beams, strain gauges and associated electronic or analogue components for signaling a change in load. The load cell or load sensor is mounted between bracket 206 and base plate 202 such that any change in load / weight placed on partition tray 204 is detected and signaled by load cell 208. In an embodiment of the invention, load cell 208 is a single point load cell.
[0055] In an embodiment of the invention, each partition tray may be allocated for storing products of a single / specific product type. Each partition tray may accordingly have a predetermined per-unit product weight associated therewith i.e. the per-unit weight of the product type associated with said partition tray. Based on (i) weight of product(s) removed from partition trays where load state changes have been detected, (ii) per-unit product weight associated with said partition trays, and optionally (iii) a per-unit product price for the product type associated with the concerned partition tray, the vending machine enables calculation of the number of product units that have been removed by a customer, or optionally the total price of products that have been removed by a customer.
[0056] In addition to the mechanism for sensing load state changes associated with a horizontal partition within the vending machine, the present invention implements image recognition techniques based on adaptive classification systems or neural networks for the purposes of tracking product inventory within the vending machine. For this purpose, the vending machine is provided with one or more imaging apparatuses, such as for example cameras or image sensors that are positioned and configured to monitor the interior compartment of the vending machine, and more particularly, the products positioned within the interior compartment. In an embodiment of the invention, each horizontal partition positioned within the interior compartment may have an imaging apparatus dedicated thereto, which imaging apparatus is used to monitor products located on said horizontal partition. In a further embodiment, each imaging apparatus may be disposed within the interior compartment such that it provides a view of the products located on a horizontal partition from above - for example, an imaging apparatus may be located on an interior wall of the vending machine cabinet above a horizontal partition that is monitored by it, and inclined downwards towards the horizontal partition such that the imaging apparatus obtains a top perspective image feed of the horizontal partition and objects located thereon.
[0057] The analysis of the image feeds received from said imaging apparatuses, for the purposes of product monitoring and inventory control is achieved through image recognition systems that are based on neural networks.
[0058] It would be understood that neural networks emulate higher order brain functions such as memory, learning and / or pattern perception / recognition. Such systems may be trained to model regions in which particular features or characteristics of an input signal may be distributed. By accurately modeling such regions, a neural network is capable of recognizing whether unknown data received by it belongs to a particular class represented by the modeled region. The modeling may be accomplished by presenting the neural network with a number of training signals belonging to the known classes of interest. During training, each of the training signals and the class to which a signal belongs are provided to the neural network. The neural network stores the information and generates a model of the region which includes signals of a particular class.
[0059] Figure 3 illustrates an exemplary neural network 300 including an input layer 302 having a first plurality of nodes 302a to 302n. Each of the input layer nodes 302a to 302n receives one of a plurality of input features fa to fn provided thereto. Intermediate layer 304 includes a second plurality of nodes 304a to 304m. Each of the nodes 304a to 304m is coupled to at least one of the input layer nodes 302a to 302n. An output layer 306 includes a third plurality of nodes 306a to 306l, wherein each of the nodes 306a to 306l is coupled to at least one of intermediate layer nodes 304a to 304m. In implementation, each of the nodes represented in the neural network may comprise a corresponding weight associated therewith, and stored in a processor memory. The neural network receives an input vector representing image information, and processes the content of the input vector at the individual nodes of the neural network by applying the weights associated with said node to the vector content. Each layer of nodes communicates output data to the next layer of the network, until the output layer of the network generates a probability value representing the probability that an image represented by the input vector is an image of a class that the neural network has been trained to recognize. The training process in turn iteratively adapts or modifies the weights of each node so as to improve the accuracy of the neural network in identifying images within a class of images that the neural network is being trained to identify.
[0060] Figure 4 illustrates an object diagram representing a product recognition apparatus 400 in accordance with the teachings of the present invention. Apparatus 400 comprises an imaging apparatus 406 communicably coupled with (i) neural network NN1 (402) - which neural network is configured for identifying a package type of a product and also the location of the top-center region of said product within the vending machine and (ii) a plurality of neural networks NN2 to NNn (404a to 404n) - which plurality of neural networks are configured to identify specific products (i.e. product types) corresponding to a package type.
[0061] For the purposes of the invention "package type" may be generally understood as comprising a classification corresponding to the type of packaging associated with a product - for example box, tetrapack, can, large bottle, small bottle, jar, bag, envelope, etc. For the purposes of the invention "product type" may be understood as a specific product within a package type. For example, Coke, Pepsi and Redbull may comprise specific product types within the "can" or "bottle" package type, while each different brand or flavor of potato crisps may comprise a separate product type within the "bag" package type. Typically, each different packaging variation within a package type could be a separate product type of that package type. For the purposes of the invention "top-center region" of a product may be understood to refer to a region of a product that includes and surrounds an approximate or exact center of an upper surface of the product.
[0062] In an embodiment of the invention each of the neural networks NN2 to NNn within the set of product type identifier networks 404 is associated with a separate or unique package type and is configured to classify specific products within such package type. For example, NN2 may be associated with the package type "can" and may be configured to classify cans as Coke cans, Pepsi cans and Redbull cans, while NN3 may be associated with the package type bag and may be configured to identify or classify different brands or varieties of products within bags. In an embodiment, neural network NN1 may be trained to classify different package types based on specific features such as size and shape, whereas neural networks NN2 to NNn may be trained to classify products within a specific package type based on packaging characteristics such as color schemes, logos, patterns etc. on the packaging.
[0063] It has been discovered that by using a first neural network NN1 for the purpose of package type and product center identification, and using a separate set of neural networks NN2 to NNn for product type identification, the image classification by the product recognition apparatus 400 improves significantly. Additionally, this configuration significantly enhances training efficiency for the neural network(s) - inasmuch as, in case of addition of a new (i.e. previously unrecognizable) product type of a known package type, the product type identifier networks can be trained to recognize the new product type based on a significantly smaller set of training data in comparison with the training data that would be required if a single neural network were used to identify both product and package type.
[0064] Figure 5 comprises a flowchart briefly describing a method of product classification and identification based on the product recognition apparatus 400 of Figure 4. Step 502 comprises receiving at first neural network NN1, image information representing an image feed received from imaging apparatus 406 - which imaging apparatus is positioned such that a product storage space corresponding to an interior compartment of a vending machine (for example the product storage space corresponding to a horizontal partition tray within the vending machine) is within the image capture region of the imaging apparatus. It would be understood that the image information provided as input to the first neural network NN1 may be provided in the form of an image vector.
[0065] At step 504, based on the output from first neural network NN1, the method identifies (i) one or more locations of products within the product storage space - which in an embodiment may comprise one or more locations at which neural network NN1 has detected top-center regions of products positioned within the product storage space and (ii) a package type corresponding to the package disposed at each of said one or more product locations.
[0066] At step 506, for each detected location, image information representing a W x H image pixel region that contains the identified product location (for example a W x H pixel region containing the top-center region of a product disposed within the product storage space) is provided in the form of an input vector to a second neural network - which second neural network is selected from a plurality of neural networks within the plurality of product type identifier networks 404. In an embodiment of the invention, the selection of a second neural network from among the plurality of neural networks within the product type identifier networks is based on the package type corresponding to the identified product location (within the W x H pixel region) that has been identified by the first neural network. In a specific embodiment the selected second neural network is a neural network (within the product type identifier networks) that has been associated with the identified package type.
[0067] Step 508 thereafter comprises identifying the specific product located within the W x H pixel region, based on output received from the selected second neural network. It would be understood that in the event the first neural network is unable to identify a particular package type, or the second neural network is unable to identify a specific product of a known package type, the apparatus may return an output indicating a failure to recognize an object or product located within the vending machine.
[0068] The invention as described in connection with Figure 5 uses the innovative product recognition apparatus 400 and image recognition techniques of Figure 5 to enable identification of products that are removed by a user / customer after opening the vending machine door. By identifying products that are removed from the vending machine, the total price of the removed products can be calculated and invoiced to the user / customer, or debited from an electronic account associated with the user / customer.
[0069] Figure 6 illustrates an exemplary embodiment 600 of a configuration for first neural network NN1.
[0070] As illustrated in Figure 6, first neural network NN1 comprises a first set of neural network layers 602 (hereinafter referred to as the set of common layers). The first set of neural network layers 602 comprises individual neural network layers 602a to 602n. Each network layer 602a to 602n in turn comprises one or more neural network nodes.
[0071] First neural network NN1 comprises a plurality 604 of sets of package type detector layers (hereinafter referred to as the sets of package type detector layers), 604a to 604n. Each set of package type detector layers in turn comprises a distinct or unique set of neural network layers. In the illustrated Figure 6, package type detector layer set 604a comprises neural network layers 604a1 to 604al, package type detector layer set 604b comprises neural network layers 604b1 to 604bm, and package type detector layer set 604n comprises neural network layers 604n1 to 604np. Each of package type detector layer sets 604a to 604n is associated with (and iteratively trained for identifying) a specific package type - and the output from each package type detector layer set provides a likelihood or a determination regarding the presence of the corresponding package type at a pixel location or at a specific location within a particular image region.
[0072] First neural network NN1 additionally comprises a set of neural network layers 604x comprising neural network layers 604x1 to 604xq— which set of neural network layers 604x is iteratively trained for identifying portions of an image which contain only the background or portions of the interior of the vending machine cabinet (and which do not contain any specific package or product). Said set of neural network layers 604x (hereinafter referred to as the set of background layers) is configured to provide a likelihood or a determination regarding the presence of background features (or in other words the absence of a specific product or package type) at a pixel location or at a specific location within a particular image region. The implementation of a specific set of layers for recognizing background has been found to significantly improve product recognition - by reducing the likelihood that background portions of the vending machine are incorrectly categorized as an "unrecognized" product.
[0073] First neural network NN1 further comprises a set of neural network layers 606, comprising neural network layers 606a to 606r (hereinafter referred to as the set of product center detector layers) - which set of neural network layers 606 is iteratively trained for identifying portions of an image at which top-center regions of products disposed within the vending machine are located. The set of neural network layers 606 is configured to provide a likelihood or determination regarding the presence of top-center regions of any product at a pixel location or a specific location within the image. By providing image information to the set of product center detector layers as input, first neural network NN1 is capable of providing an output identifying pixel locations within an image at which top-center regions of one or more products are located (or at which there is a high probability that such top-center regions of one or more products are located).
[0074] In an embodiment of the invention the first neural network NN1 is based on a convolutional neural network (for example a Visual Geometry Group (VGG) network) - which has been configured in accordance with the specific teachings within this disclosure.
[0075] A critical feature of the configuration of first neural network NN1 is that all input vectors communicated to first neural network NN1 are input to input layer 602a of the set of common layers 602. A corresponding output vector generated at the output layer 602n is thereafter simultaneously communicated as input to (i) each of the plurality 604 of sets of package type detector layers 604a to 604n, (ii) the set of background layers 604x and (iii) the set of product center detector layers 606.
[0076] Based on the output vector generated at output layer 602n of the set of common layers 602, each of the plurality 604 of sets of package type detector layers (604a to 604n) generates a corresponding output in the form of an output vector. Output from each of the plurality 604 of sets of package type detector layers 604a to 604n is used to generate a heatmap or location map corresponding to the image region represented by the input vector (originally input into the set of common layers 602). Each heatmap identifies the probability of a corresponding package type (i.e. that corresponds to the generating set of package detector layers) being located at one or more pixel locations within the image region. It would be understood that each set of package type detector layers 604a to 604n may generate output, which output is used to generate a heatmap or location map corresponding to said specific set of package type detector layers. In other words, for package type detector layer sets 604a to 604n, an input vector provided to the set of common layers 602 would result in a plurality of heatmaps (i.e. heatmapa to heatmapn).
[0077] Simultaneously, based on the output vector generated at output layer 602n of the set of common layers, the set of background layers 604x generates a corresponding output in the form of an output vector. Output from the set of background layers 604x is used to generate a heatmap or location map corresponding to the image region represented by the input vector (originally input into the set of common layers 602) which heatmap identifies the probability of a particular pixel location within the image region representing background of the vending machine cabinet (i.e. representing the absence of any specific product at said pixel location).
[0078] Also simultaneously, based on the output vector generated at output layer 602n of the set of common layers, the set of product center detector layers 606 generates a corresponding output in the form of an output vector. Output from the set of product center detector layers 606 represents for each pixel location within an image region, the likelihood or probability that a top-center region of a product is located at said pixel location. Said output is used to identify locations within the image region at which top-centers of products positioned within the vending machine cabinet are located.
[0079] While not specifically illustrated in Figure 6, it would be understood that each neural network layer within any of the layer sets 602, 604a to 604n, 604x, 606 may comprise one or more neural network nodes.
[0080] Figure 7 illustrates a method of identifying package types and their respective locations based on processing of an input vector representing image information received from an imaging apparatus associated with the vending machine.
[0081] Step 702 comprises providing as input, image information to an input layer of the set of common layers 602 within first neural network NN1.
[0082] At step 704, output from an output layer of the set of common layers 602 is communicated to (i) one or more sets of package type detector layers 604a to 604n and (ii) the set of product center detector layers 606. Optionally (while not illustrated in Figure 7) output from the output layer of the set of common layers 602 is additionally communicated to the set of background layers 604x.
[0083] At step 706, the method identifies one or more locations within the image region represented by the input vector, at which top-center regions of products or product packages are located. Said identification is, in an embodiment, based on output from an output layer of the set of product center detector layers 606.
[0084] Thereafter, step 708 comprises identifying a product package type located at one or more regions (and preferably each of the one or more regions) of the image region under analysis, which identification is based on output from the sets of package type detector layers 604a to 604n and optionally on output from the set of background layers 604x.
[0085] It would be understood that each of steps 702 to 708 utilizes and relies on the specific configuration features of first neural network NN1 that have been discussed above in connection with Figure 6.
[0086] It has been discovered that by first processing an input vector through a set of common layers, and thereafter simultaneously processing the output of said set of common layers through the sets of package type detector layers, the set of background layers and the set of product center detector layers, the apparatus for product recognition shows improved performance and computational efficiencies.
[0087] It would be understood that steps 702 to 708 described above correspond to steps 502 and 504 of Figure 5 previously described. Subsequently, for each package type identified within an imaged region, a neural network associated with such package type is selected from among the set of product type identifier networks 404, and image information representing a W x H image pixel region (that contains the top-center location of the identified package type) is input as an input vector to the selected neural network. The output from such selected neural network identifies the specific product corresponding to the package type previously identified by first neural network NN1.
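The two-stage flow described above may be illustrated by the following hedged sketch, which builds on the PackageTypeAndCenterNetwork sketch given earlier: candidate top-center pixels are read from the center heatmap, the package type at each candidate is taken as the strongest package-type head (unless the background head dominates), and a W x H crop around the candidate is passed to the product type classifier associated with that package type. The threshold, crop size and the classifier registry are illustrative assumptions, not elements of the disclosure.

```python
# A hedged sketch of the two-stage recognition flow of Figures 5 and 7.
import torch

def recognize_products(image, nn1, product_classifiers, crop_w=64, crop_h=64,
                       center_threshold=0.5):
    heatmaps, background, centers = nn1(image)
    detections = []
    # Treat every pixel whose center-likelihood exceeds the threshold as a
    # candidate top-center location (a real system would suppress neighbours).
    ys, xs = torch.nonzero(centers[0, 0] > center_threshold, as_tuple=True)
    for y, x in zip(ys.tolist(), xs.tolist()):
        # Package type = the detector head with the highest response here,
        # unless the background head wins, in which case skip the pixel.
        scores = [hm[0, 0, y, x].item() for hm in heatmaps]
        if background[0, 0, y, x].item() > max(scores):
            continue
        package_type = max(range(len(scores)), key=scores.__getitem__)
        # Crop a W x H region containing the top-center and classify it with
        # the product type network associated with the detected package type.
        top, left = max(0, y - crop_h // 2), max(0, x - crop_w // 2)
        crop = image[:, :, top:top + crop_h, left:left + crop_w]
        product = product_classifiers[package_type](crop)
        detections.append((x, y, package_type, product.argmax(dim=1).item()))
    return detections
```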
[0088] Figure 8 illustrates a method of re-configuring the product recognition apparatus 400 in response to detection of a product that the neural networks within said product recognition apparatus 400 have not been trained to identify.
[0089] Step 802 comprises determining (or arriving at the conclusion) that a specific product is not recognizable by the product recognition apparatus 400. This determination may arise either from the apparatus failing to identify a product located within the vending machine cabinet, or from a user or operator responsible for training or maintaining the product recognition apparatus 400.
[0090] In the event the new (unrecognizable) product corresponds to a package type that the product recognition apparatus 400 has not previously been trained to identify, step 804 would comprise the steps of (a) generating a new neural network NNi within the set of product type identifier networks 404 and uniquely associating said new neural network NNi with the new package type, (b) generating a new set of package type detector layers 604i within the plurality of sets of package type detector layers 604 and associating said new set of package type detector layers 604i with the new package type, and (c) providing as input, training data corresponding to the new product (i) to said new set of package type detector layers 604i within first neural network NN1 and (ii) to the generated new neural network NNi within the plurality of sets of product type identifier networks 404.
[0091] If on the other hand the unrecognizable product corresponds to a known package type, step 806 comprises (a) identifying, within the set of product type identifier networks 404, a neural network associated with the known package type, and (b) providing as input to the identified neural network training data corresponding to the new product.
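One possible realization of step 804 is sketched below, again against the illustrative PyTorch structures given earlier: NN1 is grown by one additional detector head, and a fresh classifier is registered for the new package type. The helper make_classifier is a hypothetical factory supplied by the caller, and the head dimensions are assumptions carried over from the earlier sketch.

```python
# A sketch of the Figure 8 re-configuration path for a product of a
# previously unknown package type.
import torch.nn as nn

def register_new_package_type(nn1, product_classifiers, make_classifier):
    # (b) new set of package type detector layers 604i inside NN1
    # (64 input channels assumed, matching the earlier trunk sketch).
    nn1.package_heads.append(nn.Conv2d(64, 1, kernel_size=1))
    new_type_index = len(nn1.package_heads) - 1
    # (a) new neural network NNi within the product type identifier
    # networks 404, uniquely associated with the new package type.
    product_classifiers[new_type_index] = make_classifier()
    # (c) training data for the new product is subsequently applied to both
    # the new detector head and the new classifier network.
    return new_type_index
```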
[0092] Figure 9 illustrates a generalized method for training a neural network of the present invention to recognize package types or product types. Step 902 of the method comprises obtaining one or more images of the interior of the vending machine cabinet (or of horizontal partitions/vending machine trays positioned within the vending machine cabinet), wherein said images capture one or more products positioned within the vending machine cabinet / on the horizontal partitions. At step 904, for one or more products located within each image, an operator selects or segments a portion of said product through a user interface and tags said selected portions of the product with a label identifying the package type and / or product type. In an embodiment selecting or segmenting a portion of an imaged product comprises selecting or segmenting a portion of the imaged product that contains the top-center region of the imaged product.
[0093] Step 906 comprises inputting image information corresponding to each selected and labeled image segment as training data for one or more of (i) the first neural network NN1 (i.e. the package type and product center identifier network 402), (ii) a specific one or more sets of package type detector layers 604a to 604n and / or (iii) a neural network within the set of product type identifier networks 404. In an embodiment, a labeled image segment is submitted as training data to a set of package type detector layers within first neural network NN1 that corresponds to the same package type as the labeled image segment. In another embodiment a labeled image segment is submitted as training data to a specific neural network within the set of product type identifier networks 404, based on determining that the specific neural network is associated with / corresponds to the same package type as the labeled image segment.
[0094] Step 908 comprises utilizing the training data to train the relevant neural network.
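The routing rule of steps 906 and 908 may be sketched as follows. The label dictionary and the trainer callables are hypothetical stand-ins for whatever training harness is actually used; only the routing logic reflects the steps described above.

```python
# A sketch of the routing rule in Figure 9, steps 906-908: each labeled
# segment trains the package-type detector head for its package type, and
# the product-type classifier associated with that same package type.
def route_training_segment(segment, label, nn1_trainers, classifier_trainers):
    # label carries the operator's tags from step 904.
    package_type = label["package_type"]
    # (ii) train the matching set of package type detector layers in NN1.
    nn1_trainers[package_type](segment)
    # (iii) train the product-type network for the same package type, when a
    # product-type tag was also supplied by the operator.
    if "product_type" in label:
        classifier_trainers[package_type](segment, label["product_type"])
```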
[0095] Figure 10 illustrates a method for efficiently generating training data for training neural networks within the product recognition apparatus 400 in accordance with the present invention.
[0096] Step 1002 of the method comprises positioning a first product of a defined package type and defined product type at a specified location within the vending machine cabinet or on a vending machine tray / partition. At step 1004 video acquisition is triggered at an imaging apparatus configured to acquire a video feed of the position of the first product within the vending machine cabinet (for example, the imaging apparatus may be positioned to acquire a video feed of the first product as well as the surrounding region or of the entire tray on which the product is located).
[0097] Step 1006 comprises maintaining, for the duration of the video feed, the first product at the specified first location, while placing, removing and / or replacing other products (of the same package type and / or product type, or of different types) at various other positions within the field of view of the imaging apparatus (for example in the regions surrounding the first product).
[0098] Step 1008 comprises extracting images from the acquired video feed. At step 1010, image information from the extracted images is utilized as training data for one or more of (i) the first neural network NN1 (i.e. the package type and product center identifier network 402), (ii) a specific one or more sets of package type detector layers 604a to 604n and / or (iii) a neural network within the set of product type identifier networks 404. In an embodiment, image information extracted from the video feed is submitted as training data to a set of package type detector layers within first neural network NN1 that corresponds to the same package type as the package type of the first product. In another embodiment, image information extracted from the video feed is submitted as training data to a specific neural network within the set of product type identifier networks 404, responsive to determining that the specific neural network is associated with / corresponds to the same package type as the first product.
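A minimal sketch of steps 1008 and 1010 follows, assuming the OpenCV library is available for video decoding: frames are sampled at a fixed stride and each retained frame is labeled with the known package type and product type of the stationary first product. The stride, file path and record format are illustrative assumptions.

```python
# A sketch of extracting labeled training frames from the acquired video feed.
import cv2

def extract_training_frames(video_path, package_type, product_type, stride=10):
    capture = cv2.VideoCapture(video_path)
    samples, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of the acquired video feed
        if index % stride == 0:
            # Every retained frame shows the first product at the known
            # location, surrounded by varying neighbours (step 1006), so the
            # operator's labels remain valid for the whole feed.
            samples.append((frame, package_type, product_type))
        index += 1
    capture.release()
    return samples
```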
[0099] Figure 11 illustrates a further method for efficiently generating training data for training neural networks within the product recognition apparatus 400 in accordance with the present invention.
[00100] Step 1102 comprises obtaining an image of a portion of the vending machine cabinet (for example, an image of a vending machine tray / horizontal partition) at which one or more products are located.
[00101] At step 1104, for one or more products located within the image, an operator selects or segments, through a user interface, a first image segment comprising a portion of a product, and tags said selected portion with a label identifying the package type and / or product type. In an embodiment selecting or segmenting a portion of the product comprises selecting or segmenting a portion of the imaged product that contains the top-center region of the imaged product.
[00102] Step 1106 comprises generating one or more variant images corresponding to the package type and / or the product type identified in the label corresponding to the first image segment. Generating each variant image comprises generating a second image segment, such that (i) the second image segment comprises at least a sub-set of the pixels within the first image segment, which sub-set of pixels has been used to represent or image the top-center region of the product, and (ii) either (a) the first image segment comprises at least a second sub-set of pixels that are not included within the second image segment and / or (b) the second image segment includes a third sub-set of pixels that lie within the image obtained at step 1102, but which are not included within the first image segment. It would be understood that in one embodiment, generating variant images may comprise any of (i) selecting a second image segment that surrounds the first image segment, (ii) selecting a second image segment that falls entirely within the first image segment, (iii) selecting a second image segment that comprises a part of the first image segment, and further comprises certain pixel regions that adjoin the first image segment, and (iv) cropping portions of the first image segment.
[00103] Step 1108 thereafter comprises utilizing the first image segment and the one or more variant images as training data to train the relevant neural network(s) within the product recognition apparatus 400.
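Step 1106 may be illustrated by the following sketch, assuming images are held as numpy arrays: each variant keeps the pixels that image the product's top-center while enlarging or shrinking the selected box, so the operator's label from step 1104 remains valid for every variant. The box geometry and padding amounts are illustrative assumptions.

```python
# A sketch of generating variant crops per step 1106.
import numpy as np

def make_variants(image: np.ndarray, box, center, pad=8):
    x0, y0, x1, y1 = box          # first image segment, from the operator
    cx, cy = center               # top-center pixel of the product, inside box
    h, w = image.shape[:2]
    variants = [
        # (i) a second segment that surrounds the first image segment.
        image[max(0, y0 - pad):min(h, y1 + pad),
              max(0, x0 - pad):min(w, x1 + pad)],
        # (ii) a second segment falling entirely within the first segment,
        # still containing the top-center pixel (cx, cy).
        image[min(cy, y0 + pad):max(cy + 1, y1 - pad),
              min(cx, x0 + pad):max(cx + 1, x1 - pad)],
    ]
    return variants
```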
[00104] Figure 12 illustrates a method for authenticating product identifications that have been arrived at in accordance with the teachings of the present invention. As discussed above, vending machines in accordance with the teachings of the present invention are configured to identify specific products that are removed from the vending machine by a user / customer, which identification may be based either on the load sensing mechanisms or the image recognition apparatuses described above. It will however be understood that in certain cases the product identifications made by either the load sensors or the image recognition apparatuses may be erroneous. Additionally, one or the other mechanism may be prone to spoofing by a user / customer - for example, (i) the load sensing mechanism may be spoofed or misled by a customer removing a product from the vending machine and simultaneously replacing the product with another object of equal weight, or (ii) the image recognition apparatuses may be spoofed or misled by a customer replacing a product in the vending machine with a similar-looking counterfeit (e.g. replacing a full can of an aerated drink with an empty can of the same drink). The method of Figure 12 enables authentication of the determinations by either mechanism, by comparison with a corresponding determination by the other mechanism - and may be used to detect errors, identify attempts at theft or spoofing, or to raise a maintenance alert in case of a detected malfunction of one or the other product identification mechanisms.
[00105] Step 1202 comprises inputting image information corresponding to an article (for example a product that has been removed by a customer from the vending machine) into the neural networks of the product recognition apparatus 400 and identifying the article based on the output from the neural networks (in accordance with the teachings of Figure 5).
[00106] Step 1204 comprises obtaining the weight of the removed article based on signals obtained from one or more load sensors configured to detect load state changes associated with a vending machine tray on which the article was situated.
[00107] Step 1206 comprises identifying the article based on the detected weight and a per-unit product weight associated with the vending machine tray or with the load sensor(s) from which the load state change signal has been received (in accordance with the teachings above).
[00108] Thereafter, step 1208 comprises generating an authentication / verification decision concerning either (i) the identity of the article as received from the product recognition apparatus 400 based on image analysis or (ii) the identity of the article as determined based on load state changes - which authentication / verification decision is based on a determination of consistency between the findings based on image analysis and the findings based on load state changes. In an embodiment a determination of consistency between said findings results in confirmation of said findings. In another embodiment, a determination of inconsistency between said findings results in generation of an error alert, a theft or spoofing alert, or a maintenance request.
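A hedged sketch of the authentication decision of step 1208 follows: the measured weight change is mapped to the product whose per-unit weight best explains it, and that finding is compared against the image-based identity. The tolerance value and the catalogue structure are assumptions for illustration only.

```python
# A sketch of step 1208: cross-check the image-based identification against
# the identity implied by the load state change, and raise the appropriate
# alert on disagreement.
def authenticate_removal(image_identity, weight_delta, per_unit_weights,
                         tolerance=5.0):
    # Step 1206: which product's per-unit weight best explains the change?
    weight_identity = min(
        per_unit_weights,
        key=lambda product: abs(per_unit_weights[product] - abs(weight_delta)),
    )
    residual = abs(per_unit_weights[weight_identity] - abs(weight_delta))
    if residual > tolerance:
        return "maintenance_alert"       # no product explains the load change
    if weight_identity == image_identity:
        return "confirmed"               # step 1208: findings are consistent
    return "theft_or_spoofing_alert"     # e.g. full can swapped for empty can
```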
[00109] Figure 13 is a high level communication flow diagram illustrating communications involved in operating a vending machine of the type described hereinabove. In the illustrated embodiment, vending machine 1302 is communicably coupled with remote server 1306. The communication link between vending machine 1302 and remote server 1306 may comprise any wired or wireless communication link. Data communication over the communication link may in an embodiment be achieved by way of any one or more communication protocols, including without limitation TCP/IP or UDP protocols. The underlying communication network used to implement the communication protocol may include any one or more of a local area network, wide area network, broadband network or a combination of the above (such as the internet). It would be understood that in other embodiments of the invention, data communication may be implemented by any electrical, optical or wireless transmission media or link, including by way of example, by one or more of RF, infrared, acoustic, microwave, Bluetooth or other transmission media or link. A customer seeking to operate vending machine 1302 requires access to a client device 1304 - which client device 1304 may comprise any client terminal, and in a preferred embodiment is a mobile communication device (such as a tablet, smart phone, mobile phone, phablet or personal digital assistant). Client device 1304 likewise may be communicably coupled with vending machine 1302 as well as with remote server 1306 over independent communication channels - wherein the communication link may once again be implemented by any electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media or link. In an alternative embodiment, the client device 1304 may communicate with vending machine 1302 through remote server 1306 (acting as an intermediate server) using standard communication methods, for example by means of TCP/IP or UDP based protocols. By virtue of the communication links established between vending machine 1302, client device 1304 and remote server 1306, a customer may operate the vending machine of the present invention in accordance with the methods described hereinbelow.
[00110] Figure 14 illustrates control components of vending machine 1302 in detail. In the illustrated embodiment, vending machine 1302 comprises a vending machine (VM) controller 1402 that controls higher level functions and operations of vending machine 1302 through communication controller 1404. Communication controller 1404 comprises (i) wireless communication controller 1406 that is configured to interface, control and / or communicate with devices or components wirelessly, and (ii) wired communication controller 1408 that is configured to interface, control and / or communicate with devices or components over a wired connection. Wireless communication controller 1406 communicates with server interface 1410, which server interface 1410 enables remote server 1306 to communicate with vending machine 1302. Wireless communication controller 1406 additionally communicates with client device interface 1412, which client device interface enables client device 1304 to communicate with or operate vending machine 1302 (it will be understood that the communication between client device 1304 and vending machine 1302 may in an embodiment take place through remote server 1306 by virtue of one or more conventional communication protocols such as the TCP/IP or UDP protocol). Wired communication controller 1408 is communicably coupled with load sensor controls 1414 and imaging apparatus controls 1418 (and in an embodiment, with electronic lock controls) to enable VM controller 1402 to respectively receive information regarding load state changes from one or more load cells or load sensors within vending machine 1302, and image information corresponding to images captured by an imaging apparatus. Wired communication controller 1408 is additionally communicably coupled with security controls 1416 such that VM controller 1402 can selectively enable and disable access (for example by engaging or disengaging door locks) to one or more doors of vending machine 1302 with a view to allow and / or terminate access to one or more horizontal partitions within vending machine 1302. It would be understood that for the purposes of the present invention, product recognition apparatus 400 may be implemented either within vending machine 1302 (for example within VM controller 1402) or within the remote server 1306.
[00111] Figures 15A to 15C hereafter illustrate a process flow setting out the various steps involved in dispensing products stored within a vending machine of the type contemplated by the present disclosure, and which has been configured in accordance with the disclosure set out in connection with Figure 14 above.
[00112] For the purposes of discussing the method discussed in connection with Figures 15A to 15C, an exemplary client device 1304 may comprise a mobile communication device having an internet or wireless data connection, and having a mobile software application installed thereon - which mobile software application is configured to implement some or all of the steps discussed in connection with Figures 15A to 15C. It will however be understood that this is only an exemplary embodiment, and the steps of Figures 15A to 15C may be implemented by any client device 1304 having the minimum capabilities that have been discussed previously.
[00113] Step 1502 comprises receiving at a client device 1304, information identifying a specific vending machine (i.e. a selected vending machine) from which a customer seeks to obtain a product. By way of example, the client device may receive such information in the form of a vending machine identifier received from the vending machine 1302 in the course of wireless (e.g. Bluetooth) communication with said vending machine (or using TCP/IP / UDP protocols through a remote server), or by way of user input at client device 1304 (based on a vending machine identifier displayed on the vending machine), or by way of an RFID, bar code or other unique identification markings that are displayed on vending machine 1302 and which can be scanned by client device 1304 or by one or more peripherals connected therewith. In an embodiment of the invention, the client device 1304 may send its GPS information (or location information obtained through other proximity techniques such as Bluetooth beacons) to remote server 1306, and remote server 1306 may respond by sending client device 1304 the vending machine identifier corresponding to a vending machine present at the identified GPS location at which the client device 1304 is located.
[00114] Steps 1504 and 1506 comprise communicating from client device 1304 to remote server 1306 (i) information identifying the vending machine 1302 that a customer has selected for product purchase and (ii) a product identifier of at least one product that the customer intends to purchase from vending machine 1302 (i.e. the selected product). The product identifier may in an embodiment correspond to a product that the customer has selected for purchase on the mobile application software.
[00115] Step 1508 comprises verifying (i) customer payment credentials (e.g. one or both of payment instruments / mechanisms associated with the customer, and credit available to the customer) and (ii) availability of the selected product at the selected vending machine 1302.
[00116] Step 1510 comprises responding to satisfactory verification of (i) customer payment credentials and / or (ii) availability of the selected product at the selected vending machine - by sending a signal from remote server 1306 to vending machine 1302, which signal instructs vending machine 1302 to enable the customer to access a horizontal partition that stocks the selected product. In a specific embodiment, the instruction sent from remote server 1306 may identify one or more of (i) a specific horizontal partition (or partition tray) on which the selected product is available, (ii) a vending machine door that enables / restricts access to the identified horizontal partition (or partition tray).
[00117] In an alternative embodiment of step 1510, remote server 1306 may instead send to the client device one or more of (i) information identifying a specific horizontal partition (or partition tray) on which the selected product is available, (ii) information identifying a vending machine door that enables / restricts access to the identified horizontal partition (or partition tray) and (iii) one or more unlock codes necessary to instruct vending machine 1302 to unlock the relevant vending machine door. In this alternative embodiment, client device 1304 communicates the received information onward to vending machine 1302 (either directly or through the remote server), signaling a request to vending machine 1302 to unlock the relevant vending machine door and enable the customer to access the product stored behind the unlocked door. In a preferred embodiment the unlock codes forwarded to the client device 1304 may comprise encrypted unlock codes or encrypted single-use unlock codes.
[00118] Vending machine 1302 receives (either directly from remote server 1306 or through client device 1304) the information discussed in connection with step 1510, analyses the information, and subject to verification that the information received is genuine, may at step 1512 unlock the relevant vending machine door and allow the customer access to horizontal partitions (or partition trays) behind the unlocked vending machine door.
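By way of a non-limiting example of the single-use unlock codes mentioned in step 1510, the following sketch computes an HMAC over a (machine, door, nonce) tuple under a secret shared between remote server 1306 and vending machine 1302, with replayed nonces refused at verification time. This is one possible scheme assumed purely for illustration; the disclosure does not mandate a particular code format.

```python
# A sketch of issuing and verifying single-use unlock codes.
import hmac
import hashlib

def issue_unlock_code(secret: bytes, machine_id: str, door_id: str,
                      nonce: str) -> str:
    # Server side: bind the code to one machine, one door and one nonce.
    message = f"{machine_id}:{door_id}:{nonce}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify_unlock_code(secret, machine_id, door_id, nonce, code, used_nonces):
    # Machine side: recompute the expected code and refuse replays.
    if nonce in used_nonces:                      # single-use enforcement
        return False
    expected = issue_unlock_code(secret, machine_id, door_id, nonce)
    if not hmac.compare_digest(expected, code):   # reject forged codes
        return False
    used_nonces.add(nonce)
    return True
```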
[00119] At step 1514, the customer may thereafter remove one or more products stored on one or more of the partition trays (or horizontal partitions) to which said customer has been granted access by unlocking of the vending machine door(s).
[00120] Step 1516 comprises receiving at VM controller 1402, signals from load sensor(s) and / or image sensors associated with partition trays from which products have been removed by the customer (at step 1514), which signals communicate load state change information and / or image information corresponding to removal of product(s) from said partition trays.
[00121] At step 1518, based on the load state change signal received from a load sensor associated with a partition tray, and / or based on image information received from an image sensor monitoring said partition tray, product(s) removed from said partition tray may be identified in accordance with any of the methods discussed above in this specification.
[00122] Step 1520 thereafter comprises using (i) the determined identity of product(s) removed from a partition tray and (ii) a per-unit product price associated with said product to determine the total price of products removed from the partition tray.
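Steps 1518 and 1520 may be illustrated by the short sketch below, assuming per-tray metadata records a per-unit product weight and a per-unit product price; the field names are illustrative assumptions.

```python
# A sketch of inferring units removed from the load change, then pricing them.
def price_removed_products(load_delta_grams, tray):
    # Step 1518: units removed = load change / per-unit product weight.
    units = round(abs(load_delta_grams) / tray["per_unit_weight_grams"])
    # Step 1520: total price = units removed x per-unit product price.
    return units * tray["per_unit_price"]
```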
[00123] Thereafter payment of the total price may be obtained from the customer - for example by debiting a pre-paid electronic fund account associated with the customer or by charging the customer's bank account or credit or debit card.
[00124] In an embodiment of the invention, one or both of steps 1518 and 1520 may be implemented after the vending machine door that has been opened to enable a customer to access and remove products from a horizontal partition / partition tray has been closed, or has been closed and locked. In an embodiment of the invention, one or both of steps 1518 and 1520 may be implemented after VM controller 1402 (i) receives a signal from security controls 1416 that a vending machine door has been closed, (ii) dispatches an instruction to security controls 1416 to re-engage door lock(s) for one or more vending machine doors, or (iii) receives a signal from security controls 1416 that locks for one or more vending machine doors have been re-engaged. In an embodiment of the invention, if the vending machine door does not close, a timer may be activated, wherein the process of securing payment for items removed from the vending machine is initiated after expiry of a predefined time interval from activation of the timer. In an embodiment, activation of the timer and elapse of time from activation of the timer are communicated to the client device by way of one or more alerts.
[00125] Figure 16 illustrates an exemplary computing system for implementing the present invention.
[00126] The computing system 1602 comprises one or more processors 1604 and at least one memory 1606. Processor 1604 is configured to execute program instructions - and may be a real processor or a virtual processor. It will be understood that computer system 1602 does not suggest any limitation as to scope of use or functionality of described embodiments. The computer system 1602 may include, but is not limited to, one or more of a general-purpose computer, a programmed microprocessor, a micro-controller, an integrated circuit, and other devices or arrangements of devices that are capable of implementing the steps that constitute the method of the present invention. Exemplary embodiments of a system 1602 in accordance with the present invention may include one or more servers, desktops, laptops, tablets, smart phones, mobile phones, mobile communication devices, phablets and personal digital assistants. In an embodiment of the present invention, the memory 1606 may store software for implementing various embodiments of the present invention. The computer system 1602 may have additional components. For example, the computer system 1602 may include one or more communication channels 1608, one or more input devices 1610, one or more output devices 1612, and storage 1614. An interconnection mechanism (not shown) such as a bus, controller, or network, interconnects the components of the computer system 1602. In various embodiments of the present invention, operating system software (not shown) provides an operating environment for various software executing in the computer system 1602 using a processor 1604, and manages different functionalities of the components of the computer system 1602.
[00127] The communication channel(s) 1608 allow communication over a communication medium to various other computing entities. The communication medium carries information such as program instructions or other data over the communication media. The communication media includes, but is not limited to, wired or wireless methodologies implemented with an electrical, optical, RF, infrared, acoustic, microwave, Bluetooth or other transmission media.
[00128] The input device(s) 1610 may include, but is not limited to, a touch screen, a keyboard, mouse, pen, joystick, trackball, a voice device, a scanning device, or any other device that is capable of providing input to the computer system 1602. In an embodiment of the present invention, the input device(s) 1610 may be a sound card or similar device that accepts audio input in analog or digital form. The output device(s) 1612 may include, but is not limited to, a user interface on CRT, LCD, LED display, or any other display associated with any of servers, desktops, laptops, tablets, smart phones, mobile phones, mobile communication devices, phablets and personal digital assistants, a printer, speaker, CD/DVD writer, or any other device that provides output from the computer system 1602.
[00129] The storage 1614 may include, but is not limited to, magnetic disks, magnetic tapes, CD-ROMs, CD-RWs, DVDs, any types of computer memory, magnetic stripes, smart cards, printed barcodes or any other transitory or non-transitory medium which can be used to store information and can be accessed by the computer system 1602. In various embodiments of the present invention, the storage 1614 may contain program instructions for implementing any of the described embodiments.
[00130] In an embodiment of the present invention, the computer system 1602 is part of a distributed network or a part of a set of available cloud resources.
[00131] The present invention may be implemented in numerous ways including as a system, a method, or a computer program product such as a computer readable storage medium or a computer network wherein programming instructions are communicated from a remote location.
[00132] The present invention may suitably be embodied as a computer program product for use with the computer system 1602. The method described herein is typically implemented as a computer program product, comprising a set of program instructions which is executed by the computer system 1602 or any other similar device. The set of program instructions may be a series of computer readable codes stored on a tangible medium, such as a computer readable storage medium (storage 1614), for example, diskette, CD-ROM, ROM, flash drives or hard disk, or transmittable to the computer system 1602, via a modem or other interface device, over either a tangible medium, including but not limited to optical or analogue communications channel(s) 1608. The implementation of the invention as a computer program product may be in an intangible form using wireless techniques, including but not limited to microwave, infrared, Bluetooth or other transmission techniques. These instructions can be preloaded into a system or recorded on a storage medium such as a CD-ROM, or made available for downloading over a network such as the Internet or a mobile telephone network. The series of computer readable instructions may embody all or part of the functionality previously described herein.
[00133] It will be understood that methods and systems in accordance with the present invention provide an efficient and effective solution to the need for vending machines that simplify the user experience and present manufacturing and cost efficiencies, while simultaneously ensuring an efficient and secure payment system. Additionally, by incorporating a purchase and payment solution into mobile communication devices, users no longer need to have cash or electronic cards available to effect a purchase - and can instead rely entirely on their mobile phone.
[00134] While the exemplary embodiments of the present invention are described and illustrated herein, it will be appreciated that they are merely illustrative. It will be understood by those skilled in the art that various modifications in form and detail may be made therein without departing from or offending the spirit and scope of the invention as defined by the appended claims. Additionally, the invention illustratively disclosed herein suitably may be practiced in the absence of any element which is not specifically disclosed herein.

Claims:
1. A system for dispensing products from a vending machine, the system comprising: a vending machine cabinet comprising: at least one interior compartment configured to accommodate products for dispensing; at least one door configured to provide access to the at least one interior compartment; an imaging apparatus configured to acquire images of at least part of the at least one interior compartment; and a product recognition apparatus communicably coupled with the imaging apparatus, and configured to identify products located within the at least one interior compartment, said product recognition apparatus comprising: a first neural network configured to identify within an image received from the imaging apparatus: locations at which one or more products are positioned within the image; and for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location; and a group of neural networks comprising at least two neural networks that are distinct from the first neural network, wherein each neural network within the group of neural networks is: associated with one of a plurality of package types that the first neural network is configured to recognize; and configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus; wherein the product recognition apparatus is configured such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus: image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network; and recognition of a product at the specific location within the image is based on an output from the second neural network.
2. The system as claimed in claim 1, wherein the first neural network comprises: a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer; a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer; and a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer; wherein the system is configured: to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus; to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers; to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers; to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus; and for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
3. The system as claimed in claim 2, comprising a set of background detection layers of network nodes, the set of background detection layers comprising an input layer and an output layer, wherein: the system is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers; and the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
4. The system as claimed in claims 2 or 3, wherein any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers includes one or more intermediate layers or network nodes disposed between an input layer and an output layer thereof.
5. The system as claimed in claim 1, wherein the first neural network is configured such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
6. The system as claimed in claim 1, wherein any one or more of the neural networks of the system comprises a convolutional neural network.
7. A method for configuring a product recognition apparatus for neural network based recognition of products located within an interior compartment of a vending machine, based on one or more images of said interior compartment acquired at an imaging apparatus, the method comprising: configuring a first neural network to identify within an image received from an imaging apparatus: locations at which one or more products are positioned within the image; and for each determined location at which a product is positioned within the image, a package type corresponding to the product positioned at the determined location; configuring a group of neural networks comprising at least two neural networks that are distinct from the first neural network, such that each neural network within the group of neural networks is: associated with one of a plurality of package types that the first neural network is configured to recognize; and configured to sub-classify products of the associated package type into one of a plurality of predetermined product types, which sub-classification is based on image information received from the imaging apparatus; and configuring the product recognition apparatus such that, responsive to the first neural network identifying a specific package type corresponding to a product detected at a specific location within an image received from the imaging apparatus: image information corresponding to the specific location within the image is input to a second neural network selected from within the group of neural networks, wherein said second neural network is associated with the specific package type identified by the first neural network; and recognition of a product at the specific location within the image is based on an output from the second neural network.
8. The method as claimed in claim 7, comprising configuring the first neural network to include: a set of common layers of network nodes, wherein said set of common layers includes an input layer and an output layer; a plurality of distinct sets of package type detector layers of network nodes, each set of package type detector layers comprising an input layer and an output layer; and a set of product location detection layers of network nodes, the set of product location detection layers comprising an input layer and an output layer; and wherein the product recognition apparatus is configured: to provide as input to the input layer of the set of common layers, an input image vector generated based on an image received from the imaging apparatus; to provide as input to the respective input layers of each set of package type detector layers, output from the output layer of the set of common layers; to provide as input to the input layer of the set of product location detection layers, output from the output layer of the set of common layers; to determine based on output from the output layer of the set of product location detection layers, locations at which one or more products are positioned within the image received from the imaging apparatus; and for each determined location at which a product is positioned within the image received from the imaging apparatus, to determine based on output from the respective output layers of each set of package type detector layers, a package type corresponding to the product positioned at the determined location.
9. The method as claimed in claim 8, comprising configuring the product recognition apparatus such that the first neural network includes a set of background detection layers of network nodes, the set of background detection layers comprising an input layer and an output layer, wherein: the product recognition apparatus is configured to provide as input to the input layer of the set of background detection layers, output from the output layer of the set of common layers; and the determination of package type corresponding to product(s) positioned within the image received from the imaging apparatus is additionally based on output from the output layer of the set of background detection layers.
10. The method as claimed in claims 8 or 9, wherein any one or more of the set of common layers, the sets of package type detector layers, the set of product location detection layers, and the set of background detection layers includes one or more intermediate layers or network nodes disposed between an input layer and an output layer thereof.
11. The method as claimed in claim 7, comprising configuring the first neural network such that the identified locations at which one or more products are positioned within the image received from the imaging apparatus are locations at which top-center regions of said one or more products are positioned.
12. The method as claimed in claim 7, comprising configuring the product recognition apparatus such that any one or more of the neural networks of said product recognition apparatus comprises a convolutional neural network.
13. The method as claimed in claim 7, wherein configuring the product recognition apparatus comprises responding to detection of an unrecognizable product within an image received from the imaging apparatus with the steps of: in response to determining that the unrecognizable product comprises a previously unrecognizable package type: generate an additional neural network within the set of product type identifier networks and uniquely associate the generated neural network with the previously unrecognizable package type; generate an additional set of package type detector layers within the first neural network, said additional set of package type detector layers comprising an input layer and an output layer, and associate the generated additional set of package type detector layers with the previously unrecognizable package type; and input training data corresponding to the previously unrecognizable product to one or both of the generated additional neural network and the generated additional set of package type detector layers; and in response to determining that the unrecognizable product comprises a recognizable package type: identify within the set of product type identifier networks, a neural network associated with the recognizable package type; and input training data corresponding to the unrecognizable product to said identified neural network.
14. A method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine, the method comprising the steps of: positioning a first product having a defined package type and a defined product identity at a defined first location within the interior compartment; triggering video acquisition mode at an imaging apparatus configured to acquire a video feed of the interior compartment; for the duration of the video feed, maintaining the first product at the defined first location, while implementing one or more of placement, removal or movement of other products at or between various other locations within the interior compartment; extracting a plurality of image frames from the acquired video feed; and utilizing image information from the extracted image frames as training data corresponding to the defined package type or the defined product identity.
15. A method for generating training data for training one or more neural networks configured for recognition of products located within an interior compartment of a vending machine, the method comprising the steps of: obtaining an image of an interior compartment of a vending machine, said interior compartment having one or more products positioned therewithin; tagging a product within the image by selecting a first image segment comprising a portion of the image which contains a top-center region of the product; labeling the first image segment with a label identifying the package type or the product identity; generating one or more variant images corresponding to the identified package type or the product identity, wherein generating a variant image comprises generating a second image segment - such that the second image segment comprises (i) at least a sub-set of pixels within the first image segment, which sub-set of pixels have been used to image the top-center region of the product, and (ii) either (a) the first image segment comprises at least a second sub-set of pixels that are not included within the second image segment or (b) the second image segment includes a third sub-set of pixels that are within the obtained image and that are not included within the first image segment; and utilizing the first image segment and the generated variant images as training data for the one or more neural networks.
PCT/IB2018/050881 2017-02-22 2018-02-13 Vending machines and methods for dispensing products WO2018154411A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201741006353 2017-02-22
IN201741006353 2017-02-22

Publications (2)

Publication Number Publication Date
WO2018154411A2 true WO2018154411A2 (en) 2018-08-30
WO2018154411A3 WO2018154411A3 (en) 2018-11-29

Family

ID=63252470

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/050881 WO2018154411A2 (en) 2017-02-22 2018-02-13 Vending machines and methods for dispensing products

Country Status (1)

Country Link
WO (1) WO2018154411A2 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10346726B2 (en) * 2014-12-15 2019-07-09 Samsung Electronics Co., Ltd. Image recognition method and apparatus, image verification method and apparatus, learning method and apparatus to recognize image, and learning method and apparatus to verify image

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109362067A (en) * 2018-09-30 2019-02-19 北京未来便利技术有限公司 A kind of automatically vending system
WO2020081170A1 (en) * 2018-10-20 2020-04-23 The Nordam Group Llc Neural vending machine
US10984282B2 (en) 2018-10-20 2021-04-20 The Nordam Group Llc Neural vending machine
JP2022508808A (en) * 2018-10-20 2022-01-19 ザ・ノーダム・グループ・エルエルシー Neural vending machine
CN110097087A (en) * 2019-04-04 2019-08-06 浙江科技学院 A kind of automatic binding reinforcing bars location recognition method
WO2021175601A1 (en) * 2020-03-02 2021-09-10 BSH Hausgeräte GmbH Creating and updating a product database
CN111260850A (en) * 2020-03-09 2020-06-09 厦门翟湾电脑有限公司 Mobile terminal protective housing equipment on probation
CN111260850B (en) * 2020-03-09 2020-09-11 诸暨叶蔓电子商务有限公司 Mobile terminal protective housing equipment on probation
WO2022190061A1 (en) * 2021-03-11 2022-09-15 Rk.Ai - Serviços De Processamento De Imagens E Análise De Dados Lda. Storage cabinet, methods and uses thereof
CN113763629A (en) * 2021-03-25 2021-12-07 北京京东乾石科技有限公司 Intelligent sales counter and foreign matter detection method

Also Published As

Publication number Publication date
WO2018154411A3 (en) 2018-11-29

Similar Documents

Publication Publication Date Title
WO2018154411A2 (en) Vending machines and methods for dispensing products
CN108335408B (en) Article identification method, device and system for vending machine and storage medium
US11638490B2 (en) Method and device for identifying product purchased by user and intelligent shelf system
CN109409291B (en) Commodity identification method and system of intelligent container and shopping order generation method
EP3454698B1 (en) System and method for computer vision driven applications within an environment
WO2020047919A1 (en) Self-service vending method, apparatus and system, and server and computer-readable storage medium
US20200042969A1 (en) Vending machines and methods for dispensing products
US20200097897A1 (en) Method and system for automatic vending, vending terminal
US11989996B2 (en) Device for storing objects and method using such a device
US11941629B2 (en) Electronic device for automated user identification
US20230118277A1 (en) Method, a device and a system for checkout
CA3040843A1 (en) An automatic in-store registration system
EP3901841A1 (en) Settlement method, apparatus, and system
JP2023524501A (en) Product identification system and method
US20210406531A1 (en) Electronic device for automated user identification
CN111222870A (en) Settlement method, device and system
CN109934569B (en) Settlement method, device and system
CN109190706A (en) Self-service method, apparatus and system
CN104766408A (en) Method capable of being used for counting user operation situations of self-service article taking and placing device
CN114387735A (en) Method, device and system for picking up goods
CN110837824B (en) Commodity identification method for vending device, vending device and storage medium
CN111932774A (en) Method and device for identifying sold commodities of vending machine and vending machine
CN108648036A (en) Commodity recognition method, system and storage medium on a kind of shelf
CN111523348B (en) Information generation method and device and equipment for man-machine interaction
US20240119435A1 (en) Automated and self-service item kiosk

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18757614

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18757614

Country of ref document: EP

Kind code of ref document: A2