US20210209550A1 - Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors - Google Patents
- Publication number
- US20210209550A1 (U.S. application Ser. No. 16/737,717)
- Authority
- US
- United States
- Prior art keywords
- shelf
- sensor
- camera
- threshold
- met
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
- G06Q10/087—Inventory or stock management, e.g. order filling, procurement or balancing against orders
- G06Q10/0875—Itemisation or classification of parts, supplies or services, e.g. bill of materials
- G06K9/00671
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- Retailers use a variety of shelf monitoring systems to track their inventory.
- Current shelf monitoring systems can either count products or identify products with good accuracy, but they do not perform both well.
- smart shelves based on resistive, capacitive, or light sensors can determine how many objects are on a shelf and where the items are positioned, but they do not have the ability to confidently state what specifically those items are.
- shelf monitoring robots and fixed cross-aisle cameras periodically capture images of a retailer's shelves and use image recognition tools to identify what the stocked items are, but this only acquires product facings. They therefore cannot accurately tell how many items, if any, are actually on the shelf behind those first items.
- One of the counting systems from above could be deployed along with an object recognition system in an attempt to provide a complete arrangement, but this presents a number of system integration and cost issues.
- the present invention is an apparatus for triggering object recognition and planogram generation.
- the apparatus may comprise a shelf, at least one sensor affixed to the shelf, at least one camera affixed to the shelf, and at least one server communicatively coupled to the at least one camera and the at least one sensor.
- the apparatus may be configured such that the at least one sensor is configured to detect that at least one threshold has been met, and the at least one sensor is further configured to trigger an image to be taken by the at least one camera; wherein the at least one camera is configured to capture at least one image based on the met threshold, and the at least one server is configured to identify at least one product identifier based on the at least one captured image.
- the present invention is a method for triggering object recognition and planogram generation.
- the method may comprise detecting, by at least one sensor coupled to a shelf, that at least one threshold has been met; triggering, by the at least one sensor, at least one image to be captured by at least one camera coupled to the shelf; capturing, at the at least one camera, at least one image based on the met threshold; and identifying, by at least one server, at least one product identifier based on the at least one captured image.
- the present invention is a tangible machine-readable medium comprising instructions for triggering object recognition and planogram generation.
- the instructions when executed, cause a machine to at least detect that at least one threshold has been met by at least one sensor; trigger at least one image to be captured by at least one camera coupled to the at least one sensor; capture at least one image based on the met threshold at the at least one camera; and identify at least one product identifier based on the at least one captured image by at least one server.
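The detect, trigger, capture, and identify flow summarized in the claims above can be sketched in code. All class, field, and callback names below are hypothetical illustrations for clarity, not terms drawn from the disclosure:

```python
from dataclasses import dataclass


@dataclass
class ShelfEvent:
    """Message a shelf sensor emits when a threshold is met."""
    shelf_id: str
    threshold_name: str
    value: float


class ShelfMonitor:
    """Minimal sketch of the claimed detect -> trigger -> capture -> identify loop."""

    def __init__(self, capture, identify):
        self.capture = capture      # camera callback: ShelfEvent -> image
        self.identify = identify    # server callback: image -> product identifiers
        self.log = []               # (shelf_id, identifiers) history

    def on_sensor_reading(self, event: ShelfEvent, threshold: float) -> list:
        # Step 1: detect that the threshold has been met.
        if event.value < threshold:
            return []               # condition not met; the camera stays idle
        # Steps 2-3: trigger the camera and capture an image based on the met threshold.
        image = self.capture(event)
        # Step 4: identify product identifiers from the captured image.
        identifiers = self.identify(image)
        self.log.append((event.shelf_id, identifiers))
        return identifiers
```

In practice the `identify` callback would wrap a call to the recognition engine on the server, while `capture` would address the shelf-edge camera hardware.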
- FIG. 1 illustrates an example system for object recognition and planogram generation.
- FIG. 2 illustrates an example apparatus for object recognition and planogram generation.
- FIG. 3 illustrates example shelves comprising apparatuses for object recognition and planogram generation.
- FIG. 4 illustrates an example flow diagram for object recognition and planogram generation.
- FIG. 5 is a block diagram of an example logic circuit for implementing example methods and/or operations described herein.
- Systems, apparatuses, and methods for object recognition and planogram generation are disclosed herein.
- systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors are disclosed herein.
- the systems and methods disclosed herein are a better overall approach for retailers to take in tracking their inventory.
- the systems, apparatuses, and methods described herein may lead to cost savings and higher quality inventory management and planograms based on the more accurate object recognition and planogram generation.
- an apparatus, or system may comprise a camera and a smart shelf where the camera is integrated into the front edge of the smart shelf and looks back at the items sitting on the shelf, resulting in a singular piece of hardware that could both count the items on the shelf and recognize those items on the shelf.
- the retailer does not have to purchase and deploy both a robot and smart shelves or be concerned about situations where there is no physical place to mount a cross-aisle camera to use in conjunction with smart shelves.
- the camera and smart shelf may not be physically integrated into a singular piece of hardware, but instead are logically integrated via network communication links.
- One self-contained piece of hardware may simplify the system design, cost structure, deployment, and maintenance.
- the challenge with this approach is acquiring consistently good images for object recognition when the merchandise could be placed at a variety of distances from the camera; achieving suitable focus both at especially close range and at several inches away is a problem.
- the output of the smart shelf sensors can be used to determine if an object is appropriately positioned for a photo before exercising the camera.
- the camera would be focused only on the front inventory position near the shelf's edge, thereby simplifying the camera requirements and the object recognition processing as well as setting up clear bounds for a useful image.
- when the front position is empty, the system can continue to refer to the most recent image taken; once the position is occupied again, the camera can be allowed to take another image. Since a successful implementation will require limiting power and communications, it is important that neither energy nor time is wasted on capturing and analyzing images that ultimately serve no purpose. This embodiment therefore serves as a valuable filter.
- the smart shelf can determine when it is restocked based on the gross changes in the inventory it senses. This naturally is when the system can benefit most from knowing what specifically is now on the shelf, since it knows a change was detected. Taking a photo after this restocking trigger is beneficial, as opposed to at an arbitrarily prescribed time when a retailer doesn't know if anything changed or if the photo will be useful. For example, cameras could be programmed to take a picture at 8:00 every morning, assuming that restocking was completed by then, but in reality, the first shelf may have been stocked at 7:58, the second shelf at 8:03, the third shelf at 8:05, and so on.
- the images taken at 8:00 for the second shelf onward would be outdated within minutes, and the system would sit with this stale information for 24 hours until the next image is taken. More intelligent choices could be made if the timing of the photo were tied to the shelf sensing output. Additional criteria for when the camera could be triggered or authorized to take another image may include: 1) when the shelves are faced (i.e. pulling the merchandise forward to the shelf's edge, a.k.a. leveled or zoned), 2) when the number of items on the shelf meets a specified threshold, 3) when the change in the number of items on the shelf meets a specified threshold, or 4) when a certain percentage of the shelf area is populated. When the specified conditions are not met, the camera will wait to take the next photo.
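The four trigger criteria above can be sketched as a single gating function. The default threshold values below are hypothetical placeholders, since the disclosure leaves the actual values to the retailer:

```python
def should_trigger_capture(faced: bool, item_count: int, count_change: int,
                           occupied_fraction: float,
                           min_count: int = 10, min_change: int = 3,
                           min_occupancy: float = 0.75) -> bool:
    """Return True when any of the four example criteria authorizes a photo.

    The default thresholds are illustrative assumptions, not values from
    the disclosure; real deployments would set them per shelf or retailer.
    """
    return (
        faced                                    # 1) merchandise pulled to the shelf edge
        or item_count >= min_count               # 2) item count met a threshold
        or abs(count_change) >= min_change       # 3) change in item count met a threshold
        or occupied_fraction >= min_occupancy    # 4) enough shelf area populated
    )
```

When every criterion fails, the camera simply waits, which matches the power- and bandwidth-saving behavior described above.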
- in this context, “met” may mean that a particular threshold value has been crossed, i.e. that a value has risen above the threshold value or fallen below the threshold value. In some cases, “met” may mean that a value tracked by the system is equal to the threshold value.
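The three senses of “met” described above can be captured in a small helper; the function name and the direction labels are illustrative assumptions:

```python
def threshold_met(value: float, threshold: float, direction: str = "rising") -> bool:
    """Illustrate the three senses of a threshold being "met".

    'rising'  : the tracked value has risen to or above the threshold
    'falling' : the tracked value has fallen to or below the threshold
    'equal'   : the tracked value is exactly equal to the threshold
    """
    if direction == "rising":
        return value >= threshold
    if direction == "falling":
        return value <= threshold
    return value == threshold
```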
- this arrangement can check if the actual shelf inventory is compliant with a formal planogram, and additionally, the smart shelf can also be used to automatically generate and update the formal planogram.
- This will allow store managers to digitally see what is actually on their shelves and where it's stocked. For example, when the retailer is out of stock on one SKU but has overstock for the SKU in the next shelf lane, store associates will likely put the overstocked item in the adjacent empty spot on the shelf. Retailers and consumer packaged goods companies prefer to have something on the shelves to sell instead of nothing, even if it's the “wrong” item per the formal planogram.
- the apparatuses, systems, and methods disclosed herein may be able to identify a plurality of different product identifiers and characteristics, such as, for example, product SKUs and product UPCs.
- electronic shelf labels (ESLs) may be created and/or updated at the same time as the planogram is created and/or updated. For example, when overstocked inventory is put in the adjacent lane and the smart shelf sensor, in conjunction with the camera, updates the planogram, this could then trigger the ESL data for the respective SKUs affected in the planogram to be updated as well, including the product names, SKUs, and prices listed on the ESLs.
- FIG. 1 illustrates an example system 100 for object recognition and planogram generation.
- the system 100 may include additional or fewer components than are depicted in the figure.
- the example system 100 may comprise a shelf 102 , a camera 104 , a sensor 106 , a network 108 , network connections 110 , and a server 112 .
- the camera 104 , sensor 106 , and server 112 may be communicatively coupled to each other via the network 108 and network connections 110 . As such, those components may be able to exchange information with each other relevant to object recognition and planogram generation.
- the shelf 102 may be part of a larger shelving unit comprising multiple individual shelves such as shelf 102 . These shelves may be used to store products of a variety of dimensions.
- the camera 104 and the sensor 106 are physically coupled to the shelf 102 .
- the camera 104 and sensor 106 may be manufactured as part of the shelf 102 itself, and in other embodiments the camera 104 and sensor 106 may be added to the shelf 102 after the shelf itself has been manufactured.
- the camera 104 may comprise one individual camera. In other embodiments, the camera 104 may comprise multiple cameras able to capture images at a variety of positions and angles on the shelf 102 . In some embodiments, the camera 104 may be configured to capture at least one image based on the met threshold and identify at least one product identifier based on the at least one captured image. In yet other embodiments, the camera 104 may be configured to take multiple images of the objects on the shelf 102 or of the shelf 102 itself.
- the system 100 there may be multiple cameras per shelf 102 with space between them in order to appropriately cover all the product facings across the complete shelf 102 .
- the camera-to-camera pitch may be based on the viewing angle of each camera 104 .
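Assuming the design intent is for adjacent fields of view to just meet at the front inventory position, the camera-to-camera pitch follows from basic trigonometry. The formula below is an illustrative sketch under that assumption, not a relationship stated in the disclosure:

```python
import math


def camera_pitch(front_item_distance_inches: float,
                 horizontal_fov_degrees: float) -> float:
    """Hypothetical camera-to-camera spacing along the shelf edge.

    Assumes adjacent cameras' horizontal fields of view should meet at the
    front inventory position, a distance front_item_distance_inches from
    the camera plane; the pitch is then the width one camera covers there.
    """
    half_fov = math.radians(horizontal_fov_degrees) / 2.0
    return 2.0 * front_item_distance_inches * math.tan(half_fov)
```

For example, a 90-degree lens viewing items 10 inches away would cover a 20-inch span, so cameras could be spaced roughly 20 inches apart; a narrower lens requires a tighter pitch.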
- the plurality of cameras 104 may be communicatively coupled to each other to provide a more complete picture of the status of goods on an individual shelf 102 , a section of shelves in a lane, as well as the totality of shelves in an aisle, or even an entire store.
- the server 112 may be configured to perform the identification itself, but in other embodiments, the server 112 may identify the at least one product identifier by communicating with software, applications, and/or databases accessible to the server 112.
- the product images in the databases used for product recognition may originate from a variety of sources. For example, the images could be provided by each product's supplier, by the retailer, or by a third party. The images may be taken manually, or there may be an automated setup that captures multiple images for each product, which are then uploaded to the database. Additionally, the image data stored in the libraries may be collected via point of sale solutions that capture images of the products, along with UPC (barcode) data and/or EPC (RFID) data.
- an in-lane or bioptic scanner may include a camera that records an image of the product while the barcode is read.
- the server 112 may be further configured to identify in the at least one captured image a product SKU, product UPC, or a combination thereof.
- other product identifiers may be used for products and identifiable by the server 112 , such as, for example, brand names for products, the product class, product dimensions, as well as colors, shapes, outlines, features, patterns, and anything else in the product's appearance that visually distinguishes it.
- the camera 104 may be capable of performing the identification itself without communication with the server 112 , such that the camera 104 may be capable of performing all the same functionality listed above with respect to the server 112 in reference to performing the identification.
- the camera 104 may work in conjunction with the server 112 to identify the best types of images, and the best conditions under which to take an image. This identification process may improve over time through the use of artificial intelligence and/or machine learning techniques to train the camera 104 to take better pictures.
- the training of the camera 104 may also include integrating the data from sensor 106 into the library of information that the camera 104 , or other program, may be trained on.
- the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images, and conditions best suited for taking images of the shelves and the objects on the shelves.
- the smart shelf, i.e. the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed, based on a library of images and the corresponding sensor data. More particularly, the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106.
- the system 100 may know 1) whether or not the captured image is good based on how useful it is for the object recognition engine, that may be stored on the server 112 , and 2) from the sensor 106 , where the associated merchandise was located on the shelf 102 when the image was captured. If it is determined that images are more useful when the merchandise is positioned in a specific location, then the system 100 could adjust the thresholds that trigger the camera 104 to capture an image, improving future performance. The adjustments of the triggers and thresholds may be managed by software located at the server 112 , camera 104 , and/or sensor 106 .
- the system 100 may be counting items in a shelf's lane that turn out to be beverage cans, which are relatively short, so a suitable image could have the front can positioned closer to the camera 104 .
- the system 100 may be counting tall beverage bottles, and in order to get a quality image for successful object recognition, the front bottle needs to be further from the camera 104 to fit more of it in the camera's field of view. So, while the system 100 may start with pre-defined default parameters for acceptable merchandise locations on the shelf 102 relative to the camera 104 , based on real-world interactions it may learn over time what works well for object recognition and what doesn't and then adjust accordingly.
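One hypothetical way to realize this learning loop is to nudge the acceptable front-item distance based on recent recognition outcomes near the current limit. The rule, names, and step size below are all assumptions for illustration, not the disclosure's method:

```python
def adjust_position_threshold(current_max_distance: float,
                              samples: list,
                              step: float = 0.5) -> float:
    """Nudge the acceptable front-item distance from the camera, in inches.

    samples: (distance_from_camera_inches, recognition_succeeded) pairs for
    recently captured images. If images taken near the current limit mostly
    fail recognition, tighten the limit; if they mostly succeed, relax it.
    """
    near_limit = [ok for dist, ok in samples if dist >= current_max_distance - step]
    if not near_limit:
        return current_max_distance          # no evidence near the limit; keep it
    success_rate = sum(near_limit) / len(near_limit)
    if success_rate < 0.5:
        return current_max_distance - step   # tighten: require items closer
    return current_max_distance + step       # relax: allow items farther away
```

This mirrors the can-versus-bottle example: failures on tall bottles near the limit would pull the limit in, while consistent successes would let it drift out.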
- the server 112 may be further configured to generate a planogram for the shelf based on the at least one identified product identifier. The generation of the planogram may take into account other product information such as characteristics, or dimensions related to the product. Additionally, the server 112 may be further configured to update a planogram for the shelf based on the at least one identified product identifier. The created planogram, or updated planogram, may be transmitted to a user device (not shown) via the network 108 and server 112 . At the user device, an individual working in the store where the shelf 102 is located may thus have a real-time picture of the inventory in the store.
- planograms may be handled by the camera 104, which may be capable of the same functionality listed above for the server with respect to creating and updating planograms.
- server 112 may be configured to generate or update an electronic shelf label for the shelf 102 based on the identified product identifier.
- the sensor 106 may comprise one individual sensor or a sensor array. In other embodiments, the sensor 106 may comprise a variety of sensors or sensor arrays linked together. In one embodiment, the at least one sensor may be a resistive sensor, capacitive sensor, light sensor, or a combination thereof. In one embodiment, the sensor 106 may be configured to detect that at least one threshold has been met. Additionally, the sensor 106 may be further configured to trigger an image to be taken by the at least one camera. As such, the sensor 106 may be further configured to transmit a message to the at least one camera comprising information about the at least one threshold that has been met.
- the at least one threshold may be a value representative of: when objects on the at least one shelf are faced, when the number of objects on the shelf meets a threshold, when a change in the number of objects on the shelf meets a threshold, when a certain percentage of an area of the shelf is occupied by objects, or a combination thereof.
- the store employees may move the objects to the front edge of the shelf 102 , which is an optimal location for the objects' images to be captured by the at least one camera 104 . This may also be an optimal time for the camera 104 to capture an image since the employees just spent time organizing and arranging the shelf 102 merchandise.
- when the number of objects on the shelf 102 meets a minimum threshold, this may indicate that the shelf 102 is full enough with objects such that having an updated image captured would be useful.
- when the number of objects on the shelf 102 falls below a minimum threshold, this may indicate a low inventory condition, and it may be beneficial to capture another image to identify the current objects left on the shelf 102.
- when the number of objects on the shelf 102 increases by a threshold quantity in a specific time period (i.e. the change in the number of objects), this may indicate a restocking event has taken place and may be the optimal time to capture an image.
- when the number of objects on the shelf 102 decreases by a threshold quantity in a specific time period, this may indicate other significant activity has taken place at the shelf 102, such as a high volume of shopping or even a theft event (i.e. a sweep).
- when the number of objects on the shelf 102 has neither increased nor decreased beyond a threshold quantity in a specific time period, this may indicate that very little to nothing has physically occurred to the objects on the shelf, so there is no need to consume time or energy to capture a new image.
- the percentage of the shelf area occupied by objects may be used for a threshold since object count may not be sufficient; there could be a larger number of physically smaller items and/or a smaller number of physically larger items on shelf 102 , so using a threshold that considers the occupied shelf space compared to the size of the shelf 102 itself may be needed.
- a threshold could be set to only capture images when the shelf is at least 75% occupied.
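With a grid-style sensor array, the occupancy fraction and the example 75% gate can be computed directly. This is a minimal sketch assuming one boolean reading per sensing cell, which is one plausible output format for a resistive, capacitive, or light sensor array:

```python
def shelf_occupancy(sensor_cells: list) -> float:
    """Fraction of a shelf's sensed area that is occupied.

    sensor_cells: one boolean per sensing cell (True = object detected).
    This per-cell format is an assumption about the sensor array's output.
    """
    if not sensor_cells:
        return 0.0
    return sum(sensor_cells) / len(sensor_cells)


def occupancy_trigger(sensor_cells: list, minimum: float = 0.75) -> bool:
    """True when the shelf is occupied enough to justify capturing a new image."""
    return shelf_occupancy(sensor_cells) >= minimum
```

Because this works on area rather than item count, a few large items and many small items are treated consistently, which is the motivation given above.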
- the above listed thresholds may be set by individuals working in the retail store where the shelf 102 is located or by individuals at a remote location, such as a corporate office.
- the network 108 may be a local network that is constrained to one physical location, such as, for example, an individual store. In other embodiments, the network 108 may be the Internet, and/or a combination of a local network connected to the Internet.
- the other components of the system 100 may connect to the network 108 via the network connections 110. This connection may be wired or wireless in nature. It may also utilize short-range wireless communications, such as, for example, Bluetooth or NFC.
- the server 112 may be local or remote to the other components in the system 100 .
- the server may be located in the same physical building as those components.
- the server may be located in a different physical building from the other components of the system and communicate with them via the network 108.
- FIG. 2 illustrates an example apparatus 200 for object recognition and planogram generation.
- the apparatus 200 may include additional or fewer components than are depicted in the figure.
- the apparatus 200 includes a shelf 202 , a camera 204 , and a sensor 206 . Also shown in the figure are a plurality of the same product 208 , and a space 210 where product 208 could fit on the shelf 202 .
- the apparatus 200 shown in FIG. 2 may be the same kind of apparatus shown in FIG. 1 comprising the shelf 102 , camera 104 , and sensor 106 . Accordingly, the apparatus 200 in FIG. 2 may share the same characteristics and capabilities as the shelf 102 , camera 104 , and sensor 106 in FIG. 1 . In other embodiments, the apparatus 200 in FIG. 2 may be configured differently from the shelf 102 , camera 104 , and sensor 106 of FIG. 1 .
- FIG. 2 depicts a plurality of products 208 that are at rest on the shelf 202 .
- the shelf may be in a retail store, and the products 208 on the shelf are part of the inventory of that store.
- the camera 204 and sensor 206 may be capable of detecting changes related to the products 208 depicted in the figure. For example, the sensor 206 may detect that there is an absence of product 208 in the space 210 . Depending on the sensor 206 , it could, for example, detect that the level of pressure applied at the space 210 has fallen below a threshold level set by the retail store manager, and then the sensor 206 could generate and transmit a message to the camera 204 .
- the sensor 206 could detect a change in the capacitance or the light intensity at the space 210 beyond a set threshold level.
- the camera 204 may be programmed such that it is triggered to capture images when the sensor 206 detects object(s) on the shelf and can count them; the sensor 206 coordinates with the camera 204 on when to capture an image, which is then used to identify what those objects are.
- the camera 204 may receive the message and capture an image of the space 210 , and in some cases the surrounding shelf 202 space. Based on this image the camera 204 may be able to identify the product 208 that is supposed to be in that space 210 , or in other embodiments the camera 204 may be able to identify a product identifier related to the types of products 208 that should be located in space 210 .
- FIG. 3 illustrates example shelves 202 comprising apparatuses for object recognition and planogram generation.
- Each shelf 202 in FIG. 3 may include a camera 204 and sensor 206 that share the same characteristics and capabilities of the apparatus 200 depicted in FIG. 2 .
- FIG. 3 depicts another configuration of products and shelves in, for example, a retail store.
- FIG. 3 depicts a variety of different products 302 , 304 , 306 , 308 , 310 , and 312 .
- Each shelf 202 in FIG. 3 may include two different types of products as shown in the figure.
- the sensors 206 and cameras 204 for each shelf may be capable of detecting each product type that is located on each shelf as well as changes that are relevant to each product. For example, the sensor 206 on the top shelf 202 may detect an accurate count for products 302 and 304 , but it cannot identify what products these are or distinguish one from the other.
- when the quantities detected by the sensor 206 satisfy a threshold, the top camera 204 is given permission to capture an image or images of the objects on this shelf 202, especially the item in the center (from left to right) at the front edge of the shelf 202. These images are then used to identify which products are products 302 and which are products 304, and the planogram that details the specific products and their quantities is updated.
- the sensor 206 on the middle shelf 202 may detect an empty space at 314 . This condition may meet a threshold that triggers the camera 204 to capture an image of the shelf.
- the sensor 206 on the bottom shelf 202 may detect the low product count in space 316 , allowing the camera 204 on the bottom shelf to capture an image or images of the two product facings remaining in that space (e.g. one product 310 and one product 312 ). These images may then be used to confirm the products' identities. With the knowledge of what these products are and that both are low in inventory in their respective shelf locations, an employee can restock this bottom shelf 202 with the products 310 and 312 . These shelves may be part of a longer shelving system that runs the length of an aisle in the store.
- the cameras 204 and sensors 206 may be communicatively linked to a server (not pictured) that may assist in the updating and creation of planograms based on events that are relevant to the shown products 302 - 312 .
- FIG. 4 illustrates an example flow diagram 400 for object recognition and planogram generation.
- One or more steps of the method 400 may be implemented as a set of instructions on a computer-readable memory and executable on one or more processors.
- the example flow diagram may utilize a shelf, at least one sensor, at least one camera, and at least one server.
- the method 400 may comprise detecting, by at least one sensor coupled to a shelf, that at least one threshold has been met (block 402 ); triggering, by the at least one sensor, at least one image to be captured by at least one camera coupled to the shelf (block 404 ); capturing, at the at least one camera, at least one image based on the met threshold (block 406 ); and identifying, by at least one server, at least one product identifier based on the at least one captured image (block 408 ).
- the at least one threshold in (block 402 ) may be a value representative of: when objects on the at least one shelf are faced, when the number of objects on the shelf met a threshold, when a change in the number of objects on the shelf met a threshold, when a certain percentage of an area of the shelf is occupied by objects, or a combination thereof.
- triggering at least one image to be captured (block 404 ) further comprises transmitting, at the at least one sensor, a message to the at least one camera comprising information about the at least one threshold that has been met.
- the camera capturing an image may include the camera capturing a plurality of images.
- the camera may capture a time series of images and select the best image representative of the objects on the shelf and/or change to the shelf. This image may ultimately be presented to a user of a user device that is attempting to access an updated, or newly created, planogram related to the shelf.
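Selecting the best image from such a time series could, for example, rank candidates by how many product identifiers the recognition engine extracted from each, breaking ties on image sharpness. Both scoring keys below are hypothetical, since the disclosure does not specify a selection metric:

```python
from typing import Optional


def select_best_image(candidates: list) -> Optional[dict]:
    """Pick the most useful image from a burst of captures.

    Each candidate dict is assumed to carry a "recognized" count (how many
    product identifiers the recognition engine extracted from the image)
    and a "sharpness" score; both keys are illustrative assumptions.
    """
    if not candidates:
        return None
    # Prefer the image that yielded the most identifications; break ties on sharpness.
    return max(candidates, key=lambda c: (c["recognized"], c["sharpness"]))
```

The winning image would then be the one presented to a user inspecting the updated planogram for that shelf.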
- identifying the at least one product identifier further comprises identifying, at the server, in the at least one captured image a product SKU, product UPC, or a combination thereof. Additionally, other product identifiers such as those listed herein may be used and may be able to be identified by the server.
- the method 400 may further comprise generating or updating, at the at least one server, a planogram for the shelf based on the at least one identified product identifier. Similarly, the method 400 may further comprise generating or updating, at the at least one server, the information displayed on at least one electronic shelf label for the shelf based on the at least one identified product identifier.
- FIG. 5 is a block diagram representative of an example logic circuit capable of implementing, for example, the example processors included in the camera 104, sensor 106, and server 112 of FIG. 1, as well as the cameras 204 and sensors 206 depicted in FIGS. 2-3.
- the example logic circuit of FIG. 5 is a processing platform 500 capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description.
- Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs).
- the example processing platform 500 of FIG. 5 includes a processor 502 such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor.
- the example processing platform 500 of FIG. 5 includes memory (e.g., volatile memory, non-volatile memory) 504 accessible by the processor 502 (e.g., via a memory controller).
- the example processor 502 interacts with the memory 504 to obtain, for example, machine-readable instructions stored in the memory 504 corresponding to, for example, the operations represented by the flowcharts of this disclosure.
- machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 500 to provide access to the machine-readable instructions stored thereon.
- the example processing platform 500 of FIG. 5 also includes a network interface 506 to enable communication with other machines via, for example, one or more networks.
- the example network interface 506 includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s).
- processing platform 500 of FIG. 5 also includes input/output (I/O) interfaces 508 to enable receipt of user input and communication of output data to the user.
- logic circuit is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines.
- Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices.
- Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present).
- Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions.
- the above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged, or omitted.
- the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)).
- the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)).
- the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s).
- each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)).
- each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium,” and “machine-readable storage device” can be read to be implemented by a propagating signal.
- an element preceded by "comprises . . . a", "has . . . a", "includes . . . a", or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element.
- the terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein.
- the terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%.
- the term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically.
- a device or structure that is “configured” in a certain way is configured in at least that way but may also be configured in ways that are not listed.
Abstract
Description
- Retailers use a variety of shelf monitoring systems to track their inventory. Current shelf monitoring systems can either count products or identify products with good accuracy, but they do not perform both well. For example, smart shelves based on resistive, capacitive, or light sensors can know how many objects are on a shelf and where the items are positioned, but they do not have the ability to confidently state what specifically those items are.
- Conversely, shelf monitoring robots and fixed cross-aisle cameras periodically capture images of a retailer's shelves and use image recognition tools to identify what the stocked items are, but this only acquires product facings. They therefore cannot accurately tell how many items, if any, are actually on the shelf behind those first items. One of the counting systems from above could be deployed along with an object recognition system in an attempt to provide a complete arrangement, but this presents a number of system integration and cost issues.
- In an embodiment, the present invention is an apparatus for triggering object recognition and planogram generation. The apparatus may comprise a shelf, at least one sensor affixed to the shelf, at least one camera affixed to the shelf, and at least one server communicatively coupled to the at least one camera and the at least one sensor. The apparatus may be configured such that the at least one sensor is configured to detect that at least one threshold has been met, and the at least one sensor is further configured to trigger an image to be taken by the at least one camera; wherein the at least one camera is configured to capture at least one image based on the met threshold, and the at least one server is configured to identify at least one product identifier based on the at least one captured image.
- In another embodiment, the present invention is a method for triggering object recognition and planogram generation. The method may comprise detecting, by at least one sensor coupled to a shelf, that at least one threshold has been met; triggering, by the at least one sensor, at least one image to be captured by at least one camera coupled to the shelf; capturing, at the at least one camera, at least one image based on the met threshold; and identifying, by at least one server, at least one product identifier based on the at least one captured image.
- In yet another embodiment, the present invention is a tangible machine-readable medium comprising instructions for triggering object recognition and planogram generation. The instructions when executed, cause a machine to at least detect that at least one threshold has been met by at least one sensor; trigger at least one image to be captured by at least one camera coupled to the at least one sensor; capture at least one image based on the met threshold at the at least one camera; and identify at least one product identifier based on the at least one captured image by at least one server.
- The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
- FIG. 1 illustrates an example system for object recognition and planogram generation.
- FIG. 2 illustrates an example apparatus for object recognition and planogram generation.
- FIG. 3 illustrates example shelves comprising apparatuses for object recognition and planogram generation.
- FIG. 4 illustrates an example flow diagram for object recognition and planogram generation.
- FIG. 5 is a block diagram of an example logic circuit for implementing example methods and/or operations described herein.
- Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention.
- The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
- Systems, apparatuses, and methods for object recognition and planogram generation are disclosed herein. In particular, systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors are disclosed herein. Accordingly, the systems and methods disclosed herein offer retailers a better overall approach to tracking their inventory. The systems, apparatuses, and methods described herein may lead to cost savings and higher quality inventory management and planograms based on the more accurate object recognition and planogram generation.
- In one embodiment, an apparatus or system may comprise a camera and a smart shelf, where the camera is integrated into the front edge of the smart shelf and looks back at the items sitting on the shelf, resulting in a single piece of hardware that can both count and recognize the items on the shelf. This way, the retailer, for example, does not have to purchase and deploy both a robot and smart shelves, or be concerned about situations where there is no physical place to mount a cross-aisle camera to use in conjunction with smart shelves. In other embodiments, the camera and smart shelf may not be physically integrated into a single piece of hardware, but instead are logically integrated via network communication links.
- One self-contained piece of hardware may simplify the system design, cost structure, deployment, and maintenance. The challenge with this approach, however, is acquiring consistently good images for object recognition when the merchandise could be placed at a variety of distances from the camera. Achieving suitable focus both at especially close range and at several inches away is a problem.
- For example, in one embodiment, rather than taking a photo at an arbitrary time and then deciding if the image is useful (i.e., is there an object present, is it in focus, and is it at an appropriate distance?), the output of the smart shelf sensors can be used to determine if an object is appropriately positioned for a photo before exercising the camera. The camera would be focused only on the front inventory position near the shelf's edge, thereby simplifying the camera requirements and the object recognition processing as well as setting up clear bounds for a useful image. When the front position is empty, the system can continue to refer to the most recent image taken, and then once the position is occupied, the camera can be allowed to take another image. Since a successful implementation will require limiting power and communications, it is important that neither energy nor time is wasted on capturing and analyzing images that ultimately serve no purpose. This embodiment therefore serves as a valuable filter.
- In a more specific example, the smart shelf can determine when it is restocked based on the gross changes in the inventory it senses. This naturally is when the system can most benefit from knowing what specifically is now on the shelf, since it knows a change was detected at the time of restocking. Taking a photo after this restocking trigger is beneficial, as opposed to at an arbitrarily prescribed time when a retailer doesn't know if anything changed or if the photo will be useful. For example, cameras could be programmed to take a picture at 8:00 every morning, assuming that restocking was completed by then, but in reality, the first shelf may have been stocked at 7:58, the second shelf at 8:03, the third shelf at 8:05, and so on. The images taken at 8:00 for the second shelf onward would be outdated within minutes and would sit with this information for 24 hours until the next image is taken. More intelligent choices could be made if the timing of the photo were tied to the shelf sensing output. Additional criteria for when the camera could be triggered or authorized to take another image may include: 1) when the shelves are faced (i.e., pulling the merchandise forward to the shelf's edge, a.k.a. leveled or zoned), 2) when the number of items on the shelf meets a specified threshold, 3) when the change in the number of items on the shelf meets a specified threshold, or 4) when a certain percentage of the shelf area is populated. When the specified conditions are not met, the camera will wait to take the next photo. As used herein, "met" may mean that a particular threshold value has been crossed, meaning that a value has risen above the threshold value or fallen below it. In some cases, "met" may mean that a value tracked by the system is equal to the threshold value.
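The "met" semantics described above (a tracked value crossing a threshold in either direction, or equaling it) can be sketched as a small predicate. The function name and arguments below are illustrative assumptions.

```python
def threshold_met(previous: float, current: float, threshold: float) -> bool:
    """Illustrative reading of "met": the tracked value now equals the
    threshold, or crossed it (rose above it or fell below it)."""
    if current == threshold:
        return True
    rose_above = previous < threshold < current   # crossed going up
    fell_below = previous > threshold > current   # crossed going down
    return rose_above or fell_below

# A restocking event pushing an item count past a trigger level:
restocked = threshold_met(previous=3, current=12, threshold=10)   # True
# Steady inventory that never crosses the trigger level:
unchanged = threshold_met(previous=6, current=7, threshold=10)    # False
```

A sensor applying such a predicate to its readings would wait, as the text describes, until one of the configured conditions is met before authorizing the next photo.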
- As an added benefit, this arrangement can check if the actual shelf inventory is compliant with a formal planogram, and additionally, the smart shelf can also be used to automatically generate and update the formal planogram. This will allow store managers to digitally see what is actually on their shelves and where it's stocked. For example, when the retailer is out of stock on one SKU but has overstock for the SKU in the next shelf lane, store associates will likely put the overstocked item in the adjacent empty spot on the shelf. Retailers and consumer packaged goods companies prefer to have something on the shelves to sell instead of nothing, even if it's the “wrong” item per the formal planogram. With the automatic planogram updates made by the system described here however, the actual planogram would reflect reality, which is ultimately more useful. Additionally, the apparatuses, systems, and methods disclosed herein may be able to identify a plurality of different product identifiers and characteristics, such as, for example, product SKUs and product UPCs.
- In addition to the planogram updates, information displayed on electronic shelf labels (ESLs) can be generated or updated in the same fashion as the planogram is. In some embodiments, the ESLs may be created and/or updated at the same time as the planogram is created and/or updated. For example, when overstocked inventory is put in the adjacent lane, and the smart shelf sensor, in conjunction with the camera, update the planogram, this could then trigger ESL data for the respective SKUs affected in the planogram to be updated as well. This includes product names, SKUs, prices, etc. listed on the ESLs.
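The planogram-then-ESL update chain described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the catalog lookup and the ESL record fields (name, price) are assumed for the example.

```python
def apply_planogram_update(planogram, esls, shelf_id, new_skus, catalog):
    """Sketch: a planogram change for one shelf triggers ESL updates for
    the SKUs affected by that change, as described above."""
    old_skus = set(planogram.get(shelf_id, []))
    planogram[shelf_id] = list(new_skus)
    for sku in new_skus:                          # labels for SKUs now in the lane
        esls[(shelf_id, sku)] = {"name": catalog[sku]["name"],
                                 "price": catalog[sku]["price"]}
    for sku in old_skus - set(new_skus):          # labels for SKUs moved out
        esls.pop((shelf_id, sku), None)
    return planogram, esls

# Hypothetical catalog and starting state:
catalog = {"SKU-A": {"name": "Cola 12oz", "price": 1.99},
           "SKU-B": {"name": "Cola 20oz", "price": 2.49}}
planogram = {"lane1": ["SKU-A"]}
esls = {("lane1", "SKU-A"): {"name": "Cola 12oz", "price": 1.99}}
# Overstocked SKU-B is placed into lane1 where SKU-A ran out:
planogram, esls = apply_planogram_update(planogram, esls, "lane1", ["SKU-B"], catalog)
```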
- FIG. 1 illustrates an example system 100 for object recognition and planogram generation. In some embodiments, the system 100 may include additional or fewer components than are depicted in the figure.
- The example system 100 may comprise a shelf 102, a camera 104, a sensor 106, a network 108, network connections 110, and a server 112. The camera 104, sensor 106, and server 112 may be communicatively coupled to each other via the network 108 and network connections 110. As such, those components may be able to exchange information with each other relevant to object recognition and planogram generation.
- The shelf 102 may be part of a larger shelving unit comprising multiple individual shelves such as shelf 102. These shelves may be used to store products of a variety of dimensions. In some embodiments, the camera 104 and the sensor 106 are physically coupled to the shelf 102. In other embodiments, the camera 104 and sensor 106 may be manufactured as part of the shelf 102 itself, and in other embodiments the camera 104 and sensor 106 may be added to the shelf 102 after the shelf itself has been manufactured.
- In some embodiments, the camera 104 may comprise one individual camera. In other embodiments, the camera 104 may comprise multiple cameras able to capture images at a variety of positions and angles on the shelf 102. In some embodiments, the camera 104 may be configured to capture at least one image based on the met threshold and identify at least one product identifier based on the at least one captured image. In yet other embodiments, the camera 104 may be configured to take multiple images of the objects on the shelf 102 or of the shelf 102 itself.
- In other embodiments of the system 100, there may be multiple cameras per shelf 102 with space between them in order to appropriately cover all the product facings across the complete shelf 102. The camera-to-camera pitch may be based on the viewing angle of each camera 104. Accordingly, the plurality of cameras 104 may be communicatively coupled to each other to provide a more complete picture of the status of goods on an individual shelf 102, a section of shelves in a lane, the totality of shelves in an aisle, or even an entire store.
- In some embodiments, the server 112 may be configured to perform the identification itself, but in other embodiments, the server 112 may identify the at least one product identifier by communicating with software, applications, and/or databases located at the server 112. The product images in the databases used for product recognition may originate from a variety of sources. For example, the images could be provided by each product's supplier, by the retailer, or by a third party. The images may be taken manually, or there may be an automated setup that captures multiple images for each product and uploads them to the database. Additionally, the image data stored in the libraries may be collected via point of sale solutions that capture images of the products, along with UPC (barcode) data and/or EPC (RFID) data. For example, an in-lane or bioptic scanner may include a camera that records an image of the product while the barcode is read. To identify the at least one product identifier, the server 112 may be further configured to identify in the at least one captured image a product SKU, product UPC, or a combination thereof. Additionally, other product identifiers may be used for products and identifiable by the server 112, such as, for example, brand names, product class, product dimensions, as well as colors, shapes, outlines, features, patterns, and anything else in the product's appearance that visually distinguishes it. In some embodiments, the camera 104 may be capable of performing the identification itself without communication with the server 112, such that the camera 104 may be capable of performing all the same functionality listed above with respect to the server 112 in reference to performing the identification.
- The camera 104 may work in conjunction with the server 112 to identify the best types of images and the best conditions under which to take an image. This identification process may improve over time through the use of artificial intelligence and/or machine learning techniques to train the camera 104 to take better pictures. The training of the camera 104 may also include integrating the data from sensor 106 into the library of information that the camera 104, or other program, may be trained on.
- In some embodiments, the camera 104 in conjunction with the server 112 may utilize artificial intelligence and machine learning tactics to identify suitable images and the conditions best suited for taking images of the shelves and the objects on the shelves. In this way, the smart shelf, i.e., the systems, apparatuses, and methods described herein, may learn and improve over time as it is deployed, based on a library of images and the corresponding sensor data. More particularly, the camera 104 may be triggered to capture an image based on a threshold being detected by the sensor 106. In this way, the system 100 may know 1) whether or not the captured image is good based on how useful it is for the object recognition engine, which may be stored on the server 112, and 2) from the sensor 106, where the associated merchandise was located on the shelf 102 when the image was captured. If it is determined that images are more useful when the merchandise is positioned in a specific location, then the system 100 could adjust the thresholds that trigger the camera 104 to capture an image, improving future performance. The adjustments of the triggers and thresholds may be managed by software located at the server 112, camera 104, and/or sensor 106.
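The threshold-adjustment feedback described here might look like the following sketch, assuming captures are logged as (merchandise distance, recognition success) pairs. The step size, success target, and sample format are invented for illustration; the disclosure does not specify a particular learning rule.

```python
def adjust_capture_threshold(threshold_cm, samples, step_cm=0.5, target=0.8):
    """samples: (merchandise_distance_cm, recognition_succeeded) pairs.
    If too few captures prove useful for recognition, nudge the trigger
    distance toward where the successful captures were taken."""
    good = [d for d, ok in samples if ok]
    if not samples or not good or len(good) / len(samples) >= target:
        return threshold_cm                     # performing well, or nothing to learn from
    mean_good = sum(good) / len(good)           # mean distance of useful images
    if mean_good > threshold_cm:
        return threshold_cm + step_cm
    if mean_good < threshold_cm:
        return threshold_cm - step_cm
    return threshold_cm

# Recognition succeeded only for captures taken farther from the camera:
history = [(15.0, True), (14.0, True), (9.0, False), (8.0, False), (16.0, True)]
new_threshold = adjust_capture_threshold(10.0, history)   # nudged up to 10.5
```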
- For example, the system 100 may be counting items in a shelf's lane that turn out to be beverage cans, which are relatively short, so a suitable image could have the front can positioned closer to the camera 104. In another lane, the system 100 may be counting tall beverage bottles, and in order to get a quality image for successful object recognition, the front bottle needs to be further from the camera 104 to fit more of it in the camera's field of view. So, while the system 100 may start with pre-defined default parameters for acceptable merchandise locations on the shelf 102 relative to the camera 104, based on real-world interactions it may learn over time what works well for object recognition and what doesn't, and then adjust accordingly.
- In some embodiments, the server 112 may be further configured to generate a planogram for the shelf based on the at least one identified product identifier. The generation of the planogram may take into account other product information, such as characteristics or dimensions related to the product. Additionally, the server 112 may be further configured to update a planogram for the shelf based on the at least one identified product identifier. The created planogram, or updated planogram, may be transmitted to a user device (not shown) via the network 108 and server 112. At the user device, an individual working in the store where the shelf 102 is located may thus have a real-time picture of the inventory in the store. In other embodiments, the creation and updating of planograms may be handled by the camera 104, which may be capable of the same functionality listed above for the server with respect to creating and updating planograms. Similarly, the server 112 may be configured to generate or update an electronic shelf label for the shelf 102 based on the identified product identifier.
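A minimal sketch of planogram generation from identified product identifiers, assuming detections arrive as (lane, identifier) pairs; the output layout and field names are illustrative assumptions, not a disclosed data format.

```python
def generate_planogram(shelf_id, detections):
    """Sketch of planogram generation: group identified products by the
    lane position reported alongside each identifier."""
    lanes = {}
    for lane, identifier in detections:
        lanes.setdefault(lane, []).append(identifier)
    return {"shelf": shelf_id, "lanes": dict(sorted(lanes.items()))}

# Two facings of SKU-A detected in lane 0, one of SKU-B in lane 1:
plan = generate_planogram("shelf-102", [(0, "SKU-A"), (1, "SKU-B"), (0, "SKU-A")])
```

A structure like this could then be serialized and pushed to a user device so store staff see a near-real-time picture of the shelf.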
- In some embodiments, the sensor 106 may comprise one individual sensor or a sensor array. In other embodiments, the sensor 106 may comprise a variety of sensors or sensor arrays linked together. In one embodiment, the at least one sensor may be a resistive sensor, capacitive sensor, light sensor, or a combination thereof. In one embodiment, the sensor 106 may be configured to detect that at least one threshold has been met. Additionally, the sensor 106 may be further configured to trigger an image to be taken by the at least one camera. As such, the sensor 106 may be further configured to transmit a message to the at least one camera comprising information about the at least one threshold that has been met.
- In some embodiments, the at least one threshold may be a value representative of: whether objects on the at least one shelf are faced, whether the number of objects on the shelf meets a threshold, whether a change in the number of objects on the shelf meets a threshold, whether a certain percentage of an area of the shelf is occupied by objects, or a combination thereof. When objects are faced, the store employees may move the objects to the front edge of the shelf 102, which is an optimal location for the objects' images to be captured by the at least one camera 104. This may also be an optimal time for the camera 104 to capture an image, since the employees just spent time organizing and arranging the shelf 102 merchandise. When the number of objects on the shelf 102 meets a minimum threshold, this may indicate that the shelf 102 is full enough with objects such that having an updated image captured would be useful. When the number of objects falls below a minimum threshold, this may indicate a low inventory condition, and it may be beneficial to capture another image to identify the current objects left on the shelf 102. When the number of objects on the shelf 102 increases by a threshold quantity in a specific time period (i.e., the change in the number of objects), this may indicate a restocking event has taken place and may be the optimal time to capture an image. When the number of objects on the shelf 102 decreases by a threshold quantity in a specific time period, this may indicate other significant activity has taken place at the shelf 102, such as a high volume of shopping or even a theft event (i.e., a sweep). When the number of objects on the shelf 102 has neither increased nor decreased beyond a threshold quantity in a specific time period, this may indicate that very little to nothing has physically occurred to the objects on the shelf, so there is no need to consume time or energy to capture a new image. Also, the percentage of the shelf area occupied by objects may be used for a threshold, since object count may not be sufficient; there could be a larger number of physically smaller items and/or a smaller number of physically larger items on shelf 102, so a threshold that considers the occupied shelf space compared to the size of the shelf 102 itself may be needed. For example, a threshold could be set to only capture images when the shelf is at least 75% occupied.
- The above listed thresholds may be set by individuals working in the retail store where the shelf 102 is located or by individuals at a remote location, such as a corporate office.
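The shelf-area threshold discussed above can be sketched by modeling the sensor as a grid of occupancy cells. The grid model is an assumption made for illustration, and the 75% default is taken from the example in the text.

```python
def shelf_occupancy_pct(sensor_grid):
    """Percent of shelf area occupied, from a grid of per-cell occupancy
    readings (True = object present). Modeling the sensor 106 as a grid
    of point sensors is an illustrative assumption."""
    cells = [c for row in sensor_grid for c in row]
    return 100.0 * sum(cells) / len(cells)

def occupancy_threshold_met(sensor_grid, min_pct=75.0):
    """e.g. only authorize a capture when the shelf is >= 75% occupied."""
    return shelf_occupancy_pct(sensor_grid) >= min_pct

grid = [[True, True, True, False],
        [True, True, True, True]]   # 7 of 8 cells occupied -> 87.5%
```

An area-based check like this captures the text's point that raw object count alone cannot distinguish many small items from a few large ones.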
- The network 108 may be a local network that is constrained to one physical location, such as, for example, an individual store. In other embodiments, the network 108 may be the Internet, and/or a combination of a local network connected to the Internet. The other components of the system 100 may connect to the network 108 via the network connections 110. These connections may be wired or wireless in nature. They may also utilize short-range wireless communications, such as, for example, Bluetooth or NFC.
- The server 112 may be local or remote to the other components in the system 100. For example, if the server is local to the other components, it may be housed in the same physical building as those components. Conversely, if the server is remote, it may be housed at a different physical building from the other components of the system and communicate with them via the network 108.
- FIG. 2 illustrates an example apparatus 200 for object recognition and planogram generation. In some embodiments, the apparatus 200 may include additional or fewer components than are depicted in the figure.
- In the embodiment shown in FIG. 2, the apparatus 200 includes a shelf 202, a camera 204, and a sensor 206. Also shown in the figure are a plurality of the same product 208 and a space 210 where product 208 could fit on the shelf 202. The apparatus 200 shown in FIG. 2 may be the same kind of apparatus shown in FIG. 1 comprising the shelf 102, camera 104, and sensor 106. Accordingly, the apparatus 200 in FIG. 2 may share the same characteristics and capabilities as the shelf 102, camera 104, and sensor 106 in FIG. 1. In other embodiments, the apparatus 200 in FIG. 2 may be configured differently from the shelf 102, camera 104, and sensor 106 of FIG. 1.
- FIG. 2 depicts a plurality of products 208 that are at rest on the shelf 202. The shelf may be in a retail store, and the products 208 on the shelf are part of the inventory of that store. The camera 204 and sensor 206 may be capable of detecting changes related to the products 208 depicted in the figure. For example, the sensor 206 may detect that there is an absence of product 208 in the space 210. Depending on the sensor 206, it could, for example, detect that the level of pressure applied at the space 210 has fallen below a threshold level set by the retail store manager, and then the sensor 206 could generate and transmit a message to the camera 204. In other sensor 206 examples, the sensor 206 could detect a change in the capacitance or the light intensity at the space 210 beyond a set threshold level. Generally speaking, the sensor 206 may detect object(s) on the shelf and count them, coordinating with the camera 204 on when to capture an image, which is then used to identify what those objects are.
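The pressure-sensor trigger described here can be sketched as follows. The JSON message format sent to the camera is an illustrative assumption, not a disclosed protocol; any transport that carries information about the met threshold would do.

```python
import json

def on_pressure_reading(reading, threshold, send_to_camera):
    """Sketch of the sensor-side trigger: when pressure at the space
    falls below a store-set threshold (suggesting the front item was
    removed), send the camera a message describing the met threshold."""
    if reading < threshold:
        send_to_camera(json.dumps({"event": "threshold_met",
                                   "kind": "pressure_below",
                                   "reading": reading,
                                   "threshold": threshold}))
        return True
    return False

# Collect outgoing messages instead of a real camera link:
outbox = []
on_pressure_reading(reading=0.2, threshold=1.0, send_to_camera=outbox.append)
```

On receipt, the camera could parse the message and decide whether and where to capture, as the surrounding text describes.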
camera 204 may receive the message and capture an image of thespace 210, and in some cases the surroundingshelf 202 space. Based on this image thecamera 204 may be able to identify theproduct 208 that is supposed to be in thatspace 210, or in other embodiments thecamera 204 may be able to identify a product identifier related to the types ofproducts 208 that should be located inspace 210. -
FIG. 3 illustratesexample shelves 202 comprising apparatuses for object recognition and planogram generation. Eachshelf 202 inFIG. 3 may include acamera 204 andsensor 206 that share the same characteristics and capabilities of theapparatus 200 depicted inFIG. 2 .FIG. 3 depicts another configuration of products and shelves in, for example, a retail store. -
FIG. 3 depicts a variety of different products 302-312. Each shelf 202 in FIG. 3 may include two different types of products as shown in the figure. The sensors 206 and cameras 204 for each shelf may be capable of detecting each product type that is located on each shelf, as well as changes that are relevant to each product. For example, the sensor 206 on the top shelf 202 may detect an accurate count for products 302 and 304, should the readings of that sensor 206 satisfy a threshold, giving permission to the top camera 204 to capture an image or images of the objects on this shelf 202, especially the item in the center (from left-to-right) at the front edge of the shelf 202. These images are then used to identify which products are products 302 and which are products 304, and the planogram that details the specific products and their quantities is updated. In another example, the sensor 206 on the middle shelf 202 may detect an empty space at 314. This condition may meet a threshold that triggers the camera 204 to capture an image of the shelf. The sensor 206 on the bottom shelf 202 may detect the low product count in space 316, allowing the camera 204 on the bottom shelf to capture an image or images of the two product facings remaining in that space (e.g., one product 310 and one product 312). These images may then be used to confirm the products' identities. With the knowledge of what these products are and that both are low in inventory in their respective shelf locations, an employee can restock this bottom shelf 202 with the products 310 and 312. The cameras 204 and sensors 206 may be communicatively linked to a server (not pictured) that may assist in the updating and creation of planograms based on events that are relevant to the shown products 302-312. -
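The per-shelf counting just described can be illustrated with a small sketch that flags empty or low-count spaces for the cameras to confirm. The counts, space labels, and restock threshold below are hypothetical values chosen to echo the FIG. 3 scenario, not data from the disclosure.

```python
# Hypothetical per-space product counts reported by the shelf sensors 206.
# Keys are shelf names; inner keys are product reference labels.
shelf_counts = {
    "top":    {"302": 4, "304": 3},
    "middle": {"306": 5, "308": 0},   # empty space (cf. space 314)
    "bottom": {"310": 1, "312": 1},   # low facings (cf. space 316)
}

LOW_STOCK = 2  # illustrative restock threshold


def restock_report(counts, low=LOW_STOCK):
    """Return (shelf, product, count) tuples the cameras should confirm
    via image capture before an employee is sent to restock."""
    return [
        (shelf, product, n)
        for shelf, spaces in counts.items()
        for product, n in spaces.items()
        if n < low
    ]


report = restock_report(shelf_counts)
# e.g. [('middle', '308', 0), ('bottom', '310', 1), ('bottom', '312', 1)]
```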
FIG. 4 illustrates an example flow diagram 400 for object recognition and planogram generation. One or more steps of the method 400 may be implemented as a set of instructions on a computer-readable memory and executable on one or more processors. The example flow diagram may utilize a shelf, at least one sensor, at least one camera, and at least one server. - In one embodiment, the
method 400 may comprise detecting, by at least one sensor coupled to a shelf, that at least one threshold has been met (block 402); triggering, by the at least one sensor, at least one image to be captured by at least one camera coupled to the shelf (block 404); capturing, at the at least one camera, at least one image based on the met threshold (block 406); and identifying, by at least one server, at least one product identifier based on the at least one captured image (block 408). - As described herein, in some embodiments of the
method 400, the at least one threshold in (block 402) may be a value representative of: when objects on the at least one shelf are faced, when the number of objects on the shelf meets a threshold, when a change in the number of objects on the shelf meets a threshold, when a certain percentage of an area of the shelf is occupied by objects, or a combination thereof. - In some embodiments of the
method 400, triggering at least one image to be captured (block 404) further comprises transmitting, at the at least one sensor, a message to the at least one camera comprising information about the at least one threshold that has been met. - In some embodiments of the
method 400, the camera capturing an image (block 406) may include the camera capturing a plurality of images. For example, the camera may capture a time series of images and select the best image representative of the objects on the shelf and/or the change to the shelf. This image may ultimately be presented to a user of a user device that is attempting to access an updated, or newly created, planogram related to the shelf. - In some embodiments of the
method 400, identifying the at least one product identifier (block 408) further comprises identifying, at the server, in the at least one captured image a product SKU, a product UPC, or a combination thereof. Additionally, other product identifiers such as those listed herein may be used and may be able to be identified by the server. - The
method 400 may further comprise generating or updating, at the at least one server, a planogram for the shelf based on the at least one identified product identifier. Similarly, the method 400 may further comprise generating or updating, at the at least one server, the information displayed on at least one electronic shelf label for the shelf based on the at least one identified product identifier. -
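Putting blocks 402-408 together with the planogram update, the method 400 described above might be sketched as follows. Every function body here is a stub standing in for real sensor, camera, and server behavior; the SKU/UPC values, sharpness scores, and shelf identifier are invented for illustration only.

```python
# Minimal end-to-end sketch of method 400: a sensor event triggers one or more
# captures, the "best" capture is sent to a server-side identifier, and the
# planogram entry for the shelf is updated.

def threshold_met(object_count, expected_count):
    # Block 402: e.g., a change in the number of objects on the shelf.
    return object_count != expected_count


def capture_images(n=3):
    # Blocks 404/406: capture a short time series; each "image" is a stub
    # carrying a fake sharpness score a real system might compute.
    return [{"frame": i, "sharpness": s} for i, s in enumerate([0.6, 0.9, 0.7][:n])]


def best_image(images):
    # Select the image most representative of the shelf (here: sharpest).
    return max(images, key=lambda img: img["sharpness"])


def identify(image):
    # Block 408: a real server would run object recognition and return a
    # product identifier such as a SKU or UPC; here the result is hard-coded.
    return {"sku": "SKU-208", "upc": "012345678905"}


def update_planogram(planogram, shelf_id, identifier, count):
    # Server-side planogram (or electronic shelf label) update.
    planogram[shelf_id] = {"identifier": identifier, "count": count}
    return planogram


planogram = {}
observed, expected = 2, 3
if threshold_met(observed, expected):          # block 402 triggers block 404
    img = best_image(capture_images())         # blocks 404/406
    ident = identify(img)                      # block 408
    update_planogram(planogram, "shelf-202", ident, observed)
```

The same structure accommodates the other threshold types listed above (facing state, absolute count, occupied area) by swapping the predicate in `threshold_met`.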
FIG. 5 is a block diagram representative of an example logic circuit capable of implementing, for example, one or more components of the example logic circuit of FIG. 5 or, more generally, the example processors included in the camera 104, sensor 106, and server 112 of FIG. 1, as well as the cameras 204 and sensors 206 depicted in FIGS. 2-3. The example logic circuit of FIG. 5 is a processing platform 500 capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs). - The
example processing platform 500 of FIG. 5 includes a processor 502 such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor. The example processing platform 500 of FIG. 5 includes memory (e.g., volatile memory, non-volatile memory) 504 accessible by the processor 502 (e.g., via a memory controller). The example processor 502 interacts with the memory 504 to obtain, for example, machine-readable instructions stored in the memory 504 corresponding to, for example, the operations represented by the flowcharts of this disclosure. Additionally or alternatively, machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 500 to provide access to the machine-readable instructions stored thereon. - The
example processing platform 500 of FIG. 5 also includes a network interface 506 to enable communication with other machines via, for example, one or more networks. The example network interface 506 includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s). - The example
processing platform 500 of FIG. 5 also includes input/output (I/O) interfaces 508 to enable receipt of user input and communication of output data to the user. - The above description refers to a block diagram of the accompanying drawings. Alternative implementations of the example represented by the block diagram include one or more additional or alternative elements, processes, and/or devices. Additionally or alternatively, one or more of the example blocks of the diagram may be combined, divided, re-arranged, or omitted. Components represented by the blocks of the diagram are implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. In some examples, at least one of the components represented by the blocks is implemented by a logic circuit. As used herein, the term "logic circuit" is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines. Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present).
Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. The above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged, or omitted. In some examples, the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)). In some examples, the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)). In some examples, the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s).
- As used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)). Further, as used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium,” and “machine-readable storage device” can be read to be implemented by a propagating signal.
- In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive and should instead be understood as potentially combinable if such combinations are permissive in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations.
- The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims, including any amendments made during the pendency of this application and all equivalents of those claims as issued.
- Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, or contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a," "has . . . a," "includes . . . a," or "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially," "essentially," "approximately," "about," or any other version thereof are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way but may also be configured in ways that are not listed.
- The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/737,717 US20210209550A1 (en) | 2020-01-08 | 2020-01-08 | Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors |
PCT/US2021/012274 WO2021141962A1 (en) | 2020-01-08 | 2021-01-06 | Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/737,717 US20210209550A1 (en) | 2020-01-08 | 2020-01-08 | Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210209550A1 true US20210209550A1 (en) | 2021-07-08 |
Family
ID=76655290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/737,717 Pending US20210209550A1 (en) | 2020-01-08 | 2020-01-08 | Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors |
Country Status (2)
Country | Link |
---|---|
US (1) | US20210209550A1 (en) |
WO (1) | WO2021141962A1 (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040099741A1 (en) * | 2002-11-26 | 2004-05-27 | International Business Machines Corporation | System and method for selective processing of digital images |
US20140249916A1 (en) * | 2013-03-04 | 2014-09-04 | Capital One Financial Corporation | System and method for providing mobile grocery budget application |
US20140358656A1 (en) * | 2013-05-28 | 2014-12-04 | Capital One Financial Corporation | System and method providing flow-through private label card acquisition |
US20180005035A1 (en) * | 2016-05-19 | 2018-01-04 | Simbe Robotics, Inc. | Method for automatically generating planograms of shelving structures within a store |
US20190034864A1 (en) * | 2017-07-25 | 2019-01-31 | Bossa Nova Robotics Ip, Inc. | Data Reduction in a Bar Code Reading Robot Shelf Monitoring System |
US20190215424A1 (en) * | 2018-01-10 | 2019-07-11 | Trax Technologies Solutions Pte Ltd. | Camera configured to be mounted to store shelf |
US20200151692A1 (en) * | 2018-04-18 | 2020-05-14 | Sbot Technologies, Inc. d/b/a Caper Inc. | Systems and methods for training data generation for object identification and self-checkout anti-theft |
US20200232797A1 (en) * | 2019-01-23 | 2020-07-23 | Hewlett Packard Enterprise Development Lp | Drone-based scanning for location-based services |
US20210065281A1 (en) * | 2019-08-29 | 2021-03-04 | Ncr Corporation | Store planogram |
US20220116737A1 (en) * | 2019-07-23 | 2022-04-14 | 1904038 Alberta Ltd. O/A Smart Access | Methods and systems for providing context based information |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040034581A1 (en) * | 1998-11-18 | 2004-02-19 | Visible Inventory, Inc. | Inventory control and communication system |
US8189855B2 (en) * | 2007-08-31 | 2012-05-29 | Accenture Global Services Limited | Planogram extraction based on image processing |
US8800869B2 (en) * | 2012-05-23 | 2014-08-12 | Opticon Sensors Europe B.V. | Inventory control using electronic shelf label systems |
US10956856B2 (en) * | 2015-01-23 | 2021-03-23 | Samsung Electronics Co., Ltd. | Object recognition for a storage structure |
US10521914B2 (en) * | 2017-09-07 | 2019-12-31 | Symbol Technologies, Llc | Multi-sensor object recognition system and method |
- 2020-01-08 US US16/737,717 patent/US20210209550A1/en active Pending
- 2021-01-06 WO PCT/US2021/012274 patent/WO2021141962A1/en active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230008898A1 (en) * | 2021-07-06 | 2023-01-12 | Netronix, Inc. | Message updating system for electronic label |
US11817018B2 (en) * | 2021-07-06 | 2023-11-14 | Netronix, Inc. | Message updating system for electronic label |
WO2023061599A1 (en) * | 2021-10-14 | 2023-04-20 | Ses-Imagotag Sa | Method for logically linking an electronic display unit to a product |
Also Published As
Publication number | Publication date |
---|---|
WO2021141962A1 (en) | 2021-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11727479B2 (en) | Computer vision system and method for automatic checkout | |
US11455869B2 (en) | Updating shopping list based on analysis of images | |
US20190236530A1 (en) | Product inventorying using image differences | |
JP6791534B2 (en) | Product management device, product management method and program | |
JP2020521199A (en) | How to track in-store inventory levels | |
US20150220783A1 (en) | Method and system for semi-automated venue monitoring | |
CN107995458B (en) | Method and device for shooting packaging process | |
US20210209550A1 (en) | Systems, apparatuses, and methods for triggering object recognition and planogram generation via shelf sensors | |
US10628792B2 (en) | Systems and methods for monitoring and restocking merchandise | |
CN113468914A (en) | Method, device and equipment for determining purity of commodities | |
WO2021216329A1 (en) | Methods and systems for monitoring on-shelf inventory and detecting out of stock events | |
US20180053145A1 (en) | Systems and methods for determining stocking locations of products having more than one stocking location on a sales floor | |
US20210342805A1 (en) | System and method for identifying grab-and-go transactions in a cashierless store | |
WO2022081518A2 (en) | Methods and systems for retail environments | |
CN113178032A (en) | Video processing method, system and storage medium | |
CN108629386B (en) | Goods shelf capable of automatically identifying commodity purchasing behavior | |
US20160224865A9 (en) | Methods and systems for enabling vision based inventory management | |
US20230274226A1 (en) | Retail shelf image processing and inventory tracking system | |
US20230274227A1 (en) | Retail shelf image processing and inventory tracking system | |
US20230274410A1 (en) | Retail shelf image processing and inventory tracking system | |
US20230359983A1 (en) | System and method for tracking wine in a wine-cellar and monitoring inventory | |
TWM599199U (en) | Article detection system of article-selection vending machine | |
CN111768211A (en) | Method, device and equipment for preventing goods fleeing of E-commerce commodities | |
CN106599982A (en) | Dynamic counting method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ZEBRA TECHNOLOGIES CORPORATION, ILLINOIS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BELLOWS, DAVID;WULFF, THOMAS E.;REEL/FRAME:052724/0087 Effective date: 20200131 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK Free format text: SECURITY INTEREST;ASSIGNORS:ZEBRA TECHNOLOGIES CORPORATION;LASER BAND, LLC;TEMPTIME CORPORATION;REEL/FRAME:053841/0212 Effective date: 20200901 |
|
AS | Assignment |
Owner name: ZEBRA TECHNOLOGIES CORPORATION, ILLINOIS Free format text: RELEASE OF SECURITY INTEREST - 364 - DAY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:056036/0590 Effective date: 20210225 Owner name: LASER BAND, LLC, ILLINOIS Free format text: RELEASE OF SECURITY INTEREST - 364 - DAY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:056036/0590 Effective date: 20210225 Owner name: TEMPTIME CORPORATION, NEW JERSEY Free format text: RELEASE OF SECURITY INTEREST - 364 - DAY;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:056036/0590 Effective date: 20210225 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., NEW YORK Free format text: SECURITY INTEREST;ASSIGNOR:ZEBRA TECHNOLOGIES CORPORATION;REEL/FRAME:056471/0906 Effective date: 20210331 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |