WO2023058025A1 - Identification of objects using user-wearable devices - Google Patents
Identification of objects using user-wearable devices
- Publication number: WO2023058025A1 (PCT/IL2022/051064)
- Authority: WO — WIPO (PCT)
- Prior art keywords: identifier, action, sensor, input, hand
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- the present disclosure relates to wearable devices in general, and to the identification of items captured using wearable devices, in particular.
- One exemplary embodiment of the disclosed subject matter is a method comprising: obtaining a visual input from a sensor located on a wearable device, the wearable device is worn by a subject on a hand of the subject, the sensor is configured to be placed in a location and orientation enabling monitoring of hand activity of the subject, wherein the visual input comprises at least a portion of an object, wherein the object is identified by an identifier, wherein the identifier is not extractable from the visual input; obtaining a second sensor input associated with the visual input; automatically extracting the identifier of the object based on the second sensor input, whereby identifying the object in the visual input; automatically determining that the subject is performing a hand-based action on the object, the hand-based action is automatically identified based on the visual input; and based on said automatically determining that the subject performed the hand-based action, performing a responsive action.
- the responsive action comprises recording in a digital record a log event, the log event indicating that the hand-based action was performed at a timestamp by the subject with respect to the object.
- the responsive action comprises updating a digital mapping of objects to locations, wherein said updating updates a location of a digital representation of the object from a first location to a second location.
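- As a non-limiting illustration (not part of the patent text), the two responsive actions above, recording a log event and updating a digital mapping of objects to locations, could be sketched as follows; the names LogEvent, ObjectRegistry, and the example identifiers are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class LogEvent:
    """A single record of a hand-based action performed on an identified object."""
    subject_id: str
    object_id: str
    action: str          # e.g., "pick", "place", "return"
    timestamp: float

@dataclass
class ObjectRegistry:
    """Digital record of events plus a mapping of object identifiers to locations."""
    events: list = field(default_factory=list)
    locations: dict = field(default_factory=dict)   # object_id -> location label

    def record_action(self, subject_id, object_id, action, new_location=None):
        # Log the event with a timestamp, as described for the responsive action.
        self.events.append(LogEvent(subject_id, object_id, action, time.time()))
        # Optionally move the digital representation of the object to a new location.
        if new_location is not None:
            self.locations[object_id] = new_location

# Example: a picker moves an item from a shelf to a tote.
registry = ObjectRegistry(locations={"sku-123": "shelf-A4"})
registry.record_action("picker-7", "sku-123", "place", new_location="tote-55")
print(registry.locations)  # {'sku-123': 'tote-55'}
```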
- the second sensor input is a second visual input obtained from the sensor located on the wearable device.
- the second visual input is obtained prior to obtaining the first visual input, whereby the object is identified before the hand-based action is completed by the subject.
- the second visual input is obtained after the first visual input is obtained, whereby the object is not identified during performance of the hand-based action and is identified after the hand-based action is completed.
- the first visual input and the second visual input are obtained during a timeframe, wherein during the timeframe the object is continuously observable by the sensor, at least in part, whereby identification in the second visual input can be attributed to the object appearing in the first visual input.
- said automatically extracting the identifier of the object comprises determining that the subject is continuously interacting with the object using the hand of the subject during the entirety of the timeframe, wherein said determining that the subject is continuously interacting with the object during the entirety of the timeframe is performed based on visual inputs obtained from the sensor throughout the timeframe.
- said automatically extracting the identifier of the object based on the second sensor input comprises: identifying a first portion of the identifier based on the visual input; identifying a second portion of the identifier based on the second visual input; and determining the identifier of the object based on a combination of the first and second portions of the identifier.
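- A minimal sketch of the combination step described above, assuming partial reads of a fixed-length identifier are represented as strings with obscured characters marked by "?", is given below; the function name and the marker convention are illustrative assumptions only.

```python
from typing import Optional

def merge_partial_reads(read_a: str, read_b: str, unknown: str = "?") -> Optional[str]:
    """Combine two partial reads of the same fixed-length identifier.

    Each read marks characters that were obscured in that frame with `unknown`.
    Returns the merged identifier, or None if the reads conflict or if a
    character is missing from both reads.
    """
    if len(read_a) != len(read_b):
        return None
    merged = []
    for ca, cb in zip(read_a, read_b):
        if ca != unknown and cb != unknown and ca != cb:
            return None          # conflicting observations, cannot merge
        ch = ca if ca != unknown else cb
        if ch == unknown:
            return None          # still obscured in both frames
        merged.append(ch)
    return "".join(merged)

# The first frame shows the left half of the digits, a later frame the right half.
print(merge_partial_reads("729000??????", "??????134567"))  # -> "729000134567"
```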
- the identifier is not directly extractable from the second sensor input, wherein said automatically extracting the identifier of the object based on the second sensor input comprises: obtaining a set of potential local identifiers, wherein the set of potential local identifiers comprises all identifiers that can be potentially observed by the wearable device when the visual input is obtained by the sensor, wherein the set of potential local identifiers is a subset of all identifiers of objects that are observable by the wearable device; and ruling out from the set of potential local identifiers all local identifiers except for the identifier, wherein said ruling out is performed based on the second sensor input.
- the set of potential local identifiers is obtained from a second visual input obtained from the sensor located on the wearable device; wherein the second sensor input is a non-visual sensor input.
- the second sensor data comprises motion sensor data.
- said ruling out comprises: automatically determining, based on the second sensor data, a movement vector of the hand of the subject; and determining that a local identifier is the identifier of the object based on the movement vector being directed towards an item associated with the local identifier.
- the second sensor is configured to simultaneously monitor multiple objects, each of which is associated with a different identifier; wherein said obtaining the set of potential local identifiers comprises: identifying at least a first and a second local identifier in the second sensor input; and wherein said ruling out comprises: determining that the hand-based action is not performed on an object associated with the first local identifier; and determining that the second local identifier is the identifier of the object.
- said determining that the hand-based action is not performed on an object associated with the first local identifier comprises: extracting from the visual input a visual feature of the object; determining that the object associated with the first local identifier does not have the visual feature; and determining that the object associated with the second identifier has the visual feature.
- said determining that the hand-based action is not performed on an object associated with the first local identifier is performed based on a hand movement vector of the subject with respect to objects associated with the first or the second local identifiers.
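- The ruling-out steps above could, for example, rely on a hand movement vector compared against candidate item positions. The following sketch assumes 2D positions and a cosine-alignment threshold; all names and thresholds are hypothetical.

```python
import math
from typing import Dict, Optional, Tuple

Vec = Tuple[float, float]

def _cosine(a: Vec, b: Vec) -> float:
    na, nb = math.hypot(*a), math.hypot(*b)
    if na == 0 or nb == 0:
        return -1.0
    return (a[0] * b[0] + a[1] * b[1]) / (na * nb)

def resolve_identifier(hand_pos: Vec,
                       movement_vector: Vec,
                       candidates: Dict[str, Vec],
                       min_alignment: float = 0.8) -> Optional[str]:
    """Rule out local identifiers whose items the hand is not moving towards.

    `candidates` maps each potential local identifier to the position of its
    associated item (e.g., taken from an earlier shelf-level frame). The
    identifier whose item best aligns with the hand movement vector is kept;
    all others are ruled out. Returns None if no candidate is well aligned.
    """
    best_id, best_score = None, min_alignment
    for identifier, item_pos in candidates.items():
        towards_item = (item_pos[0] - hand_pos[0], item_pos[1] - hand_pos[1])
        score = _cosine(movement_vector, towards_item)
        if score > best_score:
            best_id, best_score = identifier, score
    return best_id

# The hand moves to the right; only the identifier of the item to the right survives.
print(resolve_identifier((0, 0), (1, 0), {"milk-1l": (5, 0), "juice-1l": (-4, 1)}))
```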
- the second sensor input is obtained from a second sensor different than the sensor, wherein the second sensor is characterized at least by one of: the second sensor is not located on the wearable device; and the second sensor is a non-visual sensor.
- the second sensor input is a second visual input obtained from the sensor of the wearable device, wherein said automatically extracting the identifier of the object based on the second sensor input comprises: automatically identifying, based on the second visual input, that the user utilized a second device; and determining the identifier based on input that is obtained from the second device within a timeframe from a time during which the visual input is obtained.
- the subject is a picker tasked with picking items to fulfill an order of a customer.
- the method further comprises: identifying that the object is associated with the order of the customer; wherein said automatically determining that the subject is performing a hand-based action on the object comprises: identifying that the picker picked up the object and placed the object in a tote associated with the order of the customer; and wherein the responsive action is a fulfillment-related action.
- the method further comprises: identifying a mismatch between the object and a list of items in the order of the customer; wherein said automatically determining that the subject is performing a hand-based action on the object comprises: identifying that the picker picked up the object and placed the object in a tote associated with the order of the customer; and wherein the responsive action comprises issuing an alert to the picker indicating the mismatch.
- the responsive action comprises instructing the subject to perform a second hand-based action, wherein the second hand-based action is an opposite action of the hand-based action on the object, whereby undoing the hand-based action.
- the second hand-based action is a returning action, wherein the responsive action comprises instructing the subject to refrain from placing the object in the tote associated with the order of the customer.
- the responsive action comprises instructing the picker to perform a picking action, wherein said instructing the picker comprises visually indicating to the subject a picking location and a placement location of the object.
- the method further comprises: identifying a second object in the visual input, wherein said identifying the second object comprises automatically extracting a second identifier of the second object based on the second sensor input, whereby identifying the second object in the visual input; wherein the responsive action comprises instructing the picker to pick the second object, wherein said instructing comprises visually indicating a location of the second object.
- said visually indicating comprises pointing at the second object using a laser pointer.
- said identifying the second object comprises: identifying a plurality of objects in the visual input; and selecting the second object from the plurality of objects, wherein the second object is selected based on properties of the object.
- the sensor is a camera, wherein the wearable device comprises a barcode reader, wherein the identifier is a barcode identifying objects of a predetermined type, wherein the second sensor input is obtained from the barcode reader when the barcode reader is utilized to read the barcode on the object, wherein the visual input and the second sensor input are obtained within a timeframe of a predetermined maximal duration.
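- One possible reading of the timeframe constraint above is a simple time-window match between barcode reads and visual frames, as sketched below; the two-second window and the function name are illustrative assumptions.

```python
from typing import Iterable, Optional, Tuple

def match_scan_to_frame(frame_time: float,
                        scans: Iterable[Tuple[float, str]],
                        max_gap_seconds: float = 2.0) -> Optional[str]:
    """Attribute a barcode read to a visual frame if both fall within a
    predetermined maximal duration of each other.

    `scans` is an iterable of (timestamp, barcode) pairs from the reader.
    Returns the barcode closest in time to the frame, or None if no read
    falls inside the window.
    """
    best: Optional[Tuple[float, str]] = None
    for scan_time, barcode in scans:
        gap = abs(scan_time - frame_time)
        if gap <= max_gap_seconds and (best is None or gap < best[0]):
            best = (gap, barcode)
    return best[1] if best else None

# A frame captured at t=100.4s is matched to the scan at t=100.9s.
print(match_scan_to_frame(100.4, [(95.0, "7290001"), (100.9, "7290002")]))
```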
- a plurality of objects of a same type as the object are identified using the identifier, whereby the identifier is a non-unique item identifier and a unique object-type identifier.
- the responsive action comprises instructing the subject to perform a second hand-based action.
- the method further comprises: obtaining a third sensor input associated with the visual input; automatically determining based on the third sensor input that the identifier automatically extracted based on the second sensor input does not match the object identified in the visual input; and wherein the responsive action comprises issuing an alert to the subject indicative of a misidentification.
- said extracting the identifier of the object based on the second sensor input comprises: automatically determining that the subject is performing a second hand-based action on the object in the second sensor input; and determining that the hand-based action is a continuation of the second hand-based action.
- said extracting the identifier of the object based on the second sensor input comprises: identifying a second object in the second sensor input having a similarity measurement to the object above a predetermined threshold, wherein the second object is configured to be identified by a second identifier; extracting the second identifier of the second object; and determining the identifier of the object based on the second identifier.
- the similarity measurement is determined based on at least one visual feature of the object.
- Another exemplary embodiment of the disclosed subject matter is a computerized apparatus having a processor, the processor being adapted to perform the steps of: obtaining a visual input from a sensor located on a wearable device, the wearable device is worn by a subject on a hand of the subject, the sensor is configured to be placed in a location and orientation enabling monitoring of hand activity of the subject, wherein the visual input comprises at least a portion of an object, wherein the object is identified by an identifier, wherein the identifier is not extractable from the visual input; obtaining a second sensor input associated with the visual input; automatically extracting the identifier of the object based on the second sensor input, whereby identifying the object in the visual input; automatically determining that the subject is performing a hand-based action on the object, the hand-based action is automatically identified based on the visual input; and based on said automatically determining that the subject performed the hand-based action, performing a responsive action.
- Yet another exemplary embodiment of the disclosed subject matter is a computer program product comprising a non-transitory computer readable storage medium retaining program instructions, which program instructions when read by a processor, cause the processor to perform a method comprising: obtaining a visual input from a sensor located on a wearable device, the wearable device is worn by a subject on a hand of the subject, the sensor is configured to be placed in a location and orientation enabling monitoring of hand activity of the subject, wherein the visual input comprises at least a portion of an object, wherein the object is identified by an identifier, wherein the identifier is not extractable from the visual input; obtaining a second sensor input associated with the visual input; automatically extracting the identifier of the object based on the second sensor input, whereby identifying the object in the visual input; automatically determining that the subject is performing a hand-based action on the object, the hand-based action is automatically identified based on the visual input; and based on said automatically determining that the subject performed the hand-based action, performing a responsive action.
- Figure 1 shows a schematic illustration of a monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- Figures 2A-2C show flowchart diagrams of methods, in accordance with some exemplary embodiments of the disclosed subject matter.
- Figures 3A-3C show schematic illustrations of visual inputs provided by a hand action monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- Figure 4 shows a block diagram of a system, in accordance with some exemplary embodiments of the disclosed subject matter.
- One technical problem dealt with by the disclosed subject matter is to enable accurate identification of objects that a human subject is performing an action thereon, without affecting performance of such actions, and without requiring the human subject to perform additional actions for identification thereof.
- the object may be an item or a component of an item, that has a unique identifier, an identifier of a type of such item, or the like.
- the identifier may be a barcode, a Quick Response (QR) code, any combination of letters or numbers, a unique image or shape, visual features extractable from the visual appearance of the item, a combination of colors or colored shapes, icon, Radio-frequency identification (RFID) identifier, any identifier capable of uniquely identifying an item or a set of items, any identifier capable of separating or differentiating an item or a set of items from other items or sets of items, or the like.
- the identifier may be located on the object, printed thereon, attached to the object, stamped on the object, located on a surface of the object, embedded in the object, extractable from the image of the object, or the like.
- the identifier may be located nearby the object, in a location, or on another object associated with the object, such as on a shelf the object is located on, on a similar object, on a holder of the object, on a container of the object, or the like.
- the identifier may not be readable to humans.
- the identifier may not necessarily be added to the item on purpose.
- an identifier may be determined using deep learning techniques to allow future identification of the type of object in future images. Such identification may be determined using an inherent identifier determined by the deep learning technique, which may not be useful for a human and may not be defined on purpose by humans.
- the identification of the object may comprise capturing, recognizing, or identifying the object. Additionally or alternatively, the disclosed subject matter may be configured to identify actions performed on objects. In some exemplary embodiments, the human subject may perform natural actions, incidental actions, intentional actions, or the like, on one or more objects. The actions may be performed by the hand of the human subject, using instruments, tools, or the like. In some exemplary embodiments, the actions may be actions that are not intended to convey instructions to a computer device (e.g., not gestures that are translated into commands), but rather to operate, manipulate or otherwise interact with the object.
- the disclosed subject matter may be configured to identify a context of an item, such as location in which the item is present.
- the context may be determined using a background-foreground distinction. In case the item is in the foreground and the background changes, the context may be "held in hand" or otherwise "being transferred". If the item is in the background, it may be determined to be located in some specific location.
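- A toy version of this background-foreground heuristic is sketched below; the change measures and thresholds are hypothetical placeholders for whatever the visual pipeline would actually provide.

```python
def classify_context(item_change: float,
                     background_change: float,
                     stable_threshold: float = 0.1,
                     moving_threshold: float = 0.3) -> str:
    """Toy background/foreground heuristic for the item's context.

    `item_change` and `background_change` are normalized frame-to-frame
    differences (0 = identical, 1 = completely different) for the item
    region and for the rest of the frame, respectively.
    """
    if item_change < stable_threshold and background_change > moving_threshold:
        return "held in hand / being transferred"   # item fixed, scene moving past it
    if item_change < stable_threshold and background_change < stable_threshold:
        return "stationary in a specific location"  # item and scene both static
    return "undetermined"

print(classify_context(item_change=0.05, background_change=0.6))
```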
- identification of the objects may be useful in various disciplines, such as but not limited to safety inspection, compliance, quality control, self-service shopping, order fulfillment, picking service, engineering tasks, monitoring technician activities, auto-documentation of tasks, or the like.
- a shopper can enter a store (such as a supermarket, grocery store, fashion store, warehouse, or the like), collect items using her hands, and perform other related actions.
- the shopper may purchase products without being checked out by a cashier, without requiring additional scanning of the products before exiting the store, or the like.
- it may be required to automatically identify the objects or items collected by the shopper without requiring her to perform an "unnatural" action that is not part of the natural shopping behavior, such as intentionally scanning the objects, moving the objects, turning the objects, or the like.
- an "unnatural" action that is part of the natural shopping behavior, such as intentionally scanning the objects, moving the objects, turning the objects, or the like.
- compliance and accuracy can be verified while monitoring the hands of a human picker, ensuring that she has picked the correct items and placed the items in the correct tote of the relevant order being fulfilled, while complying with the company's standards (e.g., placing certain items in a certain order, in refrigerated or non-refrigerated totes, or the like).
- Such automated monitoring may also increase the efficiency of the human picker, enabling her to pick more items than before as some manual tasks may be eliminated.
- the picker may manually scan an item using a barcode scanner or a similar scanning apparatus.
- the disclosed subject matter may ensure that the item picked up and scanned is the one that was actually placed in the tote, improving accuracy and mitigating the risk of human error.
- identification of an object may be performed based on visual identifiers on the object, such as a name, a barcode, or the like.
- an action performed thereon may be verified.
- the system may verify that the human subject touches or picks the same object identified by the identifier.
- other information or properties related to the object may be stamped or printed on the object, such as expiry date, nutritional marking, or the like.
- a wearable smart device worn by the human subject may be utilized to observe and track hand actions and objects the actions performed thereon.
- the wearable smart device may be worn on the hand of the human subject, on the wrist of the human subject, located on the body of the human subject, or the like.
- the wearable smart device may be equipped with a sensor, such as a digital camera, a scanner, a radar sensor, a radio-waves-based sensor, a laser scanning sensor, a LiDAR, an Infrared (IR) sensor, an RFID sensor, an ultrasonic transducer, or the like.
- the wearable smart device may be equipped with other sensors whose input can be utilized for identification, such as a motion sensor, a location sensor, a temperature sensor, a touch sensor, a Global Positioning System (GPS) module, or any other component which can generate or identify data.
- the sensor may be utilized to provide a visual representation (or pseudo visual representation) of the interior portion of the hand of the human subject, other portions of the hand of the human subject, an area surrounding the hand of the human subject, a 360-degree view of the hand, or the like. Based on visual input provided by the wearable smart device, the object grasped by the hand, and the action performed thereon, may be identified, and a responsive action may be performed accordingly. The object and the action performed thereon may be matched.
- the object may be tracked so as to ensure that the object upon which the action is performed is indeed the identified object. Additionally or alternatively, the object may be matched with an externally identified object, such as a scanned object, ensuring that the scanned object and the object with which the user interacted are the same.
- the wearable smart device may be a device that can be worn on a human's hand without affecting actions performed by the hand, such as a bracelet, a wristband, a watch, a glove, a ring, or the like.
- Other hand wearable devices may be utilized such as a glove that covers all or part of the hand (e.g., a few fingers, a finger cover), or the like.
- the wearable smart device may be embedded in a smartwatch or other wearable device of the human subject being observed.
- the wearable smart device may be worn on a single hand, on both hands separately, on both hands simultaneously, or the like.
- a body-worn device that is not worn on the hand may be utilized.
- the device may be worn on a collar, on a torso, or the like, while having a view of the hand.
- Such embodiments may be less reliable than hand-worn devices and may be susceptible to manipulations by the subject.
- such devices may be utilized when the subject wearing the device is considered reliable, such as an employee (e.g., a picker or other staff member), an agent, a trusted user, or the like.
- the wearable smart device may be configured to identify the object being touched by the hand(s), such as the type of the object, shape, name, or the like. Additionally or alternatively, the wearable smart device may be configured to identify other attributes related to the object, such as weight, size, color, temperature, texture, or the like.
- the wearable smart device may be configured to identify the object using visual input. As an example, the device may be configured to identify the object based on an optical image, based on QR code, barcode, any combination of letters, numbers or images, chip, RFID, or the like.
- computer vision techniques may be employed to analyze the images. The image analysis may be performed on-device.
- the off-device analysis may be implemented, such as to preserve battery and reduce computation requirements from the device, or the like.
- the off-device analysis may be performed using a server performing computer vision techniques, applying Al models, deep learning technology, or the like.
- capturing the object or identifiers thereof may not be feasible, as the identifier may be obscured or not visible to the visual sensor, such as due to the view being blocked by other objects, the identifier not being located on the façade of the object or the side facing the sensor, or the like.
- the human subject may perform the action on a plurality of objects at once, such as on a bundle of objects, on objects placed one above the other, or the like. It may be required to automatically identify each of the plurality of objects.
- the wearable smart device or a system utilizing the device may identify in real-time, information related to the environment of the object, such as an origin from which the object is picked from, such as a shelf, a box, a container, a storage location, or the like, or a target destination in which the object is being placed in during the picking or the shopping session, such as a shelf, a bag, a cart, a picking pallet, a box, a container, the hands of the subject, or the like.
- the visual input of the wearable smart device may be analyzed to identify a surface from which the object is picked or on which the object is being placed, a predetermined shopping cart such as a physical shopping cart of the store, a personal shopping bag, or the like.
- the system may be configured to conclude that an identified action is made with respect to an object associated with a certain identifier that has been captured in a previous frame or is being captured in a later frame. The conclusion may be performed based on a previous or later identification of the object and associating the action with the identified object.
- the human subject, e.g., the human agent, may pick a yet-unidentified item, which may be later identified by the system during the performance of other actions thereon (such as when being placed in a shopping cart), or based on previous or later input from the sensors, or the like.
- the identification, the later identification, or the previous identification may be performed based on sensory data obtained from a device associated with the subject, the environment in which the action is being performed, or the like.
- the sensory data may comprise visual input obtained from visual sensors located on a wearable device worn by the subject, such as on the hand of the subject.
- the sensor may be configured to be placed in a location and orientation enabling monitoring of hand activity of the subject, in a manner enabling capturing at least a portion of the object upon which the activity is being performed.
- other types of sensory data may be utilized, such as accelerometry data, QR readings, or the like.
- sensors of the wearable device may capture the identifier of the object while being placed in its original location while the human subject intends to perform an action thereon; however, during performing the action the identifier may not be visible.
- the system may conclude that the object is the same object previously captured and accordingly identify the object based on the captured identifier.
- the later or earlier identification may be performed based on previous or successive input from the same sensor monitoring the activity of the hand of the subject, e.g., the same visual sensor. Additionally or alternatively, the later or earlier identification may be performed based on sensory data obtained from other sensors of the same type (e.g., other visual sensors located on the wearable device, other visual sensors not located on the wearable device, or the like), from sensors from other types (e.g., non-visual sensors located on the wearable device, other sensors from other types, or the like), or the like. Additional sensory data may be obtained during monitoring the activity of the subject, prior thereto, posterior thereto, or the like.
- the user may manually scan the item. While the item is being scanned, such as using a third-party barcode scanner, embedded on the wearable device, or located in another position, the identifier of the object may be captured. While the sensors of the wearable device may not obtain the identifier, such sensors may be useful to confidently determine that the item in the hand of the user is currently being scanned.
- the held item may be identified by the system by correlating both the barcode reader information and the determination, using visual sensors, that the item is being held by the user while performing a scanning operation. The item may thereafter be tracked in successive frames, and when additional actions are performed thereon, the identity of the item can be known based on the barcode scanning.
- actions performed on the item in previous frames may also be matched with the identity of the item using the barcode scanning information, and based on a tracking of the item over time in the set of frames (e.g., consistently having at least a portion of the item within the frames; potentially allowing the item to be out of the frame for no more than a predetermined short timeframe; or the like).
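- The frame-to-frame propagation of the scanned identity described above could be sketched as a simple track object that tolerates a short out-of-frame gap; the class name, the gap limit, and the example values below are illustrative.

```python
from typing import List, Optional

class ItemTrack:
    """Tracks one handled item across frames and propagates its identity
    from a barcode scan to earlier and later actions on the same track."""

    def __init__(self, max_gap_frames: int = 10):
        self.max_gap_frames = max_gap_frames
        self.identifier: Optional[str] = None
        self.actions: List[tuple] = []      # (frame_index, action_name)
        self._last_seen: Optional[int] = None

    def observe(self, frame_index: int, item_visible: bool,
                action: Optional[str] = None, scanned_id: Optional[str] = None) -> bool:
        """Returns False if the track is broken (item out of frame for too long)."""
        if item_visible:
            self._last_seen = frame_index
        elif self._last_seen is not None and frame_index - self._last_seen > self.max_gap_frames:
            return False                    # tracking lost; identity no longer propagates
        if action:
            self.actions.append((frame_index, action))
        if scanned_id:
            self.identifier = scanned_id    # identity applies to past and future actions
        return True

track = ItemTrack()
track.observe(1, True, action="pick up")
track.observe(5, True, scanned_id="7290002")   # manual scan while holding the item
track.observe(9, True, action="place in tote")
print(track.identifier, track.actions)
```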
- the conclusion that the object is the same object identified in a previous or later capture may be performed based on a categorization of the object into a group or subgroups based on the properties of the object.
- Artificial Intelligence (AI) may be utilized, such as clustering techniques, to improve the identification of items based on features that can be learned from sensors, such as shape, color, weight, temperature, indicative images printed thereon, or the like, or based on the type of the action, such as the movement pattern, acceleration, or the like.
- the information may be verified using the scanner or camera (such as inside the stores) to accurately identify the item.
- the categorization may be utilized to identify the object being the same object despite partial visibility thereof during the performance of the action, by extracting properties thereof and inferring the identity of the objects based thereon.
- the properties may not be sufficient to identify the item in an absolute manner. However, such properties may distinguish the item from other potential items located nearby, may provide a verification having a confidence measurement above a threshold as a general rule, or relative to the potential other items located nearby, captured by the sensors, determined using a warehouse inventory system, or the like.
- the conclusion may be performed despite partial visibility or availability of the identifier to the visual sensor upon the performance of the action with the object, due to detection or identification of the object before or after the action.
- different portions of the identifier of the object may be completely or partially obscured while performing an action on the object, while other portions may be available.
- non-unique identifiers visible to the visual sensor during the current frame may be utilized to identify that the current object is the object from the previous/later frame.
- portions of the identifier that were obscured during performing the first action may be available or visible during performing a second action, after or before performing the first action.
- the different portions of the identifiers as captured at different times may be combined to infer a complete or a better view of the identifier, sufficient to identify the object based thereon, determine that the current object is the same previously or later identified object based on an overlap between the portion of the identifiers, similarity therebetween, or the like.
- the identifier of the juice bottle may be a barcode or a series of numbers, that different portions thereof appear in different frames of the visual input, that may be combined, overlapped, concatenated, or the like, to determine the identifier thereof.
- the non-unique identifier of the juice bottle may be the name of the manufacturer thereof, a name of the product, an identifying icon, or the like, that may be utilized to infer that the identified object and the object to be identified are the same objects, as the likelihood that another item was replaced and still has such identical features is relatively low (e.g., below a threshold).
- the conclusion may be performed based on the context of the object.
- the context may be whether the item is being held or is stationary. Additionally or alternatively, the context may be a location in which the stationary item is located.
- the object may be identified as appearing in the foreground, and as a result, it may be determined that the item is being held, as the background changes, while the item remains without a substantial change.
- the context may be determined based on the background. Based on a change in context, the disclosed subject matter may be enabled to determine that an action was performed by the human subject on the object, to identify the action, to differentiate between the object and the context, to identify the object, or the like.
- features of the object may be determined based on the context of the object.
- the initial image may show the item in the background.
- Background information from the image may be utilized, such as information related to the storage location of the item (e.g., a refrigerator indicating the item is a dairy item), identification data (such as name, category, or the like), information from other similar objects on the shelf, or the like.
- using a body-worn sensor, such as a body-worn smart device, an item appearing in the foreground may be inferred to be moving together with the user, whereas an item appearing in the background may be inferred to not be moving with the user.
- the conclusion may be performed based on other distinguishing features, in addition to the identifier originally used to identify the object.
- the distinguishing features may be identified based on the remote observation of the object. While an action is performed on an object whose original identifier is obscured, the distinguishing features that are available, completely or partially, to the visual sensors, may be extracted and utilized to identify the object.
- the identifier thereof may be hidden or unclear to the system and may accordingly be an unidentified item for the system while the shopper picks it.
- the system may determine based on the properties of the unidentified item, that the item is a certain juice bottle, such as based on the color of the liquid inside, properties of the bottle, the size of the cork, or the like.
- the distinguishing features may be utilized as a "local identifier" of the object. Such features, combined with the context of the object, may be utilized to identify the object. Referring again to the self-service shopping example, when the shopper performs an action nearby a dairy department or picks an item from a shelf of milk or dairy products, the system may be configured to determine, based on the color appearing on the package, the type of the milk, a fat percentage thereof (e.g., blue is whole milk, red is skim milk), or the like. While such features may not be useful identifiers, in the context of the specific local environment, such features may be useful to distinguish between the item and other items, thereby enabling the identification of the item in the specific local environment.
- the local identifier may be re-computed periodically based on sensible information. For example, when first viewing the refrigerator, a set of potential local identifiers may be determined for different items. When approaching a single refrigerator door, a second different set of identifiers may be computed for the visible items inside the door. When the item is picked up, a third identifier may be computed for the item based on the context and background information. A fourth identifier may be determined when the third identifier becomes less useful in view of the changing background, such as when a confidence measurement of such identifier correctly identifying an item drops below a threshold.
- Additionally or alternatively, the conclusion may be performed based on an observation of the object during a continuous action.
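- A possible sketch of the periodic local-identifier re-computation mentioned above, assuming each candidate identifier carries a confidence score that is refreshed from the latest frame and pruned against a threshold, is given below; the identifiers and values are hypothetical.

```python
from typing import Dict

def refresh_local_identifiers(current: Dict[str, float],
                              visible_now: Dict[str, float],
                              min_confidence: float = 0.5) -> Dict[str, float]:
    """Re-compute the set of potential local identifiers as the context changes.

    `current` maps candidate identifiers to the confidence that they still
    describe an item observable by the device; `visible_now` holds fresh
    confidences computed from the latest frame. Candidates that drop below
    `min_confidence` are discarded, mirroring the recomputation described above.
    """
    merged = dict(current)
    merged.update(visible_now)                       # newer observations win
    return {ident: conf for ident, conf in merged.items() if conf >= min_confidence}

# Opening a single refrigerator door narrows the candidates computed at shelf level.
shelf_level = {"whole-milk": 0.7, "skim-milk": 0.7, "grape-juice": 0.6}
door_level = {"whole-milk": 0.9, "grape-juice": 0.2}
print(refresh_local_identifiers(shelf_level, door_level))
# -> {'whole-milk': 0.9, 'skim-milk': 0.7} (grape-juice drops below the threshold)
```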
- the object may be continuously observed and monitored during a continuous action, despite the lack of visibility of the identifier in the course of the action or a portion of the action.
- the continuous action may be picking an item during self-service shopping, which comprises getting close to the location of the item, touching the item, grasping the item by the hand, holding the item, and placing the item in a shopping cart.
- the item may be identified at a certain point of the action and may remain identifiable during a continuous action on the item (e.g., getting close to the location of the item), despite the lack of visibility of the identifier during this action, due to the recognized continuous visibility of the item during that action.
- the object may actually be identified, such as based on a change in the images in which other portions of the identifier can be extracted.
- the conclusion may be based on a defined user behavior requiring that the user first locate the identifier so as to be sensible to the sensor (e.g., touch the barcode of the box, or swipe the sensor nearby the identifier) before interacting with the item.
- the disclosed subject matter may rely on an initial identification of the item using the identifier (e.g.: barcode) and may only track the identified item and ensure that the action (e.g., picking up the box) is performed on the same item and not on a nearby item.
- an identification of the item may be performed using a separate device, such as a barcode scanner scanning the barcode of the object, while the body-worn smart device monitors the activity of the user.
- a time-based correlation may be utilized to determine that while the barcode scanner scanned the item, the body-worn smart device observed the user performing the scanning operation from a different viewpoint. The correlation may be based on time-wise correlation, activity correlation, location-based correlation, or the like.
- the system may match the observed object with the identified item.
- the conclusion may be performed based on a constant signal from non-visual sensors observing the object, such as a radio signal, an RFID, a signal from a radar sensor, a radio waves-based sensor, Bluetooth, Wi-Fi, or the like.
- the system may be configured to conclude, based on identifying that the signal associated with the object is substantially constant or stable while the hand of the human subject is moving, that the object is positioned in the hand of the human subject, picked or moved by the human subject, or the like.
- other objects may simultaneously be identified based on having similar or changing signals.
- the system may be configured to conclude, based on identifying objects whose signals are unstable or changeable with respect to the signals from the hand or from objects whose signal is substantially constant or stable, that such objects are not picked or moved.
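- The signal-stability reasoning above might be sketched as follows, assuming recent signal-strength samples (e.g., RSSI) per tag and a variance threshold; the tag names and threshold are illustrative.

```python
from statistics import pstdev
from typing import Dict, List, Optional

def infer_held_object(signal_history: Dict[str, List[float]],
                      hand_is_moving: bool,
                      stability_threshold: float = 1.0) -> Optional[str]:
    """Infer which tagged object is in the moving hand.

    `signal_history` maps each object's tag identifier to recent signal-strength
    samples measured by the wearable device. While the hand moves, the tag whose
    signal stays substantially constant is taken to move with the hand; tags with
    unstable signals are considered stationary in the scene.
    """
    if not hand_is_moving:
        return None
    stable = [tag for tag, samples in signal_history.items()
              if len(samples) > 1 and pstdev(samples) < stability_threshold]
    return stable[0] if len(stable) == 1 else None   # ambiguous -> no conclusion

history = {
    "rfid-juice": [-41.0, -40.5, -41.2, -40.8],    # constant relative to the hand
    "rfid-milk": [-55.0, -61.0, -48.0, -70.0],     # changes as the hand moves away
}
print(infer_held_object(history, hand_is_moving=True))   # -> "rfid-juice"
```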
- the system may be configured to provide feedback if the action made with the object matches the identifier, or an alert if the action is made with a different object than the identified object or with an unidentified object.
- a local sensor may be configured to simultaneously observe and monitor multiple objects, and identify each object by its identifier.
- a visual sensor observing a shelf may be configured to identify multiple barcodes of the items located on the shelf.
- In response to identifying that an action was performed on an object associated with the multiple objects (such as based on the location of the wearable device), the object may be identified based on the input from the sensor.
- the system may be configured to identify the object that the action was performed thereon, by eliminating all other identified items in the input from the sensor, by concluding that the action was not performed on such items, or the like.
- preloaded objects may be objects that have been known to the system, recorded, stored, loaded, learned, or the like. Additionally or alternatively, preloaded objects may be objects whose properties or identifiers are stored by the system, in a database available to the system, or the like.
- the matching process may be triggered by a natural, unintentional, or implicit action of the human subject, an action not intended to convey information, and not through an explicit gesture or other intentional action, or the like. Additionally or alternatively, the matching process may be triggered by an explicit gesture or action of the human subject, such as moving one or more fingers or positioning of fingers towards the object. The gesture may be unique to each user (the system learns the position of the user's fingers before or upon touching or picking an item) or general.
- the system may be configured to extract features or properties of the object, and compare them with properties of preloaded objects to determine a match.
- the system may be configured to provide feedback if the object matched a preloaded object and identify the object based on the match with the preloaded object, or issue an alert indicating a lack of a match, a new object, or the like.
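- A simplified sketch of matching extracted features against preloaded objects, and of producing either a confirmation or an alert, is shown below; the feature representation, the preloaded entries, and the overlap threshold are assumptions for illustration.

```python
from typing import Dict, Optional, Tuple

Features = Dict[str, str]   # e.g., {"color": "blue", "shape": "carton", "size": "1l"}

PRELOADED: Dict[str, Features] = {
    "whole-milk-1l": {"color": "blue", "shape": "carton", "size": "1l"},
    "skim-milk-1l":  {"color": "red",  "shape": "carton", "size": "1l"},
}

def match_preloaded(observed: Features, min_overlap: float = 0.8) -> Tuple[Optional[str], str]:
    """Compare features extracted from the visual input with preloaded objects.

    Returns (identifier, feedback), where feedback is either a confirmation or
    an alert indicating a lack of a match / a possibly new object.
    """
    best_id, best_score = None, 0.0
    for identifier, profile in PRELOADED.items():
        shared = [k for k in profile if k in observed]
        if not shared:
            continue
        score = sum(observed[k] == profile[k] for k in shared) / len(profile)
        if score > best_score:
            best_id, best_score = identifier, score
    if best_id is not None and best_score >= min_overlap:
        return best_id, f"matched preloaded object '{best_id}'"
    return None, "alert: no matching preloaded object (possibly a new object)"

print(match_preloaded({"color": "blue", "shape": "carton", "size": "1l"}))
```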
- any non-matching scenarios can be automatically marked as private, for applying one of several privacy-compliant actions such as deleting, obscuring/obfuscating, or the like, of the incoming data.
- a system in accordance with the disclosed subject matter may trigger performing matching by one or more different triggers.
- the matching may be triggered by a natural, unintentional, or implicit action of a user.
- the matching may be triggered by an action that is not intended to convey information to a computerized system.
- matching may be triggered by an explicit gesture or action of a user, which is aimed to instruct the computerized system to perform the matching or perform another functionality that may involve matching.
- the instruction may be moving one or more fingers or positioning of fingers.
- the gesture may be general, such as a swipe gesture over the barcode.
- the action may be unique to a user, such as an intentional gesture or an unintentional action that the system learns over time, such as an action that is based on a learned position of the user's fingers before or upon touching or picking an item.
- in response to identifying the object, the system may be configured to predict an intended action the human subject is about to perform thereon. Prior to the human subject performing an action with an object, or when the system identifies that the human subject intends to perform the action, the system predicts the intended action before it is performed, such as touching, picking, or the like, and reports it to the user. The system may be further configured to validate that the predicted action is actually performed.
- the system may be configured to guide or instruct the human subject to perform an action on an object in response to the identification of the object, or objects related thereto.
- the system may utilize visual guidance, such as using a visual pointer, laser beam, or the like, vocal guidance, or the like.
- the wearable smart device may be utilized for performing a picking task. The wearable device may assist the picker in selecting the appropriate object to be picked, based on predetermined requirements, such as customer preferences or the like, documenting or tracking the manual fulfillment of a picking task by a picker for the purpose of restoring the order of placing items in the tote or to document their placement, or the like.
- the system may instruct the human subject, e.g., the picker, to pick a certain item from a plurality of items identified within the visual input.
- the system may instruct the human subject, e.g., the picker, what item to pick after the identified item, based on properties of the identified item, such as instructing the picker to pick an item from the order list having the closest distance to the identified item, or the item having physical properties suitable for being placed within the tote close to the identified item, or the like.
- the system may instruct the picker how and where to place the identified item within the tote.
- the system may be configured to validate compliance of actions performed by the human subject based on identification of the objects.
- the system may be configured to validate performance or lack of performance of predetermined actions, order between the actions, timing, or the like.
- the system may further validate that the actions comply with predetermined requirements, safety rules, regulations, or the like.
- the wearable smart device may be utilized for safety verification. Actions performed on the identified object may be compared with a safety rule to determine conformation or violation thereof. In response to a violation of the safety rule, a safety alert may be issued to the subject, to a supervisor, or the like.
- the safety rule may be a rule relating to an administration of medicine to a patient, such as a type of the medicine, a dosage, a procedure of administration, prevention of mixture with other medications, allergies or sensitivity to drugs of the patient, or the like.
- the safety rule may be a rule relating to operating a machine, a vehicle, or the like, such as rules related to operating an airplane, a rule associated with pressing each button, or the like.
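- As an illustration of validating actions against such rules, the sketch below checks a medication administration against a dosage limit and a do-not-mix list; the rule structure and the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MedicationRule:
    drug_id: str
    max_dosage_mg: float
    forbidden_with: List[str]          # drugs that must not be mixed

def check_safety(drug_id: str, dosage_mg: float, already_given: List[str],
                 rules: List[MedicationRule]) -> Optional[str]:
    """Return a safety alert string if the observed administration violates a rule,
    or None if the action conforms."""
    rule = next((r for r in rules if r.drug_id == drug_id), None)
    if rule is None:
        return f"alert: no safety rule found for '{drug_id}'"
    if dosage_mg > rule.max_dosage_mg:
        return f"alert: dosage {dosage_mg} mg exceeds maximum {rule.max_dosage_mg} mg"
    conflict = set(already_given) & set(rule.forbidden_with)
    if conflict:
        return f"alert: '{drug_id}' must not be mixed with {sorted(conflict)}"
    return None

rules = [MedicationRule("drug-a", max_dosage_mg=500, forbidden_with=["drug-b"])]
print(check_safety("drug-a", 250, already_given=["drug-b"], rules=rules))
```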
- the system may be configured to guide or instruct the human subject to perform an action on an object, to refrain from performing an action, or to encourage performing an action, in response to the identification of the object, or objects related thereto.
- the system may utilize a separate device for that purpose.
- the separate device may be a headphone, speaker, flashlight, or the like.
- One technical effect of utilizing the disclosed subject matter is enabling an efficient identification of objects without violating the privacy of monitored subjects and surrounding subjects.
- the disclosed subject matter may enable monitoring mainly the interior portion of the hand, while a wider scene may be blocked by the hand.
- Personal identification features such as in the face, name tags, or the like, may not be captured by the utilized sensors.
- the data obtained by sensors utilized in the disclosed subject matter may be limited and focused only on information essential for determining the action and the object the action is being performed on.
- the disclosed subject matter may spare tracking and monitoring of the entire environment (such as the entire store, the entire hospital room, the entire lab, or the like), thus reducing costs, not requiring changes in the design or additional equipment, not revealing sensitive data in the environment, or the like.
- Another technical effect of utilizing the disclosed subject matter is providing for a reliable monitoring system. Utilizing the disclosed subject matter enables users, such as pickers, to perform a fast and efficient picking session while reducing the time of the picking session, improving efficiency, ensuring compliance with pre-defined rules and regulations, and improving the accuracy of the picking activity.
- Yet another technical effect of utilizing the disclosed subject matter is providing for a monitoring system, at an affordable price, and without requiring a complicated and expensive setup.
- Yet another technical effect of utilizing the disclosed subject matter is to automatically provide for a safety system that monitors human activity and identifies violations of safety rules.
- a safety system may achieve a reduction in accidents and incidents caused by human error.
- the safety system may relate to medical care and medical services, ensuring compliance with desired rules and regulations.
- the disclosed subject matter may provide for one or more technical improvements over any pre-existing technique and any technique that has previously become routine or conventional in the art. Additional technical problems, solutions, and effects may be apparent to a person of ordinary skill in the art in view of the present disclosure.
- Referring now to Figure 1, showing a schematic illustration of a hand action monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- Wearable Device 120 may be a wearable digital product that may be worn by a subject on a Hand 130 of the subject. Wearable Device 120 may be worn on a wrist of Hand 130, such as a smart watch, smart wristband, or the like. Wearable Device 120 may be adapted in size and shape to fit to a human hand or wrist, such as Hand 130. Wearable Device 120 may be worn on the left hand, on the right hand, on a single hand, on both hands (e.g., comprising two wristbands, one on each hand), or the like. Additionally or alternatively, Wearable Device 120 may be embedded in another wearable device of the user, such as a smartwatch, a bracelet, or the like.
- Wearable Device 120 may comprise one or more vision Sensors 110 located thereon.
- Sensor 110 may be configured to be placed in a location and orientation enabling monitoring of activity of Hand 130.
- Sensor 110 may be used to stream live video, record step-by-step instructions for performing a task, or the like.
- Sensor 110 may be positioned at a location enabling capturing a view of the inner portion of Hand 130, the palm of Hand 130, the base portion of the fingers, or the like, such that when a subject is holding an object, Sensor 110 may capture the object being held, at least partially.
- Sensor 110 may be located at the base of the palm of Hand 130, such as at the wrist, or the like.
- Sensor 110 may be a Point Of View (POV) camera designed to capture the scene in front of Hand 130, such as a stationary mounted camera, or the like. Additionally or alternatively, Sensor 110 may be a SnorriCam camera adapted to fit Hand 130, to face Hand 130 directly so that Hand 130 appears in a fixed position in the center of the frame. The SnorriCam camera may be configured to present a dynamic, disorienting point of view from the perspective of Hand 130.
- Sensor 110 may comprise several sensors (not shown) embedded in Wearable Device 120, attachable thereto, or the like.
- several sensors may be located all over Wearable Device 120, that cover a full range of view around the hand, such as 360°.
- the several sensors may be dispersed non-uniformly over Wearable Device 120, in order to provide the full range of view, provide a view enabling identification of actions and items, or the like.
- several sensors may be located in the portion of Wearable Device 120 that is configured to face the interior portion of Hand 130. The several sensors may be at a predetermined constant distance from each other, may overlap, or the like.
- Sensor 110 may comprise visual sensors such as multiple camera lenses, different cameras, LiDAR scanners, ultrasonic transducers, RF-based sensors, other sensors or components having alternative or equivalent technology, a combination thereof, or the like. Sensor 110 may be configured to capture pictures, videos, or signals around Wearable Device 120. Other types of input may be provided, such as heat maps, thermal images, or the like.
- Wearable Device 120 may be utilized to recognize that Hand 130 is about to perform an action (or is performing the action) on an item, to identify the item being held by Hand 130, information thereabout, or the like. Wearable Device 120 may be utilized to track actions of Hand 130, items Hand 130 performs or avoids performing the action thereon, or the like. Sensor 110 may be configured to recognize when the hand is approaching an object, picking, holding (e.g., the object stays constant at the hand), moving the object (e.g., background picture changed), releasing the object, or the like.
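- As a non-limiting illustration, the recognition of approach, pick, hold, move and release described above could be reduced to a simple state machine over per-frame observations. The following Python sketch is hypothetical; the frame attributes (object_in_grip, background_changed) are assumed outputs of an upstream vision model and are not defined here.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class FrameObservation:
    """Assumed per-frame output of an upstream vision model (hypothetical)."""
    object_in_grip: bool       # an object appears held in the palm region
    background_changed: bool   # the scene behind the hand differs from the previous frame

def hand_action_events(frames: Iterable[FrameObservation]) -> Iterator[str]:
    """Reduce per-frame observations to coarse hand-action events:
    'pick' on a no-grip -> grip transition, 'hold' while gripping with a static
    background, 'move' while gripping with a changing background, and 'release'
    on a grip -> no-grip transition."""
    holding = False
    for frame in frames:
        if frame.object_in_grip and not holding:
            holding = True
            yield "pick"
        elif frame.object_in_grip:
            yield "move" if frame.background_changed else "hold"
        elif holding:
            holding = False
            yield "release"

demo = [
    FrameObservation(False, False),
    FrameObservation(True, False),
    FrameObservation(True, False),
    FrameObservation(True, True),
    FrameObservation(False, True),
]
print(list(hand_action_events(demo)))  # ['pick', 'hold', 'move', 'release']
```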
- Wearable Device 120 may be configured to identify a hand-based action that is not intended as a command to the device itself (as opposed to a gesture intended as a purposeful command). As an example, Wearable Device 120 may be utilized to identify a picking-up action performed naturally, as opposed to a purposeful gesture with Hand 130 that may be performed specifically with the intent to instruct the device. In some exemplary embodiments, Wearable Device 120 may be configured to identify actions that are performed as part of the regular interaction of the subject with the items, and no dedicated actions or gestures by the subject are relied upon. In some exemplary embodiments, Wearable Device 120 may be configured to identify actions that are performed as a result of a gesture captured by the device, such as a gesture, movement, or special position of the device user's finger(s).
- Wearable Device 120 may be utilized to identify the item upon which the hand-based action is being performed. Additionally or alternatively, Sensor 110 may be configured to identify parameters of the object or enable identification thereof, such as type, category, name, shape, size, price, or the like.
- Wearable Device 120 may comprise motion sensors or detectors configured to recognize any movement of Wearable Device 120 and support tracking the disposition of an item, such as a GPS sensor, an accelerometer, or the like.
- input from the motion sensors may be utilized to support tracking the disposition of items upon which the hands perform actions.
- the motion sensors may be configured to recognize any movement of Wearable Device 120.
- Wearable Device 120 may comprise a Barcode Scanner 170, such as a barcode reader.
- Barcode Scanner 170 may be utilized to scan barcodes associated with items to support identification thereof and provide additional information, such as price, weight, or the like.
- the barcodes may be identifiers that identify objects of a predetermined type.
- Barcode Scanner 170 may be utilized to read the barcode on objects identified in the visual input obtained from Sensor 110.
- Barcode Scanner 170 may be configured to scan the barcodes and provide respective data simultaneously with Sensor 110, within a timeframe of a predetermined maximal duration of obtaining data from Sensor 110, or the like.
- Wearable Device 120 may comprise a communication component (not shown), such as a chip or another hardware technology, configured to receive, collect and process pictures, videos, signals, or the like, captured by Sensor 110, Barcode Scanner 170, or other sensors.
- the communication component may be embedded within Wearable Device 120.
- the communication component may comprise a transmitter utilized for transmitting input captured by Sensor 110 to a backend device configured to perform the respective analysis.
- the transmitter may be configured to utilize a wireless connection, such as a Wi-Fi network, Bluetooth, RF transmission, IR transmission, cellular data transmission, or the like, for transmitting the data. It may be noted that all functionalities of Wearable Device 120 may be based on on-device computations or off-device computations, such as performed by an edge device, a remote server, a cloud-based server, or the like.
- Wearable Device 120 may comprise an Input/Output (I/O) component (not shown).
- the I/O component may be utilized to obtain input from or provide output to the subject or other users, such as for informing that the item is identified, viewing a list of items provided by one or more customers, viewing picking tasks status, or the like.
- the I/O component may comprise a small screen, a touch screen, a microphone, a Light Emitting Diode (LED), or the like.
- a green light may be lit as a positive signal.
- Other types of signals such as audio signals, vibration signals, or the like, may be used.
- the I/O component may be configured to provide output to the subject (e.g., LED lighting up in green) indicating an update of her virtual cart, such as in view of an addition of an item thereto.
- the I/O component may be configured to provide output to the subject (e.g., LED lighting up in red) indicating an invalidation of her virtual cart, such as in view of a misidentification of an item in the shopping cart, identification of a tampering event, placing an item in a wrong shopping cart, or the like.
- the I/O component may be utilized to obtain input from the subject, such as using voice, touch, pressure, or the like.
- Wearable Device 120 may be utilized as a retail smart device.
- Wearable Device 120 may be configured to be worn by a shopper during self-service shopping, may be configured to be worn by a picker fulfilling an online order, by a retailer or an employee placing stock, or the like.
- the device may be worn by a cashier during checkout activities, such as scanning the products and creating the digital shopping list.
- Wearable Device 120 may be utilized for other tasks, such as safety monitoring, actions monitoring, logging user actions, augmented reality games, virtual reality applications, or the like.
- Wearable Device 120 may be configured to continuously monitor Hand 130 between a first and second activities, such as check-in and check-out activities, log-in and log-out into another device, or the like. Such monitoring may comprise obtaining and analyzing input related to Hand 130, such as visual input, geospatial location, or the like.
- Sensor 110 may be configured to capture at least an Interior Portion 132 of Hand 130.
- Interior Portion 132 may comprise Distal Portion 134 of a Palm 133.
- Sensor 110 may be configured to face Palm 133 whereby capturing Distal Portion 134.
- the visual input may capture at least a portion of the object when the object is being held by Hand 130, such as when being grasped by Fingers 136 of Hand 130, or the like.
- At least a portion of the visual input of Sensor 110, such as about 5%, about 10%, about 50%, or the like, may comprise a view of Interior Portion 132 to enable identification of the object.
- an identifier of the object may not be visible to Sensor 110, may be obfuscated by Hand 130, may be covered by one or more of Fingers 136, may be facing Interior Portion 132 of Hand 130, may be blocked by other items, or the like.
- a view of Sensor 110 may be blocked, at least in part, by Hand 130, or by the object held by Hand 130.
- Sensor 110 may not be able to capture the whole environment surrounding Hand 130, such as the face of the user, other people in the surrounding environment, unrelated objects, or the like.
- the view of Sensor 110 may be a spherical view capturing 360-degree panoramic space surrounding Hand 130.
- the spherical view may have a relatively limited view, such as a spherical view with a radius of up to about 10 centimeters around Hand 130, up to about 25 centimeters around Hand 130, or the like.
- sides, elements, portions or aspects of the object held by Hand 130 may not be visible to Sensor 110, because of a position of the object.
- Wearable Device 120 may be configured to provide images captured by Sensor 110 to be utilized by an analysis component (not shown).
- the analysis component may be located on Wearable Device 120, embedded therein, a module thereof, or the like. Additionally or alternatively, the analysis component may be located external to Wearable Device 120, such as on a server (not shown), on an external device (not shown), on a backend device (not shown), a controller (not shown), or the like. Activities and analysis related to Wearable Device 120, such as identification of actions performed by Hand 130, identification of objects upon which the actions are performed, or the like, may be performed by the analysis component, in accordance therewith, or the like.
- Wearable Device 120 may be configured to automatically extract identifiers of objects appearing in the visual input obtained from Sensor 110, based on previous or later visual input obtained from Sensor 110, based on other sensory data obtained from other sensors of Wearable Device 120, from external sensors, or the like.
- Wearable Device 120 may be connected to a controller (not shown) external to Wearable Device 120.
- the controller may be configured to determine a responsive action based on the action or the item.
- the responsive action may be associated with the purpose of monitoring actions of Hand 130, such as reporting the action or the object, calculating a check based on the action and the object, issuing an alert based on the action or the object, or the like.
- the responsive action may comprise recording in a digital record a log event indicating that the hand-based action was performed at a timestamp by the subject with respect to the object.
- the responsive action comprises updating a digital mapping of objects to locations, a location of a digital representation of the object from a first location to a second location, or the like.
- Wearable Device 120 may be devoid of a deactivation interface for the user. Activation and de-activation of Wearable Device 120 may be performed automatically by the controller.
- the power source (not shown) of Wearable Device 120 such as the battery, may be sealed and the subject may not have access thereto.
- Wearable Device 120 may be provided with a limited de-activation interface for the user, that enables the user to de-activate Wearable Device 120 upon finishing a shopping session, based on permission from the controller, or the like.
- Wearable Device 120 may be configured to be utilized for self-service shopping. Wearable Device 120 may be configured to be utilized to identify items grabbed by Hand 130 and moved to or from a physical shopping tote of the user, wherein the items are identifiable based on the input of Sensor 110. Wearable Device 120 may be configured to be associated with a virtual cart upon initiating a self-shopping session. The virtual cart may indicate a list of items shopped by the user. The virtual cart may be automatically updated based on items moved to and from the shopping cart by Hand 130. In some exemplary embodiments, Wearable Device 120 may comprise a tampering detection module (not shown) that is configured to monitor and detect a tamper event during a shopping session of the user, avoid monitoring user activity outside the shopping session, or the like.
- Wearable Device 120 may be configured to be utilized for manual fulfillment of a shopping order of a customer.
- the shopping order may comprise a list of items.
- Hand 130 may be of a picker tasked with picking items to fulfill the shopping order of the customer.
- Wearable Device 120 may be configured to identify actions of picking up an object by Hand 130 and placing the object in a tote associated with the shopping order of the customer.
- Wearable Device 120 may be configured to be utilized for protecting the user or other related subjects.
- the responsive action determined based on the input of Sensor 110 may comprise comparing the action performed by Hand 130 with a safety rule. In response to a violation of the safety rule, a safety alert may be issued.
- Wearable Device 120 may be configured to be utilized for monitoring a healthcare system. Wearable Device 120 may be configured to continuously monitor the hand of healthcare workers during the treatment of patients.
- other form factors may be utilized, such as a wearable device configured to be worn on the chest of the user, embedded in a vest to be worn by the user, a hat-shaped device configured to be worn on the head of the user, a device configured to be worn on the forehead of the user such as using elasticized straps, or the like.
- such wearable devices may also comprise visual sensors (such as Sensor 110) configured to capture at least an interior portion of the hand of the user, objects being held by the hand, user actions performed by the hands of the user, or the like.
- a visual input may be obtained from a sensor located on a wearable device.
- the wearable device may be designed to be worn by a subject on the hand of the subject, such as Wearable Device 120 in Figure 1.
- the sensor may be configured to be placed in a location and orientation enabling monitoring of the hand activity of the subject.
- the wearable device may be utilized to monitor the hand actions of the subject and objects upon which the hand actions are being performed, during self-service shopping or a manual fulfillment of online shopping for customers by the subject, or the like.
- the subject may be a picker tasked with picking items to fulfill an order of a customer.
- the visual input may comprise at least a portion of an object (e.g., the object of interest).
- the object may be configured to be identified by an identifier.
- the identifier may not be extractable from the visual input.
- an object may be determined to be of interest based on an identification that the subject is intending to perform an action on the object, that the subject is performing or completed performing an action on the object, or the like.
- moving the hand towards an object may be an indication that the subject is about to perform an action on the object and thus it is an object of interest.
- moving the hand away from an object may be an indication that the subject has finished performing an action on the object and thus it is an object of interest.
- an object may be determined to be of interest based on an identification of the object in the foreground of the visual input.
- items in the foreground may remain the same, while the background may change over time.
- the object being in the foreground of the visual input may be indicative that the object is being held by the user. As long as the object is being held by the user, images captured by the visual sensor would comprise the item in a relatively constant location, posture and size, with respect to the hand of the user.
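- As a non-limiting illustration of the foreground heuristic above, a held object may be assumed when its bounding box remains nearly constant, relative to the hand, across consecutive frames. The sketch below is hypothetical; the bounding boxes are assumed to come from an upstream object detector.

```python
def appears_held(boxes, iou_threshold=0.8):
    """Return True if the object's bounding box (x, y, w, h), expressed relative to
    the hand, stays nearly constant across consecutive frames, suggesting that the
    object is being held rather than lying in the background."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x1, y1 = max(ax, bx), max(ay, by)
        x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = aw * ah + bw * bh - inter
        return inter / union if union else 0.0
    return all(iou(a, b) >= iou_threshold for a, b in zip(boxes, boxes[1:]))

# The box barely moves between frames, so the object is treated as held.
print(appears_held([(10, 12, 50, 80), (11, 12, 50, 79), (10, 13, 51, 80)]))  # True
```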
- the identifier may be a non-unique item identifier, such as a unique type of object identifier, or the like. A plurality of objects of the same type as the object may be identified using the identifier.
- On Step 230, a second sensor input associated with the visual input may be obtained.
- the second sensor input may be a second visual input obtained from the sensor located on the wearable device.
- the first and the second visual inputs may be of the same type, may be configured to capture the same point of view, may observe the same environment, or the like.
- the first and the second visual inputs may be consecutive visual inputs, e.g., visual inputs obtained from the sensor after a predetermined timeframe, such as every 10 milliseconds, every 100 milliseconds, every 400 milliseconds, every 1 second, or the like.
- the first and the second visual inputs may be two subsequent visual inputs from Images 301a-303a of Figure 3A, 304b-306b of Figure 3B, or 307c-308c of Figure 3C.
- the first and the second visual inputs may be obtained during a timeframe.
- during the timeframe, the object may be continuously observable by the sensor, at least in part. Accordingly, identification in the second visual input can be attributed to the object appearing in the first visual input.
- the second visual input may be obtained prior to obtaining the first visual input. Accordingly, the object may be identified before the hand-based action is completed by the subject. Additionally or alternatively, the second visual input may be obtained after the first visual input is obtained. Accordingly, the object may not be identified during performance of the hand-based action and may be identified after the hand-based action is completed.
- the second sensor input is obtained from a second sensor different than the sensor.
- the second sensor may be a different sensor located on the wearable device, such as a non-visual sensor, an accelerometer, a different type of visual sensor, or the like.
- the sensor may be a camera, and the second sensor may be a barcode reader.
- the identifier of the object may be a barcode identifying objects of a predetermined type, that can be extracted from the barcode reader when the barcode reader is utilized to read the barcode on the object.
- the visual input and the second sensor input are obtained within a timeframe of a predetermined maximal duration, such as within 1 minute, within 30 seconds, within 10 seconds, or the like.
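- As a non-limiting illustration, associating a second sensor reading (such as a barcode read) with the visual input may amount to a timestamp-window match, as in the following hypothetical sketch; the 10-second window and barcode values are assumed for the example only.

```python
from datetime import datetime, timedelta

MAX_GAP = timedelta(seconds=10)  # predetermined maximal duration (assumed value)

def associate(visual_ts, barcode_reads):
    """Return the barcode read closest in time to the visual input, provided the two
    inputs were obtained within the predetermined maximal duration; otherwise None."""
    candidates = [(abs(ts - visual_ts), code)
                  for ts, code in barcode_reads
                  if abs(ts - visual_ts) <= MAX_GAP]
    return min(candidates)[1] if candidates else None

reads = [(datetime(2022, 10, 5, 12, 0, 3), "7290000112345"),
         (datetime(2022, 10, 5, 12, 5, 0), "7290000167890")]
print(associate(datetime(2022, 10, 5, 12, 0, 0), reads))  # 7290000112345
```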
- the second sensor may be a sensor not located on the wearable device, such as another similar sensor located on another wearable device, an external sensor from a different device, or the like.
- the identifier of the object may be automatically extracted based on the second sensor input, thereby identifying the object in the visual input.
- additional visual inputs such as visual inputs obtained from the sensor located on the wearable device, may be utilized to determine that the subject is continuously interacting with the object using the hand of the subject during the entirety of the timeframe in which the first visual input and the second input are obtained.
- a first portion of the identifier may be determined based on the visual input, and a second portion of the identifier may be determined based on the second input.
- the identifier of the object may be identified based on a combination of the first and second portions of the identifier.
- Step 240 may be implemented as Step 240b of Figure 2B.
- a set of potential local identifiers may be obtained.
- the set of potential local identifiers may comprise all identifiers that can be potentially observed by the wearable device when the visual input is obtained by the sensor.
- the set of potential local identifiers may be a subset of all identifiers of objects that are observable by the wearable device. As an example, based on identification that the object is located in a certain location (such as in a dairy zone), a set of potential identifiers may be determined.
- the certain location may be identified based on location data, temperature data or other sensory data obtained from sensors of the wearable device or sensors of other devices, based on identifying a location indicative identifier within the first or second visual input, or the like.
- the set of potential identifiers may be selected from a database, an items catalog, from the store, or the like.
- the set of potential identifiers may comprise all identifiers of all objects that can be located within the identified certain location (e.g., all dairy items from the items catalog).
- the set of potential local identifiers may be obtained from a second visual input obtained from the sensor located on the wearable device, such as from a previous frame capturing a wider vision of the shelf on which the object is located thereon.
- the second sensor may be configured to simultaneously monitor multiple objects, each of which is associated with a different identifier, to obtain the set of potential local identifiers.
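- As a non-limiting illustration of the location-based selection described above, the set of potential local identifiers could be derived from an items catalog filtered by the zone in which the hand is located; the catalog rows below are hypothetical.

```python
# Hypothetical catalog rows: (identifier, product name, store zone)
CATALOG = [
    ("1105A8", "whole milk 1L", "dairy"),
    ("1105B2", "low-fat milk 1L", "dairy"),
    ("7733C1", "grape juice 1L", "beverages"),
]

def potential_local_identifiers(zone):
    """All identifiers that could plausibly be observed by the wearable device when
    the sensory data places the hand in the given zone (e.g., the dairy zone)."""
    return {identifier for identifier, _, item_zone in CATALOG if item_zone == zone}

print(potential_local_identifiers("dairy"))  # {'1105A8', '1105B2'}
```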
- On Step 244b, all local identifiers except for the identifier may be ruled out from the set of potential local identifiers.
- the ruling out may be performed based on the second sensor input.
- the second input may comprise a portion of the identifier, such as a sub-string of the identifier. Identifiers that do not comprise the identified sub-string may be ruled out from the set of potential identifiers.
- a second non-unique identifier of the object may be determined based on the second visual input, such as an identifier indicative of a manufacturer of the object, a color indicative of a sub-type of the object (e.g., black for grape juice and yellow for lemonade, a blue pattern for whole milk and a red pattern for low-fat milk, or the like).
- the second sensor input may be a non-visual sensor input, such as motion sensor data, acceleration data, or the like.
- a movement vector of the hand of the subject may be determined based on the second sensor data.
- the local identifier of the object may be selected based on the movement vector being directed towards an item associated with the local identifier.
- the ruling out may be performed based on a hand movement vector of the subject not being directed to objects associated with the local identifiers.
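- As a non-limiting illustration, the ruling-out described above could combine a sub-string filter with a movement-vector filter, as in the following hypothetical sketch; the candidate identifiers, item positions, and the 0.7 cosine threshold are assumed for the example only.

```python
import math

def rule_out(candidates, observed_substring=None, hand_vector=None, hand_position=(0.0, 0.0)):
    """Start from all potential local identifiers (mapped to the shelf position of the
    associated item) and rule out those inconsistent with the partial evidence:
    identifiers that do not contain the observed sub-string, and identifiers whose
    items do not lie in the direction of the hand movement vector."""
    remaining = set(candidates)
    if observed_substring is not None:
        remaining = {c for c in remaining if observed_substring in c}
    if hand_vector is not None:
        def cosine(a, b):
            dot = a[0] * b[0] + a[1] * b[1]
            return dot / ((math.hypot(*a) * math.hypot(*b)) or 1.0)
        remaining = {
            c for c in remaining
            if cosine(hand_vector, (candidates[c][0] - hand_position[0],
                                    candidates[c][1] - hand_position[1])) > 0.7
        }
    return remaining

items = {"1105A8": (1.0, 0.1), "1105B2": (-1.0, 0.3)}  # identifier -> item position
print(rule_out(items, observed_substring="5A8"))   # {'1105A8'}
print(rule_out(items, hand_vector=(1.0, 0.0)))     # {'1105A8'}
```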
- Step 240 may be implemented as Step 240c of Figure 2C.
- the second sensor input may be a second visual input obtained from the sensor of the wearable device.
- On Step 246c, an automatic identification that the user utilized a second device may be performed based on the second visual input.
- the identifier may be determined based on input that is obtained from the second device within a timeframe from a time during which the visual input is obtained.
- an automatic determination that the subject is performing a hand-based action on the object may be performed.
- the hand-based action may be automatically identified based on the visual input, while identification of the object may be performed based on the second input.
- the hand-based action may be picking up an object for sale, holding an object, returning an object to its initial location, putting the item in the real-time shopping cart, removing the object from the real-time shopping cart, changing the real-time shopping cart, or the like. Additionally or alternatively, the action may be related to the fulfillment of the shopping order performed by the picker, such as picking up an object, holding an object, returning an object to its initial location, placing the object in a shopping cart or a tote associated with the shopping order of the customer, removing the object from the shopping cart, or the like.
- a determination that the subject is holding the object in the first visual input may be performed, based on the identification of the object in the foreground of the first visual input.
- the identification of the object itself may be performed based on extracting the identifier of the object from the second input, such as from the background of the second visual input.
- a responsive action may be performed based on the automatic determination that the subject performed the hand-based action.
- the responsive action may comprise recording a log event in a digital record.
- the log event may indicate that the hand-based action was performed at a timestamp by the subject with respect to the object.
- the responsive action may comprise updating a virtual cart of the subject to include the object picked up by the subject as a purchased item.
- the responsive action may further comprise automatically calculating an updated check to include the price of the object.
- the responsive action may comprise identifying a corresponding item to the object in the list of items, and marking the corresponding item as fulfilled, or identifying a mismatch between the object and the list of items, and accordingly alerting the picker of the mismatch.
- the responsive action may comprise updating a digital mapping of objects to locations, such as by updating a location of a digital representation of the object from a first location to a second location.
- the user may be a worker tasked with arranging a store or a warehouse, managing stock, or the like.
- the responsive action may be updating a digital representation of the respective store or warehouse, to indicate an updated location of each object.
- the responsive action may be a fulfillment-related action related to the picking of the custom order.
- the object may be determined to be associated with the order of the customer.
- the fulfillment-related action may be performed (e.g., updating a virtual shopping cart, updating a check, or the like).
- the responsive action may comprise updating the virtual cart of the subject to exclude the object.
- the responsive action may be an alert action.
- the responsive action may comprise issuing an alert to the picker indicating the mismatch.
- the responsive action may comprise emitting an auditory cue indicating the removal of the object from the virtual cart, updating the check to exclude the price of the object, suggesting alternative items to the subject, or the like.
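- As a non-limiting illustration, the log-event flavor of the responsive actions described above could be implemented as a simple append to a digital record, as in the following hypothetical sketch; the subject and object identifiers are assumed for the example only.

```python
from datetime import datetime, timezone

def record_log_event(digital_record, subject_id, action, object_id):
    """Append a log event indicating that a hand-based action was performed at a
    timestamp by the subject with respect to the identified object."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject": subject_id,
        "action": action,      # e.g., "pick", "release"
        "object": object_id,   # the extracted identifier
    }
    digital_record.append(event)
    return event

record = []
record_log_event(record, "picker-17", "pick", "1105A8")
print(record[-1]["action"], record[-1]["object"])  # pick 1105A8
```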
- FIG. 3A showing schematic illustrations of visual inputs provided by a hand action monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- a wearable device such as 120 depicted in Figure 1, may be worn on a Hand 310a of a subject.
- the wearable device may comprise one or more sensors configured to capture at least an interior portion of Hand 310a, such as Sensor 110 depicted in Figure 1.
- the one or more sensors may be configured to be placed in a location and orientation in the wearable device enabling monitoring of Hand 310a.
- the one or more sensors may be configured to act as a SnorriCam, a POV camera, or the like, while Hand 310a appears in a fixed position in the center of the frame to enable monitoring activity thereof.
- the wearable device may be configured to be worn on a wrist of Hand 310a, whereby positioning the one or more sensors to face a palm of Hand 310a, or at least a distal portion thereof, such that enables to capture at least a portion of an object when the object is being held by Hand 310a, or when being grasped by fingers of Hand 310a, or the like.
- each object may be identified by an identifier, such as a serial number, a barcode, a QR code, an RFID code, any combination of numbers, letters or symbols, a combination of images, colors, or shapes, or the like.
- the identifier may be located on the object, printed thereon, attached to the object, stamped on the object, located on the surface of the object, embedded in the object, extractable from the image of the object, or the like.
- Object 320a may be identified using a barcode, using serial number 1105A8, or the like.
- a first visual input, such as Image 302a, may be obtained from the visual sensor and analyzed to determine an object and an action performed thereon by Hand 310a.
- it may be identified that Hand 310a is holding an Object 320a, performing an action thereon (such as placing Object 320a within a Basket 380a), collecting Object 320a, picking Object 320a, or the like.
- it may not be possible to identify Object 320a based on Image 302a alone.
- the identifier of Object 320a may not be visible in Image 302a, may be obfuscated by Hand 310a, or the like.
- even when the identifier of Object 320a appears in Image 302a, it may not always be extractable therefrom, as it may not be directly extracted, may not fully appear, may not be readable to humans, may be vague, or the like.
- additional visual input associated with Image 302a, such as Images 301a and 303a, may be utilized to automatically extract the identifier of Object 320a and identify Object 320a upon which Hand 310a is performing the action in Image 302a.
- the additional visual input may be obtained from the one or more sensors (herein also referred to as the sensors) utilized to obtain Image 302a.
- the sensors may be configured to provide visual input that at least a portion thereof comprises a view of the interior part of Hand 310a, such as Images 301a, 302a and 303a.
- Image 301a captures a view of Hand 310a in front of a Shelf 350a within the store, with a portion of the objects on the shelf, such as Object 320a.
- In Image 301a, Hand 310a is free from any object while approaching items on Shelf 350a.
- Image 302a captures a view of Hand 310a placing Object 320a in a Basket 380a.
- Image 303a captures Hand 310a along with a portion of Basket 380a, while placing another Object 325a within Basket 380a, with Object 320a already placed in Basket 380a, along with other objects picked by the user.
- Images 301a-303a may be a series of subsequent visual inputs obtained from the one or more sensors one after another.
- the series of subsequent visual inputs may comprise a first visual input (e.g., Image 301a), a second visual input (e.g., Image 302a), and a third visual input (e.g., Image 303a).
- Image 301a may precede Image 302a and Image 302a may precede Image 303a.
- the timeframe between each image and its successive image may be constant, or predetermined, such as about no more than 1 second, no more than 400 milliseconds, no more than 200 milliseconds, no more than 100 milliseconds, no more than 50 milliseconds, or the like.
- each successive image may be configured to capture the next frame or series of frames, such as no more than 30 frames, no more than 10 frames, no more than 5 frames, no more than a single frame, or the like.
- the timeframe between the analyzed image and the successive image being used for the object identification (e.g., Image 302a and Image 303a) may be different than the timeframe between the analyzed image and the previous image being used for the object identification (e.g., Image 302a and Image 301a).
- Image 302a may be an immediate subsequent frame of Image 301a, such as about 2 seconds apart, 5 seconds apart, or the like.
- Image 303a may be obtained several frames after Image 302a, such as about 30 seconds apart, 1 minute apart, or the like.
- different images may capture different views associated with Hand 310a. However, at least a portion of the image (such as about 5%, about 10%, about 50%, or the like) may comprise a view of the portion of Hand 310a. Such portion may vary from one image to another, based on the angle of Hand 310a, the position thereof, the action being performed thereby, or the like. As an example, Image 301a captures a smaller portion of Hand 310a compared to Image 302a. It may be further noted that different images may capture different views of objects, such as from different angles, in different positions, different sizes, with different relative positions or locations to Hand 310a, or the like. As an example, Object 320a may appear in different positions, views, and locations of Images 3Ola-3O3a.
- the identifier may be located nearby the object, in a location, or on another object associated with the object, such as on a shelf the object is located on, on a similar object, on a holder of the object, on a container of the object, or the like.
- Identifier 355a may be a non-unique identifier utilized to identify all (potentially similar) objects located on Shelf 350a, including Object 320a. Identifier 355a may be stamped or attached to Shelf 350a.
- an initial identification of an object may be performed based on visual identifiers on the object, such as a name, a barcode, or the like.
- an action performed thereon may be verified.
- the system may verify that the human subject touches or picks the same object identified by the identifier.
- other information or properties related to the object may be stamped or printed on Object 320a, such as Expiry Date 321a, Nutritional Marking 322a, or the like. Such information may be utilized as additional identifiers, for validating the extracted identifier, for validating that Object 320a is the same identified object, or the like.
- the hand-based action performed by Hand 310a may be identified based on the series of the subsequent visual inputs, or a portion thereof, such as based on one or more images.
- the wearable device may be configured to recognize when Hand 310a is getting close to an object (such as Object 320a), picking, holding (e.g., the object remains being held by Hand 310a), moving it (e.g., background picture changed), releasing an object, or the like.
- the hand-based action may be identified based on a first subset of visual inputs (e.g., Image 302a), while identification of the object may be performed based on a second subset of the visual input (such as Image 301a, Image 303a, a combination thereof, or the like).
- a positioning reading of the wearable device indicative of the location thereof, may be obtained, such as using a location sensor thereon, a location system of a device associated therewith, or the like.
- a subset of a catalog of items of the store may be determined based on the location, such as based on an input from the store, or the like.
- the subset of the catalog may comprise items located on Shelf 350a, items located in the fridge comprising Shelf 350a, dairy items, or the like.
- a local identifier may be determined based on the subset of the catalog of items.
- a digital mapping of objects to locations may be updated automatically based on the automatic identification of the pick action and/or the release action.
- the digital mapping may comprise updating a location of a digital representation of Object 320a from a first location (e.g., Shelf 350a) to a second location (e.g., Basket 340 or Shelf 360).
- FIG. 3B showing schematic illustrations of visual inputs provided by a hand action monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- Images 304b-306b may be visual input capturing an environment in which Hand 310b performs an action on an object, such as part of an order fulfillment task, a shopping task, an inventory management task, an assembly task, or the like.
- at least Image 305b may be obtained from a sensor located on a wearable device worn on Hand 310b of the subject, similar to Images 301a-303a in Figure 3A.
- a first visual input, Image 305b, may be obtained.
- An Object 330b may be determined to be an object of interest, e.g., by identifying that Hand 310b is performing or intending to perform an action thereon. The identifier of Object 330b may not be extractable from Image 305b.
- the identifier of Object 330b may be determined based on additional visual input, such as Image 304b and 306b.
- Image 304b may be obtained before Image 305b and Image 306b may be obtained after Image 305b.
- Images 304b- 306b may be obtained from the same sensor.
- Images 304b and 306b may be obtained from a second sensor different than the sensor utilized to obtain Image 305b, may be obtained from two different sensors that are different than the sensor utilized to obtain Image 305b, or the like. The second sensor may or may not be located on the wearable device.
- the second sensor may be a visual sensor located on another wearable device worn by the subject, on another computing device of the subject, or a sensor not associated with the user, such as a camera of the store, a sensor on Shelf 350a, or the like.
- Images 304b and 306b may be from a different type than Image 305b, such as being obtained from different types of visual sensors, capturing different features, of different formats, or the like.
- a First Portion 332b of the identifier may be identified based on Image 304b and a Second Portion 334b of the identifier may be identified based on Image 306b.
- the identifier of Object 330b may be determined based on a combination of First Portion 332b and Second Portion 334b, such as based on a concatenation thereof, based on a matching therebetween, or the like.
- First Portion 332b and Second Portion 334b may be substrings of the identifier, may comprise several visual elements of the identifier, or the like.
- First Portion 332b and Second Portion 334b may comprise overlapping portions of the identifier, distinct portions of the identifier, or the like.
- the identifier may be a 6-digit/character code, such as 1105A8.
- the first 3 digits may be obfuscated in Image 304b, and the last 3 digits 5A8 may be extractable from Image 304b; while the first 3 digits 110 may be extractable from Image 306b.
- the identifier may be determined based on First Portion 332b and Second Portion 334b in combination with a Third Portion 336b.
- the third portion may be determined based on non-visual sensory data.
- additional digits associated with a certain type of products may be extractable based on a geo-location data of Object 330b.
- Third Portion 336b may be determined based on features of Object 330b within Images 304b-306b.
- Object 330b may be a juice bottle.
- a color of liquid within Object 330b may be indicative of a type of the juice.
- Third Portion 336b may be extracted based on the type of the juice.
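- As a non-limiting illustration, combining First Portion 332b, Second Portion 334b and Third Portion 336b may amount to placing each observed fragment at its known offset within the identifier and checking consistency, as in the following hypothetical sketch; the offsets and the overlapping third fragment are assumed for the example only.

```python
def combine_portions(length, portions):
    """Merge partial identifier observations into a full identifier.

    Each portion is (offset, fragment) within an identifier of known length.
    Returns the identifier if the portions cover every position without
    conflicting observations; otherwise returns None."""
    slots = [None] * length
    for offset, fragment in portions:
        for i, ch in enumerate(fragment):
            if slots[offset + i] not in (None, ch):  # conflicting observations
                return None
            slots[offset + i] = ch
    return "".join(slots) if all(slots) else None

# "5A8" extracted from Image 304b, "110" from Image 306b, and an overlapping
# fragment derived from other sensory data.
print(combine_portions(6, [(3, "5A8"), (0, "110"), (2, "05A")]))  # 1105A8
```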
- FIG. 3C showing schematic illustrations of visual inputs provided by a hand action monitoring wearable device, in accordance with some exemplary embodiments of the disclosed subject matter.
- a first visual input, Image 308c, may be obtained.
- An Object 340c may be determined to be an object of interest, e.g., by identifying that Hand 310c is performing or intending to perform an action thereon. The identifier of Object 340c may not be extractable from Image 308c.
- the identifier of Object 340c may be determined based on additional visual input, such as Image 307c.
- Image 307c may be obtained before Image 308c.
- Image 307c may be obtained from a different sensor, such as from Barcode Scanner 170 of Wearable Device 120 in Figure 1.
- Image 307c and Image 308c may be obtained within a timeframe of a predetermined maximal duration, such as simultaneously, within 2 seconds, within 5 seconds, or the like.
- the identifier of Object 340c may be a Barcode 344c identifying objects of a predetermined type of Object 340c.
- Barcode 344c may be a non-unique item identifier of Object 340c, but a unique type of object identifier of Object 340c.
- an Element 342c may be identified in Image 307c.
- Element 342c may be utilized to determine that the object appearing in Image 307c is Object 340c captured by Image 308c. Additionally or alternatively, Element 342c may be utilized to determine that Object 340c is captured on Image 307c within the timeframe of a predetermined maximal duration. Additionally or alternatively, Element 342c may be utilized in combination with Barcode 344c to extract the unique identifier of Object 340c.
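- As a non-limiting illustration, attaching the barcode-derived type identifier to the held object could require both a shared visual element (such as Element 342c) and capture within the predetermined maximal duration, as in the following hypothetical sketch; the element labels, barcode value, and 5-second window are assumed for the example only.

```python
def attach_barcode(scan_frame_elements, held_frame_elements, barcode, gap_s, max_gap_s=5.0):
    """Attach a barcode-derived type identifier to the object held in the visual
    input only if the scanned frame and the held frame share a distinctive visual
    element and were captured within the predetermined maximal duration."""
    if gap_s <= max_gap_s and scan_frame_elements & held_frame_elements:
        return barcode
    return None

# A distinctive element (e.g., a red cap) appears in both frames, captured 2 s apart.
print(attach_barcode({"red-cap", "label-stripe"}, {"red-cap"}, "7290000112345", gap_s=2.0))
# 7290000112345
```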
- FIG. 4 showing a block diagram of a system, in accordance with some exemplary embodiments of the disclosed subject matter.
- a System 400 may be utilized to manage a self-service shopping of a User 405 in a store, online-shopping fulfillment for customers by User 405, mapping of items in a warehouse, or the like. Additionally or alternatively, similar applications of System 400 may be utilized for other facilities to accurately identify objects or to perform monitoring of hand actions of users on objects, such as in healthcare systems to monitor actions of healthcare staff members, in airplanes to monitor actions of pilots, in augmented reality video games to monitor actions of players, or the like.
- System 400 may comprise a plurality of Wearable Devices 410, each of which is worn on or held by a user, such as User 405.
- Each Wearable Device 410 may be configured to be worn by User 405 in a manner enabling monitoring of hand activity of User 405.
- Wearable Device 410 may be configured to be utilized to identify items grabbed by the hand of User 405 and moved to or from one or more physical shopping totes of User 405.
- Wearable Devices 410 may be worn on the hand of User 405, in a manner enabling capturing an interior portion thereof, such as on the wrist of User 405, on the fingers of User 405, on a hand palm of User 405, or the like.
- Wearable Device 410 may comprise a Visual Sensor 412.
- Visual Sensor 412 may be located in a location and orientation enabling monitoring of hand activity of User 405.
- Visual Sensor 412 may be configured to continuously capture an interior portion of the hand of User 405.
- Wearable Device 410 may be configured to provide visual input captured by Visual Sensor 412 to be utilized to identify activity performed by the hand of User 405, such as an action performed by the hand, an object upon which the action is performed, or the like.
- Visual Sensor 412 may comprise a single lens, one or more lenses, or the like.
- Visual Sensor 412 may be configured to capture pictures, videos, signals, a combination thereof, or the like.
- Visual Sensor 412 may be configured to capture different portions of objects in different frames.
- Wearable Device 410 may comprise one or more Sensors 414 different than Visual Sensor 412, such as visual sensors from other types, non-visual sensors, other sensors monitoring other properties of the environment or providing other types of sensory data, or the like. Additionally or alternatively, Wearable Device 410 may comprise one or more Barcode Readers 416 configured to read barcodes of objects to identify the objects.
- Wearable Device 410 may comprise a communication unit (not shown).
- the communication unit may be configured to connect Wearable Device 410 to a controller external thereto, such as to a mobile Device 420 of User 405, Store Unit 430, Server 440, or the like.
- Wearable Device 410 may be automatically activated when connected to Store Unit 430, such as based on connecting to a Wi-Fi network in the store associated with Store Unit 430, using an activation interface associated with Store Unit 430, based on the location readings of Wearable Device 410 being conformed with the location of Store Unit 430, or the like.
- Wearable Device 410 may be deactivated when leaving the store, such as based on disconnecting from Store Unit 430, based on Store Unit 430 identifying that User 405 left the store, or the like.
- internal storage may be utilized to retain images and data obtained from Visual Sensor 412, such as a predetermined number of latest frames captured by Visual Sensor 412, or the like. Additionally or alternatively, images and data obtained by Visual Sensor 412 may be transmitted and stored by Server 440 or storage associated therewith.
- Wearable Device 410 may comprise an Input/Output (I/O) module (not shown) configured to obtain input and provide output from Wearable Device 410 to other connected devices, such as providing visual input captured by Visual Sensor 412, readings of other sensors, or the like.
- Wearable Device 410 may be associated with an application of a computing Device 420 of User 405, such as a mobile app, or the like.
- the mobile app may be a standalone native app, a feature embedded in or hosted by third-party app(s), or the like.
- User 405 may receive data associated with the shopping session to Device 420, provide feedback, or the like. The data may be provided in real-time or post actions. In some exemplary embodiments, the data may be displayed on a screen of Device 420, using the designated application or the like.
- Device 420 may be utilized to display a Virtual Cart Display 422 for User 405, upon initiating a self-shopping session, or an online-shopping manual fulfillment process, indicating the items shopped thereby.
- Device 420 may be utilized to display a Shopping List 424 for User 405. Additionally or alternatively, Device 420 may be attached to or embedded with Wearable Device 410, such as in a smartwatch, a smart wristband with a touch screen, or the like.
- System 400 may comprise a Server 440.
- Server 440 may be configured to support the monitoring and identification of hand actions of users in the store such as User 405, to identify objects upon which the actions are performed, to perform respective responsive actions, to issue output to User 405, or Store Unit 430, or the like.
- Sensory Data Analysis Module 445 may be configured to analyze sensory data to assist Server 440 and its components in identifying objects and actions.
- the sensory data may comprise visual input obtained by visual sensors of monitoring devices, such as Visual Sensor 412 of Wearable Device 410, visual input from Store Unit 430, other types of sensory data obtained from other types of sensors, such as other sensors of Wearable Device 410, or the like.
- the visual input analyzed by Sensory Data Analysis Module 445 may comprise first and second visual inputs.
- the first and second visual inputs may be obtained from sensors of Wearable Device 410, such as from Visual Sensor 412.
- the first and second visual inputs may be connected, such as being obtained from the same visual sensor, capturing the same environment in subsequent times, capturing related environments, related to the same object, or the like.
- Server 440 may comprise an Object Identification Module 450.
- Object Identification Module 450 may be configured to identify an object in a given visual input.
- Object Identification Module 450 may be configured to identify any physical object, such as all physical objects in the visual input.
- Object Identification Module 450 may be configured to identify a certain physical object or a portion thereof, such as identifying a predetermined object being analyzed, identifying objects identified in previous or other visual input, or the like.
- Object Identification Module 450 may be configured to identify the object based on visual characteristics, such as identifying a predetermined shape or color, identifying a barcode or other unique identifier of the object, or the like.
- Server 440 may comprise an Action Identification Module 460 that is configured to identify actions performed by the hand of User 405, or actions refrained from being done thereon.
- Action Identification Module 460 may be configured to identify actions associated with modifying the content of a physical shopping tote of User 405.
- Action Identification Module 460 may be configured to recognize when the hand of User 405 is getting close to an item, picking an item, holding an item (e.g., while the object stays constant in the hand), moving an item (e.g., background picture changed), releasing an item, or the like.
- Object Identification Module 450 may be configured to identify an object or portions thereof in the obtained visual input.
- In response to Action Identification Module 460 determining that User 405 is performing or about to perform an action on the object, exact identification of the object may be performed, such as by determining an identifier of the object by Identifier Extraction Module 455.
- Action Identification Module 460 may be configured to identify that User 405 is picking up or about to pick up an object being identified by Object Identification Module 450.
- Object Identification Module 450 may be configured to identify an object based on an identifier thereof. Object Identification Module 450 may be configured to utilize an Identifier Extraction Module 455 for extracting the identifier of the object. In some cases, an identifier of the object may or may not be extractable from the visual input, may or may not directly be extractable based on the captured portions of the object, or the like.
- Identifier Extraction Module 455 may be configured to automatically extract the identifier of an object in a first visual input based on a second sensor input associated with the first visual input, such as a second visual input obtained from Visual Sensor 412 before or after the visual input, a second visual obtained from other visual sensors of Wearable Device 410 or other devices, a second sensory data obtained from other types of sensors on Device 410 or from other devices such as Device 420, or the like.
- when the second visual input is obtained prior to obtaining the first visual input, the object may be identified by Object Identification Module 450 before the hand-based action is completed by User 405. Additionally or alternatively, when the second visual input is obtained after the first visual input is obtained, the object may be identified by Object Identification Module 450 only after the hand-based action is completed, and not during the performance of the hand-based action.
- Action Identification Module 460 may be configured to automatically determine that the subject (e.g., User 405) is performing a hand-based action on an object being identified, based on the visual input obtained from Visual Sensor 412.
- Action Identification Module 460 may be configured to analyze the first and the second input in order to identify an action performed by the hands of User 405 and an object upon which the action is performed, and Object Identification Module 450 may be configured to analyze the first and the second input in order to identify items or objects within the visual input that the action is being performed on.
- Action Identification Module 460 may be configured to determine the action based on one of the first and the second inputs, while Object Identification Module 450 may be configured to identify the object based on the other input.
- Object Identification Module 450 may be configured to determine that the action is being performed with respect to the object based on the first visual input, but to accurately identify the object (e.g., extracting the identifier of the object) based on the second visual input.
- the first and second visual inputs may be obtained in a timeframe. During this timeframe, the object may be continuously observable by Visual Sensor 412, at least in part. Accordingly, identification in the second visual input can be attributed to the object appearing in the first visual input.
- Identifier Extraction Module 455 may be configured to determine that User 405 is continuously interacting with the object using the hand of the subject during the entirety of the timeframe, based on visual inputs obtained from Visual Sensor 412 throughout the timeframe.
- Identifier Extraction Module 455 may be configured to identify a first portion of the identifier based on the first visual input, and a second portion of the identifier based on the second visual input. Identifier Extraction Module 455 may be configured to determine the identifier of the object based on a combination of the first and second portions of the identifier.
- Identifier Extraction Module 455 may be configured to obtain a set of potential local identifiers that comprise all identifiers that can be potentially observed by Wearable Device 410 when the visual input is obtained by Visual Sensor 412.
- the set of potential local identifiers may be a subset of all identifiers of objects that are observable by Wearable Device 410.
- the set of potential local identifiers may be obtained from a Catalog Database 480, from Store Unit 430, or the like. Additionally or alternatively, the set of potential local identifiers may be determined by Server 440, such as based on observed properties of the object, based on the observed environment, or the like.
- Identifier Extraction Module 455 may be configured to rule out, based on the second sensor input, from the set of potential local identifiers all local identifiers except for the identifier.
- Action Identification Module 460 may be configured to automatically identify the action performed by User 405 on the object based on the visual input obtained from Visual Sensor 412, while Identifier Extraction Module 455 may be configured to identify the object based on a barcode obtained from Barcode Reader 416.
- the barcode may be utilized to identify objects of a predetermined type. It may be noted that the barcode may be a non-unique item identifier (e.g., a unique type of object identifier) and may be utilized to identify a plurality of objects of a same type as the object.
- the visual input utilized by Action Identification Module 460 to determine the action and the barcode utilized by Identifier Extraction Module 455 to identify the object may be obtained within a timeframe of a predetermined maximal duration, such as within 1 minute, within 30 seconds, within 10 seconds, within 5 seconds, or the like.
- a Control Module 470 of Server 440 may be configured to determine a responsive action based on automatically determining that the subject (e.g., User 405) performed the hand-based action on the object.
- Control Module 470 may be configured to determine the responsive action based on the action determined by Action Identification Module 460 and/or the object determined by Object Identification Module 450, and/or the identifier extracted by Identifier Extraction Module 455, or other components of Server 440.
- the responsive action may be executed by Control Module 470. Additionally or alternatively, Control Module 470 may be configured to instruct another entity or component to execute the responsive action.
- the responsive action may comprise recording in a digital record a log event, the log event indicating that the hand-based action was performed at a timestamp by the subject with respect to the object.
- Control Module 470 may be configured to update one or more Virtual Carts 422 in response to the identification of the action by the user, such as adding an item, removing an item, or the like.
- Virtual Cart 422 may indicate a list of items shopped by User 405 for a certain customer or according to a certain Customer Items List 424, or the like.
- Virtual Cart 422 may be automatically updated based on items moved to and from the physical shopping tote of User 405, such as by adding items to Virtual Cart 422 based on items picked up and put into the physical shopping tote of the certain customer order being picked by User 405 and removing items from Virtual Cart 422 based on items removed from the physical shopping tote of the certain customer order being picked by User 405.
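- As a non-limiting illustration, Virtual Cart 422 and the corresponding check could be maintained as in the following hypothetical sketch; the price list and identifiers are assumed for the example only.

```python
from collections import Counter

PRICES = {"1105A8": 5.90, "7733C1": 9.50}  # assumed price list

class VirtualCart:
    """Minimal sketch of a virtual cart: mirrors items moved to and from the
    physical shopping tote and keeps the check up to date."""
    def __init__(self):
        self.items = Counter()

    def add(self, object_id):       # item placed in the tote
        self.items[object_id] += 1

    def remove(self, object_id):    # item taken out of the tote
        if self.items[object_id]:
            self.items[object_id] -= 1

    def check(self):
        return round(sum(PRICES[i] * n for i, n in self.items.items()), 2)

cart = VirtualCart()
cart.add("1105A8")
cart.add("7733C1")
cart.remove("7733C1")
print(cart.check())  # 5.9
```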
- Control Module 470 may be configured to issue an output to User 405.
- the output may be issued to Device 420 of User 405, such as by displaying the content of Virtual Cart 422 to User 405 using Device 420, by issuing an audio alert using a speaker on Wearable Device 410 or Device 420, by using an LED light bulb on Wearable Device 410 or Device 420, or by any other visual output, to provide an indication of an addition of an item to, or a removal of an item from, Virtual Cart 422.
- Control Module 470 may be configured to issue an output to the customer whose order is being picked by User 405, such as providing updated prices, suggesting alternative items, or the like.
- the responsive action may comprise updating a digital mapping of objects to locations.
- Store Unit 430 may comprise a mapping module that may be configured to update the digital mapping of objects to locations based on the automatic identification of the pick action and release action by Action Identification Module 460, in accordance with the exact identification of the identifier of the object.
- the mapping module may be configured to update a location of a digital representation of the object from a first location to a second location identified based on the identified action.
- Control Module 470 may be configured to determine a mapping of geo-spatial locations of items in the store. Control Module 470 may be configured to identify a placement location of each object moved by User 405 or any other user, such as a worker in the store, and update the mapping to indicate the location of the object based on the placement location.
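- As a non-limiting illustration, updating the digital mapping of objects to locations upon an identified pick/release pair could resemble the following hypothetical sketch; the location labels are assumed for the example only.

```python
def update_object_location(mapping, object_id, pick_location, release_location):
    """Move the digital representation of an object from the location where the pick
    action was identified to the location where the release action was identified."""
    if mapping.get(object_id) == pick_location:
        mapping[object_id] = release_location
    return mapping

store_map = {"1105A8": "shelf-350a"}
print(update_object_location(store_map, "1105A8", "shelf-350a", "basket-380a"))
# {'1105A8': 'basket-380a'}
```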
- Wearable Device 410 may be configured to be utilized for manual fulfillment of a shopping order of a customer by User 405.
- User 405 may be a picker tasked with picking items to fulfill the shopping order of the customer.
- Control Module 470 may be configured to identify that the object identified by Object Identification Module 450 is associated with the order of the customer.
- the shopping order may comprise a List 424 of items selected by the customer and transmitted to Device 420 of User 405.
- Action Identification Module 460 may be configured to identify a picking action captured by Wearable Device 410, such as picking up an object and placing the object in a tote associated with the shopping order of the customer.
- Control Module 470 may be configured to identify the item in the shopping order (e.g., in List 424) that corresponds to the object and mark that item as fulfilled. In response to a determination that the shopping order, or a portion thereof, is fulfilled, Control Module 470 may be configured to perform a responsive action, such as invoking a payment module (not shown) to enable a transaction from the customer to Store Unit 430 based on the fulfilled shopping order or the fulfilled portion thereof. In response to Action Identification Module 460 identifying that User 405 (the picker) picked up the object and placed the object in a tote associated with the order of the customer, Control Module 470 may be configured to determine that a fulfillment-related action was performed.
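The fulfillment bookkeeping could look roughly like the following sketch, assuming the customer items list reduces to a set of identifiers and a callback stands in for the payment module:

```python
from typing import Callable, Set

class OrderFulfillmentSketch:
    """Illustrative tracking of a shopping order picked on behalf of a customer."""

    def __init__(self, items_list: Set[str], on_fulfilled: Callable[[], None]) -> None:
        self._pending = set(items_list)      # stand-in for the customer items list
        self._on_fulfilled = on_fulfilled    # stand-in for invoking a payment module

    def mark_picked(self, object_identifier: str) -> None:
        """Mark the corresponding item as fulfilled when the picker places it in the tote."""
        self._pending.discard(object_identifier)
        if not self._pending:
            self._on_fulfilled()

# Usage example with hypothetical identifiers:
order = OrderFulfillmentSketch({"milk-1l", "juice-orange"},
                               on_fulfilled=lambda: print("order fulfilled; enable transaction"))
order.mark_picked("milk-1l")
order.mark_picked("juice-orange")  # triggers the fulfillment callback
```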
- Control Module 470 may be configured to identify a mismatch between the object and Customer Items List 424 associated with the customer. In response to Action Identification Module 460 identifying that User 405 (the picker) picked up the object and placed the object in a tote associated with the order of the customer, Control Module 470 may be configured to issue an alert to the picker indicating the mismatch.
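The mismatch check may reduce to a membership test against the customer items list, with an alert issued to the picker when the placed object does not belong to the order; the sketch below is an illustrative assumption, not the disclosed logic.

```python
from typing import Iterable

def check_for_mismatch(object_identifier: str, customer_items: Iterable[str]) -> bool:
    """Return True (and emit an illustrative alert) if the picked object is not
    part of the customer's items list."""
    if object_identifier not in set(customer_items):
        print(f"ALERT: {object_identifier} is not in the customer's order")  # stand-in for a picker alert
        return True
    return False

# Usage example with hypothetical identifiers:
check_for_mismatch("soda-cola", {"milk-1l", "juice-orange"})  # mismatch -> alert
check_for_mismatch("milk-1l", {"milk-1l", "juice-orange"})    # no mismatch
```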
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer-readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer-readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer-readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer-readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer-readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or another device to cause a series of operational steps to be performed on the computer, other programmable apparatus or another device to produce a computer-implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
Method, apparatus and computer program product for identifying items captured using user-wearable devices. The method comprises obtaining a visual input from a sensor located on a user-wearable device that is worn on a hand of the subject. The sensor is configured to be placed in a location and orientation enabling monitoring of the activity of the subject's hand. The visual input comprises an object configured to be identified by an identifier that is not extractable from the visual input. The identifier is automatically extracted based on a second sensor input associated with the visual input in order to identify the object in the visual input, and it is automatically determined, based on the visual input, that the subject performs a hand-based action on the object. Based on said automatic determination that the subject performed the hand-based action, a responsive action may be performed.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163252718P | 2021-10-06 | 2021-10-06 | |
US63/252,718 | 2021-10-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023058025A1 true WO2023058025A1 (fr) | 2023-04-13 |
Family
ID=85803993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2022/051064 WO2023058025A1 (fr) | 2021-10-06 | 2022-10-06 | Identification d'objets à l'aide de dispositifs pouvant être portés par l'utilisateur |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023058025A1 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2341461A1 (fr) * | 2009-12-30 | 2011-07-06 | Psion Teklogix Inc. | Dispositif de calcul portable multi-capteur adaptatif |
US10339595B2 (en) * | 2016-05-09 | 2019-07-02 | Grabango Co. | System and method for computer vision driven applications within an environment |
US10380914B2 (en) * | 2014-12-11 | 2019-08-13 | Toyota Motor Engineering & Manufacturnig North America, Inc. | Imaging gloves including wrist cameras and finger cameras |
US20210027360A1 (en) * | 2019-07-22 | 2021-01-28 | Pickey Solutions Ltd. | Hand actions monitoring device |
- 2022-10-06 WO PCT/IL2022/051064 patent/WO2023058025A1/fr active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2341461A1 (fr) * | 2009-12-30 | 2011-07-06 | Psion Teklogix Inc. | Dispositif de calcul portable multi-capteur adaptatif |
US10380914B2 (en) * | 2014-12-11 | 2019-08-13 | Toyota Motor Engineering & Manufacturnig North America, Inc. | Imaging gloves including wrist cameras and finger cameras |
US10339595B2 (en) * | 2016-05-09 | 2019-07-02 | Grabango Co. | System and method for computer vision driven applications within an environment |
US20210027360A1 (en) * | 2019-07-22 | 2021-01-28 | Pickey Solutions Ltd. | Hand actions monitoring device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10977717B2 (en) | Hand actions monitoring device | |
JP7229174B2 (ja) | 人識別システム及び方法 | |
US10402634B2 (en) | Information processing device, information processing method, and computer program product | |
US20140307076A1 (en) | Systems and methods for monitoring personal protection equipment and promoting worker safety | |
US20180357886A1 (en) | System, devices and methods for health care worker training, monitoring and providing real time corrective guidance for procedures and practice related to hospital infection control | |
US20190244161A1 (en) | Inventory control | |
US20160203499A1 (en) | Customer behavior analysis system, customer behavior analysis method, non-transitory computer readable medium, and shelf system | |
US10304104B2 (en) | Information processing apparatus, information processing method, and non-transitory computer readable medium | |
JP7318321B2 (ja) | 情報処理装置、情報処理方法、人物検索システムおよび人物検索方法 | |
US20230214758A1 (en) | Automatic barcode based personal safety compliance system | |
SE2050058A1 (en) | Customer behavioural system | |
CN111259755A (zh) | 数据关联方法、装置、设备及存储介质 | |
JP2019139321A (ja) | 顧客行動分析システムおよび顧客行動分析方法 | |
WO2019077559A1 (fr) | Système de suivi de produits et d'utilisateurs dans un magasin | |
EP3474184A1 (fr) | Dispositif pour détecter l'interaction des utilisateurs avec des produits disposés sur un socle ou un présentoir d'un magasin | |
WO2023058025A1 (fr) | Identification d'objets à l'aide de dispositifs pouvant être portés par l'utilisateur | |
JP7362102B2 (ja) | 情報処理装置及び情報処理プログラム | |
JP2018195284A (ja) | 複数の作業台を管理するためのプログラム、方法、装置及びシステム | |
CN109325800A (zh) | 一种基于计算机视觉的超市智能货架的工作方法 | |
WO2023026277A1 (fr) | Surveillance d'actions des mains sur la base du contexte | |
Noor et al. | Context-aware perception for cyber-physical systems | |
US10910096B1 (en) | Augmented reality computing system for displaying patient data | |
Nadeem et al. | Ensuring safety of pilgrims using spatiotemporal data modeling and application for efficient reporting and tracking of missing persons in a large crowd gathering scenario | |
US20240320701A1 (en) | Wearable device, information processing method, non-transitory computer readable recording medium storing information processing program, and information providing system | |
TWI749314B (zh) | 影像追蹤系統 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22878088 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22878088 Country of ref document: EP Kind code of ref document: A1 |