WO2017071733A1 - Augmented reality stand for items to be picked up - Google Patents
Augmented reality stand for items to be picked up
- Publication number
- WO2017071733A1 (PCT/EP2015/074776)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- shelving
- hand
- stand according
- control unit
- multimedia content
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
Definitions
- The present invention relates to a stand to be used in a selling area, such as a store, a supermarket, a shop or the like, where a consumer picks up one or more items for sale from the stand.
- Items are normally labelled with information prescribed by national or regional laws, for example instructions for medicaments, ingredients for food, or the like.
- The need is therefore felt to provide the consumer with more detailed information about an item for sale, in order to assist the consumer in choosing the item that best fits his/her needs.
- The consumer may want additional information relating to the use of the product, such as examples, recipes or the like; after-use information, for example relating to recycling; or pre-use information, such as manufacturing technology, carbon dioxide equivalent of the production process, origin of the product, price, nutritional values, ingredients or the like.
- The object of the present invention is achieved by a stand according to claim 1.
- Figure 1a is a left view of a stand assembly according to a first embodiment of the present invention;
- Figure 1b is a right view of a stand assembly according to a second embodiment of the present invention;
- Figure 2 is a front view of figures 1a, 1b;
- Figure 3 is a schematic view of a contactless position detecting device provided in the stand assembly of figures 1a, 1b.
DETAILED DESCRIPTION OF THE DRAWINGS
- In the figures, numeral 1 refers to a stand assembly in a supermarket where items 2 for sale are placed, waiting to be picked up by a consumer.
- Stand 1 comprises a shelving 3 where items 2 are placed; and a support, e.g. legs 4, to keep shelving 3 off ground G.
- Preferably shelving 3 does not have lateral walls, so that items 2 can also be viewed from a lateral orthogonal view (see figure 1).
- Stand assembly 1 also comprises a contactless position detector 5 placed in a top position with respect to shelving 3 and monitoring, according to a first embodiment (figure 1a), a picking-up space S so as to intercept a picker, for example a person's hand 7, in particular a consumer's hand, indicating a target item 2, e.g. by stopping hand 7 in a physical target position, e.g. on top of target item 2, before picking it up to put it e.g. in a shopping trolley.
- An optical axis O of position detector 5 intercepts shelving 3.
- Position detector 5 is also configured to detect a pointing configuration of hand 7.
- A pointing configuration detected by position detector 5 is, for example, when hand 7 points at a physical target position on shelving 3 with an extended index finger.
- According to the second embodiment (figure 1b), optical axis O is inclined with respect to ground G and does not intersect shelving 3.
- Picking-up space S (figure 1a) is where items 2 are placed and shall be accessed by hand 7 in order to pick up an item 2.
- Picking-up space S is laterally bounded, in a plan view, by the external perimeter of shelving 3.
- Vertically, picking-up space S is delimited by the highest edge of all items 2 on shelving 3 or, as an alternative, by the vertical position of detector 5. A minimal sketch of such a bounded space follows.
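For illustration only, a minimal sketch of how such a picking-up space might be represented and tested in software; the axis-aligned reference frame, the bounds and all names are assumptions, not taken from the patent:

```python
from dataclasses import dataclass

# Hypothetical representation of picking-up space S: lateral bounds follow the
# shelving perimeter in plan view; vertical bounds run from the shelf surface
# up to the highest item edge (or the vertical position of detector 5).
@dataclass
class PickingUpSpace:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float  # shelving surface
    z_max: float  # highest item edge, or vertical position of detector 5

    def contains(self, point):
        x, y, z = point
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

space = PickingUpSpace(0.0, 1.2, 0.0, 0.8, 0.9, 1.4)
print(space.contains((0.6, 0.3, 1.1)))  # True: a hand at this point is inside S
```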
- Contactless position detector 5 senses the access of hand 7 inside picking-up space S, recognizes hand 7 and produces a signal that is processed by a control unit 8, also to establish the position of hand 7 in a predefined tridimensional or bidimensional reference frame.
- The signal from position detector 5 is processed by control unit 8 and associated with a pre-stored 3D mapping of shelving 3, where each 3D point is associated with an item.
- The signal from position detector 5 is further processed by control unit 8 in order to identify the position of hand 7 and also a pointing direction, e.g. a direction parallel to the extended index finger of the consumer's hand 7.
- Control unit 8 processes the pre-stored 3D mapping data relating to the predefined positions of items 2 on shelving 3 in the same reference frame as that of the position of hand 7. In this way control unit 8 is able to match the physical target position indicated or pointed at by hand 7 with the predefined positions associated with items 2 placed on shelving 3, as in the sketch below.
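One possible reading of this matching step, sketched under stated assumptions: the pointing ray (hand position plus finger direction) is intersected with the shelving plane, and the hit point is matched to the nearest pre-stored item position. The geometry, the tolerance and the identifiers are illustrative, not taken from the patent.

```python
import numpy as np

def pointed_target(hand_pos, pointing_dir, plane_point, plane_normal):
    """Intersect the pointing ray with the shelving plane; None if parallel."""
    denom = np.dot(plane_normal, pointing_dir)
    if abs(denom) < 1e-9:
        return None                          # ray is parallel to the shelf plane
    t = np.dot(plane_normal, plane_point - hand_pos) / denom
    return hand_pos + t * pointing_dir if t > 0 else None

def match_item(target, item_positions, tolerance=0.10):
    """Nearest stored item position within tolerance (metres), else None."""
    best, best_dist = None, tolerance
    for item_id, pos in item_positions.items():
        dist = np.linalg.norm(target - pos)
        if dist < best_dist:
            best, best_dist = item_id, dist
    return best

hand = np.array([0.4, 0.9, 1.3])             # hand 7 in the common reference frame
finger = np.array([0.1, 0.2, -1.0])          # direction of the extended index finger
hit = pointed_target(hand, finger,
                     plane_point=np.array([0.0, 0.0, 0.9]),   # shelf plane z = 0.9
                     plane_normal=np.array([0.0, 0.0, 1.0]))
print(match_item(hit, {"item_2a": np.array([0.45, 0.98, 0.9])}))  # -> item_2a
```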
- Control unit 8 is configured to compare the position of hand 7, or the physical target position pointed at by hand 7, both sensed by contactless position detector 5, with the stored positions of items 2 within the 3D mapping of shelving 3, and to select from a database information or content associated with each physical target position. Such information is displayed on a display 10 that is preferably located on top of shelving 3, above detector 5. Association of information may be one-to-one, i.e. a respective content is associated with each predefined position of an item 2, or a single content may be associated with a whole sub-area A.
- Sub-area A may be a portion of shelving 3 or a shelf as a whole.
- Shelving 3 descends towards the consumer, so that monitoring and recognition of the pointing hand 7 and of the pointing direction by detector 5 and control unit 8 are more precise, without interferences or hidden areas.
- shelving 3 may comprise a single large horizontal platform or shelf where homogeneous groups of items 2 are placed in respective sub-areas A.
- Alternatively, stand assembly 1 is basket-like, for example a horizontal freezer having a horizontal access opening or door.
- In this case the picking-up space is laterally bounded by the basket-like structure, and contactless position detector 5 intercepts hand 7 when it accesses the picking-up space through the main opening of the basket-like stand.
- The position of hand 7 is also calculated by control unit 8 on the basis of data from contactless position detector 5, in order to select the information to be displayed on the basis of the position of hand 7 when accessing the picking-up space.
- Contactless position detector 5 comprises a first sensor unit S1 and a second sensor unit S2 different from first sensor unit S1.
- Sensor units S1, S2 are different in the sense that they detect the same physical parameter, e.g. an electromagnetic radiation, in different wavelength bands, e.g. visible light and infrared light.
- Alternatively, sensor units S1, S2 detect different physical parameters, e.g. sound or other air-pressure waves and electromagnetic radiation, respectively.
- Stand 1 comprises an emitter to emit an energy wave that propagates over the picking-up space S (figure 1a) or towards the consumer (figure 1b), and sensor units S1, S2 detect the reflected energy in order to recognize hand 7 and/or the pointing hand 7.
- Sensor units S1, S2 may detect an energy wave emitted by hand 7.
- The provision of two different sensor units S1, S2 increases the reliability of detection, because the data sets coming from the two different sensing domains, when matched, may be processed by known algorithms to enhance error detection or to implement recognition strategies for hand 7, for the position of hand 7 and/or for the direction pointed at by hand 7; a possible fusion rule is sketched below.
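One plausible fusion rule (an assumption for illustration, not specified in the text) is to accept a detection only when the two sensor units agree within a bound, and to average the agreeing estimates:

```python
import numpy as np

def fuse_detections(pos_depth, pos_color, max_disagreement=0.05):
    """Cross-validate hand positions (metres) from two sensing domains."""
    if pos_depth is None or pos_color is None:
        return None                                   # require both sensor units
    if np.linalg.norm(pos_depth - pos_color) > max_disagreement:
        return None                                   # mismatch: likely an error
    return (pos_depth + pos_color) / 2.0              # averaged, lower-noise estimate

print(fuse_detections(np.array([0.40, 0.90, 1.30]),
                      np.array([0.42, 0.91, 1.28])))  # agree -> fused position
```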
- contactless position detector 5 is on the same side of shelving 3 with respect to a consumer in a picking-up position.
- Stand 1 comprises a consumer-facing side 12 opposite to a back face that, for example, contacts the wall of a shop or, as shown in figure 1, another stand assembly.
- Consumer-facing front side 12 provides a furthermost front wall or portion or edge 13, i.e. the frontal projection that is horizontally closest to the consumer in a picking-up position (figure 1a).
- contactless position detector 5 is on the same side of shelving 3 and items 2 with respect to furthermost wall or portion or edge 13.
- Optical axis O of detector 5 is on the same side of shelving 3 with respect to furthermost wall or portion or edge 13.
- Alternatively, optical axis O is on the opposite side of shelving 3 with respect to furthermost wall or portion or edge 13 and is inclined towards ground G in order to detect the pointing hand 7.
- Sensor unit S1 is a 3D depth sensor and sensor unit S2 is a colour sensor.
- Sensor unit S2 captures 2D color images of the user, so that contactless position detector 5 registers and synchronizes the depth maps with the color images and generates a data stream that includes the depth maps and image data for output to control unit 8.
- The depth maps and color images are output to control unit 8 via a single port, for example a Universal Serial Bus (USB) port.
- Color images are useful to recognize, with the required level of precision, hand 7 and the pointing finger, i.e. the index finger, of hand 7.
- Control unit 8 processes the data generated by contactless position detector 5 in order to extract 3D image information. For example, control unit 8 may segment the depth map in order to recognize the parts of the body of the consumer, in particular hand 7, and find their 3D locations. Control unit 8 uses this information to select from a database a multimedia content to be displayed, on the basis of the position of hand 7 within the picking-up space S (figure 1a) or on the basis of a physical target position pointed at by hand 7 (figure 1b) on shelving 3. A deliberately simplified sketch of the segmentation step follows.
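The sketch below reduces the segmentation idea to a heuristic: with the detector overhead, the hand is often the surface nearest the sensor, so thresholding the depth map around its minimum isolates a hand blob. Real body-part segmentation is far richer; everything here is an illustrative assumption.

```python
import numpy as np

def locate_hand(depth_map, band=0.08):
    """Centroid (u, v) and mean depth of the blob nearest the sensor."""
    valid = depth_map[depth_map > 0]                  # zero means no depth reading
    if valid.size == 0:
        return None
    nearest = valid.min()
    mask = (depth_map > 0) & (depth_map < nearest + band)   # slab nearest the sensor
    rows, cols = np.nonzero(mask)
    return cols.mean(), rows.mean(), depth_map[mask].mean()

depth = np.full((240, 320), 1.50)                     # background: shelf at 1.5 m
depth[100:140, 150:190] = 0.90                        # synthetic hand blob at 0.9 m
print(locate_hand(depth))                             # -> (~169.5, ~119.5, 0.9)
```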
- control unit 8 comprises a general-purpose computer processor, which is programmed in software to carry out these functions.
- The software may be downloaded to the processor in electronic form, over a network, for example, or it may alternatively be provided on tangible media, such as optical, magnetic, or electronic memory media.
- Sensor unit S1 comprises an illumination subassembly 15 that illuminates the object, e.g. hand 7, with an appropriate pattern, such as a speckle pattern.
- The depth-related image data include an image of the pattern on the object, i.e. hand 7, and the processing circuitry is configured to generate the depth map by measuring shifts in the pattern relative to a reference image.
- subassembly 15 typically comprises a suitable radiation source 16, such as a diode laser, LED or other light source, and optics, such as a diffuser 17 or a diffractive optical element, for creating the pattern on the object, i.e. hand 7, items 2, shelving 3 and other objects within the working range of illumination subassembly 15.
- Sensor unit S1 also comprises a depth image capture subassembly 18 that captures an image of the pattern on the object surface.
- Subassembly 18 typically comprises objective optics 19, which image the object surface onto a detector 20, such as a CMOS image sensor.
- Radiation source 16 typically emits IR radiation, although other radiation bands, in the visible or ultraviolet range, for example, may also be used.
- Detector 20 may comprise a monochrome image sensor, without an IR-cutoff filter, in order to detect the image of the projected pattern with high sensitivity.
- optics 19 or the detector itself may comprise a bandpass filter, which passes the wavelength of radiation source 16 while blocking ambient radiation in other bands.
- Sensor unit S2 comprises a color image capture subassembly 25 that captures color images of the object.
- Subassembly 25 typically comprises objective optics 26, which image the object surface onto a detector 27, such as a CMOS color mosaic image sensor.
- Optics 26 or detector 27 may comprise a filter, such as an IR-cutoff filter, so that the pattern projected by illumination subassembly 15 does not appear in the color images captured by detector 27.
- A processing device 28 receives and processes image inputs from subassemblies 18 and 25. Details of these processing functions are presented for example in US8456517 and are implemented in Kinect (TM) devices by Microsoft (Registered trademark). Briefly put, processing device 28 compares the image provided by subassembly 18 to a reference image of the pattern projected by subassembly 15 onto a plane 30, at a known distance D1 from contactless position detector 5. The reference image may be captured as part of a calibration procedure and stored in a memory 31, such as a flash memory, for example. Processing device 28 matches the local patterns in the captured image to those in the reference image and thus finds the transverse shift for each pixel 32, or group of pixels, within plane 30.
- Based on these transverse shifts and on the known distance D2 between the optical axes of subassemblies 15 and 18, the processing device computes a depth (Z) coordinate for each pixel, e.g. via the standard triangulation relation sketched below.
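For reference, the standard structured-light triangulation behind this step (the numeric values below are assumptions for illustration): with D1 the reference-plane distance, D2 the projector–camera baseline and f the focal length in pixels, a transverse shift d satisfies d = f·D2·(1/Z − 1/D1), hence Z = f·D2·D1 / (d·D1 + f·D2).

```python
import numpy as np

def depth_from_shift(shift_px, f_px=580.0, d1=2.0, d2=0.075):
    """Depth Z (metres) from a transverse pattern shift in pixels."""
    return (f_px * d2 * d1) / (shift_px * d1 + f_px * d2)

shifts = np.array([0.0, 5.0, 15.0])   # per-pixel shifts relative to the reference
print(depth_from_shift(shifts))       # 2.0 m at zero shift, closer as shift grows
```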
- The known distance D3 between the optical axes of subassemblies 18 and 25 is used by the processing device to compute a shift between the color and depth images.
- Processing device 28 synchronizes and registers the depth coordinates in each such 3D map with appropriate pixels in the color images captured by subassembly 25.
- the registration typically involves a shift of the coordinates associated with each depth value in the 3D map.
- The shift includes a static component, based on the known distance D3 between the optical axes of subassemblies 18 and 25 and on any misalignment between the detectors, as well as a dynamic component that depends on the depth coordinates themselves; a sketch follows.
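A sketch of that per-pixel registration shift, with every parameter value assumed for illustration: the static part models the fixed detector misalignment, while the dynamic part is the parallax from the baseline D3, which shrinks as depth grows.

```python
def register_column(u_depth, z, f_px=580.0, d3=0.025, static_offset_px=2.0):
    """Column coordinate in the color image for a depth pixel at depth z (m)."""
    dynamic_px = f_px * d3 / z               # parallax component, depth-dependent
    return u_depth + static_offset_px + dynamic_px

for z in (0.8, 1.5, 3.0):                    # nearer pixels shift more
    print(z, round(register_column(160.0, z), 2))
```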
- An example of the registration process is also described in US8456517 and is implemented in Kinect (TM) devices by Microsoft (Registered trademark).
- After registering the depth maps and color images, processing device 28 outputs the depth and color data via a port, such as a USB port, to control unit 8.
- the output data may be compressed in order to conserve bandwidth.
- the consumer visually interacts with shelving 3 in order to select a target item 2.
- Display 10 is above shelving 3 in order to avoid any interference with the visual interaction of the consumer with shelving 3 during selection of the target item 2.
- the consumer shall move his/her eyes away from shelving 3 in order to watch multimedia contents on display 10.
- Display 10 further shows a visual feedback after control unit 8 has processed the position of hand 7 and has matched such position with the pre-loaded 3D mapping of shelving 3 in order to select the physical target position and, thus, the target item 2.
- Augmented reality provides the combination of multimedia contents and position detectors that interact with the consumer placed in a physical environment. Multimedia contents are therefore added to the physical environment, in the present case stand 1 with its items 2, and the consumer is not exposed to a complete virtual environment, as happens in virtual reality systems.
- Control unit 8 selects information from a pre-defined database where contents are stored, each associated with a respective tag corresponding either to a punctual position on shelving 3 or to a range of positions corresponding to a sub-area A.
- The corresponding content in the database appears on display 10, so that the user can read, in a larger font, information on the item 2 placed at the punctual position or in sub-area A on shelving 3; a sketch of such a lookup follows.
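A minimal sketch of a tag-keyed lookup of this kind; the tag scheme and the contents are hypothetical placeholders:

```python
# Contents keyed by tag: either a punctual shelf position or a whole sub-area A.
CONTENT_DB = {
    ("punctual", "row2_col3"): "Olive oil: origin, ingredients, suggested recipes",
    ("sub_area", "A1"): "Condiments: recycling guidance for this shelf area",
}

def select_content(punctual_tag, sub_area_tag):
    """Prefer the punctual-position entry; fall back to the sub-area entry."""
    return (CONTENT_DB.get(("punctual", punctual_tag))
            or CONTENT_DB.get(("sub_area", sub_area_tag)))

print(select_content("row2_col3", "A1"))   # punctual content, shown in large font
print(select_content("row9_col9", "A1"))   # no punctual match -> sub-area content
```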
- Control unit 8 is programmed to select and show a visual feedback on display 10 when a physical target position on shelving 3 is identified.
- The visual feedback preferably comprises an image representing the item 2 associated with the physical target position identified by control unit 8.
- The visual feedback is kept on display 10 for a relatively short time, e.g. no more than 4 seconds, in order to give the consumer a chance to adjust the pointing direction in case the item desired by the consumer is not the one detected by position detector 5.
- Control unit 8 further recognizes that the physical target position is not changed by the consumer within a predefined time span, e.g. 3 seconds, and, in such a case, a first sub-group of multimedia contents is shown on display 10.
- A second sub-group of multimedia contents is shown after a further predefined time span during which the physical target position does not change.
- Display 10 also shows a visual feedback of the time remaining before switching to the second sub-group of multimedia contents; the dwell logic is sketched below.
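A compact sketch of this dwell logic. The 3-second span and the countdown come from the text above; the state machine itself and its names are assumptions (the initial feedback image would additionally be capped at about 4 seconds, per the text).

```python
import time

class DwellDisplay:
    """Advance feedback -> first sub-group -> second sub-group on a stable target."""
    DWELL_S = 3.0                                    # predefined time span (text)

    def __init__(self):
        self.target, self.since, self.stage = None, 0.0, "idle"

    def update(self, target, now):
        if target != self.target:                    # target changed: restart
            self.target, self.since, self.stage = target, now, "feedback"
        elif self.stage == "feedback" and now - self.since >= self.DWELL_S:
            self.since, self.stage = now, "subgroup1"
        elif self.stage == "subgroup1" and now - self.since >= self.DWELL_S:
            self.stage = "subgroup2"
        countdown = max(0.0, self.DWELL_S - (now - self.since))
        return self.stage, countdown                 # countdown feeds the display

display = DwellDisplay()
t0 = time.time()
for dt in (0.0, 1.0, 3.5, 7.0):                      # target held steady
    print(display.update("item_2a", t0 + dt))
```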
- Items 2 shall be placed on stand assembly 1 in the correct positions, i.e. matching the pre-stored database and the tags, by the staff of the shop, supermarket or store.
- The database is updated so that the tags correspond to the positions of items 2 on stand assembly 1.
- Control unit 8 is programmed to recognize a pointing hand 7 and a pointing direction also when hand 7 enters picking-up space S.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to an augmented reality stand assembly comprising a shelving (3) for supporting items (2) displayed for picking-up purposes; a contactless position detecting device (5) placed on top of the shelving (3) and configured to recognize a hand (7) indicating or pointing at a physical target position on the shelving (3) corresponding to a target item (2), and to determine a position of the hand (7); and a control unit (8) configured to select a multimedia content associated with the physical target position on the shelving (3) associated with the target item (2), on the basis of the position of the hand (7) detected by the contactless position detecting device (5).
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580084194.0A CN108292163A (zh) | 2015-10-26 | 2015-10-26 | Augmented reality stand for items to be picked up |
PCT/EP2015/074776 WO2017071733A1 (fr) | 2015-10-26 | 2015-10-26 | Augmented reality stand for items to be picked up |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2015/074776 WO2017071733A1 (fr) | 2015-10-26 | 2015-10-26 | Augmented reality stand for items to be picked up |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017071733A1 (fr) | 2017-05-04 |
Family
ID=54366199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2015/074776 WO2017071733A1 (fr) | 2015-10-26 | 2015-10-26 | Augmented reality stand for items to be picked up |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108292163A (fr) |
WO (1) | WO2017071733A1 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10085571B2 (en) | 2016-07-26 | 2018-10-02 | Perch Interactive, Inc. | Interactive display case |
US11488235B2 (en) | 2019-10-07 | 2022-11-01 | Oculogx Inc. | Systems, methods, and devices for utilizing wearable technology to facilitate fulfilling customer orders |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109448612B (zh) * | 2018-12-21 | 2024-07-05 | Guangdong Midea White Home Appliance Technology Innovation Center Co., Ltd. | Product display device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6141034A (en) * | 1995-12-15 | 2000-10-31 | Immersive Media Co. | Immersive imaging method and apparatus |
WO2014124612A2 (fr) * | 2013-02-18 | 2014-08-21 | Valencia Zapata Pablo Andrés | Smart shelves for point of sale |
US20150002388A1 (en) * | 2013-06-26 | 2015-01-01 | Float Hybrid Entertainment Inc | Gesture and touch-based interactivity with objects using 3d zones in an interactive system |
US20150102047A1 (en) * | 2013-10-15 | 2015-04-16 | Utechzone Co., Ltd. | Vending apparatus and product vending method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103823556B (zh) * | 2006-07-28 | 2017-07-04 | Philips Lighting Holding B.V. | Gaze interaction for information display of gazed items |
KR20110010106A (ko) | 2008-05-14 | 2011-01-31 | Koninklijke Philips Electronics N.V. | System and method for defining an activation zone within the presentation background of a viewer interface |
TW201017474A (en) * | 2008-09-03 | 2010-05-01 | Koninkl Philips Electronics Nv | Method of performing a gaze-based interaction between a user and an interactive display system |
US20130316767A1 (en) * | 2012-05-23 | 2013-11-28 | Hon Hai Precision Industry Co., Ltd. | Electronic display structure |
-
2015
- 2015-10-26 CN CN201580084194.0A patent/CN108292163A/zh active Pending
- 2015-10-26 WO PCT/EP2015/074776 patent/WO2017071733A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN108292163A (zh) | 2018-07-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11143736B2 (en) | Detector for determining a position of at least one object comprising at least one device to determine relative spatial constellation from a longitudinal coordinate of the object and the positions of reflection image and reference image | |
CN111033300B (zh) | 用于确定至少一项几何信息的测距仪 | |
US11051638B2 (en) | Image display device, image display system, image display method, and program | |
US20220083959A1 (en) | System and method for detecting products and product labels | |
CN112666714B (zh) | 注视方向映射 | |
US20220101551A1 (en) | Detector for determining a position of at least one object | |
US20200371237A1 (en) | Detector for determining a position of at least one object | |
US10282034B2 (en) | Touch sensitive curved and flexible displays | |
CN109001748B (zh) | 目标对象与物品的关联方法、装置及系统 | |
EP1761798B1 (fr) | Systeme et procede d'identification et de localisation sans fil | |
US20110141011A1 (en) | Method of performing a gaze-based interaction between a user and an interactive display system | |
JP5055516B2 (ja) | 拡張現実を使用して装置の保守命令および動作命令を表示するシステムおよび方法 | |
CN113498530A (zh) | 基于局部视觉信息的对象尺寸标注系统和方法 | |
US10198080B1 (en) | Virtual user interface | |
EP3794577B1 (fr) | Procédé et système intelligents de présentoir de comptoir sur plate-forme intelligente | |
KR20190002488A (ko) | 적어도 하나의 물체를 광학적으로 검출하기 위한 검출기 | |
WO2017151669A1 (fr) | Système et procédé de balayage 3d assisté | |
WO2014034188A1 (fr) | Dispositif de traitement d'images de vêtements, procédé d'affichage d'images de vêtements et programme | |
Hay et al. | Optical tracking using commodity hardware | |
US20210124943A1 (en) | Event trigger based on region-of-interest near hand-shelf interaction | |
Kurz | Thermal touch: Thermography-enabled everywhere touch interfaces for mobile augmented reality applications | |
WO2017071733A1 (fr) | Kiosque de réalité augmentée pour articles à prélever | |
WO2017163879A1 (fr) | Dispositif d'analyse de comportement, système d'analyse de comportement, procédé d'analyse de comportement, et programme | |
JP7512291B2 (ja) | 少なくとも1つの物体の位置を決定するための検出器 | |
US20210124952A1 (en) | Homography error correction using marker locations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15788366 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15788366 Country of ref document: EP Kind code of ref document: A1 |