CN108292163A - Augmented reality stand for items to be picked up - Google Patents


Info

Publication number
CN108292163A
Authority
CN
China
Prior art keywords
shelf
stand
hand
control unit
multimedia content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201580084194.0A
Other languages
Chinese (zh)
Inventor
Carlo Filippo Ratti
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Carlorattiassociati SRL
Original Assignee
Carlorattiassociati SRL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Carlorattiassociati SRL
Publication of CN108292163A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 Detection arrangements using opto-electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An augmented reality stand assembly, comprising: a shelf (3) supporting displayed items (2) to be picked; a contactless position detection device (5) located above the shelf (3), configured to recognize a hand (7), to determine the position of the hand (7), and to identify the physical target position on the shelf (3), corresponding to a target item (2), that the hand (7) indicates or points at; and a control unit (8) configured to select, according to the position of the hand (7) detected by the contactless position detection device (5), the multimedia content associated with the physical target position on the shelf (3) and therefore with the target item (2).

Description

Augmented reality stand for items to be picked up
Technical field
The present invention relates to a stand for sales areas (such as shops, supermarkets, cafeterias, etc.) from which a consumer picks one or more goods for sale.
Background art
Items usually carry labels bearing the information required by national law, for example the instructions supplied with drugs or the ingredient lists of food products.
When a consumer is choosing the item that best suits his/her needs, there is a need to assist the consumer by providing more detailed information about the items for sale. For example, the consumer may want additional information related to the use of the product, such as recipes; additional information for after use, such as recycling instructions; or additional information for before use, such as the manufacturing technology, the production process, the CO2 equivalent, the rules of origin, the price, the nutrition facts and the ingredients.
Not all of the above required and desired information can be shown on the packaging of every item, because in some cases the printable area of the packaging is limited, or because there is no packaging at all (as happens when the stand is used for fresh food such as vegetables or fruit, or in self-service areas where the consumer picks his/her preferred portion of an item).
It is known to have employees provide additional information about the items on display or for sale.
Summary of the invention
The object of the present invention is to provide an alternative, specifically one that uses multimedia content and augmented reality to provide the required and/or desired information about the items to be picked by a person, in particular by a consumer purchasing the items.
The object of the present invention is achieved by a stand according to claim 1.
Additional features of the present invention are included in the dependent claims.
Description of the drawings
For a better understanding of the present invention, it is further disclosed with reference to the attached drawings, in which:
- Fig. 1a is a left view of a stand assembly according to a first embodiment of the present invention;
- Fig. 1b is a right view of a stand assembly according to a second embodiment of the present invention;
- Fig. 2 is a front view of Figs. 1a and 1b; and
- Fig. 3 is a schematic view of the contactless position detection device provided in the stand assembly of Figs. 1a and 1b.
Detailed description of embodiments
In Fig. 1a, reference numeral 1 indicates a stand assembly in a supermarket; the items 2 for sale on the stand assembly are waiting to be picked by a consumer.
The stand 1 includes a shelf 3 on which the items 2 are placed, and a support, such as legs 4, that keeps the shelf 3 off the ground G.
Preferably, the shelf 3 has no side walls, so that the items 2 can also be seen from the side in an orthogonal view (see Fig. 1).
According to a first embodiment (Fig. 1a), the stand assembly 1 further includes a contactless position detector 5 placed above the shelf 3. The contactless position detection arrangement monitors a picking space S and intercepts the hand 7, in particular the hand of a consumer, before the target item 2 is picked and placed e.g. into a shopping cart, or while the hand points at the target item 2 (e.g. by hovering over the physical target position, above the item 2). In order to cover the picking space S completely, the optical axis O of the position detector 5 intercepts the shelf 3. As an alternative (Fig. 1b), the position detector 5 is further configured to detect the pointing direction of the hand 7; for example, the detected pointing direction is the direction in which the index finger of the hand 7 points towards a physical target position on the shelf 3. In order to detect the pointing hand, the optical axis O is tilted relative to the ground G and does not intersect the shelf 3.
The picking space S (Fig. 1a) is the space where the items 2 are placed and which can be reached by the hand 7 in order to pick an item 2. Specifically, the picking space S is laterally bounded, in plan view, by the outer boundary of the shelf 3. Vertically, the picking space S is bounded by the highest edge of the items 2 on the shelf 3 or, as an alternative, by the vertical position of the detector 5. The contactless position detector 5 senses the approach of the hand 7 into the picking space S, recognizes the hand 7 and generates a signal that is processed by the control unit 8 in order to compute the position of the hand 7 in a predetermined three-dimensional or two-dimensional reference system. When the user hovers the hand over a target item 2, the signal coming from the position detector 5 is processed by the control unit 8 and matched against a pre-stored 3D map of the shelf 3, the 3D points of which are associated with the items 2. As an alternative (Fig. 1b), the signal coming from the position detector 5 is further processed by the control unit 8 in order to recognize the position of the hand 7 and also the pointing direction, for example the direction parallel to the extended index finger of the consumer's hand 7. This direction is then combined with the position of the hand 7 in order to identify the physical target position on the shelf 3 and, therefore, the target item 2 at which the finger is pointing. Preferably, the signal coming from the position detector 5 enables the control unit 8 to distinguish a hand, or a pointing hand, from other objects.
In both embodiments, the control unit 8 holds pre-stored 3D map data relating to the predefined positions of the items 2 on the shelf 3, expressed in the same coordinate system as the position of the hand 7. In this way, the control unit 8 can match the physical target position indicated or pointed at by the hand 7 with a predetermined position associated with an item 2 placed on the shelf 3.
Specifically, the control unit 8 is configured to compare the position of the hand 7 sensed by the contactless position detector 5, or the physical target position pointed at by the hand 7, with the stored positions of the items 2 in the 3D map of the shelf 3, and to select from a database the information or content associated with each physical target position. This information is preferably shown on a display 10 placed above the shelf 3 and above the detector 5. The association of information may be one-to-one, i.e. each predefined position of an item 2 corresponds to one and only one piece of information associated with that position. Alternatively, identical items 2 may be placed next to one another in a sub-region A of the shelf 3, and the control unit 8 selects the information labelled as specific to the sub-region A whenever the physical target position indicated or pointed at by the hand 7 belongs to the sub-region A. For example, when the hand 7 points at, or is detected in, the sub-region A of the shelf 3 where apples are placed, and a multimedia content or piece of information about apples is associated with, or labelled with, that sub-region A, the multimedia content or information about apples appears on the display 10. When the user moves his/her hand over, or points at, another sub-region A where items 2 of a different food type or product are placed, the multimedia content or information about that other food type is shown. A sub-region A may be the entire shelf 3 or a portion of a shelf. Preferably, the shelf 3 is inclined towards the consumer, so that the monitoring by the detector 5 and the recognition by the control unit 8 of the pointing hand 7 and of the indicated direction are more accurate and there are no obstructed or hidden areas.
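For illustration only, the following minimal Python sketch shows how a control unit of this kind might map a detected hand position onto a labelled sub-region A of the pre-stored shelf map and retrieve the associated multimedia content; the data structures, names and numeric bounds are assumptions made for the example and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SubRegion:
    label: str      # key into the content database (e.g. "apples")
    x_min: float    # bounds of sub-region A on the shelf plane, expressed in the
    x_max: float    # same reference system as the hand position computed by
    y_min: float    # the control unit 8
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def select_content(hand_xy, shelf_map, content_db):
    """Return the content labelled with the sub-region indicated by the hand 7.

    hand_xy    -- (x, y) of the hand projected onto the shelf plane (for the
                  pointing variant of Fig. 1b, the pointing ray would first be
                  intersected with the shelf plane to obtain this point)
    shelf_map  -- pre-stored map of shelf 3 as a list of SubRegion entries
    content_db -- dict mapping sub-region labels to multimedia content
    """
    for region in shelf_map:
        if region.contains(*hand_xy):
            return content_db.get(region.label)
    return None  # hand is in the picking space but not over a mapped sub-region

# Example: hovering anywhere over the "apples" sub-region selects the apple content.
shelf_map = [SubRegion("apples", 0.0, 0.4, 0.0, 0.6),
             SubRegion("pears", 0.4, 0.8, 0.0, 0.6)]
content_db = {"apples": "apples_info.mp4", "pears": "pears_info.mp4"}
print(select_content((0.2, 0.3), shelf_map, content_db))  # -> apples_info.mp4
```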
As an alternative layout, with the position detector oriented in a way similar to Fig. 1a, the shelf 3 may comprise a single large horizontal surface or a homogeneous group of shelves, on which the items 2 are placed in corresponding sub-regions A. According to a non-illustrated embodiment of the present invention, in which the position detector is oriented in a way similar to Fig. 1a, the stand assembly 1 is a basket, e.g. a horizontal freezer with a horizontal access opening or door. In this case, the picking space is laterally bounded by the structure of the basket, and the contactless position detector 5 intercepts the hand 7 when it enters the picking space through the main opening of the basket. Furthermore, the position of the hand 7 is likewise computed by the control unit 8 from the data provided by the contactless position detector 5, so that when the hand approaches the picking space the information to be displayed is selected according to the position of the hand 7.
According to a preferred embodiment of the present invention, suitable for the layouts of Figs. 1a and 1b, the contactless position detector 5 includes a first sensor unit S1 and a second sensor unit S2 different from the first sensor unit S1. The sensor units S1, S2 differ in the way they each detect the same physical parameter, for example electromagnetic radiation in different wavebands, such as visible light and infrared light. Alternatively, the sensor units S1, S2 detect different physical parameters, for example sound or other pressure waves on the one hand and electromagnetic radiation on the other. The stand 1 may also include an emitter that emits energy waves propagating into the picking space S (Fig. 1a) or towards the consumer (Fig. 1b), the sensor units S1, S2 detecting the reflected energy in order to recognize the hand 7 and/or the pointing hand 7. Alternatively or in combination, the sensor units S1, S2 may detect energy waves emitted by the hand 7 itself. Providing two different sensor units S1, S2 increases the reliability of detection, because the data sets coming from the two different domains can be matched and processed by known algorithms, reinforcing the rejection of errors or implementing recognition strategies for the position of the hand 7 and/or the pointing direction of the hand 7.
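One simple way of matching data sets from the two domains, sketched below under the assumption that both sensor units already report candidate hand positions in the common reference system, is to confirm a candidate only when the two units agree on it within a small tolerance; the threshold and names are illustrative, not from the patent.

```python
def fuse_detections(depth_hits, colour_hits, max_distance=0.05):
    """Cross-check hand candidates reported by sensor units S1 and S2.

    depth_hits, colour_hits -- lists of candidate hand positions (x, y, z), in metres,
                               already expressed in the common reference system
    max_distance            -- how far apart the two domains may place the same hand

    A candidate is kept only if both domains agree on it, which rejects spurious
    detections coming from a single sensor; the two estimates are then averaged.
    """
    confirmed = []
    for d in depth_hits:
        for c in colour_hits:
            if sum((a - b) ** 2 for a, b in zip(d, c)) ** 0.5 <= max_distance:
                confirmed.append(tuple((a + b) / 2 for a, b in zip(d, c)))
                break
    return confirmed
```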
It is also preferred that the contactless position detector 5 be located on the same side of the shelf 3 as the consumer in the picking position. Specifically, the stand 1 includes a front side 12 facing the consumer and opposite a back side, the back side being for example in contact with a wall of the shop or with another stand assembly placed back-to-back (as shown in Fig. 1). The front side 12 facing the consumer provides a foremost front wall or portion or edge 13, which is, along the horizontal direction, the front projection closest to the consumer in the picking position (Fig. 1a). Preferably, the contactless position detector 5 is located, relative to the foremost wall or portion or edge 13, on the same side as the shelf 3 and the items 2. With this layout, the designer of the stand 1 has greater freedom to obtain a minimal presentation of the items 2 for sale, in order to attract the attention of consumers. Furthermore, when the stand 1 has the layout of Fig. 1a, at the vertical height of the foremost wall or portion or edge 13, the optical axis O of the detector 5 is located, relative to the foremost wall or portion or edge 13, on the same side as the shelf 3. Alternatively, according to the layout of Fig. 1b, the optical axis O is located, relative to the foremost wall or portion or edge 13, on the opposite side of the shelf 3 and is tilted towards the ground G in order to detect the pointing hand 7. When the detector 5 includes more than one optical axis, each optical axis should comply with the corresponding arrangement discussed above.
According to a preferred embodiment of the invention, the sensor unit S1 is a 3D depth sensor and the sensor unit S2 is a colour sensor.
Specifically, the sensor unit S2 captures colour 2D images of the user, so that the contactless position detector 5 registers and synchronizes the depth maps and the colour images, and generates a data stream, comprising the depth maps and the image data, to be output to the control unit 8. In some embodiments, as described below, the depth maps and the colour images are output to the control unit 8 via a single port, for example a Universal Serial Bus (USB) port. In particular, the colour images help to identify the hand 7, and the finger (i.e. the index finger) with which the hand 7 is pointing, with the required level of accuracy.
The control unit 8 processes the data generated by the contactless position detector 5 in order to extract 3D image information. For example, the control unit 8 may segment the depth map in order to identify the regions of the consumer, in particular the hand 7, and find their 3D positions. The control unit 8 uses this information to select the multimedia content to be displayed from the database, according to the position of the hand 7 within the picking space S (Fig. 1a) or according to the physical target position on the shelf 3 given by the pointing direction of the hand 7 (Fig. 1b).
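A minimal sketch of such a segmentation step is given below, assuming the depth map is available as a NumPy array and that a calibration function mapping pixels to shelf coordinates exists; the 10 cm margin and the function names are assumptions made for the example.

```python
import numpy as np

def hand_position_from_depth(depth_map, z_near, z_far, px_to_xy):
    """Segment the depth map produced by sensor unit S1 and return an estimated
    3D position of the hand 7, or None if nothing is inside the picking space S.

    depth_map     -- 2D array of Z coordinates (metres)
    z_near, z_far -- depth bounds of the picking space S
    px_to_xy      -- callable mapping (row, col, z) to (x, y) in the shelf reference system
    """
    in_space = (depth_map > z_near) & (depth_map < z_far)
    if not in_space.any():
        return None
    z_closest = depth_map[in_space].min()
    # keep only the foremost surface, assumed to be the hand entering the space
    hand = in_space & (depth_map < z_closest + 0.10)
    rows, cols = np.nonzero(hand)
    z = float(depth_map[hand].mean())
    x, y = px_to_xy(rows.mean(), cols.mean(), z)
    return (x, y, z)
```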
In general, the control unit 8 comprises a general-purpose computer processor, which is programmed in software to carry out these functions. The software may, for example, be downloaded to the processor in electronic form over a network, or it may alternatively be provided on a tangible medium, such as an optical, magnetic or electronic memory medium.
For 3D mapping, the sensor unit S1, which includes an illumination sub-assembly 15, illuminates the object (e.g. the hand 7) with an appropriate pattern, such as a speckle pattern. In this way, the depth-related image data comprise an image of the pattern on the object (i.e. the hand 7), and the processing circuitry is configured to generate the depth map by measuring the offsets of the pattern relative to a reference image. This approach is discussed in more detail in US8456517 and is implemented in the Kinect (TM) device by Microsoft (registered trademark). Specifically, for this purpose the sub-assembly 15 typically comprises a suitable radiation source 16 (such as a diode laser, an LED or another light source) and optical elements, such as a diffuser 17 or a diffractive optical element, in order to create the pattern on the objects (i.e. the hand 7, the items 2, the shelf 3 and the other objects within the working range of the illumination sub-assembly 15). The sensor unit S1 also includes a depth image capture sub-assembly 18, which captures an image of the pattern on the object surfaces. The sub-assembly 18 typically comprises objective optics 19, which image the object surfaces onto a detector 20, such as a CMOS image sensor.
The radiation source 16 typically emits IR radiation, although other radiation bands, in the visible or ultraviolet range for example, may also be used. The detector 20 may comprise a monochrome image sensor without an IR cut-off filter, in order to detect the image of the projected pattern with high sensitivity. To enhance the contrast of the image captured by the detector 20, the optics 19 or the detector itself may comprise a band-pass filter, which passes the wavelength of the radiation source 16 while blocking ambient radiation in other bands.
The sensor unit S2 comprises a colour image capture sub-assembly 25, which captures colour images of the object. The sub-assembly 25 typically comprises objective optics 26, which image the object surfaces onto a detector 27, such as a CMOS colour mosaic image sensor. The optics 26 or the detector 27 may comprise a filter, such as an IR cut-off filter, so that the pattern projected by the illumination sub-assembly 15 does not appear in the colour images captured by the detector 27.
A processing unit 28 receives and processes the image inputs from the sub-assemblies 18 and 25. Details of these processing functions are presented, for example, in US8456517 and are implemented in the Kinect (TM) device by Microsoft (registered trademark). Briefly, the processing unit 28 compares the image provided by the sub-assembly 18 to a reference image of the pattern projected by the sub-assembly 15 onto a plane 30 at a known distance D1 from the contactless position detector 5. The reference image may, for example, be captured as part of a calibration procedure and stored in a memory 31, such as a flash memory. The processing unit 28 matches the local patterns in the captured image to the local patterns in the reference image and thereby finds the transverse shift of each pixel 32, or group of pixels, within the plane 30. Based on these transverse shifts and on the known distance D2 between the optical axes of the sub-assemblies 15 and 18, the processing unit computes a depth (Z) coordinate for each pixel. It computes the offset between the colour and depth images using the known distance D3 between the optical axes of the sub-assemblies 18 and 25.
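The patent does not state the triangulation formula, but for structured-light sensors of this kind the depth of a pixel is commonly derived from the measured pattern shift, the reference distance D1, the baseline D2 and the focal length of the depth camera, roughly as in the following sketch (the sign convention depends on the actual geometry and calibration):

```python
def depth_from_shift(shift_px, f_px, d1_ref, d2_baseline):
    """Estimate the depth (Z) coordinate of a pixel from the transverse shift of the
    projected pattern relative to the reference image captured at distance D1.

    shift_px    -- measured pattern shift in pixels (signed)
    f_px        -- focal length of the depth capture sub-assembly 18, in pixels
    d1_ref      -- known distance D1 of the reference plane 30
    d2_baseline -- known distance D2 between the optical axes of sub-assemblies 15 and 18
    """
    # Triangulation: the shift is proportional to the difference between the inverse
    # depth of the observed point and the inverse depth of the reference plane.
    inv_z = 1.0 / d1_ref + shift_px / (f_px * d2_baseline)
    return 1.0 / inv_z
```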
The processing unit 28 synchronizes and registers the depth coordinates of each such 3D map with the appropriate pixels of the colour image captured by the sub-assembly 25. The registration typically involves a shift of the coordinates associated with each depth value in the 3D map. The shift includes a static component, based on the known distance D3 between the optical axes of the sub-assemblies 18 and 25 and on any misalignment between the detectors, as well as a dynamic component that depends on the depth coordinate itself. An example of the registration process is likewise described in US8456517 and implemented in the Kinect (TM) device by Microsoft (registered trademark).
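A sketch of this registration step is given below, under the simplifying assumption of a purely horizontal baseline between the depth and colour detectors; the static offsets stand for the fixed misalignment between the detectors, while the parallax term is the depth-dependent (dynamic) component.

```python
def register_depth_to_colour(u_d, v_d, z, f_px, d3_baseline, static_du=0.0, static_dv=0.0):
    """Map a depth-image pixel (u_d, v_d) with depth z onto the colour image.

    f_px        -- focal length of the colour capture sub-assembly 25, in pixels
    d3_baseline -- known distance D3 between the optical axes of sub-assemblies 18 and 25
    """
    parallax = f_px * d3_baseline / z        # dynamic component: shrinks with distance
    u_c = u_d + static_du + parallax         # baseline assumed horizontal
    v_c = v_d + static_dv
    return u_c, v_c
```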
After registering the depth map and the colour image, the processing unit 28 outputs the depth and colour data to the control unit 8 via a port, such as a USB port. The output data may be compressed in order to save bandwidth.
According to the present invention, the consumer interacts visually with the shelf 3 in order to pick the target item 2. The display 10 is above the shelf 3, so as to avoid any interference with the visual interaction between the consumer and the shelf 3 while the target item 2 is being picked; specifically, the consumer has to lift his/her eyes from the shelf 3 in order to watch the multimedia content on the display 10. After the control unit 8 has computed the position of the hand 7 and matched that position with the pre-loaded 3D map of the shelf 3, the display 10 further shows a visual feedback of the selected physical target position and, therefore, of the selected target item 2. The augmented reality is provided by the combination of the multimedia content and of the position detector interacting with a consumer placed in a physical environment. Thus, in the stand 1 with the items 2 of this example, the multimedia content is added to the physical environment, and the consumer is not immersed in a completely virtual environment (as happens in virtual reality systems).
Specifically, the hand 7 and/or the pointing hand 7 is recognized as distinct from other objects, and the control unit 8 selects the information from a predefined database, in which the content is stored and associated with labels corresponding to precise positions on the shelf 3, or to series of positions corresponding to sub-regions A. When the hand 7 is located by the detector 5 at a precise position or in a sub-region A, the corresponding content of the database appears on the display 10, so that the user can read, in a larger font, the information about the item 2 placed at that precise position or in that sub-region A of the shelf 3.
Preferably, the information or multimedia content for each type of item 2 is divided into two or more sub-groups. The control unit 8 is programmed to select and show a visual feedback on the display 10 when the physical target position on the shelf 3 has been identified. The visual feedback preferably includes an image representing the item 2 associated with the physical target position identified by the control unit 8. The visual feedback remains on the display 10 for a relatively short time (e.g. no more than 4 seconds), so as to give the consumer the chance to adjust the indicated direction in case the item detected by the position detector 5 is not the item the consumer intended.
The control unit 8 further detects that the physical target position is not changed by the consumer within a predetermined time range (e.g. 3 seconds); in this case, a first sub-group of the multimedia content is shown on the display 10. Preferably, after a further predefined time range during which the physical target position remains unchanged, a second sub-group of the multimedia content is shown. In this case, it is preferable that, while the first sub-group of the multimedia content is shown, the display 10 also shows a visual feedback of the time remaining before switching to the second sub-group of the multimedia content.
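The dwell-based behaviour of the display 10 can be summarised by a small state function such as the sketch below; the 4-second and 3-second values come from the description above, while the threshold for switching to the second sub-group and the content keys are assumptions made only for the example.

```python
import time

FIRST_GROUP_S  = 3.0   # dwell after which the first sub-group is shown (from the description)
SECOND_GROUP_S = 8.0   # dwell after which the second sub-group is shown (assumed value)

def content_for_dwell(target_label, identified_at, content_db, now=None):
    """Decide what the display 10 shows for a physical target position that has been
    held unchanged since `identified_at` (a time.monotonic() timestamp).

    Before FIRST_GROUP_S the display shows the visual feedback image (well within the
    4-second limit mentioned above); afterwards the content sub-groups are shown.
    """
    now = time.monotonic() if now is None else now
    dwell = now - identified_at
    entry = content_db[target_label]
    if dwell < FIRST_GROUP_S:
        return ("feedback", entry["feedback_image"])
    if dwell < SECOND_GROUP_S:
        remaining = SECOND_GROUP_S - dwell   # countdown shown before switching
        return ("subgroup-1", entry["subgroup_1"], round(remaining, 1))
    return ("subgroup-2", entry["subgroup_2"])
```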
The items 2 have to be placed in the correct positions on the stand assembly 1 by the staff of the shop, supermarket or cafeteria, i.e. in the positions matching the pre-stored database and labels. Alternatively, after the staff has placed the items 2 on the stand assembly 1, the database is updated so that the labels correspond to the positions of the items 2 on the stand assembly 1.
Finally, it is apparent that the stand assembly 1 disclosed and shown herein may be modified without departing from the scope of protection defined by the following claims. For example, the control unit 8 may be programmed to recognize the pointing hand 7 and the pointed direction when the hand 7 enters the picking space S.

Claims (12)

1. An augmented reality stand assembly, comprising: a shelf (3) supporting displayed items (2) to be picked; a contactless position detection device (5) placed above the shelf (3), the contactless position detection device (5) being configured to recognize a hand (7), to determine the position of the hand (7) and to identify the physical target position on the shelf (3), corresponding to a target item (2), indicated or pointed at by the hand (7); and a control unit (8), the control unit (8) being configured to select, according to the position of the hand (7) detected by the contactless position detection device (5), the multimedia content associated with the physical target position associated with the target item (2) on the shelf (3).
2. The stand according to claim 1, characterized in that the control unit (8) is configured to compute the pointing direction of the hand (7) and to determine the physical target position on the shelf (3) according to the position of the hand (7) and to the pointing direction.
3. The stand according to claim 1 or 2, characterized in that the stand comprises a front side (12) facing the consumer, the front side (12) having, along the horizontal direction, a foremost edge (13) closest to the consumer, and in that the position detector (5) is located, relative to the foremost edge (13), on the same side as the shelf (3).
4. The stand according to claims 2 and 3, characterized in that the position detector (5) has at least one optical axis (O), and in that, at the height of the foremost edge (13), the optical axis (O) is located, relative to the foremost edge (13), on the opposite side of the shelf (3) and is tilted towards the ground (G).
5. The stand according to any one of the preceding claims, characterized in that the shelf (3) is inclined.
6. The stand according to any one of the preceding claims, characterized in that it comprises at least one display (10) placed above the shelf (3), and in that the control unit (8) is programmed to show on the display (10), after the physical target position on the shelf (3) has been identified, a feedback multimedia content associated with the physical target position on the shelf (3).
7. The stand according to claim 6, characterized in that the feedback multimedia content is retained for no more than 4 seconds.
8. The stand according to claim 6 or 7, characterized in that the control unit (8) is programmed to change the multimedia content according to the time elapsed while the hand (7) is detected as remaining in the same position.
9. The stand according to claim 8, characterized in that the control unit (8) is programmed to show on the display (10) a visual feedback of the time remaining before the change to the next multimedia content.
10. The stand according to any one of the preceding claims, characterized in that the position detector (5) comprises a first sensor unit (S1) for 3D mapping and a second sensor unit (S2) for 2D colour mapping.
11. The stand according to any one of the preceding claims, characterized in that the physical target position is identified by the control unit (8) according to a 3D map of the shelf (3).
12. The stand according to any one of the preceding claims, characterized in that the stand is a stand for a shop or a supermarket.
CN201580084194.0A 2015-10-26 2015-10-26 Augmented reality exhibition booth for article to be selected Pending CN108292163A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/074776 WO2017071733A1 (en) 2015-10-26 2015-10-26 Augmented reality stand for items to be picked-up

Publications (1)

Publication Number Publication Date
CN108292163A 2018-07-17

Family

ID=54366199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580084194.0A Pending CN108292163A (en) 2015-10-26 2015-10-26 Augmented reality exhibition booth for article to be selected

Country Status (2)

Country Link
CN (1) CN108292163A (en)
WO (1) WO2017071733A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448612A (en) * 2018-12-21 2019-03-08 广东美的白色家电技术创新中心有限公司 Product display device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10085571B2 (en) 2016-07-26 2018-10-02 Perch Interactive, Inc. Interactive display case
US11488235B2 (en) 2019-10-07 2022-11-01 Oculogx Inc. Systems, methods, and devices for utilizing wearable technology to facilitate fulfilling customer orders

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101495945A (en) * 2006-07-28 2009-07-29 皇家飞利浦电子股份有限公司 Gaze interaction for information display of gazed items
CN102027435A (en) * 2008-05-14 2011-04-20 皇家飞利浦电子股份有限公司 System and method for defining an activation area within a representation scenery of a viewer interface
CN102144201A (en) * 2008-09-03 2011-08-03 皇家飞利浦电子股份有限公司 Method of performing a gaze-based interaction between a user and an interactive display system
CN103425445A (en) * 2012-05-23 2013-12-04 鸿富锦精密工业(深圳)有限公司 Electronic display structure
US20150102047A1 (en) * 2013-10-15 2015-04-16 Utechzone Co., Ltd. Vending apparatus and product vending method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141034A (en) * 1995-12-15 2000-10-31 Immersive Media Co. Immersive imaging method and apparatus
CO6760205A1 (en) * 2013-02-18 2013-09-30 Zapata Pablo Andres Valencia Smart tray for point of sale
US9348421B2 (en) * 2013-06-26 2016-05-24 Float Hybrid Entertainment Inc. Gesture and touch-based interactivity with objects using 3D zones in an interactive system

Also Published As

Publication number Publication date
WO2017071733A1 (en) 2017-05-04

Similar Documents

Publication Publication Date Title
EP3227760B1 (en) Pointer projection for natural user input
TWI526888B (en) Vending machine and operating system and operating method thereof
US9658765B2 (en) Image magnification system for computer interface
US20110141011A1 (en) Method of performing a gaze-based interaction between a user and an interactive display system
CN103329079B Camera-based multi-touch interaction and illumination system and method
CA3055222C (en) Display system and method for delivering multi-view content
EP3794577B1 (en) Smart platform counter display system and method
Kurz Thermal touch: Thermography-enabled everywhere touch interfaces for mobile augmented reality applications
CN113498530A (en) Object size marking system and method based on local visual information
CN107004279A (en) Natural user interface camera calibrated
CN108469899A (en) The method for identifying the aiming point or region in the observation space of wearable display device
US20170249061A1 (en) Method and Apparatus for Providing User Interfaces with Computerized Systems and Interacting with a Virtual Environment
CN108764998B (en) Intelligent display device and intelligent display method
US10146303B2 (en) Gaze-actuated user interface with visual feedback
US20150006245A1 (en) Method, arrangement, and computer program product for coordinating video information with other measurements
CN108292163A (en) Augmented reality exhibition booth for article to be selected
KR20150108571A (en) An information display device of a mirror display for advertisement and shopping by recognizing the reflected images on the mirror and method thereof
US20170358135A1 (en) Augmenting the Half-Mirror to Display Additional Information in Retail Environments
KR20150108570A (en) An augmented reality service apparatus for a mirror display by recognizing the reflected images on the mirror and method thereof
US9875726B2 (en) Hybrid-image display device
US9962606B2 (en) Game apparatus
WO2018082782A1 (en) Spatial augmented reality method, device and system
US20200059603A1 (en) A method of providing information about an object
Smith et al. Adaptive color marker for SAR environments
CN114080643A (en) Apparatus and method for performing image-based food quantity estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
Application publication date: 20180717