WO2010026519A1 - Method of presenting head-pose feedback to a user of an interactive display system - Google Patents

Method of presenting head-pose feedback to a user of an interactive display system

Info

Publication number
WO2010026519A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
user
pose
display area
model
Application number
PCT/IB2009/053783
Other languages
French (fr)
Inventor
Tatiana A. Lashina
Evert J. Van Loenen
Igor Berezhnoy
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2010026519A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements

Definitions

  • the invention describes a method of presenting head-pose feedback to a user of an interactive display system, and a method of performing a gaze-based interaction between a user and an interactive display system.
  • the invention also describes a head-pose feedback system, and an interactive display system.
  • interactive shop window displays are capable of presenting product-related information using, for example, advanced projection techniques, with the aim of making browsing or shopping more interesting and attractive to potential customers. Presenting products and product-related information in this way contributes to a more interesting shopping experience.
  • An advantage for the shop owner is that the display area is not limited to a number of physical items that must be replaced or arranged on a regular basis, but can display 'virtual' items using the projection and display technology now available.
  • Such an interactive shop window can present information about the product or products that specifically interest a potential customer. In this way, the customer might be more likely to enter the shop and purchase the item of interest.
  • Such display systems are also becoming more interesting in exhibitions or museums, since more information can be presented than would be possible using printed labels or cards for each item in a display case.
  • An interactive shop window system can detect when a person is standing in front of the window, and cameras are typically used to track the motion of the person's eyes. Techniques of gaze-tracking are applied to determine where the person is looking, i.e. the 'gaze heading', so that specific information can be presented to him.
  • a suitable response of the interactive shop window system can be to present the person with more detailed information about that object, for example the price, any technical details, other available colours or styles, special offers, etc. In a museum exhibit, a suitable response might be to present detailed information about an artefact at which the user is looking.
  • since gaze-tracking is very new to the general public as a mode of interaction, this presents the challenge of how to clearly and concisely communicate to a person that a system can be controlled by means of gaze. This is especially relevant for interactive systems in public spaces, such as shopping areas, museums, galleries, amusement parks, etc., where interactive systems must be intuitive and simple to the user, so that anyone can interact with them without having to first consult a manual or to undergo training.
  • Eye-based gaze-tracking and head-tracking require different behaviour on the part of the user.
  • for eye-based gaze-tracking, the user does not need to consciously do anything; looking into the display area will simply control the system.
  • for head-tracking, however, the user may need to move his head consciously, and some users may even have to exaggerate their head movements if they have a tendency to move their head only slightly or not at all while looking at objects.
  • there are indications that users find head-tracking more comfortable than eye-based gaze tracking, and, as a result, applications for assistive technologies such as gaze-based interaction are more likely to apply head-tracking.
  • if a user is not aware that an interactive display system uses a head-based tracking approach, he may not move his head accordingly, so that the interaction may fail, leading to dissatisfaction with the system.
  • the object of the invention is achieved by the method of presenting head-pose feedback according to claim 1, a method of performing a gaze-based interaction according to claim 7, a head-pose feedback system according to claim 11, and an interactive display system according to claim 15.
  • the term 'head-pose' is to be interpreted as the attitude or aspect taken by the user's head, and which can be used in estimating the direction in which the user is looking.
  • a user can see at a glance that the interactive display system is reacting to his head-pose, so that this method is particularly advantageous in teaching or communicating to a user that the interactive display system is capable of gaze-based interaction.
  • a user new to such a system is given an intuitive indicator, namely a visual representation of his head, which mimics his head-pose.
  • the method of performing a gaze-based interaction between a user and an interactive display system with a preferably three-dimensional display area in which a number of objects is arranged, and comprising an observation means according to the invention, comprises the steps of detecting the presence of the user in front of a display area, observing the motion of the user's head to determine a head-pose for the user, and presenting head-pose feedback to the user as described above.
  • the user can quickly realize that the interactive display system can 'follow' his gaze, which is derived from his head-pose. Once this has been communicated to the user by means of the head-pose feedback, the user can participate in a gaze-based interaction with the interactive display system, for example using any technique of gaze-based interaction known from the state of the art.
  • a head-pose feedback system for presenting head-pose feedback to a user of an interactive display system with a preferably three-dimensional display area, comprises a head-pose determination unit for determining a head-pose for the user on the basis of an observed head-motion for that user.
  • the head-pose feedback system further comprises a rendering module for visibly rendering a user head model in the display area, and a model driving unit for driving the user head model according to the determined head-pose of the user, so that the head model essentially mimics the head-pose of the user.
  • An interactive display system comprises a preferably three-dimensional display area in which a number of objects is arranged, a detection means for detecting the presence of the user in front of a display area, and an observation means for observing the motion of the user's head to obtain head-motion information.
  • the interactive display system further comprises a head-pose feedback system as described above to present head-pose feedback to the user, and a display area controller to control the display area according to the determined head-pose.
  • the head-pose feedback system and interactive display systems according to the invention offer a particularly simple and easy way of 'teaching' potential users or customers about their capabilities.
  • a user seeing a model of a head that moves in the same way he does, or that appears to look at the same object that he is looking at, will immediately realise that the system is reacting to his own head-pose. Since this might well make a display area more interesting or accessible to a user, the proposed solution is applicable for any type of public display offering gaze-based interaction, such as interactive shop windows, interactive exhibitions, museum interactive exhibits, etc., in which an intuitive and easily understandable explanation of the system's capabilities is desired.
  • the systems and methods described by the invention are suitable for application in any appropriate environment, such as an interactive shop window in a shopping area, in an interactive display case in an exhibition, trade fair or museum environment, etc.
  • the display area can be three-dimensional, for example an area in which products are arranged for viewing, or an exhibit case.
  • the display area can also be two-dimensional, for example a rear-projection screen such as a HoloScreen® upon which images, for example images of objects or products, can be displayed.
  • the display area may be assumed to be a three-dimensional shop window in the following.
  • a person who might interact with the interactive display system is referred to in the following as a 'user' or 'customer'. Even though the following description, for the sake of clarity, only deals with a single user interacting with a display system, the methods and systems according to the invention could be applied to the interactions of several users simultaneously.
  • the contents of the display area being presented can be referred to below as 'items', 'objects' or 'products', without restricting the invention in any way.
  • the detection means of the interactive display system can comprise a separate detection module for detecting the presence of a user in front of the display area, such as one or more pressure sensors or pressure tiles in the ground in front of the display area, any appropriate motion sensor, an infra-red sensor, or a camera vision system.
  • the observation means itself could be used to detect the presence of a user in front of the display area, for example by continually comparing images of the region in front of the display area with an 'empty' image, i.e. an image in which there are no people visible, so that the presence of a person in front of the display area can be determined.
  • the observation means can comprise an arrangement of cameras, for example a number of moveable cameras mounted inside the display area.
  • an observation means intended to track the movement of a person's head, in conjunction with any hardware and software necessary for performing the image analysis, can also be referred to as a 'head tracker'.
  • Such a device is specifically configured to detect and 'track' a person's head in a sequence of images, so that the motion of the head can subsequently be analysed.
  • the head-pose of the user can be described as a vector in three-dimensional space, as will be known to a person skilled in the art.
  • the three dimensions constituting such a 'head-pose vector' are referred to as yaw or heading (horizontal rotation), pitch (vertical rotation) and roll (tilting the head from side to side).
  • a vector describing the direction of looking can include relevant information such as only the observed heading, or the observed heading together with an estimated pitch.
  • a head tracker as described above can determine the head-pose of the user, and can estimate the head heading as a result.
  • the estimated direction of looking is referred to in the following as the head-pose vector or 'gaze vector'.
  • a more complex 'gaze tracker' could conceivably also track the eyes in a person's face to deliver a more precise gaze heading.
  • Such eye-gaze tracking systems are however more costly, and require that the eyes of the person are clearly visible, which might conceivably be problematic in certain lighting conditions or for people wearing glasses. Therefore, without restricting the invention in any way, the following assumes that the more straightforward head-tracking is being carried out to determine the head-pose of the user in order to estimate his gaze direction.
  • an observation means which can also robustly detect the eyes of the user could be used to determine the user's head-pose and gaze direction.
  • the step of driving the user head model comprises mapping a motion of the user's head to a corresponding motion of the user head model such that the motion of the user head model mimics the motion of the user's head.
  • the 'teaching' effect of the head-pose feedback system can be regarded as having been successful.
  • a user who is already aware that there are interactive display systems capable of gaze-based interaction may stop in front of the shop window in order to participate in a gaze-based interaction.
  • Head-pose feedback should be presented in an easily recognizable manner, in other words it should be made clear to the user that his head movement is being tracked, in particular his head movement with respect to the display area.
  • the user head model can be driven, for example, to reflect the head motion of the user, or it can be driven so that it 'looks' at effectively the same point in the display area looked at by the user. This may depend largely on the position of the user head model in the display area, for example whether it is located at eye-level with the user, or low down in the display area.
  • a user head model at eye level could be driven to directly imitate the head motion of the user, while a user head model lower down could be driven to appear to look at the same point looked at by the user.
  • the user head model can comprise a mechanically moveable physical head model, positioned in the display area such that it is clearly visible to a passing user.
  • the step of driving this user head model can comprise controlling the physical head model according to the determined head-pose of the user, to express or reflect the movements of the user's head as a rotation and/or tilting of the physical head model.
  • a user passing by and looking into the display area can see the physical head model moving in the same manner as the user has moved his head. Once the user has seen this happening, he can realize that the display area will react to his head motion, and can conclude that a gaze-based interaction is possible for this display area.
  • the physical head model can include a small projector such as a mini monochrome laser projector, or a high-power mini-projector, built into the head model such that an object or region in the display area being looked at by the user can be illuminated or highlighted as head-pose feedback for the user.
  • the user head model comprises a virtual head model shown graphically in a display or screen, and the step of driving the user head model comprises rendering the virtual head model in the display according to the determined head-pose of the user.
  • the word 'display' in this sense only refers to a screen or backdrop upon which an image can be graphically rendered, and is not to be confused with the terms 'shop window display' or 'display area', which only refer to the area in which products are arranged for presentation.
  • the term 'screen' is used in the following whenever reference is made to a display in which images can be graphically rendered.
  • the virtual head model explicitly represents the user's head, whether as a detailed representation of a human head, or a stylized representation. In either case, it should immediately be apparent to the user that the virtual head model mimics his head movements.
  • the virtual head model can be driven to reflect the movements of the user's head in a one-to-one manner if it is located at eye-level with the user, otherwise it can be driven to appear to look at the same point in the display area. It will be emphasized at this point that the 'virtual head model' is not to be interpreted as a simple 'cursor', known from other, simpler, prior art interaction modalities.
  • the virtual head model is rendered in a graphical representation of the display area, such that this graphical representation also includes images of the contents of the display area.
  • the graphical representation of the display area can, for example, be rendered in an area of a screen showing the outlines of the objects in the display area from the user's point of view.
  • the outline can be a contour corresponding to the shape of the object, and can be rendered as a bold or thick line.
  • the virtual head model in this case can be an outline of a person's head shown on the screen.
  • the screen can be driven or controlled so that the head outline changes to emulate the motion of the user's head. In this way, the virtual head model can show the user that his head movements are being tracked by the system.
  • the head-pose of a user can be analyzed to estimate or determine the point at which he is most likely looking.
  • a virtual head-pose vector is also visibly rendered in the display area such that the virtual head-pose vector appears to originate from the virtual head model.
  • the virtual vector can be shown on the screen to originate from a point on the 'forehead' of the head outline, or from the 'nose' of the head outline.
  • the virtual head-pose vector reflects the determined gaze direction of the user, whether he is looking directly at an object in the display area, or at a point between objects.
  • visually emphasizing a region in the display area according to the determined gaze heading comprises rendering a virtual head-pose vector to represent the determined gaze heading.
  • the physical head model can be placed on the 'floor' of the display area, or at another location not in the line of sight of the user.
  • for the second type of head-pose feedback, where a virtual head model is shown graphically on a screen, the model should be positioned so that the user can easily see it, for example in the user's line of sight.
  • the graphical representation of the head model, gaze vector, and display area contents could be projected onto a region behind the contents of the display area, so that the user can still see the objects but can also see the user head model being rendered with the gaze vector.
  • the rendered virtual head model and/or the rendered virtual head-pose vector are at least partially transparent, and are rendered between the user and the display area such that the user can see through the virtual head model and/or the rendered virtual head-pose vector into the display area. So that the user's gaze is not distracted or drawn away from the object he is looking at, the virtual head model and/or head-pose vector are rendered such that the point being 'looked at' by the virtual head model effectively coincides with the point being looked at by the user.
  • the virtual head model is rendered in a graphical representation of the display area according to the position of the user's head relative to the positions of the objects in the display area, so that the position of the virtual head model in the graphical representation of the display area effectively corresponds to the position of the user's head relative to the display area.
  • This can be achieved by use of an appropriate type of display screen that is essentially transparent, but which can be made opaque when desired, for example a display screen with different modes of transmission, ranging from opaque through semi-transparent to transparent.
  • a user may either look through such a screen at an object behind it when the screen is in a transparent mode, read information that appears on the screen for an object that is, at the same time, visible through the screen in a semi-transparent mode, or see only images projected onto the screen when the display is in an opaque mode.
  • the screen can comprise a low-cost passive matrix electrophoretic display.
  • a multiple-mode projection screen can be controlled according to the presence and actions of a user in front of the display area. For instance, in the case when no customers are detected in front of an interactive shop window, the screen can be placed in a type of 'stand-by mode', to display shop promotional content. Once a potential customer has been detected in front of the display area, as described above, the screen can become transparent, with only a small area being semi-transparent. In this small area, the virtual head model and virtual head-pose vector can be rendered to show the user that he can participate in a gaze-based interaction. To terminate or exit this 'teaching' mode, a suitable symbol could be rendered in a part of the screen, for example a virtual 'cancel' or 'continue' button could be displayed.
  • a user familiar with this type of gaze-based interaction can simply look at the 'cancel' button so that the gaze-based interaction can continue as normal. To ensure that the user does not inadvertently terminate the teaching mode, he may be required to direct his gaze at the 'cancel' button for a predefined length of time.
  • a user new to this type of interaction can first study the rendered information. Once he has realised that he can interact with the shop window, a glance at the 'cancel' button is sufficient to make the screen become transparent, and for the gaze-based interaction to proceed in the usual manner. The screen can become entirely translucent, allowing the user to look at any item in the display area. Once he 'selects' another item or object by looking at it, product-related information for that object can be rendered in the display area.
  • an object in the display area can be identified on the basis of the determined head-pose or gaze-heading, and the display area can be controlled to visually emphasise that object.
  • An object can be regarded as having been 'selected' if the determined gaze-heading lies within an 'interactive zone' or 'interactive boundary' for that object.
  • This interactive zone or boundary can be an area including the object itself, as well as a region surrounding that object, so that, on the one hand, the user does not have to explicitly look directly at the object, and, on the other hand, inaccuracies in the gaze determination process can be taken into account.
  • the display area can be controlled according to items looked at by the user.
  • object-related information such as price, available sizes, available colours, name of a designer etc.
  • the projector can be used to project object-related information onto a suitable backdrop or screen.
  • alternatively, in an interactive display system with, for example, an electrophoretic screen, information can be rendered directly in the screen.
  • product-related information is preferably presented in the line of sight of the user, so that he can easily view or read the information.
  • the information can fade out.
  • the visual emphasis of a region in the display area need not be limited to mere highlighting of 'selected' objects as mentioned above.
  • a type of virtual 'cursor' could be projected in the display area to follow the estimated gaze direction of the user.
  • An appropriate symbol could be projected when the user's gaze appears to be directed between objects in the display area, for example an easily understandable symbol such as a question mark, or a pair of eyes.
  • the virtual cursor can move across the display area to 'follow' the user's gaze.
  • One advantage of such an entertaining approach is that the attention of the user may be held, and he may be more interested in participating in a gaze-based interaction if he realises that his gaze is effectively being tracked by the display system.
  • until interactive shop windows become commonplace, it may be preferable to provide users with a more explicit indication that a gaze-based interaction is possible.
  • a set of instructions could be provided to a user to let him know that he can interact with a display system. The instructions could be issued when the presence of a user is detected in front of the display area as a series of recorded messages output over a loudspeaker, in the form of written text, as an image or a sequence of images, as a video demonstration, etc.
  • the set of instructions might be projected visually within the display area so that the user can easily 'read' the instructions. Again, projecting text or information in this way is made possible by the available projection system technology.
  • Such a message can be either statically defined on the shop window display or it could be dynamically generated dependent on the user's position so that it would be centred relative to the user. In this way, the instructions can be optimally positioned for good readability, regardless of where the user is standing relative to the display area. This is of particular advantage when considering that the visibility of a projected image can depend on the angle from which it is being seen.
  • the instructions could be cancelled by the user, for example if the user has understood or if the user is already familiar with this type of interactive system, for example by a cancel button that the user can press, by speaking an appropriate command, by a virtual cancel 'button' shown in a display as already described above, or by any other suitable method.
  • Fig. 1a shows a first schematic representation of a user in front of a display area
  • Fig. 1b shows the scenario of Fig. 1a, with head-pose feedback being given to the user in a gaze interaction according to a first embodiment of the invention
  • Fig. 2a shows a second schematic representation of a user in front of a display area
  • Fig. 2b shows the scenario of Fig. 2a, with head-pose feedback being given to the user in a gaze interaction according to a second embodiment of the invention
  • Fig. 3 shows a schematic cross section of a display area with a gaze interaction system according to another embodiment of the invention.
  • like numbers refer to like objects throughout. Objects in the diagrams are not necessarily drawn to scale.
  • Fig. 1a shows a user 1 in front of a display area D, in this case a potential customer 1 in front of a shop window D.
  • this schematic representation has been kept very simple.
  • items 14, 15, 16 are arranged for display.
  • An electrophoretic screen 5 is positioned as a projection area 5 between the user 1 and the inside of the display area D.
  • a detection means 4, in this case a pressure mat 4 or pressure tile 4, is located at a suitable position in front of the shop window D so that the presence of a potential customer 1 who pauses in front of the shop window D can be detected.
  • An observation means 3, or head tracking means 3, with a camera arrangement is positioned in the display area D such that the head motion of the user 1 can be tracked as the user 1 looks into the display area D.
  • the head tracking means 3 can be activated in response to a signal 40 from the detection means 4 delivered to a control unit 20.
  • the head tracking means 3 could, if appropriately realized, be used in lieu of the detection means 4 for detecting the presence of a user 1 in front of the display area D.
  • the control unit 20 might comprise hardware and software modules, for example suitable algorithms running on a computer situated, for example, in an office or other location.
  • a simplified representation of the control unit 20 is shown to comprise a head-pose determination unit 21 which analyses the data 30 supplied by the observation means 3 to deduce the head-pose of the user 1, and therefore also the user's gaze direction G.
  • the control unit 20 also comprises an interaction control module 25, a head model rendering unit 23, and a database 27. These modules 21, 23, 25, 27 will be explained below in more detail.
  • Fig. 1b shows the same scenario as above, but with head-pose feedback being shown to the user 1.
  • the head-pose determination unit 21, using data 30 delivered by the observation means 3, has determined the head-pose of the user 1, i.e. that the user's head indicates that he is looking at the shoes 15.
  • the head-pose determination unit 21 delivers a suitable signal 22 to the head model rendering unit 23, which in turn generates appropriate control signals 24 to drive the multimode electrophoretic screen 5 positioned between the user 1 and the display area D, for example as part of the shop window glazing.
  • the electrophoretic screen 5 is essentially transparent, so that the user 1 can easily see through the screen 5 into the display area D.
  • control signals 24 delivered by the head model rendering unit 23 cause a graphical representation of the display area D to be shown in a region 50 of the electrophoretic screen 5, so that this part 50 of the screen 5 becomes partially opaque, as indicated by the stippling in this region 50.
  • the display area D and its contents are shown in miniature, in this case such that the objects 14, 15, 16 presented in the display area are indicated by their outlines 54, 55, 56.
  • the head model rendering unit 23 applies software algorithms to generate image data for a virtual head model Hv and head-pose vector V that mimic the user's head-pose and gaze.
  • the virtual head model Hv, shown as the outline of a human head, is graphically rendered in the visually emphasized region 50 of the screen 5.
  • the virtual head-pose vector V, imitating the user's gaze direction G, is shown to extend from the head model Hv to the outline 55 of the object 15 that the user 1 was actually looking at.
  • the head-pose feedback system in this example essentially includes the head-pose determination unit 21, the head model rendering unit 23, and the controllable screen 5. A first-time user 1 of such an interactive display system 2 can understand at this point that he can interact with the display area D on the basis of his head-pose and gaze.
  • the user 1 can look at an appropriate symbol 57, shown here in a corner of the visually emphasized region 50, to terminate the 'teaching' mode.
  • the symbol 57 can be a 'button' comprising the words 'OK', 'cancel', 'continue' or a similar easily understandable text.
  • the symbol 57 could also simply be an arrow, which can easily be interpreted to mean 'carry on' or 'continue'. Looking at this symbol 57 for a predefined length of time such as a dwell time of one or two seconds causes the system 2 to remove the visual emphasis and to proceed with the gaze-based interaction.
  • the user could cancel the teaching mode by simply pointing at the 'cancel' button or touching the appropriate part of the display. This action can be identified by the observation means.
  • the subsequent gaze-based interaction can involve highlighting looked-at objects and displaying product-related information for the items in the display area D, for example by showing 'pop-up menus' in the electrophoretic screen.
  • the actual gaze-based interaction is managed in the interaction control module 25, which also receives the head-pose information 22 as well as object position information 28 from a database 27, which keeps track of the placement of any objects in the display area D. With this information, the interaction control module 25 can determine which object is being looked at by the user 1, and can also determine for how long the user 1 has been looking at an object. Accordingly, the interaction control unit 25 issues control signals 24 to drive the rendering means 5, for example to display product-related information.
  • Figs. 2a and 2b show an interaction with a different type of head-pose feedback being given to the user 1.
  • the user 1 is situated in front of a display area D with a number of products 14, 15, 16 laid out for presentation.
  • a detection means 4 and observation means 3 detect the presence of the user 1 and monitor his head motion, respectively.
  • a head-pose determination unit 21 analyses the data 30 supplied by the observation means 3 to deduce the head-pose of the user 1, and therefore also his gaze direction G.
  • the user head model is a physical, mechanically controllable model Hphy of a person's head, placed in the front of the display area D so that it can 'look at' any of the objects 14, 15, 16 in the display area D.
  • the model Hphy can have a realistic appearance, with eyes, nose etc., so that it is immediately recognizable as a model of a human head.
  • the head-pose determination unit 21 delivers a suitable signal 22 to a head model rendering unit 23', which in turn generates appropriate control signals 24' to drive the mechanically controllable model Hphy, for example by issuing signals to drive one or more stepper motors in the model Hphy to cause this to rotate and/or tilt to mimic the head-pose of the user 1.
  • the mechanically controllable model Hphy is equipped with a miniature laser projector 51, which is built into the model Hphy such that light L issued by the miniature laser projector 51 appears to originate from the 'eyes' of the model Hphy.
  • the control signals 24' in this example also include control signals for the miniature laser projector 51, so that it illuminates the object 15 that the user 1 is looking at. In the diagram, this is indicated by the 'aura' around the looked-at object 15. In this way, the user 1 can understand that the system 2 is capable of tracking his gaze.
  • the projector 51 could also cause product-related information to be projected onto a suitable backdrop (not shown) in the display area D, thus providing the user 1 with interesting product information for the item at which he is looking at any one time.
  • the head-pose feedback system essentially includes the head-pose determination unit 21, the head model rendering unit 23', the mechanically controllable model Hphy, and the miniature laser projector 51.
  • an arrangement similar to that of Fig. 1 is shown as a schematic side view in Fig. 3, again with a display area D for an interactive display system 2 in which a projection unit 9 is used to project an image onto a rear-projection screen 5, for example a HoloScreen® 5.
  • an observation means 3 is used to observe the user's head H so that a head-pose for the user 1 and his gaze direction G can be determined as described above.
  • the projection unit 9 can project a virtual head model and a virtual gaze vector onto the region 50 of the HoloScreen® 5 in the user's line of sight, so that the user 1 can easily see that his head-pose is being noted, and that he can interact using gaze with the display system. Later, in a normal gaze-based interaction, the projection unit 9 can show product-related information on the HoloScreen® 5 for any object 14, 15 looked at by the user 1.
  • the region 50 shown is not restricted to the dimensions shown, but can cover any small or large area of the screen 5, or can even take in the entire screen 5.
  • the HoloScreen® can simply be transparent.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention describes a method of presenting head-pose feedback to a user (1) of an interactive display system (2) comprising a three-dimensional display area (D), which method comprises the steps of determining a head-pose for the user (1), visibly rendering a user head model (Hv, Hphy) in the display area (D), and driving the user head model (Hv, Hphy) according to the determined head-pose of the user (1). The invention further describes a head-pose feedback system, an interactive display system (2), and a method of performing a gaze-based interaction between a user (1) and an interactive display system (2).

Description

METHOD OF PRESENTING HEAD-POSE FEEDBACK TO A USER OF AN INTERACTIVE DISPLAY SYSTEM
FIELD OF THE INVENTION
The invention describes a method of presenting head-pose feedback to a user of an interactive display system, and a method of performing a gaze-based interaction between a user and an interactive display system. The invention also describes a head-pose feedback system, and an interactive display system.
BACKGROUND OF THE INVENTION
In recent years, developments have been made in the field of interactive shop window displays, which are capable of presenting product-related information using, for example, advanced projection techniques, with the aim of making browsing or shopping more interesting and attractive to potential customers. Presenting products and product-related information in this way contributes to a more interesting shopping experience. An advantage for the shop owner is that the display area is not limited to a number of physical items that must be replaced or arranged on a regular basis, but can display 'virtual' items using the projection and display technology now available. Such an interactive shop window can present information about the product or products that specifically interest a potential customer. In this way, the customer might be more likely to enter the shop and purchase the item of interest. Such display systems are also becoming more interesting in exhibitions or museums, since more information can be presented than would be possible using printed labels or cards for each item in a display case.
An interactive shop window system can detect when a person is standing in front of the window, and cameras are typically used to track the motion of the person's eyes. Techniques of gaze-tracking are applied to determine where the person is looking, i.e. the 'gaze heading', so that specific information can be presented to him. A suitable response of the interactive shop window system can be to present the person with more detailed information about that object, for example the price, any technical details, other available colours or styles, special offers, etc. In a museum exhibit, a suitable response might be to present detailed information about an artefact at which the user is looking.
Prior art solutions for interactive shop windows assume that the user knows in advance that the shop window is interactive, and that he would stop to 'select' products by looking at them to obtain product-related information. However, since interactive display areas are still quite rare, being limited at present to a few touch-based interactive systems, it can be assumed that a user will generally not expect to be able to interact, and in particular he will not expect gaze-based interactivity. In the future, as interactive display areas will become more common, users will still need to be able to recognize which display area is interactive and which is not, and which could be controlled by gaze.
Since gaze-tracking is very new to the general public as a mode of interaction, this presents the challenge of how to clearly and concisely communicate to a person that a system can be controlled by means of gaze. This is especially relevant for interactive systems in public spaces, such as shopping areas, museums, galleries, amusement parks, etc., where interactive systems must be intuitive and simple to the user, so that anyone can interact with them without having to first consult a manual or to undergo training.
Another aspect to consider is that gaze-tracking could be carried out in different ways. Eye-based gaze-tracking and head-tracking require different behaviour on the part of the user. For eye-based gaze-tracking, the user does not need to consciously do anything, looking into the display area will simply control the system. For head-tracking, however, the user may need to move his head consciously, and some users may even have to exaggerate their head movements if they have a tendency to move their head only slightly or not at all while looking at objects. There are also indications that users find head-tracking more comfortable than eye-based gaze tracking, and, as a result, applications for assistive technologies such as gaze-based interaction are more likely to apply head-tracking. However, if a user is not aware that an interactive display system uses a head-based tracking approach, he may not move his head accordingly, so that the interaction may fail, leading to dissatisfaction with the system.
Therefore, it is an object of the invention to provide an easy and intuitive way of informing a user of the head-tracking capabilities of an interactive display system.
SUMMARY OF THE INVENTION
The object of the invention is achieved by the method of presenting head-pose feedback according to claim 1, a method of performing a gaze-based interaction according to claim 7, a head-pose feedback system according to claim 11, and an interactive display system according to claim 15.
The method of presenting head-pose feedback to a user of an interactive display system comprising a preferably three-dimensional display area comprises the steps of determining a head-pose for the user, visibly rendering a user head model in the display area, and driving the user head model according to the determined head-pose of the user.
Here, the term 'head-pose' is to be interpreted as the attitude or aspect taken by the user's head, which can be used in estimating the direction in which the user is looking. In the above method according to the invention, a user can see at a glance that the interactive display system is reacting to his head-pose, so that this method is particularly advantageous in teaching or communicating to a user that the interactive display system is capable of gaze-based interaction. A user new to such a system is given an intuitive indicator, namely a visual representation of his head, which mimics his head-pose.
The method of performing a gaze-based interaction between a user and an interactive display system with a preferably three-dimensional display area in which a number of objects is arranged, and comprising an observation means according to the invention, comprises the steps of detecting the presence of the user in front of a display area, observing the motion of the user's head to determine a head-pose for the user, and presenting head-pose feedback to the user as described above.
With the gaze-based interaction method described here, in which head-pose feedback is given to the user, the user can quickly realize that the interactive display system can 'follow' his gaze, which is derived from his head-pose. Once this has been communicated to the user by means of the head-pose feedback, the user can participate in a gaze-based interaction with the interactive display system, for example using any technique of gaze-based interaction known from the state of the art.
A head-pose feedback system according to the invention, for presenting head-pose feedback to a user of an interactive display system with a preferably three-dimensional display area, comprises a head-pose determination unit for determining a head-pose for the user on the basis of an observed head-motion for that user. The head-pose feedback system further comprises a rendering module for visibly rendering a user head model in the display area, and a model driving unit for driving the user head model according to the determined head-pose of the user, so that the head model essentially mimics the head-pose of the user.
An interactive display system according to the invention comprises a preferably three-dimensional display area in which a number of objects is arranged, a detection means for detecting the presence of the user in front of a display area, and an observation means for observing the motion of the user's head to obtain head-motion information. The interactive display system further comprises a head-pose feedback system as described above to present head-pose feedback to the user, and a display area controller to control the display area according to the determined head-pose.
The head-pose feedback system and interactive display systems according to the invention offer a particularly simple and easy way of 'teaching' potential users or customers about their capabilities. A user, seeing a model of a head that moves in the same way he does, or that appears to look at the same object that he is looking at, will immediately realise that the system is reacting to his own head-pose. Since this might well make a display area more interesting or accessible to a user, the proposed solution is applicable for any type of public display offering gaze-based interaction, such as interactive shop windows, interactive exhibitions, museum interactive exhibits, etc., in which an intuitive and easily understandable explanation of the system's capabilities is desired.
The dependent claims and the subsequent description disclose particularly advantageous embodiments and features of the invention. As already indicated, the systems and methods described by the invention are suitable for application in any appropriate environment, such as an interactive shop window in a shopping area, in an interactive display case in an exhibition, trade fair or museum environment, etc. The display area can be three-dimensional, for example an area in which products are arranged for viewing, or an exhibit case. However, it is conceivable that the display area be two-dimensional, for example a rear-projection screen such as a HoloScreen® upon which images, for example images of objects or products, can be displayed. For the sake of simplicity, but without restricting the invention in any way, the display area may be assumed to be a three-dimensional shop window in the following. Also, a person who might interact with the interactive display system is referred to in the following as a 'user' or 'customer'. Even though the following description, for the sake of clarity, only deals with a single user interacting with a display system, the methods and systems according to the invention could be applied to the interactions of several users simultaneously. The contents of the display area being presented can be referred to below as 'items', 'objects' or 'products', without restricting the invention in any way.
The detection means of the interactive display system according to the invention can comprise a separate detection module for detecting the presence of a user in front of the display area, such as one or more pressure sensors or pressure tiles in the ground in front of the display area, any appropriate motion sensor, an infra-red sensor, or a camera vision system. Naturally, the observation means itself could be used to detect the presence of a user in front of the display area, for example by continually comparing images of the region in front of the display area with an 'empty' image, i.e. an image in which there are no people visible, so that the presence of a person in front of the display area can be determined. The observation means can comprise an arrangement of cameras, for example a number of moveable cameras mounted inside the display area. An observation means intended to track the movement of a person's head, in conjunction with any hardware and software necessary for performing the image analysis, can also be referred to as a 'head tracker'. Such a device is specifically configured to detect and 'track' a person's head in a sequence of images, so that the motion of the head can subsequently be analysed.
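As a hedged illustration of the presence-detection variant just described, in which camera images are continually compared with an 'empty' reference image, the following sketch uses simple frame differencing with OpenCV; the function name, thresholds and choice of library are assumptions for illustration, not part of the invention as claimed.

    # Sketch: detect a person in front of the display area by comparing the
    # current camera frame against a stored image of the empty scene.
    # Threshold values are illustrative and would need tuning on site.
    import cv2
    import numpy as np

    def person_present(frame, empty_reference, pixel_threshold=25,
                       min_changed_fraction=0.02):
        """Return True if the frame differs enough from the empty scene."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ref = cv2.cvtColor(empty_reference, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, ref)                  # per-pixel difference
        changed = np.count_nonzero(diff > pixel_threshold)
        return changed / diff.size > min_changed_fraction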
The head-pose of the user can be described as a vector in three-dimensional space, as will be known to a person skilled in the art. The three dimensions constituting such a 'head-pose vector' are referred to as yaw or heading (horizontal rotation), pitch (vertical rotation) and roll (tilting the head from side to side). Not all of this information is required to estimate the point at which the user is looking. For example, experiments have shown that, for the horizontal or heading component of the head-pose vector, some people move their head to a greater extent when looking at objects in a display area, while other people move their heads less. However, for the vertical or pitch component, this difference is not so pronounced. Therefore, particularly in the case of display areas in which products are generally arranged so that they can easily be seen without the user having to tilt or roll his head to any noticeable extent, a vector describing the direction of looking can include relevant information such as only the observed heading, or the observed heading together with an estimated pitch. A head tracker as described above can determine the head-pose of the user, and can estimate the head heading as a result. The estimated direction of looking is referred to in the following as the head-pose vector or 'gaze vector'.
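A minimal sketch of how an observed heading (yaw) and estimated pitch could be turned into such a gaze vector, assuming angles in degrees and an axis convention in which z points from the user towards the display area; the convention and function name are illustrative assumptions:

    # Sketch: convert observed heading (yaw) and estimated pitch into a unit
    # gaze vector. Roll is ignored, since tilting the head from side to side
    # does not change the direction in which the head points.
    import math

    def gaze_vector(yaw_deg, pitch_deg=0.0):
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        x = math.sin(yaw) * math.cos(pitch)   # left/right component
        y = math.sin(pitch)                   # up/down component
        z = math.cos(yaw) * math.cos(pitch)   # towards the display area
        return (x, y, z)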
A more complex 'gaze tracker' could conceivably also track the eyes in a person's face to deliver a more precise gaze heading. Such eye-gaze tracking systems are however more costly, and require that the eyes of the person are clearly visible, which might conceivably be problematic in certain lighting conditions or for people wearing glasses. Therefore, without restricting the invention in any way, the following assumes that the more straightforward head-tracking is being carried out to determine the head-pose of the user in order to estimate his gaze direction. Evidently, an observation means which can also robustly detect the eyes of the user could be used to determine the user's head-pose and gaze direction.
In many display area environments such as retail malls, potential users or customers generally move past shop windows and may only briefly look into the display area. The user should preferably be able to see at a glance whether the display area is an interactive one. Therefore, in a particularly preferred embodiment of the invention, the step of driving the user head model comprises mapping a motion of the user's head to a corresponding motion of the user head model such that the motion of the user head model mimics the motion of the user's head. In this way, a user passing a shop window, maybe slowing down, and turning his head to look into the display area, can immediately see a model of a head behaving in the same way, effectively imitating his head movements. This can hold his attention, and, if he is new to such a system, can cause him to continue to direct his gaze into the shop window to see what happens next. In such a case, the 'teaching' effect of the head-pose feedback system can be regarded as having been successful. A user who is already aware that there are interactive display systems capable of gaze-based interaction may stop in front of the shop window in order to participate in a gaze-based interaction.
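The mapping of user head motion to model motion described in this embodiment could, as a sketch, look like the following; the class name, the model's set_pose interface and the optional gain (which could exaggerate the subtle head movements mentioned earlier) are hypothetical:

    # Sketch: map each new head-pose estimate for the user onto the user head
    # model, so that the model mimics the user's head movements.
    class UserHeadModelDriver:
        def __init__(self, model, gain=1.0):
            self.model = model      # physical or virtual head model
            self.gain = gain        # values > 1.0 exaggerate small movements

        def update(self, yaw_deg, pitch_deg, roll_deg=0.0):
            # Copy the user's head-pose (optionally scaled) onto the model.
            self.model.set_pose(yaw=self.gain * yaw_deg,
                                pitch=self.gain * pitch_deg,
                                roll=self.gain * roll_deg)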
Head-pose feedback should be presented in an easily recognizable manner, in other words it should be made clear to the user that his head movement is being tracked, in particular his head movement with respect to the display area. The user head model can be driven, for example, to reflect the head motion of the user, or it can be driven so that it 'looks' at effectively the same point in the display area looked at by the user. This may depend largely on the position of the user head model in the display area, for example whether it is located at eye-level with the user, or low down in the display area. A user head model at eye level could be driven to directly imitate the head motion of the user, while a user head model lower down could be driven to appear to look at the same point looked at by the user.
In one preferred embodiment of the invention, therefore, the user head model can comprise a mechanically moveable physical head model, positioned in the display area such that it is clearly visible to a passing user. The step of driving this user head model can comprise controlling the physical head model according to the determined head-pose of the user, to express or reflect the movements of the user's head as a rotation and/or tilting of the physical head model. A user passing by and looking into the display area can see the physical head model moving in the same manner as the user has moved his head. Once the user has seen this happening, he can realize that the display area will react to his head motion, and can conclude that a gaze-based interaction is possible for this display area. The physical head model can include a small projector such as a mini monochrome laser projector, or a high-power mini-projector, built into the head model such that an object or region in the display area being looked at by the user can be illuminated or highlighted as head-pose feedback for the user.
Recent and on-going developments in projection and display technology allow more sophisticated head-pose feedback to be presented to the user. For example, head movements of the user could be shown graphically on a screen or backdrop in the display area, for example at the height of that user's head, so that any user can easily see that head-pose feedback is being shown. In a particularly preferred embodiment of the invention, therefore, the user head model comprises a virtual head model shown graphically in a display or screen, and the step of driving the user head model comprises rendering the virtual head model in the display according to the determined head-pose of the user.
Note that the use of the word 'display' in this sense only refers to a screen or backdrop upon which an image can be graphically rendered, and is not to be confused with the terms 'shop window display' or 'display area', which only refer to the area in which products are arranged for presentation. To avoid confusion, the term 'screen' is used in the following whenever reference is made to a display in which images can be graphically rendered.
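For the first, mechanical type of feedback, the determined head-pose must be translated into motor commands. A hedged sketch of one way this could be done for a head model driven by pan and tilt stepper motors; the motor interface, step resolution and class names are assumptions, since real hardware would expose its own driver API:

    # Sketch: drive a mechanically moveable head model by converting target
    # yaw/pitch angles into relative step counts for pan and tilt motors.
    STEPS_PER_DEGREE = 10          # illustrative motor/gearing resolution

    class PanTiltHead:
        def __init__(self, pan_motor, tilt_motor):
            self.pan_motor, self.tilt_motor = pan_motor, tilt_motor
            self.pan_steps = 0     # current position in steps
            self.tilt_steps = 0

        def move_to(self, yaw_deg, pitch_deg):
            target_pan = round(yaw_deg * STEPS_PER_DEGREE)
            target_tilt = round(pitch_deg * STEPS_PER_DEGREE)
            self.pan_motor.step(target_pan - self.pan_steps)    # relative move
            self.tilt_motor.step(target_tilt - self.tilt_steps)
            self.pan_steps, self.tilt_steps = target_pan, target_tilt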
In the methods and systems according to the current invention, the virtual head model explicitly represents the user's head, whether as a detailed representation of a human head, or a stylized representation. In either case, it should immediately be apparent to the user that the virtual head model mimics his head movements. For example, as already mentioned, the virtual head model can be driven to reflect the movements of the user's head in a one-to-one manner if it is located at eye-level with the user, otherwise it can be driven to appear to look at the same point in the display area. It will be emphasized at this point that the 'virtual head model' is not to be interpreted as a simple 'cursor', known from other, simpler, prior art interaction modalities. Preferably, the virtual head model is rendered in a graphical representation of the display area, such that this graphical representation also includes images of the contents of the display area. The graphical representation of the display area can, for example, be rendered in an area of a screen showing the outlines of the objects in the display area from the user's point of view. The outline can be a contour corresponding to the shape of the object, and can be rendered as a bold or thick line. The virtual head model in this case can be an outline of a person's head shown on the screen. The screen can be driven or controlled so that the head outline changes to emulate the motion of the user's head. In this way, the virtual head model can show the user that his head movements are being tracked by the system.
As mentioned above, the head-pose of a user can be analyzed to estimate or determine the point at which he is most likely looking. In a particularly preferred embodiment of the invention, therefore, a virtual head-pose vector is also visibly rendered in the display area such that the virtual head-pose vector appears to originate from the virtual head model. For instance, using the above example, the virtual vector can be shown on the screen to originate from a point on the 'forehead' of the head outline, or from the 'nose' of the head outline. The virtual head-pose vector reflects the determined gaze direction of the user, whether he is looking directly at an object in the display area, or at a point between objects. Therefore, in another embodiment of the invention, visually emphasizing a region in the display area according to the determined gaze heading comprises rendering a virtual head-pose vector to represent the determined gaze heading.
When a person looks into the display area, it could well be because he has seen something of interest. Any head-pose feedback is therefore preferably presented to the user in such a way that it does not have a detrimental effect on the act of looking into the display area, in other words, the head-pose feedback should not hinder the user from continuing to look into the display area, or should not distract his gaze from the object of interest. In the first type of head-pose feedback described above, this is relatively easy to achieve, since the physical head model can be placed on the 'floor' of the display area, or at another location not in the line of sight of the user. In the second type of head-pose feedback, where a virtual head model is shown graphically on a screen, this should be positioned so that the user can easily see it, for example in the user's line of sight. In one approach, the graphical representation of the head model, gaze vector, and display area contents could be projected onto a region behind the contents of the display area, so that the user can still see the objects but can also see the user head model being rendered with the gaze vector. In a further preferred embodiment of the invention, the rendered virtual head model and/or the rendered virtual head-pose vector are at least partially transparent, and are rendered between the user and the display area such that the user can see through the virtual head model and/or the rendered virtual head-pose vector into the display area. So that the user's gaze is not distracted or drawn away from the object he is looking at, the virtual head model and/or head-pose vector are rendered such that the point being 'looked at' by the virtual head model effectively coincides with the point being looked at by the user. Preferably, the virtual head model is rendered in a graphical representation of the display area according to the position of the user's head relative to the positions of the objects in the display area, so that the position of the virtual head model in the graphical representation of the display area effectively corresponds to the position of the user's head relative to the display area.
This can be achieved by use of an appropriate type of display screen that is essentially transparent, but which can be made opaque when desired, for example a display screen with different modes of transmission, ranging from opaque through semi-transparent to transparent.
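Rendering the head-pose vector so that its endpoint coincides with the point the user is looking at requires the point where the gaze vector meets the display area. A sketch under an assumed planar geometry, with the back plane of the display area at depth z = wall_z; the coordinates and function name are illustrative:

    # Sketch: cast the gaze vector from the user's head position and
    # intersect it with the (planar) back of the display area.
    def gaze_point_on_plane(head_pos, gaze_dir, wall_z):
        hx, hy, hz = head_pos
        dx, dy, dz = gaze_dir
        if dz <= 0:
            return None               # user is not looking towards the display
        t = (wall_z - hz) / dz        # ray parameter at the plane
        return (hx + t * dx, hy + t * dy)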
A user may either look through such a screen at an object behind it when the screen is in a transparent mode, read information that appears on the screen for an object that is, at the same time, visible through the screen in a semi-transparent mode, or see only images projected onto the screen when the display is in an opaque mode. Alternatively, the screen can comprise a low-cost passive-matrix electrophoretic display.
A multiple-mode projection screen can be controlled according to the presence and actions of a user in front of the display area. For instance, when no customers are detected in front of an interactive shop window, the screen can be placed in a type of 'stand-by' mode to display shop promotional content. Once a potential customer has been detected in front of the display area, as described above, the screen can become transparent, with only a small area remaining semi-transparent. In this small area, the virtual head model and virtual head-pose vector can be rendered to show the user that he can participate in a gaze-based interaction. To terminate or exit this 'teaching' mode, a suitable symbol can be rendered in a part of the screen, for example a virtual 'cancel' or 'continue' button. A user familiar with this type of gaze-based interaction can simply look at the 'cancel' button so that the gaze-based interaction continues as normal. To ensure that the user does not inadvertently terminate the teaching mode, he may be required to direct his gaze at the 'cancel' button for a predefined length of time. A user new to this type of interaction can first study the rendered information. Once he has realised that he can interact with the shop window, a glance at the 'cancel' button is sufficient to make the screen become transparent and for the gaze-based interaction to proceed in the usual manner. The screen can then become entirely transparent, allowing the user to look at any item in the display area. Once he 'selects' another item or object by looking at it, product-related information for that object can be rendered in the display area.
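The mode switching described in this passage amounts to a small state machine. The sketch below is one possible reading, with an assumed dwell time of 1.5 seconds on the 'cancel' button; the state names, threshold, and class interface are illustrative choices, not specified by the patent.

```python
import time

STANDBY, TEACHING, INTERACTING = "standby", "teaching", "interacting"
CANCEL_DWELL_S = 1.5   # assumed dwell time required on the 'cancel' button

class ScreenController:
    """Toy controller for the multiple-mode screen described above."""

    def __init__(self):
        self.mode = STANDBY          # screen shows promotional content
        self._cancel_since = None    # when the gaze first landed on 'cancel'

    def update(self, user_present, gaze_on_cancel):
        """Call once per tracking frame; returns the current screen mode."""
        now = time.monotonic()
        if not user_present:
            self.mode, self._cancel_since = STANDBY, None
        elif self.mode == STANDBY:
            self.mode = TEACHING     # transparent, with a small semi-transparent area
        elif self.mode == TEACHING:
            if gaze_on_cancel:
                self._cancel_since = self._cancel_since or now
                if now - self._cancel_since >= CANCEL_DWELL_S:
                    self.mode = INTERACTING   # fully transparent; interaction proceeds
            else:
                self._cancel_since = None    # gaze left the button: reset the dwell
        return self.mode
```

A caller would invoke update() once per tracking frame with the current presence flag and a flag indicating whether the determined gaze heading lies on the 'cancel' button.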
In a further preferred embodiment of the invention, an object in the display area can be identified on the basis of the determined head-pose or gaze heading, and the display area can be controlled to visually emphasise that object. An object can be regarded as having been 'selected' if the determined gaze heading lies within an 'interactive zone' or 'interactive boundary' for that object. This interactive zone or boundary can be an area including the object itself as well as a region surrounding that object, so that, on the one hand, the user does not have to look directly at the object and, on the other hand, inaccuracies in the gaze-determination process can be taken into account. In the subsequent gaze-based interaction, therefore, the display area can be controlled according to the items looked at by the user. For example, when the gaze of the user is directed within the interactive zone of an item for a minimum predefined 'dwell time', object-related information such as price, available sizes, available colours, the name of a designer, etc. can be shown to the user. In an interactive display system using a physical head model with a built-in projector, such as a front- or rear-projection unit, the projector can be used to project object-related information onto a suitable backdrop or screen. Alternatively, in an interactive display system with, for example, an electrophoretic screen, information can be rendered directly in the screen. In both cases, product-related information is preferably presented in the line of sight of the user, so that he can easily view or read the information. When the user's gaze moves away from the object of interest, the information can fade out.
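As an illustration of the 'interactive zone' and dwell-time logic, the following sketch tests the gaze heading, projected into display coordinates, against rectangular zones enlarged by a margin. The rectangle representation, the margin value, and the one-second dwell are assumptions made for the example, not values given in the patent.

```python
import time

class GazeSelector:
    """Select an object when the gaze stays inside its interactive zone
    (object bounds plus a surrounding margin) for a minimum dwell time."""

    def __init__(self, zones, dwell_s=1.0, margin=0.1):
        self.zones = zones          # {object_id: (x, y, w, h)} in display coordinates
        self.dwell_s = dwell_s      # minimum dwell time before selection
        self.margin = margin        # tolerance around each object
        self._candidate, self._since = None, None

    def _hit(self, gaze_xy):
        """Return the id of the zone containing the gaze point, if any."""
        gx, gy = gaze_xy
        m = self.margin
        for obj_id, (x, y, w, h) in self.zones.items():
            if x - m <= gx <= x + w + m and y - m <= gy <= y + h + m:
                return obj_id
        return None

    def update(self, gaze_xy):
        """Call once per frame; returns an object id once its dwell is met."""
        now = time.monotonic()
        hit = self._hit(gaze_xy)
        if hit != self._candidate:
            self._candidate, self._since = hit, now   # new candidate: restart timer
            return None
        if hit and now - self._since >= self.dwell_s:
            return hit              # dwell satisfied: show product information
        return None
```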
The visual emphasis of a region in the display area need not be limited to the mere highlighting of 'selected' objects as mentioned above. With modern projection techniques, more interesting ways of communicating with the user are possible. For example, a type of virtual 'cursor' could be projected in the display area to follow the estimated gaze direction of the user. An appropriate symbol could be projected when the user's gaze appears to be directed between objects in the display area, for example an easily understandable symbol such as a question mark or a pair of eyes. The virtual cursor can move across the display area to 'follow' the user's gaze. One advantage of such an entertaining approach is that the attention of the user may be held, and he may be more interested in participating in a gaze-based interaction if he realises that his gaze is effectively being tracked by the display system. Until interactive shop windows become commonplace, it may be preferable to provide users with a more explicit indication that a gaze-based interaction is possible. For example, a set of instructions could be provided to let the user know that he can interact with the display system. The instructions could be issued, when the presence of a user is detected in front of the display area, as a series of recorded messages output over a loudspeaker, in the form of written text, as an image or a sequence of images, as a video demonstration, etc. However, in a noisy environment such as a shopping district or public area, audible output might be impracticable and unreliable. Therefore, the set of instructions might be projected visually within the display area so that the user can easily 'read' them. Again, projecting text or information in this way is made possible by the available projection-system technology. Such a message can either be statically defined on the shop-window display or be dynamically generated depending on the user's position, so that it is centred relative to the user. In this way, the instructions can be optimally positioned for good readability, regardless of where the user is standing relative to the display area. This is of particular advantage considering that the visibility of a projected image can depend on the angle from which it is seen. The instructions could be cancelled by the user, for example once he has understood them or if he is already familiar with this type of interactive system, by means of a cancel button that the user can press, by speaking an appropriate command, by a virtual 'cancel' button shown in a display as already described above, or by any other suitable method.
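The dynamic centring of projected instructions reduces, in the simplest reading, to clamping a user-centred text position to the window bounds. A minimal sketch, with all coordinates assumed to share one horizontal unit (pixels or millimetres) and all names hypothetical:

```python
def instruction_anchor(user_x, window_width, text_width):
    """Left edge for projected instructions: centred on the user's
    position, clamped so the text stays inside the shop window."""
    left = user_x - text_width / 2.0
    return max(0.0, min(left, window_width - text_width))
```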
Other objects and features of the present invention will become apparent from the following detailed description considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1a shows a first schematic representation of a user in front of a display area; Fig. 1b shows the scenario of Fig. 1a, with head-pose feedback being given to the user in a gaze interaction according to a first embodiment of the invention;
Fig. 2a shows a second schematic representation of a user in front of a display area; Fig. 2b shows the scenario of Fig. 2a, with head-pose feedback being given to the user in a gaze interaction according to a second embodiment of the invention;
Fig. 3 shows a schematic cross-section of a display area with a gaze interaction system according to another embodiment of the invention.

In the drawings, like numbers refer to like objects throughout. Objects in the diagrams are not necessarily drawn to scale.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Fig. 1a shows a user 1 in front of a display area D, in this case a potential customer 1 in front of a shop window D. For the sake of clarity, this schematic representation has been kept very simple. In the shop window D, items 14, 15, 16 are arranged for display. An electrophoretic screen 5 is positioned as a projection area 5 between the user 1 and the inside of the display area D. A detection means 4, in this case a pressure mat 4 or pressure tile 4, is located at a suitable position in front of the shop window D, so that the presence of a potential customer 1 who pauses in front of the shop window D can be detected. An observation means 3, or head-tracking means 3, with a camera arrangement is positioned in the display area D such that the head motion of the user 1 can be tracked as the user 1 looks into the display area D. The head-tracking means 3 can be activated in response to a signal 40 from the detection means 4 delivered to a control unit 20. Evidently, the head-tracking means 3 could, if appropriately realized, be used in lieu of the detection means 4 for detecting the presence of a user 1 in front of the display area D.
In the diagram, only a single camera 3 is shown, but obviously any number of cameras could be implemented and arranged unobtrusively in the display area D. The control unit 20 might comprise hardware and software modules, for example suitable algorithms running on a computer situated, for example, in an office or other location. In the diagram, a simplified representation of the control unit 20 is shown to comprise a head-pose determination unit 21, which analyses the data 30 supplied by the observation means 3 to deduce the head-pose of the user 1, and therefore also the user's gaze direction G. The control unit 20 also comprises an interaction control module 25, a head model rendering unit 23, and a database 27. These modules 21, 23, 25, 27 will be explained below in more detail. Generally, the control unit 20 and detection means 4 will be invisible to the user 1, and are therefore indicated by dotted lines.

Fig. 1b shows the same scenario as above, but with head-pose feedback being shown to the user 1. Here, the head-pose determination unit 21, using data 30 delivered by the observation means 3, has determined the head-pose of the user 1, i.e. that the user's head indicates that he is looking at the shoes 15. To supply feedback to the user 1, showing the user 1 that the system 2 'knows' what he is looking at, the head-pose determination unit 21 delivers a suitable signal 22 to the head model rendering unit 23, which in turn generates appropriate control signals 24 to drive the multimode electrophoretic screen 5 positioned between the user 1 and the display area D, for example as part of the shop-window glazing. When in an 'inactive' state, as in Fig. 1a above, the electrophoretic screen 5 is essentially transparent, so that the user 1 can easily see through the screen 5 into the display area D. In this example, the control signals 24 delivered by the head model rendering unit 23 cause a graphical representation of the display area D to be shown in a region 50 of the electrophoretic screen 5, so that this part 50 of the screen 5 becomes partially opaque, as indicated by the stippling in this region 50. The display area D and its contents are shown in miniature, in this case such that the objects 14, 15, 16 presented in the display area are indicated by their outlines 54, 55, 56. The head model rendering unit 23 applies software algorithms to generate image data for a virtual head model Hv and head-pose vector V that mimic the user's head-pose and gaze. The virtual head model Hv, shown as the outline of a human head, is graphically rendered in the visually emphasized region 50 of the screen 5. To show the user 1 that the system 2 can tell where he is looking, the virtual head-pose vector V, imitating the user's gaze direction G, is shown to extend from the head model Hv to the outline 55 of the object 15 that the user 1 was actually looking at. The head-pose feedback system in this example essentially comprises the head-pose determination unit 21, the head model rendering unit 23, and the controllable display area 5. A first-time user 1 of such an interactive display system 2 can understand at this point that he can interact with the display area D on the basis of his head-pose and gaze. To indicate to the system 2 that he has understood, the user 1 can, in this example, look at an appropriate symbol 57, shown here in a corner of the visually emphasized region 50, to terminate the 'teaching' mode.
The symbol 57 can be a 'button' comprising the words 'OK', 'cancel', 'continue', or similar easily understandable text. The symbol 57 could also simply be an arrow, which can easily be interpreted to mean 'carry on' or 'continue'. Looking at this symbol 57 for a predefined length of time, such as a dwell time of one or two seconds, causes the system 2 to remove the visual emphasis and to proceed with the gaze-based interaction. Alternatively, the user could cancel the teaching mode by simply pointing at the 'cancel' button or touching the appropriate part of the display; such an action can be identified by the observation means 3.
The subsequent gaze-based interaction, as explained above, can involve highlighting looked-at objects and displaying product-related information for the items in the display area D, for example by showing 'pop-up menus' in the electrophoretic screen. The actual gaze-based interaction is managed in the interaction control module 25, which also receives the head-pose information 22 as well as object position information 28 from a database 27, which keeps track of the placement of any objects in the display area D. With this information, the interaction control module 25 can determine which object is being looked at by the user 1, and can also determine for how long the user 1 has been looking at an object. Accordingly, the interaction control module 25 issues control signals 24 to drive the rendering means 5, for example to display product-related information.
Figs. 2a and 2b show an interaction with a different type of head-pose feedback being given to the user 1. In a similar scenario to Figs. 1a and 1b above, the user 1 is situated in front of a display area D with a number of products 14, 15, 16 laid out for presentation. Again, a detection means 4 and an observation means 3 detect the presence of the user 1 and monitor his head motion, respectively. In a control unit 20', a head-pose determination unit 21 analyses the data 30 supplied by the observation means 3 to deduce the head-pose of the user 1, and therefore also his gaze direction G. In this example, the user head model is a physical, mechanically controllable model Hphy of a person's head, placed at the front of the display area D so that it can 'look at' any of the objects 14, 15, 16 in the display area D. The model Hphy can have a realistic appearance, with eyes, nose, etc., so that it is immediately recognizable as a model of a human head. The head-pose determination unit 21 delivers a suitable signal 22 to a head model rendering unit 23', which in turn generates appropriate control signals 24' to drive the mechanically controllable model Hphy, for example by issuing signals to drive one or more stepper motors in the model Hphy, causing it to rotate and/or tilt to mimic the head-pose of the user 1. Additionally, the mechanically controllable model Hphy is equipped with a miniature laser projector 51, built into the model Hphy such that light L issued by the miniature laser projector 51 appears to originate from the 'eyes' of the model Hphy. The control signals 24' in this example also include control signals for the miniature laser projector 51, so that it illuminates the object 15 that the user 1 is looking at. In the diagram, this is indicated by the 'aura' around the looked-at object 15. In this way, the user 1 can understand that the system 2 is capable of tracking his gaze. The projector 51 could also cause product-related information to be projected onto a suitable backdrop (not shown) in the display area D, thus providing the user 1 with interesting product information for the item at which he is looking at any one time. In this example, the head-pose feedback system essentially comprises the head-pose determination unit 21, the head model rendering unit 23', the mechanically controllable model Hphy, and the miniature laser projector 51.
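By way of example only, converting the determined head-pose into drive commands for the pan and tilt stepper motors of such a physical model could look like the sketch below. The 200-step motor resolution, the coordinate convention (x right, y up, z toward the display), and the function names are assumptions, not details given in the patent.

```python
import math

STEPS_PER_REV = 200          # assumed stepper resolution (1.8 degrees per step)

def aim_at(model_pos, target_pos):
    """Pan/tilt angles (degrees) that make the model 'look at' a target
    object, e.g. so the built-in laser projector illuminates it."""
    dx, dy, dz = (t - m for t, m in zip(target_pos, model_pos))
    yaw = math.degrees(math.atan2(dx, dz))                   # pan (rotation)
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz))) # tilt (up/down)
    return yaw, pitch

def angles_to_steps(yaw_deg, pitch_deg):
    """Convert target pan/tilt angles into absolute stepper step counts."""
    steps_per_deg = STEPS_PER_REV / 360.0
    return (round(yaw_deg * steps_per_deg),     # pan motor
            round(pitch_deg * steps_per_deg))   # tilt motor
```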
To aid understanding, the scenario of Fig. 1 is shown as a schematic side view in Fig. 3, again with a display area D for an interactive display system 2 in which a projection unit 9 is used to project an image onto a rear-projection screen 5, for example a HoloScreen® 5. Again, an observation means 3 is used to observe the user's head H so that a head-pose for the user 1 and his gaze direction G can be determined as described above. In a teaching or instructive mode, the projection unit 9 can project a virtual head model and a virtual gaze vector onto the region 50 of the HoloScreen® 5 in the user's line of sight, so that the user 1 can easily see that his head-pose is being noted, and that he can interact with the display system using gaze. Later, in a normal gaze-based interaction, the projection unit 9 can show product-related information on the HoloScreen® 5 for any object 14, 15 looked at by the user 1. The region 50 shown is not restricted to the dimensions shown, but can cover any small or large area of the screen 5, or can even take up the entire screen 5. When in a 'standby' mode, or when no interaction is taking place, the HoloScreen® can simply be transparent.

Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention. For example, instead of physically arranging actual items in a display area, these could be shown virtually, for example projected on the display screen. With such an approach, the 'contents' of a display area can easily be changed at any time, for example using a computer user interface.
For the sake of clarity, it is to be understood that the use of "a" or "an" throughout this application does not exclude a plurality, and "comprising" does not exclude other steps or elements. A "unit" or "module" can comprise a number of units or modules, unless otherwise stated.

CLAIMS:
1. A method of presenting head-pose feedback to a user (1) of an interactive display system (2) comprising a display area (D), which method comprises the steps of: determining a head-pose for the user (1); visibly rendering a user head model (Hv, Hphy) in the display area (D); and driving the user head model (Hv, Hphy) according to the determined head-pose of the user (1).
2. A method according to claim 1, wherein the step of driving the user head model (Hv, Hphy) comprises mapping a motion of the user's head (H) to a corresponding motion of the user head model (Hv, Hphy) such that the motion of the user head model (Hv, Hphy) mimics the motion of the user's head (H).
3. A method according to claim 1 or claim 2, wherein the user head model (Hphy) comprises a physical head model (Hphy), and the step of driving the user head model (Hphy) comprises controlling the physical head model (Hphy) according to the determined head-pose of the user (1).
4. A method according to claim 1 or claim 2, wherein the user head model (Hv) comprises a virtual head model (Hv), and the step of driving the user head model (Hv) comprises rendering the virtual head model (Hv) according to the determined head-pose of the user (1).
5. A method according to claim 4, wherein a virtual head-pose vector (V) is visibly rendered in the display area (D) such that the virtual head-pose vector (V) appears to originate from the virtual head model (Hv).
6. A method according to claim 4 or claim 5, wherein the rendered virtual head model (Hv) and/or the rendered virtual head-pose vector (V) are at least partially transparent, and are rendered between the user (1) and the display area (D) such that the user (1) can see through the virtual head model (Hv) and/or the rendered virtual head-pose vector (V) into the display area (D).
7. A method of performing a gaze-based interaction between a user (1) and an interactive display system (2) comprising a display area (D) in which a number of (physical) objects (14, 15, 16) is arranged, and comprising an observation means (3), which method comprises the steps of: detecting the presence of the user (1) in front of a display area (D); observing the motion of the user's head (H) to determine a head-pose for the user (1); and presenting head-pose feedback to the user (1) using a method according to any of claims 1 to 6.
8. A method according to claim 7, which method comprises: determining a gaze heading for the user (1) on the basis of the determined head-pose for that user (1); and visually emphasizing a region (50, L) in the display area (D) according to the determined gaze heading.
9. A method according to claim 8, wherein visually emphasizing a region (50) in the display area (D) according to the determined gaze heading comprises rendering a virtual head-pose vector (V) to represent the determined gaze heading.
10. A method according to any of claims 7 to 9, wherein an object (15) in the display area (D) is identified on the basis of the determined gaze heading, and the display area (D) is controlled to visually emphasise that object (15).
11. A head-pose feedback system for presenting head-pose feedback to a user (1) of an interactive display system (2) comprising a display area (D), which system comprises: a head-pose determination unit (20) for determining a head-pose for the user (1) on the basis of an observed head-motion (30) for that user (1); a rendering means for visibly rendering a user head model (Hv, Hphy) in the display area (D); and a head model driving unit (23, 23') for driving the user head model (Hv, Hphy) according to the determined head-pose of the user (1).
12. A head-pose feedback system according to claim 11, in which the rendering means comprises a physical model (Hphy) of a head, which physical model (Hphy) is realized to be mechanically controllable in response to a signal (24') from the head model driving unit (23').
13. A head-pose feedback system according to claim 11, in which the rendering means comprises a projection area (5) between the user (1) and the display area (D), and a projection unit (9) realized to visually render a representation of the contents of the display area (D), a virtual head model (Hv), and a virtual head-pose vector (V) onto the projection area (5), in response to a signal (24) from the head model driving unit (23).
14. A head-pose feedback system according to claim 13, wherein the projection area (5) comprises a rear-projection screen (5).
15. An interactive display system (2) comprising a display area (D) in which a number of objects (14, 15, 16) is arranged; a detection means (3, 4) for detecting the presence of a user (1) in front of a display area (D); an observation means (3) for observing the motion of the user's head (H) to obtain head motion information (30); and a head-pose feedback system according to any of claims 11 to 14 for presenting head-pose feedback to the user (1).
PCT/IB2009/053783 2008-09-03 2009-08-31 Method of presenting head-pose feedback to a user of an interactive display system WO2010026519A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP08105211.0 2008-09-03
EP08105211 2008-09-03

Publications (1)

Publication Number Publication Date
WO2010026519A1 2010-03-11

Family

ID=41327260

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/053783 WO2010026519A1 (en) 2008-09-03 2009-08-31 Method of presenting head-pose feedback to a user of an interactive display system

Country Status (2)

Country Link
TW (1) TW201028888A (en)
WO (1) WO2010026519A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017014728A1 (en) 2015-07-17 2017-01-26 Hewlett-Packard Development Company, L.P. Rotating platform for a computing device


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004084054A2 (en) * 2003-03-21 2004-09-30 Queen's University At Kingston Method and apparatus for communication between humans and devices
EP1484665A2 (en) * 2003-05-30 2004-12-08 Microsoft Corporation Head pose assessment methods and systems
WO2007015200A2 (en) * 2005-08-04 2007-02-08 Koninklijke Philips Electronics N.V. Apparatus for monitoring a person having an interest to an object, and method thereof
US20070070072A1 (en) * 2005-09-28 2007-03-29 Templeman James N Open-loop controller
WO2007055865A1 (en) * 2005-11-14 2007-05-18 Microsoft Corporation Stereo video for gaming
WO2008012717A2 (en) * 2006-07-28 2008-01-31 Koninklijke Philips Electronics N. V. Gaze interaction for information display of gazed items

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HEURING J J ET AL: "Visual head tracking and slaving for visual telepresence", ROBOTICS AND AUTOMATION, 1996. PROCEEDINGS., 1996 IEEE INTERNATIONAL C ONFERENCE ON MINNEAPOLIS, MN, USA 22-28 APRIL 1996, NEW YORK, NY, USA,IEEE, US, vol. 4, 22 April 1996 (1996-04-22), pages 2908 - 2914, XP010163178, ISBN: 978-0-7803-2988-1 *
NAKATSURU T ET AL: "Image overlay on optical see-through displays for vehicle navigation", MIXED AND AUGMENTED REALITY, 2003. PROCEEDINGS. THE SECOND IEEE AND AC M INTERNATIONAL SYMPOSIUM ON 7-10 OCT. 2003, PISCATAWAY, NJ, USA,IEEE, 7 October 2003 (2003-10-07), pages 286 - 287, XP010662831, ISBN: 978-0-7695-2006-3 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BE1019551A3 (en) * 2010-10-25 2012-08-07 Mastervoice In Het Kort Mtv Nv USE OF A VIDEO CONFERENCE SYSTEM.

Also Published As

Publication number Publication date
TW201028888A (en) 2010-08-01

Similar Documents

Publication Publication Date Title
US11334145B2 (en) Sensory feedback systems and methods for guiding users in virtual reality environments
US20110141011A1 (en) Method of performing a gaze-based interaction between a user and an interactive display system
US20220101593A1 (en) Devices, Methods, and Graphical User Interfaces for Interacting with Three-Dimensional Environments
JP6730286B2 (en) Augmented Reality Object Follower
US20110128223A1 (en) Method of and system for determining a head-motion/gaze relationship for a user, and an interactive display system
ES2556678T3 (en) Automatic distribution of private showcases along a shop window
CN115167676A (en) Apparatus and method for displaying applications in a three-dimensional environment
US20210165484A1 (en) Information processing device, information processing method, and program
JP2022535316A (en) Artificial reality system with sliding menu
JP2023504992A (en) Posture-based virtual space composition
US20120188279A1 (en) Multi-Sensor Proximity-Based Immersion System and Method
JP2006301654A (en) Image presentation apparatus
CN115657848A (en) Device, method and graphical user interface for gaze-based navigation
EP3447610B1 (en) User readiness for touchless gesture-controlled display systems
KR20140109700A (en) Apparatus for displaying interactive image using transparent display, method for displaying interactive image using transparent display and recording medium thereof
WO2010026519A1 (en) Method of presenting head-pose feedback to a user of an interactive display system
Wischgoll et al. Display infrastructure for virtual environments
CN113168228A (en) Systems and/or methods for parallax correction in large area transparent touch interfaces
WO2012047905A2 (en) Head and arm detection for virtual immersion systems and methods
US20100045711A1 (en) System and method for control of the transparency of a display medium, primarily show windows and facades
EP2910151A1 (en) Interactive showcase with in-built display screen
KR102667544B1 (en) Sensory feedback systems and methods for guiding users in virtual reality environments
WO2024064231A1 (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
KR20230011146A (en) A apparatus for providing VR contents to visitors
KR20200031256A (en) Contents display apparatus using mirror display and the method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09787049

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09787049

Country of ref document: EP

Kind code of ref document: A1