EP3447610B1 - User readiness for touchless gesture-controlled display systems - Google Patents

User readiness for touchless gesture-controlled display systems

Info

Publication number
EP3447610B1
Authority
EP
European Patent Office
Prior art keywords
user
virtual
display
gesture
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP17187245.0A
Other languages
English (en)
French (fr)
Other versions
EP3447610A1 (de)
Inventor
Albrecht METTER
Artem SAVOTIN
Marcus Goetz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ameria AG
Original Assignee
Ameria AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ameria AG filed Critical Ameria AG
Priority to EP17187245.0A priority Critical patent/EP3447610B1/de
Priority to EP20202016.0A priority patent/EP3783461A1/de
Priority to PCT/EP2018/072340 priority patent/WO2019038205A1/en
Publication of EP3447610A1 publication Critical patent/EP3447610A1/de
Application granted granted Critical
Publication of EP3447610B1 publication Critical patent/EP3447610B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • the present invention generally relates to gesture-controlling systems, and more particularly to touchless gesture-controlled display systems, as well as methods and computer programs for operating such systems.
  • digital signage and interactive signage systems such as advertising panels, digital shelf panels etc.
  • these are equipped with a touch sensor, which allows the user to interact with the signage in order to perceive different content.
  • a customer in an automotive showroom could browse the model range on a touch-based screen.
  • a customer in a shopping mall could approach a large stationary touch display and look up the location of a particular shop by providing touch inputs on the display.
  • touch-based systems are generally well-accepted by users because the man-machine interface is intuitively familiar from today's ubiquitous smartphones.
  • touch systems have at least the following exemplary disadvantages:
  • a first exemplary disadvantage of touch-based systems is that the user is standing comparatively close to the display when interacting, thus - depending on the screen size - the user is required to step back and forth in order to see the content properly. This is especially the case for large displays.
  • a second exemplary disadvantage of touch-based systems is that, due to their nature, touch screens get dirty rather fast (fingerprints, grease etc.) and therefore require regular cleaning.
  • a third exemplary disadvantage is that people - especially in the US or China - are concerned about the hygiene of a touch screen because "you do not know who touched it before". This also becomes apparent from the fact that many touch-based screens have a disinfectant dispenser standing next to them.
  • touchless gesture controlled systems enable users to interact with content from a distance of typically 1 to 2.5 meters without any touching of the device. This also provides for a broader and more direct view on large screen displays without the need of stepping back and forth.
  • Despite the advantages of gesture-controlled systems, the technology is not yet that common. Typically, technology-savvy users know gesture control from game devices such as the Microsoft Xbox or from cars (e.g. Audi AirTouch). In other scenarios, such as the advertising and signage industry, however, gesture-controlled systems are not very common nowadays. Therefore, people often do not understand that they can interact with a gesture-controlled system, such as an interactive panel, via gestures, but perceive the display as an ordinary passive screen, or they try to apply touch gestures by tapping on the glass.
  • Gesture sensors may generally be used for different domains. However, in most cases the field of application only considers professional environments and the systems are based on the following two premises: First, the users are expected to be aware of the gesture-control availability. Second, the users are expected to be experienced with the usage of gesture-control. Due to these two premises, the applications typically do not offer and do not require any kind of awareness or training approach.
  • well-known examples are the Kinect 360 as well as its successor, the Kinect One, which belong to the Xbox 360 and Xbox One, respectively.
  • the approach of these systems is that several manuals are provided in order to explain gesture-control. However, these are not included in the actual application (e.g. the game), but on a website or in a printed manual. These manuals contain some static visualization of the concept of positioning the user.
  • one readily apparent drawback of this approach is that - in the case of interactive displays, in particular systems in public places directed at a priori unknown passers-by - people do not want to read manuals, nor do they have the time to do so.
  • EP 3 043 238 A1 describes a similar approach.
  • the drawback here is also the merely static acknowledgement of the user's gesture.
  • the approach only considers the response that a dedicated gesture has been detected but it does not consider the specific awareness-raising and training of the user.
  • the gesture-controlled rear-projection system "Virtual Promoter" of the applicant, which is described in EP 2 849 442 B2, has so far taken a different approach - the so-called position feedback.
  • This approach focuses more on the relation between a focus area, i.e. the area in front of the display area where the user has to stand in order to interact, and the real position of the user.
  • the user is visualized on the display as a small and static icon that represents the user and the focus area is visualized as a virtual marker on the display.
  • although the icon representing the user moves along with the user, it remains static and small.
  • both the visualization and technical implementation do not allow for an easy perception that the system can be controlled via touchless gestures and it has not yet been able to overcome the problem of raising awareness for the gesture-control functionality.
  • the main disadvantages of the prior art are the lack of usability as well as the lack of awareness of gesture controllability.
  • people do not understand that the interactive display is in fact interactive and that they can control it via gestures.
  • people typically do not map themselves on the screen, and thus do not understand that they can control the application via gestures.
  • it remains hidden to the user how he can interact with the application.
  • the implementation of existing approaches is always an individual task per project or per application, thus scalability and standardization are not feasible.
  • US 2011/0304632 A1 discloses techniques for interacting with a user interface via feedback provided by an avatar.
  • the avatar can move relative to the user interface control (e.g. either closer to or farther from the user interface control).
  • US 2014/0232816 A1 discloses a tele-immersive environment that provides interaction among participants using a mirror metaphor.
  • the present invention provides a touchless gesture-controlled display system.
  • the system may comprise a display for displaying one or more virtual user representations each corresponding to a user located nearby the display.
  • the system may be configured for increasing the size of the virtual user representation when the corresponding user moves towards the display and for decreasing the size of the virtual user representation when the corresponding user moves away from the display.
  • the user is virtually represented on the display of the system by a virtual user representation (also referred to as "avatar” or “virtual avatar”).
  • the display preferably displays an individual avatar for each user.
  • the size of the avatar changes in relation to the distance between the user and the display, i.e. the avatar's size increases when the distance decreases and vice versa.
  • the user instinctively recognizes that it is him who is virtually represented on the display of the system.
  • the user recognizes immediately that the system is interacting with him and that the content on the display is not only a self-induced arbitrary image. Therefore, the user is directly aware that he can interact with and control the system, i.e. that the system does not only provide passive one-directional information but provides a man-machine interface for the user.
  • the inventors have found that the mirror metaphor also helps to avoid false inputs, such as touch inputs: on their way towards the system (which they might otherwise take for a touch display), users see themselves approaching on the screen and realize that the system is not meant to be controlled via touch gestures. Thus, the users perceive that the display is not for touching, just as a mirror is.
  • the mirror metaphor can also help to avoid false inputs to the touchless gesture-controlled system and therefore help to provide an improved man-machine interface.
  • the system may also be configured for moving the virtual user representation on the display according to the user's movement.
  • the mirror metaphor is even more prominent, as not only the distance to the gesture-controlled system is reflected by the display of the device, but also the physical location of a person passing by with respect to the system. This way, the user may even more quickly recognize that the system is interacting with him and that the content on the display is not a self-induced arbitrary image.
  • the system is further configured for reflecting a movement of the user's head, such as nodding and/or moving to the side, on the virtual user representation.
  • the virtual representation of the user becomes even more realistic and more compliant with the mirror metaphor known to the user, thus enabling immediate recognition that the system interacts with the user.
  • the system is further configured for displaying a virtual control position and for displaying a visual indication for directing the user to a position, corresponding to the virtual control position, from where the system can be controlled by the user.
  • the visual indication comprises a graphical element which connects the virtual user representation's base to the virtual control position.
  • the visual indication comprises one or more arrows, more preferably a plurality of arrows.
  • the system is further configured for entering a training mode when the user is detected at a position from where the system can be controlled by the user.
  • the system is configured for simultaneously displaying a plurality of virtual user representations corresponding to a plurality of users and for entering the training mode when one of the plurality of users is detected at a position from where the system can be controlled by the user.
  • the mirror metaphor can be applied by the system even more consistently, thus enabling the plurality of users to immediately understand that the system is interacting with them.
  • the graphical interface makes it clear to the users which one is in charge of controlling the system (i.e. the user whose avatar "stands" on the virtual control position).
  • instead of entering the training mode, the system may also enter a control mode (which will be explained in more detail further below) straight away once the user (or one of the users) is detected at a position from where the system can be controlled.
  • after entering the training mode and/or control mode, the system is further configured for removing the visual indication from the display and/or for changing the color of the virtual user representation. This way, the system signals to the user that he has successfully completed the first step of interaction with the system.
  • the system is further configured for displaying one or more hands and/or arms of the virtual user representation corresponding to a physical location of the hands and/or arms of the user, and further configured for mapping the user's hand and/or arm movement.
  • the system is further configured for mapping the user's shoulder, elbow and hand movement. This way, the system shows to the user that his hands are of importance for the upcoming interaction between the system and the user. Furthermore, this can be further emphasized by applying the mirror metaphor to the user's shoulder, elbow and hand movement, thus achieving a more realistic mirroring provided by the system.
  • the system is further configured for displaying at least one graphical control element, such as a button.
  • the system is configured for displaying two graphical control elements, each graphical control element being displayed above a corresponding hand of the virtual user representation. This way, the training mode is started and interaction between the user and the system is further enabled by means of the at least one graphical control element. Further, the users understand that they are urged to move their hands, as the appearance of a graphical control element corresponds to a pop-up window, which is familiar to virtually all users, even those who use the system for the first time.
  • the system is further configured for displaying at least one virtual indication urging the user to select the at least one graphical control element.
  • the at least one virtual indication preferably comprises a graphical element which connects a hand of the virtual user representation to the at least one graphical control element.
  • the visual indication preferably comprises one or more arrows, more preferably a plurality of arrows. This way, the system teaches the user how to interact with it in case the user has not provided immediate feedback. This is preferably clarified by means of the virtual indication connecting a hand of the user to the at least one graphical control element. Thus, the connection between the user's hand in the real world and its role in controlling the system is further clarified.
  • the system is further configured for entering a control mode in which the display displays an application.
  • the system is preferably further configured for shrinking the virtual user representation.
  • This way, the user has been taught how to interact with the system.
  • an application can be displayed on the display of the system with which the user has learned to interact.
  • the achievement of the user having learned how to interact with the system may be preferably further acknowledged by shrinking the avatar. This way, the user realizes that the training is now over and that he is ready to interact with and control the application displayed on the display of the system.
  • the touchless gesture-controlled display system is usable in public places such as shopping windows, stores and/or trade shows. The system is particularly advantageous in public places, where users are very likely to be first-time, or at least inexperienced, users of a gesture-controlled system.
  • the touchless gesture-controlled display system is a stationary system. This way, the system may be employed as a stationary installation at locations of particular interest.
  • the touchless gesture-controlled display system is comprised in a single housing. This way, the system is easily transportable and/or more appealing to users. Further, due to the integral housing, the system components cannot be stolen or vandalized, which is important for systems in public places.
  • the touchless gesture-controlled display system comprises a rear-projection system. This way, the system can also be implemented on touchless gesture-controlled devices as those described in EP 2 849 442 B2 of the applicant.
  • the display comprises a display area with a height of approximately at least 32 inch, more preferably approximately at least 60 inch.
  • a virtual person / avatar can generally be displayed in any size, but users then tend not to perceive it as a person but rather as a video or cartoon. Therefore, the avatar / person ideally is real-size or nearly real-size.
  • the "smallest" avatar used in systems of the applicant is about 1.4 m high, and thus a screen of about 60 inch is preferred.
  • from a technical perspective, there is no upper limit. However, the following two factors need to be considered in terms of height: Firstly, the higher the content is placed, the harder it is to perceive (e.g. if there is content at a height of 3 m above the ground, people will probably not read it). Secondly, typical gesture control sensors limit the distance of the user to the screen to a maximum of 4 m. Thus, if the "hotspot" / control area is configured at 4 m, a user can probably view content up to about 3 m. In summary, the upper limit is a matter of concept rather than one of technology.
  • the system further comprises at least one of: a sound system for providing audio instructions; a gesture control module for enabling the one or more users to control the system with touchless gestures; a gesture sensor, preferably for generating RGB and/or depth images; a software module for processing gesture sensor data and for detecting users and/or user gestures; a software module for generating a skeleton and/or silhouette of the user; and/or a software module for reacting to detected gestures.
  • these components can be used in order to further improve the experience provided to a user by the gesture-controlled system.
  • the virtual user representation reflects gender, age and/or ethnicity of the corresponding user. This way, the mirror metaphor can be employed even more consistently, which enables the user to immediately recognize that the system is interacting with him and that the content on the display is not a self-induced arbitrary image.
  • the invention also provides a method for operating a touchless gesture-controlled display system as explained above.
  • a computer program comprising instructions for implementing the method is also provided.
  • Embodiments of the invention build upon a concept named "User-Readiness".
  • this approach extends the previously described position feedback approach of the applicant's prior systems by using appropriate visualizations that help users understand the availability of gesture control and that train them in its usage for interactive displays. That is, User-Readiness allows users to teach themselves that a system is gesture-based.
  • User-readiness presents a solution for making users aware of the fact that they can actually interact with a touchless gesture-controlled system (e.g. a screen, projection) purely via gestures but not via touch.
  • User-Readiness introduces a standard to the usage of gesture-control systems by making the user aware that he or she is actually controlling the system and/or digital application running on the system.
  • One main technical advantage is that User-Readiness explicitly conveys the functionality of gesture control and gets users to interact with gesture-controlled devices or applications.
  • Gesture controllers have recently been developing from game platforms towards business solutions, and companies are starting to integrate gesture controllers in order to implement interactive advertising channels such as mall displays or shopping windows.
  • people - especially in environments where they do not intend to interact - do not realize that it is possible to control a system via touchless gestures. Instead, people tend to try using touch control.
  • User-Readiness introduces the user to touchless gesture control and teaches him or her how to use the application. User-Readiness can be integrated into gesture systems to extend their possibilities, and it delivers a standard approach for making users learn how to use gesture control.
  • User-Readiness also makes it possible to associate a real human with the gesture control via a virtual representation (hereinafter "avatar" ), which preferably reflects user motions in a three-dimensional manner. It is possible to integrate the invention into any standard application that uses gesture control functionality. The invention unifies the way to introduce gesture-control and to teach users in the usage of gesture-controlled interactive displays.
  • Embodiments of the invention may comprise different parts, such as hardware components, software components and/or specific processes.
  • embodiments of the invention may comprise any subset of the following features:
  • Fig. 2 illustrates an exemplary touchless gesture-controlled system according to one embodiment in which the "User Readiness" approach can be implemented.
  • the system comprises the following parts: A gesture control sensor 1 which can detect and/or track humans. Suitable sensors are available in different products and/or solutions, such as the Kinect 360 or Kinect One of Microsoft.
  • a digital display 2 e.g. a screen, one or more monitors, an image projected by one or more projectors, a rear-projection foil irradiated by a rear-projection device etc., which displays graphical content.
  • Such displays are well known and a great variety of products is available.
  • a virtual user representation which reflects motions and/or behavior of a user 5 standing in front of the display 2.
  • the invention preferably utilizes an avatar which clearly represents a person without actually distinguishing between man and woman.
  • a virtual control position (virtual hotspot) 4, which reflects the ideal point of control for the user 5 (working area).
  • This virtual point 4 is mapped to a real-world physical point which represents the point of control, and the virtual avatar 3 is required to be located on top of this hotspot 4 in order for the user 5 to interact with the system.
  • This component also differs from other applications, which typically do not rely on a virtual button but simply place a physical object on the ground (e.g. a sticker or a plate).
  • a processing component e.g. an information handling system of any kind, e.g. a computer or laptop
  • Such an information handling system may be responsible for the actual computations and/or for broadcasting the content of an application to be displayed on the display 2.
  • Fig. 1 illustrates a method 100 of an exemplary embodiment of the "User-Readiness" approach which combines all of the advantageous aspects of the invention. It should be noted, however, that embodiments of the invention may also comprise only parts of the process, which can be implemented independently of each other.
  • the method may start at step 102 by detecting the user.
  • the method may display a virtual avatar on the display of the underlying system.
  • the method may mirror the user's motion and/or position on the avatar.
  • the method may guide the user to the working area (the location from where the user can interact with the system).
  • the method may introduce hand cursor control, e.g. by making the user aware that his hands are of particular importance for controlling the system.
  • the user may be trained by the system, e.g. by entering a training mode. If all or some of the previous steps have been successful, the method may activate an application at step 114, as the user is now ready for the experience.
  • the gesture sensor 1 detects the (potential) user 5 and technically tracks his or her position (see step 102 of Fig. 1 ), and displays a virtual avatar 3 corresponding to the user 5 (see step 104 of Fig. 1 ).
  • the system maps the location and/or movement of the user 5 to the virtual avatar 3, thus the avatar 3 always moves in the same direction (and speed) as the user 5.
  • This movement happens in a three-dimensional space, so not only the movement along the display 2 is tracked, but also the actual distance of the user 5 to the display 2.
  • the movement is mapped to the avatar 3 in a way that the size of the avatar 3 actually represents the distance of the user 5 to the screen 2. That is, if the user 5 is standing close to the display 2, the avatar 3 is bigger than for a user 5 who is standing far away.
  • the avatar 3 maps the position of the user 5 in the real world to the position in the virtual world and, due to this mapping, the user 5 actually perceives that the virtual avatar 3 represents him or her.
  • gesture-based systems typically only react on the presence and movement of a user (e.g. playing a sound when passing-by) or let the user control a fictional avatar on a fixed position.
  • the three-dimensional mapping of a user's behavior and movement to a virtual avatar 3 is one of the key factors of the present invention.
  • the movement of the user 5 is preferably also reflected on the head of the avatar 3, i.e. when the user 5 nods or moves his/her head to the side, this movement is also performed by the virtual avatar 3.
  • the invention does not require a physical marker or sticker for defining the optimal working area.
  • embodiments of the invention provide a virtual marker 4 on the display 2 ("virtual control position" 4), as illustrated in Fig. 2 . This has the advantage that physical markers, which get easily worn or removed completely over time, in particular in public places, are avoided.
  • the system guides the user 5 to the actual working area. As illustrated in Fig. 3 , this may be achieved by one or more visual indications 6.
  • the animated arrow shown in Fig. 3 according to a preferred embodiment of the invention is fixed on the center of the virtual hotspot 4, while its other end is connected to the virtual avatar 3. That is, as soon as the avatar 3 moves in the 3D-space, the arrow 6 moves accordingly, whereas the target destination of the arrow 6 always remains on the virtual hotspot 4.
  • This arrow 6 visually shows the user 5 what he or she has to do in order to position himself or herself on the working area (i.e., to position the virtual avatar on the virtual hotspot).
  • the arrow 6 clearly visualizes where the user 5 has to move.
  • Fig. 4 shows the user 5, more precisely his avatar 3, standing on the virtual control position 4.
  • the visual indication 6 e.g. the animated arrow as shown in Fig. 3
  • the color of the virtual avatar 3 may also change.
  • the movement of the head of the user 5 may still be mapped to the virtual avatar 3, still emphasizing that the virtual avatar 3 represents the user 5.
  • the first step for gesture control is completed and the user 5 can use the system via touchless gestures.
  • many users are not familiar with touchless gesture control and lack experience with it.
  • User-Readiness therefore considers another component: teaching gesture control.
  • Fig. 5 shows the system in a training mode.
  • the virtual avatar 3 may change slightly and hands 7 may appear as a part of the body of the virtual avatar 3.
  • Those hands 7 represent the hands of the interacting user 5, thus hand and arm movement of the user 5 is directly mapped and mirrored by the virtual avatar 3. That is, the full movement of both arms (including shoulder, elbow and hand) of the user 5 is preferably mapped on the virtual avatar 3. All motions of the user 5 are repeated by the virtual avatar 3 and the user 5 associates himself and his hands with the avatar 3.
  • the invention makes the user 5 understand that he can control a mouse cursor 7 via his hands - both left- and right-handed, as shown in the example of Fig. 5.
  • This is also a differentiation from other avatar-based solutions, in which such a "transformation" of the user typically does not happen, which further emphasizes the innovation.
  • touchless gesture-based interactive displays require a hand-cursor which is controlled via the hands of the user 5.
  • the invention demonstrates that the hand movement of the user 5 is mirrored to the movement of the virtual avatar 3, thus the user 5 understands that he or she controls the virtual avatar 3.
  • Fig. 6 shows the system in training mode.
  • graphical control elements 8 may appear, such as buttons which should be pressed. Due to their visualization and/or caption, it is clear that the user 5 can and should press one of them. This may be further emphasized by additional indications, such as audio instructions or the like (see above).
  • Two animated arrows 9 may be visualized on the display 2.
  • One of the ends of each arrow 9 may be fixed on the buttons 8 (separating left and right) and the other end is connected to the hands 7 of the virtual avatar 3 (similar to the approach of the animated arrow described earlier). That is, as soon as the virtual avatar 3 moves (i.e. the user 5 moves his or her hands), the arrows 9 move accordingly in the 3D-space as shown in Fig. 6 .
  • Due to the visualization, animation and/or movement, the user 5 understands how to control the virtual avatar 3, and he or she also perceives that the buttons 8 are the target to which one of the hands 7 should be moved.
  • the user 5 intuitively follows the instruction and understands how to use his or her hand in the real world as a hand-cursor 7 in the virtual world. This allows instant learning, which rapidly enables the user 5 to use gesture control for interactive displays.
  • Fig. 7 shows a user 5 interacting with the system in a training mode.
  • the user 5 moves the hand-cursor 7 over one of the graphical control elements 8, such as the two buttons.
  • a right-handed person will activate the right button 8 whereas a left-handed person will activate the left button 8.
  • After training how to use gesture control, the user 5 eventually activates a graphical control element 8, such as one of the two buttons. This activity demonstrates the successful execution of User-Readiness and proves that the "user is ready" for a gesture-controlled interactive display.
  • the system may enter a control mode, as shown in Fig. 8 .
  • the user 5 may receive a success message (either visual or verbal, or both) and an application may be started. Examples of such applications include, without limitation:
  • the virtual avatar 3 may shrink to a considerably smaller size in order to not cover the content of the application, as depicted in Fig. 8 .
  • the small virtual avatar 3 may be still present in order to always remind the user 5 that the user actually controls the application via gestures - this is also a difference and advantage compared to other approaches.
  • the small virtual avatar 3 may still mirror the user's gestures in order to remind the user 5 that he is in control of the system.
  • Touchless gesture-controlled systems are particularly advantageous in busy environments, such as shopping malls, pedestrian areas or trade fairs. That is, in most cases there is not only one user 5 standing in front of the display but many users (some of them passing by, others standing and watching). Unless the system is in control mode, it may not yet be clear who will actually control the system.
  • the invention is not limited to one user 5 but can visualize and/or track several users 5.
  • Fig. 9 shows such a multiple user use case.
  • the actual number of users 5 is not limited conceptually by the invention but only by the used hardware, in particular the gesture sensor 1 (e.g. the Microsoft Kinect can track up to six users).
  • Each user 5 is represented by a virtual avatar 3 in the 3D-space, thus the position as well as the size of the user 5 in the virtual world represents a mirroring of the real world environment.
  • the motions and position of the users 5 are mapped in a 3D-manner to the virtual world, so each individual user 5 can associate him or herself with the corresponding avatar 3.
  • In Fig. 9, three users are standing in front of the system, wherein one is already standing very close to the working area (virtual control position) whereas the other two are standing behind (potentially watching).
  • one drawback of the concept of User-Readiness presented herein is that the training of users prior to the actual usage of the system / application takes some time (tests have shown that the tutorial, i.e. the above sequence of the preferred embodiment, takes users approximately 10 to 15 seconds to finish). This time requirement means that some people quit the interaction without actually consuming the content. This is, for example, the case for impatient people, but also for people who are already familiar with gesture control and do not want to be lectured by some virtual avatar.
  • this drawback is accepted in embodiments of the invention because, as soon as User-Readiness is applied to real public applications, the amount of valid and relevant users will increase.
  • a "skip this" option is included to enable users to skip the training and go straight to the application.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Claims (12)

  1. A touchless gesture-controlled display system, comprising:
    a display (2) for displaying one or more virtual user representations (3), each corresponding to a user (5) located in the vicinity of the display (2), wherein the virtual user representation mirrors the position of the user (5);
    characterized in that
    the system is configured for displaying a virtual control position (4) and for displaying a visual indication (6) for directing the user (5) to a position, corresponding to the virtual control position (4), from where the system can be controlled by the user.
  2. The system according to claim 1,
    wherein the system is further configured for moving the virtual user representation (3) on the display according to the user's movement.
  3. The system according to claim 1 or 2,
    wherein the system is further configured for reflecting a movement of the user's head, such as nodding and/or moving to the side, on the virtual user representation (3).
  4. The system according to any one of the preceding claims,
    wherein the visual indication (6) comprises a graphical element which connects the base of the virtual user representation to the virtual control position; wherein the visual indication (6) preferably comprises one or more arrows, more preferably a plurality of arrows.
  5. The system according to any one of the preceding claims,
    wherein the system is further configured for displaying one or more hands (7) and/or arms of the virtual user representation (3) corresponding to a physical position of the hands and/or arms of the user (5), and is further configured for mapping the user's hand and/or arm movement, preferably for mapping the shoulder, elbow and hand movement.
  6. The system according to any one of the preceding claims,
    wherein the system is further configured for displaying at least one graphical control element (8), such as a button; wherein the system is preferably configured for displaying two graphical control elements (8), each graphical control element being displayed above a corresponding hand (7) of the virtual user representation (3).
  7. The system according to claim 6,
    wherein the system is further configured for displaying at least one virtual indication (9) urging the user (5) to select at least one graphical control element (8);
    wherein the at least one virtual indication (9) preferably comprises a graphical element which connects a hand (7) of the virtual user representation (3) to the at least one graphical control element (8); wherein the visual indication (9) preferably comprises one or more arrows, more preferably a plurality of arrows.
  8. The system according to any one of the preceding claims, wherein the touchless gesture-controlled display system is usable in public places, the public places being shopping windows, stores and/or trade shows; and/or
    wherein the touchless gesture-controlled display system is a stationary system; and/or
    wherein the touchless gesture-controlled display system is comprised in a single housing; and/or
    wherein the touchless gesture-controlled display system comprises a rear-projection system; and/or
    wherein the display (2) comprises a display area with a height of approximately at least 32 inches, or approximately at least 60 inches.
  9. The system according to any one of the preceding claims, further comprising at least one of:
    a sound system for providing audio instructions;
    a gesture control module (1) for enabling the one or more users (5) to control the system with touchless gestures;
    a gesture sensor for generating RGB and/or depth images;
    a software module for processing gesture sensor data and for detecting users and/or user gestures;
    a software module for generating a skeleton and/or silhouette of the user; and/or
    a software module for reacting to detected gestures.
  10. The system according to any one of the preceding claims,
    wherein the virtual user representation (3) reflects the gender, age and/or ethnicity of the corresponding user (5).
  11. A method for operating a touchless gesture-controlled display system, the system comprising a display (2) for displaying one or more virtual user representations (3), each corresponding to a user (5) located in the vicinity of the display (2), wherein the virtual user representation mirrors the position of the user, the method being
    characterized in that it comprises:
    displaying a virtual control position (4); and
    displaying a visual indication (6) for directing the user (5) to a position, corresponding to the virtual control position (4), from where the system can be controlled by the user.
  12. A computer program comprising instructions for implementing the method according to claim 11.
EP17187245.0A 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems Active EP3447610B1 (de)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP17187245.0A EP3447610B1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems
EP20202016.0A EP3783461A1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems
PCT/EP2018/072340 WO2019038205A1 (en) 2017-08-22 2018-08-17 User readiness for touchless gesture-controlled display systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP17187245.0A EP3447610B1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP20202016.0A Division 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems
EP20202016.0A Division-Into 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems

Publications (2)

Publication Number Publication Date
EP3447610A1 EP3447610A1 (de) 2019-02-27
EP3447610B1 true EP3447610B1 (de) 2021-03-31

Family

ID=59683481

Family Applications (2)

Application Number Title Priority Date Filing Date
EP20202016.0A Pending EP3783461A1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems
EP17187245.0A Active EP3447610B1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems

Family Applications Before (1)

Application Number Title Priority Date Filing Date
EP20202016.0A Pending EP3783461A1 (de) 2017-08-22 2017-08-22 User readiness for touchless gesture-controlled display systems

Country Status (2)

Country Link
EP (2) EP3783461A1 (de)
WO (1) WO2019038205A1 (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3985491A1 (de) 2020-10-19 2022-04-20 ameria AG Control method for touchless gesture control
DE102022107274B4 (de) 2022-03-28 2024-02-15 BEAMOTIONS Rüddenklau Hänel GbR (vertretungsberechtigte Gesellschafter: Rene Rüddenklau, 80687 München und Christian Hänel, 85586 Poing) System and method for gesture recognition and/or gesture control
CN114911384B (zh) * 2022-05-07 2023-05-12 Qingdao Hisense Smart Life Technology Co., Ltd. Mirror display and remote control method therefor
CN115097995A (zh) * 2022-06-23 2022-09-23 BOE Technology Group Co., Ltd. Interface interaction method, interface interaction apparatus and computer storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2188737A4 (de) * 2007-09-14 2011-05-18 Intellectual Ventures Holding 67 Llc Processing of gesture-based user interactions
US20100306685A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation User movement feedback via on-screen avatars
US20110107216A1 (en) * 2009-11-03 2011-05-05 Qualcomm Incorporated Gesture-based user interface
US8749557B2 (en) * 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
US9195345B2 (en) 2010-10-28 2015-11-24 Microsoft Technology Licensing, Llc Position aware gestures with visual feedback as input method
CN103797440B (zh) 2011-09-15 2016-12-21 Koninklijke Philips N.V. Gesture-based user interface with user feedback
US9325943B2 (en) * 2013-02-20 2016-04-26 Microsoft Technology Licensing, Llc Providing a tele-immersive experience using a mirror metaphor
US20150033192A1 (en) * 2013-07-23 2015-01-29 3M Innovative Properties Company Method for creating effective interactive advertising content
ES2629697T3 (es) 2013-09-16 2017-08-14 Ameria Gmbh Gesture-controlled rear-projection system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
EP3783461A1 (de) 2021-02-24
WO2019038205A1 (en) 2019-02-28
EP3447610A1 (de) 2019-02-27

Similar Documents

Publication Publication Date Title
Speicher et al. VRShop: a mobile interactive virtual reality shopping environment combining the benefits of on-and offline shopping
EP3447610B1 (de) User readiness for touchless gesture-controlled display systems
Herskovitz et al. Making mobile augmented reality applications accessible
US10055894B2 (en) Markerless superimposition of content in augmented reality systems
US20170345215A1 (en) Interactive virtual reality platforms
Varona et al. Hands-free vision-based interface for computer accessibility
KR102304023B1 (ko) System for providing interactive authoring service based on augmented reality
US20110304632A1 (en) Interacting with user interface via avatar
US11048375B2 (en) Multimodal 3D object interaction system
Aghajan et al. Human-centric interfaces for ambient intelligence
CN103218041A (zh) Enhanced camera-based input
Vermeulen et al. Proxemic flow: Dynamic peripheral floor visualizations for revealing and mediating large surface interactions
Vogiatzidakis et al. ‘Address and command’: Two-handed mid-air interactions with multiple home devices
Kurdyukova et al. Direct, bodily or mobile interaction? Comparing interaction techniques for personalized public displays
Mäkelä et al. " It's Natural to Grab and Pull": Retrieving Content from Large Displays Using Mid-Air Gestures
Santos et al. Developing 3d freehand gesture-based interaction methods for virtual walkthroughs: Using an iterative approach
Fourney et al. Gesturing in the wild: understanding the effects and implications of gesture-based interaction for dynamic presentations
Martins et al. Low-cost natural interface based on head movements
JP6834197B2 (ja) Information processing apparatus, display system, and program
CN103752010A (zh) Augmented reality overlay for control devices
Wang et al. Virtuwander: Enhancing multi-modal interaction for virtual tour guidance through large language models
CN112424736A (zh) Machine interaction
Doerner et al. Interaction in Virtual Worlds
JP6699406B2 (ja) Information processing apparatus, program, position information creation method, and information processing system
Bozgeyikli et al. Virtual reality interaction techniques for individuals with autism spectrum disorder

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180907

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200701

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: AMERIA AG

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20201124

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1377664

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210415

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017035569

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210630

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20210331

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1377664

Country of ref document: AT

Kind code of ref document: T

Effective date: 20210331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210731

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210802

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602017035569

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20220104

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210831

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210731

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210822

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210822

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210831

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602017035569

Country of ref document: DE

Representative=s name: BANZHAF, FELICITA, DIPL.-ING., DE

Ref country code: DE

Ref legal event code: R082

Ref document number: 602017035569

Country of ref document: DE

Representative=s name: BEST, BASTIAN, DIPL.-INF. UNIV., DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20170822

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230621

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230621

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230724

Year of fee payment: 7

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602017035569

Country of ref document: DE

Representative=s name: BEST, BASTIAN, DIPL.-INF. UNIV., DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20210331