WO2021175800A1 - Method and system for operating a menu of a graphical user interface based on a detection of a rotational free-space gesture - Google Patents


Info

Publication number
WO2021175800A1
Authority
WO
WIPO (PCT)
Prior art keywords
menu, rotational, free, scrolling, gesture
Application number
PCT/EP2021/055090
Other languages
French (fr)
Inventor
Finn Jacobsen
Wenhao CAI
Marcus Muttersbach
Lisa JÄSCHKE
Original Assignee
Gestigon Gmbh
Application filed by Gestigon Gmbh
Publication of WO2021175800A1

Classifications

    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F3/0236 Character input methods using selection techniques to select from displayed items
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/10 Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • B60K2360/113 Scrolling through menu items
    • B60K2360/115 Selection of menu items
    • B60K2360/126 Rotatable input devices for instruments
    • B60K2360/146 Instrument input by gesture
    • B60K2360/1464 3D-gesture
    • B60K2360/21 Optical features of instruments using cameras

Definitions

  • In some embodiments, the transformation ratio is defined as a configurable parameter and the method further comprises: (i) receiving a user input for selecting a desired value for that parameter; and (ii) using a transformation ratio corresponding to the selected value of the parameter for the subsequent operation of the rotational menu.
  • The user input may even itself be provided by means of a free-space gesture of a rotational or non-rotational type.
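As an illustration only (the patent prescribes no data model), such a configurable parameter might be held in a simple settings object; the class and function names below are invented for this sketch:

```python
# Sketch of a user-configurable transformation ratio (illustrative names).
from dataclasses import dataclass

@dataclass
class RotationalMenuConfig:
    # Scroll angle produced per unit of gesture rotation; the sign selects
    # whether the menu follows or opposes the gesture's direction of rotation.
    transformation_ratio: float = 0.5

def apply_user_ratio_selection(config: RotationalMenuConfig, value: float) -> None:
    """Step (i): receive the user's selected value; step (ii): use it from now on."""
    config.transformation_ratio = value
```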
  • Depending on the application, a higher transformation ratio (i.e. a higher scroll rate per angular speed of the rotational gesture) or a lower transformation ratio might be preferable. A lower transformation ratio may apply, in particular, to safety-relevant applications, where effectively avoiding the unintentional selection of a menu item neighboring the actually desired menu item might be a critical requirement.
  • In some embodiments, the transformation ratio is defined such that the scrolling direction coincides with the direction of rotation of the detected gesture. This may be considered a preferable choice in many situations, because it helps to simplify the interaction of the user with the HMI and to reduce his or her cognitive burden when providing an input. On the other hand, in some selected situations, using different directions of rotation for the scrolling motion on the one hand and the rotational gesture on the other hand may be useful, for example in gaming applications, where increasing the cognitive burden of the user might even be desired.
  • In some embodiments, the transformation ratio is defined such that the rotational speed of the scrolling motion is equal to or lower than the rotational speed of the detected rotational free-space gesture. This may be specifically useful in cases where there is a relatively limited number of available menu items in the rotational menu and where avoiding errors in the selection process is of high importance. It may also be useful in situations where the relevant user may have difficulties following a fast-moving scrolling motion, which may particularly be the case for some elderly or handicapped users.
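A per-frame coupling consistent with these embodiments might look as follows — a minimal sketch assuming the gesture detector delivers a rotation increment per frame; all names are illustrative, not from the patent:

```python
# Sketch: couple detected gesture rotation to menu scrolling via the
# transformation ratio T. |T| <= 1 keeps the scrolling speed at or below the
# gesture's rotational speed; a positive T makes both rotation directions
# coincide, a negative T opposes them.
def update_menu_angle(menu_angle: float, gesture_delta: float, T: float) -> float:
    return menu_angle + T * gesture_delta
```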
  • In some embodiments, the menu item that has been determined as a selected or preselected menu item is temporarily animated in the GUI so as to indicate its determined selection or preselection, respectively, to the user.
  • In some embodiments, the method further comprises emitting an individual characteristic sound associated with a particular menu item when, during the motion of the sequence of menu items in response to the free-space gesture, that particular menu item is currently located at the selection position.
  • In some embodiments, the method further comprises (i) receiving or taking with an image sensor a plurality of digital pictures; and (ii) associating each of at least two of the digital pictures with a respective one of the menu items of the rotational menu such that a respective one of these pictures is selected or preselected based on the detected free-space rotational gesture when its associated menu item is being selected.
  • The method may thus be used to scroll through a set of pictures that have been taken and to select a desired one, e.g. for triggering a presentation thereof or for magnifying its presentation within the GUI.
  • In some embodiments, the method further comprises displaying the selected or preselected picture in a predefined format, size or orientation differing from that of the associated menu item in response to its selection or preselection, respectively.
  • One possible application of these picture-related embodiments is browsing and (pre-)selecting so-called “selfies”, e.g. in an automotive environment, where a vehicle might be equipped with a photo camera to take such pictures. Particularly, taking such pictures might be triggered by another specific predetermined free-space gesture that can be detected by the above-mentioned free-space gesture detection sensor.
  • German patent application DE 102020106003.3, which is incorporated herein in its entirety by way of reference, discloses methods and systems for triggering picture-taking based on free-space gestures to be performed by a human user.
  • The methods and systems of the present invention may be combined with the methods and systems disclosed in DE 102020106003.3. Specifically, a same system may be used to implement both the method of the present invention and one or more of the methods disclosed in DE 102020106003.3 (cf. Fig. 1).
  • In some embodiments, the free-space gesture detection sensor is or comprises a time-of-flight, TOF, camera, which is used to detect said specific predetermined free-space gesture.
  • A TOF camera is a 3D camera system that measures distances on the basis of a time-of-flight (TOF) method. The measuring principle is based on the fact that the scene to be recorded is illuminated by a light pulse, and the camera measures for each pixel the time that the light takes to travel to the object and back again. This time is directly proportional to the distance due to the constancy of the speed of light. The camera thus provides the distance of the object imaged on each pixel.
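In formula form (a standard TOF relation, stated here for illustration and not taken from the patent text): for a measured round-trip time t, the distance is d = c · t / 2, with c the speed of light. A round-trip time of about 6.7 ns thus corresponds to a distance of roughly 1 m.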
  • A TOF camera system represents a particularly effective and high-resolution implementation option for a 3D image sensor and may particularly be used to reliably and efficiently detect free-space gestures.
  • In some embodiments, detecting said rotational free-space gesture comprises at least one of: (i) processing sensor data being generated by the free-space gesture detection sensor to examine whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture; and (ii) communicating sensor data being generated by the free-space gesture detection sensor to a remote (e.g. vehicle-external) processing platform, e.g. a server that can be reached over the Internet or another communication link, and receiving in response result data being generated by the processing platform and indicating whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture.
  • While option (i) is particularly useful for an autonomous solution that can be fully implemented by a system to be integrated into a vehicle, option (ii) has the advantage that external processing power, e.g. that of a powerful server, which might otherwise not be available for the HMI (e.g. in a vehicle) or would add to its complexity and cost, may be used instead. This applies, in particular, to the usually calculation-intensive processing needed in connection with properly recognizing a specific gesture based on sensor data provided by the free-space gesture detection sensor.
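A minimal sketch of option (ii) follows; the patent specifies no protocol, so the endpoint URL, payload layout and response fields below are invented assumptions for illustration only:

```python
# Hypothetical offloading of gesture recognition to a remote processing
# platform (option (ii)). All wire-format details are illustrative.
import json
import urllib.request

def classify_gesture_remotely(depth_frames: list[bytes], url: str) -> bool:
    payload = json.dumps({"frames": [f.hex() for f in depth_frames]}).encode()
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # The platform is assumed to answer with a boolean flag.
    return bool(result.get("rotational_gesture_detected", False))
```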
  • In some embodiments, the processing of the sensor data is performed separately from a process for controlling the free-space gesture detection sensor.
  • For example, the processing of the sensor data in connection with the detection of said specific predetermined rotational free-space gesture may be performed by a remote processing platform, such as a server in a backend that can be reached over the Internet or another communication link, while an application defining the GUI and which is used to control the operation of the free-space gesture detection sensor and the display may be run on a local (e.g. vehicle-based) processor platform (e.g. in a head unit or another control unit of the vehicle).
  • In some embodiments, each one of a predetermined limited set of multiple different predetermined free-space gestures that may potentially be performed by the user is defined as qualifying as said specific predetermined rotational free-space gesture.
  • For example, this may be used to enable the user to select among various scrolling options, such as, without limitation, different scrolling speed options (absolute transformation ratio values) or between positive and negative transformation ratios (i.e. matching or opposite rotational directions).
  • A second aspect of the present invention is directed to a system for operating a rotational menu of a graphical user interface, GUI, based on a detection of a free-space gesture being performed by a human user, the system being configured to perform the method of the first aspect of the present invention, as described in particular above.
  • A third aspect of the present invention is directed to a computer program, or a computer program product, comprising instructions to cause the system of the second aspect to perform the method of the first aspect of the present invention.
  • The computer program (product) may in particular be implemented in the form of a data carrier on which one or more programs for performing the method are stored, such as a CD, a DVD or a flash memory module.
  • This may be advantageous if the computer program product is meant to be traded as an individual product independent from the processor platform on which the one or more programs are to be executed.
  • In another embodiment, the computer program (product) is provided as a file on a data processing unit, in particular on a server, and can be downloaded via a data connection, e.g. the Internet or a dedicated data connection, such as a proprietary or local area network.
  • The system of the second aspect may accordingly have a program memory in which the computer program of the third aspect is stored. Alternatively, the system may also be set up to access a computer program available externally, for example on one or more servers or other data processing units, via a communication link, in particular to exchange with it data used during the course of the execution of the computer program or representing outputs of the computer program.
  • Fig. 1 schematically illustrates an exemplary vehicle comprising a system according to an embodiment of the present invention;
  • Fig. 2 schematically illustrates an exemplary GUI that is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4;
  • Fig. 3 shows a further illustration of an exemplary GUI, such as the GUI of Fig. 2, that is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4; and
  • Fig. 4 shows a flow chart illustrating an exemplary embodiment of the method according to the present invention.
  • As illustrated in Fig. 1, an exemplary vehicle 100 comprises a system 105 for controlling, in particular triggering, picture-taking of the interior of the vehicle 100, at least in parts.
  • System 105 comprises a free-space gesture detection sensor 110, which is arranged within the passenger compartment at an interior surface of the roof of the vehicle 100.
  • Free-space gesture detection sensor 110 is designed as a 3D image sensor of the “time-of-flight” type (TOF camera).
  • Free-space gesture detection sensor 110 is configured to detect free-space gestures G being performed within its field of view 115 by a human user U, such as a driver of vehicle 100.
  • By way of example, user U performs a specific predetermined free-space gesture G with two fingers of her right hand, wherein these two fingers form a “V”-shape within a virtual plane that is substantially horizontal and thus predominantly perpendicular to the predominantly vertical central direction of the cone defining the FOV 115 of gesture detection sensor 110. Accordingly, the “V”-shape is “visible” to and thus detectable as such by gesture detection sensor 110.
  • Similarly, user U may perform a rotational gesture by rotating the index finger of her hand around its major knuckle, e.g. such as to “draw” a virtual circle, and such a rotational gesture may be detected in a similar way, in particular if gesture detection sensor 110 is a 3D sensor.
  • System 105 further comprises an image sensor 120 in the form of a 2D photo camera which is sensitive in at least parts of both the visible and the infrared range of the electromagnetic spectrum, so that it can take pictures not only when the interior of vehicle 100 is illuminated, whether artificially or by daylight, but also when it is relatively dark, in particular at night.
  • Image sensor 120 is arranged at the rear-view mirror of the vehicle inside the passenger compartment and has a field of view 125, which predominantly extends towards and covers, at least in parts, the location of the driver seat and optionally also of one or more further passenger seats of vehicle 100. Accordingly, image sensor 120 is arranged to take photos or videos of one or more passengers of vehicle 100 while seated in their respective passenger seats, i.e. in particular “selfies”.
  • While the field of view 115 of gesture detection sensor 110 and the field of view 125 of image sensor 120 overlap in parts, they do not fully coincide, such that particularly a spatial area above the middle console of vehicle 100 is only located within FOV 115 of the gesture detection sensor 110, but not within FOV 125 of the image sensor 120. Accordingly, when a free-space gesture G is performed by user U within that spatial area, body parts or other objects user U may use to perform gesture G will not be pictured within photos or videos taken by image sensor 120.
  • Each one of sensors 110 and 120 may have its own associated illumination unit 111 or 121, respectively, for illuminating the related field of view 115 or 125, respectively, with electromagnetic radiation to which the respective sensor is sensitive, e.g. in the visible or infrared part of the electromagnetic spectrum.
  • Each of these illumination units 111 and 121 may be activated and deactivated individually. Specifically, they may be controlled such that at any given point in time only one or none of the two illumination units 111 and 121 is active.
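By way of illustration only (the control scheme below is our construction, not the patent's), this mutual-exclusion constraint could be enforced as:

```python
# Sketch: mutual exclusion of illumination units 111 (gesture sensor) and
# 121 (image sensor); at any point in time at most one unit is active.
class IlluminationControl:
    def __init__(self):
        self.active_unit = None  # "111", "121", or None

    def activate(self, unit: str) -> None:
        assert unit in ("111", "121")
        self.active_unit = unit  # activating one implicitly deactivates the other

    def deactivate_all(self) -> None:
        self.active_unit = None
```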
  • Furthermore, system 105 comprises a processing unit 130 being signal-coupled to both gesture detection sensor 110 and image sensor 120 in order to control them and receive their respective sensor data for further processing.
  • In particular, processing unit 130 may comprise in a respective memory one or more computer programs being configured as an application for free-space-gesture-triggered picture-taking, such that a user U can initiate picture-taking by performing, within FOV 115 of gesture detection sensor 110, a specific predetermined gesture corresponding to such picture-taking.
  • More generally, processing unit 130 may be configured to recognize one or more different predetermined gestures, if such are represented by the sensor data provided by gesture detection sensor 110. Accordingly, gesture detection sensor 110 in combination with processing unit 130 is then capable of detecting one or more different predetermined free-space gestures, including rotational free-space gestures, being performed by user U within the FOV 115 of sensor 110. In particular, one or more of these gestures may be predetermined as being gestures which, when properly detected, trigger picture-taking by image sensor 120.
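As an illustration of what such recognition might involve (a toy construction of ours, not the patent's algorithm), the accrued rotation of a tracked fingertip can be estimated from its trajectory; extracting fingertip coordinates from the TOF depth frames is assumed to happen elsewhere:

```python
import math

# Toy estimator: accrued rotation angle of a fingertip trajectory around the
# trajectory centroid; assumes at least two tracked 2D points.
def accrued_rotation(points: list[tuple[float, float]]) -> float:
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    total = 0.0
    prev = math.atan2(points[0][1] - cy, points[0][0] - cx)
    for x, y in points[1:]:
        cur = math.atan2(y - cy, x - cx)
        delta = cur - prev
        # Unwrap to (-pi, pi] so that full turns accumulate instead of wrapping.
        if delta <= -math.pi:
            delta += 2.0 * math.pi
        elif delta > math.pi:
            delta -= 2.0 * math.pi
        total += delta
        prev = cur
    return total  # radians; the sign encodes the direction of rotation
```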
  • Optionally, system 105 may further comprise a communication unit 135 being configured to exchange data via a communication link with a vehicle-external processing platform 145, such as a backend server.
  • Particularly, the communication link may be a wireless link, and accordingly communication unit 135 may be signal-connected to an antenna 140 of the vehicle 100 to therewith send and receive RF signals for communication over the wireless link.
  • This setup may be used to communicate sensor data being generated by the gesture detection sensor 110 to processing platform 145 for performing a similar processing as described above with reference to processing unit 130 in the context of gesture detection.
  • This may be specifically advantageous when a significant number of different and sometimes complex gestures needs to be reliably detected, such that a high processing power is needed to perform the recognition and discrimination of these various gestures.
  • In such cases, it might be easier and more efficient to provide such high processing power outside of the vehicle on a dedicated processing platform 145 instead of designing processing unit 130 as a respective high-processing-power unit.
  • The latter approach might have all kinds of negative effects, including higher average cost of gesture detection per vehicle and gesture, or more demanding space, power, or cooling requirements etc.
  • Furthermore, system 105 comprises an HMI comprising a display 150 for displaying information to user U.
  • The HMI is configurable to display a rotational menu, as will be described below in detail with reference to Figs. 2 and 3.
  • Processing unit 130 may particularly be configurable, e.g. by means of a respective computer program, to also control display 150 and to perform, in conjunction particularly with display 150 and gesture detection sensor 110, the method according to the present invention, such as method embodiment 300 illustrated below with reference to Fig. 4.
  • The exemplary GUI 200 of Fig. 2, which is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4, is designed to be presented on a display screen 150 (cf. Fig. 1), which may particularly be a rectangular screen.
  • GUI 200 is defined on a virtual plane comprising a first section 205, which is currently not being displayed on display 150, and a second section 210, which has substantially the same shape as the display screen 150 and is currently being displayed thereon.
  • A plurality of menu items 215, for example photos or other graphical objects, are uniformly distributed in sequence along an arc, which may particularly be a circular arc, or a section thereof, as shown in Fig. 2.
  • The distribution of the menu objects along the arc may in particular be referred to as a menu wheel, specifically when the arc is defined as a closed (i.e. endless) loop, such as a circle or an ellipse.
  • In response to a detected rotational free-space gesture, the sequence of menu items 215 (e.g. menu wheel) is moved (rotated) according to a transformation ratio T along a direction of rotation 220, as will be explained below in more detail with reference to Fig. 3. Accordingly, during that motion of the sequence of menu items, at any given point in time only a respective section of that sequence is being displayed in section 210 of GUI 200, while the remaining menu items 215 are currently located in the non-displayed section 205 of GUI 200 until, when the detected gesture continues, they are eventually moved into or even completely across the displayed section 210.
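For illustration (our construction, not taken from the patent), the on-screen position of each menu item 215 on a circular menu wheel could be computed as follows, with only items whose positions fall inside section 210 actually drawn:

```python
import math

# Sketch: positions of n uniformly spaced menu items on a circular menu
# wheel; `scroll_deg` is the current scrolling offset of the whole sequence.
def item_position(index: int, n_items: int, scroll_deg: float,
                  cx: float, cy: float, radius: float) -> tuple[float, float]:
    alpha = 360.0 / n_items                        # uniform spacing along the loop
    angle = math.radians(index * alpha + scroll_deg)
    return cx + radius * math.cos(angle), cy + radius * math.sin(angle)
```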
  • A specific position, which can be occupied by one menu item 215a at a time, is defined as a selection position 225.
  • For example, this position may be defined as a position in the center of the visible part of the arc within section 210 of GUI 200.
  • A menu item 215a which is located at the selection position 225 when a termination of a predetermined rotational free-space gesture is being detected is thereby selected or preselected as a user’s choice.
  • Fig. 3 shows a detailed view of a subsection of the sequence of menu items 215, which for the purpose of simplicity of illustration comprises only three exemplary menu items 215-1, 215-2 and 215-3.
  • Another angle α may be defined to indicate the angle between the respective positions of two neighboring menu items within the sequence along the arc.
  • This angle α may be used to define a criterion for determining whether, upon termination of the rotational gesture, (i) the menu item 215 currently leaving the selection position is chosen as the selected or preselected menu item, or (ii) instead the immediately following menu item, which is about to reach the selection position but which at that point in time might not have completely entered the selection position yet.
  • Method 300 comprises a step 305, wherein a free-space gesture detection sensor 110 of the TOF camera type continuously “listens”, i.e. monitors, its field-of-view 115, for free-space gestures being performed by a user U, e.g. a driver or other passenger of vehicle 100.
  • While gesture detection sensor 110 is active, its illumination unit 111 is also active and illuminates the field-of-view 115, while illumination unit 121 of image sensor 120 is deactivated at that time. Accordingly, the sensing activity of gesture detection sensor 110 is not adversely affected by illumination being emitted by illumination unit 121 of image sensor 120 (de-synchronization).
  • When the gesture detection sensor detects the start of a predetermined rotational free-space gesture being performed by a user U and qualifying as an operation of the GUI shown in Figs. 2 and 3, it triggers step 310, in which the sequence of menu items 215 (menu wheel) is being scrolled according to a preconfigured parameter defining the transformation ratio T, as explained above with reference to Fig. 3. If T is a positive value, then the direction of rotation of the menu wheel corresponds to that of the rotational gesture; otherwise it is opposed to it. The higher the absolute value of T, the faster the scrolling motion of the menu wheel will be relative to the motion of the gesture, i.e. to the rotational motion of the user’s finger performing the gesture.
  • Upon termination of the gesture, the total accrued scrolling angle φ of the menu wheel is compared to a predefined threshold angle φt, which may particularly be defined as a value between 40% and 60%, e.g. 50%, of the angle α between the respective positions of two neighboring menu items within the sequence along the arc, as illustrated in Fig. 3.
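As a worked example (assuming φ is measured relative to the last position at which a menu item was centered at the selection position; the patent text leaves the reference point implicit): with α = 30° and φt = 0.5 · α = 15°, an accrued angle of φ = 10° stays below φt, so the menu wheel snaps back, whereas φ = 20° exceeds φt, so the next menu item is pulled to the selection position.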
  • If the accrued scrolling angle φ does not reach the threshold angle φt, a step 330 is triggered, in which the menu wheel is automatically rotated back to an initial position, which may particularly be defined as that position of the menu wheel in which the last menu item 215a that was about to leave the selection position 225 during the scrolling motion was still centered at the selection position 225. Then the method 300 loops back to step 305 for another run.
  • If, on the other hand, the threshold angle φt is reached or exceeded, a step 335 is triggered, in which the menu wheel is automatically rotated forward to a position such that a next menu item immediately following the last menu item 215a that was about to leave the selection position 225 during the scrolling motion moves to the center of the selection position 225.
  • Optionally, a confirmation step 340 follows, in which the preselection of menu item 215a being currently present at the selection position after the automatic rotation can be confirmed, or discarded, by an additional user input as an intended selection.
  • Upon selection, or confirmed preselection, of a menu item, a specific event being associated with the new position of the menu wheel, i.e. with the selected menu item, is triggered in a step 350.
  • For example, such an event could comprise displaying a magnified version of the selected menu item, e.g. a photo, within the GUI 200.
  • In other embodiments, steps 340 and 345 are omitted, such that step 350 immediately follows step 335.
  • 100 vehicle
  • 105 system for free-space gesture-based triggering of picture-taking
  • 110 free-space gesture detection sensor of TOF type
  • 111 illumination unit for free-space gesture detection sensor 110
  • 115 field-of-view of free-space gesture detection sensor 110
  • 120 image sensor, photo camera
  • 121 illumination unit for image sensor 120
  • 125 field-of-view of image sensor 120
  • 130 processing unit
  • 145 vehicle-external processing platform, e.g. server
  • 150 display

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention is directed to a method and a corresponding system for operating a rotational menu of a GUI, based on a detection of a free-space gesture. The method comprises (i) detecting, using a free-space gesture detection sensor, a rotational free-space gesture being performed by a user; (ii) displaying on a display device at least a section of a rotational menu comprising a plurality of individually selectable menu items arranged in a sequential order along an arc comprising a specific predefined selection position along the arc; (iii) synchronously and uniformly scrolling the sequence of menu items in the GUI along the arc in response to the detected gesture such that the scrolling speed is aligned, according to a predefined transformation ratio, with a detected rotational speed and direction of rotation of the detected gesture; (iv) determining a start and a termination of the detected gesture and an accrued rotational scrolling angle or distance between a start point and an end point of the scrolling motion; (v) when the accrued scrolling angle or scrolling distance, respectively, reaches or extends beyond a predefined threshold value, automatically further scrolling the sequence of menu items by a remaining angle or distance such that thereby the respective next menu item is being moved along the direction of rotation to the selection position, and otherwise automatically reversing the scrolling motion of the sequence of menu items so as to return the last menu item that previously occupied the selection position back to the selection position; and (vi) determining as a currently selected or pre-selected menu item that particular menu item which is located at the selection position after termination of the automatic scrolling.

Description

METHOD AND SYSTEM FOR OPERATING A MENU OF A GRAPHICAL USER INTERFACE BASED ON A DETECTION OF A ROTATIONAL FREE-SPACE GESTURE
The present invention relates to the field of human machine interfaces (HMI), e.g. for vehicles, such as automobiles. Specifically, the invention is directed to a method and a system for operating a menu of a graphical user interface (GUI) based on a detection of a rotational free-space gesture being performed by a human user, e.g. a driver or other passenger of a vehicle.
Modern HMIs, including such for vehicles, are no longer restricted to providing various push buttons or switches or other conventional input devices for receiving user inputs. Specifically, in addition to touch-sensitive surfaces, including in particular touchscreens, the detection of free-space gestures being performed by a human user has recently been added by some car manufacturers to the set of available input methodologies for automotive HMIs. Typically, such gestures are detected by a single 2D or 3D camera provided in the vehicle when they are being performed in the camera’s field of view and are used to control a functionality, e.g. a sound volume, of an entertainment system, e.g. a head unit, of a vehicle, or to turn on/off a light in the vehicle. Non-automotive applications where HMIs based on detection of free-space gestures may be used include particularly HMIs for medical devices or electronic gaming equipment.
Another trend that has recently become important is the taking of so-called “selfies”, i.e. digital snapshots, i.e. photos or videos, of oneself and optionally also others, often for use on social networks. Usually, handheld electronic devices such as smartphones or tablet computers are used for that purpose. While taking and browsing through such pictures in stationary environments, such as for example in restaurants, offices or private homes, has become ubiquitous based on the use of such handheld electronic devices, taking and browsing pictures in the same way inside a moving vehicle, such as a car, might be in conflict with safety requirements and given space constraints, in particular if the driver is meant to take or browse the pictures himself or to be on the pictures. Accordingly, fixed image sensors of various types may be integrated in the vehicle itself at one or more suitable locations along with respective HMIs for their operation and for handling pictures taken with them. However, browsing or selecting pictures or other objects in an HMI remains a challenge, including a safety challenge, in such an environment. It is an object of the present invention to provide an easy-to-use HMI for receiving user inputs, which HMI can be reliably and safely operated by a human user even in a safety-relevant environment, such as a moving vehicle.
A solution to this problem is provided by the teaching of the independent claims. Various preferred embodiments of the present invention are provided by the teachings of the dependent claims.
A first aspect of the invention is directed to a method of operating a rotational menu of a graphical user interface, GUI, based on a detection of a free-space gesture being performed by a human user. The method comprises: (i) detecting, using a free-space gesture detection sensor, a rotational free-space gesture, e.g. a continuous rotational arm, forearm, or finger motion, being performed by a human user in a field of view of the free-space gesture detection sensor; (ii) displaying on a display device at least a section of a rotational menu comprising a plurality of individually selectable menu items arranged in a sequential order along an arc comprising a specific predefined selection position along the arc; (iii) synchronously and uniformly scrolling the sequence of menu items in the GUI along the arc in response to the detected rotational free-space gesture such that the scrolling speed is aligned, according to a predefined transformation ratio, with a detected rotational speed and direction of rotation of the detected rotational free-space gesture; (iv) determining a start and a termination of the detected rotational free-space gesture and an accrued rotational scrolling angle or distance between a start point of the scrolling motion of the menu items that corresponds to the start of the gesture and an end point of the scrolling motion of the menu items that corresponds to the termination of the gesture; (v) when the accrued scrolling angle or scrolling distance, respectively, reaches or extends beyond a predefined threshold value, automatically further scrolling the sequence of menu items by a remaining angle or distance such that thereby the respective next menu item is being moved along the direction of rotation to the selection position, and otherwise automatically reversing the scrolling motion of the sequence of menu items so as to return the last menu item that previously occupied the selection position back to the selection position; and (vi) determining as a currently selected or pre-selected menu item that particular menu item which is located at the selection position after termination of the automatic scrolling.
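A minimal sketch of the threshold logic of steps (iv) to (vi) follows, assuming uniformly spaced menu items (angular spacing α), scrolling in the positive direction, and the accrued angle measured from the last snapped position; the function and parameter names are our own illustrative choices, not the patent's:

```python
# Sketch (not from the patent) of the snap decision after gesture termination.
def snap_scrolling(accrued_angle: float, alpha: float,
                   threshold_fraction: float = 0.5) -> float:
    """Return the automatic corrective scroll applied after the gesture ends."""
    # Offset of the menu wheel past the last position where an item was
    # centered at the selection position.
    remainder = accrued_angle % alpha
    if remainder >= threshold_fraction * alpha:
        # Threshold reached: scroll further so the next item reaches the
        # selection position (step (v), first branch).
        return alpha - remainder
    # Threshold not reached: reverse, returning the previous item to the
    # selection position (step (v), second branch).
    return -remainder
```

The item located at the selection position after applying the returned corrective angle is the selected or pre-selected item of step (vi).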
The term " free-space gesture", as used herein, refers particularly to a bodily motion or state of a human user, i.e. a gesture, being performed in ordinary three-dimensional space to control or otherwise interact with one or more devices, e.g. with one or more display screens displaying a GUI, but without a. need to physically touch them. Accordingly, the term “tree-space gesture detection sensor”, as used herein, refers to a sensor being adapted to detect a tree-space gesture. In particular and without limitation, such a sensor may comprise a camera of the time-of-flight (TOF) type.
The term “field of view” (FOV) of a sensor, as used herein, refers to the spatial extent of the observable world that is detectable at any given moment by the sensor. In the case of optical instruments or sensors, e.g. cameras, it is usually a solid angle through which a detector is sensitive to electromagnetic radiation.
The term “arc”, as used herein, refers to a curve or line which is not straight, at least not everywhere. Particularly, an arc may be defined on a two-dimensional surface and may be bent only in one direction. Specifically, an arc may be, without limitation, a circular arc. An arc may also be a closed loop of any form, e.g. a complete circle or ellipse.
The terms “first”, “second”, “third” and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.
Where the term "comprising" or “including” is used in the present description and claims, it does not exclude other elements or steps. Where an indefinite or definite article is used when referring to a singular noun e.g. "a" or "an", "the", this includes a plural of that noun unless something else is specifically stated.
Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
The above method provides an easy-to-use and easy-to-learn contactless HMI in the form of a GUI for receiving user inputs in the form of free-space gestures, more specifically rotational free-space gestures, by means of which a selection or preselection of a menu item from a menu of multiple items displayed on a display of the HMI may be made reliably and safely by a human user, even in a safety-relevant environment, such as a moving vehicle. This is specifically crucial for achieving a high user satisfaction level for such an input method. Furthermore, using rotational free-space gestures as a basis for the (pre-)selection process has the advantage that typically a human user can perform such a rotational gesture in an “endless” manner, in contrast to most other gestures (e.g. linear gestures) that suffer from limitations being inherent in the movability of respective body parts of a human user. For example, a swiping, panning or pinching gesture (e.g. from left to right or top/down) naturally comes to an end when the possible range of motion of the finger or hand performing the gesture is exhausted, while for example a rotational free-space gesture such as turning a finger, preferably an index finger, around its first (major) knuckle, or a whole arm around the shoulder joint, in circles does not.
Performing such a free-space gesture is generally easier than operating a specific button, switch or other classical HMI element, because it suffices that the gesture be performed within the field of view of the free-space gesture detection sensor without the need to exactly position a finger on a typically small particular spot of a classical HMI element, e.g. a particular surface of a pushbutton or toggle switch. Furthermore, there is no need for the user to watch the performance of the gesture in order to carry it out correctly. This allows the user, in particular a driver of a vehicle, to continue watching the environment, esp. the traffic around him, while performing the gesture.
Thus, defining, in accordance with the method, the operation of the HMI such that it is based on detection of rotational free-space gestures and on a plurality of menu items being arranged in sequence along an arc is not just an arbitrary choice, but rather a very specific choice. It allows for a particularly efficient, reliable and easy-to-use way of performing and detecting user inputs for the purpose of making a selection or preselection among a given limited or even unlimited set of available choices (menu items) provided by the HMI.
Furthermore, the user can comfortably adjust the rotational speed of the rotational free-space gesture and thus indirectly also that of the scrolling motion of the sequence of menu items at any moment as he or she likes, without exiting a current input process or GUI mode and without having to modify any further parameters.
In the following, preferred embodiments of the method are described, which can be arbitrarily combined with each other or with other aspects of the present invention, unless such combination is explicitly excluded or technically impossible.
In some embodiments, the menu items of the rotational menu are uniformly distributed along the arc. In addition, the threshold value is predefined as corresponding to at least 30%, preferably at least 50%, of the scrolling angle or distance needed to replace, by way of the scrolling motion, a first menu item being located at the selection position by an immediately following menu item in the sequence of menu items at that selection position. Such a choice of the threshold value has been found to be advantageous, because it defines a sufficiently high threshold for the amount of rotational motion, by performing the rotational free-space gesture, that is necessary to move the sequence of menu items by one position (or equivalently a respective next menu item to the selection position). Furthermore, the threshold value is preferably defined as corresponding to not more than 80%, preferably to not more than 60%, of the scrolling angle or distance needed to replace, by way of the scrolling motion, a first menu item being located at the selection position by an immediately following menu item in the sequence of menu items at that selection position. This has been found to be advantageous as well, because it limits the amount of rotational motion by performing the rotational free-space gesture that is necessary to move the sequence of menu items by one position, or equivalently a respective next menu item to the selection position, and thus helps to increase user comfort when performing the gesture.
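Continuing the illustrative snap_scrolling sketch introduced above (same assumptions, same invented names), the preferred threshold band translates into concrete snap behavior:

```python
# Usage of the snap_scrolling sketch from above (alpha = 30 deg spacing,
# threshold fraction 0.5, i.e. 15 deg): below the threshold the wheel snaps
# back, at or above it the next item is pulled to the selection position.
print(snap_scrolling(10.0, alpha=30.0, threshold_fraction=0.5))  # -10.0 (snap back)
print(snap_scrolling(20.0, alpha=30.0, threshold_fraction=0.5))  # +10.0 (advance)
```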
In some embodiments, detecting the rotational free-space gesture being performed by a human user in a field of view of the free-space gesture detection sensor comprises detecting a rotational motion of one or more of an arm, a forearm, or at least one finger, e.g. an index finger, of a hand of the human user. All of these rotational motions with any one or more of these body parts have the advantage that they may be performed, in principle, as continuous free-space gestures with a freely selectable starting point, end point, and number of rotations (or accrued rotational angle), thus enabling a very broad range of different possible user inputs. This in turn allows using even a menu having a large number of different menu items from which the user can select.
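Purely for illustration, and without limiting how the gesture recognition is actually implemented, the following Python sketch accumulates the accrued rotational angle of a tracked body part (e.g. a fingertip) from a sequence of positions projected onto the plane of rotation; the center of rotation is assumed to be estimated elsewhere, and all identifiers are hypothetical.

```python
import math

def accrued_rotation_deg(points, center):
    """Accumulated signed rotation angle (degrees) of a tracked point,
    e.g. an index fingertip, around a given center of rotation.

    `points` is a sequence of (x, y) positions projected onto the plane
    of rotation; positive values indicate counter-clockwise rotation.
    """
    cx, cy = center
    total, prev = 0.0, None
    for x, y in points:
        ang = math.atan2(y - cy, x - cx)
        if prev is not None:
            d = ang - prev
            # Unwrap across the -pi/+pi boundary so that full circles
            # keep accumulating instead of wrapping back to zero.
            if d > math.pi:
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return math.degrees(total)

# Example: a quarter circle sampled in three steps accrues roughly 90 degrees.
pts = [(1, 0), (math.cos(0.5), math.sin(0.5)), (0, 1)]
print(round(accrued_rotation_deg(pts, (0, 0))))  # 90
```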
In some embodiments, the arc of the rotational menu is designed as an endless closed loop, preferably as a regular closed loop, such as for example a circle or ellipse. Such a closed-loop design might particularly be a preferable choice in connection either with a defined limited number of menu items, such as a defined set of selectable functionalities, or even with a previously unknown number of menu items, such as a potentially variable number of photos or music album covers to be browsed. Designing the arc as a regular shape has the advantage that it helps to increase the simplicity of a corresponding rotational free-space gesture to be performed. Particularly, such an approach might be advantageous if the rotational free-space gesture is meant to match the shape of the arc (loop) to a high degree, thus simplifying the learning and actual performance of such a gesture.

In some embodiments, the method further comprises triggering a predefined event in response to the determination of the selected menu item based on the detected free-space rotational gesture, wherein the event is associated with the selected menu item. Specifically, such an event might be a particular functionality of the HMI itself, or of one or more other functionalities or devices being controllable via the HMI. For example, such functionalities might relate to selecting a mode of operation of a particular device, e.g. an entertainment or navigation system of a vehicle. Thus, according to these embodiments, the method may be used not only to select a particular menu item, but also to trigger a corresponding desired event being associated with that particular menu item.
In the alternative, in some embodiments, the method further comprises triggering a predefined event in response to a combination of both the determination of the pre-selected menu item based on the detected free-space rotational gesture and an additional user input confirming the pre-selection of the pre-selected menu item as an intended selection, wherein the event is associated with the thus-selected menu item. By way of example, such an additional user input might be provided by the user performing another predetermined free-space gesture, e.g. a non-rotational free-space gesture, such as for example a linear motion of a body part within the field of view of the free-space gesture detection sensor. These embodiments allow for a two-step selection process, wherein the first (rotational) free-space gesture is used to make a preselection among a set of available options (menu items), while a second, different free-space gesture (or any other predetermined user input) is used to confirm the selection and to trigger the event only once that confirmation has been made. This may particularly be useful in applications, e.g. safety-relevant applications, where avoiding unintended inputs might be crucial.
In some embodiments, the transformation ratio is defined as a configurable parameter and the method further comprises: (i) receiving a user input for selecting a desired value for that parameter; and (ii) using a transformation ratio corresponding to the selected value of the parameter for the subsequent operation of the rotational menu. Specifically, in some of these embodiments, the user input may itself be provided by means of a free-space gesture of a rotational or non-rotational type. The possibility to configure the transformation ratio, in particular by the user himself or herself, allows for achieving an optimal personal fit of the behavior of the HMI to the user's preferences, which may vary from user to user or from application to application. For example, when scrolling through a large list of menu items, such as a list of many photos, a higher transformation ratio, i.e. a higher scroll rate per angular speed of the rotational gesture, may be preferable, while in a situation where there are only relatively few menu items to select from, a lower transformation ratio might be preferable. The latter may also apply, in particular, to safety-relevant applications, where effectively avoiding unintentionally selecting a neighboring menu item instead of the actually desired menu item might be a critical requirement.
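Merely as a hedged sketch of what such a configurable parameter could look like in software, the following Python fragment stores the transformation ratio in a settings object and updates it from a user input; the class and field names are illustrative assumptions, not part of the described method.

```python
from dataclasses import dataclass

@dataclass
class ScrollConfig:
    """Illustrative container for the configurable transformation ratio.

    The description only requires that a user input selects the value
    that is used for the subsequent operation of the rotational menu;
    the representation chosen here is an assumption.
    """
    ratio: float = 1 / 72        # menu degrees scrolled per gesture degree
    same_direction: bool = True  # True: scrolling follows the gesture's rotation

def select_ratio(cfg: ScrollConfig, user_value: float) -> ScrollConfig:
    # Step (i): receive the user's chosen value (possibly itself entered
    # via a free-space gesture); step (ii): use it from now on.
    cfg.ratio = user_value
    return cfg
```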
In some embodiments, the transformation ratio is defined such that the scrolling direction coincides with the direction of rotation of the detected gesture. This may be considered a preferable choice in many situations, because it helps to simplify the interaction of the user with the HMI and to reduce his or her cognitive burden when providing an input. On the other hand, in some selected situations, using different directions of rotation for the scrolling motion on the one hand and the rotational gesture on the other hand may be useful, for example in gaming applications, where increasing the cognitive burden of the user might even be desired.
In some embodiments, the transformation ratio is defined such that the rotational speed of the scrolling motion is equal to or lower than the rotational speed of the detected rotational free-space gesture. As already discussed above, this may be specifically useful in cases where there is a relatively limited number of available menu items in the rotational menu and where avoiding errors in the selection process is of high importance. It may also be useful in situations where the relevant user may have difficulties following a fast-moving scrolling motion, which may particularly be the case for some elderly or handicapped users.
In some embodiments, the menu item that has been determined as a selected or preselected menu item is temporarily animated in the GUI such as to indicate its determined selection or preselection, respectively, to the user.
In some embodiments, the method further comprises emitting an individual characteristic sound associated with a particular menu item when, during the motion of the sequence of menu items in response to the free-space gesture, that particular menu item is currently located at the selection position.
These two particular groups of embodiments may also be used, in particular, to enable a safe operation of the HMI/GUI by the user, especially to avoid unintended selections.
In some embodiments, the method further comprises (i) receiving or taking with an image sensor a plurality of digital pictures; and (ii) associating each of at least two of the digital pictures with a respective one of the menu items of the rotational menu, such that a respective one of these pictures is selected or preselected based on the detected free-space rotational gesture when its associated menu item is being selected. Accordingly, the method may thus be used to scroll through a set of pictures that have been taken and to select a desired one, e.g. for triggering a presentation thereof or for magnifying its presentation within the GUI.
In some embodiments, the method further comprises displaying the selected or preselected picture in a predefined format, size or orientation differing from that of the associated menu item in response to its selection or preselection, respectively.
One possible application of these picture-related embodiments is browsing and (pre-)selecting so-called selfies, e.g. in an automotive environment, where a vehicle might be equipped with a photo camera to take such pictures. Particularly, taking such pictures might be triggered by another specific predetermined free-space gesture that can be detected by the above-mentioned free-space gesture detection sensor.
Specifically, German patent application DE 102020106003.3, which is incorporated herein in its entirety by way of reference, discloses methods and systems for triggering picture-taking based on free-space gestures to be performed by a human user. In particular, the methods and systems of the present invention may be combined with the methods and systems disclosed in DE 102020106003.3. Specifically, a same system may be used to implement both the method of the present invention and one or more of the methods disclosed in DE 102020106003.3 (cf. Fig. 1).
In some embodiments, the free-space gesture detection sensor is or comprises a time-of-flight, TOF, camera, which is used to detect said specific predetermined free-space gesture. A “TOF camera” is a 3D camera system that measures distances on the basis of a time-of-flight (TOF) method. The measuring principle is based on the fact that the scene to be recorded is illuminated by a light pulse, and the camera measures, for each pixel, the time that the light takes to travel to the object and back again. Owing to the constancy of the speed of light, this time is directly proportional to the distance. The camera thus provides the distance of the object imaged on each pixel. The use of a TOF camera system represents a particularly effective and high-resolution implementation option for a 3D image sensor and may particularly be used to reliably and efficiently detect free-space gestures.
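To make the measuring principle concrete, the following short Python sketch computes the per-pixel distance from the measured round-trip time; this is standard TOF geometry rather than anything specific to the claimed method, and the function name is illustrative.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance_m(round_trip_time_s: float) -> float:
    """One-way distance of the object imaged on a pixel of a TOF camera.

    The light pulse travels to the object and back, so the distance is
    half the round-trip time multiplied by the speed of light.
    """
    return C * round_trip_time_s / 2.0

# Example: a round trip of about 6.67 nanoseconds corresponds to roughly 1 m.
print(tof_distance_m(6.67e-9))  # ~1.0
```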
In some embodiments, detecting said rotational free-space gesture comprises at least one of: (i) processing sensor data being generated by the free-space gesture detection sensor to examine whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture; and (ii) communicating sensor data being generated by the free-space gesture detection sensor to a remote (e.g. vehicle-external) processing platform, e.g. a server that can be reached over the internet or another communication link, and receiving in response result data being generated by the processing platform and indicating whether or not the sensor data represents a situation where a user performs said specific predetermined free-space gesture. While option (i) is particularly useful for an autonomous solution that can be fully implemented by a system to be integrated into a vehicle, option (ii) has the advantage that external processing power, e.g. that of a powerful server, which might otherwise not be available for the HMI (e.g. in a vehicle) or would add to its complexity and cost, may be used instead. This applies, in particular, to the usually calculation-intensive processing needed in connection with properly recognizing a specific gesture based on sensor data provided by the free-space gesture detection sensor.
Specifically, in some related embodiments, the processing of the sensor data is performed separately from a process for controlling the free-space gesture detection sensor. Particularly, the processing of the sensor data in connection with the detection of said specific predetermined rotational free-space gesture may be performed by a remote processing platform, such as a server in a backend that can be reached over the internet or another communication link, while an application defining the GUI and which is used to control the operation of the free-space gesture detection sensor and the display may be run on a local (e.g. vehicle-based) processor platform (e.g. in a head unit or another control unit of the vehicle). This enables, in particular, on the one hand an optimal partitioning of the processing needed to perform the method, and on the other hand the possibility to separately configure, optimize, scale, extend, maintain, or replace the respective processing means, be it on the hardware side or on the software side, or both.
In some embodiments, each one of a predetermined limited set of multiple different predetermined free-space gestures that may potentially be performed by the user is defined as qualifying as said specific predetermined rotational free-space gesture. This means that there may be more than one specific predetermined free-space gesture qualifying for controlling the scrolling motion of the sequence of menu items in the GUI. This may be advantageous in at least two ways: Firstly, this may be used to increase the ease of use and thus also the reliability of the method by also defining further similar but not identical gestures as qualifying gestures for controlling the scrolling, as long as they can be safely distinguished from other gestures that are not intended to similarly control the scrolling. Secondly, this may be used to enable an implementation of the possibility for the user to select among various scrolling options, such as, without limitation, different scrolling speed options (absolute transformation ratio values) or between positive and negative transformation ratios (i.e. matching or opposite rotational directions).
A second aspect of the present invention is directed to a system for operating a rotational menu of a graphical user interface, GUI, based on a detection of a free-space gesture being performed by a human user, the system being configured to perform the method of the first aspect of the present invention, as described in particular above.
A third aspect of the present invention is directed to a computer program, or a computer program product, comprising instructions to cause the system of the second aspect to perform the method of the first aspect of the present invention.
The computer program (product) may in particular be implemented in the form of a data carrier on which one or more programs for performing the method are stored. Preferably, this is a data carrier such as a CD, a DVD or a flash memory module. This may be advantageous if the computer program product is meant to be traded as an individual product independent of the processor platform on which the one or more programs are to be executed. In another implementation, the computer program (product) is provided as a file on a data processing unit, in particular on a server, and can be downloaded via a data connection, e.g. the Internet or a dedicated data connection, such as a proprietary or local area network.
The system of the second aspect may accordingly have a program memory in which the computer program of the third aspect is stored. Alternatively, the system may also be set up to access a computer program available externally, for example on one or more servers or other data processing units, via a communication link, in particular to exchange with it data used during the course of the execution of the computer program or representing outputs of the computer program.
The various embodiments and advantages described above in connection with the first aspect of the present invention similarly apply to the other aspects of the invention.
Further advantages, features and applications of the present invention are provided in the following detailed description and the appended figures, wherein:

Fig. 1 schematically illustrates an exemplary vehicle comprising a system according to an embodiment of the present invention;
Fig. 2 schematically illustrates an exemplary GUI that is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4;
Fig. 3 shows a further illustration of an exemplary GUI, such as the GUI of Fig. 2, that is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4; and
Fig. 4 shows a flow chart illustrating an exemplary embodiment of the method according to the present invention.
In the figures, identical reference signs are used for the same or mutually corresponding elements of the systems described herein.
Referring to Fig. 1, an exemplary vehicle 100 according to an embodiment of the present invention comprises a system 105 for controlling, in particular triggering, picture-taking of the interior of the vehicle 100, at least in parts. To that purpose, system 105 comprises a free-space gesture detection sensor 110 which is arranged within the passenger compartment at an interior surface of the roof of the vehicle 100. Free-space gesture detection sensor 110 is designed as a 3D image sensor of the “time-of-flight” type (TOF camera). It has a field-of-view (FOV) 115, which has substantially the form of a cone having its tip at free-space gesture detection sensor 110 and extending predominantly downwards, but optionally also with a horizontal directional component, towards a middle console located next to the driver seat of vehicle 100. Free-space gesture detection sensor 110 is configured to detect free-space gestures G being performed within its field of view 115 by a human user U, such as a driver of vehicle 100. In the specific example illustrated in Fig. 1, user U performs a specific predetermined free-space gesture G with two fingers of her right hand, wherein these two fingers form a “V”-shape within a virtual plane that is substantially horizontal and thus predominantly perpendicular to the predominantly vertical central direction of the cone defining the FOV 115 of gesture detection sensor 110. Accordingly, the “V”-shape is “visible” to and thus detectable as such by gesture detection sensor 110. Similarly, user U may perform a rotational gesture by rotating the index finger of her hand around its major knuckle, e.g. such as to “draw” a virtual circle, and such a rotational gesture may be detected in a similar way, in particular if the gesture detection sensor 110 is a 3D sensor.

System 105 further comprises image sensor 120 in the form of a 2D photo camera which is sensitive in at least parts of both the visible and the infrared range of the electromagnetic spectrum, so that it can take pictures not only when the interior of vehicle 100 is illuminated, whether artificially or by daylight, but also when it is relatively dark, in particular at night. Image sensor 120 is arranged at the rear-view mirror of the vehicle inside the passenger compartment and has a field of view 125, which predominantly extends towards and covers, at least in parts, the location of the driver seat and optionally also of one or more further passenger seats of vehicle 100. Accordingly, image sensor 120 is arranged to take photos or videos of one or more passengers of vehicle 100 while seated in their respective passenger seats, i.e. in particular “selfies”. While the field of view 115 of gesture detection sensor 110 and the field of view 125 of image sensor 120 overlap in parts, they do not fully coincide, such that particularly a spatial area above the middle console of vehicle 100 is only located within FOV 115 of the gesture detection sensor 110, but not within FOV 125 of the image sensor 120. Accordingly, when a free-space gesture G is performed by user U within that spatial area, body parts or other objects user U may use to perform gesture G will not be pictured within photos or videos taken by image sensor 120.
Each one of sensors 110 and 120 may have an own associated illumination unit 111 or 121, respectively, for illuminating the related field of view 115 or 125, respectively, with electromagnetic radiation, to which the respective sensor is sensitive, e.g. in the visible or infrared part of the electromagnetic spectrum. Each of these illumination units 111 and 121 may be activated and deactivated individually. Specifically, they may be controlled such that at any given point in time only one or none of the two illumination units 111 and 121 is active.
In addition, system 105 comprises a processing unit 130 being signal-coupled to both gesture detection sensor 110 and image sensor 120 in order to control them and receive their respective sensor data for further processing. Specifically, processing unit 130 may comprise in a respective memory one or more computer programs being configured as an application for free-space-gesture-triggered picture-taking, such that a user U can initiate picture-taking by performing, within FOV 115 of gesture detection sensor 110, a specific predetermined gesture corresponding to such picture-taking.
Furthermore, processing unit 130 may be configured to recognize one or more different predetermined gestures, if such are represented by the sensor data provided by gesture detection sensor 110. Accordingly, gesture detection sensor 110 in combination with processing unit 130 is then capable of detecting one or more different predetermined free-space gestures, including rotational free-space gestures, being performed by user U within the FOV 115 of sensor 110. In particular, one or more of these gestures may be predetermined as being gestures, which when properly detected, trigger picture-taking by image sensor 120.
Optionally, system 105 may further comprise a communication unit 135 being configured to exchange data via a communication link with a vehicle-external processing platform 145, such as a backend server. For example, as illustrated in Fig. 1, the communication link may be a wireless link and accordingly communication unit 135 may be signal-connected to an antenna 140 of the vehicle 100 to therewith send and receive RF signals for communication over the wireless link.
Specifically, this setup may be used to communicate sensor data being generated by the gesture detection sensor 110 to processing platform 145 for performing a processing similar to that described above with reference to processing unit 130 in the context of gesture detection. This may be specifically advantageous when a significant number of different and sometimes complex gestures needs to be reliably detected, such that a high processing power is needed to perform the recognition and discrimination of these various gestures. Typically, it might be easier and more efficient to provide such high processing power outside of the vehicle on a dedicated processing platform 145 instead of designing processing unit 130 as a respective high-processing-power unit. The latter approach might have all kinds of negative effects, including a higher average cost of gesture detection per vehicle and gesture, or more demanding space, power, or cooling requirements, etc.
In addition, system 105 comprises an HMI comprising a display 150 for displaying information to user U. Specifically, the HMI is configurable to display a rotational menu, as will be described below in detail with reference to Figs. 2 and 3.
Processing unit 130 may particularly be configurable, e.g. by means of a respective computer program, to also control display 150 and to perform, in conjunction particularly with display 150 and gesture detection sensor 110, the method according to the present invention, such as method embodiment 300 illustrated below with reference to Fig. 4.
Referring now to Fig. 2, an exemplary GUI 200 that is operable in accordance with a method of the present invention, such as the method embodiment illustrated in Fig. 4, is designed to be presented on a display screen 150 (cf. Fig. 1), which may particularly be a rectangular screen. GUI 200 is defined on a virtual plane comprising a first section 205 which is currently not being displayed on display 150, and a second section 210, which has substantially the same shape as the display screen 150 and is currently being displayed thereon. Within the GUI 200, a plurality of menu items 215, for example photos or other graphical objects, are uniformly distributed in sequence along an arc, which may particularly be a circular arc, or a section thereof, as shown in Fig. 2. The distribution of the menu objects along the arc may in particular be referred to as a menu wheel, specifically when the arc is defined as a closed (i.e. endless) loop, such as a circle or an ellipse.
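As a purely illustrative sketch of such a layout, the following Python fragment computes the positions of menu items uniformly distributed along a circular arc (a menu wheel), shifted by a current scroll angle; the function name, the geometry conventions and the example numbers are assumptions made for the example.

```python
import math

def item_positions(num_items, radius, center, scroll_deg=0.0):
    """(x, y) positions of menu items uniformly distributed along a circle,
    rotated as a whole by the current scroll angle of the menu wheel.
    """
    cx, cy = center
    alpha = 360.0 / num_items  # inter-item angle along the closed loop
    positions = []
    for i in range(num_items):
        ang = math.radians(i * alpha + scroll_deg)
        positions.append((cx + radius * math.cos(ang),
                          cy + radius * math.sin(ang)))
    return positions

# Example: 12 items on a wheel of radius 400 centered left of the screen;
# only items whose positions fall inside the displayed section are drawn.
for x, y in item_positions(12, 400.0, (-200.0, 300.0)):
    print(round(x), round(y))
```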
When a respective rotational free-space gesture is being detected, the sequence of menu items 215 (e.g. the menu wheel) is moved (rotated) according to a transformation ratio T along a direction of rotation 220, as will be explained below in more detail with reference to Fig. 3. Accordingly, at any given point in time during that motion of the sequence of menu items, only a respective section of that sequence is being displayed in section 210 of GUI 200, while the remaining menu items 215 are currently located in the non-displayed section 205 of GUI 200 until, when the detected gesture continues, they are eventually moved into or even completely across the displayed section 210.
At one position of the arc and within the currently displayed section 210 of GUI 200, a specific position, which can be occupied by one menu item 215a at a time, is defined as a selection position 225. For example, this position may be defined as a position in the center of the visible part of the arc within section 210 of GUI 200. As will be discussed in more detail below with reference to Fig. 4, a menu item 215a, which is located at the selection position 225 when a termination of a predetermined rotational free-space gesture is being detected, is thereby selected or preselected as a user’s choice.
Referring now to Fig. 3, which shows a detailed view of a subsection of the sequence of menu items 215, which for the purpose of simplicity of illustration comprises only three exemplary menu items 215-1, 215-2 and 215-3. When a respective rotational free-space gesture is being detected by the gesture detection sensor 110 (cf. Fig. 1), an accrued angle of rotation θ corresponding to the accrued angle of rotation of the motion of a particular body part, such as an index finger of the user performing the gesture, is being continuously determined, and in accordance therewith the sequence 215 of menu items is being rotated within GUI 200 along the same direction of rotation as that of the gesture. However, the angular velocity of the rotational motion of the sequence of menu items 215 may differ from that of the gesture, e.g. the rotational motion of the index finger of the user, in a predetermined way. If, for example, an angle φ indicates the accrued degree of rotation of the sequence of menu items 215 and another angle θ indicates the accrued degree of rotation of the finger of the user performing the gesture, then these two angles may be different. Specifically, a transformation ratio T may be defined as the derivative T = dφ/dθ, which in the case of a constant speed equals the ratio φ/θ of the angles themselves. Specifically, it has been found by extensive testing that values for T in the range of 50:1 to 90:1 are particularly well-received choices providing a good balance between efficiency and selection speed on the one hand, and a reliable selection process, which largely avoids unintended selections, on the other hand. Accordingly, with such values for T a high degree of user satisfaction can be achieved. For example, a value of T = 72:1 has been found to be particularly satisfactory. In this case, the menu wheel turns by φ = 5° when the rotational gesture corresponds to an angle θ = 360°.
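The worked numbers above can be checked with a one-line computation; the sketch below assumes the 72:1 ratio is expressed in gesture-to-menu terms, i.e. 72 degrees of gesture rotation per degree of menu rotation, and the function name is illustrative.

```python
# Worked example of the transformation ratio from the description above:
# a full index-finger rotation of theta = 360 degrees turns the menu wheel
# by phi = 5 degrees when the gesture-to-menu ratio is 72:1.

def menu_angle_deg(gesture_angle_deg: float, ratio: float = 72.0) -> float:
    """phi = theta / ratio (constant-speed case, where T = dphi/dtheta)."""
    return gesture_angle_deg / ratio

print(menu_angle_deg(360.0))  # 5.0
```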
Furthermore, another angle α may be defined to indicate the angle between the respective positions of two neighboring menu items within the sequence along the arc. As will be explained below with reference to Fig. 4, this angle α may be used to define a criterion for determining whether, upon termination of the rotational gesture, (i) the menu item 215 currently leaving the selection position is being chosen as the selected or preselected menu item, or (ii) instead the next immediately following menu item, which is about to reach the selection position, but which at that point in time might not have completely entered the selection position yet.
Referring now to Fig. 4, which illustrates a method 300 according to an embodiment of the present invention. In addition, for the sake of better explanation and without limitation, additional reference is again made to Figs. 1 to 3. Method 300 comprises a step 305, wherein a free-space gesture detection sensor 110 of the TOF camera type continuously “listens”, i.e. monitors, its field-of-view 115 for free-space gestures being performed by a user U, e.g. a driver or other passenger of vehicle 100. When gesture detection sensor 110 is active, its illumination unit 111 is also active and illuminates the field-of-view 115, while illumination unit 121 of image sensor 120 is deactivated at that time. Accordingly, the sensing activity of gesture detection sensor 110 is not adversely affected by illumination being emitted by illumination unit 121 of image sensor 120 (de-synchronization).
When the gesture detection sensor detects the start of a predetermined rotational free-space gesture being performed by a user U and qualifying as an operation of the GUI shown in Figs. 2 and 3, it triggers step 310, in which the sequence of menu items 215 (menu wheel) is scrolled according to a preconfigured parameter defining the transformation ratio T, as explained above with reference to Fig. 3. If T is a positive value, then the direction of rotation of the menu wheel corresponds to that of the rotational gesture; otherwise it is opposed to it. The higher the absolute value of T, the faster the scrolling motion of the menu wheel will be relative to the motion of the gesture, i.e. to the rotational motion of the user’s finger performing the gesture.
When the user terminates the free-space gesture, this is detected in a further step 320 of method 300, and the total accrued scrolling angle φ of the menu wheel, which has accrued between the start and the termination of the rotational gesture, is determined.
Then, in a step 325, this total accrued scrolling angle φ of the menu wheel is compared to a predefined threshold angle φt, which may particularly be defined as a value between 40% and 60%, e.g. 50%, of the angle α between the respective positions of two neighboring menu items within the sequence along the arc, as illustrated in Fig. 3.
If in step 325 it is found that the angle φ has not reached the threshold angle φt yet (325 - no; i.e. φ < φt), then a step 330 is triggered, in which the menu wheel is automatically rotated back to an initial position, which may particularly be defined as that position of the menu wheel in which the last menu item 215a that was about to leave the selection position 225 during the scrolling motion was still centered at the selection position 225. Then the method 300 loops back to step 305 for another run.
Otherwise (325 - yes; i.e. φ ≥ φt), a step 335 is triggered, in which the menu wheel is automatically rotated forward to a position such that a next menu item, immediately following the last menu item 215a that was about to leave the selection position 225 during the scrolling motion, moves to the center of the selection position 225.
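By way of a non-limiting illustration of steps 325, 330 and 335, the following Python sketch decides where the menu wheel settles when the gesture terminates; the generalization to more than one inter-item interval and all identifiers are assumptions made for the example.

```python
def snap_scroll(accrued_phi_deg: float, alpha_deg: float,
                threshold_fraction: float = 0.5) -> float:
    """Angle at which the menu wheel settles after the gesture ends.

    Within the current inter-item interval (width alpha), the remainder
    of the accrued scrolling angle phi is compared against the threshold:
    if it reaches the threshold, snap forward to the next menu item
    (step 335); otherwise snap back to the previous one (step 330).
    """
    whole_steps, remainder = divmod(accrued_phi_deg, alpha_deg)
    if remainder >= threshold_fraction * alpha_deg:
        whole_steps += 1  # step 335: advance the next item to the selection position
    # otherwise the remainder is discarded, i.e. step 330: rotate back
    return whole_steps * alpha_deg

# Example with alpha = 30 deg and a 50% threshold: 47 deg accrued leaves a
# remainder of 17 deg >= 15 deg, so the wheel snaps forward to 60 deg.
print(snap_scroll(47.0, 30.0))  # 60.0
```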
Then, according to a first implementation option (preselection option, illustrated in dashed lines), a confirmation step 340 follows, in which the preselection of menu item 215a being currently present at the selection position after the automatic rotation can be confirmed, or discarded, by an additional user input as an intended selection.
If the preselection is discarded (345 - no), method 300 loops back to step 305 for another run, in which the user can retry to provide a correct input by means of the rotational gesture. Otherwise, if the preselection is confirmed (345 - yes), the menu item currently located at the selection position 225 is determined as being the currently selected menu item 215a. As a consequence of the confirmed selection, a specific event being associated with the new position of the menu wheel, i.e. with the selected menu item, is triggered in a step 350. By way of example, such an event could comprise displaying a magnified version of the selected menu item, e.g. a photo, within the GUI 200. According to an alternative implementation option (immediate selection option), steps 340 and 345 are omitted, such that step 350 immediately follows step 335.
While above at least one exemplary embodiment of the present invention has been described, it has to be noted that a great number of variations thereto exist. Furthermore, it is appreciated that the described exemplary embodiments only illustrate non-limiting examples of how the present invention can be implemented and that it is not intended to limit the scope, the application or the configuration of the herein-described apparatuses and methods. Rather, the preceding description will provide the person skilled in the art with constructions for implementing at least one exemplary embodiment of the invention, wherein it has to be understood that various changes of functionality and the arrangement of the elements of the exemplary embodiment can be made without deviating from the subject-matter defined by the appended claims and their legal equivalents.
LIST OF REFERENCE SIGNS
100 vehicle
105 system for free-space gesture-based triggering of picture-taking
110 free-space gesture detection sensor of TOF type
111 illumination unit for free-space gesture detection sensor 110
115 field-of-view of free-space gesture detection sensor 110
120 image sensor, photo camera
121 illumination unit for image sensor 120
125 field-of-view of image sensor 120
130 processing unit
135 communication unit
140 antenna for wireless communication link
145 vehicle-external processing platform, e.g. server
150 display
200 exemplary view of a graphical user interface showing a rotational menu
205 virtual plane of rotational menu, specifically non-displayed section thereof
210 displayed part of rotational menu
215 individual pictures arranged in a uniformly distributed sequence along an arc
215a individual picture located at selection position
220 scrolling direction
225 selection position
300 exemplary embodiment of a method of operating a rotational menu of a GUI
305-350 steps of method 300

Claims

1. A method (300) of operating a rotational menu of a graphical user interface, GUI, (200) based on a detection of a free-space gesture being performed by a human user (U), the method comprising:
Detecting (305), using a free-space gesture detection sensor (110), a rotational free-space gesture (G) being performed by a human user (U) in a field of view (115) of the free-space gesture detection sensor;
Displaying on a display device (150) at least a section (210) of a rotational menu comprising a plurality of individually selectable menu items (215) arranged in a sequential order along an arc comprising a specific predefined selection position (225) along the arc;
Synchronously and uniformly scrolling (310) the sequence of menu items in the GUI along the arc in response to the detected rotational free-space gesture such that the scrolling speed is aligned, according to a predefined transformation ratio (T), with a detected rotational speed and direction of rotation of the detected rotational free-space gesture;
Determining (305; 320) a start and a termination of the detected rotational free-space gesture and an accrued rotational scrolling angle (φ) or distance between a start point of the scrolling motion of the menu items that corresponds to the start of the gesture and an end point of the scrolling motion of the menu items that corresponds to the termination of the gesture;
When the accrued scrolling angle or scrolling distance, respectively, reaches or extends beyond a predefined threshold value (φt), automatically further scrolling the sequence of menu items by a remaining angle or distance such that thereby the respective next menu item is being moved along the direction of rotation to the selection position, and otherwise automatically reversing the scrolling motion of the sequence of menu items such as to return the last menu item that previously occupied the selection position back to the selection position; and
Determining that particular menu item as a currently selected or pre-selected menu item which is located at the selection position after termination of the automatic scrolling.
2. The method (300) of claim 1, wherein: the menu items (215) of the rotational menu are uniformly distributed along the arc; and the threshold value (φt) is predefined as corresponding to at least 30% of the scrolling angle (α) or distance needed to replace by way of the scrolling motion a first menu item being located at the selection position (225) by an immediately following menu item in the sequence of menu items at that selection position.
3. The method (300) of claim 2, wherein the threshold value (φt) is predefined as corresponding to at least 50% of the scrolling angle (α) or distance needed to replace by way of the scrolling motion a first menu item being located at the selection position (225) by an immediately following menu item (215) in the sequence of menu items at that selection position.
4. The method (300) of any one of the preceding claims, wherein detecting the rotational free-space gesture being performed by a human user in a field of view of the free-space gesture detection sensor comprises detecting a rotational motion of one or more of an arm, a forearm, or at least one finger of a hand of the human user.
5. The method (300) of any one of the preceding claims, wherein the arc of the rotational menu is designed as an endless closed loop.
6. The method of any one of the preceding claims, further comprising triggering (350) a predefined event in response to the determination of the selected menu item (215a) based on the detected free-space rotational gesture, wherein the event is associated with the selected menu item.
7. The method (300) of any one of claims 1 to 5, further comprising triggering (350) a predefined event in response to a combination of both the determination of the pre-selected menu item based on the detected free-space rotational gesture and an additional user input (340) confirming the pre-selection of the pre-selected menu item as an intended selection, wherein the event is associated with the thus-selected menu item.
8. The method (300) of any one of the preceding claims, wherein the transformation ratio (T) is defined as a configurable parameter and the method further comprises: receiving a user input for selecting a desired value for that parameter; and using a transformation ratio corresponding to the selected value of the parameter for the subsequent operation of the rotational menu.
9. The method (300) of any one of the preceding claims, wherein the transformation ratio (T) is defined such that the scrolling direction (220) coincides with the direction of rotation of the detected gesture.
10. The method (300) of any one of the preceding claims, wherein the transformation ratio is defined such that the rotational speed of the scrolling motion is equal to or lower than the rotational speed of the detected rotational free-space gesture.
11. The method (300) of any one of the preceding claims, wherein the menu item that has been determined as a selected or preselected menu item is temporarily animated in the GUI such as to indicate its determined selection or preselection, respectively, to the user.
12. The method (300) of any one of the preceding claims, further comprising emitting an individual characteristic sound associated with a particular menu item when, during the motion of the sequence of menu items in response to the free-space gesture, that particular menu item is currently located at the selection position.
13. The method (300) of any one of the preceding claims, further comprising:
Receiving or taking with an image sensor a plurality of digital pictures; and associating each of at least two of the digital pictures with a respective one of the menu items of the rotational menu such that a respective one of these pictures is selected or preselected based on the detected free-space rotational gesture when its associated menu item is being selected.
14. The method (300) of claim 13, further comprising displaying the selected or preselected picture in a predefined format, size or orientation differing from that of the associated menu item in response to its selection or preselection, respectively.
15. System (105, 150) for operating a rotational menu of a graphical user interface, GUI, (200) based on a detection of a free-space gesture (G) being performed by a human user (U), the system being configured to perform the method (300) of any one of the preceding claims.
16. Computer program comprising instructions to cause the system (105, 150) of claim 15 to perform the method of any one of claims 1 to 14.
PCT/EP2021/055090 2020-03-05 2021-03-02 Method and system for operating a menu of a graphical user interface based on a detection of a rotational free-space gesture WO2021175800A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020106021.1A DE102020106021A1 (en) 2020-03-05 2020-03-05 METHOD AND SYSTEM FOR OPERATING A SELECTION MENU OF A GRAPHICAL USER INTERFACE BASED ON THE DETECTION OF A ROTATIONAL FREE-SPACE GESTURE
DE102020106021.1 2020-03-05

Publications (1)

Publication Number Publication Date
WO2021175800A1 true WO2021175800A1 (en) 2021-09-10

Family

ID=74856842

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/055090 WO2021175800A1 (en) 2020-03-05 2021-03-02 Method and system for operating a menu of a graphical user interface based on a detection of a rotational free-space gesture

Country Status (2)

Country Link
DE (1) DE102020106021A1 (en)
WO (1) WO2021175800A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100101389A (en) 2009-03-09 2010-09-17 삼성전자주식회사 Display apparatus for providing a user menu, and method for providing ui applied thereto
US9037354B2 (en) 2011-09-09 2015-05-19 Thales Avionics, Inc. Controlling vehicle entertainment systems responsive to sensed passenger gestures
US10152136B2 (en) 2013-10-16 2018-12-11 Leap Motion, Inc. Velocity field interaction for free space gesture interface and control
DE102020106003A1 (en) 2020-03-05 2021-09-09 Gestigon Gmbh METHOD AND SYSTEM FOR TRIGGERING PICTURE-TAKING OF THE INTERIOR OF A VEHICLE BASED ON THE DETECTION OF A FREE-SPACE GESTURE

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130285899A1 (en) * 2012-04-30 2013-10-31 Pixart Imaging Incorporation Method for outputting command by detecting object movement and system thereof
EP2806334A1 (en) * 2013-05-23 2014-11-26 Samsung Electronics Co., Ltd Method and apparatus for user interface based on gesture
WO2018237172A1 (en) * 2017-06-21 2018-12-27 Quantum Interface, Llc Systems, apparatuses, interfaces, and methods for virtual control constructs, eye movement object controllers, and virtual training

Also Published As

Publication number Publication date
DE102020106021A1 (en) 2021-09-09

Similar Documents

Publication Publication Date Title
US10481757B2 (en) Eye gaze control system
US10746560B2 (en) Interactive mapping
US9858702B2 (en) Device and method for signalling a successful gesture input
US20190073040A1 (en) Gesture and motion based control of user interfaces
EP2270767B1 (en) Device, Method and Program for Displaying Map Information
US20150346836A1 (en) Method for synchronizing display devices in a motor vehicle
US20130204457A1 (en) Interacting with vehicle controls through gesture recognition
US20120207345A1 (en) Touchless human machine interface
EP2328061A2 (en) Input apparatus
EP3249497A1 (en) Eye tracking
KR20100117036A (en) Input apparatus
US20220244789A1 (en) Method for operating a mobile terminal using a gesture recognition and control device, gesture recognition and control device, motor vehicle, and an output apparatus that can be worn on the head
JP2024508102A (en) Gesture interaction method and device, electronic device and storage medium
US11119576B2 (en) User interface and method for contactlessly operating a hardware operating element in a 3-D gesture mode
JP2017211884A (en) Motion detection system
WO2021175749A1 (en) Method and system for triggering picture-taking of the interior of a vehicle based on a detection of a free-space gesture
US10296101B2 (en) Information processing system, information processing apparatus, control method, and program
CN117492557A (en) Multi-screen interaction method and device, terminal equipment and vehicle
EP3659848A1 (en) Operating module, operating method, operating system and storage medium for vehicles
JP2013149257A (en) Adaptive interface system
JP6855616B2 (en) Operating devices, mobile devices, and their control systems
WO2021175800A1 (en) Method and system for operating a menu of a graphical user interface based on a detection of a rotational free-space gesture
JP6685742B2 (en) Operating device, moving device, and control system thereof
WO2023230291A2 (en) Devices, methods, and graphical user interfaces for user authentication and device management
CN111602102A (en) Method and system for visual human-machine interaction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21709646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21709646

Country of ref document: EP

Kind code of ref document: A1