WO2024126351A1 - Method and system for capturing user inputs - Google Patents

Method and system for capturing user inputs

Info

Publication number
WO2024126351A1
Authority
WO
WIPO (PCT)
Prior art keywords
hand
area
user
control
input surface
Prior art date
Application number
PCT/EP2023/085080
Other languages
German (de)
English (en)
Inventor
Finn Jacobsen
Sina BUCH
Original Assignee
Gestigon Gmbh
Priority date
Filing date
Publication date
Application filed by Gestigon Gmbh filed Critical Gestigon Gmbh
Publication of WO2024126351A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/10Input arrangements, i.e. from user to vehicle, associated with vehicle functions or specially adapted therefor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
    • B60K35/22Display screens
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/80Arrangements for controlling instruments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/141Activation of instrument input devices by approaching fingers or pens
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/146Instrument input by gesture
    • B60K2360/14643D-gesture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/20Optical features of instruments
    • B60K2360/33Illumination features
    • B60K2360/334Projection means

Definitions

  • the present invention relates to a method and a system for detecting user inputs in an input device, in particular in a vehicle.
  • user inputs in a vehicle can be detected without contact.
  • Modern vehicles usually have at least one control unit installed, which the driver of the vehicle or other passengers can use to control various functions. These can be various vehicle or comfort functions, such as settings for the navigation system, the air conditioning, seat settings, lighting settings and the like. Various functions of an infotainment system can also be operated, such as playing music, making phone calls and the like.
  • Conventional control units can be controlled using corresponding control buttons.
  • Modern systems usually have at least one display, which can be located centrally in the dashboard, for example.
  • the individual functions, menus and the like can be shown here.
  • These displays are often touch-sensitive, i.e. they are designed as touchscreens so that the desired function can be controlled by touching the screen.
  • the control elements are designed in such a way that they can be reached and operated with a finger.
  • a function can be called up when the corresponding area of the touch-sensitive screen is touched.
  • Other control options can also be provided, such as holding and dragging a control element, swiping gestures and the like.
  • control units require a touch-sensitive display and are therefore limited in their design.
  • Modern vehicles often have multiple displays and increasingly larger displays.
  • space is limited, the displays are usually essentially flat and rectangular, and they cannot be distributed arbitrarily in a vehicle. They must be reachable by the user's hand in order for a function to be controlled by touch.
  • Other surfaces in the vehicle without a display are not available for user input.
  • it is also known to control certain functions contactlessly using gestures without reference to a display. To do this, a user performs certain predefined gestures in free space, for example in an area of the vehicle cabin above the center console or in front of the dashboard.
  • Gestures in free space can be used to control functions of the infotainment system, for example, or other vehicle functions such as opening and closing a sunroof. While these gestures can be learned, they are performed in free space and a user does not receive immediate feedback, for example where or how exactly a certain gesture should be performed so that it successfully and reliably controls the desired function. Users are therefore often unsure and it can be difficult to perform the gestures correctly until they are recognized by the system.
  • the present invention is based on the object of improving the detection of user inputs in an input device, in particular in a vehicle.
  • contactless detection of user inputs is to be improved.
  • a first aspect of the invention relates to a method, in particular a computer-implemented method, for detecting user inputs in an input device, wherein the input device has an input surface on which a graphical user interface with at least one control element is displayed, and a detection device.
  • a user's hand is detected by means of the detection device and a position of the hand in a detection area is determined, wherein the detection area is a three-dimensional spatial area which is assigned to the input surface.
  • furthermore, it is determined whether the user's hand is in an activation area, wherein the activation area is a part of the detection area which is arranged at a distance from the input surface, and it is determined whether the user's hand is in a control area, wherein the control area is a part of the detection area which is arranged between the input surface and the activation area. If the user's hand is determined to be in the control area, a status indicator is displayed which indicates a period of time during which the hand is determined to be in the control area.
  • the aforementioned method according to the first aspect is therefore based in particular on the fact that user inputs are recorded without contact.
  • the position of the hand is determined in a three-dimensional spatial area which is assigned to the input surface, for example a spatial area which is located in front of the input surface.
  • the detection area is divided into two areas.
  • first, an activation area is arranged at a distance from the input surface. Feedback can already be given to the user when the hand is in this area, as explained in more detail later. If the hand is then in the control area, which is arranged between the activation area and the input surface, i.e. closer to the input surface, a status indicator is displayed.
  • the status indicator provides the user with feedback about the successful operation of the control.
  • the status indicator is designed to show the user how long he has been operating the control. This provides a method for dealing with delayed, inaccurate user touches and finger occlusions. The user may expect the control to be operated immediately, which is not always easy to determine due to inaccuracies. Therefore, the status indicator now gives the user a way to see how the system is reacting.
  • the term "user interface” or "graphical user interface” used here refers in particular to a graphical representation of control elements which are linked to a specific function and allow a user to control the function.
  • the user interface ("UI" or "graphical user interface", GUI) can contain "control elements" ("UI elements") such as input areas, buttons, symbols, icons, sliders, toolbars, selection menus and the like, which a user can operate, in particular in the sense of the present invention, without touching them.
  • the (graphical) user interface can also be referred to by equivalent terms such as (graphical) user surface or operating interface.
  • the term "status indicator" used here refers in particular to a graphical representation of a status of a control element, in particular a time-dependent status.
  • the status indicator shows a period of time. This can give the user the impression that the control is “charging” when he or she holds it down.
  • the status indicator can therefore also be referred to as a "loading bar” or something similar to illustrate the charging of the control. If the status indicator is associated with a control, the associated control can also be referred to as “charged”, “charging” and the like.
  • the term "input surface” used here refers in particular to any surface, especially in a vehicle, on which the user interface can be graphically displayed. It can be a display or a projection surface as described in more detail below.
  • the input surface can have any design. It can be flat, level and rectangular, or it can have any shape. In particular, a point on the input surface can be described with two-dimensional coordinates.
  • the term "detection device” used here refers in particular to a device that can detect objects in three-dimensional space without contact and determine their position.
  • the detection device can detect and localize a user's hand.
  • optical methods can be used to detect a user's hand in space.
  • the detection device can consist of one part or several parts, which can depend on which detection area is to be covered.
  • one camera can be provided, or several cameras.
  • the term "three-dimensional spatial area" used here refers in particular to an area that can be described by three-dimensional coordinates. A position in the three-dimensional spatial area has unique three-dimensional coordinates.
  • the coordinate system can be chosen arbitrarily.
  • a coordinate system of the detection device can be selected, or a coordinate system relating to the input surface. In particular, there is a relationship between the detection area and the input surface in order to be able to assign the position of the hand to corresponding locations on the input surface.
  • vehicle refers in particular to a passenger car, including all types of motor vehicles, hybrid and battery-powered electric vehicles, as well as vehicles such as sedans, vans, buses, trucks, delivery vans and the like.
  • the term "function" used here refers in particular to technical features that can be present in a vehicle, for example in the interior, and that can be controlled by a corresponding control system.
  • these can be functions of the vehicle and/or an infotainment system, such as lighting, audio output (e.g. volume), air conditioning, telephone, etc.
  • in the sense used here, a condition "A or B" is satisfied by any one of the following: A is true (or present) and B is false (or absent); A is false (or absent) and B is true (or present); or both A and B are true (or present).
  • the term “configured” or “set up” to perform a specific function (and respective modifications thereof) is to be understood in the sense of the invention that the corresponding device is already in a design or setting in which it can perform the function or that it is at least adjustable - i.e. configurable - so that it can perform the function after the corresponding setting.
  • the configuration can be carried out, for example, by setting parameters of a process sequence or switches or the like to activate or deactivate functionalities or settings.
  • the device can have several predetermined configurations or operating modes, so that configuration can be carried out by selecting one of these configurations or operating modes.
  • a function of the user interface is only controlled when a predetermined period of time has elapsed during which the user's hand is determined to be in the control area.
  • the status indicator shows a portion of the predetermined time period that has elapsed since the user's hand entered the control area. This can be done graphically, for example, by a bar running during the time period, for example in a circle, with the full circle being reached when the predetermined time period has elapsed. The user can thus see how much time he still has to hold the control element until it is actually triggered or how much time is left to correct the selection of the control element.
  • the status indicator is associated with the control and a function associated with the control is only controlled when a predetermined period of time has elapsed during which the user's hand is determined to be in the control area.
  • a clear association of the status indicator with a control makes it easier for the user to understand the feedback.
  • a separate status indicator can be provided for each control, for example in spatial proximity on the user interface.
  • a status indicator can be displayed around a button in the form of a growing bar.
  • a position of a pointer on the input surface is further determined, the position of the pointer being determined by projecting the position of the hand onto the input surface when the position of the hand is detected in the detection area.
  • the projection can in particular be an orthogonal projection, the position of the hand in three-dimensional space being mapped onto two-dimensional coordinates of the input surface.
  • a pointer can be determined. This indicates in particular a location on the input surface, which is determined from the position of the hand.
  • the position of the pointer can be determined by an orthogonal projection of the position of the hand from three-dimensional space onto the two-dimensional input surface.
  • the position of the hand can be defined by a point on the hand that is closest to the input surface, as explained below. This can also be a fingertip that points in the direction of the input surface.
  • as an alternative to an orthogonal projection, it can also be provided to determine a pointing direction of a finger, for example an outstretched index finger, and to determine the position of the pointer on the input surface by extending the finger in the pointing direction.
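  • as an illustration of the orthogonal projection described above, the following minimal Python sketch maps a detected 3D hand position to two-dimensional surface coordinates. The function name, the use of NumPy, and the assumption that the input surface is described by an origin point and two orthonormal in-plane axes are illustrative choices, not part of the disclosure.

```python
import numpy as np

def pointer_from_hand(hand_pos, surface_origin, surface_x, surface_y):
    """Orthogonal projection of a 3D hand position onto the 2D input surface.

    hand_pos, surface_origin: 3D points in world coordinates.
    surface_x, surface_y: orthonormal vectors spanning the input surface plane.
    Returns (u, v) surface coordinates of the pointer.
    """
    rel = np.asarray(hand_pos, dtype=float) - np.asarray(surface_origin, dtype=float)
    u = float(np.dot(rel, surface_x))  # coordinate along the surface x axis
    v = float(np.dot(rel, surface_y))  # coordinate along the surface y axis
    return u, v
```

  • for example, pointer_from_hand((0.1, 0.2, 0.05), (0, 0, 0), (1, 0, 0), (0, 1, 0)) would yield the pointer position (0.1, 0.2) for a surface lying in the x-y plane.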
  • in some embodiments, the status indicator indicates a period of time during which the user's hand is determined to be in the control area and the position of the pointer is within a region of the user interface on the input surface that is associated with the control element.
  • the position of the pointer is advantageously used to select a corresponding control element.
  • the pointer reflects the position of the hand from three-dimensional space on the input surface, i.e. on the graphical user interface. The pointer can be moved as soon as the hand is in the detection area. When the hand is in the control area, a control element can be selected depending on the position of the pointer and a corresponding status indicator can be displayed.
  • the representation of the user interface indicates the position of the pointer when the position of the hand is detected in the detection area.
  • the representation can show the pointer itself, e.g. as a point on the input surface.
  • it can also be provided to display the pointer in another way, for example by highlighting an area in which the pointer is located, such as a button which is located in the area of the pointer. This serves to give the user appropriate feedback as to where the pointer is located on the input surface in order to simplify operation.
  • the area in which the pointer is located becomes the focus for the user, so that the activation area can also be referred to as the “focus area”.
  • the control area has a base area which contains at least the input surface, and the control area extends from the base area to a top surface, wherein the distance between the base area and the top surface of the control area corresponds to the distance between the input surface and the activation area.
  • the base area can also correspond substantially to the input surface.
  • the base area and the top surface can be the same size.
  • the control area can be cuboid-shaped with the input surface as the base area. The control area thus directly adjoins the input surface and occupies the space in front of the input surface, in particular over the entire surface of the input surface.
  • the distance can be selected according to requirements, for example about 2 cm to about 5 cm, such as about 3 cm.
  • the control area is thus the area in the immediate vicinity of the input surface. If a user moves his hand close to the input surface (or touches it), i.e. into the control area, a function can be controlled. If the hand remains outside the control area, in other words further away than the distance (the "depth" of the control area), no control or "triggering" of the user interface occurs.
  • the activation area borders the control area, the activation area having a base area that contains at least the top surface of the control area.
  • the base area of the activation area can correspond to the top surface of the control area.
  • the activation area can be cuboid-shaped, for example. The activation area thus forms an area that is further away from the input surface than the control area.
  • a user will typically first enter the activation area and only then the control area. If the hand is in the activation area, this can be used, for example, to "wake up" the user interface, which can be visually displayed. The user thus receives early feedback before the actual execution of the control if his hand is in the activation area. This can facilitate the subsequent control, since the user can see that his hand has been properly detected, which can facilitate navigation. This can be helpful, especially since the user moves his hand freely in the space in front of the input surface.
  • a distance between the base and the top surface of the activation area is greater than the distance between the input surface and the activation area.
  • the activation area is thus larger than the control area. This means that feedback can be given to the user in a larger area, while the actual control or operation of the user interface takes place in a smaller area near the input surface. Depending on the application, the dimensions can be adjusted accordingly.
  • the activation area can, for example, have a depth of 10 cm or more.
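  • a minimal sketch of how the approach state could be derived from the perpendicular distance of the hand from the input surface is shown below. The state names, the function and the example depths (about 3 cm for the control area, about 10 cm for the activation area) are illustrative assumptions based on the values mentioned in the text; for simplicity the sketch ignores the additional check that the projected point lies within the base area of the respective box.

```python
from enum import Enum, auto

class ApproachState(Enum):
    TRIGGER = auto()  # hand in the control area (close to the input surface)
    FOCUS = auto()    # hand in the activation area (further away)
    IDLE = auto()     # hand outside the detection area

def classify_distance(z, control_depth=0.03, activation_depth=0.10):
    """Classify the hand's perpendicular distance z (in metres) from the input surface.

    control_depth and activation_depth are illustrative values (~3 cm and ~10 cm).
    A non-positive z (finger touching the surface) also counts as TRIGGER.
    """
    if z <= control_depth:
        return ApproachState.TRIGGER
    if z <= control_depth + activation_depth:
        return ApproachState.FOCUS
    return ApproachState.IDLE
```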
  • the position of the user's hand is determined as a position of a point on the user's hand that is closest to the input surface.
  • a point on the hand is advantageously selected that represents the position of the hand.
  • any point on the hand can be selected as the position of the hand, such as a center of the hand.
  • the frontmost point of the hand can be considered particularly relevant from the user's perspective for the input on the input surface.
  • detecting the user's hand includes determining whether at least one finger of the hand is pointing in the direction of the input surface.
  • the position of the hand can be arbitrary and the position of the hand can be determined in the detection area as described above. For a more precise determination, however, it can be determined whether the user is extending a finger in the direction of the input surface, for example the index finger. A user can use this hand position intuitively for operation. It can then be provided to control a function only when the user makes a pointing movement, for example by extending his index finger. If the user does not make a pointing movement, the control of a function can be suppressed, since in such a case the user may not intend to control a function.
  • Determining the hand position can therefore improve the accuracy of operation.
  • the position of a fingertip can be determined with greater accuracy. This position can then be assumed to be the relevant position of the hand, in particular if the tip of the extended finger is the point on the hand that is closest to the input surface, as described above.
  • the function of the user interface is activated when it is determined that the hand enters the control area and deactivated when the hand leaves the control area.
  • a control surface of the graphical user interface can be pressed when the user enters the control area with his hand (or his fingertip), i.e. comes very close to the input surface (or touches it).
  • the control surface can then be held, for example. It can then be released when the user withdraws his hand, i.e. when the hand (or the fingertip) leaves the control area.
  • the entry and exit of the hand into and out of the control area can thus be advantageously used for interaction with the user interface. Further possibilities for how different events can be triggered by the position of the hand, more precisely by the distance of the hand from the input surface, are explained below.
  • At least one transition event is further determined.
  • a transition event can be determined when the hand passes from one of the areas to another, i.e. leaves one area and enters another. This can be the case in particular when it is determined that the hand enters the control area, the hand leaves the control area, the hand enters the activation area, the hand leaves the activation area, the hand leaves the detection area, or the hand enters the detection area. Together with the position on the input surface, i.e. the pointer position as explained above, this allows complete control of the user interface to be implemented.
  • a second aspect of the invention relates to a system for data processing, comprising at least one processor which is configured to carry out the method according to the first aspect of the invention.
  • the system also has at least one display device which is configured to display a graphical user interface on an input surface, and a detection device which is configured to detect a hand of a user in a three-dimensional spatial area and to determine a position of the hand in the three-dimensional spatial area.
  • the display device comprises a projection device which is set up to project a representation of the graphical user interface onto a surface.
  • This allows the user interface to be freely displayed on any surface onto which the user interface can be projected.
  • areas of the dashboard of a vehicle can be used to display the user interface without having to be equipped with a display or the like.
  • Other areas, such as the center console or the A-pillar of a vehicle can also be used flexibly to display the user interface.
  • a projector can be located in the roof liner, e.g. in the area of the interior mirror, and project the user interface onto any surface, which can then be used as an input surface for contactless control as described above.
  • the display device can also be designed as a display, which, however, does not have to be touch-sensitive in the sense of the invention.
  • the detection device comprises at least one image detection device, in particular a camera.
  • the position of a user's hand in three-dimensional space can be determined in a simple manner using a camera.
  • One camera or several cameras can be provided.
  • the camera can be an infrared camera.
  • the at least one camera is advantageously a time-of-flight camera (ToF camera).
  • a third aspect of the invention relates to a computer program with instructions which, when executed on a system according to the second aspect, cause the system to carry out the method according to the first aspect.
  • the computer program can in particular be stored on a non-volatile data carrier.
  • a non-volatile data carrier is preferably a data carrier in the form of an optical data carrier or a flash memory module.
  • the computer program can be present as a file on a data processing unit, in particular on a server, and can be downloaded via a data connection, for example the Internet or a dedicated data connection, such as a proprietary or local network.
  • the computer program can have a plurality of interacting individual program modules.
  • the system according to the second aspect can accordingly have a program memory in which the computer program is stored.
  • the system can also be set up to access an external computer program, for example on one or more servers or other data processing units, via a communication connection, in particular to exchange data with it that are used during the execution of the method or computer program or represent outputs of the computer program.
  • Fig. 1 schematically shows an input device according to an embodiment with an input surface, a display device and a detection device;
  • Fig. 2 schematically shows a three-dimensional view of a detection area in front of an input surface;
  • Fig. 3 schematically shows a side view of a detection area in front of an input surface (hand in the focus area);
  • Fig. 4 schematically shows a side view of a detection area in front of an input surface (hand in the control area);
  • Fig. 5 schematically shows a control element with a pointer in different positions;
  • Fig. 6 shows different views of a button.
  • Fig. 1 shows an input device for detecting a user input.
  • a graphical user interface (GUI) is shown on an input surface 1. This can be projected onto the input surface 1 by means of a projector 2.
  • the input surface 1 can also be formed by a screen, such as a conventional display, which does not have to be a touch-sensitive display.
  • any surface can be used to detect proximity, for example any surface in a vehicle.
  • a detection device 3 is provided as a sensor, which detects a user's hand and determines its position in three-dimensional space.
  • the detection device 3 can, for example, be a 3D sensor, such as a time-of-flight camera, or can comprise optical 2D sensors.
  • the system enables the user to touch the input surface 1 (or possibly a screen) if he or she wishes to do so. In this way, a similar experience to conventional touchscreens can be achieved, but with certain differences in behavior. However, control is basically possible without touching.
  • the detection device 3 forms a hand recognition system that detects a person's hand in real time in a suitable field of view (the detection area) with a suitable level of detail. In particular, the detection device 3 is able to detect fingertip positions in a defined 3D world coordinate system (which may or may not coincide with the sensor coordinate system).
  • a classification is made as to whether or not an extended (pointing) finger is present, i.e. whether a detected hand is currently making a pointing gesture.
  • a control unit (“controller”; not shown) can also be provided that calibrates the system. In particular, the controller knows the physical position and size of the input surface 1 (in world coordinates), so that it can geometrically relate the detection data of a hand (e.g. finger positions) to the surface positions. The controller receives the inputs from the detection system and converts them into suitable inputs for the GUI application.
  • the system enables the user to control a pointer 9 that represents a logical position on the input surface 1 or the user interface 4.
  • the pointer 9 may have a visual appearance, but does not have to. It may be represented as a point, for example, or it may be invisible per se.
  • the pointer 9 can be logically activated or deactivated at any time. The pointer 9 is deactivated when the hand is not currently in the vicinity of the input surface 1 (i.e. is outside the detection area 10), is not in a pointing position (so that no pointing finger can be detected), or is not currently detected by the hand recognition for other reasons.
  • it is provided that the user controls the pointer 9 with a hand 7 that is in a pointing position, i.e. with an extended index finger 8 located near the input surface 1.
  • the user can trigger a control (analogous to a mouse click) that is determined by the distance of the hand from the input surface 1.
  • the position of the index finger 8, as detected by the detection device 3, determines the position of the pointer 9 on the input surface 1.
  • the finger position is a position in three-dimensional space (detection area 10)
  • the pointer position is a value in two-dimensional space (input surface 1 or screen).
  • a simple and effective variant of the assignment is an orthogonal projection of the position of the hand 7 (i.e. the tip of the index finger 8) onto the input surface 1.
  • the detection device 3 detects a pointing finger 8 and the position of its tip.
  • the position of the pointer 9 can then simply be the orthogonal projection of the position of the fingertip onto the input surface 1. If no pointing finger is detected, the pointer can be deactivated.
  • a further simplified variant of this solution can consist in simply detecting the position of the hand tip, i.e. the frontmost point of the hand in a certain forward direction, and interpreting this as the pointing finger position. If the forward direction is suitably selected (e.g. orthogonal to the screen or input surface 1), this detected position is in fact the index finger position when the user points at the screen. In this way, no classification of the finger or pointing position is required (see the sketch below).
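  • the "hand tip" variant could, for example, be sketched as follows, assuming the detection device delivers the hand as a small 3D point cloud and the forward direction is chosen as the surface normal; the data layout and the function name are hypothetical.

```python
import numpy as np

def hand_tip(hand_points, forward):
    """Return the frontmost point of a detected hand point cloud.

    hand_points: (N, 3) array of 3D points belonging to the hand.
    forward: unit vector pointing from the user towards the input surface,
             e.g. the surface normal (an assumption of this sketch).
    """
    pts = np.asarray(hand_points, dtype=float)
    fwd = np.asarray(forward, dtype=float)
    idx = int(np.argmax(pts @ fwd))  # point furthest along the forward direction
    return pts[idx]
```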
  • the detection device 3 recognizes a pointing finger 8 including its pointing direction.
  • an index finger ray is determined that begins at the lower end of the finger and runs along the finger direction.
  • the pointer position 9 is then the intersection point of this index finger ray with the input surface 1.
  • this requires that an index finger is reasonably straight and a pointing direction can be assigned. If this is not the case, the recognition system would reject the input and the pointer would be deactivated.
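  • a sketch of this ray-based variant is given below, assuming the input surface is planar and described by an origin point and a normal vector; the rejection cases mirror the deactivation of the pointer mentioned above, and the function name and tolerance are illustrative.

```python
import numpy as np

def ray_surface_intersection(finger_base, finger_dir, surface_origin, surface_normal):
    """Intersect the index-finger ray with the input surface plane.

    Returns the 3D intersection point, or None if the finger does not
    point towards the surface (ray parallel to, or facing away from, the plane).
    """
    b = np.asarray(finger_base, dtype=float)
    d = np.asarray(finger_dir, dtype=float)
    o = np.asarray(surface_origin, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    denom = float(np.dot(d, n))
    if abs(denom) < 1e-6:      # ray (nearly) parallel to the surface
        return None
    t = float(np.dot(o - b, n)) / denom
    if t < 0:                  # surface lies behind the finger
        return None
    return b + t * d
```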
  • the 3D finger position not only determines the pointer position, but also (when interpreted in a temporal context) a control or a trigger that leads to the control of a function.
  • the detection area 10 in front of the input surface 1 can be divided into two three-dimensional spatial areas, which are referred to here as the control area 5 and the activation area 6. In the simplest case, these areas are cuboid-shaped boxes, as shown in Fig. 2.
  • the control area 5 can also be referred to as a "trigger box” and the activation area 6 as a "focus box". However, these areas can also take on other shapes depending on the shape of the input surface 1.
  • the sizes given are only examples and can vary depending on the circumstances or the application, for example depending on the location and size of the input surface 1, the content of the GUI, etc.
  • the base area of the control area 5 corresponds to the input surface 1.
  • the depth of the control area 5 can be chosen appropriately, for example about 2 to 3 cm.
  • the activation area 6 is another area that is located "in front of" or "above" the control area 5.
  • the hand 7 has a certain approach state with respect to the input surface 1 and thus the GUI at any time, which can be expressed by the presence or absence of the hand 7 in the detection area 10 or, more precisely, in the activation area 6 and the control area 5. This can be one of three defined states, as explained below, and is determined by the position of the index finger 8 (if a pointing hand is detected).
  • if, as shown in Fig. 3, the position of the index finger 8 is in the activation area 6 (the "focus box"), this state can be referred to as "focus". If, as shown in Fig. 4, the position of the index finger 8 is in the control area 5 (the "trigger box"), this state can be referred to as "trigger".
  • the "idle" state can be a situation in which the position of the index finger 8 is not in either of the two boxes, i.e. is outside the detection area 10. This is especially the case if no hand or index finger is detected at all. It is useful to apply a filter to avoid flickering between the states when the detected finger position is close to the box boundaries; a hysteresis, for example, is a suitable filter (see the sketch below).
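  • a hysteresis filter of this kind could, for example, look like the following sketch, which debounces the transition at the control-area boundary; the class name, the boundary value and the margin are illustrative assumptions.

```python
class HysteresisZoneFilter:
    """Debounce the approach state near a zone boundary (e.g. the trigger-box depth).

    The state only switches when the distance crosses the boundary by more than
    a small margin, so small jitters in the detected finger position do not cause
    flickering between 'trigger' and 'focus'.
    """

    def __init__(self, boundary=0.03, margin=0.005):
        self.boundary = boundary  # nominal control-area depth in metres
        self.margin = margin      # hysteresis band in metres
        self.inside = False       # current filtered state: inside the control area?

    def update(self, z):
        """Update with the current distance z from the surface; return the filtered state."""
        if self.inside and z > self.boundary + self.margin:
            self.inside = False
        elif not self.inside and z < self.boundary - self.margin:
            self.inside = True
        return self.inside
```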
  • the hand input relevant to the system is completely described by the current pointer position (x and y coordinates) and the current approach state (z coordinate).
  • this data, a pair of pointer position and approach state, can be referred to as the control state.
  • the information within the control state is conceptually the same as the (relevant) information about the detected hand, but is fully expressed in screen coordinates (or logical, non-geometric information) and no longer depends on the geometric setup and calibration of the system (position of the sensor, etc.). The control state is therefore a self-sufficient input to a graphical user interface (GUI) that provides the complete information of a pointing device, similar to a mouse or touch input.
  • Particularly relevant for control are the moments in which the state of the approach changes, i.e. when the position of the hand leaves one of the areas and/or enters one of the areas.
  • the (temporal) detection of the hand position can be frame-based, so that a change of the approach state occurs when the control state determined in frame n+1 contains a different approach state than that of the previous frame n. Therefore, the following transition events can be defined, which correspond to state changes and can be relevant for controlling the functions of the user interface (see the sketch after this list).
  • the transition event "TriggerStateEnterEvent" is triggered when the state changes to "trigger".
  • the transition event “TriggerStateLeaveEvent” is triggered when the state “trigger” changes to any other state.
  • These two transition events can be used to select functions, similar to a mouse click or touching a touchscreen. In this way, drag-and-drop functionality can also be implemented. As long as the hand (or the tip of the index finger 8) remains in the control area 5, an object of the user interface is held and then released when the hand 7 leaves the control area 5.
  • the transition event “FocusStateEnterEvent” is triggered when the state changes to “Focus”.
  • the transition event “FocusStateLeaveEvent” is triggered when the state changes from “Focus” to any other state.
  • the transition event “IdleStateEnterEvent” is triggered when the state changes to "Idle”.
  • the transition event “IdleStateLeaveEvent” is triggered when the state changes from "Idle” to any other state.
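  • the control state and the transition events listed above could be derived per frame roughly as in the following sketch; the data structure and the frame-comparison logic are assumptions of this sketch, only the event names are taken from the description.

```python
from dataclasses import dataclass
from typing import Optional, Tuple, List

@dataclass
class ControlState:
    pointer: Optional[Tuple[float, float]]  # (x, y) on the input surface, or None if deactivated
    approach: str                           # "trigger", "focus" or "idle"

_LEAVE = {"trigger": "TriggerStateLeaveEvent",
          "focus": "FocusStateLeaveEvent",
          "idle": "IdleStateLeaveEvent"}
_ENTER = {"trigger": "TriggerStateEnterEvent",
          "focus": "FocusStateEnterEvent",
          "idle": "IdleStateEnterEvent"}

def transition_events(prev: ControlState, curr: ControlState) -> List[str]:
    """Derive the transition events between two consecutive frames."""
    events: List[str] = []
    if prev.approach != curr.approach:
        events.append(_LEAVE[prev.approach])  # leave event for the old state ...
        events.append(_ENTER[curr.approach])  # ... followed by the enter event for the new state
    return events
```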
  • the user thus receives feedback that the system has successfully responded to his hand 7 and he can continue operating the system.
  • appropriate feedback is very helpful for usability.
  • in contactless interaction in particular, one problem is giving the user useful, supportive feedback.
  • the user often feels lost, does not know how to interact and what the system actually recognizes.
  • the system described here can provide direct feedback through spatial information about the finger position, so that the user understands what the system recognizes in order to react and adjust the finger position and movement.
  • various states can be provided for user interface controls (UI elements) as visual feedback.
  • the system shows what it recognizes and which UI elements can be operated.
  • acoustic feedback can also be provided, for example various clicking sounds.
  • Such feedback is particularly beneficial in a situation where a user is sitting in a vehicle and wants to interact with virtual objects on a dashboard.
  • clickable elements can appear larger and be easily highlighted. If the user then moves their finger over one of these elements, it can be additionally highlighted by a "hover effect". If the user now clicks on the element, it can visually appear like a real button being pressed down. The user learns to interact better over time, so that their "click" is recognized by the system.
  • Fig. 5 schematically shows a control element 13, which is shown in the form of a round button and is thus visible to a user as part of the user interface 4.
  • the visible area of the control element 13 is surrounded by an area 14.
  • this is square along the x and y coordinates.
  • On the left a situation is shown in which the pointer 9 is outside the control element 13 and in particular also outside the area 14. The control element 13 is not addressed. However, if the pointer 9 is within the area 14, as shown on the right in Fig. 5, the control element 13 is addressed.
  • it is provided that the pointer 9 hits the control element 13 itself, or at least the surrounding area 14 of the control element 13, in order to trigger the control element 13.
  • the pointer 9 is considered to be "on the control element 13" if its position is at least in the area 14.
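  • a possible hit test for the situation shown in Fig. 5, assuming a round control element 13 with a square surrounding area 14 centred on it; the function name, the parameters and the use of a square half-width are illustrative.

```python
def pointer_on_element(pointer, center, radius, margin):
    """Check whether the pointer counts as being 'on' a round control element.

    The pointer (x, y) is accepted if it lies inside a square region (the
    surrounding area 14) of half-width (radius + margin) around the element's
    centre, so that near misses around the visible button still address it.
    Assumes an active pointer; pass only non-None coordinates.
    """
    px, py = pointer
    cx, cy = center
    half = radius + margin
    return abs(px - cx) <= half and abs(py - cy) <= half
```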
  • Fig. 6 the control element 13 is shown during a period in which it is operated by a user.
  • a status indicator 15 is provided to provide feedback during operation.
  • the user can be given feedback as to whether and when he activates the control element 13, i.e. presses the button. This allows any inaccuracies in the calculation of the distance of the index finger 8 to the input surface 1 to be compensated for the user, since the user does not directly feel the point in time when his finger enters the control area 5.
  • a user will usually expect the control element 13 to react immediately, at the latest when he touches the input surface 1 with his finger.
  • a status indicator in the form of a circular loading bar 15 serves as the display.
  • the user thus receives feedback about the operation of the button 13, in particular time-dependent feedback about the duration of the button operation.
  • Fig. 6 shows various stages of "charging", i.e. the period of time during which the button 13 is held down. At first, the button 13 appears as such (far left in Fig. 6) without a status indicator. As soon as the user enters the control area 5 (and the pointer 9 hits the button 13 accordingly, see above), the loading bar 15 begins to be displayed. The arrow shows the direction in which the loading bar 15 is built up. If the user holds the button 13 for a predetermined period of time, the loading bar 15 is complete (far right in Fig. 6) and the corresponding function is controlled.
  • An advantage of this delayed control is that the activation of the button 13 can be cancelled during the specified period of time, more precisely while the loading bar 15 is being built up. This can prevent incorrect operations, as the user can remove his finger during the loading (second and third image in Fig. 6) and thus prevent triggering. This can be useful, for example, if a user sees two buttons in front of him which are close to each other and he accidentally hits the wrong button. While the button 13 is still loading, the user can interrupt the interaction by withdrawing his finger; the loading bar 15 disappears. He can then press the correct button. In addition, the design allows the user to see a reaction despite the button 13 being hidden by the finger, since the loading bar 15 runs around the button 13 and is not or only slightly hidden. It is understood that other designs or animations may also be suitable for this purpose, for example a straight loading bar, size animations, color animations and the like.
  • the described behavior of displaying a loading bar 15 when operating a button 13 can be implemented using the transition events explained above.
  • the events are assigned to the states of the button 13 or the loading bar 15 as follows.
  • buttons in the system can be marked as "clickable" when the user moves a hand into the activation area 6 (transition event "FocusStateEnterEvent").
  • a button 13 that is to be charged can already indicate in its design that a loading bar 15 is first charged when it is pressed. This lets the user know that this button 13 will not be triggered immediately, but must first be “charged”.
  • the loading bar 15 begins to build up as soon as the user's hand or fingertip 8 enters the control area 5 (transition event "TriggerStateEnterEvent”). As long as the fingertip remains in the control area 5, the button 13 remains pressed and the loading bar 15 continues to build up. A corresponding function is controlled (triggered) as soon as the button 13 is fully charged. After a function has been triggered, it can be specified that no other control element can be operated until a "TriggerStateLeaveEvent" is detected. The user must then withdraw his finger and move it back into the control area 5 in order to operate a different button or the same button 13 again.
  • if a departure from the control area 5 is detected (transition event "TriggerStateLeaveEvent") before the button is fully charged, i.e. before the loading bar 15 is complete, an abort is executed and the button 13 is returned to the default state without triggering an action.
  • an abort is also executed when the finger moves away from the button 13. This can, on the one hand, mean that the finger leaves the boundaries of the button 13 (or the area 14 surrounding the button 13) in the x or y direction. It can also mean that the finger is lifted, i.e. is removed from the input surface 1 and is too far away from it in the z direction, in particular by a distance greater than the distance 11.
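  • the abort conditions just described (leaving the button in the x/y direction or lifting the finger in the z direction) could be combined roughly as follows; the function and its parameters are hypothetical and build on the hit test sketched earlier.

```python
def should_abort(on_element, z, max_depth):
    """Decide whether a running button charge must be aborted.

    on_element: result of a hit test such as pointer_on_element() above,
                or False if the pointer is currently deactivated.
    z:          current perpendicular distance of the fingertip from the input surface.
    max_depth:  depth of the control area (the 'distance 11' in the description).
    """
    left_in_xy = not on_element   # finger left the button boundaries in x/y
    lifted_in_z = z > max_depth   # finger lifted beyond the control area in z
    return left_in_xy or lifted_in_z
```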
  • there are a variety of design options for the loading bar 15, particularly with regard to its shape and also its animation during charging. Two variants of how the loading bar 15 can be animated are explained below as examples.
  • a simple option is an animation of the loading bar 15 that runs at a constant speed.
  • a linear formula for charging can be applied here. For example, a charging status in percent can be calculated by dividing the elapsed time since the start of charging by a given total charging time. If, for example, the total charging time is set to 1 second, the loading bar would be half full after 0.5 seconds.
  • the charging function can alternatively be represented by a more complex, non-linear function.
  • for example, the button 13 can be charged faster at first and then more slowly.
  • the charging process can take just as long as the linear function just described, e.g. 1 second. This means that the user receives feedback more quickly at the beginning and sees a change immediately, but still has the same amount of time to cancel the process.
  • a charging speed at a certain point in time can, for example, be calculated as the product of a base speed with a time factor.
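  • the two animation variants could, for example, be expressed as the following progress functions; the exponent used for the non-linear variant is an illustrative assumption, chosen so that the bar grows quickly at first but still needs the full charging time to complete.

```python
def linear_progress(elapsed, total=1.0):
    """Fraction of the loading bar to draw after 'elapsed' seconds (constant speed)."""
    return min(max(elapsed / total, 0.0), 1.0)

def eased_progress(elapsed, total=1.0, exponent=0.5):
    """Non-linear variant: charges faster at first, then more slowly.

    With exponent < 1 the bar grows quickly at the beginning (immediate feedback)
    but still takes the full 'total' time to complete, leaving the user the same
    amount of time to cancel as in the linear variant.
    """
    return min(max(elapsed / total, 0.0), 1.0) ** exponent

# Example: after 0.5 s of a 1 s charge the linear bar is at 50 %,
# while the eased bar is already at about 71 %.
```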

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention relates to a method for capturing user inputs in an input device, the input device comprising: an input surface (1) on which a graphical user interface (4) is displayed; and a detection device (3). In the method, a hand (7) of a user is detected by means of the detection device (3), and a position (8) of the hand (7) in a detection area (10) is determined, the detection area (10) being a three-dimensional spatial area which is assigned to the input surface (1). It is determined whether the user's hand (7) is located in an activation area (6), the activation area (6) being a part of the detection area (10) located at a distance (11) from the input surface (1), and it is determined whether the user's hand (7) is located in a control area (5), the control area (5) being a part of the detection area (10) located between the input surface (1) and the activation area (6). If the user's hand (7) is determined to be in the control area (5), a status indicator (15) is displayed which indicates a period of time during which the hand (7) is determined to be in the control area (5).
PCT/EP2023/085080 2022-12-15 2023-12-11 Method and system for capturing user inputs WO2024126351A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022133389.2A DE102022133389A1 (de) 2022-12-15 2022-12-15 Verfahren und system zum erfassen von benutzereingaben
DE102022133389.2 2022-12-15

Publications (1)

Publication Number Publication Date
WO2024126351A1 (fr)

Family

ID=89427319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/085080 WO2024126351A1 (fr) 2022-12-15 2023-12-11 Method and system for capturing user inputs

Country Status (2)

Country Link
DE (1) DE102022133389A1 (fr)
WO (1) WO2024126351A1 (fr)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050253807A1 (en) * 2004-05-11 2005-11-17 Peter Hohmann Method for displaying information and information display system
US20110219340A1 (en) * 2010-03-03 2011-09-08 Pathangay Vinod System and method for point, select and transfer hand gesture based user interface
US20120229377A1 (en) * 2011-03-09 2012-09-13 Kim Taehyeong Display device and method for controlling the same

Also Published As

Publication number Publication date
DE102022133389A1 (de) 2024-06-20


Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23832688

Country of ref document: EP

Kind code of ref document: A1