DE102014005064A1 - An input system and method for providing input information

An input system and method for providing input information

Info

Publication number
DE102014005064A1
DE102014005064A1
Authority
DE
Germany
Prior art keywords
input
input area
b1
object
es
Prior art date
Legal status
Withdrawn
Application number
DE102014005064.5A
Other languages
German (de)
Inventor
Manfred Mittermeier
Ulrich Müller
Manuel Kühner
Current Assignee
Audi AG
Original Assignee
Audi AG
Priority date
Filing date
Publication date
Application filed by Audi AG
Priority to DE102014005064.5A
Publication of DE102014005064A1
Application status: Withdrawn

Classifications

    • G06F 3/0304: Detection arrangements using opto-electronic means
    • G06F 3/0425: Digitisers characterised by opto-electronic transducing means, using a single imaging device (e.g. a video camera) for tracking the absolute position of one or more objects with respect to an imaged reference surface, e.g. a camera imaging a display, projection screen, table or wall surface on which a computer-generated image is displayed or projected
    • G06F 3/044: Digitisers characterised by capacitive transducing means
    • G06F 3/045: Digitisers using resistive elements, e.g. a single continuous surface or two parallel surfaces brought into contact
    • G06F 2203/04106: Multi-sensing digitiser, i.e. a digitiser using at least two different sensing technologies simultaneously or alternately, e.g. for detecting pen and finger, for saving power or for improving position detection
    • G06F 2203/04108: Touchless 2D digitiser, i.e. a digitiser detecting the X/Y position of the input means (finger or stylus) when it is proximate to, but not touching, the interaction surface, without distance measurement in the Z direction
    • B60K 35/00: Arrangement of adaptations of instruments
    • B60K 37/06: Arrangement of fittings on dashboard of controls, e.g. control knobs
    • B60K 2370/1438: Touch-sensitive input devices; touch screens
    • B60K 2370/146: Input by gesture
    • B60K 2370/21: Optical features of instruments using cameras

Abstract

An input system (ES) for providing input information (EI) comprises an input surface (EF) having at least two first input areas (B1) and a second input area (B2), a camera (KA) for capturing an electronic image (EB), an input area determination unit (BEH), a touch sensor (BS) and an output unit (AE). The input area determination unit (BEH) is configured to determine (130), by means of an evaluation (130') of the electronic image (EB), the first input area (B1') which an object (OB) is approaching. The output unit (AE) is configured to generate (160) the input information (EI) taking into account the first input area (B1') determined by the input area determination unit (BEH) and a touch (140) detected by the touch sensor (BS). The invention also relates to a vehicle (FZ) having an input system (ES) according to the invention and to a corresponding method (100) for providing input information (EI).

Description

  • The present invention relates to an input system for providing input information. The input system comprises an input surface having at least two first input areas and a second input area, a camera for capturing an electronic image, an input area determination unit, a touch sensor, and an output unit. The input area determination unit is configured to determine the one of the first input areas which an object is approaching, by means of an evaluation of the electronic image captured by the camera. The touch sensor is configured to detect a touch of the second input area by the object. The output unit is configured to generate the input information taking into account the first input area determined by the input area determination unit and the touch detected by the touch sensor. Typically, the input surface is flat. Alternatively, the input surface may also be entirely or partially curved. In either case, the input surface may consist of one or more parts. The camera is preferably a 3D camera. A 3D camera is understood here to be a camera that provides electronic images in which each pixel carries at least one distance value. The distance value typically indicates the distance between the camera and the location on the surface of the object that the respective pixel images.
  • Moreover, the invention relates to a vehicle.
  • Moreover, the invention relates to a method for providing input information. The method comprises the following steps: approaching of an object toward one of at least two first input areas of an input surface; capturing an electronic image of the object; determining the first input area which the object is approaching by means of an evaluation of the electronic image; touching of a second input area of the input surface by the object; detecting the touch of the second input area by the object by means of a touch sensor; and generating the input information taking into account the first input area determined by the input area determination unit and the touch detected by the touch sensor.
  • In vehicles, systems are increasingly provided which serve to detect persons in the vehicle three-dimensionally. These systems can be camera-based; 3D cameras, for example, are suitable for this purpose. Known applications of such systems are the detection and recognition of gestures and/or intentions of a driver and/or a passenger.
  • US 2008/0211766 A1 describes an input system and a method in which a hand profile is generated by means of a first detection mechanism and a touch profile by means of a second detection mechanism. The hand profile, which can be generated by means of an image sensor, shows the individual fingers of a hand and their positions within a plane. The touch profile shows one or more touch areas defined by contact of the fingers and by the location of the fingers in the plane.
  • It is an object of the present invention to provide an input system that can be produced with less effort than the known input system. In addition, it is an object of the present invention to provide a vehicle with a generic input system that can be produced with less effort than the known input system. Moreover, it is an object of the present invention to provide a method of providing input information that can be performed with an input system that can be manufactured with less effort than the known input system.
  • This object is achieved by an input system according to claim 1, by a vehicle according to claim 9 and a method according to claim 10. Advantageous developments of the present invention are specified in the subclaims.
  • The input system for providing input information comprises an input surface having at least two first input areas and a second input area, a camera for capturing an electronic image, an input area determination unit, a touch sensor, and an output unit. The input area determination unit is configured to determine the one of the first input areas which an object is approaching, by means of an evaluation of the electronic image captured by the camera. The touch sensor is configured to detect a touch of the second input area by the object. The output unit is configured to generate the input information taking into account the first input area determined by the input area determination unit and the touch detected by the touch sensor. The second input area at least partially comprises at least two of the first input areas.
  • Accordingly, a vehicle according to the invention has an input system according to the invention.
  • The method according to the invention for providing input information comprises the following steps: approaching of an object toward one of at least two first input areas of an input surface; capturing an electronic image of the object; determining the first input area which the object is approaching by means of an evaluation of the electronic image; touching of a second input area of the input surface by the object, the second input area at least partially comprising at least two of the first input areas; detecting the touch of the second input area by the object by means of a touch sensor; and generating the input information taking into account the first input area determined by the input area determination unit and the touch detected by the touch sensor.
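The sequence of steps can be sketched as a small program. The region geometry, the names, and the nearest-centre heuristic below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned first input area B1 on the input surface, in surface coordinates."""
    x: float
    y: float
    w: float
    h: float

def approached_area(areas: dict, px: float, py: float) -> str:
    """Determining step: pick the first input area whose centre is closest to
    the fingertip position (px, py) projected from the camera image evaluation."""
    def dist2(r: Rect) -> float:
        cx, cy = r.x + r.w / 2.0, r.y + r.h / 2.0
        return (px - cx) ** 2 + (py - cy) ** 2
    return min(areas, key=lambda name: dist2(areas[name]))

def generate_input_information(area, touched: bool):
    """Generating step: the output unit combines the determined first input
    area with the (spatially unresolved) touch event from the touch sensor."""
    return area if (touched and area is not None) else None
```

For example, with two areas B1a at (0, 0) and B1b at (20, 0), a fingertip projected to x = 22 selects B1b as soon as the coarse touch sensor reports a touch.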
  • Because the second input area at least partially comprises at least two of the first input areas, the requirements on the spatial resolution of the touch sensor are lower than in the known input system. Owing to this lower required spatial resolution, the input system according to the invention can be produced with less effort than known input systems.
  • In a first embodiment, the input system is prepared to determine the first input area by means of the input area determination unit only after the touch sensor has detected a touch by the object. As a result, the computational effort (and the associated power consumption and/or occupancy of a processing resource) for determining the first input area can be avoided until the second input area has actually been touched by the object.
  • In a second embodiment, the input system is prepared to detect the touch of the second input area of the input surface by the object only after the input area determination unit has determined the first input area which the object is approaching. In this way, the information as to which of the first input areas the object last approached is available without delay immediately after the touch of the input surface by the object has been detected.
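The two embodiments differ only in when the (costly) image evaluation runs relative to the touch event. A minimal sketch of both orderings follows; the class and callback names are assumptions made here for illustration:

```python
import enum

class Mode(enum.Enum):
    TOUCH_FIRST = 1  # first embodiment: evaluate the image only after a touch
    TRACK_FIRST = 2  # second embodiment: track the approach continuously

class InputFusion:
    """Combines camera frames and touch events into a selected first input area."""

    def __init__(self, mode: Mode):
        self.mode = mode
        self.last_area = None       # most recently determined first input area
        self.pending_frame = None   # frame stored for deferred evaluation

    def on_camera_frame(self, frame, evaluate):
        if self.mode is Mode.TRACK_FIRST:
            # eager: the result is ready with no delay when the touch arrives
            self.last_area = evaluate(frame)
        else:
            # lazy: defer evaluation, saving computation and power until a touch
            self.pending_frame = frame

    def on_touch(self, evaluate):
        if self.mode is Mode.TOUCH_FIRST and self.pending_frame is not None:
            self.last_area = evaluate(self.pending_frame)
        return self.last_area
```

In TOUCH_FIRST mode no evaluation runs until a touch occurs (the saving described in the first embodiment); in TRACK_FIRST mode the result is already available when the touch arrives (the zero-delay property of the second embodiment).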
  • It is particularly preferred if the second input area comprises all first input areas of the input surface. To realize this embodiment, a single touch sensor without spatial resolution covering the entire input surface is sufficient. As a result, an input system with minimal effort for the touch sensor can be produced.
  • It is also preferable if the touch sensor comprises a force sensor and/or a pressure sensor. By means of a force sensor and/or a pressure sensor, a deliberate touch of the input surface can be distinguished from an inadvertent contact with the input surface. This is particularly advantageous for mobile applications in vehicles, in which accidental contact with the input surface is likely (for example due to a bump or a braking event, or at sea or in an aircraft due to turbulence). The input system may comprise a comparator for comparing a threshold value with a force detectable by the force sensor. In this way, a defined minimum force can be specified which is considered sufficient for the assumption that a deliberate input has taken place. Alternatively or additionally, the input system may comprise a comparator for comparing a threshold value with a pressure (force per unit area) detectable by the pressure sensor. In this way, a defined minimum pressure can be specified which is considered sufficient for the assumption that a deliberate input has taken place.
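The comparator described above can be sketched as follows; the units and threshold values are illustrative assumptions, not values from the patent:

```python
def is_deliberate_touch(force_n: float, contact_area_mm2: float,
                        min_force_n: float = 1.0,
                        min_pressure_n_per_mm2: float = 0.01) -> bool:
    """Return True if the touch exceeds either the minimum force or the
    minimum pressure (force per unit area), i.e. is assumed deliberate."""
    if force_n >= min_force_n:
        return True
    if contact_area_mm2 > 0 and force_n / contact_area_mm2 >= min_pressure_n_per_mm2:
        return True
    return False
```

A light brush over a large contact area (for example from a bump during driving) falls below both thresholds and is rejected, while a small but firm fingertip press passes the pressure test.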
  • It is also preferable if the input surface is part of an electronic display device. The input system can then form the input component of a touchscreen. An advantage of such a touchscreen is its cost-effective manufacturability compared to known touchscreens.
  • The object may be a body part of a user or the tip of a stylus. In the first case, input is possible without any further aid, for example by means of a fingertip. In the second case, a stylus is required as an input aid. With a stylus, first input areas can be reliably selected that may be smaller than when the input is made by means of a fingertip. The stylus may be a component of the input system.
  • Regardless, it may be expedient for the display device to have an actuator for generating a haptic feedback. The haptic feedback can be effected, for example, by means of a pulse generator for generating a mechanical pulse or by means of a vibrator. In either case, the pulse generator can have an electromechanical drive. The electromechanical drive can comprise, for example, an electromagnet and/or a piezoceramic.
  • The present invention will now be explained in more detail with reference to the accompanying drawings, in which:
  • FIG. 1 shows a schematic representation of an input system for providing input information;
  • FIG. 2 schematically shows an embodiment of an arrangement of a plurality of first input areas and a second input area in an input surface of a display device;
  • FIG. 3 shows a schematic flow diagram of a first embodiment of the method according to the invention for providing input information; and
  • FIG. 4 shows a schematic flow diagram of a second embodiment of the method according to the invention for providing input information.
  • The embodiments described in more detail below represent preferred embodiments of the present invention.
  • The input system ES shown in FIG. 1 for providing input information EI comprises a display device AV, a camera KA, an input area determination unit BEH, an output unit AE, an evaluation unit AW and an actuator AK. The display device AV has a touch sensor BS. The touch sensor BS is typically a force sensor KS and/or a pressure sensor DS. The force sensor KS detects the magnitude of the vector component FVK, perpendicular to the input surface EF (i.e., opposite to the direction of the surface normal vector FV of the input surface EF), of a force F which a body part KT (typically a finger) of a user or the tip SP of a stylus ST exerts on the input surface EF.
  • The camera KA is aligned to capture images BI of a region in front of the input surface EF. The camera KA is preferably a 3D camera. A 3D camera is understood here to mean a camera KA which provides electronic images BI in which each pixel carries at least one distance value. The distance value typically indicates the distance between the camera KA and the location on the surface of the object OB that the respective pixel images. The 3D camera may be based on one or more of the following (per se known) principles for determining a distance to the object OB:
    • capturing the object OB from at least two mutually spaced locations (for example with at least two cameras or a stereo camera) and evaluating the parallax;
    • projecting a defined pattern onto the object OB by means of a light source (structured light) and evaluating the distortion of the pattern;
    • determining distances to the object OB by means of direct or indirect time-of-flight measurement (for example by means of a PMD sensor, PMD = photonic mixer device);
    • evaluating differences between a measuring beam and an object beam according to the functional principle of an interferometer;
    • using a microlens array and subsequently creating a depth map by evaluating the directions of the light rays that lead to each pixel.
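Whichever ranging principle is used, the result is an image whose pixels carry distance values. A crude sketch of locating a fingertip candidate in such a depth map, under the simplifying assumption (made here for illustration only) that the object of interest is the point nearest the camera:

```python
def fingertip_pixel(depth):
    """Return (row, col) of the pixel with the smallest distance value in a
    depth image; None entries mark pixels without a valid measurement."""
    best = None  # (distance, row, col)
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if d is not None and (best is None or d < best[0]):
                best = (d, r, c)
    return (best[1], best[2]) if best else None
```

The resulting pixel position, together with the camera calibration, could then be projected onto input-surface coordinates for the input area determination unit BEH.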
  • The electronic images BI generated by the camera KA are transmitted via a first data line D1 to the input area determination unit BEH. The input area determination unit BEH is designed to generate, from one or more of the electronic images BI, location and/or presence information S1 about the location and/or presence of the body part KT of the user and/or the stylus ST. This location and/or presence information S1 is transmitted via a second data line D2 to an output unit AE. Via a third data line D3, the output unit AE receives information S2 from the touch, force and/or pressure sensor BS, DS, KS about a force and/or a pressure exerted on the input surface EF. The output unit AE derives an input signal EI from the location and/or presence information S1 in conjunction with the force and/or pressure information S2. The input signal EI is transmitted via a fourth data line D4 to an evaluation unit AW.
  • A first option provides that the input system ES comprises at least two 3D cameras KA in order to improve the reliability of recognizing the selected actuation function by comparing the location and/or presence information S1 determined by different (though not necessarily different in type) 3D cameras KA. Alternatively or additionally, a separate input area determination unit BEH can be provided for each 3D camera KA for the same reason.
  • An independent second option provides that the evaluation unit AW comprises an application that uses the input system ES (for example, a driver assistance system or an entertainment system).
  • An independent third option provides that the evaluation unit AW controls, via a fifth data line D5, a display for displaying information (for example a map display or an input menu). The display can be combined with the touch, force and/or pressure sensor BS, KS, DS in a display device AV. When the display device AV is incorporated in this way into the remaining arrangement of the input system ES, the display device AV has, from the user's point of view, the function of a touchscreen.
  • An independent fourth option provides that the evaluation unit AW actuates, via a sixth data line D6, an actuator AK by means of a feedback signal RMS in order to output a feedback RM. In the simplest case, the feedback RM consists of a change in the representation of information shown on the display (for example a brightness or color change of an operating field). An alternative or additional possibility is to emit a signal tone by means of the actuator AK. A haptic feedback RM, which can be provided alternatively or additionally, is preferred. The haptic feedback RM can, for example, consist in a vibration of the display device AV or in an abrupt displacement of the input surface EF (by a few hundredths or tenths of a millimeter) by means of a mechanical actuator AK. The mechanical actuator AK can, for example, comprise an electromagnet (not explicitly shown in the figures) or a piezoceramic which outputs a mechanical signal that can be perceived haptically by the user.
  • Individual ones, a subset, or all of the mentioned data lines D1, D2, D3, D4, D5, D6 can also be implemented by means of radio transmission (for example by means of an automotive WLAN or Bluetooth).
  • Since a device for three-dimensional detection of persons in the vehicle FZ may already be present in a vehicle FZ for other reasons, the position of a finger KT of a user can already be known during operation of a touchscreen AV. The present invention makes it possible to dispense with the previously required, complex high-resolution touch sensor system in the touchscreen AV. Instead, touching of the input surface EF can be detected by means of a large-area contact sensor BS and/or a large-area force sensor KS or pressure sensor DS for detecting an actuating force or an actuating pressure. The position of a finger KT or a stylus ST lateral to the input surface EF can be detected via the 3D camera KA and the input area determination unit BEH. The position of the finger KT or the stylus ST parallel to the surface normal vector FV of the input surface EF, or its degree of contact, can be detected indirectly by means of the touch, force and/or pressure sensor BS, KS, DS.
  • By means of the force and/or pressure sensor KS, DS, insufficient accuracy of the local detection of the fingertip position (parallel to the surface normal vector FV of the input surface EF) can be compensated. The combination of three-dimensional detection with a touch, force and/or pressure sensor BS, KS, DS makes it possible to provide a touchscreen without the otherwise required, complex touch sensor system. This solves the technical problem that the spatial resolution of systems available for three-dimensional detection of the vehicle interior will, for the foreseeable future, not be accurate enough to reliably detect an actual touch.
  • To further improve operating safety, it may be expedient for an application function to be triggered only when the user or the tip SP of the stylus ST exerts an operating force F on the input surface EF whose force component FVK parallel to the surface normal vector FV of the input surface EF exceeds a predetermined threshold value.
  • Optionally, a haptic feedback RM can be provided by means of the display device AV and/or an actuator AK in order to give the user clear feedback RM about a successfully triggered function.
  • The inventive concept is applicable to all surfaces that can be equipped with a contact, force or pressure sensor BS, KS, DS. For this purpose, it is not absolutely necessary for the display device AV to have an active display. For example, a control panel AV of a machine tool may comprise a 3D camera (or a plurality of 3D cameras), printed switch labels, and a single microswitch with which an operating force on the control panel AV can be detected. The actuation of the microswitch can be independent of the switching function selected by the user. The same applies to a dashboard of a vehicle FZ. The microswitch then serves to detect an intention to operate and the timing of that intention. The 3D camera KA together with the input area determination unit BEH then serves to recognize which of the switching functions offered on the control panel was selected by the user at the time of actuation.
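The control-panel variant reduces the touch path to a single micro-switch without any spatial resolution. A sketch of the resulting selection logic, with hypothetical label names:

```python
def control_panel_input(microswitch_pressed: bool, approached_label):
    """A single micro-switch reports only *that* and *when* an actuation
    occurred; the 3D-camera path supplies *which* printed switch label the
    user's finger was over at that moment."""
    if microswitch_pressed and approached_label is not None:
        return approached_label
    return None
```

The same split applies to the dashboard case: the switch carries the timing and intent, the camera carries the selection.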
  • FIG. 2 shows an embodiment of an arrangement of a plurality of first input areas B1 and a second input area B2 in an input surface EF of a display device AV. The second input area B2 at least partially comprises each of the first input areas B1.
  • The first embodiment of the method 100 for providing input information EI shown in FIG. 3 comprises the following steps. In a first step 110, an object OB approaches one of at least two first input areas B1 of an input surface EF. In a second step 120, an electronic image EB of the object OB is captured. In a third step 130, the first input area B1', which the object OB is approaching, is determined by means of an evaluation 130' of the electronic image EB. In a fourth step 140, a second input area B2 of the input surface EF is touched by the object OB. In a fifth step 150, the touch 140 of the second input area B2 by the object OB is detected by means of a touch sensor BS. In a sixth step 160, the input information EI is generated taking into account the first input area B1' determined by the input area determination unit BEH and the touch 140 detected by the touch sensor BS. The second input area B2 at least partially comprises at least two of the first input areas B1.
  • The second embodiment of the method 100 for providing input information EI shown in FIG. 4 differs from the embodiment shown in FIG. 3 in that the third step 130 of determining the first input area B1', which the object OB is approaching, takes place after the fifth step 150 of detecting the touch 140 of the input surface EF by the object OB.
  • In principle, further other sequences of the method steps may be expedient.
  • REFERENCES CITED IN THE DESCRIPTION
  • This list of documents cited by the applicant was generated automatically and is included solely for the reader's information. The list is not part of the German patent or utility model application. The DPMA assumes no liability for any errors or omissions.
  • Cited patent literature
    • US 2008/0211766 A1 [0005]

Claims (10)

  1. An input system (ES) for providing input information (EI), wherein the input system (ES) comprises: - an input surface (EF) having at least two first input regions (B1) and a second input region (B2); A camera (KA) for taking an electronic picture (EB); An input range determination unit (BEH) for determining a (B1 ') of the first input ranges (B1) to which an object (OB) approaches by means of evaluation ( 130 ' ) of the electronic image (EB); A touch sensor (BS) for detecting ( 150 ) of touching ( 140 ) of the second input area (B2) through the object (OB); An output unit (AE) for generating ( 160 ) of input information (EI) in consideration of the first input area (B1 ') detected by the input area determination unit (BEH) and the touch detected by the touch sensor (BS) ( 140 ); characterized in that the second input area (B2) at least two of the first input areas (B1) each comprise at least partially.
  2. Input system (ES) according to claim 1, characterized in that the input system (ES) is prepared to first determine the first input range (B1) by means of the input range detection unit (BEH) after the touch sensor (BS) touch ( 140 ) of the object (OB).
  3. Input system (ES) according to claim 1, characterized in that the input system (ES) is prepared to prevent the touching ( 140 ) of the object (OB) with the second input area (B2) of the input area (EF) only after the input area determining unit (BEH) has determined the first input area (B1) to which the object (OB) is approaching.
  4. Input system (ES) according to one of claims 1 to 3, characterized in that the second input area (B2) comprises all first input areas (B1) of the input surface (EF).
  5. Input system (ES) according to one of claims 1 to 4, characterized in that the touch sensor (BS) comprises a force sensor (KS) and/or a pressure sensor (DS).
  6. Input system (ES) according to one of claims 1 to 5, characterized in that the input surface (EF) is part of an electronic display device (AV).
  7. Input system (ES) according to one of claims 1 to 6, characterized in that the object (OB) is a body part (KT) of a user or a tip (SP) of a stylus (ST).
  8. Input system (ES) according to one of claims 1 to 7, characterized in that the display device (AV) has an actuator (AK) for generating (170) a haptic feedback (RM).
  9. Vehicle (FZ), characterized in that the vehicle (FZ) has an input system (ES) according to one of claims 1 to 8.
  10. Method (100) for providing input information (EI), the method (100) comprising the following steps: - approaching (110) of an object (OB) to one of at least two first input areas (B1) of an input surface (EF); - capturing (120) an electronic image (EB) of the object (OB); - determining (130) the first input area (B1') that the object (OB) is approaching, by means of an evaluation (130') of the electronic image (EB); - touching (140) of a second input area (B2) of the input surface (EF) by the object (OB), wherein the second input area (B2) at least partially comprises each of at least two of the first input areas (B1); - detecting (150) the touching (140) of the second input area (B2) by the object (OB) by means of a touch sensor (BS); and - generating (160) the input information (EI) taking into account the first input area (B1) determined by the input area determination unit (BEH) and the touching (140) detected by the touch sensor (BS).
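The step sequence of the method claim can be sketched in a few lines of Python. Everything here is an assumption for illustration only: the rectangle-based area model, all names (InputArea, determine_approached_area, etc.), and the reduction of the camera evaluation (steps 120/130') to a fingertip position already extracted from the electronic image EB. The patent does not prescribe any implementation:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class InputArea:
    """A first input area B1: a labeled axis-aligned rectangle on the
    input surface EF (hypothetical model)."""
    label: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1


def determine_approached_area(fingertip_xy: Tuple[float, float],
                              first_areas: List[InputArea]) -> Optional[InputArea]:
    """Step 130: determine the first input area B1' that the object OB
    is approaching, here by locating a fingertip position over an area."""
    x, y = fingertip_xy
    for area in first_areas:
        if area.contains(x, y):
            return area
    return None


def generate_input_information(approached: Optional[InputArea],
                               touch_detected: bool) -> Optional[str]:
    """Step 160: combine the determined first input area B1' with the
    touch detection (step 150) on the second input area B2 into the
    input information EI."""
    if touch_detected and approached is not None:
        return approached.label
    return None


areas = [InputArea("volume_up", 0, 0, 50, 50),
         InputArea("volume_down", 50, 0, 100, 50)]
approached = determine_approached_area((20.0, 10.0), areas)
print(generate_input_information(approached, touch_detected=True))  # volume_up
```

The key property of the claimed system is visible here: the touch sensor only reports that the second input area B2 (which spans several first areas) was touched, while the camera evaluation resolves *which* first input area the input belongs to.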
DE102014005064.5A 2014-04-05 2014-04-05 An input system and method for providing input information Withdrawn DE102014005064A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE102014005064.5A DE102014005064A1 (en) 2014-04-05 2014-04-05 An input system and method for providing input information

Publications (1)

Publication Number Publication Date
DE102014005064A1 true DE102014005064A1 (en) 2015-10-08

Family

ID=54146130

Country Status (1)

Country Link
DE (1) DE102014005064A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080211766A1 (en) 2007-01-07 2008-09-04 Apple Inc. Multitouch data fusion
US20130342459A1 (en) * 2012-06-20 2013-12-26 Amazon Technologies, Inc. Fingertip location for gesture input



Legal Events

Date Code Title Description
R012 Request for examination validly filed
R016 Response to examination communication
R016 Response to examination communication
R120 Application withdrawn or ip right abandoned