WO2014094699A1 - Method for operating a device that has a user interface with a touch sensor, and corresponding device - Google Patents

Method for operating a device that has a user interface with a touch sensor, and corresponding device

Info

Publication number
WO2014094699A1
WO2014094699A1 (PCT/DE2013/000605)
Authority
WO
WIPO (PCT)
Prior art keywords
touch sensor
data
user
keyboard
virtual
Prior art date
Application number
PCT/DE2013/000605
Other languages
German (de)
English (en)
Inventor
Jörg Edelmann
Philipp Mock
Andreas Schilling
Original Assignee
Eberhard Karls Universität Tübingen
Stiftung Medien in der Bildung
Priority date
Filing date
Publication date
Application filed by Eberhard Karls Universität Tübingen, Stiftung Medien in der Bildung filed Critical Eberhard Karls Universität Tübingen
Publication of WO2014094699A1 publication Critical patent/WO2014094699A1/fr


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • The invention relates to a method for operating a device that has a user interface with a touch sensor, according to the preamble of claim 1, as well as to a device with a user interface having a touch sensor, according to the preamble of claim 11.
  • A device having a user interface with a touch sensor may be, for example, a stationary or portable computer or a computer-controlled machine.
  • A touch sensor is an input device in which the program flow of the device can be controlled by touching certain areas of its surface.
  • a touch sensor may be combined with a screen.
  • A touch screen combines the electronic display of a screen with a touch sensor.
  • A touch screen is therefore a combined input and output device.
  • A controller connected to the touch screen generates the image contents of the touch screen and serves to receive and process input signals obtained by touching touch areas on the surface, e.g. generated by finger pressure.
  • The controller is typically configured so that, in at least one operating mode, a virtual keyboard with a plurality of virtual keys can be generated on the touch screen.
  • The term "keyboard" refers to an input device that contains, as controls, a number of keys to be pressed with the fingers. Known examples are typewriter keyboards with mechanical keys that move when pressed, thereby triggering input signals.
  • a “virtual keyboard” has “virtual keys”.
  • Virtual keys generated on a touch screen usually appear as electronically generated, button-like symbols (soft keys or icons). Virtual keys can also be created on a touch sensor by printing, drawing or other means.
  • The keyboard layout of a mechanical or virtual keyboard describes both the coding of the individual keys and their position and number on the keyboard.
  • Typing on a touchscreen is usually less accurate and more error-prone or slower than on a hardware keyboard with mechanical keys. Unlike with mechanical keyboards, the user receives no haptic feedback about the position of the fingers on the keyboard, because the edges of the keys are imperceptible and the finger position cannot be corrected without visual control. This makes fast typing with the ten-finger system difficult, and even experienced users often have to look at the virtual keys as they type. The result is lower writing speeds for text input and higher error rates.
  • WO 2012/048380 A1 describes a method by which a keyboard layout adapted to the laid-on fingers can be determined.
  • a keyboard model is created in which "home keys" of the individual fingers as well as linked keys are defined.
  • The keys ASDF and JKL represent home keys. When an accomplished keyboard user places the hands on the touch screen, the virtual keys move to the corresponding positions of the fingertips.
  • different keyboard layouts can be called up.
  • the virtual keyboard thus adapts to the natural finger positions of the respective user by changing the position of the individual virtual buttons.
  • US 2009/023736 A1 describes another possibility for automatically adapting the keyboard layout to the personal characteristics of a user.
  • the user places his hand on the touch screen so that one or more fingers and the palm of his hand touch the surface of the touch screen.
  • The keyboard layout is then determined at least in part from the distance between the detected contact point of the palm and the positions of the finger touches, and adapted accordingly to the user.
  • Further methods are described that adapt the keyboard layout of a virtual keyboard to the respective user in a learning process.
  • The computer system recognizes the typing pattern of a user and then changes the positions, sizes and/or orientations of the virtual keys in such a way that the user can work comfortably, whereby typing errors are to be reduced.
  • The patent application US 2011/0254772 A1 describes methods for detecting movements of one or more touches in the area of a virtual keyboard of a touch screen.
  • Certain characteristics of a touch, such as the intensity, the size of the contact surface or the resting angle, are used to perform actions in addition to text input with the virtual keyboard.
  • Gestures are interpreted from successive touches. These can also be created by a user himself and are then recognized by the system.
  • the gesture recognition allows, among other things, to distinguish between touch by means of a finger, a thumb or a palm.
  • the keyboard layout is not adapted to the user in this procedure.
  • Adaptation can be visible or not visible to the user.
  • additional geometrical characteristics of the touch such as the size and orientation of the contact zone, and movement of the touch during key press are evaluated to assign keystrokes to individual virtual keys. It has been shown that customization of the virtual keyboard to the user can increase the input speed.
  • the invention provides a method having the features of claim 1 and a device having the features of claim 11.
  • According to the method, a virtual keyboard with a plurality of virtual keys is generated on the touch sensor.
  • If the touch sensor is combined with a screen to form a touch screen, the virtual keys are generated in the form of electronically generated symbols on the screen. It is also possible to generate virtual keys in other ways, e.g. by optical projection onto a surface of the touch sensor, or in the form of permanent, invariable virtual keys, e.g. generated by printing or by laying an overlay on the surface of the touch sensor. An example of the latter is known as the "Touch Cover" for Microsoft tablet computers.
  • Each virtual key has a given position and shape.
  • user-specific touch sensor data is detected. These include information about individual peculiarities of the user in keyboard usage, such as the way certain virtual keys are hit by the user.
  • The user-specific touch sensor data is used to create a user-specific keyboard model.
  • The keyboard model is thus the result of a learning process.
  • Using the keyboard model, the response of the virtual keys of the same virtual keyboard, or of another corresponding virtual keyboard, is adapted to the user during use.
  • The keyboard model is thus used to classify keystrokes when a virtual keyboard is used.
  • The step of associating a typing action with the input triggered by it, i.e. selecting the virtual key that is considered to have been typed, is called "classification".
  • In the classification, a fingerprint at a particular location is assigned to a particular key from a plurality of virtual keys.
  • The hardware used for this purpose and the program components (software) active for this purpose form the classifier.
  • The term "keyboard model" is an alternative term for such a classifier.
  • the task of the keyboard model is, for example, to interpret a keystroke as an input of a specific letter.
  • The term "response" refers to the classification result, i.e. to the "answer" of the virtual keyboard to a finger press.
  • position data refers to data representing the position of the contact zone of a finger with the surface of the touch screen when a key is pressed.
  • "Fingerprint data" refers to data representing an image of the touch area between the finger and the surface of the touch sensor when a key is pressed by a finger.
  • The touch area covered by the image includes the immediate contact area between the finger and the surface, but may extend beyond it.
  • The imaged touch area may be, e.g., rectangular or otherwise polygonal.
  • The image is preferably two-dimensional, but in certain cases could also be one-dimensional. Instead of a single image per key press, the time course of the fingerprint data, i.e. a series of multiple images, can also be determined.
  • "Image" here refers to the raw data captured in the touch area, or to modified raw data derived from them.
  • the detected raw data may be e.g. represent the spatial distribution of light intensity in the area of the fingerprint.
  • the detected raw data may represent the local distribution of the measured capacitance.
  • Modified raw data are data that still contain this information essentially unadulterated, but usually with a reduced amount of data.
  • The modified raw data include, in particular, data obtained from the raw data by compression methods (e.g. JPEG compression), filtering (e.g. low-pass filtering) or methods of dimensionality reduction (e.g. principal component analysis or discriminant analysis).
  • Modified raw data can also be derived from the raw data by calibration or normalization.
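As an illustration of such a dimensionality-reduction step, the following sketch (in Python with NumPy; the image count and number of components are chosen arbitrarily and are not taken from the application) compresses flattened 42 × 42 fingerprint images into a few principal components:

```python
import numpy as np

def pca_reduce(fingerprints, n_components=32):
    """Reduce raw 42x42 fingerprint images to 'modified raw data'
    via principal component analysis (PCA)."""
    X = fingerprints.reshape(len(fingerprints), -1).astype(float)  # flatten to vectors
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal axes via SVD of the centred data (numerically stable)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    # Projected data plus the model needed to project new prints
    return (Xc @ components.T), mean, components

# Example: 100 synthetic "fingerprint" images, 42x42 grayscale
rng = np.random.default_rng(0)
prints = rng.random((100, 42, 42))
reduced, mean, comps = pca_reduce(prints, n_components=32)
print(reduced.shape)  # (100, 32): 1764 raw pixels compressed to 32 values
```

The projected vectors retain the dominant variation of the raw pixels with a much smaller amount of data, which is the sense of "modified raw data" used above.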
  • Modified raw data in the sense of this application are to be distinguished from data generated by evaluating raw data through feature extraction according to specific, predefined criteria.
  • In such methods, the raw data are first evaluated in terms of geometric criteria, such as the relative fingertip position, the relative fingerprint centre position or the movement direction of the fingertip, before the data processed according to these predefined criteria are fed to a classifier.
  • Here, in contrast, the fingerprint data are fed to the classifier without such pre-processing.
  • The full primary information of a fingerprint thus reaches the classifier unadulterated.
  • The claimed invention is based on the insight that, in general, it cannot be determined in advance which concrete information leads to a better discrimination of the fingerprints.
  • The learning algorithm separates the data based on the information contained in the raw data or modified raw data.
  • The criteria of separation need not always correspond to a describable property (e.g. size or orientation of a fingerprint assumed to be elliptical).
  • In this respect the classification works similarly to human information processing: there, too, it is not always possible to specify clear criteria by which a person distinguishes objects.
  • the keyboard model could have the effect of changing the visible keyboard layout as a result of customization. However, it is considered advantageous if the customization of the virtual keyboard to the individual user is imperceptible to the user.
  • Preferably the appearance of the virtual keyboard, i.e. the visible keyboard layout, remains unchanged, so that the user can orient himself by the visible positions of the virtual keys as needed.
  • The adaptation of the virtual keyboard to the user is then achieved solely by adapting the response of the virtual keys, not their visible layout.
  • Such a method differs from known methods, which adapt the size of the key likely to be pressed next using a language model, in that the adaptation of the response of the virtual keyboard is independent of the meaning of the information represented by the keyboard input.
  • One difference from methods that create a customized keyboard layout by placing fingers or other parts of the hand on the touch screen is that the virtual keyboard need not show any change visible to the user. Similar to a hardware keyboard, the virtual keyboard can have an appearance that is constant over time, so that the user is in a working situation familiar from mechanical keyboards. This does not distract the user and does not force the user to adapt to a changed keyboard layout. To some extent, it can also prevent the user's typing behavior from gradually deteriorating. Similar advantages also result over known methods in which the position and/or size of individual keys is adapted to the typing position of the user.
  • the position data is derived from the touch sensor data.
  • For example, the geometric centroid of a fingerprint, or a centroid weighted according to certain criteria, can be determined as the position of the associated keystroke and processed further.
  • Preferably, a support vector machine (SVM) is used to classify the fingerprint data. An SVM is essentially an algorithm for pattern recognition that can be implemented in a computer program and is capable of reliably assigning the recorded fingerprints, on the basis of the fingerprint data, to different classes or different virtual keys.
  • Other classification algorithms can also be used, e.g. neural networks, classification with Gaussian processes, or logistic regression.
  • these two pieces of information can also be used in combination in a classification method.
  • The use of fingerprint data requires that the touch sensor be configured such that, within the typical size of the contact zone of a finger with the surface, there lie a plurality of measuring points for a physical quantity that can be varied by finger contact, so that an image of the touch area can be generated from the spatially resolved measured values of that quantity.
  • the term "measuring point” here refers to a measuring area of small extent, which is not punctiform in the mathematical sense. The extent of a measuring point can e.g. in tenths of a millimeter range or less.
  • Preferably the touch sensor is an optical touch sensor, in which changes of the sensor properties in the respective contact region can be detected optically with spatial resolution, in the form of a two-dimensional image of picture elements (pixels). It may, for example, be a grayscale image in which measuring points that were exposed to greater pressure when the key was pressed receive a different gray level in the computer-aided image than measuring points under weaker pressure. With an optical touch sensor, a shadow in the immediate vicinity of the contact zone can also be detected if necessary.
  • The method can also be used with a capacitive touch sensor whose spatial resolution is high enough that several measuring points, whose capacitance can be determined independently of the capacitance of adjacent measuring points, lie within the typical size of a fingerprint.
  • The measuring points of such capacitive touch sensors are formed, e.g., by the crossing points of conductor tracks running perpendicular to one another in two staggered planes, which form a local capacitance at each crossing point whose magnitude changes under pressure load and/or finger touch.
  • The spatial density of measuring points in one or more mutually transverse directions parallel to the surface of the touch sensor should be at least 0.5/mm, in particular 1/mm or more. This achieves a sufficiently good spatial resolution.
  • In some embodiments, the function of the keyboard model may be described as associating with each virtual key a spatially delimited active area which, for one or more virtual keys, has a different position and/or shape than the area of the virtual key visible on the touch sensor.
  • the active area is usually larger than the visible area of the virtual button, so that even typing actions that are at the edge of the visible virtual button or even outside the visible area of the virtual button are correctly assigned (classified).
  • the active area may be laterally displaced from the visible area of the virtual button in a particular direction, which may be the case, for example, when a user taps regularly in a particular direction adjacent to the correct position of the button.
  • the active regions each have an irregular shape, so in particular are neither circular nor rectangular.
  • Preferably, the shape, size and location of the active areas effective for key presses are determined dynamically using the user-specific keyboard model. The generation of such active areas allows error-free input even with very imprecise keystrokes: the input desired by a keystroke also occurs when the assigned key is missed while typing.
  • the classification using fingerprint data offers advantages beyond just assigning keystrokes to specific virtual keys.
  • In some embodiments, a user classifier is generated that performs a user classification such that a keystroke is assigned to a particular user on the basis of the fingerprint data. Further actions of the device may then be controlled depending on the result of the user classification.
  • For example, a prompt for entering a password can be generated, and the subsequent keystrokes are analyzed using the user classifier not only as to whether the expected password (e.g. a predefined and stored sequence of letters, numbers and/or characters) is entered, but additionally also as to whether the password is entered by an authorized user.
  • Security settings may be set up to release certain or all other actions of the device only if the correct password is entered by an authorized user.
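The two-stage check described above can be sketched as follows (an illustration only; the function name, the classifier interface and the threshold are hypothetical and not taken from the application):

```python
def verify_login(typed_text, keystroke_prints, expected_password,
                 user_classifier, authorized_user, min_match=0.8):
    """Two-stage check: the typed text must equal the stored password,
    and the fingerprint data of the individual keystrokes must be
    attributed to the authorized user by a user classifier."""
    if typed_text != expected_password:
        return False
    # Classify each keystroke's fingerprint data to a user id
    votes = [user_classifier(fp) for fp in keystroke_prints]
    share = sum(v == authorized_user for v in votes) / len(votes)
    # Most keystrokes must be attributed to the authorized user
    return share >= min_match

# Toy stand-in classifier: maps a scalar "print" to a user id
toy_classifier = lambda fp: "alice" if fp > 0.5 else "bob"
print(verify_login("secret", [0.9, 0.8, 0.7], "secret", toy_classifier, "alice"))  # True
print(verify_login("secret", [0.1, 0.2, 0.9], "secret", toy_classifier, "alice"))  # False
```

In the second call the password text is correct, but the keystrokes are attributed to a different user, so access is refused, which is exactly the security setting described above.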
  • The invention also relates to a device with a user interface that has a touch sensor. The device is configured to carry out the method.
  • The touch sensor has, within the typical size of a contact zone of a finger with the surface of the touch sensor, a plurality of measuring points for a physical quantity that can be varied by finger contact.
  • A controller coupled to the touch sensor is configured to receive and process touch sensor data obtained by touching touch areas on the surface of the touch sensor.
  • The device is configured such that, in at least one operating mode, a virtual keyboard with a plurality of virtual keys can be created on the touch sensor.
  • In this operating mode, the above-described position data and fingerprint data are determined when acquiring touch sensor data for each key press by a finger.
  • The position data and fingerprint data are classified using a user-specific keyboard model. Each of the virtual keys is associated, using the user-specific keyboard model, with an active area of the touch sensor which, for one or more of the virtual keys, has a different position and/or shape than the area of the virtual key visible on the touch sensor; a touch at a touch location within an active area is interpreted and processed as a touch of the associated virtual key.
  • the invention can be implemented on different devices with touch sensor and appropriate control.
  • the ability to carry out embodiments of the invention may be implemented in the form of additional program parts or program modules in the control software of such controllers.
  • Another aspect of the present invention relates to a computer program product, stored in particular on a computer-readable medium or embodied as a signal, which, when loaded into the memory of a suitable computer and executed by it, causes the device controlled by the computer to carry out a method according to the invention or a preferred embodiment thereof.
  • Fig. 1 schematically shows components of an embodiment of a device whose user interface has an optical touch sensor;
  • Fig. 2 shows in Figs. 2A to 2D grayscale images of various fingerprints of a particular user on different virtual keys;
  • Fig. 3 shows in Fig. 3A two-dimensional normal distribution functions of key presses of a specific user on the virtual keyboard for determining a position model, and in Fig. 3B the active areas of the virtual keys determined on the basis of the position model;
  • Fig. 4 shows in Figs. 4A to 4D the effect of applying a keyboard model based on a combination of position data and fingerprint data;
  • Fig. 5 shows a diagram comparing error rates obtained with different keyboard models.
  • The embodiment uses an optical touch sensor which, in combination with a screen, forms an optical touch screen.
  • Virtual keyboards have the advantage over mechanical keyboards that they can be adapted at runtime, ie during use by a user.
  • With the SUR40, a complete grayscale image of the surface can be captured in the infrared light spectrum. For each touch on the touch sensor, a two-dimensional image of the brightness profile can be captured.
  • This information can be derived from the readings provided by the touch screen or the corresponding raw data. The differences are reflected in the raw data.
  • One method step serves to determine user-specific touch sensor data by recording the typing behavior of a user over a plurality of the user's typing actions on the virtual keyboard.
  • This training or training process usually takes place at least in part, possibly even completely, before the actual use phase.
  • The training process can also extend into the phase of normal use, so that the keyboard model is created or refined during normal use.
  • the teach-in process can be performed on the same touch sensor on which the subsequent intended use is made. It can also be another touch sensor, e.g. a substantially identical, construction-like or functionally identical variant.
  • the trained keyboard model can be present as a computer program product, so that a transfer from one device to another device is possible.
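Since the trained keyboard model is ultimately just data, transferring it between devices can be as simple as serializing and deserializing it. A minimal sketch (the model structure shown here, per-key means and covariances, is hypothetical):

```python
import pickle

# Hypothetical trained keyboard model: per-key 2D tap-position means
# and covariance matrices, as a position model might store them
keyboard_model = {
    "B": {"mean": (210.0, 340.0), "cov": ((25.0, 3.0), (3.0, 30.0))},
    "A": {"mean": (60.0, 300.0), "cov": ((20.0, -2.0), (-2.0, 28.0))},
}

blob = pickle.dumps(keyboard_model)   # serialized on the source device
restored = pickle.loads(blob)         # deserialized on the target device
print(restored["B"]["mean"])  # (210.0, 340.0)
```

In practice the serialized model would be written to a file or transmitted over a network; the essential point is only that the learned state is portable.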
  • For evaluation, the typing behavior of twelve subjects was recorded. Each user had to type a given text. This made it possible to simulate a "perfect" keyboard, in that even for inaccurate inputs the correct letter (i.e. the letter that should come next in the given text) was selected as the input. Based on these inputs, a user-specific keyboard model was determined for each user.
  • FIG. 1 shows a schematic section through the touch screen equipped with an optical touch sensor 100 in the area of the virtual keyboard 150 displayed on the screen.
  • A warp-resistant, plane-parallel acrylic glass pane 102 carries on its planar upper side a thin projection film 104, whose free surface 106 forms the surface of the touch sensor touched by the user.
  • an infrared-sensitive camera (IR camera) 120 is arranged at a distance behind the opposite rear side.
  • Infrared lamps 122 are used for uniform illumination of the acrylic glass pane.
  • the virtual buttons 140 of the virtual keyboard 150 were displayed on this touch screen.
  • The layout of the keyboard corresponded to a standard QWERTY keyboard with keys of about 19 mm edge length.
  • The virtual keyboard was generated by a computer program and projected from the back onto the surface of the touch screen by a projector 170 arranged inside the console.
  • The optical touch sensor works on the principle of frustrated total internal reflection (FTIR). In the area of the fingerprint, i.e. in the area of contact between the finger and the surface when a key is pressed, the total internal reflection at the surface is frustrated, so that infrared light is scattered out of the pane.
  • The intensity of the scattered light (represented by the length of the arrows) depends, among other things, on the local contact pressure: in the region of stronger contact pressure a higher scattered-light intensity results than in the edge region of lighter contact pressure (shorter arrows).
  • the infrared-sensitive camera was arranged so that it covered the surface in the area of the entire virtual keyboard from behind in its image field. For each touch, not only the position but also a two-dimensional grayscale image of the touch area in which the contact area 132 of the finger 130 lies on the surface is recorded.
  • the camera had a resolution of 640 * 480 pixels at 60 frames per second.
  • The image analysis was set up so that, for each fingerprint, a rectangular grayscale image of the touch area with 42 × 42 pixels around a touch centre in the area of the contact zone was captured.
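The patch extraction just described can be sketched as follows (a NumPy illustration; the function name and the zero-padding behaviour at the frame border are assumptions, not taken from the application):

```python
import numpy as np

def crop_fingerprint(frame, center, size=42):
    """Cut a size x size grayscale patch around a touch centre out of a
    camera frame, padding with zeros at the frame border."""
    h, w = frame.shape
    half = size // 2
    cy, cx = center
    patch = np.zeros((size, size), dtype=frame.dtype)
    y0, y1 = max(0, cy - half), min(h, cy - half + size)
    x0, x1 = max(0, cx - half), min(w, cx - half + size)
    patch[y0 - (cy - half):y1 - (cy - half),
          x0 - (cx - half):x1 - (cx - half)] = frame[y0:y1, x0:x1]
    return patch

frame = np.zeros((480, 640), dtype=np.uint8)   # 640x480 camera frame
frame[100:110, 200:210] = 255                  # bright contact zone
patch = crop_fingerprint(frame, (105, 205))
print(patch.shape)  # (42, 42)
```

Each such patch is one "fingerprint image" in the sense used above and would be stored together with the touch position and a timestamp.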
  • FIG. 2A to 2D show grayscale images of various fingerprints of a particular user on different virtual keys when writing the word "BANK.”
  • the gray scale image in Fig. 2A corresponds to the letter “B”
  • Fig. 2B represents the letter “A”
  • Fig. 2C represents the letter "N";
  • Fig. 2D represents the letter "K”.
  • For each keystroke, the position data and fingerprint data were saved.
  • The stored information included the location and a timestamp for each touch, as well as a reference to the associated fingerprint file.
  • a learning algorithm implemented in a computer was trained, which generated an individual keyboard model for each user.
  • The touch sensor data provided by the touch sensor, in the form of raw data, were used directly for training (learning) the classifier, without prior analysis for particular features, e.g. geometric features such as the shape or orientation of the fingerprint.
  • A combination of two classifiers was used to determine a reliable association between a keystroke of a particular user and the input intended by that keystroke.
  • One of the two is an image-based fingerprint classifier, which takes into account the image information (fingerprint data) derived from the fingerprints to classify key presses.
  • The position of a fingerprint or key press is an important feature for key classification and should be properly mapped into a keyboard model to be really useful.
  • An approach relying only on the boundaries of the on-screen keyboard yields only modest results in the inventors' experience, since keys are often not hit precisely and the actual touch positions for a key often deviate from the centre of the virtual key. It has been found that the spatial distribution of the various key presses of a person intended for a particular key can often be well described with a bivariate normal distribution.
  • Fig. 3A shows the normal distribution functions of keystrokes of a particular user on the virtual keyboard. It can be seen that the local distribution functions are not symmetrical for all virtual keys and that some keys are statistically hit much more precisely (steeper distribution) than others (shallower distribution). For further data processing, two-dimensional normal distributions offer the advantage of a compact parametric description.
  • each key can be assigned an active area which corresponds to the spatially limited area around a virtual key in which a keystroke is assigned to the respective key on the basis of the position model.
  • Figure 3B is a schematic representation of the complete rectangular keypad, which is divided into irregularly shaped active areas (e.g. active areas 301-304). Directly adjacent active areas border one another directly, i.e. without gaps, so that there are no places in the rectangular area of the virtual keyboard without a clear assignment to a virtual key. Tendentially, virtual keys hit precisely and with high repeatability by the respective user obtain a relatively small active area, while virtual keys hit with less precision and greater dispersion of the individual key presses receive a larger active area.
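The position model described above can be sketched as follows: one bivariate normal distribution is fitted per key from recorded tap positions, and the active areas arise implicitly as the regions in which that key's density is largest (a simplified illustration; the application may regularize or weight the fits differently):

```python
import numpy as np

def fit_position_model(samples_per_key):
    """Fit one bivariate normal (mean, covariance) per virtual key
    from recorded tap positions."""
    model = {}
    for key, pts in samples_per_key.items():
        pts = np.asarray(pts, dtype=float)
        model[key] = (pts.mean(axis=0), np.cov(pts.T))
    return model

def log_density(x, mean, cov):
    """Log of the (unnormalized-by-constant) bivariate normal density."""
    d = x - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d) - 0.5 * np.log(np.linalg.det(cov))

def classify_position(model, x):
    """The active area of a key is implicitly the region where its
    density is largest: classification is an argmax over keys."""
    x = np.asarray(x, dtype=float)
    return max(model, key=lambda k: log_density(x, *model[k]))

rng = np.random.default_rng(1)
samples = {
    "F": rng.normal((100, 200), 5, size=(50, 2)),  # taps aimed at "F"
    "J": rng.normal((160, 200), 5, size=(50, 2)),  # taps aimed at "J"
}
model = fit_position_model(samples)
print(classify_position(model, (102, 198)))  # "F"
print(classify_position(model, (158, 203)))  # "J"
```

Because each key's distribution has its own covariance, imprecisely hit keys automatically claim a larger surrounding region, matching the tendency described for Fig. 3B.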
  • An SVM is a classifier that divides a set of objects into classes in such a way that the widest possible region around the class boundaries remains free of objects.
  • An SVM therefore belongs to the class of so-called "large margin classifiers".
  • The starting point is a set of training objects, for each of which it is known to which class it belongs.
  • the training objects are given by the keystrokes during teaching, while the virtual keys form the classes to which the keystrokes are to be assigned.
  • Each object is represented by a vector in a vector space.
  • The SVM fits a hyperplane into this space that acts as a separating surface and divides the training objects into two classes, maximizing the distance of the vectors closest to the hyperplane. This wide margin around the class boundaries makes it possible to classify (assign) key presses that do not correspond exactly to the objects (key presses) recorded during training as reliably as possible.
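A minimal sketch of such an image-based key classifier, assuming scikit-learn is available; the raw pixel vectors are fed in directly, without feature extraction, as described above (the synthetic "fingerprints" below merely stand in for real sensor images):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_prints(center_row, n=40):
    """Synthetic 42x42 'fingerprints': a bright blob whose row position
    stands in for the differing finger poses on two keys."""
    imgs = rng.random((n, 42, 42)) * 0.1          # low-level sensor noise
    imgs[:, center_row - 4:center_row + 4, 17:25] += 0.9
    return imgs.reshape(n, -1)                    # flatten: raw pixels only

X = np.vstack([fake_prints(10), fake_prints(30)])
y = np.array(["F"] * 40 + ["J"] * 40)             # which virtual key was meant

clf = SVC(kernel="linear").fit(X, y)              # large-margin classifier
test = np.vstack([fake_prints(10, n=5), fake_prints(30, n=5)])
print(clf.score(test, ["F"] * 5 + ["J"] * 5))
```

On this easily separable toy data the classifier assigns all held-out prints correctly; real fingerprint data would of course be harder, which is where the wide margin matters.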
  • In the position model, f(x_pos; μ_k, Σ_k) is the normal density function with mean μ_k and covariance Σ_k of key k, evaluated at the coordinate x_pos, and K represents the total number of keys.
  • x_img represents the fingerprint data of a keystroke x.
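One plausible way to combine the two models is sketched below, under the assumption that the position density is simply multiplied with the image classifier's per-key probability (the exact combination rule of the application may differ):

```python
import numpy as np

def combined_classify(x_pos, img_probs, pos_model):
    """Combine position and fingerprint information: the density
    f(x_pos; mu_k, Sigma_k) of each key k is multiplied with the
    probability the image classifier assigns to key k."""
    scores = {}
    for key, (mean, cov) in pos_model.items():
        d = np.asarray(x_pos, float) - mean
        f = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / \
            (2 * np.pi * np.sqrt(np.linalg.det(cov)))
        scores[key] = f * img_probs[key]
    return max(scores, key=scores.get)

pos_model = {"B": (np.array([210.0, 340.0]), np.eye(2) * 400),
             "N": (np.array([250.0, 340.0]), np.eye(2) * 400)}
x_pos = (235.0, 340.0)            # tap lands nearer to "N" ...
img_probs = {"B": 0.9, "N": 0.1}  # ... but the "B" finger was recognized
print(combined_classify(x_pos, img_probs, pos_model))  # "B"
```

The tap position alone would favour "N", but the confident image classifier tips the decision to "B", which is exactly the dynamically shifted active area illustrated in Fig. 4.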
  • Fig. 3B shows a distribution of active regions that were determined for a particular user exclusively on the basis of an individual position model, i.e. only with the help of the position data.
  • If fingerprint data are additionally used for classification, active areas of different size and/or shape and/or position result dynamically during typing, since in addition to the position information the fingerprint data also carry information about how (e.g. with which finger) a user presses, or wants to press, a particular key.
  • Whenever a fingertip touches the surface of the touch sensor, the fingerprint data and the position data are determined.
  • the keyboard model recognizes from the fingerprint data to which key the fingerprint belongs. It does not need to be explicitly determined which finger is used. It only determines how similar the sensor information is.
  • a key e.g. the spacebar, can also be used with two fingers. The classifier would then correctly assign keystrokes of one finger and the other.
  • The combined information has the result that all finger touches occurring, in the way usual for the user, within the grayed-out active area 301' (marked "B") are interpreted as input of the letter "B".
  • The size and shape of the active area determined from the combination of position data and fingerprint data differ from the size and shape of the active area 301, which is based solely on a position model.
  • The additional fingerprint information may cause a particular finger touch to still be interpreted as a press of the "B" key even when it is relatively far from the center of the virtual key "B", provided it is made with the "right" finger, that is, with the finger with which the user usually presses the "B" key. The information about which fingers the user uses to operate which keys is derived from the fingerprint data that were detected and evaluated during the learning phase.
  • A corresponding dynamic definition of an active area also results for the subsequent key presses for the letters "A", "N" and "K"; their active areas 302', 303' and 304', generated dynamically during typing, are highlighted in gray in FIGS. 4B to 4D.
  • Neither the positions nor the shapes of the virtual keys change during learning or during intended use, so that the user-visible appearance of the keyboard layout remains unchanged, regardless of the position and shape of the active areas. This can promote high typing speeds at low error rates.
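One plausible way to realize the combination of position data and raw fingerprint data described above is to concatenate the touch position with the flattened raw sensor patch into a single feature vector for the classifier, so that it sees both where and how a key was pressed. The patch size and the peak normalization below are assumptions for illustration, not details taken from the patent.

```python
# Build a combined feature vector from a touch position and a raw
# fingerprint patch (a small 2-D grid of sensor intensities).
def make_feature_vector(x_pos, y_pos, fingerprint_img):
    flat = [v for row in fingerprint_img for v in row]
    peak = max(flat) or 1
    # Scale by the peak intensity so relative intensities are preserved;
    # this keeps the classification-relevant raw information intact.
    flat = [v / peak for v in flat]
    return [x_pos, y_pos] + flat

# Toy 3x3 sensor patch; a real patch would cover the region the touch
# sensor reports around the contact.
patch = [[0, 40, 10],
         [30, 90, 50],
         [5, 60, 20]]
vec = make_feature_vector(10.8, 5.1, patch)
print(len(vec))  # 2 position values + 9 pixel values
```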
  • The proposed method was developed and tested using real user data. First, typing behavior data were recorded from twelve users; the touchscreen displayed a classic QWERTY keyboard. Five models were then compared:
  • Model M1: a model based on the displayed keyboard
  • Model M2: a model based on individual tap positions
  • Model M3: a model based on the entire fingerprint data in the form of raw data
  • Model M4: a combined model of individual tap positions and fingerprint data
  • Model M5: for comparison with the method presented in Findlater et al., an additional model with the extracted geometric properties used there
  • The error rates ER are shown graphically in FIG. 5 as a function of the models M1 to M5.
  • The individually position-based models (models M2, M4 and M5) result in a significantly lower error rate compared to a static model corresponding to the keyboard actually displayed (model M1).
  • Model M4, in contrast to the other models, reduces the error rate further.
  • Model M5, which is based on individual tap positions and the extracted geometric properties used in Findlater et al. (these represent the size and orientation of the contact zone as well as the movement of the touch during a key press), does not lead to a significant improvement in the error rate compared to the model based on individual tap positions alone. It is currently believed that the a priori selection of certain geometric criteria causes information relevant to key classification to be lost during processing of the raw data provided by the touch sensor. The lost information can then not be taken into account in the classification. If, on the other hand, fingerprint data in the form of raw data are used directly for key classification, a significantly larger part of the information contained in the raw data, possibly all of the relevant information, can be used.
  • The method exemplified here for generating a virtual keyboard on a touchscreen can be used on touch sensors of various shapes and sizes.
  • The method is particularly advantageous for larger touch sensors that can be operated with multiple fingers.
  • Tablet computers such as the Apple iPad or tablet computers based on the Android operating system can be optimized with regard to the typing error rate by using the method.
  • A user can write notes quickly and without errors.
  • Tablet computers are usually used by only one person over a long time, so that the virtual keyboard can be trained over a longer period, whereby performance can be improved further.
  • A virtual keyboard of the type shown here can also be used as a replacement for mechanical keyboards. Particularly in medical environments or in clean rooms, the use of mechanical keyboards is problematic because they are difficult to clean.
  • Here, a touch sensor, possibly even without a screen, may be advantageous when using the method. By customizing the response of the virtual keys, text input can be improved for this scenario as well.
  • Capturing the typing behavior of a user from a multiplicity of typing actions on the virtual keyboard, i.e. the learning or training procedure for building the keyboard model, does not have to be limited to the time before actual use.
  • The keyboard model can also be optimized further during use.
  • The fingerprint data and the position data derived from them may also be recorded during normal use and used for creating or optimizing the keyboard model. This may be done, for example, by capturing and storing fingerprint data and position data while a text is being written; after the text has been completed, that is, when the user saves the text as error-free, the detected keystrokes are assigned to the correct letters or virtual keys.
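The buffering-and-retraining scheme just described might look like the following sketch; the class and method names are hypothetical, and `retrain` stands in for whatever classifier fitting (e.g. the SVM training) the keyboard model uses.

```python
# Hypothetical sketch: keystrokes captured during normal use are buffered,
# and once the user accepts a text as error-free, the buffered
# (features, key) pairs are added to the training set and the model refits.
class AdaptiveKeyboardModel:
    def __init__(self):
        self.training_set = []   # (feature_vector, key) pairs from teaching
        self.pending = []        # keystrokes captured during normal use

    def record(self, features, predicted_key):
        self.pending.append((features, predicted_key))

    def commit_text(self, confirmed_keys):
        # The user saved the text as error-free: assign each buffered
        # keystroke its confirmed key, then retrain on the grown set.
        for (features, _), key in zip(self.pending, confirmed_keys):
            self.training_set.append((features, key))
        self.pending.clear()
        self.retrain()

    def retrain(self):
        pass  # placeholder: refit the keyboard model / classifier here

model = AdaptiveKeyboardModel()
model.record([10.9, 5.0], "B")
model.record([12.1, 5.1], "N")
model.commit_text(["B", "N"])
print(len(model.training_set), len(model.pending))
```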
  • Position data and fingerprint data may thus be used both for creating the keyboard model and later for classification.
  • Further sensor information about each keystroke may also be determined and used in the classification, e.g. data from an acceleration sensor, which can, for example, detect the velocity of a keystroke.
  • In addition, a classifier can be generated which assigns keystrokes to a user based on the fingerprint data. This method can be used in particular to make password entries more secure: the computer system then not only expects the correct letter sequence, but also the fingerprint data of the touch sensor corresponding to the expected user.
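A hedged sketch of this password-hardening idea: the system accepts the password only if the characters match and a per-user classifier attributes most of the keystrokes' fingerprint data to the expected user. The classifier, the feature format, and the 0.8 acceptance threshold are illustrative assumptions, not details from the patent.

```python
# Accept a password only if the character sequence matches AND the
# fingerprint data of the keystrokes look like the expected user.
def verify_password(entered, expected, keystroke_features, user_classifier, user_id):
    if entered != expected:
        return False
    # Require that most keystrokes are attributed to the expected user.
    hits = sum(1 for f in keystroke_features if user_classifier(f) == user_id)
    return hits / len(keystroke_features) >= 0.8

# Toy per-user classifier: user "alice" is assumed to press with a high
# peak sensor intensity (a stand-in for a model trained on her data).
classifier = lambda f: "alice" if f["peak"] > 50 else "other"
features = [{"peak": 80}, {"peak": 75}, {"peak": 90}]
print(verify_password("secret", "secret", features, classifier, "alice"))
```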

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Input From Keyboards Or The Like (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a method for operating a device that has a user interface with a touch sensor. In this method, a virtual keyboard with virtual keys is generated on the touch sensor, and touch sensor data are generated by capturing the typing behavior of a user from a plurality of typing actions of the user on the virtual keyboard. Using the touch sensor data, a user-specific keyboard model is generated, which is used for classifying key presses when the virtual keyboard is used. When determining the touch sensor data for each key press of a finger, position data and fingerprint data are determined, the position data representing the position of a contact with the touch sensor surface during a key press, and the fingerprint data representing an image of the local sensor response to the contact with the touch sensor surface during a key press, in the form of raw touch sensor data or of modified raw data derived from the raw data in such a way that the classification-relevant information contained in the raw data is essentially preserved. The keyboard model is generated using the position data and the fingerprint data. This method makes it possible to reduce the frequency of input errors when using a virtual keyboard on devices that have a user interface with a touch sensor.
PCT/DE2013/000605 2012-10-19 2013-10-16 Procédé d'utilisation d'un dispositif qui présente une interface utilisateur comportant un détecteur de contact, et dispositif correspondant WO2014094699A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102012219129.1A DE102012219129B4 (de) 2012-10-19 2012-10-19 Verfahren zum Betreiben einer Vorrichtung, die eine Benutzerschnittstelle mit einem Berührungssensor aufweist, sowie entsprechende Vorrichtung
DE102012219129.1 2012-10-19

Publications (1)

Publication Number Publication Date
WO2014094699A1 true WO2014094699A1 (fr) 2014-06-26

Family

ID=49724928

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DE2013/000605 WO2014094699A1 (fr) 2012-10-19 2013-10-16 Procédé d'utilisation d'un dispositif qui présente une interface utilisateur comportant un détecteur de contact, et dispositif correspondant

Country Status (2)

Country Link
DE (1) DE102012219129B4 (fr)
WO (1) WO2014094699A1 (fr)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537365A (zh) * 2015-01-07 2015-04-22 小米科技有限责任公司 触摸按键和指纹识别实现方法、装置及终端设备
CN104035722B (zh) * 2014-07-03 2017-03-29 南昌欧菲生物识别技术有限公司 移动终端及其防止虚拟按键误操作的方法
US10241621B2 (en) 2014-09-30 2019-03-26 Hewlett-Packard Development Company, L.P. Determining unintended touch rejection
US11157621B1 (en) * 2018-12-06 2021-10-26 NortonLifeLock Inc. Systems and methods to detect and prevent auto-click attacks

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114363049A (zh) * 2021-12-30 2022-04-15 武汉杰创达科技有限公司 基于个性化交互差异的物联设备多id识别方法

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070247442A1 (en) * 2004-07-30 2007-10-25 Andre Bartley K Activating virtual keys of a touch-screen virtual keyboard
US20090023736A1 (en) 2007-04-30 2009-01-22 Synta Pharmaceuticals Corp. Compounds for treating proliferative disorders
US20100259561A1 (en) 2009-04-10 2010-10-14 Qualcomm Incorporated Virtual keypad generator with learning capabilities
KR20110031808A (ko) * 2009-09-21 2011-03-29 삼성전자주식회사 터치 스크린을 갖는 이동 단말기에서 터치 처리 장치 및 방법
US20110179374A1 (en) * 2010-01-20 2011-07-21 Sony Corporation Information processing apparatus and program
US20110254772A1 (en) 2009-12-18 2011-10-20 Adamson Peter S Techniques for recognizing movement of one or more touches across a location on a keyboard grid on a touch panel interface
WO2012048380A1 (fr) 2010-10-14 2012-04-19 University Of Technology, Sydney Clavier virtuel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8217910B2 (en) 2008-12-19 2012-07-10 Verizon Patent And Licensing Inc. Morphing touch screen layout

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070247442A1 (en) * 2004-07-30 2007-10-25 Andre Bartley K Activating virtual keys of a touch-screen virtual keyboard
US20090023736A1 (en) 2007-04-30 2009-01-22 Synta Pharmaceuticals Corp. Compounds for treating proliferative disorders
US20100259561A1 (en) 2009-04-10 2010-10-14 Qualcomm Incorporated Virtual keypad generator with learning capabilities
KR20110031808A (ko) * 2009-09-21 2011-03-29 삼성전자주식회사 터치 스크린을 갖는 이동 단말기에서 터치 처리 장치 및 방법
US20110254772A1 (en) 2009-12-18 2011-10-20 Adamson Peter S Techniques for recognizing movement of one or more touches across a location on a keyboard grid on a touch panel interface
US20110179374A1 (en) * 2010-01-20 2011-07-21 Sony Corporation Information processing apparatus and program
WO2012048380A1 (fr) 2010-10-14 2012-04-19 University Of Technology, Sydney Clavier virtuel

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FINDLATER, L.; WOBBROCK, J. O.: "Proc. CHI", vol. 12, 2012, ACM PRESS, article "Personalized Input: Improving Ten-Finger Touchscreen Typing through Automatic Adaptation", pages: 815 - 824
PLATT, J. C.: "Advances in Large Margin Classifiers", 1999, MIT PRESS, article "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", pages: 61 - 74
LEAH FINDLATER ET AL: "Personalized Input: Improving Ten-Finger Touchscreen Typing through Automatic Adaptation ACM Classification Keywords", 10 May 2012 (2012-05-10), XP055103478, Retrieved from the Internet <URL:http://terpconnect.umd.edu/~leahkf/pubs/CHI2012-findlater-PersonalizedTyping.pdf> [retrieved on 20140220] *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104035722B (zh) * 2014-07-03 2017-03-29 南昌欧菲生物识别技术有限公司 移动终端及其防止虚拟按键误操作的方法
US10241621B2 (en) 2014-09-30 2019-03-26 Hewlett-Packard Development Company, L.P. Determining unintended touch rejection
US10599267B2 (en) 2014-09-30 2020-03-24 Hewlett-Packard Development Company, L.P. Determining unintended touch rejection
CN104537365A (zh) * 2015-01-07 2015-04-22 小米科技有限责任公司 触摸按键和指纹识别实现方法、装置及终端设备
US11157621B1 (en) * 2018-12-06 2021-10-26 NortonLifeLock Inc. Systems and methods to detect and prevent auto-click attacks

Also Published As

Publication number Publication date
DE102012219129A1 (de) 2014-04-24
DE102012219129B4 (de) 2019-07-11

Similar Documents

Publication Publication Date Title
DE69718259T2 (de) Auswahlvorrichtung für berührungsaktive Anzeige
DE69204045T2 (de) Verfahren und Vorrichtung zum optischen Eingang von Befehlen oder Daten.
DE69715314T2 (de) Virtuelle Hinweisanordnung für Berührungsbildschirme
DE102012109058B4 (de) Steuerverfahren und elektronische Einrichtung
DE102014117345B4 (de) Kontaktsignatursteuerung eines Geräts
DE102013111978B4 (de) Identifikation und Verwendung von Gesten in der Nähe eines Sensors
DE112017007804B4 (de) Mobiles Endgerät und Hochfrequenz-Fingerabdruck-Identifikationsvorrichtung und -verfahren dafür
DE112014000441T5 (de) Dynamische Benutzerinteraktionen für Displaysteuerung und Angepaßte Gesten Interpretation
DE102012219129B4 (de) Verfahren zum Betreiben einer Vorrichtung, die eine Benutzerschnittstelle mit einem Berührungssensor aufweist, sowie entsprechende Vorrichtung
DE102018100809A1 (de) Verfahren, vorrichtung und endgerät zum anzeigen einer virtuellen tastatur
DE202005021427U1 (de) Elektronische Vorrichtung mit berührungsempfindlicher Eingabeeinrichtung
DE112013004437B4 (de) Verfahren zum Definieren einer Eingebetaste auf einer Tastatur und Verfahren zur Interpretation von Tastenanschlägen
WO2003054680A2 (fr) Introduction flexible de donnees sur ordinateur
DE102006039767A1 (de) Eingabevorrichtung
DE102010036906A1 (de) Konfigurierbares Pie-Menü
EP2137599A1 (fr) Dispositif de mesure de pression et procédé correspondant
DE112010002760T5 (de) Benutzerschnittstelle
DE102015116477A1 (de) Datenverarbeitungsverfahren und Elektronikgerät
DE102015016443A1 (de) Berühungseingabeeinrichtung und Fahrzeug mit dieser
WO2004034241A2 (fr) Dispositif de saisie rapide
DE102016105816A1 (de) Apparatus and method for verifying an identity of a user
DE102018122896A1 (de) Verfahren und Vorrichtung zur Kontrolle eines mobilen Eingabegeräts
EP2977855B1 (fr) Clavier virtuel et procédé de saisie pour un clavier virtuel
EP3990301B1 (fr) Unité de commande comportant une surface de commande tactile
DE102014224599A1 (de) Verfahren zum Betreiben einer Eingabevorrichtung, Eingabevorrichtung

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13801981

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 13801981

Country of ref document: EP

Kind code of ref document: A1