US20220374099A1 - Artificial intelligence model for enhancing a touch driver operation
- Publication number: US20220374099A1 (application US 17/323,757)
- Authority: United States (US)
- Prior art keywords
- touch input
- touch
- user
- run
- parameters
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
- G06F3/0418—Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
- G06F3/04166—Details of scanning methods, e.g. sampling time, grouping of sub areas or time sharing with display driving
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G06N3/0445—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/044—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
Definitions
- Computing devices with touch-sensitive displays enable a user to interact with the computer via touch from a stylus or a digit.
- The touch-sensitive displays are configured for universal use, regardless of user characteristics such as handedness, writing pressure, writing speed, pen tilt, and the like.
- The computing system includes a touch-sensitive display and one or more processors.
- The touch-sensitive display is configured to detect a run-time touch input from a user.
- The one or more processors are configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model.
- The touch driver is configured to process the run-time touch input based at least in part on a plurality of calibration parameters and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event.
- The artificial intelligence model is configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model is configured to output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
- The artificial intelligence model is trained in an initial training phase with a training data set that includes a plurality of training data pairs.
- Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
- Training phase touch input parameters from a plurality of users may be interpreted by a clustering algorithm to identify a plurality of user clusters, and each user cluster may define a cohort of users.
- The training data set used to train the artificial intelligence model may be created from training data examples derived from a cohort of users.
- The artificial intelligence model is trained in a feedback training phase.
- The artificial intelligence model may collect user feedback via an implicit or explicit user feedback interface and perform feedback training based at least in part on the user feedback.
- The artificial intelligence model may be further configured to adjust internal weights to enhance one or more calibration parameters via a backpropagation algorithm.
- The artificial intelligence model includes one or more neural networks.
- A first neural network of the one or more neural networks may be configured to output an X coordinate offset and a Y coordinate offset of the touch input.
- A second neural network of the one or more neural networks may be configured to output a tilt offset and an azimuth offset of the touch input.
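The two-network arrangement summarized above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the layer sizes, the six-feature input, and the random data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """One fully connected hidden layer (tanh) with a linear output layer."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"]

# First network: touch-input features -> (X offset, Y offset).
xy_net = make_mlp(n_in=6, n_hidden=16, n_out=2)
# Second network: stylus features -> (tilt offset, azimuth offset).
tilt_net = make_mlp(n_in=6, n_hidden=16, n_out=2)

features = rng.normal(size=6)           # stand-in touch input parameters
x_off, y_off = forward(xy_net, features)
tilt_off, azim_off = forward(tilt_net, features)
```

In practice the two heads could also share one trunk; the patent only requires that the coordinate offsets and the tilt/azimuth offsets be produced by separate networks.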
- FIG. 1 is a general schematic diagram of a computing system for enhancing a touch driver operation according to an embodiment of the present disclosure.
- FIG. 2 is a detailed schematic diagram of the computing system of FIG. 1.
- FIG. 3 is an illustration of user clusters identified via a clustering algorithm of the computing system of FIG. 1.
- FIG. 4 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during an initial training phase.
- FIG. 5 is a schematic diagram of a template profile classifier of the computing system of FIG. 1 during a run-time phase.
- FIG. 6 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during initial training and feedback training phases.
- FIGS. 7 and 8 are illustrations of a graphical user interface for touch input calibration of the computing system of FIG. 1.
- FIG. 9 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during a run-time phase.
- FIG. 10 shows the extraction of features from run-time touch input parameters of the computing system of FIG. 1.
- FIG. 11 is a flowchart of a method for enhancing a touch driver operation according to one example configuration of the present disclosure.
- FIG. 12 is a flowchart of a method for training a touch driver enhancer artificial intelligence model according to one example configuration of the present disclosure.
- FIG. 13 is an example computing system according to one implementation of the present disclosure.
- Human-computer interactions (HCI)
- Recognizing user-specific touch input parameters at run-time would additionally facilitate calibrating the touch input according to different users of the same computing device, resulting in a smooth transition between users and/or modes of touch input.
- A computing system with an artificial intelligence model configured to enhance a touch driver operation according to a user's profile and preferences would increase the user's comfort and enhance productivity, both during touch input from a digit or hand of the user and during touch input from a stylus.
- Heretofore, challenges have existed in the development of such a system.
- The computing system 100 includes a computing device 10, which may, for example, take the form of a desktop computing device, laptop computing device, or a mobile computing device, such as a tablet. In another example, the computing device 10 may take other suitable forms, such as a smart phone device, a wrist mounted computing device, or the like.
- The computing device 10 includes a touch-sensitive display 12 configured to detect touch input from a digit or stylus of a user, as described in detail below. It will be appreciated that “touch input” as used herein refers both to touch input from a digit or hand of a user, as well as touch input from a stylus, unless otherwise specified.
- A touch driver 14 of the touch-sensitive display 12 is stored in an associated memory 16 and implemented by one or more processors 18.
- The touch driver 14 is configured to process the run-time touch input 20 based at least in part on a plurality of calibration parameters 22.
- The calibration parameters 22 may include, for example, X coordinate and Y coordinate offsets based at least in part on the sensed position of the detected touch input 20 and/or tilt and azimuth offsets based at least in part on a sensed thickness of the detected touch input 20.
- Upon processing the run-time touch input 20, the touch driver 14 outputs a touch input event 24 and a plurality of run-time touch input parameters 26.
- The touch input event may be passed to a program 42, which in turn generates display output 44 based thereon.
- The run-time touch input parameters 26 may be stored in memory as they are gathered over a period of time based at least in part on a series of run-time touch inputs, and may be characteristics associated with each touch input event 24, such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth.
- The memory 16 further includes an artificial intelligence model that may be implemented as a touch driver enhancer artificial intelligence model 28.
- The touch driver enhancer artificial intelligence model 28 is configured to receive the run-time touch input parameters 26 and, in response, output a personalized user touch driver profile 30 that is enhanced according to the processed touch input 20 and includes a plurality of updated calibration parameters 32 for the touch driver 14.
- The touch driver enhancer artificial intelligence model 28 is configured to optimize the touch driver profile 30 by computing updated calibration parameters 32 that optimize the performance of the touch driver 14.
- The touch driver enhancer artificial intelligence model 28 may instead be configured to output updated calibration parameters 32 that are an improvement over the original calibration parameters 22, but are not necessarily optimized.
- The touch input 20 may be performed with a stylus 20A or one or more digits 20B of the user.
- The touch-sensitive display 12 may be configured as a capacitive touch screen equipped with a plurality of conductive layers.
- The touch-sensitive display 12 may further include a digitizer layer that enables the computing device 10 to be used with an active stylus 20A, such as a digital pen.
- The touch input 20 may be detected via sensors 34, such as touch sensors that detect stimulation of an electrostatic field of the touch-sensitive display 12 and/or sensors in the digitizer layer.
- The touch input 20 is processed by the touch driver 14 based at least in part on calibration parameters 22 associated with the touch input 20.
- The touch driver 14 then outputs touch data 36 including the touch input event 24 and a plurality of run-time touch input parameters 26 associated with the touch input event 24.
- The touch input event 24 is transferred to an input stack 38, and an input handler 40 sends it to an application program 42.
- The application program 42 is configured to convert data from the touch input event 24 into a display output 44 that is sent to a display driver 46 to be displayed to the user on the touch-sensitive display 12.
- The application program 42 further includes a feedback module 48 configured to communicate data from the touch input event 24 associated with the touch input 20 to the touch driver enhancer artificial intelligence model 28, where it may be used to enhance the touch driver 14 according to characteristics of the user's touch input 20.
- The touch input parameters 26 are recorded in a log 50 and sent to a profile module 52 that includes the touch driver enhancer artificial intelligence model 28.
- The touch driver enhancer artificial intelligence model 28 outputs a personalized user touch driver profile 30, including updated calibration parameters 32, based at least in part on the run-time touch input parameters 26 associated with the user's touch input 20.
- The updated calibration parameters 32 are sent to the touch driver 14 such that the touch driver 14 may be updated with the personalized user touch driver profile 30.
- The profile module 52 further includes a template profile classifier 54 that is configured to receive the run-time touch input parameters 26, classify them according to a plurality of user template profiles, and output a template profile 56 for the user performing the touch input 20.
- Calibration parameters 22 and touch input parameters 26 associated with the touch input 20 are additionally collected as crowd data 58 and interpreted with a clustering algorithm 60 to identify user clusters 62 that each define a cohort of users.
- Template profiles are created for each cohort of users, with each template profile including calibration parameters 22 associated with a respective cohort. Examples of template profiles 56A, 56B, 56C, each including respective calibration parameters 22A, 22B, 22C, are shown in FIG. 2.
- A plurality of template profiles 56n are communicated to the template profile classifier 54, which determines a template profile 56 for the touch driver 14 based at least in part on the touch input parameters 26 of the user's touch input 20.
- The template profile 56 provides a starting point for the calibration parameters 22 used by the touch driver 14 to process the run-time touch input 20.
- The touch driver enhancer artificial intelligence model 28 is configured to receive touch input parameters 26 from the touch driver 14 and output updated calibration parameters 32, included in the personalized user touch driver profile 30, to enhance the user experience during the human-computer interaction.
- Touch input parameters 26 associated with the touch input 20 may be collected as crowd data 58 from a plurality of users at run-time.
- The crowd data 58 may be sent to a storage device in the cloud, where it is analyzed and used for touch driver 14 updates. Additionally or alternatively, the crowd data 58 may be collected from a plurality of users to curate training phase touch input parameters 66 that are used in an initial training phase of the touch driver enhancer artificial intelligence model 28.
- The one or more processors 18 are configured to receive training phase touch input parameters 66 associated with touch input from a plurality of users and instruct the clustering algorithm 60 to interpret the training phase touch input parameters 66.
- A plurality of user clusters 62 are identified, with each user cluster 62 defining a cohort of users having similar training phase touch input parameters 66.
- The cohorts may be clustered according to similarities detected in digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and/or azimuth.
- The clustering of cohorts of users having similar characteristics and the resulting plurality of template profiles 56n can be used to initially create the template profile 56 for the user, thereby reducing the number of touch inputs and time required to calibrate the touch driver 14 and create the personalized touch driver profile 30.
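As a concrete illustration of how a clustering algorithm such as element 60 might group crowd data into user cohorts, the sketch below runs a plain k-means over two assumed touch parameters (stroke speed and pressure). The algorithm choice, the two features, the cluster count, and the synthetic data are all assumptions for the example, not details from the disclosure.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns (cluster centers, per-point labels)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each user to the nearest cohort center.
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        # Move each center to the mean of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Synthetic crowd data: a slow/light cohort and a fast/heavy cohort.
rng = np.random.default_rng(1)
slow = rng.normal([0.3, 0.2], 0.05, (20, 2))
fast = rng.normal([0.8, 0.7], 0.05, (20, 2))
users = np.vstack([slow, fast])

centers, labels = kmeans(users, k=2)   # each label is a user cluster 62
```

Each resulting cluster would then get its own template profile of calibration parameters, as described above.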
- FIG. 4 shows a schematic diagram of the touch driver enhancer artificial intelligence model 28 during an initial training phase.
- The touch driver enhancer artificial intelligence model 28 is trained with a training data set 68 including a plurality of training data pairs 70 derived from a cohort of users, i.e., a user cluster 62, identified via the clustering algorithm 60.
- Each training data pair 70 includes training phase touch input parameters 66 and ground truth output 72.
- The touch driver enhancer artificial intelligence model 28 further includes one or more neural networks 64 that may have an input layer, one or more convolutional layers, one or more fully connected hidden layers, and an output layer. It will be appreciated that the one or more neural networks 64 may be configured to include one or more convolutional layers, one or more fully connected hidden layers, or both.
- The input layer includes a plurality of nodes corresponding to the training phase touch input parameters 66, which indicate a characteristic of the training phase touch input 68.
- The output layer includes a plurality of output nodes corresponding to the ground truth output 72, which indicate training phase calibration parameters 74 for the training data pair 70.
- The touch driver enhancer artificial intelligence model 28 is configured to adjust internal weights to enhance one or more of the plurality of calibration parameters 74 via a backpropagation algorithm according to a loss function during training, to increase the accuracy of the output nodes during run-time.
- The touch driver enhancer artificial intelligence model 28 may include a first neural network 64A configured to output an X coordinate offset and a Y coordinate offset of the touch input 20.
- The touch driver enhancer artificial intelligence model 28 may further include a second neural network 64B configured to output a tilt offset and an azimuth offset, which together define an angle of the stylus 20A that contributes to a thickness of the touch input 20.
- The touch driver enhancer artificial intelligence model 28 may include a neural network configured to output offsets related to hand pose.
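A minimal sketch of the initial training phase described above: training pairs map touch-input parameters to ground-truth calibration offsets, and the weights are adjusted by backpropagation of a loss. The single hidden layer, the mean-squared-error loss, the learning rate, and the synthetic pairs are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2

# One fully connected hidden layer (tanh) and a linear output layer.
W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

# Synthetic training pairs 70: touch parameters -> ground-truth offsets.
X = rng.normal(size=(64, n_in))
Y = 0.5 * X[:, :2]                        # an easily learnable target

lr = 0.2
for _ in range(500):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    P = H @ W2 + b2                       # predicted calibration offsets
    G = 2.0 * (P - Y) / len(X)            # dLoss/dP for the MSE loss
    GH = (G @ W2.T) * (1.0 - H**2)        # backpropagate through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

loss = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

The gradient through the hidden layer is computed before the output weights are updated, so each step uses a consistent set of weights.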
- A schematic diagram of the template profile classifier 54 during a run-time phase is shown in FIG. 5.
- The template profile classifier 54 receives run-time touch input parameters 26, classifies them according to a plurality of user template profiles 56n based at least in part on cohorts of users, and outputs a template profile 56 for the user performing the touch input 20.
- The template profile classifier 54 is configured to perform cluster analysis on the run-time touch input parameters 26 to determine the cohort of users with which the user is associated.
- The template profile classifier 54 then outputs the template profile created for the determined cohort and sets the calibration parameters 22 for the personalized user touch driver profile 30 according to the calibration parameters 22 included in the template profile 56 associated with the determined cohort of users.
- The calibration parameters 22 used by the touch driver 14 to process the touch input 20 are thus based at least in part on template profiles 56n of cohorts of users having characteristics similar to those of the user.
- For example, the template profile classifier 54 may determine that the user's touch input is similar to that of cohorts of users that are right-handed and have a moderate stroke speed, and the calibration parameters 22 are then set according to template profiles created for user cohorts having these characteristics.
- The template profile 56 for the user may be initially set using a plurality of template profiles 56n having characteristics similar to those of the user, thereby reducing the time and effort required to calibrate the touch driver 14.
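The run-time behavior of the template profile classifier can be illustrated as a nearest-cohort lookup. The cohort centers, the two features, and the template calibration values below are invented for the example; the disclosure only specifies that the user's parameters are matched to a cohort and that cohort's template calibration becomes the starting profile.

```python
import math

TEMPLATES = {
    # hypothetical cohort center (stroke_speed, pressure) -> template
    # calibration parameters for that cohort
    (0.3, 0.2): {"x_offset": 1.5, "y_offset": -0.5, "tilt_offset": 2.0},
    (0.8, 0.7): {"x_offset": -0.8, "y_offset": 0.3, "tilt_offset": -1.0},
}

def classify_template(params):
    """Return the template profile of the nearest cohort center."""
    center = min(TEMPLATES, key=lambda c: math.dist(c, params))
    return TEMPLATES[center]

# A fast, heavy writer lands in the second cohort.
profile = classify_template((0.75, 0.65))
```

The returned template then seeds the personalized profile, which is further fine-tuned by the feedback training described next.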
- The personalized user touch driver profile 30 continues to be fine-tuned with the touch driver enhancer artificial intelligence model 28 based at least in part on the touch input 20 provided by the user in a feedback training phase.
- Turning to FIG. 6, a schematic diagram of the touch driver enhancer artificial intelligence model 28 during initial training and feedback training phases is illustrated. It will be appreciated that the initial training phase, indicated in FIG. 6 with solid lines, corresponds to the initial training phase described above with reference to FIG. 4. The feedback training phase is indicated in FIG. 6 with dashed lines.
- The feedback module 48 is configured to communicate the run-time touch input parameters 26 associated with the touch input 20 to the touch driver enhancer artificial intelligence model 28 to enhance the touch driver 14 according to characteristics of the user's touch input 20.
- The feedback module 48 may further include a user feedback interface 76 that is configured to collect user feedback 78 that can be used in the feedback training phase.
- The user feedback 78 may be implicit, such as the user frequently erasing or undoing the touch input 20, or explicit, such as performing touch input calibration exercises and indicating whether the touch input 20 is accurately depicted by the display output 44 shown on the touch-sensitive display 12.
- The user feedback interface 76 may be configured as one or both of an implicit user feedback interface and an explicit user feedback interface.
- The user feedback 78 and run-time touch input 20 form a feedback pair 80 with which the touch driver enhancer artificial intelligence model 28 performs feedback training.
- The touch driver enhancer artificial intelligence model 28 is configured to adjust internal weights to enhance one or more of the plurality of calibration parameters 22 via a backpropagation algorithm according to a loss function during the feedback training phase.
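One way implicit feedback might be turned into feedback pairs 80 is sketched below: strokes followed by repeated undo events are treated as evidence of miscalibration, and the user's redone position serves as a correction target. The event fields, the undo threshold, and the pairing rule are hypothetical stand-ins for the unspecified implicit feedback interface.

```python
def feedback_pairs(events, undo_threshold=2):
    """Return (touch parameters, correction) pairs from a stroke log."""
    pairs = []
    for stroke in events:
        if stroke["undos"] >= undo_threshold:
            # Treat the redone position as the ground-truth target.
            correction = (stroke["redo_x"] - stroke["x"],
                          stroke["redo_y"] - stroke["y"])
            pairs.append((stroke["params"], correction))
    return pairs

# Hypothetical log: the first stroke was undone three times and redone
# slightly to the right; the second stroke was accepted as-is.
log = [
    {"x": 10.0, "y": 20.0, "redo_x": 11.2, "redo_y": 19.4,
     "undos": 3, "params": (0.6, 0.5)},
    {"x": 30.0, "y": 5.0, "redo_x": 30.0, "redo_y": 5.0,
     "undos": 0, "params": (0.6, 0.5)},
]

pairs = feedback_pairs(log)   # only the undone stroke yields a pair
```

Each pair could then drive one backpropagation step of the feedback training phase.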
- FIGS. 7 and 8 illustrate examples of an explicit user feedback interface 76 , which may be presented during a calibration phase and configured as a touch input calibration graphical user interface (GUI) 82 .
- The computing device 10 may be configured to solicit user feedback 78 in conditions such as configuring a new computing device for touch input from a digit or stylus, configuring or updating a user profile, or upon pairing a stylus to the computing device, for example. Additionally or alternatively, the user may access the touch input calibration GUI 82 to provide explicit user feedback 78 when the touch input 20 is not calibrated to their satisfaction.
- The calibration phase may be implemented to assess X coordinate and Y coordinate offsets between the touch input 20 and the display output 44.
- The touch input calibration GUI 82 may be configured to present the user with a calibration graphic 84A to be traced such that the touch input 20 can be processed, and a position of the corresponding display output 44A can be compared to the calibration graphic 84A for accuracy.
- The calibration graphic 84A is depicted with a circle, as indicated by the dashed line.
- The user performs touch input 20 to trace the calibration graphic 84A, and the touch input 20 is processed and displayed as the display output 44A, indicated by the solid line.
- The user may additionally provide explicit user feedback 78 based at least in part on how accurately the display output 44 shown on the touch-sensitive display 12 matches the calibration graphic 84A.
- FIG. 8 shows an example implementation of the calibration phase for assessing tilt and azimuth offsets between the touch input 20 and the display output 44 .
- The touch input 20 is performed with the stylus 20A in this implementation of the calibration phase.
- The touch input calibration GUI 82 may be configured to present the user with a calibration graphic 84B such that a thickness of the corresponding display output 44B can be processed and compared to the calibration graphic 84B for accuracy.
- The calibration graphic 84B is depicted as a dashed line increasing in thickness from left to right.
- The user performs touch input 20 to match the increasing thickness of the calibration graphic 84B, and the touch input 20 is processed and displayed as the display output 44B, indicated by the solid line above the calibration graphic 84B.
- The user may additionally provide explicit user feedback 78 based at least in part on how accurately the display output 44B shown on the touch-sensitive display 12 matches the calibration graphic 84B.
- The calibration graphic 84 displayed on the touch input calibration GUI 82 during the calibration phase may be configured to assess X coordinate and Y coordinate offsets concurrently with tilt and azimuth offsets.
- The explicit user feedback 78 may be processed by the feedback module 48 and sent to the touch driver enhancer artificial intelligence model 28, which outputs the personalized user touch driver profile 30 enhanced according to the explicit user feedback 78, including a plurality of updated calibration parameters 32 for the touch driver 14.
- The touch driver enhancer artificial intelligence model 28 may include the first neural network 64A configured to output an X coordinate offset and a Y coordinate offset of the touch input 20, and the second neural network 64B configured to output a tilt offset and an azimuth offset.
- Each neural network has an input layer, one or more fully connected hidden layers, and an output layer.
- Upon processing the run-time touch input 20, the touch driver 14 is configured to output the touch input event 24 and a plurality of run-time touch input parameters 26 indicating characteristics associated with the touch input event 24.
- The touch input parameters 26 are recorded in the log 50 as raw input values for user touch input 20 on the touch-sensitive display 12.
- Feature extractors 86A, 86B are configured to extract respective touch input features 88A, 88B from raw input values of the touch input parameters 26, and the extracted features 88A, 88B comprise the plurality of nodes included in the input layers for the respective neural networks 64A, 64B.
- The touch driver enhancer artificial intelligence model 28 then outputs a personalized user touch driver profile 30, including updated calibration parameters according to the X coordinate and Y coordinate offsets output by the neural network 64A and the tilt and azimuth offsets output by the neural network 64B. It will be appreciated that the output can be considered in terms of a difference between an initial center of mass from the touch input parameters 26 and a target position of the touch input 20 on the touch-sensitive display 12.
- An example of the extraction of features 88A for the neural network 64A is shown in FIG. 10.
- The touch input parameters 26 are recorded as raw input values comprising nine magnitudes of the touch input 20 signal in X, Y, and time (t) for a tip of the stylus 20A and for a ring of the stylus 20A.
- The touch input parameters 26 are processed by the feature extractor 86A to calculate the center of mass (CoM) in X, Y, and time (t) for each of the tip and the ring.
- The calculations for the CoM are then used to create the extracted features 88A, including CoM in terms of speed and angle, that are used as the input layer for the neural network 64A. While the example shown in FIG. 10 is for determining the X coordinate and Y coordinate offsets, it will be appreciated that the tilt and azimuth offsets are determined by the neural network 64B using similar calculations.
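A hedged reconstruction of this feature extraction: the nine magnitudes are treated as a 3x3 grid of signal strengths per frame, the center of mass is computed for each frame, and the speed and angle of the CoM motion between frames become network features. The grid layout, the frame interval, and the exact feature set are assumptions, since FIG. 10 is not reproduced here.

```python
import math

def center_of_mass(magnitudes):
    """CoM of a 3x3 grid of signal magnitudes, in grid coordinates."""
    total = sum(sum(row) for row in magnitudes)
    cx = sum(m * x for row in magnitudes
             for x, m in enumerate(row)) / total
    cy = sum(y * sum(row) for y, row in enumerate(magnitudes)) / total
    return cx, cy

def com_features(frame_t0, frame_t1, dt):
    """Speed and angle of CoM motion between two consecutive frames."""
    x0, y0 = center_of_mass(frame_t0)
    x1, y1 = center_of_mass(frame_t1)
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) / dt
    angle = math.atan2(dy, dx)
    return speed, angle

# Hypothetical tip signal drifting to the right between two frames.
t0 = [[0, 1, 0], [1, 4, 1], [0, 1, 0]]
t1 = [[0, 1, 0], [0, 1, 4], [0, 1, 0]]
speed, angle = com_features(t0, t1, dt=0.01)
```

The same computation would be repeated for the ring signal, and the tip/ring CoM features together would feed the input layer of network 64A.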
- FIG. 11 is a flowchart of a method 200 for enhancing a touch driver operation according to one example configuration of the present disclosure.
- The method 200 is preferably implemented by one or more processors of a computing system including a touch-sensitive display configured to detect a run-time touch input from a user.
- The method 200 may comprise detecting a run-time touch input on a touch-sensitive display.
- The run-time touch input may be received from a digit or a stylus of a user.
- The touch-sensitive display may be configured as a capacitive touch screen equipped with a plurality of conductive layers.
- The touch-sensitive display may further include a digitizer layer that enables the computing device to be used with an active stylus, such as a digital pen.
- The touch input may be detected via sensors, such as touch sensors that detect stimulation of an electrostatic field of the touch-sensitive display and/or sensors in the digitizer layer.
- The method 200 may include processing the run-time touch input.
- The touch input may be processed by a touch driver based at least in part on a plurality of calibration parameters.
- The calibration parameters may include, for example, X coordinate and Y coordinate offsets based at least in part on the sensed position of the detected touch input and/or tilt and azimuth offsets based at least in part on a sensed thickness of the detected touch input.
- The method 200 may include outputting a touch input event.
- The touch input event may be transferred to an input stack, and an input handler may send it to an application program.
- The application program may be configured to convert data from the touch input event into a display output that is sent to a display driver. Accordingly, advancing from step 206 to step 208, the method 200 may include displaying output to the user on the touch-sensitive display.
- the method 200 may additionally or alternatively continue to step 210 .
- the method 200 may include outputting touch input parameters.
- the touch input parameters may be recorded in a log and sent to a profile module that includes a touch driver enhancer artificial intelligence model.
- the method 200 may include, receiving, by the touch driver enhancer artificial intelligence model, the run-time touch input parameters.
- the run-time touch input parameters may be characteristics associated with the touch input event, such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth, for example.
- the method 200 may include outputting, by the touch driver enhancer artificial intelligence model, a personalized user touch driver profile.
- the personalized user touch driver profile be based at least in part on the run-time touch input parameters associated with the user's touch input, and may include updated calibration parameters.
- the updated calibration parameters may be sent to the touch driver such that the touch driver may be updated with the personalized user touch driver profile.
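The data flow from logged run-time parameters to a personalized profile can be sketched as below. Averaging per-event position errors is a deliberately trivial stand-in for the disclosure's AI model, and the dictionary keys (`x_error`, `y_error`, `x_offset`, `y_offset`) are assumptions.

```python
def build_personalized_profile(runtime_params: list) -> dict:
    """Stand-in for the touch driver enhancer model: derive updated X/Y
    calibration offsets from logged per-event position errors.

    `runtime_params` is a list of dicts with assumed keys 'x_error' and
    'y_error' (intended minus sensed position, per touch input event).
    """
    n = len(runtime_params)
    return {
        "x_offset": sum(p["x_error"] for p in runtime_params) / n,
        "y_offset": sum(p["y_error"] for p in runtime_params) / n,
    }
```

The point of the sketch is only the interface: touch input parameters accumulated in a log go in, and updated calibration parameters for the touch driver come out.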
- the application program may include a feedback module.
- the method 200 may include collecting user feedback at step 216 .
- the feedback module may be configured to collect user feedback and communicate it, together with the associated touch input, to the touch driver enhancer artificial intelligence model, where it may be used to enhance the touch driver according to characteristics of the user's touch input.
- FIG. 12 is a flowchart of a method 300 for training a touch driver enhancer artificial intelligence model according to one example configuration of the present disclosure.
- the method 300 is preferably implemented with an artificial intelligence model for a computing system having a touch-sensitive display configured to detect touch input from a user.
- the method 300 may comprise receiving training phase touch input parameters from a plurality of users.
- Touch input parameters may be characteristics associated with touch input, such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth, for example. These parameters may be collected from a plurality of users to be processed for training the artificial intelligence model.
- the method 300 may include interpreting the training phase touch input parameters.
- the training phase touch input parameters may be interpreted with a clustering algorithm.
- the method 300 may include identifying a plurality of user clusters. Touch input parameters from the plurality of users may be grouped by like characteristics by the clustering algorithm to identify user clusters. As such, each user cluster may define a cohort of users having similar training phase touch input parameters. For example, the cohorts may be clustered according to similarities detected in digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and/or azimuth.
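The clustering step might look like the following minimal k-means sketch over per-user feature vectors (e.g. normalized stroke speed and pressure). The disclosure does not name a particular clustering algorithm, so k-means here is an assumption.

```python
import random

def kmeans(points: list, k: int, iters: int = 20, seed: int = 0):
    """Minimal k-means over per-user feature vectors; a stand-in for the
    clustering algorithm that groups users into cohorts."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each user's feature vector to the nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters
```

Each resulting cluster corresponds to a cohort of users with similar touch input parameters, from which a training data set or template profile could be derived.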
- the method 300 may include training the artificial intelligence model with training data sets derived from the user clusters.
- the training data sets may include a plurality of training data pairs from a cohort of training data examples derived from a respective cohort of users, i.e., a user cluster identified via the clustering algorithm.
- Each training data pair may include training phase touch input parameters and ground truth output.
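The pair structure and the training direction can be illustrated with a deliberately tiny stand-in model: a single learnable X offset fitted by gradient descent to (sensed, ground-truth) position pairs. The disclosure's actual model is a neural network; nothing below is taken from it beyond the idea of input parameters paired with ground truth output.

```python
def train_offset(pairs: list, lr: float = 0.1, epochs: int = 200) -> float:
    """Fit one X offset to training pairs (sensed_x, true_x) by gradient
    descent on the squared error of predicted_x = sensed_x + offset."""
    offset = 0.0
    for _ in range(epochs):
        # Mean gradient of (sensed + offset - true)^2 w.r.t. the offset.
        grad = sum(2 * ((sx + offset) - tx) for sx, tx in pairs) / len(pairs)
        offset -= lr * grad
    return offset
```

Here the ground truth plays the same role as in the disclosure: it supplies the target the calibration parameter is adjusted toward.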
- the method 300 may include creating a template profile with calibration parameters for each user cluster. Template profiles are created for each cohort of users, with each template profile including calibration parameters associated with a respective cohort. A plurality of template profiles may be communicated to a template profile classifier, which determines a template profile for the touch driver at run-time based at least in part on the touch input parameters of a user's touch input. The template profile provides a starting point for the calibration parameters used by the touch driver to process the run-time touch input such that a personalized user touch driver profile may be created.
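The template profile classifier described above might be sketched as a nearest-centroid lookup: given a user's run-time parameters, pick the cohort template whose centroid is closest. The profile names and calibration fields below are invented for illustration.

```python
def choose_template_profile(user_params: list, templates: dict):
    """Pick the template profile whose cohort centroid is nearest to the
    user's run-time touch input parameter vector.

    `templates` maps a profile name (assumed) to a pair of
    (cohort centroid vector, calibration parameter dict).
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    name = min(templates, key=lambda n: dist2(user_params, templates[n][0]))
    return name, templates[name][1]
```

The returned calibration parameters would then seed the touch driver as the starting point from which the personalized profile is refined.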
- the methods and processes described herein may be tied to a computing system of one or more computing devices.
- such methods and processes may be implemented as a computer application program or service, an application-programming interface (API), a library, and/or other computer program product.
- FIG. 13 schematically shows a non-limiting embodiment of a computing system 900 that can enact one or more of the methods and processes described above.
- Computing system 900 is shown in simplified form.
- Computing system 900 may embody the computing device 10 described above and illustrated in FIG. 1 .
- Computing system 900 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phone), wearable computing devices such as smart wristwatches and head-mounted augmented reality devices, and/or other computing devices.
- Computing system 900 includes a logic processor 902, volatile memory 904, and a non-volatile storage device 906.
- Computing system 900 may optionally include a display subsystem 908, input subsystem 910, communication subsystem 912, and/or other components not shown in FIG. 13.
- Logic processor 902 includes one or more physical devices configured to execute instructions.
- the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result.
- the logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the logic processor 902 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. It will be understood that, in such a case, these virtualized aspects may be run on different physical logic processors of various different machines.
- Non-volatile storage device 906 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 906 may be transformed, e.g., to hold different data.
- Non-volatile storage device 906 may include physical devices that are removable and/or built-in.
- Non-volatile storage device 906 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology.
- Non-volatile storage device 906 may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 906 is configured to hold instructions even when power is cut to the non-volatile storage device 906 .
- Volatile memory 904 may include physical devices that include random access memory. Volatile memory 904 is typically utilized by logic processor 902 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 904 typically does not continue to store instructions when power is cut to the volatile memory 904 .
- logic processor 902, volatile memory 904, and non-volatile storage device 906 may be integrated together into one or more hardware-logic components.
- hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example.
- module may be used to describe an aspect of computing system 900 typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function.
- a module, program, or engine may be instantiated via logic processor 902 executing instructions held by non-volatile storage device 906 , using portions of volatile memory 904 .
- modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc.
- the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc.
- the terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
- display subsystem 908 may be used to present a visual representation of data held by non-volatile storage device 906 .
- the visual representation may take the form of a graphical user interface (GUI).
- the state of display subsystem 908 may likewise be transformed to visually represent changes in the underlying data.
- Display subsystem 908 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor 902 , volatile memory 904 , and/or non-volatile storage device 906 in a shared enclosure, or such display devices may be peripheral display devices.
- input subsystem 910 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller.
- the input subsystem may comprise or interface with selected natural user input (NUI) componentry.
- Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board.
- NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor.
- communication subsystem 912 may be configured to communicatively couple various computing devices described herein with each other, and with other devices.
- Communication subsystem 912 may include wired and/or wireless communication devices compatible with one or more different communication protocols.
- the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network, such as an HDMI over Wi-Fi connection.
- the communication subsystem may allow computing system 900 to send and/or receive messages to and/or from other devices via a network such as the Internet.
- the computing system may comprise a touch-sensitive display and one or more processors.
- the touch-sensitive display may be configured to detect a run-time touch input from a user.
- the one or more processors may be configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model.
- the touch driver of the touch-sensitive display may be configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event.
- the artificial intelligence model may be configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model may output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
- the one or more processors may be configured to receive training phase touch input parameters from a plurality of users, and instruct a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
- the artificial intelligence model may have been trained in the initial training phase with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users.
- Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
- the one or more processors may be configured to create a template profile for each cohort of users. Each template profile may include calibration parameters associated with a respective cohort.
- the one or more processors may be configured to perform cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated, and set the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
- in a feedback training phase, the artificial intelligence model may be configured to collect user feedback via an implicit or explicit user feedback interface, and perform feedback training of the artificial intelligence model based at least in part on the user feedback.
- the artificial intelligence model may be configured to adjust internal weights to enhance one or more of the plurality of calibration parameters via a backpropagation algorithm.
- the touch input parameters may be recorded as raw input values for user touch input on the touch-sensitive display, and touch input features may be extracted from the raw input values.
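Feature extraction from raw input values might look like the following sketch, which derives two illustrative features (path length and mean stroke speed) from timestamped (t, x, y) samples. The disclosure's feature set (digit size, stutter, latency, and so on) is broader; the function and key names are assumptions.

```python
import math

def extract_features(samples: list) -> dict:
    """Derive illustrative touch features from raw (t, x, y) samples.

    `samples` is a chronologically ordered list of (time, x, y) tuples
    recorded for one stroke on the touch-sensitive display.
    """
    # Euclidean distance between each pair of consecutive samples.
    dists = [math.hypot(x1 - x0, y1 - y0)
             for (_, x0, y0), (_, x1, y1) in zip(samples, samples[1:])]
    total_time = samples[-1][0] - samples[0][0]
    path = sum(dists)
    return {
        "path_length": path,
        "mean_speed": path / total_time if total_time else 0.0,
    }
```

Features like these, rather than the raw samples themselves, would form the touch input parameter vector consumed by the model.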
- the artificial intelligence model may include one or more neural networks.
- a first neural network of the one or more neural networks may be configured to output an X coordinate offset and a Y coordinate offset of the touch input.
- a second neural network of the one or more neural networks may be configured to output a tilt offset and an azimuth offset of the touch input.
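The split into two small networks, one producing the X/Y coordinate offsets and one producing the tilt/azimuth offsets, can be sketched as follows. Layer sizes, the tanh activation, and the random initialization are all assumptions, and no training is shown.

```python
import math
import random

class TinyNet:
    """Minimal one-hidden-layer network with two outputs; a sketch of the
    per-quantity networks described above (sizes are illustrative)."""

    def __init__(self, n_in: int, n_hidden: int = 4, n_out: int = 2,
                 seed: int = 0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.w2 = [[rng.uniform(-1, 1) for _ in range(n_hidden)]
                   for _ in range(n_out)]

    def forward(self, x: list) -> list:
        # Hidden layer with tanh activation, then a linear output layer.
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
             for row in self.w1]
        return [sum(w * hi for w, hi in zip(row, h)) for row in self.w2]

# One network per pair of calibration quantities, as described above;
# a 6-dimensional feature vector as input is an assumption.
xy_net = TinyNet(n_in=6)    # features -> (X offset, Y offset)
tilt_net = TinyNet(n_in=6)  # features -> (tilt offset, azimuth offset)
```

Keeping the two heads separate mirrors the description: tilt and azimuth only apply to stylus input, so that network can be bypassed for digit input.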
- the touch input may be performed with a stylus or one or more digits of the user.
- the characteristic of the touch input may be associated with at least one of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth.
- the method may comprise, at one or more processors of a computing system including a touch-sensitive display configured to detect a run-time touch input from a user, processing the run-time touch input based at least in part on a plurality of calibration parameters.
- the method may further include outputting a touch input event and a plurality of run-time touch input parameters associated with the touch input event.
- the method may further include receiving, as input for an artificial intelligence model, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the method may further include outputting, by the artificial intelligence model, a personalized user touch driver profile including a plurality of updated calibration parameters for a touch driver of the touch-sensitive display.
- the method may further comprise, in an initial training phase, receiving training phase touch input parameters from a plurality of users, and instructing a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
- the method may further comprise, in the initial training phase, training the artificial intelligence model with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users.
- Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
- the method may further comprise creating a template profile for each cohort of users.
- Each template profile may include calibration parameters associated with a respective cohort.
- the method may further comprise, in a run-time phase, performing cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated, and setting the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
- the method may further comprise, in a feedback training phase, collecting user feedback via an implicit or explicit user feedback interface, and performing feedback training of the artificial intelligence model based at least in part on the user feedback.
- the computing system may comprise a touch-sensitive display and one or more processors.
- the touch-sensitive display may be configured to detect a run-time touch input from a user.
- the one or more processors may be configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model.
- the touch driver of the touch-sensitive display may be configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event.
- the artificial intelligence model may be configured to receive, as input, the run-time touch input parameters.
- the artificial intelligence model may output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
- the artificial intelligence model may include a first neural network configured to output an X coordinate offset and a Y coordinate offset of the touch input.
- the artificial intelligence model may further include a second neural network configured to output a tilt offset and an azimuth offset of the touch input.
- the artificial intelligence model may be configured to collect user feedback via an implicit or explicit user feedback interface, and perform feedback training of the artificial intelligence model based at least in part on the user feedback.
Abstract
A computing system includes a touch-sensitive display and one or more processors. The touch-sensitive display is configured to detect a run-time touch input from a user. The one or more processors are configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model. The touch driver is configured to process the run-time touch input based on a plurality of calibration parameters and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event. The artificial intelligence model is configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model is configured to output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
Description
- Computing devices with touch-sensitive displays enable a user to interact with the computer via touch from a stylus or a digit. The touch-sensitive displays are configured for universal use, regardless of user characteristics such as handedness, writing pressure, writing speed, pen tilt, and the like.
- To address the issues discussed herein, a computing system is provided. According to one aspect, the computing system includes a touch-sensitive display and one or more processors. The touch-sensitive display is configured to detect a run-time touch input from a user. The one or more processors are configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model. The touch driver is configured to process the run-time touch input based at least in part on a plurality of calibration parameters and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event. The artificial intelligence model is configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model is configured to output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
- In some configurations, the artificial intelligence model is trained in an initial training phase with a training data set that includes a plurality of training data pairs. Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters. Training phase touch input parameters from a plurality of users may be interpreted by a clustering algorithm to identify a plurality of user clusters, and each user cluster may define a cohort of users. The training data set used to train the artificial intelligence model may be created from training data examples derived from a cohort of users.
- In some configurations, the artificial intelligence model is trained in a feedback training phase. The artificial intelligence model may collect user feedback via an implicit or explicit user feedback interface and perform feedback training based at least in part on the user feedback. The artificial intelligence model may be further configured to adjust internal weights to enhance one or more calibration parameters via a backpropagation algorithm.
- In some configurations, the artificial intelligence model includes one or more neural networks. A first neural network of the one or more neural networks may be configured to output an X coordinate offset and a Y coordinate offset of the touch input. When the touch input is performed with a stylus, a second neural network of the one or more neural networks may be configured to output a tilt offset and an azimuth offset of the touch input.
- This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
- FIG. 1 is a general schematic diagram of a computing system for enhancing a touch driver operation according to an embodiment of the present disclosure.
- FIG. 2 is a detailed schematic diagram of the computing system of FIG. 1.
- FIG. 3 is an illustration of user clusters identified via a clustering algorithm of the computing system of FIG. 1.
- FIG. 4 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during an initial training phase.
- FIG. 5 is a schematic diagram of a template profile classifier of the computing system of FIG. 1 during a run-time phase.
- FIG. 6 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during initial training and feedback training phases.
- FIGS. 7 and 8 are illustrations of a graphical user interface for touch input calibration of the computing system of FIG. 1.
- FIG. 9 is a schematic diagram of a touch driver enhancer artificial intelligence model of the computing system of FIG. 1 during a run-time phase.
- FIG. 10 shows the extraction of features from run-time touch input parameters of the computing system of FIG. 1.
- FIG. 11 is a flowchart of a method for enhancing a touch driver operation according to one example configuration of the present disclosure.
- FIG. 12 is a flowchart of a method for training a touch driver enhancer artificial intelligence model according to one example configuration of the present disclosure.
- FIG. 13 is an example computing system according to one implementation of the present disclosure.
- Several significant challenges exist to the effective implementation of personalized human-computer interactions (HCI) in modern computing systems. Touch input from a stylus or a digit of a user has become a prominent HCI tool. However, a need exists to personalize the HCI and replace generic procedures that produce a single set of touch input parameters regardless of a user's personal preferences or touch input characteristics. Training an artificial intelligence model to recognize user-specific touch input parameters and adapt processing of the touch input accordingly would improve the user experience and lead to an efficient and more natural HCI. Further, tuning the touch input during run-time would provide a consistent user experience, even under conditions of fatigue. Recognizing user-specific touch input parameters at run-time would additionally facilitate calibrating the touch input according to different users of the same computing device, resulting in a smooth transition between users and/or modes of touch input. A computing system with an artificial intelligence model configured to enhance a touch driver operation according to a user's profile and preferences would increase the user's comfort and enhance productivity, both during touch input from a digit or hand of the user, as well as from a stylus. However, heretofore challenges have existed in the development of such a system.
- As schematically illustrated in
FIG. 1 , to address the above identified issues, acomputing system 100 for enhancing a touch driver operation is provided. Thecomputing system 100 includes acomputing device 10, which may, for example, take the form of a desktop computing device, laptop computing device, or a mobile computing device, such as a tablet. In another example, thecomputing device 10 may take other suitable forms, such as a smart phone device, a wrist mounted computing device, or the like. Thecomputing device 10 includes a touch-sensitive display 12 configured to detect touch input from digit or stylus of a user, as described in detail below. It will be appreciated that “touch input” as used herein refers both to touch input from a digit or hand of a user, as well as touch input from a stylus, unless otherwise specified. - A
touch driver 14 of the touch-sensitive display 12 is stored in an associatedmemory 16 and implemented by one ormore processors 18. In a run-time phase, thetouch driver 14 is configured to process the run-time touch input 20 based at least in part on a plurality ofcalibration parameters 22. Thecalibration parameters 22 may include, for example, X coordinate and Y coordinate offsets based at least in part on the sensed position of the detectedtouch input 20 and/or tilt and azimuth offsets based at least in part on a sensed thickness of the detectedtouch input 20. Upon processing the run-time touch input 20, thetouch driver 14 outputs atouch input event 24 and a plurality of run-timetouch input parameters 26. The touch input event may be passed to aprogram 42, which in turn generatesdisplay output 44 based thereon. The run-timetouch input parameters 26 may be stored in memory as they are gathered over a period of time based at least in part on a series of run-time touch inputs, and may be characteristics associated with eachtouch input event 24 such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth. Thememory 16 further includes an artificial intelligence model that may be implemented as a touch driver enhancerartificial intelligence model 28. The touch driver enhancerartificial intelligence model 28 is configured to receive the run-timetouch input parameters 26 and, in response, output a personalized usertouch driver profile 30 that is enhanced according to the processedtouch input 20 and includes a plurality of updatedcalibration parameters 32 for thetouch driver 14. In one example, the touch driver enhancerartificial intelligence model 28 is configured to optimize thetouch driver profile 30 by computing updatedcalibration parameters 32 that optimize the performance of thetouch driver 14. 
In other examples, the touch driver enhancerartificial intelligence model 28 may be configured to output updatedcalibration parameters 32 that are an improvement over theoriginal calibration parameters 22, but are not necessarily optimized. - A detailed schematic diagram of the
computing system 100 is illustrated inFIG. 2 . As described above, thetouch input 20 may be performed with a stylus 20A or one ormore digits 20B of the user. The touch-sensitive display 12 may be configured as a capacitive touch screen equipped with a plurality of conductive layers. The touch-sensitive display 12 may further include a digitizer layer that enables thecomputing device 10 to be used with an active stylus 20A, such as a digital pen. Thetouch input 20 may be detected viasensors 34, such as touch sensors that detect stimulation of an electrostatic field of the touch-sensitive display 12 and/or sensors in the digitizer layer. Thetouch input 20 is processed by thetouch driver 14 based at least in part oncalibration parameters 22 associated with thetouch input 20. Thetouch driver 14 then outputstouch data 36 including thetouch input event 24 and a plurality of run-time touch parameters 26 associated with thetouch input event 24. - The
touch input event 24 is transferred to aninput stack 38, and aninput handler 40 sends it to anapplication program 42. Theapplication program 42 is configured to convert data from thetouch input event 24 into adisplay output 44 that is sent to adisplay driver 46 to be displayed to the user on the touch-sensitive display 12. As described below, theapplication program 42 further includes afeedback module 48 configured to communicate data from thetouch input event 24 associated with thetouch input 20 to the touch driver enhancerartificial intelligence model 28 where it may be used to enhance thetouch driver 14 according to characteristics of the user'stouch input 20. - The
touch input parameters 26 are recorded in alog 50 and sent to aprofile module 52 that includes the touch driver enhancerartificial intelligence model 28. As described above with reference toFIG. 1 , the touch driver enhancerartificial intelligence model 28 outputs a personalized usertouch driver profile 30, including updatedcalibration parameters 32, based at least in part on the run-timetouch input parameters 26 associated with the user'stouch input 20. The updatedcalibration parameters 32 are sent to thetouch driver 14 such that thetouch driver 14 may be updated with the personalized usertouch driver profile 30. - The
profile module 52 further includes a template profile classifier 54 that is configured to receive the run-time touch input parameters 26, classify them according to a plurality of user template profiles, and output a template profile 56 for the user performing the touch input 20. As described in detail below with reference to FIGS. 3-5, calibration parameters 22 and touch input parameters 26 associated with the touch input 20 are additionally collected as crowd data 58 and interpreted with a clustering algorithm 60 to identify user clusters 62 that each define a cohort of users. Template profiles are created for each cohort of users, with each template profile including calibration parameters 22 associated with a respective cohort. Examples of template profiles and their respective calibration parameters are shown in FIG. 2. A plurality of template profiles 56 n are communicated to the template profile classifier 54, which determines a template profile 56 for the touch driver 14 based at least in part on the touch input parameters 26 of the user's touch input 20. - The
template profile 56 provides a starting point for the calibration parameters 22 used by the touch driver 14 to process the run-time touch input 20. As the touch driver 14 continues to receive and process touch input 20 from the user, the touch driver enhancer artificial intelligence model 28 is configured to receive touch input parameters 26 from the touch driver 14 and output updated calibration parameters 32 included in the personalized user touch driver profile 30 to enhance the user experience during the human-computer interaction. - Turning to
FIG. 3, examples of user clusters identified via the clustering algorithm 60 are shown. As described above, touch input parameters 26 associated with the touch input 20 may be collected as crowd data 58 from a plurality of users at run-time. The crowd data 58 may be sent to a storage device in the cloud, where it is analyzed and used for touch driver 14 updates. Additionally or alternatively, the crowd data 58 may be collected from a plurality of users to curate training phase touch input parameters 66 that are used in an initial training phase of the touch driver enhancer artificial intelligence model 28. In this implementation, the one or more processors 18 are configured to receive training phase touch input parameters 66 associated with touch input from a plurality of users and instruct the clustering algorithm 60 to interpret the training phase touch input parameters 66. A plurality of user clusters 62 are identified, with each user cluster 62 defining a cohort of users having similar training phase touch input parameters 66. For example, the cohorts may be clustered according to similarities detected in digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and/or azimuth. The clustering of cohorts of users having similar characteristics and the resulting plurality of template profiles 56 n can be used to initially create the template profile 56 for the user, thereby reducing the number of touch inputs and the time required to calibrate the touch driver 14 and create the personalized touch driver profile 30. - As indicated above, the touch driver enhancer
artificial intelligence model 28 is trainable. Accordingly, FIG. 4 shows a schematic diagram of the touch driver enhancer artificial intelligence model 28 during an initial training phase. In the initial training phase, the touch driver enhancer artificial intelligence model 28 is trained with a training data set 68 including a plurality of training data pairs 70 from a cohort of training data examples derived from a cohort of users, i.e., a user cluster 62, identified via the clustering algorithm 60. Each training data pair 70 includes training phase touch input parameters 66 and ground truth output 72. - The touch driver enhancer
artificial intelligence model 28 further includes one or more neural networks 64 that may have an input layer, one or more convolutional layers, one or more fully connected hidden layers, and an output layer. It will be appreciated that the one or more neural networks 64 may be configured to include one or more convolutional layers, one or more fully connected hidden layers, or both one or more convolutional layers and one or more fully connected hidden layers. The input layer includes a plurality of nodes corresponding to the training phase touch input parameters 66, which indicate a characteristic of the training phase touch input 68. The output layer includes a plurality of output nodes corresponding to the ground truth output 72, which indicate training phase calibration parameters 74 for the training data pair 70. Nodes in each layer are linked by weighted associations, and the touch driver enhancer artificial intelligence model 28 is configured to adjust internal weights to enhance one or more of the plurality of calibration parameters 74 via a backpropagation algorithm according to a loss function during training to increase the accuracy of the output nodes during run-time. As described in detail below with reference to FIG. 9, the touch driver enhancer artificial intelligence model 28 may include a first neural network 64A configured to output X coordinate offset and Y coordinate offset of the touch input 20. The touch driver enhancer artificial intelligence model 28 may further include a second neural network 64B configured to output a tilt offset and an azimuth offset, which together define an angle of the stylus 20A that contributes to a thickness of the touch input 20. Additionally or alternatively, in use-case scenarios in which the touch input is performed with a digit or hand of the user, the touch driver enhancer artificial intelligence model 28 may include a neural network configured to output offsets related to hand pose. - A schematic diagram of the
template profile classifier 54 during a run-time phase is shown in FIG. 5. As described above with reference to FIG. 2, the template profile classifier 54 receives run-time touch input parameters 26, classifies them according to a plurality of user template profiles 56 n based at least in part on cohorts of users, and outputs a template profile 56 for the user performing the touch input 20. Accordingly, at run-time, the template profile classifier 54 is configured to perform cluster analysis on the run-time touch input parameters 26 to determine the cohort of users with which the user is associated. The template profile classifier 54 then outputs the template profile created for the determined cohort and sets the calibration parameters 22 for the personalized user touch driver profile 30 according to the calibration parameters 22 included in the template profile 56 associated with the determined cohort of users. In this way, the calibration parameters 22 used by the touch driver 14 to process the touch input 20 are based at least in part on template profiles 56 n of cohorts of users having characteristics similar to those of the user. For example, the template profile classifier 54 may determine that the user's touch input is similar to that of cohorts of users that are right-handed and have a moderate stroke speed, and the calibration parameters 22 are then set according to template profiles created for user cohorts having these characteristics. As such, the template profile 56 for the user may be initially set using a plurality of template profiles 56 n having characteristics similar to those of the user, thereby reducing the time and effort required to calibrate the touch driver 14. The personalized user touch driver profile 30 continues to be fine-tuned with the touch driver enhancer artificial intelligence model 28 based at least in part on the touch input 20 provided by the user in a feedback training phase. - Turning to
FIG. 6, a schematic diagram of the touch driver enhancer artificial intelligence model 28 during initial training and feedback training phases is illustrated. It will be appreciated that the initial training phase, indicated in FIG. 6 with solid lines, corresponds to the initial training phase described above with reference to FIG. 4. The feedback training phase is indicated in FIG. 6 with dashed lines. - As described above with reference to
FIG. 2, the feedback module 48 is configured to communicate the run-time touch input parameters 26 associated with the touch input 20 to the touch driver enhancer artificial intelligence model 28 to enhance the touch driver 14 according to characteristics of the user's touch input 20. As shown, the feedback module 48 may further include a user feedback interface 76 that is configured to collect user feedback 78 that can be used in the feedback training phase. The user feedback 78 may be implicit, such as the user frequently erasing or undoing the touch input 20, or explicit, such as performing touch input calibration exercises and indicating whether the touch input 20 is accurately depicted by the display output 44 shown on the touch-sensitive display 12. As such, the user feedback interface 76 may be configured as one or both of an implicit user feedback interface or an explicit user feedback interface. Together, the user feedback 78 and run-time touch input 20 form a feedback pair 80 with which the touch driver enhancer artificial intelligence model 28 performs feedback training. As with the initial training phase, nodes in each layer of the neural network 64 are linked by weighted associations, and the touch driver enhancer artificial intelligence model 28 is configured to adjust internal weights to enhance one or more of the plurality of calibration parameters 22 via a backpropagation algorithm according to a loss function during the feedback training phase. -
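For illustration only, the weight-adjustment idea described above can be sketched with a single linear unit trained by gradient descent on a squared-error loss. The feedback pairs, feature values, and target offsets below are hypothetical stand-ins; an actual touch driver enhancer artificial intelligence model 28 would use the multi-layer neural networks 64 described above rather than this minimal sketch.

```python
# Hypothetical feedback pairs: (touch input features) -> target calibration offset.
samples = [((0.1, 0.9), 0.05), ((0.4, 0.6), 0.12), ((0.8, 0.2), 0.20)]

w = [0.0, 0.0]   # internal weights, adjusted during feedback training
b = 0.0
lr = 0.1         # learning rate

def predict(x):
    return w[0] * x[0] + w[1] * x[1] + b

for _ in range(2000):
    for x, target in samples:
        err = predict(x) - target      # gradient of 0.5 * squared error
        # "Backpropagate": move each weight against the error signal.
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the squared-error loss over the feedback pairs is small.
loss = sum((predict(x) - t) ** 2 for x, t in samples)
```

The same update rule, applied layer by layer via the chain rule, is what a backpropagation algorithm performs in a deeper network.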
FIGS. 7 and 8 illustrate examples of an explicit user feedback interface 76, which may be presented during a calibration phase and configured as a touch input calibration graphical user interface (GUI) 82. The computing device 10 may be configured to solicit user feedback 78 in conditions such as configuring a new computing device for touch input from a digit or stylus, configuring or updating a user profile, or upon pairing a stylus to the computing device, for example. Additionally or alternatively, the user may access the touch input calibration GUI 82 to provide explicit user feedback 78 when the touch input 20 is not calibrated to their satisfaction. - As shown in the example illustrated in
FIG. 7, the calibration phase may be implemented to assess X coordinate and Y coordinate offsets between the touch input 20 and the display output 44. Accordingly, the touch input calibration GUI 82 may be configured to present the user with a calibration graphic 84A to be traced such that the touch input 20 can be processed, and a position of the corresponding display output 44A can be compared to the calibration graphic 84A for accuracy. In FIG. 7, the calibration graphic 84A is depicted with a circle, as indicated by the dashed line. The user performs touch input 20 to trace the calibration graphic 84A, and the touch input 20 is processed and displayed as the display output 44A, indicated by the solid line. The user may additionally provide explicit user feedback 78 based at least in part on how accurately the display output 44 shown on the touch-sensitive display 12 matches the calibration graphic 84A. -
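As a non-limiting sketch, the X coordinate and Y coordinate offsets assessed in this tracing exercise could be estimated by averaging the displacement between sampled points of the displayed stroke and the corresponding points of the calibration graphic. The coordinates below are hypothetical sample points, not values from the disclosure.

```python
# Hypothetical sampled points of the circular calibration graphic (target)
# and of the displayed stroke the user actually produced.
graphic = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0), (-1.0, 0.0)]
displayed = [(0.2, 1.1), (1.2, 0.1), (0.2, -0.9), (-0.8, 0.1)]

# Mean displacement of the displayed stroke from the target graphic.
n = len(graphic)
x_offset = sum(d[0] - g[0] for d, g in zip(displayed, graphic)) / n
y_offset = sum(d[1] - g[1] for d, g in zip(displayed, graphic)) / n
# Updated calibration parameters would compensate for these offsets
# when processing future touch input.
```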
FIG. 8 shows an example implementation of the calibration phase for assessing tilt and azimuth offsets between the touch input 20 and the display output 44. As the tilt and azimuth define an angle of the stylus 20A that contributes to a thickness of the touch input 20, the touch input 20 is performed with the stylus 20A in this implementation of the calibration phase. Similar to the example described above with reference to FIG. 7, the touch input calibration GUI 82 may be configured to present the user with a calibration graphic 84B such that a thickness of the corresponding display output 44B can be processed and compared to the calibration graphic 84B for accuracy. Accordingly, in FIG. 8, the calibration graphic 84B is depicted as a dashed line increasing in thickness from left to right. The user performs touch input 20 to match the increasing thickness of the calibration graphic 84B, and the touch input 20 is processed and displayed as the display output 44B, indicated by the solid line above the calibration graphic 84B. The user may additionally provide explicit user feedback 78 based at least in part on how accurately the display output 44B shown on the touch-sensitive display 12 matches the calibration graphic 84B. - Additionally or alternatively, the calibration graphic 84 displayed on the touch
input calibration GUI 82 during the calibration phase may be configured to assess X coordinate and Y coordinate offsets concurrently with tilt and azimuth offsets. In any implementation of the calibration phase, the explicit user feedback 78 may be processed by the feedback module 48 and sent to the touch driver enhancer artificial intelligence model 28, which outputs the personalized user touch driver profile 30 enhanced according to the explicit user feedback 78, including a plurality of updated calibration parameters 32 for the touch driver 14. - As indicated above and illustrated in
FIG. 9, the touch driver enhancer artificial intelligence model 28 may include the first neural network 64A configured to output X coordinate offset and Y coordinate offset of the touch input 20, and the second neural network 64B configured to output a tilt offset and an azimuth offset. Each neural network has an input layer, one or more fully connected hidden layers, and an output layer. As described above with reference to FIG. 2, upon processing the run-time touch input 20, the touch driver 14 is configured to output the touch event 24 and a plurality of run-time touch input parameters 26 indicating characteristics associated with the touch input event 24. The touch input parameters 26 are recorded in the log 50 as raw input values for user touch input 20 on the touch-sensitive display 12. - As shown in
FIG. 9, feature extractors 86A, 86B are applied to the touch input parameters 26, and the extracted features 88A, 88B comprise the plurality of nodes included in the input layers for the respective neural networks 64A, 64B. The touch driver enhancer artificial intelligence model 28 then outputs a personalized user touch driver profile 30, including updated calibration parameters according to the X coordinate and Y coordinate offsets output by the neural network 64A and the tilt and azimuth offsets output by the neural network 64B. It will be appreciated that the output can be considered in terms of a difference between an initial center of mass from the touch input parameters 26 to a target position of the touch input 20 on the touch-sensitive display 12. -
features 88A for the neural network 64A is shown in FIG. 10. In the illustrated example, the touch input parameters 26 are recorded as raw input values comprising nine magnitudes of the touch input 20 signal in X, Y, and time (t) for a tip of the stylus 20A and for a ring of the stylus 20A. The touch input parameters 26 are processed by the feature extractor 86A to calculate the center of mass (CoM) in X, Y, and time (t) for each of the tip and the ring. The calculations for the CoM are then used to create the extracted features 88A, including CoM in terms of speed and angle, that are used as the input layer for the neural network 64A. While the example shown in FIG. 10 is for determining the X coordinate and Y coordinate offsets, it will be appreciated that the tilt and azimuth offsets are determined by the neural network 64B using similar calculations. -
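The center-of-mass calculation performed by the feature extractor 86A can be illustrated with a simplified sketch. The nine magnitudes and the 3x3 grid of sensor offsets below are hypothetical stand-ins for the raw input values described above; a fuller extractor would also compute the CoM over time (t) and repeat the calculation for the ring of the stylus.

```python
# Hypothetical raw input: nine signal magnitudes for the stylus tip, sampled
# on a 3x3 sensor patch centered on the reported touch position.
magnitudes = [0.1, 0.3, 0.1,
              0.2, 0.9, 0.2,
              0.1, 0.4, 0.1]
xs = [-1, 0, 1] * 3                      # sensor column offsets
ys = [-1, -1, -1, 0, 0, 0, 1, 1, 1]      # sensor row offsets

# Center of mass: magnitude-weighted mean position of the signal.
total = sum(magnitudes)
com_x = sum(m * x for m, x in zip(magnitudes, xs)) / total
com_y = sum(m * y for m, y in zip(magnitudes, ys)) / total
```

Here the signal is symmetric in X, so the CoM sits at the patch center horizontally but is pulled slightly downward by the heavier bottom row.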
FIG. 11 is a flowchart of a method 200 for enhancing a touch driver operation according to one example configuration of the present disclosure. Method 200 is preferably implemented by one or more processors of a computing system including a touch-sensitive display configured to detect a run-time touch input from a user. - At
step 202, the method 200 may comprise detecting a run-time touch input on a touch-sensitive display. The run-time touch input may be received from a digit or a stylus of a user. Accordingly, the touch-sensitive display may be configured as a capacitive touch screen equipped with a plurality of conductive layers. The touch-sensitive display may further include a digitizer layer that enables the computing device to be used with an active stylus, such as a digital pen. The touch input may be detected via sensors, such as touch sensors that detect stimulation of an electrostatic field of the touch-sensitive display and/or sensors in the digitizer layer. - Continuing from
step 202 to step 204, the method 200 may include processing the run-time touch input. The touch input may be processed by a touch driver based at least in part on a plurality of calibration parameters. The calibration parameters may include, for example, X coordinate and Y coordinate offsets based at least in part on the sensed position of the detected touch input and/or tilt and azimuth offsets based at least in part on a sensed thickness of the detected touch input. - Proceeding from
step 204 to step 206, the method 200 may include outputting a touch input event. The touch input event may be transferred to an input stack, and an input handler may send it to an application program. The application program may be configured to convert data from the touch input event into a display output that is sent to a display driver. Accordingly, advancing from step 206 to step 208, the method 200 may include displaying output to the user on the touch-sensitive display. - After processing the touch input event at
step 204, the method 200 may additionally or alternatively continue to step 210. At step 210, the method 200 may include outputting touch input parameters. The touch input parameters may be recorded in a log and sent to a profile module that includes a touch driver enhancer artificial intelligence model. - Continuing from
step 210 to step 212, the method 200 may include receiving, by the touch driver enhancer artificial intelligence model, the run-time touch input parameters. The run-time touch input parameters may be characteristics associated with the touch input event, such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth, for example. - Proceeding from
step 212 to step 214, the method 200 may include outputting, by the touch driver enhancer artificial intelligence model, a personalized user touch driver profile. The personalized user touch driver profile may be based at least in part on the run-time touch input parameters associated with the user's touch input, and may include updated calibration parameters. The updated calibration parameters may be sent to the touch driver such that the touch driver may be updated with the personalized user touch driver profile. - In some implementations, the application program may include a feedback module. As such, after outputting the touch input event at
step 206, the method 200 may include collecting user feedback at step 216. The feedback module may be configured to collect user feedback and communicate user feedback associated with the touch input to the touch driver enhancer artificial intelligence model, where it may be used to enhance the touch driver according to characteristics of the user's touch input. -
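For illustration, one plausible heuristic for the implicit user feedback collected at step 216 is to flag strokes that the user undoes shortly after drawing them. The event log, time window, and scoring below are hypothetical, offered only as a sketch of how implicit feedback might be derived.

```python
# Hypothetical implicit-feedback heuristic: frequent undo/erase actions shortly
# after a stroke suggest the processed touch input did not match user intent.
def implicit_feedback(events, window=2.0):
    """Return the fraction of strokes undone within `window` seconds."""
    strokes = [t for kind, t in events if kind == "stroke"]
    undos = [t for kind, t in events if kind == "undo"]
    flagged = [t for t in strokes
               if any(0 <= u - t <= window for u in undos)]
    return len(flagged) / len(strokes) if strokes else 0.0

# One of three strokes is undone one second after it was drawn.
events = [("stroke", 0.0), ("undo", 1.0), ("stroke", 5.0), ("stroke", 9.0)]
score = implicit_feedback(events)
```

A score like this could serve as a weak negative-feedback signal paired with the corresponding run-time touch input.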
FIG. 12 is a flowchart of a method 300 for training a touch driver enhancer artificial intelligence model according to one example configuration of the present disclosure. The method 300 is preferably implemented on an artificial intelligence model for a computing system having a touch-sensitive display configured to detect touch input from a user. - At step 302, the
method 300 may comprise receiving training phase touch input parameters from a plurality of users. Touch input parameters may be characteristics associated with touch input, such as measurements of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth, for example. These parameters may be collected from a plurality of users to be processed for training the artificial intelligence model. - Continuing from step 302 to step 304, the
method 300 may include interpreting the training phase touch input parameters. The training phase touch input parameters may be interpreted with a clustering algorithm. - Proceeding from
step 304 to step 306, the method 300 may include identifying a plurality of user clusters. Touch input parameters from the plurality of users may be grouped by like characteristics by the clustering algorithm to identify user clusters. As such, each user cluster may define a cohort of users having similar training phase touch input parameters. For example, the cohorts may be clustered according to similarities detected in digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and/or azimuth. - Advancing from step 306 to step 308, the
method 300 may include training the artificial intelligence model with training data sets derived from the user clusters. The training data sets may include a plurality of training data pairs from a cohort of training data examples derived from a respective cohort of users, i.e., a user cluster identified via the clustering algorithm. Each training data pair may include training phase touch input parameters and ground truth output. - Continuing from step 308 to step 310, the
method 300 may include creating a template profile with calibration parameters for each user cluster. Template profiles are created for each cohort of users, with each template profile including calibration parameters associated with a respective cohort. A plurality of template profiles may be communicated to a template profile classifier, which determines a template profile for the touch driver at run-time based at least in part on the touch input parameters of a user's touch input. The template profile provides a starting point for the calibration parameters used by the touch driver to process the run-time touch input such that a personalized user touch driver profile may be created. - In some embodiments, the methods and processes described herein may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer application program or service, an application-programming interface (API), a library, and/or other computer program product.
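The run-time use of the template profiles created at step 310 can be sketched as a nearest-centroid lookup. The cohort centroids, the (stroke speed, pressure) feature space, and the calibration values below are hypothetical; in practice they would come from the clustering algorithm of step 306 and the per-cohort calibration parameters described above.

```python
import math

# Hypothetical cohort centroids in a (stroke speed, pressure) feature space,
# each paired with the template profile created for that cohort.
templates = {
    (0.2, 0.8): {"x_offset": 1.5, "y_offset": -0.5},   # slow, firm strokes
    (0.9, 0.3): {"x_offset": -0.8, "y_offset": 1.1},   # fast, light strokes
}

def classify(run_time_params):
    """Return the template profile of the nearest cohort centroid."""
    nearest = min(templates, key=lambda c: math.dist(c, run_time_params))
    return templates[nearest]

# A user with slow, fairly firm strokes is assigned the first cohort's profile.
profile = classify((0.25, 0.7))
```

The returned template profile then serves as the starting calibration that the artificial intelligence model fine-tunes into the personalized user touch driver profile.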
-
FIG. 13 schematically shows a non-limiting embodiment of a computing system 900 that can enact one or more of the methods and processes described above. Computing system 900 is shown in simplified form. Computing system 900 may embody the computing device 10 described above and illustrated in FIG. 1. Computing system 900 may take the form of one or more personal computers, server computers, tablet computers, home-entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phone), and/or other computing devices, and wearable computing devices such as smart wristwatches and head-mounted augmented reality devices. -
Computing system 900 includes a logic processor 902, volatile memory 904, and a non-volatile storage device 906. Computing system 900 may optionally include a display subsystem 908, input subsystem 910, communication subsystem 912, and/or other components not shown in FIG. 13. -
Logic processor 902 includes one or more physical devices configured to execute instructions. For example, the logic processor may be configured to execute instructions that are part of one or more applications, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, achieve a technical effect, or otherwise arrive at a desired result. - The logic processor may include one or more physical processors (hardware) configured to execute software instructions. Additionally or alternatively, the logic processor may include one or more hardware logic circuits or firmware devices configured to execute hardware-implemented logic or firmware instructions. Processors of the
logic processor 902 may be single-core or multi-core, and the instructions executed thereon may be configured for sequential, parallel, and/or distributed processing. Individual components of the logic processor optionally may be distributed among two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic processor may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration. In such a case, it will be understood that these virtualized aspects may be run on different physical logic processors of various different machines. -
Non-volatile storage device 906 includes one or more physical devices configured to hold instructions executable by the logic processors to implement the methods and processes described herein. When such methods and processes are implemented, the state of non-volatile storage device 906 may be transformed, e.g., to hold different data. -
Non-volatile storage device 906 may include physical devices that are removable and/or built-in. Non-volatile storage device 906 may include optical memory (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory (e.g., ROM, EPROM, EEPROM, FLASH memory, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), or other mass storage device technology. Non-volatile storage device 906 may include nonvolatile, dynamic, static, read/write, read-only, sequential-access, location-addressable, file-addressable, and/or content-addressable devices. It will be appreciated that non-volatile storage device 906 is configured to hold instructions even when power is cut to the non-volatile storage device 906. -
Volatile memory 904 may include physical devices that include random access memory. Volatile memory 904 is typically utilized by logic processor 902 to temporarily store information during processing of software instructions. It will be appreciated that volatile memory 904 typically does not continue to store instructions when power is cut to the volatile memory 904. - Aspects of
logic processor 902, volatile memory 904, and non-volatile storage device 906 may be integrated together into one or more hardware-logic components. Such hardware-logic components may include field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASIC/ASICs), program- and application-specific standard products (PSSP/ASSPs), system-on-a-chip (SOC), and complex programmable logic devices (CPLDs), for example. - The terms “module,” “program,” and “engine” may be used to describe an aspect of
computing system 900 typically implemented in software by a processor to perform a particular function using portions of volatile memory, which function involves transformative processing that specially configures the processor to perform the function. Thus, a module, program, or engine may be instantiated via logic processor 902 executing instructions held by non-volatile storage device 906, using portions of volatile memory 904. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc. - When included,
display subsystem 908 may be used to present a visual representation of data held by non-volatile storage device 906. The visual representation may take the form of a graphical user interface (GUI). As the herein described methods and processes change the data held by the non-volatile storage device, and thus transform the state of the non-volatile storage device, the state of display subsystem 908 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 908 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic processor 902, volatile memory 904, and/or non-volatile storage device 906 in a shared enclosure, or such display devices may be peripheral display devices. - When included,
input subsystem 910 may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity; and/or any other suitable sensor. - When included,
communication subsystem 912 may be configured to communicatively couple various computing devices described herein with each other, and with other devices. Communication subsystem 912 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network, such as an HDMI over Wi-Fi connection. In some embodiments, the communication subsystem may allow computing system 900 to send and/or receive messages to and/or from other devices via a network such as the Internet. - The following paragraphs provide additional description of aspects of the present disclosure. One aspect provides a computing system for enhancing a touch driver operation. The computing system may comprise a touch-sensitive display and one or more processors. The touch-sensitive display may be configured to detect a run-time touch input from a user. The one or more processors may be configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model. The touch driver of the touch-sensitive display may be configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event. The artificial intelligence model may be configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model may output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver.
- In this aspect, additionally or alternatively, in an initial training phase, the one or more processors may be configured to receive training phase touch input parameters from a plurality of users, and instruct a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
- In this aspect, additionally or alternatively, the artificial intelligence model may have been trained in the initial training phase with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users. Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters. In this aspect, additionally or alternatively, the one or more processors may be configured to create a template profile for each cohort of users. Each template profile may include calibration parameters associated with a respective cohort.
- In this aspect, additionally or alternatively, in a run-time phase, the one or more processors may be configured to perform cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated, and set the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
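The run-time phase described above — cluster analysis on the run-time parameters, then copying calibration parameters from the matched cohort's template profile — might look like the following sketch. The centroids and template values continue the toy example and are assumptions, not disclosed values.

```python
from math import dist

# Hypothetical template profiles learned per cohort during training.
TEMPLATES = {
    0: {"x_offset": 0.5, "y_offset": -0.3},   # e.g. small-digit cohort
    1: {"x_offset": 1.8, "y_offset": -1.1},   # e.g. large-digit cohort
}
CENTROIDS = [(8.25, 117.5), (13.75, 62.5)]    # from the initial training phase

def assign_template(runtime_params):
    """Nearest-centroid cluster analysis, then template profile lookup."""
    cohort = min(range(len(CENTROIDS)),
                 key=lambda c: dist(runtime_params, CENTROIDS[c]))
    return cohort, dict(TEMPLATES[cohort])

cohort, calibration = assign_template((13.0, 70.0))
print(cohort, calibration)  # → 1 {'x_offset': 1.8, 'y_offset': -1.1}
```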
- In this aspect, additionally or alternatively, in a feedback training phase, the artificial intelligence model may be configured to collect user feedback via an implicit or explicit user feedback interface, and perform feedback training of the artificial intelligence model based at least in part on the user feedback. In this aspect, additionally or alternatively, the artificial intelligence model may be configured to adjust internal weights to enhance one or more of the plurality of calibration parameters via a backpropagation algorithm.
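The feedback-training idea — adjusting weights by gradient descent so that calibrated output better matches what the user intended — can be shown with a single scalar parameter standing in for network weights. Backpropagation generalizes this chain-rule gradient step across the layers of a full network; the sample data here is fabricated for illustration.

```python
def feedback_step(offset, touches, lr=0.1):
    """One gradient step: nudge a calibration offset to reduce the mean
    squared error between corrected touches and the user's intended targets."""
    # loss = mean((raw + offset - target)^2); its derivative w.r.t. offset:
    grad = sum(2 * (raw + offset - target) for raw, target in touches) / len(touches)
    return offset - lr * grad

# Implicit feedback: the user's follow-up corrections reveal intended targets.
samples = [(100.0, 102.0), (50.0, 52.0)]   # (raw_x, intended_x) pairs
offset = 0.0
for _ in range(50):
    offset = feedback_step(offset, samples)
print(round(offset, 3))  # → 2.0, the systematic error in the toy data
```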
- In this aspect, additionally or alternatively, the touch input parameters may be recorded as raw input values for user touch input on the touch-sensitive display, and touch input features may be extracted from the raw input values.
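The extraction of touch input features from raw input values might proceed as below. The raw sample layout and the two features chosen are assumptions made for the sketch; a real driver could record many more channels.

```python
from math import hypot

def extract_features(samples):
    """Derive touch input features from raw samples of (t_ms, x, y, pressure)."""
    speeds = [hypot(x1 - x0, y1 - y0) / (t1 - t0)
              for (t0, x0, y0, _), (t1, x1, y1, _) in zip(samples, samples[1:])]
    return {
        "stroke_speed": sum(speeds) / len(speeds),            # mean px per ms
        "mean_pressure": sum(s[3] for s in samples) / len(samples),
    }

raw = [(0, 0.0, 0.0, 0.4), (10, 1.0, 0.0, 0.5), (20, 2.0, 0.0, 0.6)]
print(extract_features(raw))  # → {'stroke_speed': 0.1, 'mean_pressure': 0.5}
```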
- In this aspect, additionally or alternatively, the artificial intelligence model may include one or more neural networks. In this aspect, additionally or alternatively, a first neural network of the one or more neural networks may be configured to output an X coordinate offset and a Y coordinate offset of the touch input. In this aspect, additionally or alternatively, a second neural network of the one or more neural networks may be configured to output a tilt offset and an azimuth offset of the touch input.
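The two-network arrangement — one network for (X, Y) coordinate offsets, one for (tilt, azimuth) offsets — can be sketched with a tiny fully connected net. The weights here are random stand-ins for trained weights, and all sizes and names are assumptions.

```python
import random
from math import tanh

def make_mlp(n_in, n_hidden, n_out, seed):
    """Tiny fully connected net (one hidden tanh layer) with random weights;
    a stand-in for a trained offset-prediction network."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hidden)] for _ in range(n_out)]
    def forward(x):
        h = [tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
        return [sum(w * hi for w, hi in zip(row, h)) for row in w2]
    return forward

params = [0.7, 0.2, 0.9, 0.1]          # normalized run-time touch parameters
xy_net = make_mlp(4, 8, 2, seed=1)     # first net → (x_offset, y_offset)
ta_net = make_mlp(4, 8, 2, seed=2)     # second net → (tilt_offset, azimuth_offset)
x_off, y_off = xy_net(params)
tilt_off, azim_off = ta_net(params)
```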
- In this aspect, additionally or alternatively, the touch input may be performed with a stylus or one or more digits of the user. In this aspect, additionally or alternatively, the characteristic of the touch input may be associated with at least one of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth.
- Another aspect provides a computing method for enhancing a touch driver operation. The method may comprise, at one or more processors of a computing system including a touch-sensitive display configured to detect a run-time touch input from a user, processing the run-time touch input based at least in part on a plurality of calibration parameters. The method may further include outputting a touch input event and a plurality of run-time touch input parameters associated with the touch input event. The method may further include receiving, as input for an artificial intelligence model, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the method may further include outputting, by the artificial intelligence model, a personalized user touch driver profile including a plurality of updated calibration parameters for a touch driver of the touch-sensitive display.
- In this aspect, additionally or alternatively, the method may further comprise, in an initial training phase, receiving training phase touch input parameters from a plurality of users, and instructing a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
- In this aspect, additionally or alternatively, the method may further comprise, in the initial training phase, training the artificial intelligence model with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users. Each training data pair may include training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
- In this aspect, additionally or alternatively, the method may further comprise creating a template profile for each cohort of users. Each template profile may include calibration parameters associated with a respective cohort.
- In this aspect, additionally or alternatively, the method may further comprise, in a run-time phase, performing cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated, and setting the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
- In this aspect, additionally or alternatively, the method may further comprise, in a feedback training phase, collecting user feedback via an implicit or explicit user feedback interface, and performing feedback training of the artificial intelligence model based at least in part on the user feedback.
- Another aspect provides a computing system for enhancing a touch driver operation. The computing system may comprise a touch-sensitive display and one or more processors. The touch-sensitive display may be configured to detect a run-time touch input from a user. The one or more processors may be configured to execute instructions using portions of associated memory to implement a touch driver of the touch-sensitive display and an artificial intelligence model. The touch driver of the touch-sensitive display may be configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event. The artificial intelligence model may be configured to receive, as input, the run-time touch input parameters. Responsive to receiving the run-time touch input parameters, the artificial intelligence model may output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver. The artificial intelligence model may include a first neural network configured to output an X coordinate offset and a Y coordinate offset of the touch input. The artificial intelligence model may further include a second neural network configured to output a tilt offset and an azimuth offset of the touch input. In a feedback training phase, the artificial intelligence model may be configured to collect user feedback via an implicit or explicit user feedback interface, and perform feedback training of the artificial intelligence model based at least in part on the user feedback.
- It will be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Likewise, the order of the above-described processes may be changed.
- The subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (20)
1. A computing system for enhancing a touch driver operation, comprising:
a touch-sensitive display configured to detect a run-time touch input from a user; and
one or more processors configured to execute instructions using portions of associated memory to implement:
a touch driver of the touch-sensitive display configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event, and
an artificial intelligence model configured to:
receive, as input, the run-time touch input parameters; and
responsive to receiving the run-time touch input parameters, output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver, wherein
in a feedback training phase, the artificial intelligence model is configured to:
collect user feedback via an implicit or explicit user feedback interface, and
perform feedback training of the artificial intelligence model based at least in part on the user feedback.
2. The computing system of claim 1, wherein
in an initial training phase, the one or more processors are configured to:
receive training phase touch input parameters from a plurality of users; and
instruct a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
3. The computing system of claim 2, wherein
the artificial intelligence model has been trained in the initial training phase with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users, each training data pair including training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
4. The computing system of claim 3, wherein
the one or more processors are configured to create a template profile for each cohort of users, each template profile including calibration parameters associated with a respective cohort.
5. The computing system of claim 4, wherein
in a run-time phase, the one or more processors are configured to:
perform cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated; and
set the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
6. (canceled)
7. The computing system of claim 1, wherein
the artificial intelligence model is configured to adjust internal weights to enhance one or more of the plurality of calibration parameters via a backpropagation algorithm.
8. The computing system of claim 1, wherein
the touch input parameters are recorded as raw input values for user touch input on the touch-sensitive display; and
touch input features are extracted from the raw input values.
9. The computing system of claim 1, wherein
the artificial intelligence model includes one or more neural networks.
10. The computing system of claim 9, wherein
a first neural network of the one or more neural networks is configured to output an X coordinate offset and a Y coordinate offset of the touch input.
11. The computing system of claim 10, wherein
a second neural network of the one or more neural networks is configured to output a tilt offset and an azimuth offset of the touch input.
12. The computing system of claim 1, wherein
the touch input is performed with a stylus or one or more digits of the user.
13. The computing system of claim 3, wherein
the characteristic of the touch input is associated with at least one of digit size, handedness, inking smoothness, stroke speed, stutter, latency, accuracy, repeatability, pressure, tilt, and azimuth.
14. A computing method for enhancing a touch driver operation, the method comprising:
at one or more processors of a computing system including a touch-sensitive display configured to detect a run-time touch input from a user:
processing the run-time touch input based at least in part on a plurality of calibration parameters,
outputting a touch input event and a plurality of run-time touch input parameters associated with the touch input event,
receiving, as input for an artificial intelligence model, the run-time touch input parameters,
responsive to receiving the run-time touch input parameters, outputting, by the artificial intelligence model, a personalized user touch driver profile including a plurality of updated calibration parameters for a touch driver of the touch-sensitive display, and
in a feedback training phase, collecting user feedback via an implicit or explicit user feedback interface, and performing feedback training of the artificial intelligence model based at least in part on the user feedback.
15. The computing method of claim 14, the method further comprising:
in an initial training phase, receiving training phase touch input parameters from a plurality of users, and
instructing a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users.
16. The computing method of claim 15, the method further comprising:
in the initial training phase, training the artificial intelligence model with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users, each training data pair including training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
17. The computing method of claim 16, the method further comprising:
creating a template profile for each cohort of users, each template profile including calibration parameters associated with a respective cohort.
18. The computing method of claim 16, the method further comprising:
in a run-time phase, performing cluster analysis on the run-time touch input parameters to determine the cohort of users with which the user is associated, and
setting the calibration parameters for the personalized user touch driver profile according to the calibration parameters included in the template profile associated with the determined cohort of users.
19. (canceled)
20. A computing system for enhancing a touch driver operation, comprising:
a touch-sensitive display configured to detect a run-time touch input from a user; and
one or more processors configured to execute instructions using portions of associated memory to implement:
a touch driver of the touch-sensitive display configured to process the run-time touch input based at least in part on a plurality of calibration parameters, and output a touch input event and a plurality of run-time touch input parameters associated with the touch input event, and
an artificial intelligence model configured to:
receive, as input, the run-time touch input parameters; and
responsive to receiving the run-time touch input parameters, output a personalized user touch driver profile including a plurality of updated calibration parameters for the touch driver, wherein
in an initial training phase, the one or more processors are configured to:
receive training phase touch input parameters from a plurality of users; and
instruct a clustering algorithm to interpret the training phase touch input parameters and identify a plurality of user clusters, each user cluster defining a cohort of users; and
wherein the artificial intelligence model has been trained in the initial training phase with a training data set including a plurality of training data pairs from a cohort of training data examples derived from a cohort of users, each training data pair including training phase touch input parameters indicating a characteristic of the touch input and ground truth output indicating calibration parameters for the paired touch input parameters.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/323,757 US11526235B1 (en) | 2021-05-18 | 2021-05-18 | Artificial intelligence model for enhancing a touch driver operation |
PCT/US2022/026248 WO2022245485A1 (en) | 2021-05-18 | 2022-04-26 | Artificial intelligence model for enhancing a touch driver operation |
CN202280036457.0A CN117355814A (en) | 2021-05-18 | 2022-04-26 | Artificial intelligence model for enhanced touch driver operation |
EP22722963.0A EP4341790A1 (en) | 2021-05-18 | 2022-04-26 | Artificial intelligence model for enhancing a touch driver operation |
US18/064,114 US11966540B2 (en) | 2021-05-18 | 2022-12-09 | Artificial intelligence model for enhancing a touch driver operation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/323,757 US11526235B1 (en) | 2021-05-18 | 2021-05-18 | Artificial intelligence model for enhancing a touch driver operation |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/064,114 Continuation US11966540B2 (en) | 2021-05-18 | 2022-12-09 | Artificial intelligence model for enhancing a touch driver operation |
Publications (2)
Publication Number | Publication Date |
---|---|
US20220374099A1 true US20220374099A1 (en) | 2022-11-24 |
US11526235B1 US11526235B1 (en) | 2022-12-13 |
Family
ID=81603689
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/323,757 Active US11526235B1 (en) | 2021-05-18 | 2021-05-18 | Artificial intelligence model for enhancing a touch driver operation |
US18/064,114 Active US11966540B2 (en) | 2021-05-18 | 2022-12-09 | Artificial intelligence model for enhancing a touch driver operation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/064,114 Active US11966540B2 (en) | 2021-05-18 | 2022-12-09 | Artificial intelligence model for enhancing a touch driver operation |
Country Status (4)
Country | Link |
---|---|
US (2) | US11526235B1 (en) |
EP (1) | EP4341790A1 (en) |
CN (1) | CN117355814A (en) |
WO (1) | WO2022245485A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117827035A (en) * | 2024-03-05 | 2024-04-05 | 江苏锦花电子股份有限公司 | Touch equipment monitoring system and method based on artificial intelligence |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11526235B1 (en) * | 2021-05-18 | 2022-12-13 | Microsoft Technology Licensing, Llc | Artificial intelligence model for enhancing a touch driver operation |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5239489A (en) | 1991-05-06 | 1993-08-24 | International Business Machines Corporation | Pen position and tilt estimators for a digitizer tablet |
US7956847B2 (en) * | 2007-01-05 | 2011-06-07 | Apple Inc. | Gestures for controlling, manipulating, and editing of media files using touch sensitive devices |
JP2010182107A (en) | 2009-02-05 | 2010-08-19 | Canon Inc | Image forming apparatus and image forming method |
US20100302212A1 (en) | 2009-06-02 | 2010-12-02 | Microsoft Corporation | Touch personalization for a display device |
US8436821B1 (en) * | 2009-11-20 | 2013-05-07 | Adobe Systems Incorporated | System and method for developing and classifying touch gestures |
US20110314427A1 (en) * | 2010-06-18 | 2011-12-22 | Samsung Electronics Co., Ltd. | Personalization using custom gestures |
KR101162223B1 (en) * | 2011-06-14 | 2012-07-05 | 엘지전자 주식회사 | Mobile terminal and method for controlling thereof |
US20140143404A1 (en) * | 2012-11-19 | 2014-05-22 | Sony Corporation | System and method for communicating with multiple devices |
US9075464B2 (en) * | 2013-01-30 | 2015-07-07 | Blackberry Limited | Stylus based object modification on a touch-sensitive display |
US9286482B1 (en) * | 2013-06-10 | 2016-03-15 | Amazon Technologies, Inc. | Privacy control based on user recognition |
US20150066762A1 (en) * | 2013-08-28 | 2015-03-05 | Geoffrey W. Chatterton | Authentication system |
US10549180B2 (en) * | 2013-09-30 | 2020-02-04 | Zynga Inc. | Swipe-direction gesture control for video games using glass input devices |
US10440019B2 (en) * | 2014-05-09 | 2019-10-08 | Behaviometrics Ag | Method, computer program, and system for identifying multiple users based on their behavior |
WO2015181159A1 (en) * | 2014-05-28 | 2015-12-03 | Thomson Licensing | Methods and systems for touch input |
US10445783B2 (en) * | 2014-11-19 | 2019-10-15 | Adobe Inc. | Target audience content interaction quantification |
CN106909238B (en) | 2017-02-04 | 2020-04-24 | 广州华欣电子科技有限公司 | Intelligent pen and intelligent pen writing method |
US10386974B2 (en) * | 2017-02-07 | 2019-08-20 | Microsoft Technology Licensing, Llc | Detecting input based on a sensed capacitive input profile |
US20200027554A1 (en) * | 2018-07-18 | 2020-01-23 | International Business Machines Corporation | Simulating Patients for Developing Artificial Intelligence Based Medical Solutions |
CN111460453B (en) * | 2019-01-22 | 2023-12-12 | 百度在线网络技术(北京)有限公司 | Machine learning training method, controller, device, server, terminal and medium |
JP6923573B2 (en) | 2019-01-30 | 2021-08-18 | ファナック株式会社 | Control parameter adjuster |
US11159322B2 (en) * | 2019-01-31 | 2021-10-26 | Baidu Usa Llc | Secure multiparty computing framework using a restricted operating environment with a guest agent |
US11650717B2 (en) * | 2019-07-10 | 2023-05-16 | International Business Machines Corporation | Using artificial intelligence to iteratively design a user interface through progressive feedback |
US11526235B1 (en) * | 2021-05-18 | 2022-12-13 | Microsoft Technology Licensing, Llc | Artificial intelligence model for enhancing a touch driver operation |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120007821A1 (en) * | 2010-07-11 | 2012-01-12 | Lester F. Ludwig | Sequential classification recognition of gesture primitives and window-based parameter smoothing for high dimensional touchpad (hdtp) user interfaces |
US20130120298A1 (en) * | 2010-09-30 | 2013-05-16 | Huawei Device Co., Ltd. | User touch operation mode adaptive method and device |
US20120162111A1 (en) * | 2010-12-24 | 2012-06-28 | Samsung Electronics Co., Ltd. | Method and apparatus for providing touch interface |
US20140247251A1 (en) * | 2012-12-10 | 2014-09-04 | Qing Zhang | Techniques and Apparatus for Managing Touch Interface |
US20150286338A1 (en) * | 2012-12-10 | 2015-10-08 | Intel Corporation | Techniques and Apparatus for Managing Touch Interface |
US20190069154A1 (en) * | 2013-09-19 | 2019-02-28 | Unaliwear, Inc. | Assist device and system |
US20180181245A1 (en) * | 2016-09-23 | 2018-06-28 | Microsoft Technology Licensing, Llc | Capacitive touch mapping |
US20180150280A1 (en) * | 2016-11-28 | 2018-05-31 | Samsung Electronics Co., Ltd. | Electronic device for processing multi-modal input, method for processing multi-modal input and sever for processing multi-modal input |
US20180253209A1 (en) * | 2017-03-03 | 2018-09-06 | Samsung Electronics Co., Ltd. | Electronic device for processing user input and method for processing user input |
US20190370617A1 (en) * | 2018-06-04 | 2019-12-05 | Adobe Inc. | Sketch Completion Using Machine Learning |
US10650290B2 (en) * | 2018-06-04 | 2020-05-12 | Adobe Inc. | Sketch completion using machine learning |
US20210217406A1 (en) * | 2018-06-08 | 2021-07-15 | Samsung Electronics Co., Ltd. | Voice recognition service operating method and electronic device supporting same |
US20200257847A1 (en) * | 2019-02-07 | 2020-08-13 | NetCentric Technologies Inc. dba CommonLook | System and method for using artificial intelligence to deduce the structure of pdf documents |
US20220076385A1 (en) * | 2020-03-04 | 2022-03-10 | Samsung Electronics Co., Ltd. | Methods and systems for denoising media using contextual information of the media |
Also Published As
Publication number | Publication date |
---|---|
US20230103621A1 (en) | 2023-04-06 |
EP4341790A1 (en) | 2024-03-27 |
US11966540B2 (en) | 2024-04-23 |
US11526235B1 (en) | 2022-12-13 |
CN117355814A (en) | 2024-01-05 |
WO2022245485A1 (en) | 2022-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11966540B2 (en) | Artificial intelligence model for enhancing a touch driver operation | |
US10275022B2 (en) | Audio-visual interaction with user devices | |
US11682380B2 (en) | Systems and methods for crowdsourced actions and commands | |
US20230100423A1 (en) | Crowdsourced on-boarding of digital assistant operations | |
US11720633B2 (en) | Aggregating personalized suggestions from multiple sources | |
US9165566B2 (en) | Indefinite speech inputs | |
EP3005030B1 (en) | Calibrating eye tracking system by touch input | |
CN109844729B (en) | Merging with predictive granularity modification by example | |
EP3899696B1 (en) | Voice command execution from auxiliary input | |
EP3529715A1 (en) | Join with format modification by example | |
US10025427B2 (en) | Probabilistic touch sensing | |
US10719193B2 (en) | Augmenting search with three-dimensional representations | |
US10733779B2 (en) | Augmented and virtual reality bot infrastructure | |
US11748071B2 (en) | Developer and runtime environments supporting multi-input modalities | |
WO2023019948A1 (en) | Retrieval method, management method, and apparatuses for multimodal information base, device, and medium | |
WO2015153240A1 (en) | Directed recommendations | |
WO2024027125A1 (en) | Object recommendation method and apparatus, electronic device, and storage medium | |
US20220391028A1 (en) | User input interpretation via driver parameters | |
US20180052562A1 (en) | Touch detection using feature-vector dictionary | |
US11334192B1 (en) | Dual touchscreen device calibration | |
US11989369B1 (en) | Neural network-based touch input classification | |
US11551112B2 (en) | Information processing apparatus and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIVNY, YOTAM;DAVID, NIR;LIVNE, YAEL;REEL/FRAME:056278/0914 Effective date: 20210518 |
|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |