US11944436B2 - Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data - Google Patents

Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data

Info

Publication number
US11944436B2
Authority
US
United States
Prior art keywords
touch
motion
user
smartphone
data points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/227,308
Other versions
US20220322985A1 (en)
Inventor
Joao Guerreiro
Bartlomiej M. Skorulski
Aleksandar MATIC
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koa Health Digital Solutions S.L.U.
Original Assignee
Koa Health Digital Solutions S L U
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koa Health Digital Solutions S L U filed Critical Koa Health Digital Solutions S L U
Priority to US17/227,308
Assigned to KOA HEALTH B.V. (Assignors: GUERREIRO, JOAO; SKORULSKI, BARTLOMIEJ M.; MATIC, ALEKSANDAR)
Priority to EP21179548.9A
Publication of US20220322985A1
Assigned to KOA HEALTH DIGITAL SOLUTIONS S.L.U. (Assignor: KOA HEALTH B.V.)
Application granted
Publication of US11944436B2
Status: Active, adjusted expiration

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/165Evaluating the state of mind, e.g. depression, anxiety
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113Local tracking of patients, e.g. in a hospital or private home
    • A61B5/1114Tracking parts of the body
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1121Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1122Determining geometric values, e.g. centre of rotation or angular range of movement of movement trajectories
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices
    • A61B5/6898Portable consumer electronic devices, e.g. music players, telephones, tablet computers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7278Artificial waveform generation or derivation, e.g. synthesising signals from measured signals
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/742Details of notification to user or communication with user or patient ; user input means using visual displays
    • A61B5/7435Displaying user selection data, e.g. icons in a graphical user interface
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7475User input or interface means, e.g. keyboard, pointing device, joystick
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0414Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • G06F3/04146Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position using pressure sensitive conductive elements delivering a boolean signal and located between crossing sensing lines, e.g. located between X and Y sensing line layers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/06Measuring instruments not otherwise provided for
    • A61B2090/064Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension
    • A61B2090/065Measuring instruments not otherwise provided for for measuring force, pressure or mechanical tension for measuring contact or contact pressure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04105Pressure sensors for measuring the pressure or force exerted on the touch surface without providing the touch position

Definitions

  • the present invention relates to a method for determining the stress level of a person based only on how the person uses a smartphone by weighting contributions of both the person's touch interactions with the smartphone and movements of the smartphone as indicated by motion sensor data.
  • a method for determining a user's stress level is performed by a mobile app running on a smartphone.
  • the method includes generating touch feature values and motion feature values, weighting the feature values by regression parameters, and generating a stress score for the user based on the weighted touch feature values and weighted motion feature values.
  • the touch feature values indicate how the user's finger moves over the smartphone screen and are generated from touch data points including X positions, Y positions and associated touch timestamp values.
  • the motion feature values indicate the movement of the smartphone and are generated from motion data points including X movements, Y movements, Z movements and associated motion timestamp values.
  • the regression parameters are generated using touch and motion data identified by other users as being acquired while those other users were experiencing various perceived levels of stress.
  • the app indicates to the user whether the stress score is higher or lower than a previously generated stress score.
  • a method for determining a stress score of the user of a smartphone is based on touch and motion data sensed by the smartphone and does not require inputs from wearable devices or other sensors.
  • a computing system on the smartphone receives touch data points of the smartphone user that each includes an X position value, a Y position value and an associated touch timestamp value.
  • the touch data points are segmented into discrete movements of the user's finger on the screen of the smartphone.
  • the computing system receives motion data points of the smartphone that each includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value.
  • the motion data points are segmented into discrete time intervals including a first time interval.
  • the first time interval is associated with those discrete movements of the user's finger that occur entirely within the first time interval.
  • Values of touch features are calculated for each discrete movement within the first time interval.
  • Values of motion features associated with the first time interval are calculated.
  • the touch features and the motion features are normalized based on prior touch data points and motion data points previously acquired while the user was using the smartphone.
  • a stress level value is calculated for each discrete movement within the first time interval by applying logistic regression parameters to the normalized touch features and the normalized motion features.
  • the logistic regression parameters are generated using a logistic regression model trained on touch data points and motion data points acquired from other users during time intervals identified by those other users as being associated with various levels of stress.
  • a stress score of the user during the first time interval is determined based on the stress level values of the discrete movements within the first time interval.
  • An indication is displayed of how the stress score of the user has changed since the stress score of the user was last determined. The user is prompted to engage in a stress mitigation activity based on the determined stress score.
  • FIG. 1 is a schematic diagram of a computing system that runs a smartphone app.
  • FIG. 2 is a schematic diagram of the components of the smartphone app that determines the stress level of the smartphone app user.
  • FIG. 3 illustrates motion data points segmented into evenly sized time intervals and touch data points segmented into discrete touch events.
  • FIG. 4 is a flowchart of steps of a method by which the smartphone app uses touch and motion data to determine the current stress level of the user of the smartphone app.
  • FIGS. 5 A and 5 B are a table listing the components of forty-six exemplary touch data points.
  • FIGS. 6 A, 6 B, 6 C, 6 D, 6 E, 6 F, 6 G, 6 H and 6 I are a table listing the components of 585 exemplary motion data points.
  • FIG. 7 is a table listing the values of thirty-two touch features for each of five touch samples as well as the mean and standard deviation of the touch features.
  • FIG. 8 shows the formula by which the value of the touch feature “distance_covered” is calculated.
  • FIG. 9 shows the formula by which the value of the touch feature “distance_spanned” is calculated.
  • FIG. 10 is a table listing the values of twenty-nine motion features for a motion sample as well as the mean and standard deviation of the motion features.
  • FIGS. 11 A and 11 B are a table listing the normalized touch and motion features associated with three touch samples as well as the regression parameter values associated with the touch and motion features.
  • FIG. 12 shows the formula by which the stress level value associated with a touch sample is calculated.
  • FIG. 1 is a simplified schematic diagram of a computing system 10 on a smartphone 11 , which is a mobile telecommunications device.
  • System 10 can be used to implement a method for determining the stress level of a smartphone user based only on that person's touch interactions with the smartphone and the movements of the smartphone as measured by an accelerometer in the phone. Portions of the computing system 10 are implemented as software executing as a mobile App on the smartphone 11 .
  • Components of the computing system 10 include, but are not limited to, a processing unit 12 , a system memory 13 , and a system bus 14 that couples the various system components including the system memory 13 to the processing unit 12 .
  • Computing system 10 also includes computing machine-readable media used for storing computer readable instructions, data structures, other executable software and other data.
  • the system memory 13 includes computer storage media such as read only memory (ROM) 15 and random access memory (RAM) 16 .
  • a basic input/output system 17 (BIOS) containing the basic routines that transfer information between elements of computing system 10 , is stored in ROM 15 .
  • RAM 16 contains software that is immediately accessible to processing unit 12 .
  • RAM includes portions of the operating system 18 , other executable software 19 , and program data 20 .
  • Application programs 21 including smartphone “apps”, are also stored in RAM 16 .
  • Computing system 10 employs standardized interfaces through which different system components communicate. In particular, communication between apps and other software is effected through application programming interfaces (APIs), which define the conventions and protocols for initiating and servicing function calls.
  • a user of computing system 10 enters commands and information through input devices such as a touchscreen 22 , input buttons 23 and a microphone 24 .
  • a display screen 26 which is physically combined with touchscreen 22 , is connected via a video interface 27 to the system bus 14 .
  • Touchscreen 22 includes a contact intensity sensor, such as a piezoelectric force sensor, a capacitive force sensor, an electric force sensor or an optical force sensor.
  • These input devices are connected to the processing unit 12 through a user input interface 25 that is coupled to the system bus 14 .
  • User input interface 25 detects the contact of a finger of the user with touchscreen 22 .
  • Interface 25 includes various software components for performing operations related to detecting contact with touchscreen 22, such as determining if contact (a finger-down event) has occurred, determining the pressure of the contact, determining the size of the contact, determining if there is movement of the contact and tracking the movement across touchscreen 22, and determining if the contact has ceased (a finger-up event).
  • User input interface 25 generates a series of touch data points, each of which includes an X position, a Y position, a touch pressure, a touch size (by minimum and maximum radius), and an associated touch timestamp value.
  • the touch timestamp value indicates the time at which the X position, Y position, touch pressure and touch size were measured.
  • Computing system 10 also includes an accelerometer 28 , whose output is connected to the system bus 14 .
  • Accelerometer 28 outputs motion data points indicative of the movement of smartphone 11 .
  • Each motion data point includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value.
  • the motion timestamp value indicates the time at which the X movement value, Y movement value and Z movement value were measured.
  • accelerometer 28 actually includes three accelerometers, one that measures X movement, one that measures Y movement, and one that measures Z movement.
  • the wireless communication modules of smartphone 11 have been omitted from this description for brevity.
  • FIG. 2 is a schematic diagram of the components of one of the application programs 21 running on smartphone 11 .
  • This mobile application (app) 30 is part of computing system 10 .
  • App 30 includes a data collection module 31 , a sample extraction and selection module 32 , a predictive modeling module 33 and a knowledge base module 34 .
  • mobile app 30 is one of the application programs 21 .
  • at least some of the functionality of app 30 is implemented as part of the operating system 18 itself. For example, the functionality can be integrated into the iOS mobile operating system or the Android mobile operating system.
  • Data collection module 31 collects logs of user interactions with smartphone 11 , both as touch interaction logs from user input interface 25 and as motion data from accelerometer 28 .
  • Software development kits (SDKs) specific to the operating system of smartphone 11 are used to enable data collection module 31 to collect the logs of the user's interactions with smartphone 11 and the motion sensor output.
  • data collection module 31 collects data using layers of the operating system 18 .
  • if app 30 is implemented entirely as a mobile app, then only touch interaction data based on user interactions within the app itself can be collected.
  • user interactions are collected independently of which app or screen of the smartphone is being used. In that case, for example, touch data generated on social networking, texting or chat apps can be collected.
  • FIG. 2 shows that data collection module 31 collects touch data points 35 from user input interface 25 and motion data points 36 from accelerometer 28 .
  • data collection module 31 also collects reports in which users indicate their physiological and cognitive states.
  • Sample extraction and selection module 32 segments the continuous stream of touch data points 35 and motion data points 36 into discrete data samples and assesses whether the characteristics of the data of each sample warrant using the data in determining the user's stress. Module 32 parses the individual streams of interaction logs and motion data into a discrete series of samples. Module 32 separately extracts the samples from interaction logs and motion data.
  • the motion data points 36 of accelerometer data are segmented into evenly sized time intervals with a predefined time length starting from the first collected data point within a usage session, as illustrated in FIG. 3 .
  • the last (and shorter) interval of data in a usage session is discarded.
  • module 32 segments the continuous stream of motion data points 36 into discrete samples each having a length of twenty seconds.
  • the last group of motion data points 36 in the usage session which is nearly always shorter than twenty seconds, is discarded.
  • the daily usage of a smartphone user is one usage session. Thus, in a two-week period, there would be fourteen usage sessions.
  • the touch data points 35 of interaction logs are segmented into discrete touch events separated by interruptions.
  • the interruptions in the touch events are already designated in the output of the software development kits (SDKs) and are included as part of the touch interaction logs received by data collection module 31 .
  • Touch events whose time lengths do not fall within a predefined range, e.g., 0.1 sec to 5 sec, are discarded.
  • A touch event that does not fall entirely within a concurrent sample of motion data points 36, such as a 20-second period, is also discarded.
  • FIG. 3 illustrates touch data points 37 that are discarded either because the associated touch event does not fall entirely within a 20-second sample of motion data points 36 or because the touch event is too short.
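  • As a rough sketch of this segmentation (Python; the function names and tuple layout are illustrative assumptions, not taken from the patent), the motion stream can be cut into fixed-length intervals and short or out-of-range touch events dropped:

```python
def segment_motion(motion_points, interval_s=20.0):
    """motion_points: list of (timestamp_ms, x, y, z) tuples, sorted by time.
    Returns fixed-length intervals; the trailing, shorter interval is dropped."""
    if not motion_points:
        return []
    start = motion_points[0][0]
    intervals = {}
    for p in motion_points:
        idx = int((p[0] - start) / (interval_s * 1000.0))
        intervals.setdefault(idx, []).append(p)
    last = max(intervals)
    span_ms = intervals[last][-1][0] - intervals[last][0][0]
    if span_ms < interval_s * 1000.0:
        del intervals[last]  # discard the last (shorter) interval of the session
    return [pts for _, pts in sorted(intervals.items())]


def keep_touch_event(event, min_s=0.1, max_s=5.0):
    """event: list of (timestamp_ms, ...) touch data points of one touch event.
    Keep only events whose duration falls within the predefined range."""
    duration_s = (event[-1][0] - event[0][0]) / 1000.0
    return min_s <= duration_s <= max_s
```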
  • Predictive modeling module 33 receives the segmented samples of motion data, including motion data points 36 , and interaction logs, including touch data points 35 , from sample extraction and selection module 32 .
  • Module 33 includes three components: a feature computation component 38 , a feature normalization component 39 , and a machine learning model 40 .
  • Feature computation component 38 computes motion features and touch features from the samples of motion and touch data that it receives in a raw format from module 32 .
  • motion features include the energy of the movement of the smartphone during a motion sample and the mean x movement per time increment during a motion sample.
  • the accelerometer determines the movement in the x dimension after each time increment of ten milliseconds.
  • touch features include the speed, curvature and distance covered of a touch event, such as a swipe.
  • Feature normalization component 39 normalizes the motion features and touch features to each particular user.
  • the magnitudes of the features are influenced in large part by usage patterns of individual users, which in turn depend on the type of device used. For example, the size and type of touchscreen 22 significantly influences the magnitudes of the features. Therefore, each of the many motion features and touch features is normalized based on the mean magnitude of that feature and the standard deviation of that feature for the particular user over a period of past use, such as a 2-week usage period. The user is assumed not to have changed the type of smartphone used during the normalization usage period.
  • Machine learning model 40 determines the user's physiological and cognitive states, such as the user's stress level, based on the normalized motion features, the normalized touch features, and a weight or parameter for each of the features calculated using machine learning on a knowledge base of features for which prior users have indicated their perceived stress levels during each usage session.
  • the knowledge base is stored in knowledge base module 34 .
  • the knowledge base was acquired from the usage patterns of 110 users each over fourteen daily usage sessions. For each usage session, sixty-one motion and touch features were calculated. The users indicated their perceived stress level on a scale between 0-10 for each usage session. Thus, there were 93,940 entries, each associated with a stress level between zero and ten.
  • the entries were divided into two equal groups, half having higher associated stress levels and the other half having lower associated stress levels. Each of the 46,970 entries associated with lower stress were deemed to represent a no-stress state, and each of the other 46,970 entries associated with higher stress were deemed to represent a stress state.
  • the entries in knowledge base module 34 are fixed and immutable.
  • the knowledge base is periodically augmented with additional entries from users who rate their stress levels during each usage session. Thus, the augmented knowledge base grows over time.
  • Machine learning model 40 then performs a logistic regression on the 93,940 entries from knowledge base module 34 and calculates logistic regression parameters for each of the sixty-one motion and touch features.
  • the feature parameter with the highest correlation to the stress state is the standard deviation of the overall motion.
  • the magnitude of the parameter std_overall is 0.701.
  • the feature parameter with the highest correlation to the no-stress state is the standard deviation of the energy of the overall motion.
  • the magnitude of the parameter std_energy is −1.011.
  • a positive parameter correlates to more stress
  • a negative parameter correlates to less stress.
  • the magnitudes or weights of the parameters do not necessarily fall within any range, such as an arbitrary range between −1.0 and 1.0.
  • Machine learning model 40 then applies the parameters to each of the sixty-one motion and touch features associated with each touch sample (discrete touch event), and a stress level is determined for the touch sample.
  • the stress level is determined for a touch sample based not only on the touch features associated with the touch sample, but also on the motion features of the motion sample to which the touch sample corresponds.
  • a touch sample corresponds to a motion sample if the touch data points 35 of the touch sample fall entirely within the period of the motion sample.
  • there are three touch samples 41 - 43 whose touch data points 35 fall entirely within the motion sample 44 .
  • machine learning model 40 determines a stress level for each of the three touch samples 41 - 43 . Each determination is based on the thirty-two touch features of the corresponding touch sample as well as on the twenty-nine motion features of motion sample 44 .
  • the stress levels determined for each of the three touch samples are averaged to obtain a stress level for the motion sample associated with a time interval of motion data.
  • machine learning model 40 outputs a stress level associated with each 20-second motion sample.
  • App 30 displays an indication on display screen 26 of how the stress level of the user has changed since the stress level was last determined by displaying a numerical representation of the determined stress level.
  • App 30 displays an indication of how the stress level of the user has changed merely by indicating to the user whether the stress level is higher or lower than a previously determined stress level.
  • the size of a green arrow or dot could correspond to a decrease in the user's stress level, and the size of a red arrow or dot could correspond to an increase in the user's stress level.
  • Another alternative would be to indicate the user's measured stress level using non-verbal audio feedback, such as a higher or lower pitched tone.
  • FIG. 4 is a flowchart of steps 46 - 56 of a method 45 by which App 30 uses data sensed by smartphone 11 to determine the current stress level of the user of App 30 .
  • the steps of FIG. 4 are described in relation to App 30 and computing system 10 which implement method 45 .
  • In step 46, data collection module 31 receives touch interaction data from smartphone 11 indicative of a user's interactions with touchscreen 22 of smartphone 11.
  • the touch data is received as touch interaction logs based on data sensed by user input interface 25 .
  • the touch data comprises touch data points 35 , each of which includes an X position value, a Y position value and an associated touch timestamp value.
  • FIGS. 5 A- 5 B are a table listing forty-six exemplary touch data points 35 .
  • each point also includes values indicating the change in the X position (delta x), the change in the Y position (delta y), the pressure of the user's finger on touchscreen 22 , the overall size of the touch impression on the screen, the minimum radius of the touch impression, the maximum radius of the touch impression and the orientation of the touch impression.
  • the size value has not been sensed, and is listed as zero.
  • the timestamp values are listed in units of milliseconds.
  • sample extraction and selection module 32 segments the continuous stream of touch data points 35 into discrete data samples representing distinct movements of the user's finger on touchscreen 22 . If the values for both delta x and delta y are zero for a particular timestamp value, then the touch event of the user's finger has stopped, and a new touch event is deemed to have begun. Each touch event is analyzed as a separate sample. In FIGS. 5 A- 5 B , for example, the delta x and delta y values are both zero at touch data points 14 , 22 , 30 and 38 . Thus, a first touch event sample #1 includes touch data points 1 - 14 , sample #2 includes points 15 - 22 , sample #3 includes points 23 - 30 , sample #4 includes points 31 - 38 and sample #5 includes points 39 - 46 .
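  • A minimal sketch of this splitting rule (assuming each touch data point is a dict with delta_x and delta_y keys; the function name is illustrative):

```python
def split_touch_events(points):
    """points: touch data points as dicts with at least 'delta_x' and 'delta_y'.
    A point whose delta_x and delta_y are both zero ends the current touch
    event (cf. points 14, 22, 30 and 38 in FIGS. 5A-5B)."""
    events, current = [], []
    for p in points:
        current.append(p)
        if p["delta_x"] == 0 and p["delta_y"] == 0:
            events.append(current)
            current = []
    if current:          # any trailing points form the final sample
        events.append(current)
    return events
```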
  • In step 48, data collection module 31 receives motion sensor data from smartphone 11 indicative of the movement of the smartphone while the user's finger is moving over touchscreen 22 of smartphone 11. At least a portion of the sensed movement of smartphone 11 associated with the motion data has occurred concurrently with the user's interactions with touchscreen 22.
  • the motion sensor data comprises motion data points 36 , each of which includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value.
  • the motion timestamp value indicates the time at which the X movement value, Y movement value and Z movement value were measured by accelerometer 28 .
  • FIGS. 6 A- 6 I are a table listing 585 exemplary motion data points 36 .
  • the timestamp values are listed in units of milliseconds.
  • the table also lists the total time elapsed in seconds since the first motion data point was measured.
  • FIGS. 6 A- 6 I include 5.855 seconds of motion data.
  • the example usage session was segmented into 20-second samples of motion data points 36 .
  • the last motion sample that was shorter than twenty seconds in length was discarded.
  • for purposes of this example, however, the 585 motion data points 36 of FIGS. 6A-6I are segmented into motion samples of no more than five seconds.
  • In step 49, the motion data points 36 of accelerometer data are segmented into evenly sized time intervals with a predefined time length (5 seconds) starting from the first collected data point within the usage session.
  • the first time interval includes motion data points 1 - 499 .
  • the motion data points 500 - 585 are discarded because they include less than five seconds of data.
  • the first time interval of motion data points 36 (motion sample #1) is associated with those discrete movements of the user's finger that occur entirely within the first time interval.
  • a touch sample is associated with a motion sample if the touch data points 35 of the touch sample fall entirely within the period of the motion sample.
  • touch samples #1-#3 have touch data points with touch timestamp values that fall entirely within the motion timestamp values of motion sample #1 of FIGS. 6 A- 6 I .
  • FIG. 6 C shows the highlighted motion timestamp values of motion data points 160 - 177 within motion sample #1, which coincide with the touch timestamp values of touch data points 1 - 14 of touch sample #1 of FIG. 5 A .
  • Touch sample #1 runs from timestamp 1583005689576 to timestamp 1583005689753.
  • FIGS. 6 D- 6 E show the highlighted motion timestamp values of motion data points 277 - 291 within motion sample #1, which coincide with the touch timestamp values of touch data points 15 - 22 of touch sample #2 of FIG. 5 A .
  • Touch sample #2 runs from timestamp 1583005690747 to timestamp 1583005690903.
  • FIG. 6F shows the highlighted motion timestamp values of motion data points 410-418 within motion sample #1, which coincide with the touch timestamp values of touch data points 23-30 of touch sample #3 of FIG. 5A.
  • Touch sample #3 runs from timestamp 1583005692082 to timestamp 1583005692170.
  • the touch timestamp values of touch data points 31 - 38 of touch sample #4 of FIG. 5 B coincide with the highlighted motion timestamp values of motion data points 517 - 524 , which occur after the end of motion sample #1 at motion data point 499 .
  • touch sample #4 is not associated with motion sample #1, and the touch data points 35 of touch sample #4 are not used to calculate features in method 45 .
  • touch timestamp values of touch data points 39 - 46 of touch sample #5 of FIG. 5 B coincide with the highlighted motion timestamp values of motion data points 578 - 585 , which occur after the end of motion sample #1 at motion data point 499 .
  • touch sample #5 is also not associated with motion sample #1, and the touch data points 35 of touch sample #5 are not used to calculate features in method 45 .
  • A touch timestamp value is deemed to coincide with a motion timestamp value when the two differ by less than a predefined threshold, e.g., 0.05 sec.
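  • The timestamp-containment rule can be sketched as follows (illustrative helper; the 0.05-second tolerance is the example threshold mentioned above):

```python
def is_associated(touch_sample, motion_sample, tol_s=0.05):
    """touch_sample / motion_sample: lists of (timestamp_ms, ...) tuples.
    True if every touch timestamp lies within the motion sample's time span,
    allowing a small tolerance for the sensors' differing sampling instants."""
    tol_ms = tol_s * 1000.0
    t_start, t_end = touch_sample[0][0], touch_sample[-1][0]
    m_start, m_end = motion_sample[0][0], motion_sample[-1][0]
    return t_start >= m_start - tol_ms and t_end <= m_end + tol_ms
```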
  • In step 51, feature computation component 38 of predictive modeling module 33 calculates the values of touch features for each discrete movement within the first time interval.
  • thirty-two touch features are calculated for each of touch samples #1-#3 which fall within motion sample #1.
  • the touch features are calculated using the touch data points 35 listed in FIGS. 5 A- 5 B .
  • FIG. 7 is a table listing the values of the thirty-two touch features for each of touch samples #1-#5. Only the feature values for samples #1-#3 are used in further calculations.
  • the thirty-two touch features include: distance_covered, distance_spanned, efficiency, duration, speed, num_points, curvature, pressure_median, pressure_mean, pressure_std, pressure_max, pressure_min, radiusMin_median, radiusMin_mean, radiusMin_std, radiusMin_max, radiusMin_min, radiusMax_median, radiusMax_mean, radiusMax_std, radiusMax_max, radiusMax_min, orientation_median, orientation_mean, orientation_std, orientation_max, orientation_min, size_median, size_mean, size_std, size_max, and size_min.
  • the distance_covered is the entire distance covered by the user's finger during a touch event, whereas the distance_spanned is the straight-line distance between the beginning and the end of the touch event.
  • FIG. 8 shows how the touch feature “distance_covered” is calculated for touch sample #1 using the touch data points 1 - 14 .
  • the sum of the distances between the xy positions at the end of each touch data point is calculated using the delta_x and delta_y values for touch data points 1 - 14 in FIG. 5 A .
  • the result is 200.3943288, which is listed as the touch feature value for distance_covered for touch sample #1 in FIG. 7 .
  • FIG. 9 shows how the touch feature “distance_spanned” is calculated for touch sample #1 using the first touch data point 1 and the last touch data point 14 of touch sample #1.
  • the distance between the xy position at the beginning of touch sample #1 and the xy position at the end of touch sample #1 is calculated using the x_position and y_position values for touch data points 1 and 14 in FIG. 5A.
  • the result is 195.0025641, which is listed as the touch feature value for distance_spanned for touch sample #1 in FIG. 7 .
  • the touch feature “efficiency” is calculated as the ratio of distance_spanned to distance_covered. For the touch sample #1, the efficiency is 195.0025641/200.3943288, which equals 0.973094225 as listed in FIG. 7 .
  • the touch feature “duration” is an indication of the duration of a swipe of the user's finger over screen 26 of smartphone 11 .
  • the feature “duration” is calculated as the difference between the final and initial timestamps of the touch sample.
  • the duration is (1583005689753 − 1583005689576), which equals 177 milliseconds.
  • the duration is listed in FIG. 7 as 0.177 seconds.
  • the feature “pressure_mean” is an indication of the average pressure applied by the user's finger to screen 26 of smartphone 11 during a swipe or touch event.
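  • A sketch of these touch-feature computations (Python; the field names follow the columns of FIGS. 5A-5B, and the helper names are assumptions). Applied to the data of touch sample #1, they should reproduce the values quoted above: distance_covered ≈ 200.39, distance_spanned ≈ 195.00, efficiency ≈ 0.973 and duration 0.177 s.

```python
import math
from statistics import mean

# Each touch sample is a list of dicts with the FIG. 5A-5B fields, e.g.
# {"timestamp": 1583005689576, "x_position": ..., "y_position": ...,
#  "delta_x": ..., "delta_y": ..., "pressure": ...}

def distance_covered(sample):
    # Sum of the per-point displacements of the finger (FIG. 8).
    return sum(math.hypot(p["delta_x"], p["delta_y"]) for p in sample)

def distance_spanned(sample):
    # Straight-line distance between the first and last point (FIG. 9).
    first, last = sample[0], sample[-1]
    return math.hypot(last["x_position"] - first["x_position"],
                      last["y_position"] - first["y_position"])

def efficiency(sample):
    # Ratio of distance_spanned to distance_covered.
    return distance_spanned(sample) / distance_covered(sample)

def duration_s(sample):
    # Final minus initial timestamp, converted from milliseconds to seconds.
    return (sample[-1]["timestamp"] - sample[0]["timestamp"]) / 1000.0

def pressure_mean(sample):
    # Average pressure of the finger on the screen during the touch event.
    return mean(p["pressure"] for p in sample)
```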
  • In step 52, feature computation component 38 calculates the values of motion features associated with the first time interval.
  • the values of the motion features are determined using motion data points 36 listed in FIGS. 6 A- 6 I whose timestamps fall within motion sample #1. In this example, twenty-nine motion features are calculated for motion sample #1.
  • FIG. 10 is a table listing the values of the twenty-nine motion features, which are: mean_x, std_x, var_x, mean_y, std_y, var_y, mean_z, std_z, var_z, mean_overall, median_overall, std_overall, var_overall, amax_overall, amin_overall, mean_l2_norm, mean_l1_norm, root_mean_sq, curve_len, diff_entropy, energy, std_energy, peak_magnitude, peak_magnitude_frequency, peak_power, peak_power_frequency, magnitude_entropy, power_entropy, and energy_pct_between_3_and_7.
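  • A partial sketch of the motion-feature computation (NumPy; treating the per-reading magnitude as the Euclidean norm of x, y and z is an assumption, and the spectral and entropy features are omitted):

```python
import numpy as np

def motion_features(points):
    """points: NumPy array of shape (n, 3) holding the x, y and z movement
    values of one motion sample (e.g., motion data points 1-499 of sample #1).
    Only the features whose definitions follow directly from their names are
    computed here."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    overall = np.linalg.norm(points, axis=1)   # assumed: per-reading magnitude
    return {
        "mean_x": x.mean(), "std_x": x.std(), "var_x": x.var(),
        "mean_y": y.mean(), "std_y": y.std(), "var_y": y.var(),
        "mean_z": z.mean(), "std_z": z.std(), "var_z": z.var(),
        "mean_overall": overall.mean(), "median_overall": np.median(overall),
        "std_overall": overall.std(), "var_overall": overall.var(),
        "amax_overall": overall.max(), "amin_overall": overall.min(),
        "root_mean_sq": float(np.sqrt((overall ** 2).mean())),
        "energy": float((overall ** 2).sum()),
    }
```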
  • In step 53, feature normalization component 39 of predictive modeling module 33 normalizes the touch features and the motion features based on prior touch data points 35 and motion data points 36 previously acquired while the user was using smartphone 11.
  • the touch and motion features are normalized using the past mean of each feature and the past standard deviation of each feature for the particular user.
  • touch and motion data is acquired for the user over a 2-week period, and then the mean value and the standard deviation of each touch and motion feature are determined. Note that the user must be using the same smartphone 11 over the entire 2-week period because the feature values are heavily influenced by screen size and other characteristics that vary for different smartphones.
  • FIG. 7 lists the mean and the standard deviation for each touch feature for the particular user in the last two columns.
  • FIG. 10 lists the mean and the standard deviation for each motion feature for the particular user in the last two columns.
  • FIGS. 11 A- 11 B list the normalized touch and motion features associated with touch samples #1-#3.
  • the normalized value for each feature is calculated as the difference between the feature value and the feature mean, with the difference then divided by the standard deviation of the feature.
  • the normalized distance_covered for touch sample #1 is calculated as (200.3943288 − 196.713947)/48.93410519, which equals 0.075210975.
  • the inputs are listed in the first row of FIG. 7
  • the normalized distance_covered is listed in the first row of FIG. 11 A .
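  • The normalization amounts to a per-user z-score; a minimal sketch, reproducing the worked example above:

```python
def normalize(features, user_means, user_stds):
    """z-score each feature against the user's own 2-week history
    (the mean and standard deviation columns of FIG. 7 and FIG. 10)."""
    return {name: (value - user_means[name]) / user_stds[name]
            for name, value in features.items()}

# Worked example from the text (distance_covered of touch sample #1):
z = (200.3943288 - 196.713947) / 48.93410519
print(z)   # ~0.0752110, matching the 0.075210975 listed in FIG. 11A
```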
  • Note that the same normalized motion feature values are used with each of the three distinct touch feature values for touch samples #1-#3, as shown in the columns of FIGS. 11 A- 11 B .
  • the contribution of the motion features is the same.
  • the machine learning model 40 of predictive modeling module 33 determines a stress level value for each of the three touch samples #1-#3. Each determination is based on both the thirty-two touch features of the corresponding touch sample and the twenty-nine motion features of motion sample #1.
  • a stress level value is generated for each discrete movement within the first time interval of motion sample #1 by applying logistic regression parameters to weight each of the normalized touch and motion features.
  • the values of the sixty-one logistic regression parameters associated with each touch and motion feature are listed in the last column of FIGS. 11 A- 11 B .
  • the stress level value of each touch sample is calculated using the formula of FIG. 12 .
  • the value of “t” in the formula is the sum of the products of the sixty-one feature values times their associated regression parameters; an intercept value is then added to the sum.
  • the calculated stress level value falls within a range of zero to one, in which one represents the maximum possible stress.
  • the sums of the products of the sixty-one feature values times their associated regression parameters for the touch samples #1, #2 and #3 are −0.861821372, −0.952629425 and −0.975875892, respectively.
  • the resulting stress levels for touch samples #1, #2 and #3 are 0.275536305, 0.257783613 and 0.25336094, respectively
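  • The FIG. 12 calculation is the standard logistic (sigmoid) function applied to the weighted feature sum plus an intercept. The intercept is not stated in the text; a value of about −0.105 reproduces the quoted stress levels and is assumed here purely for illustration:

```python
import math

def stress_level(weighted_sum, intercept=-0.105):
    """FIG. 12: sigmoid of t, where t = sum(feature_i * parameter_i) + intercept.
    The intercept value is inferred from the worked example, not quoted."""
    t = weighted_sum + intercept
    return 1.0 / (1.0 + math.exp(-t))

sums = [-0.861821372, -0.952629425, -0.975875892]   # touch samples #1-#3
print([round(stress_level(s), 4) for s in sums])
# approximately [0.2755, 0.2578, 0.2533], i.e. within rounding of the
# 0.275536..., 0.257783... and 0.253360... values listed above
```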
  • the regression parameters used in step 54 are generated using a logistic regression model trained on touch data points and motion data points acquired from other users during time intervals identified by those other users as being associated with various levels of stress.
  • the touch and motion data was acquired from 110 other users over fourteen daily usage sessions.
  • the users recorded their perceived stress level on a scale between 0-10 for each usage session.
  • each touch data point and motion data point is associated with a stress level value corresponding to that of the usage session during which the data was acquired.
  • the data from the usage sessions is divided into a first half with the highest stress rankings and a second half with the lowest stress rankings.
  • the data in the first half is deemed to be associated with a stress level of one, and the data in the second half is deemed to be associated with a stress level of zero.
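  • One way such parameters could be obtained is with an off-the-shelf logistic regression (scikit-learn shown here; the placeholder data, labeling and solver settings are assumptions, as the patent specifies only a logistic regression trained on median-split stress ratings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder knowledge base: one row per rated usage session, 61 columns of
# normalized touch and motion features, plus a 0-10 self-reported stress rating.
rng = np.random.default_rng(0)
X = rng.normal(size=(1540, 61))             # 110 users x 14 sessions (placeholder data)
ratings = rng.integers(0, 11, size=1540)    # perceived stress, 0-10 (placeholder data)

# Median split: the higher-stress half is labeled 1 ("stress"),
# the lower-stress half is labeled 0 ("no stress").
y = (ratings > np.median(ratings)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
parameters = model.coef_[0]      # one regression parameter per feature
intercept = model.intercept_[0]  # intercept added to the weighted sum
```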
  • the feature with the greatest correlation to the user's stress level is the standard deviation of energy of the motion data points (std_energy), which has a negative contribution to stress of −1.011.
  • the feature with the biggest positive correlation to the user's stress level is the overall standard deviation of the motion data points (std_overall), which has a positive contribution to stress of 0.701.
  • Another motion feature with a large contribution to the user's stress level is the peak magnitude of the motion data points while the user's finger moves over the screen, which has a positive contribution to stress of 0.529.
  • Touch features that make a large contribution to the user's stress level are the minimum size of the touch (size_min), the standard deviation of the touch size (size_std) and the minimum of the radius of the minimum sized touches (radiusMin_min).
  • a stress score of the user during the first time interval is determined based on the stress level values calculated for the discrete touch movements during the first time interval.
  • a stress score is calculated by averaging the determined stress level values of the touch samples #1-#3 that fall within the first time interval between the timestamps for motion data points 1 and 499 .
  • the stress score for motion sample #1 is 0.262, which is the average of the stress level values of the touch samples #1-#3. Note, however, that both the stress level values and the stress score are influenced by both the touch data points and the motion data points that occur during the first time interval.
  • the stress score is determined to be the highest stress level value calculated for a touch sample during the first time interval.
  • stress level values are calculated for the touch samples (discrete finger movements) that occur during many motion samples acquired over longer periods of use of smartphone 11 , such as a work day or portions of a day. The calculated stress level values are then averaged to determine a stress score for the day or portion of a day.
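  • The aggregation itself is then just a mean (or, in the alternative above, a maximum) over the stress level values of the interval:

```python
levels = [0.275536305, 0.257783613, 0.25336094]    # touch samples #1-#3

stress_score = sum(levels) / len(levels)    # ~0.262 for motion sample #1
worst_case_score = max(levels)              # alternative: highest stress level
```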
  • an indication is displayed on screen 26 of how the stress score of the user has changed since the stress score was last determined. For example, an image is displayed indicating to the user that the stress score is higher or lower than a previously generated stress score. A numerical representation of the stress score is displayed on display screen 26 of smartphone 11 .
  • App 30 gives the user non-verbal audio feedback indicating to the user whether the stress score is higher or lower than the previously generated stress score. For example, a higher tone indicates a rising stress score, and a lower tone indicates a falling stress score.
  • In step 56, the user is prompted to engage in an activity that is selected based on the determined stress score; the score itself need not be indicated to the user.
  • App 30 prompts the user to engage in an intervention that is selected based on the determined stress score. If the user's stress score is above a predetermined threshold, the user is prompted to engage in relaxation and meditation techniques guided by App 30 .
  • a company may recommend that its employees use the mobile app running on their smartphones to assist them in engaging in stress reduction techniques during the work day.
  • Each employee's stress level is determined in the background as the employee uses the smartphone throughout the day without the employee having to perform any specific diagnostic procedures to measure his or her current stress level. If the employee's stress level exceeds a predetermined threshold, then App 30 notifies the employee and recommends a stress reduction activity, for example, meditation.
  • An advantage of method 45 for determining a stress level using App 30 on smartphone 11 is that the method does not require other sensors or wearable devices to determine the user's stress level. Over forty percent of the world's population owns a mobile device on which method 45 can be implemented, and most of these people use the mobile device regularly throughout the day. This ubiquity of mobile devices allows the method to be used by many more people than would otherwise be possible if purchasing a wearable device or medical-grade device were required to determine the user's stress level.
  • Another advantage of method 45 is that the user is not required to engage in any specific behavior during which the user's stress level is measured. Most other methods of determining a user's stress level require the user to adjust his or her patterns of behavior and to act in a way that the user would not otherwise act. However, method 45 is completely unobtrusive because the user's stress level is determined based on the user's normal usage of the mobile device or smartphone. Thus, the user of App 30 is more likely to engage in stress reduction techniques because the reluctance to perform additional steps to measure the user's current stress level is removed as an excuse for failing to implement the techniques.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Dentistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for determining a user's stress level is performed by a smartphone app. Touch and motion feature values are generated, the feature values are weighted by regression parameters, and a stress score is generated based on the weighted touch and motion feature values. The touch feature values indicate how the user's finger moves over the smartphone and are generated from touch data points including X positions, Y positions and associated touch timestamp values. The motion feature values indicate movement of the smartphone and are generated from motion data points including X movements, Y movements, Z movements and associated motion timestamp values. The regression parameters are generated using touch and motion data identified by other users as being acquired while those other users were experiencing various perceived levels of stress. The app indicates to the user whether the stress score is higher or lower than a previously generated stress score.

Description

TECHNICAL FIELD
The present invention relates to a method for determining the stress level of a person based only on how the person uses a smartphone by weighting contributions of both the person's touch interactions with the smartphone and movements of the smartphone as indicated by motion sensor data.
BACKGROUND
Stress is a significant health problem in modern-day life. In fact, many doctors argue that stress and stress-related symptoms are a major cause of death. Consequently, methods of measuring and monitoring stress can provide significant health benefits. A person comes under stress when that person's physical safety or emotional balance is threatened. In the short term, adrenalin is released, the person becomes tense, the person's heart rate and blood pressure increase, and other physiological systems prepare to meet the threat. When the threat passes, the person's body regains its normal equilibrium. However, extended activation of the body's stress response has a harmful effect on human health because the body is prevented from returning to its equilibrium state. Extended periods of increased heart rate, blood pressure, breathing rate and stress hormones harm the body's cardiovascular system, immune system and general metabolism, which can lead to hypertension, obesity and cognitive and emotional impairment.
In order to avoid the damage that chronic stress can cause, methods have been developed to reduce stress in one's work life and private life. Some treatments for relieving the physical symptoms of stress involve relaxation and meditation techniques guided by smartphone apps. These treatments are most effective, however, when the user receives feedback regarding how the relaxation and meditation techniques are influencing the user's physiological state. Moreover, it is preferable if the user need not acquire and carry dedicated sensors and wearables other than the smartphone itself that are required to measure parameters such as arterial pressure, electrocardiogram signals, respiratory volume or body temperature. Accordingly, a method is sought for accurately determining the stress levels of smartphone users who are undergoing stress reduction treatments that does not require input from sensors other than those present on standard smartphones.
SUMMARY
A method for determining a user's stress level is performed by a mobile app running on a smartphone. The method includes generating touch feature values and motion feature values, weighting the feature values by regression parameters, and generating a stress score for the user based on the weighted touch feature values and weighted motion feature values. The touch feature values indicate how the user's finger moves over the smartphone screen and are generated from touch data points including X positions, Y positions and associated touch timestamp values. The motion feature values indicate the movement of the smartphone and are generated from motion data points including X movements, Y movements, Z movements and associated motion timestamp values. The regression parameters are generated using touch and motion data identified by other users as being acquired while those other users were experiencing various perceived levels of stress. The app indicates to the user whether the stress score is higher or lower than a previously generated stress score.
A method for determining a stress score of the user of a smartphone is based on touch and motion data sensed by the smartphone and does not require inputs from wearable devices or other sensors. A computing system on the smartphone receives touch data points of the smartphone user that each includes an X position value, a Y position value and an associated touch timestamp value. The touch data points are segmented into discrete movements of the user's finger on the screen of the smartphone. The computing system receives motion data points of the smartphone that each includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value. The motion data points are segmented into discrete time intervals including a first time interval. The first time interval is associated with those discrete movements of the user's finger that occur entirely within the first time interval.
Values of touch features are calculated for each discrete movement within the first time interval. Values of motion features associated with the first time interval are calculated. The touch features and the motion features are normalized based on prior touch data points and motion data points previously acquired while the user was using the smartphone. A stress level value is calculated for each discrete movement within the first time interval by applying logistic regression parameters to the normalized touch features and the normalized motion features. The logistic regression parameters are generated using a logistic regression model trained on touch data points and motion data points acquired from other users during time intervals identified by those other users as being associated with various levels of stress.
A stress score of the user during the first time interval is determined based on the stress level values of the discrete movements within the first time interval. An indication is displayed of how the stress score of the user has changed since the stress score of the user was last determined. The user is prompted to engage in a stress mitigation activity based on the determined stress score.
Other embodiments and advantages are described in the detailed description below. This summary does not purport to define the invention. The invention is defined by the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, where like numerals indicate like components, illustrate embodiments of the invention.
FIG. 1 is a schematic diagram of a computing system that runs a smartphone app.
FIG. 2 is a schematic diagram of the components of the smartphone app that determines the stress level of the smartphone app user.
FIG. 3 illustrates motion data points segmented into evenly sized time intervals and touch data points segmented into discrete touch events.
FIG. 4 is a flowchart of steps of a method by which the smartphone app uses touch and motion data to determine the current stress level of the user of the smartphone app.
FIGS. 5A and 5B are a table listing the components of forty-six exemplary touch data points.
FIGS. 6A, 6B, 6C, 6D, 6E, 6F, 6G, 6H and 6I are a table listing the components of 585 exemplary motion data points.
FIG. 7 is a table listing the values of thirty-two touch features for each of five touch samples as well as the mean and standard deviation of the touch features.
FIG. 8 shows the formula by which the value of the touch feature “distance_covered” is calculated.
FIG. 9 shows the formula by which the value of the touch feature “distance_spanned” is calculated.
FIG. 10 is a table listing the values of twenty-nine motion features for a motion sample as well as the mean and standard deviation of the motion features.
FIGS. 11A and 11B are a table listing the normalized touch and motion features associated with three touch samples as well as the regression parameter values associated with the touch and motion features.
FIG. 12 shows the formula by which the stress level value associated with a touch sample is calculated.
DETAILED DESCRIPTION
Reference will now be made in detail to some embodiments of the invention, examples of which are illustrated in the accompanying drawings.
FIG. 1 is a simplified schematic diagram of a computing system 10 on a smartphone 11, which is a mobile telecommunications device. System 10 can be used to implement a method for determining the stress level of a smartphone user based only on that person's touch interactions with the smartphone and the movements of the smartphone as measured by an accelerometer in the phone. Portions of the computing system 10 are implemented as software executing as a mobile App on the smartphone 11. Components of the computing system 10 include, but are not limited to, a processing unit 12, a system memory 13, and a system bus 14 that couples the various system components including the system memory 13 to the processing unit 12. Computing system 10 also includes computing machine-readable media used for storing computer readable instructions, data structures, other executable software and other data.
The system memory 13 includes computer storage media such as read only memory (ROM) 15 and random access memory (RAM) 16. A basic input/output system 17 (BIOS), containing the basic routines that transfer information between elements of computing system 10, is stored in ROM 15. RAM 16 contains software that is immediately accessible to processing unit 12. RAM includes portions of the operating system 18, other executable software 19, and program data 20. Application programs 21, including smartphone “apps”, are also stored in RAM 16. Computing system 10 employs standardized interfaces through which different system components communicate. In particular, communication between apps and other software is effected through application programming interfaces (APIs), which define the conventions and protocols for initiating and servicing function calls.
A user of computing system 10 enters commands and information through input devices such as a touchscreen 22, input buttons 23 and a microphone 24. A display screen 26, which is physically combined with touchscreen 22, is connected via a video interface 27 to the system bus 14. Touchscreen 22 includes a contact intensity sensor, such as a piezoelectric force sensor, a capacitive force sensor, an electric force sensor or an optical force sensor. These input devices are connected to the processing unit 12 through a user input interface 25 that is coupled to the system bus 14. User input interface 25 detects the contact of a finger of the user with touchscreen 22. Interface 25 includes various software components for performing operations related to detecting contact with touchscreen 22, such as determining if contact (a finger-down event) has occurred, determining the pressure of the contact, determining the size of the contact, determining if there is movement of the contact and tracking the movement across touchscreen 22, and determining if the contact has ceased (a finger-up event). User input interface 25 generates a series of touch data points, each of which includes an X position, a Y position, a touch pressure, a touch size (by minimum and maximum radius), and an associated touch timestamp value. The touch timestamp value indicates the time at which the X position, Y position, touch pressure and touch size were measured.
Computing system 10 also includes an accelerometer 28, whose output is connected to the system bus 14. Accelerometer 28 outputs motion data points indicative of the movement of smartphone 11. Each motion data point includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value. The motion timestamp value indicates the time at which the X movement value, Y movement value and Z movement value were measured. In some embodiments, accelerometer 28 actually includes three accelerometers, one that measures X movement, one that measures Y movement, and one that measures Z movement. The wireless communication modules of smartphone 11 have been omitted from this description for brevity.
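Purely for illustration, the touch data points 35 and motion data points 36 described above can be represented by the following Python structures. This is a sketch only; the class and field names are assumptions and do not correspond to any particular smartphone SDK.

```python
from dataclasses import dataclass

@dataclass
class TouchDataPoint:
    # One sample generated by user input interface 25.
    x: float             # X position on touchscreen 22
    y: float             # Y position on touchscreen 22
    delta_x: float       # change in X position since the previous point
    delta_y: float       # change in Y position since the previous point
    pressure: float      # contact pressure of the finger
    radius_min: float    # minimum radius of the touch impression
    radius_max: float    # maximum radius of the touch impression
    timestamp_ms: int    # touch timestamp value in milliseconds

@dataclass
class MotionDataPoint:
    # One sample output by accelerometer 28.
    dx: float            # X movement value
    dy: float            # Y movement value
    dz: float            # Z movement value
    timestamp_ms: int    # motion timestamp value in milliseconds
```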
FIG. 2 is a schematic diagram of the components of one of the application programs 21 running on smartphone 11. This mobile application (app) 30 is part of computing system 10. App 30 includes a data collection module 31, a sample extraction and selection module 32, a predictive modeling module 33 and a knowledge base module 34. In one embodiment, mobile app 30 is one of the application programs 21. In another embodiment, at least some of the functionality of app 30 is implemented as part of the operating system 18 itself. For example, the functionality can be integrated into the iOS mobile operating system or the Android mobile operating system.
Data collection module 31 collects logs of user interactions with smartphone 11, both as touch interaction logs from user input interface 25 and as motion data from accelerometer 28. Software development kits (SDKs) specific to the operating system of smartphone 11 are used to enable data collection module 31 to collect the logs of the user's interactions with smartphone 11 and the motion sensor output. Thus, data collection module 31 collects data using layers of the operating system 18. However, in the embodiment in which app 30 is implemented entirely as a mobile app, only touch interaction data based on user interactions within the app itself can be collected. In the embodiment in which app 30 is implemented partly within the operating system 18, user interactions are collected independently of which app or screen of the smartphone is being used. In that case, for example, touch data generated on social networking, texting or chat apps can be collected. FIG. 2 shows that data collection module 31 collects touch data points 35 from user input interface 25 and motion data points 36 from accelerometer 28. In addition, data collection module 31 also collects reports in which users indicate their physiological and cognitive states.
Sample extraction and selection module 32 segments the continuous stream of touch data points 35 and motion data points 36 into discrete data samples and assesses whether the characteristics of the data of each sample warrant using the data in determining the user's stress. Module 32 parses the individual streams of interaction logs and motion data into a discrete series of samples. Module 32 separately extracts the samples from interaction logs and motion data.
The motion data points 36 of accelerometer data are segmented into evenly sized time intervals with a predefined time length starting from the first collected data point within a usage session, as illustrated in FIG. 3 . The last (and shorter) interval of data in a usage session is discarded. For example, module 32 segments the continuous stream of motion data points 36 into discrete samples each having a length of twenty seconds. The last group of motion data points 36 in the usage session, which is nearly always shorter than twenty seconds, is discarded. In one scenario, the daily usage of a smartphone user is one usage session. Thus, in a two-week period, there would be fourteen usage sessions.
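A minimal sketch of this interval segmentation, assuming time-ordered motion data points in the structure sketched earlier; the function name is illustrative and the 20-second default is only the example value used above.

```python
def segment_motion(points, interval_ms=20_000):
    """Split a usage session's motion data points into evenly sized
    time intervals, discarding the last (shorter) interval."""
    if not points:
        return []
    start = points[0].timestamp_ms
    session_ms = points[-1].timestamp_ms - start
    n_full = session_ms // interval_ms           # number of complete intervals
    intervals = [[] for _ in range(n_full)]
    for p in points:
        idx = (p.timestamp_ms - start) // interval_ms
        if idx < n_full:                         # data after the last full interval is dropped
            intervals[idx].append(p)
    return intervals
```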
The touch data points 35 of interaction logs are segmented into discrete touch events separated by interruptions. The interruptions in the touch events are already designated in the output of the software development kits (SDKs) and are included as part of the touch interaction logs received by data collection module 31. Touch events whose time lengths do not fall within a predefined range (e.g., 0.1 sec to 5 sec) are not used as samples and are discarded. In addition, a touch event that does not fall entirely within a concurrent sample of motion data points 36, such as a 20-second period, is discarded. FIG. 3 illustrates touch data points 37 that are discarded either because the associated touch event does not fall entirely within a 20-second sample of motion data points 36 or because the touch event is too short.
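The duration check on touch events might look like the sketch below; the 0.1 and 5 second bounds are the example range given above, and the separate containment check against a concurrent motion sample is sketched with step 50 further down.

```python
def touch_event_duration_ok(event, min_s=0.1, max_s=5.0):
    """Return True if a touch event's time length falls within the
    predefined range so that the event may be used as a sample."""
    duration_s = (event[-1].timestamp_ms - event[0].timestamp_ms) / 1000.0
    return min_s <= duration_s <= max_s
```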
Predictive modeling module 33 receives the segmented samples of motion data, including motion data points 36, and interaction logs, including touch data points 35, from sample extraction and selection module 32. Module 33 includes three components: a feature computation component 38, a feature normalization component 39, and a machine learning model 40. Feature computation component 38 computes motion features and touch features from the samples of motion and touch data that it receives in a raw format from module 32. Examples of motion features include the energy of the movement of the smartphone during a motion sample and the mean x movement per time increment during a motion sample. For example, the accelerometer determines the movement in the x dimension after each time increment of ten milliseconds. Examples of touch features include the speed, curvature and distance covered of a touch event, such as a swipe.
Feature normalization component 39 normalizes the motion features and touch features to each particular user. The magnitudes of the features are influenced in large part by usage patterns of individual users, which in turn depend on the type of device used. For example, the size and type of touchscreen 22 significantly influences the magnitudes of the features. Therefore, each of the many motion features and touch features is normalized based on the mean magnitude of that feature and the standard deviation of that feature for the particular user over a period of past use, such as a 2-week usage period. The user is assumed not to have changed the type of smartphone used during the normalization usage period.
Machine learning model 40 determines the user's physiological and cognitive states, such as the user's stress level, based on the normalized motion features, the normalized touch features, and a weight or parameter for each of the features calculated using machine learning on a knowledge base of features for which prior users have indicated their perceived stress levels during each usage session. The knowledge base is stored in knowledge base module 34. For example, in one implementation, the knowledge base was acquired from the usage patterns of 110 users, each over fourteen daily usage sessions. For each usage session, sixty-one motion and touch features were calculated. The users indicated their perceived stress level on a scale of 0 to 10 for each usage session. Thus, there were 93,940 entries, each associated with a stress level between zero and ten. The entries were divided into two equal groups, half having higher associated stress levels and the other half having lower associated stress levels. Each of the 46,970 entries associated with lower stress was deemed to represent a no-stress state, and each of the other 46,970 entries associated with higher stress was deemed to represent a stress state. In one embodiment, the entries in knowledge base module 34 are fixed and immutable. In another embodiment, the knowledge base is periodically augmented with additional entries from users who rate their stress levels during each usage session. Thus, the augmented knowledge base grows over time.
Machine learning model 40 then performs a logistic regression on the 93,940 entries from knowledge base module 34 and calculates logistic regression parameters for each of the sixty-one motion and touch features. In this example implementation, the feature parameter with the highest correlation to the stress state is the standard deviation of the overall motion. The value of the parameter std_overall is 0.701. The feature parameter with the highest correlation to the no-stress state is the standard deviation of the energy of the overall motion. The value of the parameter std_energy is −1.011. Thus, a positive parameter correlates to more stress, and a negative parameter correlates to less stress. The values or weights of the parameters are not constrained to any particular range, such as an arbitrary range between −1.0 and 1.0.
Machine learning model 40 then applies the parameters to each of the sixty-one motion and touch features associated with each touch sample (discrete touch event), and a stress level is determined for the touch sample. In this implementation, there are thirty-two touch features and twenty-nine motion features. The stress level is determined for a touch sample based not only on the touch features associated with the touch sample, but also on the motion features of the motion sample to which the touch feature corresponds. A touch sample corresponds to a motion sample if the touch data points 35 of the touch sample fall entirely within the period of the motion sample. In the example illustrated in FIG. 3 , there are three touch samples 41-43 whose touch data points 35 fall entirely within the motion sample 44. In this example, machine learning model 40 determines a stress level for each of the three touch samples 41-43. Each determination is based on the thirty-two touch features of the corresponding touch sample as well as on the twenty-nine motion features of motion sample 44.
In one aspect, the stress levels determined for each of the three touch samples are averaged to obtain a stress level for the motion sample associated with a time interval of motion data. In this implementation, machine learning model 40 outputs a stress level associated with each 20-second motion sample. App 30 displays an indication on display screen 26 of how the stress level of the user has changed since the stress level was last determined, for example by displaying a numerical representation of the determined stress level. Alternatively, App 30 displays an indication of how the stress level of the user has changed merely by indicating to the user whether the stress level is higher or lower than a previously determined stress level. For example, the size of a green arrow or dot could correspond to a decrease in the user's stress level, and the size of a red arrow or dot could correspond to an increase in the user's stress level. Another alternative would be to indicate the user's measured stress level using non-verbal audio feedback, such as a higher or lower pitched tone.
FIG. 4 is a flowchart of steps 46-56 of a method 45 by which App 30 uses data sensed by smartphone 11 to determine the current stress level of the user of App 30. The steps of FIG. 4 are described in relation to App 30 and computing system 10 which implement method 45.
In step 46, data collection module 31 receives touch interaction data from smartphone 11 indicative of a user's interactions with touchscreen 22 of smartphone 11. The touch data is received as touch interaction logs based on data sensed by user input interface 25. The touch data comprises touch data points 35, each of which includes an X position value, a Y position value and an associated touch timestamp value.
FIGS. 5A-5B are a table listing forty-six exemplary touch data points 35. In addition to the X position value, Y position value and associated touch timestamp value, each point also includes values indicating the change in the X position (delta x), the change in the Y position (delta y), the pressure of the user's finger on touchscreen 22, the overall size of the touch impression on the screen, the minimum radius of the touch impression, the maximum radius of the touch impression and the orientation of the touch impression. In the exemplary touch data points 35, the size value has not been sensed, and is listed as zero. The timestamp values are listed in units of milliseconds.
In step 47, sample extraction and selection module 32 segments the continuous stream of touch data points 35 into discrete data samples representing distinct movements of the user's finger on touchscreen 22. If the values for both delta x and delta y are zero for a particular timestamp value, then the touch event of the user's finger has stopped, and a new touch event is deemed to have begun. Each touch event is analyzed as a separate sample. In FIGS. 5A-5B, for example, the delta x and delta y values are both zero at touch data points 14, 22, 30 and 38. Thus, a first touch event sample #1 includes touch data points 1-14, sample #2 includes points 15-22, sample #3 includes points 23-30, sample #4 includes points 31-38 and sample #5 includes points 39-46.
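The segmentation rule of step 47 can be sketched as follows, assuming the TouchDataPoint structure introduced earlier; a point whose delta_x and delta_y are both zero closes the current touch event.

```python
def segment_touch_events(points):
    """Split the stream of touch data points into discrete touch events
    (step 47). A point with zero delta_x and delta_y ends the current
    movement; the next point begins a new touch event."""
    events, current = [], []
    for p in points:
        current.append(p)
        if p.delta_x == 0 and p.delta_y == 0:
            events.append(current)
            current = []
    if current:                      # trailing movement without a terminating point
        events.append(current)
    return events
```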
In step 48, data collection module 31 receives motion sensor data from smartphone 11 indicative of the movement of the smartphone while the user's finger is moving over touchscreen 22 of smartphone 11. At least a portion of the sensed movement of smartphone 11 associated with the motion data has occurred concurrently with the user's interactions with touchscreen 22. The motion sensor data comprises motion data points 36, each of which includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value. The motion timestamp value indicates the time at which the X movement value, Y movement value and Z movement value were measured by accelerometer 28.
FIGS. 6A-6I are a table listing 585 exemplary motion data points 36. The timestamp values are listed in units of milliseconds. The table also lists the total time elapsed in seconds since the first motion data point was measured. FIGS. 6A-6I include 5.855 seconds of motion data. In FIG. 3 , the example usage session was segmented into 20-second samples of motion data points 36. The last motion sample that was shorter than twenty seconds in length was discarded. To reduce the amount of data in this example, the 585 motion data points 36 of FIGS. 6A-6I are segmented into motion samples of no more than five seconds.
In step 49, the motion data points 36 of accelerometer data are segmented into evenly sized time intervals with a predefined time length (5 seconds) starting from the first collected data point within the usage session. In this example, the first time interval includes motion data points 1-499. The motion data points 500-585 are discarded because they include less than five seconds of data.
In step 50, the first time interval of motion data points 36 (motion sample #1) is associated with those discrete movements of the user's finger that occur entirely within the first time interval. A touch sample is associated with a motion sample if the touch data points 35 of the touch sample fall entirely within the period of the motion sample. In the example illustrated in FIG. 3 , there are three touch samples 41-43 whose touch data points 35 fall entirely within the motion sample 44. Of the five touch samples included in FIGS. 5A-5B, only touch samples #1-#3 have touch data points with touch timestamp values that fall entirely within the motion timestamp values of motion sample #1 of FIGS. 6A-6I.
FIG. 6C shows the highlighted motion timestamp values of motion data points 160-177 within motion sample #1, which coincide with the touch timestamp values of touch data points 1-14 of touch sample #1 of FIG. 5A. Touch sample #1 runs from timestamp 1583005689576 to timestamp 1583005689753. FIGS. 6D-6E show the highlighted motion timestamp values of motion data points 277-291 within motion sample #1, which coincide with the touch timestamp values of touch data points 15-22 of touch sample #2 of FIG. 5A. Touch sample #2 runs from timestamp 1583005690747 to timestamp 1583005690903. FIG. 6F shows the highlighted motion timestamp values of motion data points 410-418 within motion sample #1, which coincide with the touch timestamp values of touch data points 23-30 of touch sample #3 of FIG. 5A. Touch sample #3 runs from timestamp 1583005692082 to timestamp 1583005692170. However, the touch timestamp values of touch data points 31-38 of touch sample #4 of FIG. 5B coincide with the highlighted motion timestamp values of motion data points 517-524, which occur after the end of motion sample #1 at motion data point 499. Thus, touch sample #4 is not associated with motion sample #1, and the touch data points 35 of touch sample #4 are not used to calculate features in method 45. Similarly, the touch timestamp values of touch data points 39-46 of touch sample #5 of FIG. 5B coincide with the highlighted motion timestamp values of motion data points 578-585, which occur after the end of motion sample #1 at motion data point 499. Thus, touch sample #5 is also not associated with motion sample #1, and the touch data points 35 of touch sample #5 are not used to calculate features in method 45. Although in the exemplary touch data of FIGS. 5A-5B there are no touch samples whose time lengths are less than a predefined threshold (e.g., 0.05 sec), any such short samples would also be discarded and not used to calculate features in method 45.
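A sketch of the association rule of step 50, under the same illustrative data structures; a touch event is associated with a motion sample only if its first and last touch timestamps both fall within the motion sample's time span.

```python
def associate_touch_samples(motion_interval, touch_events):
    """Return the touch events that occur entirely within the given
    motion interval (step 50); all other events are not used."""
    start = motion_interval[0].timestamp_ms
    end = motion_interval[-1].timestamp_ms
    return [ev for ev in touch_events
            if start <= ev[0].timestamp_ms and ev[-1].timestamp_ms <= end]
```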
In step 51, feature computation component 38 of predictive modeling module 33 calculates the values of touch features for each discrete movement within the first time interval. In this example, thirty-two touch features are calculated for each of touch samples #1-#3, which fall within motion sample #1. The touch features are calculated using the touch data points 35 listed in FIGS. 5A-5B. FIG. 7 is a table listing the values of the thirty-two touch features for each of touch samples #1-#5. Only the feature values for touch samples #1-#3 are used in further calculations. The thirty-two touch features include: distance_covered, distance_spanned, efficiency, duration, speed, num_points, curvature, pressure_median, pressure_mean, pressure_std, pressure_max, pressure_min, radiusMin_median, radiusMin_mean, radiusMin_std, radiusMin_max, radiusMin_min, radiusMax_median, radiusMax_mean, radiusMax_std, radiusMax_max, radiusMax_min, orientation_median, orientation_mean, orientation_std, orientation_max, orientation_min, size_median, size_mean, size_std, size_max, and size_min.
The distance_covered is the total distance traveled by the user's finger during a touch event, whereas the distance_spanned is the straight-line distance between the beginning and the end of the touch event. FIG. 8 shows how the touch feature "distance_covered" is calculated for touch sample #1 using the touch data points 1-14. The sum of the distances between the xy positions at the end of each touch data point is calculated using the delta_x and delta_y values for touch data points 1-14 in FIG. 5A. The result is 200.3943288, which is listed as the touch feature value for distance_covered for touch sample #1 in FIG. 7.
FIG. 9 shows how the touch feature "distance_spanned" is calculated for touch sample #1 using the first touch data point 1 and the last touch data point 14 of touch sample #1. The distance between the xy position at the beginning of touch sample #1 and the xy position at the end of touch sample #1 is calculated using the x_position and y_position values for touch data points 1 and 14 in FIG. 5A. The result is 195.0025641, which is listed as the touch feature value for distance_spanned for touch sample #1 in FIG. 7.
The touch feature “efficiency” is calculated as the ratio of distance_spanned to distance_covered. For the touch sample #1, the efficiency is 195.0025641/200.3943288, which equals 0.973094225 as listed in FIG. 7 . The touch feature “duration” is an indication of the duration of a swipe of the user's finger over screen 26 of smartphone 11. The feature “duration” is calculated as the difference between the final and initial timestamps of the touch sample. For the touch sample #1, the duration is (1583005689753−1583005689576), which equals 177 milliseconds. The duration is listed in FIG. 7 as 0.177 seconds. The feature “pressure_mean” is an indication of the average pressure applied by the user's finger to screen 26 of smartphone 11 during a swipe or touch event.
In step 52, feature computation component 38 calculates the values of motion features associated with the first time interval. The values of the motion features are determined using the motion data points 36 listed in FIGS. 6A-6I whose timestamps fall within motion sample #1. In this example, twenty-nine motion features are calculated for motion sample #1. FIG. 10 is a table listing the values of the twenty-nine motion features, which are: mean_x, std_x, var_x, mean_y, std_y, var_y, mean_z, std_z, var_z, mean_overall, median_overall, std_overall, var_overall, amax_overall, amin_overall, mean_l2_norm, mean_l1_norm, root_mean_sq, curve_len, diff_entropy, energy, std_energy, peak_magnitude, peak_magnitude_frequency, peak_power, peak_power_frequency, magnitude_entropy, power_entropy, and energy_pct_between_3_and_7.
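A subset of the twenty-nine motion features could be computed as sketched below; the per-axis statistics follow directly from the feature names, while the definitions of the overall-magnitude and energy features are not spelled out above and are therefore assumptions.

```python
import numpy as np

def basic_motion_features(interval):
    """Compute a subset of the motion features for one motion sample
    (step 52). The 'energy' formula is an illustrative assumption."""
    x = np.array([p.dx for p in interval])
    y = np.array([p.dy for p in interval])
    z = np.array([p.dz for p in interval])
    overall = np.sqrt(x**2 + y**2 + z**2)       # per-point magnitude of movement
    return {
        "mean_x": x.mean(), "std_x": x.std(), "var_x": x.var(),
        "mean_y": y.mean(), "std_y": y.std(), "var_y": y.var(),
        "mean_z": z.mean(), "std_z": z.std(), "var_z": z.var(),
        "mean_overall": overall.mean(),
        "median_overall": float(np.median(overall)),
        "std_overall": overall.std(),
        "amax_overall": overall.max(),
        "amin_overall": overall.min(),
        "energy": float(np.sum(overall**2)),    # assumed: sum of squared magnitudes
    }
```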
In step 53, feature normalization component 39 of predictive modeling module 33 normalizes the touch features and the motion features based on prior touch data points 35 and motion data points 36 previously acquired while the user was using smartphone 11. The touch and motion features are normalized using the past mean of each feature and the past standard deviation of each feature for the particular user. In one implementation, touch and motion data is acquired for the user over a 2-week period, and then the mean value and the standard deviation of each touch and motion feature are determined. Note that the user must be using the same smartphone 11 over the entire 2-week period because the feature values are heavily influenced by screen size and other characteristics that vary for different smartphones. FIG. 7 lists the mean and the standard deviation for each touch feature for the particular user in the last two columns. FIG. 10 lists the mean and the standard deviation for each motion feature for the particular user in the last two columns. FIGS. 11A-11B list the normalized touch and motion features associated with touch samples #1-#3.
The normalized value for each feature is calculated as the difference between the feature value and the feature mean, with the difference then divided by the standard deviation of the feature. For example, the normalized distance_covered for touch sample #1 is calculated as (200.3943288−196.713947)/48.93410519, which equals 0.075210975. The inputs are listed in the first row of FIG. 7, and the normalized distance_covered is listed in the first row of FIG. 11A. Note that the same normalized motion feature values are used with each of the three distinct sets of touch feature values for touch samples #1-#3, as shown in the columns of FIGS. 11A-11B. Thus, in the ultimate determination of the stress level associated with each touch sample, the contribution of the motion features is the same.
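The per-user normalization of step 53 is a standard z-score; the short sketch below reproduces the worked distance_covered example from FIG. 7 and FIG. 11A.

```python
def normalize(value, past_mean, past_std):
    """Normalize a feature value to the individual user (step 53)."""
    return (value - past_mean) / past_std

# Worked example: distance_covered of touch sample #1
print(normalize(200.3943288, 196.713947, 48.93410519))  # approximately 0.075211, as in FIG. 11A
```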
In step 54, the machine learning model 40 of predictive modeling module 33 determines a stress level value for each of the three touch samples #1-#3. Each determination is based on both the thirty-two touch features of the corresponding touch sample and the twenty-nine motion features of motion sample #1. A stress level value is generated for each discrete movement within the first time interval of motion sample #1 by applying logistic regression parameters to weight each of the normalized touch and motion features. The values of the sixty-one logistic regression parameters associated with each touch and motion feature are listed in the last column of FIGS. 11A-11B. The stress level value of each touch sample is calculated using the formula of FIG. 12 . The value of “t” in the formula is the sum of the products of the sixty-one feature values times their associated regression parameters; an intercept value is then added to the sum. The calculated stress level value falls within a range of zero to one, in which one represents the maximum possible stress.
For this exemplary touch and motion data, the sums of the products of the sixty-one feature values and their associated regression parameters for touch samples #1, #2 and #3 are −0.861821372, −0.952629425 and −0.975875892, respectively. The resulting stress level values for touch samples #1, #2 and #3 are 0.275536305, 0.257783613 and 0.25336094, respectively.
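The stress level calculation of FIG. 12 is the logistic (sigmoid) function applied to the weighted sum t. A minimal sketch follows; the intercept value of the trained model is not listed above, so it is left as an input rather than filled in.

```python
import math

def stress_level(normalized_features, regression_params, intercept):
    """Apply the formula of FIG. 12 to one touch sample: the sum of the
    sixty-one feature values times their regression parameters, plus the
    intercept, passed through the logistic function."""
    t = sum(f * w for f, w in zip(normalized_features, regression_params)) + intercept
    return 1.0 / (1.0 + math.exp(-t))
```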
The regression parameters used in step 54 are generated using a logistic regression model trained on touch data points and motion data points acquired from other users during time intervals identified by those other users as being associated with various levels of stress. In this example, the touch and motion data was acquired from 110 other users over fourteen daily usage sessions. The users recorded their perceived stress level on a scale of 0 to 10 for each usage session. Thus, each touch data point and motion data point is associated with a stress level value corresponding to that of the usage session during which the data was acquired. The data from the usage sessions is divided into a first half with the highest stress rankings and a second half with the lowest stress rankings. The data in the first half is deemed to be associated with a stress level of one, and the data in the second half is deemed to be associated with a stress level of zero. The thirty-two touch features and the twenty-nine motion features are then calculated for each usage session. Machine learning in the form of logistic regression is used to determine the relative contribution of each of the sixty-one features to the perceived stress level of the usage session being one or zero. The magnitudes of the relative contributions are expressed by the logistic regression parameter values listed in the last column of FIGS. 11A-11B. These values are stored in knowledge base module 34.
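A training procedure of this kind could be sketched with an off-the-shelf logistic regression, as below; the use of scikit-learn, the median split of the ratings and the function name are illustrative assumptions, not the prescribed implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_stress_model(session_features, session_stress_ratings):
    """Fit a logistic regression on knowledge-base entries.

    session_features: array of shape (n_entries, 61) holding the touch
        and motion feature values.
    session_stress_ratings: perceived stress ratings (0-10) reported for
        the usage sessions from which the entries were derived.
    Entries in the higher-stress half are labeled 1 (stress state) and
    the rest 0 (no-stress state), as described above.
    """
    X = np.asarray(session_features, dtype=float)
    ratings = np.asarray(session_stress_ratings, dtype=float)
    labels = (ratings > np.median(ratings)).astype(int)  # approximate two-way split
    model = LogisticRegression(max_iter=1000)
    model.fit(X, labels)
    return model.coef_[0], model.intercept_[0]           # 61 parameters and the intercept
```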
For this sample knowledge base, the feature with the greatest correlation to the user's stress level is the standard deviation of energy of the motion data points (std_energy), which has a negative contribution to stress of −1.011. The feature with the biggest positive correlation to the user's stress level is the overall standard deviation of the motion data points (std_overall), which has a positive contribution to stress of 0.701. Another motion feature with a large contribution to the user's stress level is the peak magnitude of the motion data points while the user's finger moves over the screen, which has a positive contribution to stress of 0.529.
Touch features that make a large contribution to the user's stress level are the minimum size of the touch (size_min), the standard deviation of the touch size (size_std) and the minimum of the minimum radius of the touch impression (radiusMin_min). For this sample knowledge base, the regression parameters yield an overall contribution to the stress level of about 78% from the motion features and about 22% from the touch features. However, the accuracy of the overall stress level determination is improved by using both motion features and touch features in a combined weighted model to determine the stress level of the user.
In step 55, a stress score of the user during the first time interval is determined based on the stress level values calculated for the discrete touch movements during the first time interval. In this example, a stress score is calculated by averaging the determined stress level values of the touch samples #1-#3 that fall within the first time interval between the timestamps for motion data points 1 and 499. Thus, the stress score for motion sample #1 is 0.262, which is the average of the stress level values of the touch samples #1-#3. Note, however, that both the stress level values and the stress score are influenced by both the touch data points and the motion data points that occur during the first time interval.
In another embodiment, the stress score is determined to be the highest stress level value calculated for a touch sample during the first time interval. In yet another embodiment, stress level values are calculated for the touch samples (discrete finger movements) that occur during many motion samples acquired over longer periods of use of smartphone 11, such as a work day or portions of a day. The calculated stress level values are then averaged to determine a stress score for the day or portion of a day.
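The aggregation of step 55 and the alternatives just described reduce to simple averaging or taking a maximum; the sketch below reproduces the worked value for motion sample #1.

```python
def stress_score(stress_levels, strategy="mean"):
    """Aggregate per-touch-sample stress level values for a time
    interval into a single stress score (step 55)."""
    if strategy == "max":            # alternative embodiment: highest value
        return max(stress_levels)
    return sum(stress_levels) / len(stress_levels)

# Worked example: touch samples #1-#3 of motion sample #1
print(round(stress_score([0.275536305, 0.257783613, 0.25336094]), 3))  # 0.262
```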
In step 56, an indication is displayed on screen 26 of how the stress score of the user has changed since the stress score was last determined. For example, an image is displayed indicating to the user that the stress score is higher or lower than a previously generated stress score. A numerical representation of the stress score is displayed on display screen 26 of smartphone 11. In another aspect, App 30 gives the user non-verbal audio feedback indicating to the user whether the stress score is higher or lower than the previously generated stress score. For example, a higher tone indicates a rising stress score, and a lower tone indicates a falling stress score.
In an alternative embodiment, instead of indicating in step 56 that the user's stress score has risen or fallen, the user is prompted to engage in an activity based on the determined stress score that is not indicated to the user. For example, App 30 prompts the user to engage in an intervention that is selected based on the determined stress score. If the user's stress score is above a predetermined threshold, the user is prompted to engage in relaxation and meditation techniques guided by App 30.
In one implementation of App 30, a company may recommend that its employees use the mobile app running on their smartphones to assist them in engaging in stress reduction techniques during the work day. Each employee's stress level is determined in the background as the employee uses the smartphone throughout the day without the employee having to perform any specific diagnostic procedures to measure his or her current stress level. If the employee's stress level exceeds a predetermined threshold, then App 30 notifies the employee and recommends a stress reduction activity, for example, meditation.
An advantage of method 45 for determining a stress level using App 30 on smartphone 11 is that the method does not require other sensors or wearable devices to determine the user's stress level. Over forty percent of the world's population owns a mobile device on which method 45 can be implemented, and most of these people use the mobile device regularly throughout the day. This ubiquity of mobile devices allows the method to be used by many more people than would otherwise be possible if purchasing a wearable device or medical-grade device were required to determine the user's stress level.
Another advantage of method 45 is that the user is not required to engage in any specific behavior during which the user's stress level is measured. Most other methods of determining a user's stress level require the user to adjust his or her patterns of behavior and to act in a way that the user would not otherwise act. However, method 45 is completely unobtrusive because the user's stress level is determined based on the user's normal usage of the mobile device or smartphone. Thus, the user of App 30 is more likely to engage in stress reduction techniques because the reluctance to perform additional steps to measure the user's current stress level is removed as an excuse for failing to implement the techniques.
Although the present invention has been described in connection with certain specific embodiments for instructional purposes, the present invention is not limited thereto. Accordingly, various modifications, adaptations, and combinations of various features of the described embodiments can be practiced without departing from the scope of the invention as set forth in the claims.

Claims (15)

What is claimed is:
1. A method comprising:
generating a touch feature value from touch data points of a smartphone, wherein each touch data point includes an X position, a Y position and an associated touch timestamp value, and wherein the touch feature value is an indication of how a finger of a user moves over a screen of the smartphone;
generating a motion feature value from motion data points of the smartphone, wherein each motion data point includes an X movement, a Y movement, a Z movement and an associated motion timestamp value, and wherein the motion feature value is an indication of movement of the smartphone including while the finger of the user moves over the screen of the smartphone;
segmenting the motion data points into non-overlapping time intervals, wherein at least a portion of a first time interval occurs concurrently with the touch data points as the finger of the user moves over the screen of the smartphone, and wherein the touch feature value is generated from the touch data points that fall within the first time interval;
weighting the touch feature value and the motion feature value by corresponding regression parameters, wherein the regression parameters are generated using a logistic regression model trained on touch data points and motion data points identified by other users as being acquired while those other users were experiencing various perceived levels of stress;
generating a stress score based on the weighted touch feature value and the weighted motion feature value; and
indicating to the user whether the stress score is higher or lower than a previously generated stress score.
2. The method of claim 1, wherein the touch feature value is an indication of a duration of a swipe of the finger of the user over the screen of the smartphone.
3. The method of claim 1, wherein the touch feature is an indication of a pressure applied by the finger of the user to the screen of the smartphone.
4. The method of claim 1, wherein the motion feature value is an indication of standard deviation of the motion data points.
5. The method of claim 1, wherein the motion feature value is an indication of a peak magnitude of the motion data points while the finger of the user moves over the screen of the smartphone.
6. The method of claim 1, further comprising:
normalizing the touch feature value based on prior touch feature values acquired from prior movements of the finger of the user over the screen of the smartphone, wherein the weighting of the touch feature value is a weighting of the normalized touch feature value; and
normalizing the motion feature value based on prior movement of the smartphone that occurred concurrently while the user was using the smartphone, wherein the weighting of the motion feature value is a weighting of the normalized motion feature value, and wherein the generating of the stress score is based on the weighted normalized touch feature value and the weighted normalized motion feature value.
7. The method of claim 1, wherein the indicating to the user whether the stress score is higher or lower than the previously generated stress score involves non-verbal audio feedback.
8. The method of claim 1, further comprising:
prompting the user to engage in an activity based on the stress score.
9. A method comprising:
receiving touch data points of a user of a smartphone, wherein each touch data point includes an X position value, a Y position value and an associated touch timestamp value;
segmenting the touch data points into discrete movements of a finger of the user on a screen of the smartphone;
receiving motion data points of the smartphone, wherein each motion data point includes an X movement value, a Y movement value, a Z movement value and an associated motion timestamp value;
segmenting the motion data points into non-overlapping time intervals, wherein at least a portion of a first time interval occurs concurrently with the touch data points as the finger of the user moves over the screen of the smartphone, and wherein values of touch features are generated from the touch data points that fall within the first time interval;
associating the first time interval with those discrete movements of the finger of the user that occur entirely within the first time interval;
calculating the values of touch features for each discrete movement within the first time interval;
calculating values of motion features associated with the first time interval;
normalizing the touch features and the motion features based on prior touch data points and motion data points previously acquired while the user was using the smartphone;
calculating a stress level value for each discrete movement within the first time interval by applying regression parameters to the normalized touch features and the normalized motion features, wherein the regression parameters are generated using a logistic regression model trained on touch data points and motion data points acquired from other users during time intervals identified by those other users as being associated with various levels of stress;
determining a stress score of the user during the first time interval based on the stress level values of the discrete movements within the first time interval; and
displaying an indication of how the stress score of the user has changed since the stress score of the user was last determined.
10. The method of claim 9, wherein the displaying an indication of how the stress score of the user has changed involves displaying a numerical representation of the determined stress score on the screen of the smartphone.
11. The method of claim 9, further comprising:
prompting the user to engage in an intervention, wherein the intervention is selected based on the determined stress score.
12. The method of claim 9, wherein one of the values of touch features is a touch pressure value.
13. The method of claim 9, wherein each touch data point includes a touch pressure value corresponding to the associated touch timestamp value.
14. The method of claim 9, wherein the normalizing is performed using mean values of the touch features generated using the prior touch data points previously acquired while the user was using the smartphone.
15. The method of claim 9, wherein the normalizing is performed using standard deviation values of the touch features generated using the prior touch data points previously acquired while the user was using the smartphone.
US17/227,308 2021-04-10 2021-04-10 Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data Active 2041-11-07 US11944436B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/227,308 US11944436B2 (en) 2021-04-10 2021-04-10 Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data
EP21179548.9A EP4070718A1 (en) 2021-04-10 2021-06-15 Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US17/227,308 US11944436B2 (en) 2021-04-10 2021-04-10 Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data

Publications (2)

Publication Number Publication Date
US20220322985A1 US20220322985A1 (en) 2022-10-13
US11944436B2 true US11944436B2 (en) 2024-04-02

Family

ID=76796879

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/227,308 Active 2041-11-07 US11944436B2 (en) 2021-04-10 2021-04-10 Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data

Country Status (2)

Country Link
US (1) US11944436B2 (en)
EP (1) EP4070718A1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120289793A1 (en) 2011-05-13 2012-11-15 Fujitsu Limited Continuous Monitoring of Stress Using Accelerometer Data
US20140136450A1 (en) 2012-11-09 2014-05-15 Samsung Electronics Co., Ltd. Apparatus and method for determining user's mental state
JP2014094291A (en) 2012-11-09 2014-05-22 Samsung Electronics Co Ltd Apparatus and method for determining user's mental state
US9521973B1 (en) * 2014-02-14 2016-12-20 Ben Beiski Method of monitoring patients with mental and/or behavioral disorders using their personal mobile devices
US20160006941A1 (en) 2014-03-14 2016-01-07 Samsung Electronics Co., Ltd. Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage
US20170071551A1 (en) * 2015-07-16 2017-03-16 Samsung Electronics Company, Ltd. Stress Detection Based on Sympathovagal Balance
US20200060603A1 (en) * 2016-10-24 2020-02-27 Akili Interactive Labs, Inc. Cognitive platform configured as a biomarker or other type of marker
US20200380882A1 (en) * 2017-08-03 2020-12-03 Akili Interactive Labs, Inc. Cognitive platform including computerized evocative elements in modes
US20200042687A1 (en) * 2019-08-06 2020-02-06 Lg Electronics Inc. Method and device for authenticating user using user's behavior pattern

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Ciman et al., "Individuals' stress assessment using human-smartphone interaction analysis," 2016 IEEE, Doi 10.1109/TAFFC. 2016. 2592504 (14 pages).
Ciman et al., "iSenseStress: Assessing stress through human-smartphone interaction analysis," Proceedings of the 9th International Conference on Pervasive Computing Technologies for Healthcare, ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) 2015, pp. 84-91 (8 pages).
Extended European Search Report dated Feb. 21, 2022, from the European Patent Office in the related foreign application EP21179548.9 (14 pages).
Joselli et al., "gRmobile: A Framework for Touch and Accelerometer Gesture Recognition for Mobile Games," Games and Digital Entertainment (SBGAMES), 2009 VIII Brazilian Symposium, IEEE Computer Society, Piscataway, NJ, Oct. 8, 2009, pp. 141-150 XP031685727 (10 pages).
Ruensuk et al., "How Do You Feel Online, " Proceedings of ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, ACMPUB27, NY, NY, vol. 4, No. 4, Dec. 17, 2020, pp. 1-32, XP058662221 (32 pages).
Sagbas et al., "Stress Detection via Keyboard Typing Behaviors by Using Smartphone Sensors and Machine Learning Techniques," Jour. of Med. Systems, Springer US, NY, vol. 44, No. 4, Feb. 17, 2020, XP037054202 (12 pages).
Wampfler et al., "Affective State Prediction Based on Semi-Supervised Learning from Smartphone Touch Data, " Companion Pub. of 2020 ACM Designing Interactive Sys. Conf., ACMPUB27, Apr. 21, 2020 pp. 1-13, XP058616613 (13 pages).

Also Published As

Publication number Publication date
EP4070718A1 (en) 2022-10-12
US20220322985A1 (en) 2022-10-13

Similar Documents

Publication Publication Date Title
Muaremi et al. Towards measuring stress with smartphones and wearable devices during workday and sleep
CN107405072B (en) System and method for generating stress level information and stress elasticity level information for an individual
US20180055375A1 (en) Systems and methods for determining an intensity level of an exercise using photoplethysmogram (ppg)
CN106343979B (en) Resting heart rate measuring method and device and wearable equipment comprising device
CN104023802B (en) Use the control of the electronic installation of neural analysis
CN111818850B (en) Pressure evaluation device, pressure evaluation method, and storage medium
KR20170076281A (en) Electronic device and method for providing of personalized workout guide therof
US11191483B2 (en) Wearable blood pressure measurement systems
US20210290082A1 (en) Heart rate monitoring device, system, and method for increasing performance improvement efficiency
EP3364856B1 (en) Method and device for heart rate detection with multi-use capacitive touch sensors
Reimer et al. Mobile stress recognition and relaxation support with smartcoping: user-adaptive interpretation of physiological stress parameters
Raij et al. mstress: Supporting continuous collection of objective and subjective measures of psychosocial stress on mobile devices
CN115227213A (en) Heart rate measuring method, electronic device and computer readable storage medium
CN108697363B (en) Apparatus and method for detecting cardiac chronotropic insufficiency
EP3838142A1 (en) Method and device for monitoring dementia-related disorders
KR20210067827A (en) Operation method of stress management apparatus for the emotional worker
CN111012312A (en) Portable Parkinson's disease bradykinesia monitoring intervention device and method
US11944436B2 (en) Determining a stress level of a smartphone user based on the user's touch interactions and motion sensor data
CN110313922A (en) A kind of pressure regulating method, pressure regulating system and terminal
Aras et al. GreenMonitor: Extending battery life for continuous heart rate monitoring in smartwatches
Maier et al. A mobile solution for stress recognition and prevention
WO2019230235A1 (en) Stress evaluation device, stress evaluation method, and program
KR102390599B1 (en) Method and apparatus for training inner concentration
CN114847949A (en) Cognitive fatigue recovery method based on electroencephalogram characteristics and relaxation indexes
CN111315296B (en) Method and device for determining pressure value

Legal Events

Date Code Title Description
AS Assignment

Owner name: KOA HEALTH B.V., SPAIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUERREIRO, JOAO;SKORULSKI, BARTLOMIEJ M.;MATIC, ALEKSANDAR;REEL/FRAME:055884/0349

Effective date: 20210409

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

AS Assignment

Owner name: KOA HEALTH DIGITAL SOLUTIONS S.L.U., SPAIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KOA HEALTH B.V.;REEL/FRAME:064106/0466

Effective date: 20230616

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE