WO2023120809A1 - Methods and systems for identification of an unintended touch at a user interface of a device - Google Patents

Methods and systems for identification of an unintended touch at a user interface of a device

Info

Publication number
WO2023120809A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
intended
timestamp
user
actual
Application number
PCT/KR2022/001984
Other languages
French (fr)
Inventor
Abhishek Mishra
Sai Hemanth KASARANENI
Saurabh Kumar
Deepak Kumar Sanklecha
Original Assignee
Samsung Electronics Co., Ltd.
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Publication of WO2023120809A1 publication Critical patent/WO2023120809A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0237Character input methods using prediction or retrieval techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present disclosure generally relates to identification of an unintended touch and an intended user interface (UI) element at a UI of a user device, and more particularly relates to methods and systems for accurately identifying and translating touch coordinates for unintended touch prevention.
  • such inadvertent touch-based inputs may also occur during regular mobile operation. For instance, a user may intend to click on a search view of an application store, but a social media notification may appear and get touched instead, which may open the social media application rather than the intended view.
  • a method for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device includes receiving a touch input at the user interface at a specific timestamp. Further, the method includes extracting, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp. Furthermore, the method includes determining a probability of an intended touch input based on the plurality of composite multi-sensor scores. Additionally, the method includes comparing the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch.
  • the method includes determining an actual touch intended timestamp, of the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold. Further, the method includes identifying an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
  • a system for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device comprises a receiving module configured to receive a touch input at the user interface at a specific timestamp.
  • the system comprises an intended touch deriving module in communication with the receiving module and configured to: extract, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp; determine a probability of an intended touch input based on the plurality of composite multi-sensor scores; compare the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch; determine an actual touch intended timestamp, of the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold; and identify an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
  • various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium.
  • application and “program” refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code.
  • computer readable program code includes any type of computer code, including source code, object code, and executable code.
  • computer readable medium includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
  • a "non-transitory” computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals.
  • a non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
  • Figure 1 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to current prior art
  • Figure 2 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to an embodiment of the present disclosure
  • Figures 3A-3C illustrate various scenarios of variation in composite multi-sensor scores over time, according to various embodiments of the present disclosure
  • Figure 4 illustrates a schematic block diagram of an intended element identification system within a mobile device, according to an embodiment of the present disclosure
  • Figure 5 illustrates a schematic block diagram of modules of the intended element identification system, according to an embodiment of the present disclosure
  • Figures 6A-6C illustrate an exemplary process flow depicting a method for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, according to an embodiment of the present disclosure
  • Figure 7 illustrates an exemplary methodology to determine user touch dynamics threshold, according to an embodiment of the present disclosure.
  • Figure 8 illustrates an exemplary use case for prevention from clicking on sudden pop-ups at a UI of a user device according to an embodiment of the present disclosure, as compared to the existing art.
  • FIGS. 1 through 8, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
  • FIG. 1 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to current prior art.
  • the mobile device 100 may include a display/user interface 102 comprising one or more user-interface (UI) elements, including 104, 106, and 108.
  • the display interface 102 may correspond to an interface of a web-browser, live website, interface of a mobile device, or a software application running on a mobile device.
  • Each of the UI elements 104-108 may correspond to a button, text-field, picture, control element, a selectable option of a webpage or interface of a mobile device, or any user interface element well-known in the art.
  • when the user of the mobile device 100 decides to click on UI element 106, it may happen that the UI element is obscured by another UI element 110 due to a pop-up, application-based notification, or an advertisement on the display interface, before the user actually provides the touch input on the display interface 102. Since there is a time gap of milliseconds between the time when the user decides to provide a touch input and the time when the user actually provides it, the user may inadvertently provide a touch input on the other UI element 110.
  • the other UI element 110 may also include, but is not limited to, one or more hyperlinks, links to software applications, or any other user interface element mentioned above.
  • the hyperlink or the link to software application may be executed as-is.
  • the user may be provided with an advertisement, software application, or any other further display interface associated with the user element which the user never intended to see, while the user was expecting execution of touch input on the UI element 106. This may lead to user frustration and security concerns due to inadvertent opening of unintended links or pop-ups.
  • Figure 2 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up, application-based notification, advertisement, or the like, according to an embodiment of the present disclosure.
  • the UI element 110 is not executed as-is on the display interface 102.
  • an intended element identification system may be provided which is configured to determine whether the user actually intended to click on the UI element 110. If it is determined that the user did not intend to touch the UI element 110 and instead intended to touch the UI element 106, the system may wait for the UI element 110 to disappear from the display interface. Further, once the UI element 110 disappears from the display interface and upon reproduction of the UI element 106 on the display interface, the touch action corresponding to the UI element 106 may be executed.
  • the present disclosure considers analysis of the duration from the point when the user of the mobile device 100 starts to move his/her finger to the point when the user actually touches the display interface 102.
  • using one or more inertial sensors (e.g., an accelerometer, a gyroscope, and/or a magnetometer) and pressure/grip sensors, the intended element identification system of the present disclosure is able to detect whether a touch has been made.
  • the inertial sensor values are recorded to track the change in values over time, for detection of the intended and actual timestamps of touching the display interface 102. Further, a pattern of spikes in the values of the inertial and pressure/grip sensors may be observed from the time when the user starts moving the finger to the time when the user actually touches the display interface.
  • the pattern of spikes further facilitates detection of the intended and actual timestamps of touching the display interface.
  • a UI element present on the display interface 102 at the touch coordinates may be extracted from a database, which may correspond to the UI element 106 that the user intended to touch or select.
  • the intended element identification system may be configured to identify intended touches in a UI by analyzing the historical movements of the mobile device 100 (via the inertial and pressure/grip sensors) and the UI layout history over a period of time, to obtain a likelihood of the user intending to interact with the UI element 110 which has just loaded, as opposed to the UI element 106 that was actually intended to be interacted with. Preventing such unintended touches in turn prevents user annoyance, privacy issues, and security risks.
  • Figures 3A-3C illustrate various scenarios of variation in composite multi-sensor scores over time, according to various embodiments of the present disclosure.
  • a typical human reaction time (i.e., T_gap) from when the user decides to provide a touch input on the display interface to the time of the actual touch on the display interface may be 200 to 300 milliseconds.
  • the present disclosure utilizes this time gap and the associated inertial/pressure sensor values to analyze whether the user actually intended to provide the touch input as currently provided on the display interface.
  • the T_gap provides an ample amount of time to process and tabulate the inertial sensors' values as well as the pressure/grip sensor readings to analyze the actual intended element.
  • Figs. 3A-3C illustrate a high probability sensor movement score range when a combined sensor score value of inertial and pressure/grip sensors (discussed hereinafter as “composite multi-sensor score” or "CMSR”) changes over time.
  • if the peak values of the CMSR are within the intended touch threshold range, i.e., between 302 and 304, it may be inferred that there is a high probability of the touch being intended by the user.
  • if the peak values of the CMSR are below the intended touch threshold range, i.e., below 302, it may be inferred that there was only very slight and erratic phone movement, implying an unintended or accidental touch due to drowsiness, etc.
  • peak values of the CMSR below the intended touch threshold range may thus imply a low probability of the touch being intended by the user. Such a touch shall be discarded and not acted upon by the mobile device.
  • conversely, if the peak values of the CMSR are beyond the intended touch threshold range, i.e., above 304, it may be inferred that there was very high phone movement, which may imply somebody bumping into the user while the user was trying to touch the display interface. Again, such CMSR values may imply a low probability of the touch being intended by the user.
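  • as illustration only, the band check described above could be sketched as follows. Note that 302 and 304 are reference numerals of the threshold lines in Figs. 3A-3C, not numeric values; the bounds below are hypothetical parameters that would be learned per user (see the user touch dynamics threshold discussed later).

```python
def classify_touch_by_cmsr(cmsr_window, lower_bound, upper_bound):
    """Classify a touch attempt from the peak CMSR observed during T_gap.

    lower_bound/upper_bound are hypothetical stand-ins for the intended
    touch threshold range drawn as lines 302 and 304 in Figs. 3A-3C.
    """
    peak = max(cmsr_window)
    if peak < lower_bound:
        # Fig. 3B: very slight, erratic movement (e.g., drowsiness) -> discard
        return "low_probability_unintended"
    if peak > upper_bound:
        # Fig. 3C: very high movement (e.g., somebody bumped the user) -> discard
        return "low_probability_unintended"
    # Fig. 3A: peak within the band -> high probability the touch was intended
    return "high_probability_intended"
```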
  • Figure 4 illustrates a schematic block diagram of an intended element identification system 402, according to an embodiment of the present disclosure.
  • the intended element identification system 402 may be included within a mobile device 400.
  • the intended element identification system 402 may be a standalone device or a system.
  • Examples of mobile device 400 may include, but not limited to, a mobile phone, a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a tablet, e-book readers, a server, a network server, and any other electronic device providing touch-based input functionality.
  • the mobile device 400 may include an intended element identification system 402.
  • the intended element identification system 402 may be configured to identify intended touches in a UI by analyzing the historical movements of the mobile device (via the inertial and pressure/grip sensors) and the UI layout history over a period of time, to obtain a likelihood of the user intending to interact with a UI element which has just loaded, as opposed to the UI element that was actually intended to be interacted with. Preventing such unintended touches in turn prevents user annoyance, privacy issues, and security risks.
  • the intended element identification system 402 may further include a processor/controller 404, an I/O interface 406, inertial sensors 408, pressure sensors 410, transceiver 412, and a memory 414.
  • the inertial sensors 408 may include, but not limited to, an accelerometer, a gyroscope, and a magnetometer.
  • the accelerometer, gyroscope, and magnetometer may be configured to measure standard parameters for the mobile device 400, as is well known in the art.
  • the pressure/grip sensors 410 may include any off-the-shelf pressure sensors embedded in the mobile device 400 for capturing the pressure/grip of the user while holding the mobile device 400.
  • the architecture and functionality of the inertial sensors 408 and pressure/grip sensors 410 are not discussed herein in detail.
  • the memory 414 may be communicatively coupled to the at least one processor/controller 404.
  • the memory 414 may be configured to store data, instructions executable by the at least one processor/controller 404.
  • the memory 414 may include one or more modules 416 and a database 418 to store data.
  • the one or more modules 416 may include a set of instructions that may be executed to cause the intended element identification system 402 to perform any one or more of the methods disclosed herein.
  • the one or more modules 416 may be configured to perform the steps of the present disclosure using the data stored in the database 418, to identify an intended UI element on the display interface of the mobile device 400.
  • each of the one or more modules 416 may be a hardware unit which may be outside the memory 414.
  • the memory 414 may include an operating system 420 for performing one or more tasks of the system 402 and/or mobile device 400, as performed by a generic operating system in the communications domain.
  • For the sake of brevity, the architecture and standard operations of the operating system 420, memory 414, database 418, processor/controller 404, transceiver 412, and I/O interface 406 are not discussed in detail.
  • the memory 414 may communicate via a bus within the system 402.
  • the memory 414 may include, but not limited to, a non-transitory computer-readable storage media, such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like.
  • the memory 414 may include a cache or random access memory for the processor/controller 404.
  • in some embodiments, the memory 414 may be separate from the processor/controller 404, such as a cache memory of a processor, the system memory, or other memory.
  • the memory 414 may be an external storage device or database for storing data.
  • the memory 414 may be operable to store instructions executable by the processor/controller 404.
  • the functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor/controller 404 for executing the instructions stored in the memory 414.
  • the functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination.
  • processing strategies may include multiprocessing, multitasking, parallel processing and the like.
  • the present disclosure contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal, so that a device (e.g., mobile device 400) connected to a network may communicate voice, video, audio, images, or any other data over the network.
  • the instructions may be transmitted or received over the network via a communication port or interface or using a bus (not shown).
  • the communication port or interface may be a part of the processor/controller 404 or may be a separate component.
  • the communication port may be created in software or may be a physical connection in hardware.
  • the communication port may be configured to connect with a network, external media, the display, or any other components in system, or combinations thereof.
  • connection with the network may be a physical connection, such as a wired Ethernet connection or may be established wirelessly.
  • the additional connections with other components of the intended element identification system 402 may be physical or may be established wirelessly.
  • the network may alternatively be directly connected to the bus.
  • the processor/controller 404 may include at least one data processor for executing processes in Virtual Storage Area Network.
  • the processor/controller 404 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
  • the processor/controller 404 may include a central processing unit (CPU), a graphics processing unit (GPU), or both.
  • the processor/controller 404 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data.
  • the processor/controller 404 may implement a software program, such as code generated manually (i.e., programmed).
  • the processor/controller 404 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 406.
  • the I/O interface 406 may employ communication protocols/methods such as code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like.
  • the mobile device 400 may communicate with one or more I/O devices.
  • the input device may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc.
  • the output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
  • the processor/controller 404 may be disposed in communication with a communication network via a network interface.
  • the network interface may be the I/O interface 406.
  • the network interface may connect to a communication network.
  • the network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • the communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc.
  • the mobile device 400 may communicate with other devices.
  • the network interface may employ connection protocols including, but not limited to, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
  • FIG. 5 illustrates a schematic block diagram 500 of modules of an intended element identification system 402, according to an embodiment of the present disclosure.
  • the one or more modules 416 may include a layout and feature extraction module 502, a receiving module 504, an intended touch deriving module 506, and an intended action execution module 508.
  • the layout and feature extraction module 502 may be configured to extract parameters related to a plurality of sensors, and to further extract a user-interface (UI) layout at each of a plurality of timestamps.
  • extracting parameters related to a plurality of sensors comprises extracting data or readings related to one or more inertial sensors and one or more pressure/grip sensors present in the mobile device (e.g., mobile device 400).
  • the one or more inertial sensors may include, but not limited to, a gyroscope, an accelerometer, and a magnetometer.
  • the data/readings related to acceleration, angular velocity, and magnetic field intensity of the mobile device may be extracted for each of a plurality of timestamps using the plurality of sensors. Additionally, the readings of the pressure/grip sensor(s) may indicate the change in applied pressure on these sensors from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device. Specifically, the readings of the plurality of sensors depict a peculiar movement of the mobile device in the user's hand between the time when the user decides to touch something on the screen and the time when the user executes the touch on the mobile device.
  • the layout and feature extraction module 502 may be configured to extract a UI layout at each of such plurality of timestamps.
  • the UI layout may comprise one or more UI elements with their relative positions or coordinates, currently displayed at the UI or display interface of the mobile device.
  • the UI layout may include one or more selectable options at a web-browser interface/layout or UI elements, as discussed above, currently displayed on the mobile device.
  • the UI layout may include a UI of a software application currently displayed on the mobile device.
  • the UI of the software application may include multiple UI elements to perform various functions associated with the application. For instance, if the software application is a shopping application, the UI elements may include, but not limited to, search bar, menu bar, "add to cart” button, "add to wish list” button, and one or more shopping items.
  • the layout and feature extraction module 502 may be configured to determine a composite multi-sensor score (CMSR) at each of the plurality of timestamps based on parameter readings of the plurality of sensors recorded at the corresponding timestamp.
  • readings of multiple sensors (e.g., accelerometer, gyroscope, magnetometer, etc.) recorded at each timestamp are aggregated.
  • a final CMSR is output for each of the plurality of timestamps based on the aggregate readings for the corresponding timestamp.
  • the CMSR may indicate an amplitude of the readings of the plurality of sensors.
  • the series of CMSRs for the plurality of timestamps may indicate the movement of the user's mobile device during the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device.
  • the CMSR for each timestamp is based on a normalized linear combination of readings of the plurality of sensors recorded at the corresponding timestamp. Further, the normalized readings may be further processed using a non-linear multivariate regression function to determine a final CMSR for each timestamp. For example, a plurality of sensor readings at a specific timestamp may be recorded as:
  • Gyroscope Y = 0.04, Gyroscope P = -0.07, Gyroscope R = -0.01
  • the above readings may be subsequently normalized, followed by further processing using a non-linear multivariate regression function to output the CMSR for a specific timestamp.
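  • by way of a non-limiting sketch, a CMSR for one timestamp could be computed as follows. The per-channel normalization bounds, the weights, and the logistic squashing are illustrative assumptions; the disclosure only specifies a normalized linear combination of sensor readings followed by a non-linear multivariate regression function.

```python
import math

def normalize(readings, bounds):
    """Min-max normalize each raw sensor reading into [0, 1].

    `bounds` maps a sensor channel to its assumed (min, max) range; the
    actual ranges are device-specific and are not given in the disclosure.
    """
    return {k: (v - bounds[k][0]) / (bounds[k][1] - bounds[k][0])
            for k, v in readings.items()}

def cmsr(readings, bounds, weights):
    """Composite multi-sensor score (CMSR) for one timestamp.

    A normalized linear combination of the channels is passed through a
    logistic squashing, standing in for the unspecified non-linear
    multivariate regression function.
    """
    norm = normalize(readings, bounds)
    linear = sum(weights[k] * norm[k] for k in norm)
    return 1.0 / (1.0 + math.exp(-linear))  # illustrative non-linearity

# Illustrative channels only; a real implementation would include all
# accelerometer, gyroscope, magnetometer, and pressure/grip channels.
readings = {"gyro_y": 0.04, "gyro_p": -0.07, "gyro_r": -0.01}
bounds = {k: (-1.0, 1.0) for k in readings}   # assumed value ranges
weights = {k: 1.0 for k in readings}          # assumed weights
print(cmsr(readings, bounds, weights))
```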
  • An exemplary set of CMSRs for a series of timestamps, using a plurality of sensor readings for each timestamp, is depicted in Table 1 below:
  • the layout and feature extraction module 502 may be configured to store the CMSRs and UI layouts for each of the plurality of timestamps in a database.
  • the CMSRs and UI layouts may be stored in a local database (e.g., database 418) of the mobile device 400.
  • the CMSRs and UI layouts may be stored in a cloud database (not shown) associated with the mobile device.
  • the layout and feature extraction module 502 may be configured to compare the two UI layouts for successive timestamps to detect the changes in UI elements between such successive timestamps.
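  • a minimal sketch of such a layout comparison is shown below, assuming each stored UI layout is represented as a mapping from element identifiers to their coordinates; the actual stored layout format is not specified in the disclosure.

```python
def layouts_differ(layout_a, layout_b):
    """Return True if the UI elements or their positions changed between
    two successive UI layouts.

    Each layout is assumed (for illustration) to map an element id to its
    bounding coordinates, e.g. {"id3": (left, top, width, height)}.
    """
    return layout_a != layout_b

def changed_elements(layout_a, layout_b):
    """List element ids that were added, removed, or moved between layouts."""
    ids = set(layout_a) | set(layout_b)
    return [i for i in ids if layout_a.get(i) != layout_b.get(i)]
```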
  • the receiving module 504 may be configured to receive a physical touch input at the user/display interface of the mobile device at a specific timestamp.
  • the touch input may be received on the user/display interface of the mobile device via, but not limited to, a finger touch, click using a pen, button press, or a finger swipe.
  • the touch input may correspond to selecting a specific UI element, of a plurality of UI elements, currently displayed on the UI of the mobile device.
  • the specific timestamp may be recorded as a physical touch timestamp, when the user provided the input at the display interface.
  • a coordinate or relative position of the received touch input may also be recorded. The coordinate may correspond to a position of a UI element that the user touched at the present display interface on the mobile device.
  • the intended touch deriving module 506 may be configured to extract, from the database, the CMSRs for each of a plurality of timestamps preceding the specific timestamp or the physical touch timestamp, at which the user provided the touch input on the display interface.
  • the CMSRs for 5 to 20 timestamps preceding the physical touch timestamp may be extracted from the database.
  • the extracted historical CMSRs indicate a particular movement of the mobile device and the pressure/grip change on the mobile device immediately before the physical touch timestamp, i.e., before the user physically touched the mobile device to provide the touch input.
  • a number of the plurality of timestamps (e.g., 5 to 20) for which the CMSRs are extracted may correspond to a predefined number of timestamps, based on a pre-recorded movement of the user's finger and pressure before he/she touches the display interface.
  • the pre-recorded movement may be learned using a machine learning or neural network model.
  • the machine learning or neural network model may be based on a pattern of spikes observed in the values of one or more of the accelerometer, gyroscope, magnetometer, and pressure/grip sensor(s) when the user starts moving his/her finger to providing a touch input.
  • the pre-recorded movement may be specific to each user.
  • a user identification based on one or more known user identification techniques may be performed. For instance, while one user may exhibit finger movement and pressure changes for about 2 seconds before providing an actual touch input, another user may exhibit such movement for only 0.5 seconds. Accordingly, the plurality of timestamps for which the historical CMSRs are extracted may correspond to such 0.5 or 2 seconds before the physical touch timestamp. As may be appreciated, these durations are merely exemplary for the sake of description, and the values may vary from user to user.
  • the intended touch deriving module 506 may be configured to determine a probability of an intended touch input based on an analysis of the plurality of extracted CMSRs.
  • the historical CMSRs of a plurality of timestamps preceding the physical touch timestamp may be analyzed backwards.
  • the analysis of the extracted historical CMSRs may be performed using a trained machine learning or neural network model.
  • the analysis of the extracted historical CMSRs may be performed using a many-to-one recurrent neural network (RNN) model.
  • RNN recurrent neural network
  • the plurality of CMSRs are analyzed using the many-to-one RNN model to output the probability of intended touch input as 0.78.
  • the device mobility that is induced due to the touch intention may be determined.
  • the causal relation from the user's intention of touch to the mobile device's movement patterns is analyzed through the historical CMSR value changes.
  • the probability value of the user's intention of touch may be output by analyzing the historical CMSR values in the recent past preceding the physical touch timestamp.
  • the probability of the intended touch input may be used to determine if the user actually intended to touch the display/user interface or not. This probability determination may eliminate the unintended screen/display touches that happen when the user is in a crowded place.
  • a low probability of intended touch may eliminate the scenarios of unintended touches illustrated in Figs. 3B and 3C.
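  • the following is a minimal, hedged sketch of a many-to-one RNN of the kind described above, which consumes a window of historical CMSRs and emits a single intended-touch probability. The GRU layer, hidden size, and output head are assumptions; the actual model architecture and training procedure are not specified in the disclosure.

```python
import torch
import torch.nn as nn

class IntentProbabilityRNN(nn.Module):
    """Many-to-one RNN sketch: a window of historical CMSRs in, a single
    probability of intended touch out. Layer choices are illustrative."""

    def __init__(self, hidden_size=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, cmsr_window):
        # cmsr_window: (batch, window_length, 1) - CMSRs preceding the touch
        _, last_hidden = self.rnn(cmsr_window)
        return torch.sigmoid(self.head(last_hidden[-1]))  # (batch, 1)

# Usage: e.g., 10 CMSRs recorded just before the physical touch timestamp.
window = torch.rand(1, 10, 1)
prob_intended = IntentProbabilityRNN()(window).item()
```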
  • the intended touch deriving module 506 may be configured to determine whether the probability of the intended touch input is greater than a user touch dynamics threshold to identify the unintended touch.
  • the user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device.
  • the user touch dynamics threshold may be tuned as per the user touch instances to maximize determination accuracy of a true touch.
  • the user touch dynamics threshold may be a periodically updated threshold value, based on user inputs at the UI of the mobile device, representing an induced touch probability. Further, the user touch dynamics threshold tunes in the user's touch action characteristics in terms of device handling and movement, etc., which are reflected in the CMSR scores.
  • the user touch dynamics threshold may be determined/estimated based on a training performed using the user/display interface of the mobile device 400, as depicted in Figure 7.
  • the user of the mobile device 400 may be presented with a balloon popping exercise to learn the user's touch dynamics.
  • the intended touch timestamp (i.e., when the balloon 702 is displayed on the display interface) and the physical touch timestamp (i.e., when the user actually touches the balloon 702 to pop it) are recorded.
  • the plurality of CMSRs between the intended touch timestamp and the physical touch timestamp may then be analyzed to determine an aggregate value for the user touch dynamics threshold.
  • This exercise may be performed on a periodic basis to update the user touch dynamics threshold, to reflect the user's current manner of finger movement and pressure more realistically while providing a touch input.
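  • one possible way to aggregate such calibration data into a threshold is sketched below. The aggregation (mean of per-trial CMSR peaks) and the trial data format are assumptions; the disclosure only states that an aggregate value is determined from the CMSRs between the two timestamps.

```python
def calibrate_touch_dynamics_threshold(trials):
    """Aggregate a user touch dynamics threshold from a calibration game.

    `trials` is assumed to be a list of (cmsrs, t_shown, t_touched) tuples,
    where `cmsrs` maps timestamps to CMSR values recorded while the balloon
    was on screen, t_shown is the intended touch timestamp, and t_touched is
    the physical touch timestamp.
    """
    peaks = []
    for cmsrs, t_shown, t_touched in trials:
        window = [v for t, v in cmsrs.items() if t_shown <= t <= t_touched]
        if window:
            peaks.append(max(window))
    return sum(peaks) / len(peaks) if peaks else None
```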
  • the intended touch deriving module 506 may be configured to discard the received touch input.
  • when the probability of the intended touch input is less than the user touch dynamics threshold, it may indicate an unintended/inadvertent touch input. Accordingly, the touch input should be discarded without performing any corresponding action on the touch input.
  • the intended touch deriving module 506 may be configured to determine an actual touch intended timestamp, from the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of intended touch input is greater than the user touch dynamics threshold.
  • the actual touch intended timestamp corresponds to the time when the user decides to click on the screen and always precedes the physical touch timestamp.
  • the CMSR pattern, i.e., the series of CMSRs which caused the probability of the intended touch input to cross the threshold, may be extracted, and an output corresponding to the timestamp of the onset of that CMSR pattern may be provided.
  • the timestamp of the onset of the CMSR pattern corresponds to the actual touch intended timestamp.
  • the actual touch intended timestamp is required to extract the UI layout that was present at the time when the user decided, in his/her mind, to click/touch something, so as to find out which UI element stimulated him/her to do so.
  • the actual touch intended timestamp may be determined using a machine learning or neural network model.
  • the actual touch intended timestamp may be determined using a trained many-to-one RNN model. If the CMSR score pattern comprises a window of length N (i.e., N successive CMSR scores), then the output provided by the RNN model may correspond to a probability for each candidate timestamp T_I = T_P - N + k (with k ranging over the window), where:
  • T_I corresponds to the actual touch intended timestamp
  • T_P corresponds to the specific timestamp or physical touch timestamp
  • N corresponds to the length of the window of CMSR scores (e.g., 4 successive CMSR scores)
  • An exemplary set of output probabilities for each candidate actual touch intended timestamp is calculated and provided in Table 4 below.
  • the actual touch intended timestamp may be selected as the candidate timestamp with the highest probability. In the current example, the highest probability is calculated for "T_P - N + 2", and thus, the actual touch intended timestamp may correspond to "T_P - N + 2".
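  • selecting the actual touch intended timestamp from the per-candidate probabilities then reduces to an argmax, as sketched below with hypothetical probability values.

```python
def actual_touch_intended_offset(candidate_probs):
    """Pick the candidate timestamp with the highest probability.

    `candidate_probs` is assumed to map offsets k (0..N-1) to the RNN's
    probability that the intended timestamp is T_P - N + k. Returns the
    winning offset k; the caller converts it back to a timestamp.
    """
    return max(candidate_probs, key=candidate_probs.get)

# Hypothetical probabilities for a window of N = 4 CMSR scores.
probs = {0: 0.10, 1: 0.22, 2: 0.55, 3: 0.13}
k = actual_touch_intended_offset(probs)   # 2 -> T_I = T_P - N + 2
```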
  • the intended touch deriving module 506 may be configured to extract UI layouts both at the specific timestamp and the actual touch intended timestamp.
  • the UI layouts may be extracted from a database (local or cloud-based), which were previously stored by the layout and feature extraction module 502.
  • Each of the UI layouts may include one or more UI elements.
  • the UI layout corresponding to the specific timestamp or physical touch timestamp may include at least the UI element that user actually touched on the display interface of the mobile device.
  • the UI layout corresponding to the actual touch intended timestamp may include at least the UI element that user intended to actually touch on the display interface.
  • the intended touch deriving module 506 may be configured to determine whether the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
  • the comparison of the UI layouts may include comparing UI elements at one or more positions within each of the two UI layouts. If all the UI elements within each of the two UI layouts are exactly the same, then it may be determined that the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
  • the intended action execution module 508 may be configured to execute the received touch input as-is, when the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp. When the layouts at the specific timestamp and the actual touch intended timestamp are the same, it may be implied that the UI element that the user actually intended (or decided) to touch is currently present in the UI layout at the specific or physical touch timestamp when the user provided the touch input. Accordingly, the touch input may be executed as-is. In one embodiment, the touch input may be executed in coordination with a processor/controller or touch interface, as conventionally performed in mobile devices.
  • the intended touch deriving module 506 may be configured to determine an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp.
  • the determination that the UI layouts at the specific timestamp and the actual touch intended timestamp are different may be based on a determination that one or more UI elements differ at a relative position or coordinate within the two UI layouts.
  • the coordinate or relative position of the received touch input may be provided as an input to the module.
  • the actual intended UI element may be extracted at the coordinate of the touch input from the UI layout at the actual touch intended timestamp. This may be based on the assumption that the user of the mobile device actually intended to touch the UI element located, in the UI layout at the actual touch intended timestamp, at the same coordinate as the coordinate at which the touch input was received in the UI layout at the specific timestamp or the physical touch timestamp.
  • the output of this step may provide a UI element which was present at the touch coordinates on the mobile device at the time of touch intention after extracting the UI layout elements from the database.
  • the UI element may correspond to a user element identification, such as, but not limited to, a button or a text field.
  • for example, the (X, Y) coordinates of the received touch input on the device may correspond to (64.86, 104.25).
  • the UI layout corresponding to the actual touch intended timestamp may correspond to the layout at 10:00:00 shown in Table 2. Based on the mapping of the input coordinates to the actual touch intended timestamp, it may be determined that the actual intended UI element corresponds to id3. Additionally, further properties such as, but not limited to, element type, allowed actions, purpose, and whether the UI element is currently active, may be determined for the element id3. An illustration of exemplary different UI element properties is provided here in Table 5 below:
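  • mapping the touch coordinates to the intended element amounts to a hit-test against the stored layout, as in the following sketch; the layout format and the example bounding box for id3 are assumptions for illustration only.

```python
def element_at(layout, x, y):
    """Hit-test a stored UI layout at the touch coordinates.

    `layout` is assumed to map element ids to bounding boxes
    (left, top, width, height); the stored layout format used by the
    disclosure is not specified.
    """
    for element_id, (left, top, width, height) in layout.items():
        if left <= x <= left + width and top <= y <= top + height:
            return element_id
    return None

# Hypothetical layout stored at the actual touch intended timestamp.
layout_at_intent = {"id3": (50.0, 90.0, 40.0, 30.0)}
print(element_at(layout_at_intent, 64.86, 104.25))   # -> "id3"
```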
  • the intended action executing module 508 may be configured to redirect an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp or the physical touch timestamp.
  • the intended action executing module 508 may be configured to determine whether the UI is loaded completely. The current UI may include a pop-up or an advertisement, which may not include the actual intended UI element. If the UI is loaded completely, then the method moves to the next step to determine a delay in execution of the actual intended UI element. If the UI is not loaded completely, the system may wait for the UI to load completely before moving to the next step.
  • the intended action executing module 508 may be configured to estimate an amount of delay in time period required for the execution of an intended touch for the actual intended UI element based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI.
  • the current UI state may include a pop-up, notification, or an advertisement and thus, the actual intended UI element may not be present for execution.
  • the amount of delay (time period) required for execution may correspond to the time period after which the pop-up or advertisement automatically disappears from the display interface, thereby re-presenting or providing the previous UI layout associated with the actual touch intended timestamp.
  • if the current UI state includes a pop-up or an advertisement which does not include the actual intended UI element, then it may imply that the actual intended UI element is currently obscured by another UI element on the current UI.
  • a current UI layout may be analyzed.
  • one or more UI element legends may be analyzed to determine the amount of delay; such elements may include, but are not limited to, a video player window, an app install link, a subscribe button, and an official merchandise link.
  • the video player window may indicate "Ad: 19 secs", which may be interpreted as 19 seconds remaining in disappearance of the advertisement or in restoration of the actual intended UI layout and element.
  • the amount of delay in time period required for the execution of the intended actual touch may correspond to 19 seconds.
  • the current UI layout may include a pop-up or notification corresponding to a social media application at the coordinates of the actual intended touch input.
  • the time for disappearance of the pop-up or notification may be determined in coordination with the operating system of the mobile device, or settings, or policy of the corresponding social media application.
  • the delay determination may be based on image extraction (or UI element extraction) from the current UI layout and/or based on policies or settings of the mobile device/software application.
  • the delay determination may be performed using a machine learning or neural network model, such as, but not limited to, a many-to-one RNN model.
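  • as one illustration of the legend-based approach, a delay could be parsed from a video player legend such as "Ad: 19 secs". The regex below is an assumption; the disclosure equally allows policy-based or model-based delay estimation.

```python
import re

def estimate_delay_seconds(ui_legends):
    """Estimate how long to wait for an obscuring element to disappear.

    `ui_legends` is assumed to be a list of text legends extracted from the
    current UI layout (e.g., a video player showing "Ad: 19 secs").
    """
    for legend in ui_legends:
        match = re.search(r"Ad:\s*(\d+)\s*sec", legend, re.IGNORECASE)
        if match:
            return int(match.group(1))
    return None

print(estimate_delay_seconds(["Subscribe", "Ad: 19 secs"]))   # -> 19
```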
  • the intended action executing module 508 may be configured to discard the received touch input when the obscuring UI element is still present after the estimated time period.
  • if it is determined that the other UI element (e.g., a pop-up or an advertisement) is still present at the coordinates of the actual intended UI element after the estimated time period, then the received touch input may be discarded.
  • the discarding of the received touch input on the other UI element facilitates avoiding an unnecessary execution of a touch input, which may have led to redirecting to a link within an unintended advertisement or to a pop-up notification.
  • the intended action executing module 508 may be configured to execute the action corresponding to the received touch input on the actual intended UI element when the obscuring UI element disappears within the estimated time period.
  • the received touch input may be executed on the actual touch intended UI element.
  • the execution may include, but not limited to, click, touch, and navigating to next page.
  • the execution of the received touch input may be performed in conjunction with the operating system or processor/controller of the mobile device, as performed conventionally, but on the actual intended UI element.
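  • the overall execute-or-discard behaviour could be sketched as follows; is_obscured and execute_touch are hypothetical callbacks standing in for the UI-state check and the redirected touch execution described above.

```python
import time

def redirect_or_discard(intended_element, is_obscured, execute_touch,
                        estimated_delay_s, poll_interval_s=0.1):
    """Wait for the obscuring element to clear, then act on the intended element.

    If the obscuring element is still present once the estimated delay has
    elapsed, the touch input is discarded (returns False); otherwise the
    original touch action is redirected to the intended element (returns True).
    """
    deadline = time.monotonic() + estimated_delay_s
    while time.monotonic() < deadline:
        if not is_obscured(intended_element):
            execute_touch(intended_element)   # redirect the original touch
            return True
        time.sleep(poll_interval_s)
    return False   # obscuring element persisted: discard the touch input
```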
  • Figures 6A-6C illustrate an exemplary process flow depicting a method 600 for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, according to an embodiment of the present disclosure.
  • the method 600 may be performed by the intended element identification system of a mobile device, as discussed throughout this disclosure.
  • the method 600 comprises extracting parameters related to a plurality of sensors, and further comprises extracting a user-interface (UI) layout at each of a plurality of timestamps.
  • extracting parameters related to a plurality of sensors comprises extracting data or readings related to one or more inertial sensors and one or more pressure/grip sensors present in the mobile device.
  • the one or more inertial sensors may include, but not limited to, a gyroscope, an accelerometer, and a magnetometer.
  • the data/readings related to the plurality of sensors may be extracted for each of a plurality of timestamps. For example, the readings may be extracted or recorded at a difference of 0.01 seconds.
  • the inertial sensor readings facilitate detecting the movement of the mobile device along with the timestamps. Additionally, the readings of the pressure/grip sensor(s) may indicate the change in applied pressure on these sensors from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device. Specifically, the readings of the plurality of sensors depict a peculiar movement of the mobile device in the user's hand between the time when the user decides to touch something on the screen and the time when the user executes the touch on the mobile device. Additionally, a UI layout at each of such plurality of timestamps may be extracted. The UI layout may comprise one or more UI elements with their relative positions, currently displayed at the UI of the mobile device.
  • the method 600 comprises determining a composite multi-sensor score (CMSR) at each of the plurality of timestamps based on parameter readings of the plurality of sensors recorded at the corresponding timestamp.
  • multiple sensors' (e.g., gyroscope, accelerometer, magnetometer, etc.) readings are aggregated that are recorded at each timestamp.
  • a final CMSR is output based on the aggregate readings.
  • the CMSR may indicate an amplitude of the readings of the plurality of sensors.
  • the series of CMSRs for the plurality of timestamps may indicate the movement of the user's mobile device during the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device.
  • the method 600 comprises storing the CMSRs and UI layouts for each of the plurality of timestamps in a database.
  • the two UI layouts for successive timestamps may be compared to detect the changes in UI elements between such successive timestamps.
  • the steps 602-606 may be continuously executed in the mobile device, and the CMSRs and UI layouts may be stored for a specific time duration.
  • the CMSRs and UI layouts may be stored for the last 15 minutes at an interval of 0.05 seconds.
  • CMSRs and UI layouts older than 15 minutes may be continuously erased from the database.
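  • a rolling store with this retention behaviour could be sketched as follows; the capacity of 15 minutes at 0.05-second intervals follows the example above, while the data structure itself is an assumption.

```python
from collections import deque

class RollingSensorStore:
    """Keep only the most recent CMSRs/UI layouts, e.g. the last 15 minutes
    sampled every 0.05 s (18,000 entries). Capacity values are illustrative."""

    def __init__(self, retention_s=15 * 60, interval_s=0.05):
        self.entries = deque(maxlen=int(retention_s / interval_s))

    def record(self, timestamp, cmsr, ui_layout):
        # Oldest entries are dropped automatically once capacity is reached.
        self.entries.append((timestamp, cmsr, ui_layout))

    def preceding(self, touch_timestamp, count):
        """Entries for `count` timestamps preceding the physical touch timestamp."""
        older = [e for e in self.entries if e[0] < touch_timestamp]
        return older[-count:]
```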
  • the method 600 comprises receiving a physical touch input at the user/display interface of the mobile device at a specific timestamp.
  • the touch input may be received on the user/display interface of the mobile device via, but not limited to, a finger touch, click using a pen, button press, or a finger swipe.
  • the touch input may correspond to selecting a specific UI element, of a plurality of UI elements, currently displayed on the UI of the mobile device.
  • the specific timestamp may be recorded as a physical touch timestamp, when the user provided the input at the display interface.
  • a coordinate or relative position of the received touch input may also be recorded. The coordinate may correspond to a position of a UI element that the user touched at the present display interface on the mobile device.
  • the method 600 comprises extracting, from the database, the CMSRs for each of a plurality of timestamps preceding the specific timestamp or the physical touch timestamp, at which the user provided the touch input on the display interface.
  • the extracted historical CMSRs indicate a particular movement of the mobile device and the pressure/grip change on the mobile device immediately before the physical touch timestamp, i.e., before the user physically touched the mobile device to provide the touch input.
  • the method 600 comprises determining a probability of an intended touch input based on an analysis of the plurality of extracted CMSRs.
  • the historical CMSRs of a plurality of timestamps preceding the physical touch timestamp may be analyzed backwards.
  • the analysis of the CMSRs may provide a probability of an intended touch input.
  • the device mobility that is induced due to the touch intention may be determined.
  • the causal relation from the user's intention of touch to the mobile device's movement patterns is analyzed through the historical CMSR value changes.
  • the probability value of the user's intention of touch may be output by analyzing the historical CMSR values in the recent past preceding the physical touch timestamp.
  • the probability of the intended touch input may be used to determine if the user actually intended to touch the display/user interface or not. This probability determination may eliminate the unintended screen/display touches that happen when the user is in a crowded place.
  • the method 600 comprises determining whether the probability of the intended touch input is greater than a user touch dynamics threshold to identify the unintended touch.
  • the user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device.
  • the user touch dynamics threshold may be tuned as per the user touch instances to maximize determination accuracy of a true touch.
  • the method 600 comprises discarding the received touch input when the probability of the intended touch input is less than the user touch dynamics threshold.
  • when the probability of the intended touch input is less than the user touch dynamics threshold, it may indicate an unintended/inadvertent touch input. Accordingly, the touch input should be discarded without performing any corresponding action on the touch input.
  • the method 600 comprises determining an actual touch intended timestamp, from the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of intended touch input is greater than the user touch dynamics threshold.
  • to determine the actual touch intended timestamp, the CMSR pattern (i.e., the series of CMSRs) that caused the probability of the intended touch input to cross the threshold may be extracted.
  • an output corresponding to the timestamp of the onset of the CMSR pattern may be provided.
  • the timestamp of the onset of the CMSR pattern corresponds to the actual touch intended timestamp.
  • the actual touch intended timestamp is required to extract the UI layout that was present at the time when the user decided to click/touch on something, so as to find out which UI element stimulated him/her to do so.
  • the method 600 comprises extracting UI layouts both at the specific timestamp and the actual touch intended timestamp.
  • the UI layouts may be extracted from a database (local or cloud-based), which were previously stored in step 606.
  • Each of the UI layouts may include one or more UI elements.
  • the UI layout corresponding to the specific timestamp or physical touch timestamp may include at least the UI element that the user actually touched on the display interface of the mobile device.
  • the UI layout corresponding to the actual touch intended timestamp may include at least the UI element that the user actually intended to touch on the display interface.
  • the method 600 comprises determining whether the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
  • the comparison of the UI layouts may include comparing UI elements at one or more positions within each of the two UI layouts. If all the UI elements within each of the two UI layouts are exactly the same, then it may be determined that the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
  • the method 600 comprises executing the received touch input as-is (i.e., without any change), when the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
  • the touch input may be executed as-is.
  • the method 600 comprises determining an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp.
  • the determination that the UI layouts at the specific timestamp and the actual touch intended timestamp are different may be based on a determination that one or more UI elements differ at a relative position or coordinate within the two UI layouts.
  • the coordinate or relative position of the received touch input may be provided as an input at the step 626.
  • the actual intended UI element may be extracted from the coordinate of the touch input in the UI layout at the specific timestamp. This may be based on the assumption that the user of the mobile device actually intended to touch the UI element at the same coordinate as the coordinate of the UI element present in the UI layout at the specific timestamp or the physical touch timestamp.
  • after extracting the UI layout elements from the database, the output of this step may provide the UI element that was present at the touch coordinates on the mobile device at the time of touch intention.
  • the UI element may correspond to a user element identification, such as, but not limited to, a button or a text field.
  • the method 600 comprises redirecting an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp or the physical touch timestamp.
  • a determination of whether the UI is loaded completely may be performed prior to redirecting the action to the actual intended UI element at step 630.
  • the current UI may include a pop-up or an advertisement, which may not include the actual intended UI element. If the UI is loaded completely, then the method moves to step 630 to determine a delay in execution of the actual intended UI element. If the UI is not loaded completely, the system may wait until the UI is loaded completely before moving to step 630.
  • the method 600 comprises estimating a delay time period required for the execution of an intended touch on the actual intended UI element, based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI.
  • the current UI state may include a pop-up or an advertisement, and thus, the actual intended UI element may not be present for execution.
  • the delay time period required for execution may correspond to the time at which the pop-up or advertisement automatically disappears from the display interface, thereby re-presenting the previous UI layout associated with the actual touch intended timestamp.
  • if the current UI state includes a pop-up or an advertisement that does not include the actual intended UI element, then it may imply that the actual intended UI element is currently obscured by another UI element on the current UI.
  • the method 600 comprises discarding the received touch input when the obscuring UI element is still present after the estimated time period.
  • if the obscuring UI element (e.g., a pop-up or an advertisement) is still present after the estimated time period, the received touch input may be discarded.
  • discarding the received touch input on the obscuring UI element avoids an unnecessary execution of a touch input that may otherwise have redirected to a link within an unintended advertisement or to a pop-up notification.
  • the method 600 comprises executing the action corresponding to the received touch input on the actual intended UI element when the obscuring UI element disappears within the estimated time period.
  • the received touch input may be executed on the actual intended UI element.
  • the execution may include, but is not limited to, a click, a touch, or navigating to the next page.
  • the execution of the received touch input may be performed in conjunction with the operating system or processor/controller of the mobile device, as performed conventionally, but on the actual intended UI element.
  • Figure 8 illustrates an exemplary use case for prevention from clicking on sudden pop-ups at a UI of a user device according to an embodiment of the present disclosure, as compared to the existing art.
  • the present disclosure avoids such scenarios of inadvertent touches on the mobile device.
  • the present disclosure utilizes the CMSR scores during the time duration in which the user decides to touch the intended list item and moves a finger towards its location.
  • the inertial and pressure sensors start recording the movement at this time, Tintended (1.22 sec), as depicted in 808.
  • the mobile device detects the sensor reading pattern from Tintended to the actual touch time Tactual, as depicted in 810. This pattern is analyzed to obtain the intended touch coordinate at Tintended.
  • the correct link is opened, as depicted in 812.
  • based on the system of the present disclosure, the user does not have to see the advertisement link; instead, the correct link at the intended UI element is opened, even though the user provided an inadvertent touch on the advertisement link.
  • the present disclosure provides various technical advancements over the existing art.
  • the present disclosure provides an enhanced UI experience for users by obstructing inadvertent touch inputs on advertisements and pop-ups that appear abruptly on the display interface of the mobile device. Since the present disclosure tracks sudden changes in the UI during the reaction time of the user, many errors due to the sudden appearance of UI elements on the native UI of devices are prevented.
  • the present disclosure facilitates user privacy protection. Due to the prevention of unintended touches from suddenly appearing fraudulent links or scams, the present disclosure provides better privacy protection to smartphone users. Further, by avoiding unintended clicks and accurately determining touches intended by the user, the present disclosure improves the user experience while interacting with the smartphone. Moreover, because user touch dynamics are used, touch recognition is better tuned to the user's persona and the way the user handles the mobile device. Furthermore, the present disclosure helps avoid clicking on scams, fraudulent links, and advertisements, and thus the user is better protected from fraudulent activities and unintended redirections. An end-to-end sketch of the decision flow summarized above is provided in the example following this list.
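As a non-normative reading aid for the steps listed above, the overall decision flow can be sketched in Python. This is a minimal sketch under assumed interfaces: the history object, the probability model, the timestamp estimator, the hit-testing helper, and the redirect routine are all hypothetical callables standing in for the components detailed later in the description; only the control flow mirrors the disclosed method.

```python
def handle_touch(touch_ts, touch_xy, history, threshold,
                 intent_probability, intended_timestamp, element_at, redirect):
    """Sketch of the disclosed decision flow; all callables are hypothetical stand-ins."""
    cmsrs = history.cmsrs_before(touch_ts)            # historical CMSRs preceding the physical touch
    if intent_probability(cmsrs) <= threshold:        # compare against user touch dynamics threshold
        return "discard"                              # unintended/inadvertent touch
    t_intended = intended_timestamp(cmsrs, touch_ts)  # actual touch intended timestamp
    layout_now = history.layout_at(touch_ts)
    layout_then = history.layout_at(t_intended)
    if layout_now == layout_then:
        return "execute_as_is"                        # same layout: execute the touch unchanged
    element = element_at(layout_then, *touch_xy)      # actual intended UI element at the touch coordinate
    return redirect(element, touch_ts)                # redirect, waiting out any obscuring pop-up
```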


Abstract

A method and system for identification of an unintended touch and an intended user-interface element at a UI is disclosed. The method comprises receiving a touch input at the UI at a specific timestamp. Further, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp may be extracted from a database. A probability of an intended touch input based on the plurality of CMSRs may be determined. Furthermore, the probability of the intended touch may be compared with a user touch dynamics threshold to identify the unintended touch. Subsequently, an actual touch intended timestamp, of the plurality of preceding timestamps, may be determined based on the plurality of CMSRs, responsive to the comparison. An actual intended UI element from a UI layout at the actual touch intended timestamp may be identified for execution.

Description

METHODS AND SYSTEMS FOR IDENTIFICATION OF AN UNINTENDED TOUCH AT A USER INTERFACE OF A DEVICE
The present disclosure generally relates to identification of an unintended touch and an intended user interface (UI) element at a UI of a user device, and more particularly relates to methods and systems for accurately identifying and translating of touch coordinates for unintended touch prevention.
In the current era of mobile devices, web browsing and software applications include many pop-ups and advertisements integrated within the content presented on the display. Often, advertisements and fraudulent links pop up at places where a user was intending to click on a specific link in the current display window. For example, a user may intend to click on a list item displayed on the screen, but an advertisement may appear suddenly, just as the user is about to click, so that the touch leads to the advertisement link instead. Thus, with advertisements and pop-ups appearing within the blink of an eye, it is very easy to accidentally provide a touch input on pop-ups/advertisements that the user never intended to touch. The intended element to be clicked could get replaced by something else on the mobile phone's native UI.
Additionally, such inadvertent touch-based inputs may also be provided during regular mobile operations. For instance, a user may intend to click on the search view of an application store, but a social media notification may appear and get touched instead, which leads to the social media application.
Further, due to network dependency, websites often load slowly or have changing UI elements. This may cause the user to click on unintended items on the website, because while the user is clicking, the website may load an item that pushes the intended item up or down. Furthermore, click-bait sites and sites causing security vulnerabilities intentionally load the UI slowly to increase clicks and accidental redirections. Such instances of unintended clicks may lead to user frustration as well as cause security risks.
Accordingly, there is a need to provide mechanisms for sensing inadvertent clicks or touch-based inputs on pop-ups/ads/notifications to enhance the web-usage experience. Additionally, there is a need to avoid inadvertent clicks/inputs which cause security vulnerabilities for users.
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the disclosure. This summary is neither intended to identify key or essential inventive concepts of the disclosure, nor is it intended for determining the scope of the disclosure.
According to one embodiment of the present disclosure, a method for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device is disclosed. The method includes receiving a touch input at the user interface at a specific timestamp. Further, the method includes extracting, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp. Furthermore, the method includes determining a probability of an intended touch input based on the plurality of composite multi-sensor scores. Additionally, the method includes comparing the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch. Moreover, the method includes determining an actual touch intended timestamp, of the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold. Further, the method includes identifying an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
According to another embodiment of the present disclosure, a system for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device is disclosed. The system comprises a receiving module configured to receive a touch input at the user interface at a specific timestamp. Further, the system comprises an intended touch deriving module in communication with the receiving module and configured to: extract, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp; determine a probability of an intended touch input based on the plurality of composite multi-sensor scores; compare the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch; determine an actual touch intended timestamp, of the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold; and identify an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the disclosure and are therefore not to be considered limiting of its scope. The disclosure will be described and explained with additional specificity and detail with the accompanying drawings.
Before undertaking the Mode for Invention below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms "include" and "comprise," as well as derivatives thereof, mean inclusion without limitation; the term "or," is inclusive, meaning and/or; the phrases "associated with" and "associated therewith," as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term "controller" means any device, system or part thereof that controls at least one operation, such a device may be implemented in hardware, firmware or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely.
Moreover, various functions described below can be implemented or supported by one or more computer programs, each of which is formed from computer readable program code and embodied in a computer readable medium. The terms "application" and "program" refer to one or more computer programs, software components, sets of instructions, procedures, functions, objects, classes, instances, related data, or a portion thereof adapted for implementation in a suitable computer readable program code. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory. A "non-transitory" computer readable medium excludes wired, wireless, optical, or other communication links that transport transitory electrical or other signals. A non-transitory computer readable medium includes media where data can be permanently stored and media where data can be stored and later overwritten, such as a rewritable optical disc or an erasable memory device.
These and other features, aspects, and advantages of the present disclosure will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to current prior art;
Figure 2 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to an embodiment of the present disclosure;
Figures 3A-3C illustrate various scenarios of variation in composite multi-sensor scores over time, according to various embodiments of the present disclosure;
Figure 4 illustrates a schematic block diagram of an intended element identification system within a mobile device, according to an embodiment of the present disclosure;
Figure 5 illustrates a schematic block diagram of modules of the intended element identification system, according to an embodiment of the present disclosure;
Figures 6A-6C illustrate an exemplary process flow depicting a method for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, according to an embodiment of the present disclosure;
Figure 7 illustrates an exemplary methodology to determine user touch dynamics threshold, according to an embodiment of the present disclosure; and
Figure 8 illustrates an exemplary use case for prevention from clicking on sudden pop-ups at a UI of a user device according to an embodiment of the present disclosure, as compared to the existing art.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
FIGS. 1 through 8, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments. The term "or" as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
As is traditional in the field, embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as managers, units, modules, hardware components or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
The accompanying drawings are used to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be construed to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally used to distinguish one element from another.
Figure 1 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up or advertisement, according to current prior art. As depicted, the mobile device 100 may include a display/user interface 102 comprising one or more user-interface (UI) elements, including 104, 106, and 108. The display interface 102 may correspond to an interface of a web-browser, live website, interface of a mobile device, or a software application running on a mobile device. Each of the UI elements 104-108 may correspond to a button, text-field, picture, control element, a selectable option of a webpage or interface of a mobile device, or any user interface element well-known in the art.
When the user of the mobile device 100 decides to click on UI element 106, it may happen that the UI element is obscured by another UI element 110 due to a pop-up, application-based notification, or an advertisement on the display interface, before the user actually provides the touch input on the display interface 102. Since there is a time gap of milliseconds between the time when the user decides to provide a touch input and the time when the user actually provides the touch input, the user may inadvertently provide a touch input on the other UI element 110. The other UI element 110 may also include, but is not limited to, one or more hyperlinks, links to software applications, or any other user interface element mentioned above. According to current prior art systems, once the user inadvertently provides a touch input on UI element 110, the hyperlink or the link to the software application may be executed as-is. Thus, the user may be provided with an advertisement, software application, or any other further display interface associated with the UI element, which the user never intended to see, while the user was expecting execution of the touch input on the UI element 106. This may lead to user frustration and security concerns due to inadvertent opening of unintended links or pop-ups.
Figure 2 illustrates an exemplary scenario of execution of an inadvertent touch input on a pop-up, application-based notification, advertisement, or the like, according to an embodiment of the present disclosure. As depicted, in various embodiments of the present disclosure, the UI element 110 is not executed as-is on the display interface 102. Specifically, according to various embodiments of the present disclosure, an intended element identification system may be provided which is configured to determine whether the user actually intended to click on the UI element 110. If it is determined that the user did not intend to touch UI element 110 and instead intended to touch UI element 106, the system may wait for UI element 110 to disappear from the display interface. Further, once the UI element 110 disappears from the display interface and upon reproduction of UI element 106 on the display interface, the touch action corresponding to the UI element 106 may be executed.
The present disclosure considers analysis of the duration from the point when the user of the mobile device 100 starts to move his/her finger to the point of actually touching the display interface 102. During this time duration, one or more inertial sensor (e.g., accelerometer, gyroscope, and/or magnetometer) values as well as the grip/pressure values change, and thus, the intended element identification system of the present disclosure is able to detect whether a touch has been made or not. The inertial sensor values are recorded to note the change in values over time, for detection of the intended and actual timestamps of touching the display interface 102. Further, a pattern of spikes in the values of the inertial and pressure/grip sensors may be observed, from the time when the user starts moving the finger to actually touching the display interface. The pattern of spikes further facilitates detection of the intended and actual timestamps of touching the display interface. Upon determination of the intended timestamp, a UI element present on the display interface 102 at the touch coordinates may be extracted from a database, which may correspond to the UI element 106 that the user intended to touch or select.
Specifically, the intended element identification system may be configured to identify intended touches in a UI by analyzing the historical movements of the mobile device 100 (inertial sensors, pressure/grip sensor) and the UI layout history (over a period of time) to obtain a likelihood that the user intended to interact with a UI element 110 which has just loaded, as against the UI element 106 that was actually intended to be interacted with, in order to prevent unintended touches, which further prevents user annoyance, privacy issues, and security risks.
Figures 3A-3C illustrate various scenarios of variation in composite multi-sensor scores over time, according to various embodiments of the present disclosure.
As is well known, a typical human reaction time (i.e., Tgap) from when the user decides to provide a touch input on the display interface to the time of the actual touch on the display interface may be 200 to 300 milliseconds. The present disclosure utilizes this time gap and the associated inertial/pressure sensor values to analyze whether the user actually intended to provide the touch input, as currently provided on the display interface. In other words, the Tgap provides an ample amount of time to process and tabulate inertial sensors' values as well as pressure/grip sensor readings to analyze the actual intended element.
More particularly, there is a peculiar movement detectable by the inertial sensors of the user's mobile device when he/she executes a touch on the phone in his/her hand, which happens in the time duration from Tintended (the instant the user decides to touch something on the display interface) to Tactual (the instant the user actually touches the display interface). Further, there is a change in the applied pressure on the pressure sensor and grip sensor between Tintended and Tactual. These changes are used to further enhance the intended time detection.
Figs. 3A-3C illustrate a high probability sensor movement score range when a combined sensor score value of inertial and pressure/grip sensors (discussed hereinafter as "composite multi-sensor score" or "CMSR") changes over time.
As illustrated in Fig. 3A, if the peak values of the CMSR are within the intended touch threshold range, i.e., between 302 and 304, then it may be inferred that there is a high probability of the touch being intended by the user.
As illustrated in Fig. 3B, if the peak values of the CMSR are below the intended touch threshold range, i.e., below 302, then it may be inferred that there is very slight and erratic phone movement, implying an unintended or accidental touch due to drowsiness, etc. In other words, peak CMSR values below the intended touch threshold range may imply a low probability of the touch being intended by the user. Such a touch shall be discarded and not acted upon by the mobile device.
Further, as illustrated in Fig. 3C, if the peak values of the CMSR are above the intended touch threshold range, i.e., above 304, then it may be inferred that there is very high phone movement, which may imply somebody bumping into the user while the user was trying to touch the display interface. Again, such CMSR values may imply a low probability of the touch being intended by the user.
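The threshold-range interpretation of Figures 3A-3C can be expressed as a small check. This is a minimal sketch assuming the bounds 302 and 304 are available as numeric values (hypothetical parameters lower_bound and upper_bound); the actual thresholds are device- and user-specific.

```python
def classify_cmsr_peak(cmsr_series, lower_bound, upper_bound):
    """Classify a window of CMSR values against the intended touch threshold range.

    'intended'     : peak inside the range (Fig. 3A), high probability of an intended touch
    'erratic_low'  : peak below the range (Fig. 3B), slight/erratic movement, likely accidental
    'erratic_high' : peak above the range (Fig. 3C), very high movement, e.g. someone bumping the user
    """
    peak = max(cmsr_series)
    if peak < lower_bound:
        return "erratic_low"
    if peak > upper_bound:
        return "erratic_high"
    return "intended"
```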
Figure 4 illustrates a schematic block diagram of an intended element identification system 402, according to an embodiment of the present disclosure. In one embodiment, the intended element identification system 402 may be included within a mobile device 400. In other embodiments, the intended element identification system 402 may be a standalone device or a system. Examples of the mobile device 400 may include, but are not limited to, a mobile phone, a laptop computer, a desktop computer, a Personal Computer (PC), a notebook, a tablet, an e-book reader, a server, a network server, and any other electronic device providing touch-based input functionality.
In one embodiment, the mobile device 400 may include an intended element identification system 402. The intended element identification system 402 may be configured to identify intended touches in a UI by analyzing the historical movements of the mobile device (inertial sensors, pressure/grip sensor) and the UI layout history (over a period of time) to obtain a likelihood of the user intending to interact with a UI element which has just loaded against the UI element that was actually intended to be interacted with in order to prevent unintended touches which further prevents user annoyance, privacy issues and security risks. The intended element identification system 402 may further include a processor/controller 404, an I/O interface 406, inertial sensors 408, pressure sensors 410, transceiver 412, and a memory 414.
In one embodiment, the inertial sensors 408 may include, but are not limited to, an accelerometer, a gyroscope, and a magnetometer. The accelerometer, gyroscope, and magnetometer may be configured to measure standard parameters for the mobile device 400, as well-known in the art. Further, the pressure/grip sensors 410 may include any off-the-shelf pressure sensors embedded in the mobile device 400 for capturing the pressure/grip of the user while holding the mobile device 400. For the sake of brevity, the architecture and functionality of the inertial sensors 408 and pressure/grip sensors 410 are not discussed herein in detail.
In some embodiments, the memory 414 may be communicatively coupled to the at least one processor/controller 404. The memory 414 may be configured to store data and instructions executable by the at least one processor/controller 404. The memory 414 may include one or more modules 416 and a database 418 to store data. The one or more modules 416 may include a set of instructions that may be executed to cause the intended element identification system 402 to perform any one or more of the methods disclosed herein. The one or more modules 416 may be configured to perform the steps of the present disclosure using the data stored in the database 418, to identify an intended UI element on the display interface of the mobile device 400. In an embodiment, each of the one or more modules 416 may be a hardware unit which may be outside the memory 414. Further, the memory 414 may include an operating system 420 for performing one or more tasks of the system 402 and/or mobile device 400, as performed by a generic operating system in the communications domain. For the sake of brevity, the architecture and standard operations of the operating system 420, memory 414, database 418, processor/controller 404, transceiver 412, and I/O interface 406 are not discussed in detail.
In one embodiment, the memory 414 may communicate via a bus within the system 402. The memory 414 may include, but not limited to, a non-transitory computer-readable storage media, such as various types of volatile and non-volatile storage media including, but not limited to, random access memory, read-only memory, programmable read-only memory, electrically programmable read-only memory, electrically erasable read-only memory, flash memory, magnetic tape or disk, optical media and the like. In one example, the memory 414 may include a cache or random access memory for the processor/controller 404. In alternative examples, the memory 414 is separate from the processor/controller 404, such as a cache memory of a processor, the system memory, or other memory. The memory 414 may be an external storage device or database for storing data. The memory 414 may be operable to store instructions executable by the processor/controller 404. The functions, acts or tasks illustrated in the figures or described may be performed by the programmed processor/controller 404 for executing the instructions stored in the memory 414. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro-code and the like, operating alone or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing and the like.
Further, the present disclosure contemplates a computer-readable medium that includes instructions or receives and executes instructions responsive to a propagated signal, so that a device (e.g., mobile device 400) connected to a network may communicate voice, video, audio, images, or any other data over a network. Further, the instructions may be transmitted or received over the network via a communication port or interface or using a bus (not shown). The communication port or interface may be a part of the processor/controller 404 or may be a separate component. The communication port may be created in software or may be a physical connection in hardware. The communication port may be configured to connect with a network, external media, the display, or any other components in the system, or combinations thereof. The connection with the network may be a physical connection, such as a wired Ethernet connection, or may be established wirelessly. Likewise, the additional connections with other components of the intended element identification system 402 may be physical or may be established wirelessly. The network may alternatively be directly connected to the bus.
In one embodiment, the processor/controller 404 may include at least one data processor for executing processes in Virtual Storage Area Network. The processor/controller 404 may include specialized processing units such as, integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc. In one embodiment, the processor/controller 404 may include a central processing unit (CPU), a graphics processing unit (GPU), or both. The processor/controller 404 may be one or more general processors, digital signal processors, application-specific integrated circuits, field-programmable gate arrays, servers, networks, digital circuits, analog circuits, combinations thereof, or other now known or later developed devices for analyzing and processing data. The processor/controller 404 may implement a software program, such as code generated manually (i.e., programmed).
The processor/controller 404 may be disposed in communication with one or more input/output (I/O) devices via the I/O interface 406. The I/O interface 406 may employ communication protocols/methods such as, without limitation, code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like.
Using the I/O interface 406, the mobile device 400 may communicate with one or more I/O devices. For example, the input device may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, stylus, scanner, storage device, transceiver, video device/source, etc. The output devices may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, Plasma Display Panel (PDP), Organic light-emitting diode display (OLED) or the like), audio speaker, etc.
The processor/controller 404 may be disposed in communication with a communication network via a network interface. The network interface may be the I/O interface 406. The network interface may connect to a communication network. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface and the communication network, the mobile device 400 may communicate with other devices.
Figure 5 illustrates a schematic block diagram 500 of modules of an intended element identification system 402, according to an embodiment of the present disclosure. The one or more modules 416 may include a layout and feature extraction module 502, a receiving module 504, an intended touch deriving module 506, and an intended action execution module 508.
In one embodiment of the present disclosure, the layout and feature extraction module 502 may be configured to extract parameters related to a plurality of sensors, and to further extract a user-interface (UI) layout at each of a plurality of timestamps. In one embodiment of the present disclosure, extracting parameters related to a plurality of sensors comprises extracting data or readings related to one or more inertial sensors and one or more pressure/grip sensors present in the mobile device (e.g., mobile device 400). The one or more inertial sensors may include, but are not limited to, a gyroscope, an accelerometer, and a magnetometer. The data/readings related to acceleration, angular velocity, and magnetic field intensity of the mobile device may be extracted for each of a plurality of timestamps using the plurality of sensors. Additionally, the readings of the pressure/grip sensor(s) may indicate the change in applied pressure on these sensors from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device. Specifically, the readings of the plurality of sensors facilitate depicting a peculiar movement of the mobile device in the user's hand between the time when the user decides to touch something on the screen and the time when the user executes the touch on the mobile device.
Additionally, the layout and feature extraction module 502 may be configured to extract a UI layout at each of such plurality of timestamps. The UI layout may comprise one or more UI elements with their relative positions or coordinates, currently displayed at the UI or display interface of the mobile device. In one embodiment, the UI layout may include one or more selectable options at a web-browser interface/layout or UI elements, as discussed above, currently displayed on the mobile device. In another embodiment, the UI layout may include a UI of a software application currently displayed on the mobile device. The UI of the software application may include multiple UI elements to perform various functions associated with the application. For instance, if the software application is a shopping application, the UI elements may include, but are not limited to, a search bar, a menu bar, an "add to cart" button, an "add to wish list" button, and one or more shopping items.
Further, the layout and feature extraction module 502 may be configured to determine a composite multi-sensor score (CMSR) at each of the plurality of timestamps based on parameter readings of the plurality of sensors recorded at the corresponding timestamp. In one embodiment, readings of multiple sensors (e.g., accelerometer, gyroscope, magnetometer, etc.) recorded at each timestamp are aggregated. A final CMSR is output for each of the plurality of timestamps based on the aggregate readings for the corresponding timestamp. The CMSR may indicate an amplitude of the readings of the plurality of sensors. Additionally, the series of CMSRs for the plurality of timestamps may indicate the movement of the user's mobile device from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device.
In one exemplary embodiment, the CMSR for each timestamp is based on a normalized linear combination of readings of the plurality of sensors recorded at the corresponding timestamp. The normalized readings may then be processed using a non-linear multivariate regression function to determine a final CMSR for each timestamp. For example, a plurality of sensor readings at a specific timestamp may be recorded as:
Sensor Parameters
Barometer : 980 hPa
Side bezel pressure : 701 hPa
Altitude : 270 m
Gyroscope Y : 0.04
Gyroscope P : -0.07
Gyroscope R : -0.01
Magnetic X : 15
Magnetic Y : -20
Magnetic Z : -1
The above readings may be subsequently normalized, followed by further processing using a non-linear multivariate regression function to output the CMSR for a specific timestamp. An exemplary set of CMSRs for a series of timestamps, using the plurality of sensor readings for each timestamp, is depicted in Table 1 below:
[Table 1]
Figure PCTKR2022001984-appb-I000001
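The normalization and regression steps described above can be illustrated as follows. This is a minimal sketch: the per-sensor bounds, the linear weights, and the quadratic term standing in for the non-linear multivariate regression function are all assumptions, since the disclosure does not specify the exact function or coefficients.

```python
import numpy as np

# Hypothetical per-sensor normalization bounds (min, max); illustrative only.
SENSOR_BOUNDS = {
    "barometer_hpa":      (900.0, 1100.0),
    "bezel_pressure_hpa": (600.0, 800.0),
    "gyro_yaw":           (-1.0, 1.0),
    "gyro_pitch":         (-1.0, 1.0),
    "gyro_roll":          (-1.0, 1.0),
    "mag_x":              (-100.0, 100.0),
    "mag_y":              (-100.0, 100.0),
    "mag_z":              (-100.0, 100.0),
}

def compute_cmsr(readings, linear_weights, quad_weights):
    """Composite multi-sensor score for one timestamp.

    readings       : dict of raw sensor values keyed as in SENSOR_BOUNDS
    linear_weights : per-sensor weights for the normalized linear combination
    quad_weights   : per-sensor weights for the squared terms, a simple stand-in
                     for the non-linear multivariate regression step
    Weights follow the sorted order of SENSOR_BOUNDS keys.
    """
    keys = sorted(SENSOR_BOUNDS)
    x = np.array([(readings[k] - SENSOR_BOUNDS[k][0]) /
                  (SENSOR_BOUNDS[k][1] - SENSOR_BOUNDS[k][0]) for k in keys])
    return float(np.dot(linear_weights, x) + np.dot(quad_weights, x ** 2))
```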
Furthermore, the layout and feature extraction module 502 may be configured to store the CMSRs and UI layouts for each of the plurality of timestamps in a database. In one embodiment, the CMSRs and UI layouts may be stored in a local database (e.g., database 418) of the mobile device 400. In another embodiment, the CMSRs and UI layouts may be stored in a cloud database (not shown) associated with the mobile device.
In some embodiments, the layout and feature extraction module 502 may be configured to compare the two UI layouts for successive timestamps to detect the changes in UI elements between such successive timestamps. An exemplary set of UI layouts at successive timestamps, indicating differences in UI elements as compared to the previous timestamp, is illustrated in Table 2 below:
[Table 2]
Figure PCTKR2022001984-appb-I000002
In one embodiment of the present disclosure, the receiving module 504 may be configured to receive a physical touch input at the user/display interface of the mobile device at a specific timestamp. In one embodiment, the touch input may be received on the user/display interface of the mobile device via, but not limited to, a finger touch, click using a pen, button press, or a finger swipe. Further, the touch input may correspond to selecting a specific UI element, of a plurality of UI elements, currently displayed on the UI of the mobile device. The specific timestamp may be recorded as a physical touch timestamp, when the user provided the input at the display interface. In one embodiment, a coordinate or relative position of the received touch input may also be recorded. The coordinate may correspond to a position of a UI element that the user touched at the present display interface on the mobile device.
In one embodiment of the present disclosure, the intended touch deriving module 506 may be configured to extract, from the database, the CMSRs for each of a plurality of timestamps preceding the specific timestamp or the physical touch timestamp, at which the user provided the touch input on the display interface. In one exemplary embodiment, the CMSRs for 5-20 timestamps, before the physical touch timestamp, may be extracted from the database. The extracted historical CMSRs indicate a particular movement of the mobile device and the pressure/grip change on the mobile device immediately before the physical touch timestamp, i.e., before the user physically touched the mobile device to provide the touch input.
In one embodiment of the present disclosure, a number of the plurality of timestamps (e.g., 5 to 20) for which the CMSRs are extracted may correspond to a predefined number of timestamps, based on a pre-recorded movement of the user's finger and pressure before he/she touches the display interface. The pre-recorded movement may be learned using a machine learning or neural network model. The machine learning or neural network model may be based on a pattern of spikes observed in the values of one or more of the accelerometer, gyroscope, magnetometer, and pressure/grip sensor(s) from when the user starts moving his/her finger to providing a touch input. The pre-recorded movement may be specific to each user. Thus, in one embodiment, a user identification based on one or more known user identification techniques (e.g., biometric or other known techniques) may be performed. For instance, while one user may exhibit finger and pressure changes over 2 seconds, another user may exhibit a movement of 0.5 seconds before providing an actual touch input. Accordingly, the plurality of timestamps for which the historical CMSRs are extracted may correspond to such 0.5 or 2 seconds before the physical touch timestamp. As may be appreciated, these timestamps are merely exemplary for the sake of description, and the values may vary from user to user.
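A rolling store of the kind described (0.05-second sampling, 15-minute retention, user-specific look-back window) could be sketched as follows. The class name and the window-extraction interface are hypothetical, and the 0.3-second default window merely reflects the typical reaction time mentioned in the disclosure; in practice the window would be tuned per user.

```python
from collections import deque

SAMPLE_PERIOD_S = 0.05   # sampling interval mentioned above
RETENTION_S = 15 * 60    # keep only the last 15 minutes of samples

class SensorHistory:
    """Rolling store of (timestamp, CMSR, ui_layout) samples, ordered by timestamp."""

    def __init__(self):
        self._samples = deque()

    def add(self, timestamp, cmsr, ui_layout):
        self._samples.append((timestamp, cmsr, ui_layout))
        # Continuously erase samples older than the retention horizon.
        while self._samples and timestamp - self._samples[0][0] > RETENTION_S:
            self._samples.popleft()

    def cmsrs_before(self, touch_timestamp, window_s=0.3):
        """CMSRs in the user-specific window preceding the physical touch timestamp."""
        return [cmsr for ts, cmsr, _ in self._samples
                if touch_timestamp - window_s <= ts < touch_timestamp]

    def layout_at(self, timestamp):
        """UI layout of the stored sample closest to the given timestamp."""
        if not self._samples:
            return None
        ts, _, layout = min(self._samples, key=lambda s: abs(s[0] - timestamp))
        return layout
```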
Further, the intended touch deriving module 506 may be configured to determine a probability of an intended touch input based on an analysis of the plurality of extracted CMSRs. The historical CMSRs of a plurality of timestamps preceding the physical touch timestamp may be analyzed backwards. In one embodiment, the analysis of the extracted historical CMSRs may be performed using a trained machine learning or neural network model. For example, the analysis of the extracted historical CMSRs may be performed using a many-to-one recurrent neural network (RNN) model. The analysis of the CMSRs using the RNN may provide a probability of an intended touch input.
In the following Table 3 of historical CMSRs preceding the physical touch timestamp, the plurality of CMSRs are analyzed using the many-to-one RNN model to output the probability of intended touch input as 0.78.
[Table 3]
Figure PCTKR2022001984-appb-I000003
In one embodiment, to determine the probability of the intended touch input, the device mobility that is induced due to the touch intention may be determined. The causal relation from the user's intention of touch to the mobile device's movement patterns is analyzed through the historical CMSR value changes. Thus, a probability value of the user's intention of touch may be output by analyzing the historical CMSR values in the near past relative to the physical touch timestamp. The probability of the intended touch input may be used to determine whether the user actually intended to touch the display/user interface or not. This probability determination may eliminate the unintended screen/display touches that happen when the user is in a crowded place. In one example, a negative determination of the intended touch (i.e., a low probability) may eliminate the scenarios of unintended touches illustrated in Figs. 3B and 3C.
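The many-to-one recurrent model mentioned above could be realized, for instance, with a small GRU. This is a minimal PyTorch sketch; the layer sizes, the GRU choice, and the untrained example input are assumptions, as the disclosure only specifies a many-to-one RNN producing a probability from a sequence of historical CMSRs.

```python
import torch
import torch.nn as nn

class IntentProbabilityRNN(nn.Module):
    """Many-to-one RNN: a sequence of historical CMSRs -> probability of an intended touch."""

    def __init__(self, hidden_size=16):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, cmsr_sequence):
        # cmsr_sequence: (batch, seq_len, 1) tensor of CMSR values
        _, h_n = self.rnn(cmsr_sequence)
        return torch.sigmoid(self.head(h_n[-1]))   # probability in [0, 1]

# Usage sketch: 12 historical CMSRs preceding the physical touch timestamp.
model = IntentProbabilityRNN()
cmsrs = torch.randn(1, 12, 1)   # placeholder values, not real sensor readings
p_intended = model(cmsrs).item()
```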
Furthermore, the intended touch deriving module 506 may be configured to determine whether the probability of the intended touch input is greater than a user touch dynamics threshold to identify the unintended touch. The user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device. The user touch dynamics threshold may be tuned as per the user touch instances to maximize the determination accuracy of a true touch. In one embodiment, the user touch dynamics threshold may be a periodically updated threshold value based on user inputs at the UI of the mobile device, providing an induced touch probability. Further, the user touch dynamics threshold may capture the user's touch action characteristics in terms of device handling, movement, etc., which are reflected in the CMSR scores.
In one exemplary embodiment, the user touch dynamics threshold may be determined/estimated based on a training performed using the user/display interface of the mobile device 400, as depicted in Figure 7. For instance, the user of the mobile device 400 may be presented with a balloon-popping exercise to learn the user's touch dynamics. During the exercise, the intended touch timestamp (i.e., when the balloon 702 is displayed on the display interface) and the physical touch timestamp (i.e., when the user actually touches the balloon 702 to pop it) may be recorded. Further, the plurality of CMSRs between the intended touch timestamp and the physical touch timestamp may be analyzed to determine an aggregate value of the user touch dynamics threshold. This exercise may be performed on a periodic basis to update the user touch dynamics threshold, so as to reflect the user's current manner of finger and pressure movement more realistically while providing a touch input.
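One way to aggregate the calibration data from the balloon-popping exercise into a threshold is sketched below; the mean-minus-margin rule is an assumption, as the disclosure states only that an aggregate value is derived and periodically updated.

```python
def calibrate_touch_dynamics_threshold(calibration_probabilities, margin=0.05):
    """Aggregate intended-touch probabilities from known intended touches
    (the balloon-popping exercise) into a user touch dynamics threshold.

    The mean-minus-margin rule is illustrative only; any aggregate that tracks
    the user's own touch dynamics could be substituted.
    """
    mean_p = sum(calibration_probabilities) / len(calibration_probabilities)
    return max(0.0, mean_p - margin)

# Usage sketch with placeholder calibration values:
threshold = calibrate_touch_dynamics_threshold([0.81, 0.77, 0.84, 0.79])
```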
In response to determining that the probability of the intended touch input is less than the user touch dynamics threshold, the intended touch deriving module 506 may be configured to discard the received touch input. When the probability of the intended touch input is less than the user touch dynamics threshold, it may indicate an unintended/inadvertent touch input. Accordingly, the touch input should be discarded without performing any corresponding action on the touch input.
Furthermore, the intended touch deriving module 506 may be configured to determine an actual touch intended timestamp, from the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of the intended touch input is greater than the user touch dynamics threshold. In one embodiment, the actual touch intended timestamp corresponds to the time when the user decides to click on the screen and always precedes the physical touch timestamp. To determine the actual touch intended timestamp, the CMSR pattern (i.e., a series of CMSRs) which resulted in the probability of the intended touch input crossing the threshold may be extracted, and an output corresponding to the timestamp of the onset of the CMSR pattern may be provided. The timestamp of the onset of the CMSR pattern corresponds to the actual touch intended timestamp. The actual touch intended timestamp is required to extract the UI layout that was present at the time when the user decided to click/touch on something, so as to find out which UI element stimulated him/her to do so.
In one embodiment, the actual touch intended timestamp may be determined using a machine learning or neural network model. For example, the actual touch intended timestamp may be determined using a trained many-to-one RNN model. If the CMSR score pattern comprises a window of length "N" (i.e., N scores), then the output provided by the RNN model may correspond to:
TI = TP - N + 2
where, TI corresponds to actual touch intended timestamp,
TP corresponds to specific timestamp or physical touch timestamp, and
N corresponds to the length of the window of CMSR scores (e.g., 4 successive CMSR scores)
An exemplary set of output probabilities for each candidate actual touch intended time is calculated and provided in Table 4 below. The actual touch intended time may be selected based on the timestamp with the highest probability. In the current example, the highest probability is calculated for "TP - N + 2", and thus, the actual touch intended timestamp may correspond to "TP - N + 2".
[Table 4]
Figure PCTKR2022001984-appb-I000004
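The onset selection behind the TI = TP - N + 2 example can be sketched as follows; the per-offset probabilities would in practice come from the trained many-to-one RNN and are placeholders here.

```python
def actual_touch_intended_timestamp(tp, onset_probabilities, n):
    """Select the onset of the CMSR pattern with the highest probability.

    tp                  : index of the physical touch timestamp (TP)
    onset_probabilities : probability per offset k, where the candidate
                          timestamp is TP - N + k
    n                   : length N of the CMSR window that crossed the threshold
    """
    best_k = max(onset_probabilities, key=onset_probabilities.get)
    return tp - n + best_k

# Usage sketch mirroring the example above, where TP - N + 2 has the highest probability:
probs = {1: 0.10, 2: 0.85, 3: 0.40}   # placeholder probabilities
ti = actual_touch_intended_timestamp(tp=100, onset_probabilities=probs, n=4)   # 100 - 4 + 2 = 98
```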
Furthermore, the intended touch deriving module 506 may be configured to extract UI layouts both at the specific timestamp and at the actual touch intended timestamp. The UI layouts may be extracted from a database (local or cloud-based), where they were previously stored by the layout and feature extraction module 502. Each of the UI layouts may include one or more UI elements. In one example, the UI layout corresponding to the specific timestamp or physical touch timestamp may include at least the UI element that the user actually touched on the display interface of the mobile device. Further, the UI layout corresponding to the actual touch intended timestamp may include at least the UI element that the user actually intended to touch on the display interface.
Furthermore, the intended touch deriving module 506 may be configured to determine whether the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp. The comparison of the UI layouts may include comparing UI elements at one or more positions within each of the two UI layouts. If all the UI elements within each of the two UI layouts are exactly the same, then it may be determined that the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
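The layout comparison could look like the following sketch, assuming each stored layout maps a relative position to a UI element identifier; the stored representation is not specified in the disclosure beyond elements and their relative positions.

```python
def layouts_match(layout_at_touch, layout_at_intent):
    """Position-by-position comparison of two stored UI layouts.

    Each layout is assumed to map a relative position (x, y) to a UI element id.
    Returns True only when every position holds exactly the same element in both layouts.
    """
    if layout_at_touch.keys() != layout_at_intent.keys():
        return False
    return all(layout_at_touch[pos] == layout_at_intent[pos] for pos in layout_at_touch)
```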
In one embodiment of the present disclosure, the intended action execution module 508 may be configured to execute the received touch input as-is, when the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp. When the layouts at the specific timestamp and the actual touch intended timestamp are the same, it may be implied that the UI element that the user actually intended (or decided) to touch is still present in the UI layout at the specific or physical touch timestamp when the user provided the touch input. Accordingly, the touch input may be executed as-is. In one embodiment, the touch input may be executed in coordination with a processor/controller or touch interface, as conventionally performed in mobile devices.
Furthermore, the intended touch deriving module 506 may be configured to determine an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp. The determination that the UI layouts of the specific timestamp and the actual touch intended timestamp are different may be based on a determination that one or more UI elements differ at a relative position or coordinate within the two UI layouts. The coordinate or relative position of the received touch input may be provided as an input to the module. The actual intended UI element may be extracted, from the UI layout at the actual touch intended timestamp, at the coordinate of the touch input received at the specific timestamp. This may be based on the assumption that the user of the mobile device actually intended to touch the UI element located, in the UI layout at the actual touch intended timestamp, at the same coordinate as the touch input received at the specific timestamp or physical touch timestamp.
Thus, after extracting the UI layout elements from the database, the output of this step may provide the UI element that was present at the touch coordinates on the mobile device at the time of touch intention. In one embodiment, the UI element may correspond to a UI element identification, such as, but not limited to, a button or a text field.
In an exemplary embodiment, the (X, Y) coordinates of the received touch input on the device may correspond to (64.86, 104.25). In one example, the UI layout corresponding to the actual touch intended timestamp may correspond to the layout at 10:00:00 shown in Table 2. Based on mapping the input coordinates onto the UI layout at the actual touch intended timestamp, it may be determined that the actual intended UI element corresponds to id3. Additionally, further properties, such as, but not limited to, the element type, allowed actions, purpose, and whether the UI element is currently active, may be determined for the element id3. An illustration of exemplary UI element properties is provided in Table 5 below:
[Table 5]
Figure PCTKR2022001984-appb-I000005
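A minimal hit-test sketch for this mapping is shown below; the layout representation, bounding boxes, and element properties are hypothetical values chosen only to reproduce the example above, and do not reflect the contents of Table 2 or Table 5.

from typing import Optional

def find_intended_element(layout, x, y) -> Optional[dict]:
    """Return the UI element whose bounding box contains the touch coordinate,
    looked up in the UI layout stored for the actual touch intended timestamp."""
    for element in layout:
        left, top, right, bottom = element["bounds"]
        if left <= x <= right and top <= y <= bottom:
            return element
    return None

# Hypothetical layout at the actual touch intended timestamp (10:00:00); the
# bounding boxes are invented purely so that (64.86, 104.25) falls inside id3.
layout_10_00_00 = [
    {"id": "id1", "type": "text_field", "bounds": (0, 0, 360, 80)},
    {"id": "id3", "type": "button", "bounds": (40, 90, 320, 140)},
]
intended = find_intended_element(layout_10_00_00, 64.86, 104.25)   # -> element "id3"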
Furthermore, the intended action execution module 508 may be configured to redirect an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp or the physical touch timestamp. In one embodiment, prior to redirecting the action to the actual intended UI element, the intended action execution module 508 may be configured to determine whether the UI is loaded completely. The current UI may include a pop-up or an advertisement, which may not include the actual intended UI element. If the UI is loaded completely, then the method moves to the next step to determine a delay in execution for the actual intended UI element. If the UI is not loaded completely, the system may wait until the UI is completely loaded before moving to the next step.
Furthermore, the intended action execution module 508 may be configured to estimate an amount of delay in the time period required for the execution of an intended touch for the actual intended UI element, based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI. In one embodiment, the current UI state may include a pop-up, a notification, or an advertisement, and thus the actual intended UI element may not be present for execution. In this case, the amount of delay in the time period required for execution may correspond to the time period after which the pop-up or advertisement automatically disappears from the display interface, thereby re-presenting the previous UI layout associated with the actual touch intended timestamp. Hence, if the current UI state includes a pop-up or an advertisement that does not include the actual intended UI element, it may be implied that the actual intended UI element is currently obscured by another UI element on the current UI.
In one embodiment, to determine the amount of delay, the current UI layout may be analyzed. In an exemplary scenario comprising an advertisement in the current UI layout, one or more UI element legends, such as, but not limited to, a video player window, an app install link, a subscribe button, and an official merchandise link, may be analyzed to determine the amount of delay. The video player window may indicate "Ad: 19 secs", which may be interpreted as 19 seconds remaining until the advertisement disappears and the actual intended UI layout and element are restored. Thus, the amount of delay in the time period required for the execution of the actual intended touch may correspond to 19 seconds. In another example, the current UI layout may include a pop-up or notification corresponding to a social media application at the coordinates of the actual intended touch input. In such a scenario, the time for disappearance of the pop-up or notification may be determined in coordination with the operating system of the mobile device, or the settings or policy of the corresponding social media application. Thus, the delay determination may be based on image extraction (or UI element extraction) from the current UI layout and/or based on policies or settings of the mobile device/software application. In one embodiment, the delay determination may be performed using a machine learning or neural network model, such as, but not limited to, a many-to-one RNN model.
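The countdown-based part of this delay estimation could be sketched as below, assuming the obscuring advertisement exposes a textual legend such as "Ad: 19 secs"; the legend format and function names are illustrative assumptions, and a full implementation would fall back to OS or application policy when no countdown is visible.

import re
from typing import Iterable, Optional

def estimate_delay_seconds(ui_legends: Iterable[str]) -> Optional[float]:
    """Scan the text legends of the current UI elements for a countdown such as
    "Ad: 19 secs" and return the remaining time in seconds, or None when no
    countdown is found (the delay must then come from OS or app policy)."""
    pattern = re.compile(r"Ad:\s*(\d+)\s*sec", re.IGNORECASE)
    for legend in ui_legends:
        match = pattern.search(legend)
        if match:
            return float(match.group(1))
    return None

delay = estimate_delay_seconds(["Ad: 19 secs", "Install app", "Subscribe"])   # -> 19.0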
Furthermore, the intended action execution module 508 may be configured to discard the received touch input when the obscuring UI element is still present after the estimated time period. In one embodiment, if it is determined that another UI element (e.g., a pop-up or an advertisement) is still present at the coordinates of the actual intended UI element, then the received touch input may be discarded. Discarding the received touch input on the obscuring UI element avoids an unnecessary execution of a touch input, which might otherwise have redirected the user to a link within an unintended advertisement or to a pop-up notification.
Furthermore, the intended action execution module 508 may be configured to execute the action corresponding to the received touch input on the actual intended UI element when the obscuring UI element disappears within the estimated time period. In one embodiment, if it is determined that the other UI element (e.g., a pop-up or an advertisement) has disappeared from the coordinates of the actual intended UI element, then the received touch input may be executed on the actual intended UI element. The execution may include, but is not limited to, a click, a touch, or navigating to the next page. In one embodiment, the execution of the received touch input may be performed in conjunction with the operating system or processor/controller of the mobile device, as performed conventionally, but on the actual intended UI element.
Figures 6A-6C illustrate an exemplary process flow depicting a method 600 for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, according to an embodiment of the present disclosure. In one embodiment, the method 600 may be performed by the intended element identification system of a mobile device, as discussed throughout this disclosure.
At step 602, the method 600 comprises extracting parameters related to a plurality of sensors, and further comprises extracting a user-interface (UI) layout at each of a plurality of timestamps. In one embodiment of the present disclosure, extracting parameters related to the plurality of sensors comprises extracting data or readings related to one or more inertial sensors and one or more pressure/grip sensors present in the mobile device. The one or more inertial sensors may include, but are not limited to, a gyroscope, an accelerometer, and a magnetometer. The data/readings related to the plurality of sensors may be extracted for each of the plurality of timestamps. For example, the readings may be extracted or recorded at intervals of 0.01 seconds. The inertial sensor readings facilitate detecting the movement of the mobile device along with the timestamps. Additionally, the readings of the pressure/grip sensor(s) may indicate the change in applied pressure on these sensors from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device. Specifically, the readings of the plurality of sensors capture a characteristic movement of the mobile device in the user's hand between the time when the user decides to touch something on the screen and the time when the user executes the touch on the mobile device. Additionally, a UI layout at each of such plurality of timestamps may be extracted. The UI layout may comprise one or more UI elements, with their relative positions, currently displayed at the UI of the mobile device.
At step 604, the method 600 comprises determining a composite multi-sensor score (CMSR) at each of the plurality of timestamps based on parameter readings of the plurality of sensors recorded at the corresponding timestamp. In one embodiment, the readings of multiple sensors (e.g., gyroscope, accelerometer, magnetometer, etc.) recorded at each timestamp are aggregated, and a final CMSR is output based on the aggregated readings. The CMSR may indicate an amplitude of the readings of the plurality of sensors. Additionally, the series of CMSRs for the plurality of timestamps may indicate the movement of the user's mobile device from the time when the user decides to execute an action to the time when the user executes the touch action on the user interface of the mobile device.
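A minimal sketch of one way to compute such a CMSR is shown below, assuming a normalized linear combination of per-sensor reading magnitudes as described in one embodiment; the weights, sample values, and units are illustrative assumptions, and in practice each sensor stream would first be scaled to a comparable range.

import numpy as np

def composite_multi_sensor_score(readings, weights=None):
    """Combine the per-sensor readings recorded at one timestamp into a single
    CMSR.  Each reading (a 3-axis inertial sample or a scalar grip-pressure
    value) is reduced to its magnitude, and the magnitudes are combined as a
    normalized linear combination so that the CMSR reflects the overall
    amplitude of device movement and grip change at that timestamp."""
    names = sorted(readings)
    amplitudes = np.array([np.linalg.norm(np.atleast_1d(readings[n])) for n in names])
    if weights is None:
        weights = np.ones(len(names))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()        # normalize the weights so they sum to 1
    return float(np.dot(weights, amplitudes))

# Illustrative readings at one timestamp (values and units are assumptions).
sample = {
    "gyroscope":     [0.02, 0.15, -0.07],    # rad/s
    "accelerometer": [0.10, 0.02, 9.81],     # m/s^2
    "magnetometer":  [22.0, -5.0, 40.0],     # microtesla
    "grip_pressure": 0.35,                   # normalized grip-sensor value
}
cmsr = composite_multi_sensor_score(sample)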
At step 606, the method 600 comprises storing the CMSRs and UI layouts for each of the plurality of timestamps in a database. In some embodiments, the two UI layouts for successive timestamps may be compared to detect the changes in UI elements between such successive timestamps.
In one embodiment, the steps 602-606 may be continuously executed in the mobile device, and the CMSRs and UI layouts may be stored for a specific time duration. In one example, the CMSRs and UI layouts may be stored for the last 15 minutes at intervals of 0.05 seconds. The CMSRs and UI layouts older than 15 minutes may be continuously erased from the database.
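A minimal sketch of such a rolling store is shown below, assuming a fixed-length in-memory buffer; the container choice simply follows the 15-minute/0.05-second example above and is an illustrative assumption rather than the disclosed database.

from collections import deque

SAMPLE_PERIOD_S = 0.05                              # example sampling interval from the text
RETENTION_S = 15 * 60                               # keep only the last 15 minutes
MAX_SAMPLES = int(RETENTION_S / SAMPLE_PERIOD_S)    # 18,000 entries

# Each entry is (timestamp, cmsr, ui_layout).  Appending beyond maxlen silently
# drops the oldest entry, mirroring the continuous erasure of records older
# than 15 minutes described above.
history = deque(maxlen=MAX_SAMPLES)

def record_sample(timestamp, cmsr, ui_layout):
    history.append((timestamp, cmsr, ui_layout))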
At step 608, the method 600 comprises receiving a physical touch input at the user/display interface of the mobile device at a specific timestamp. In one embodiment, the touch input may be received on the user/display interface of the mobile device via, but not limited to, a finger touch, a click using a pen, a button press, or a finger swipe. Further, the touch input may correspond to selecting a specific UI element, of a plurality of UI elements, currently displayed on the UI of the mobile device. The specific timestamp, at which the user provided the input at the display interface, may be recorded as a physical touch timestamp. In one embodiment, a coordinate or relative position of the received touch input may also be recorded. The coordinate may correspond to the position of the UI element that the user touched on the present display interface of the mobile device.
At step 610, the method 600 comprises extracting, from the database, the CMSRs for each of a plurality of timestamps preceding the specific timestamp or the physical touch timestamp, at which the user provided the touch input on the display interface. The extracted historical CMSRs indicate a particular movement of the mobile device and the pressure/grip change on the mobile device immediately before the physical touch timestamp, i.e., before the user physically touched the mobile device to provide the touch input.
At step 612, the method 600 comprises determining a probability of an intended touch input based on an analysis of the plurality of extracted CMSRs. The historical CMSRs of the plurality of timestamps preceding the physical touch timestamp may be analyzed backwards. The analysis of the CMSRs may provide a probability of an intended touch input. In one embodiment, to determine the probability of the intended touch input, the device mobility that is induced by the touch intention may be determined. The causal relation from the user's intention to touch to the mobile device's movement patterns is analyzed through the historical CMSR value changes. Thus, a probability value of the user's intention to touch may be output by analyzing the historical CMSR values in the recent past relative to the physical touch timestamp. The probability of the intended touch input may be used to determine whether the user actually intended to touch the display/user interface or not. This probability determination may eliminate the unintended screen/display touches that happen, for example, when the user is in a crowded place.
At step 614, the method 600 comprises determining whether the probability of the intended touch input is greater than a user touch dynamics threshold to identify the unintended touch. The user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device. The user touch dynamics threshold may be tuned as per the user touch instances to maximize determination accuracy of a true touch.
At step 616, the method 600 comprises discarding the received touch input when the probability of intended touch input is less than the user touch dynamics threshold. A probability of intended touch input below the user touch dynamics threshold may indicate an unintended/inadvertent touch input. Accordingly, the touch input should be discarded without performing any corresponding action.
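A minimal sketch of this gating decision is shown below; the logistic mapping from the CMSR pattern to a probability is only a stand-in assumption for the learned causal analysis described in step 612, and the threshold value is illustrative.

import numpy as np

def intended_touch_probability(cmsr_history) -> float:
    """Map the recent CMSR pattern to a probability of an intended touch.
    This stand-in simply squashes the rise in CMSR amplitude just before the
    physical touch through a logistic function; the disclosed system would use
    the learned analysis of the causal CMSR pattern instead."""
    cmsr_history = np.asarray(cmsr_history, dtype=float)
    rise = float(cmsr_history[-1] - cmsr_history[0])
    return 1.0 / (1.0 + float(np.exp(-10.0 * rise)))

def handle_touch(cmsr_history, user_touch_dynamics_threshold=0.6):
    p = intended_touch_probability(cmsr_history)
    if p < user_touch_dynamics_threshold:
        return "discard"                      # step 616: unintended/inadvertent touch
    return "derive_intended_timestamp"        # continue to step 618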
At step 618, the method 600 comprises determining an actual touch intended timestamp, from the plurality of preceding timestamps, based on the plurality of composite multi-sensor scores, when the probability of intended touch input is greater than the user touch dynamics threshold. To determine the actual touch intended timestamp, the CMSR pattern (i.e., a series of CMSRs) that caused the probability of intended touch input to cross the threshold may be extracted, and an output corresponding to the timestamp of the onset of the CMSR pattern may be provided. The timestamp of the onset of the CMSR pattern corresponds to the actual touch intended timestamp. The actual touch intended timestamp is required to extract the UI layout that was present at the time when the user mentally decided to click/touch something, so as to find out which UI element prompted the user to do so.
At step 620, the method 600 comprises extracting UI layouts both at the specific timestamp and at the actual touch intended timestamp. The UI layouts, which were previously stored in step 606, may be extracted from a database (local or cloud-based). Each of the UI layouts may include one or more UI elements. In one example, the UI layout corresponding to the specific timestamp or physical touch timestamp may include at least the UI element that the user actually touched on the display interface of the mobile device. Further, the UI layout corresponding to the actual touch intended timestamp may include at least the UI element that the user actually intended to touch on the display interface.
At step 622, the method 600 comprises determining whether the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp. The comparison of the UI layouts may include comparing the UI elements at one or more positions within each of the two layouts. If all the UI elements within the two UI layouts are exactly the same, then it may be determined that the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp.
At step 624, the method 600 comprises executing the received touch input as-is (i.e., without any change), when the UI layout at the specific timestamp is the same as the UI layout at the actual touch intended timestamp. When the layouts at the specific timestamp and the actual touch intended timestamp are the same, it may be implied that the UI element that the user actually intended (or decided) to touch is still present in the UI layout at the specific or physical touch timestamp when the user provided the touch input. Accordingly, the touch input may be executed as-is.
At step 626, the method 600 comprises determining an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp. The determination that the UI layouts of the specific timestamp and the actual touch intended timestamp are different may be based on a determination that one or more UI elements differ at a relative position or coordinate within the two UI layouts. The coordinate or relative position of the received touch input may be provided as an input at step 626. The actual intended UI element may be extracted, from the UI layout at the actual touch intended timestamp, at the coordinate of the touch input received at the specific timestamp. This may be based on the assumption that the user of the mobile device actually intended to touch the UI element located, in the UI layout at the actual touch intended timestamp, at the same coordinate as the touch input received at the specific timestamp or physical touch timestamp.
Thus, after extracting the UI layout elements from the database, the output of this step may provide the UI element that was present at the touch coordinates on the mobile device at the time of touch intention. In one embodiment, the UI element may correspond to a UI element identification, such as, but not limited to, a button or a text field.
At step 628, the method 600 comprises redirecting an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp or the physical touch timestamp. In one embodiment, prior to redirecting the action to the actual intended UI element and proceeding to step 630, a determination of whether the UI is loaded completely may be performed. The current UI may include a pop-up or an advertisement, which may not include the actual intended UI element. If the UI is loaded completely, then the method moves to step 630 to determine a delay in execution for the actual intended UI element. If the UI is not loaded completely, the system may wait until the UI is completely loaded before moving to step 630.
At step 630, the method 600 comprises estimating an amount of delay in the time period required for the execution of an intended touch for the actual intended UI element, based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI. In one embodiment, the current UI state may include a pop-up or an advertisement, and thus the actual intended UI element may not be present for execution. In this case, the amount of delay in the time period required for execution may correspond to the time period after which the pop-up or advertisement automatically disappears from the display interface, thereby re-presenting the previous UI layout associated with the actual touch intended timestamp. Hence, if the current UI state includes a pop-up or an advertisement that does not include the actual intended UI element, it may be implied that the actual intended UI element is currently obscured by another UI element on the current UI.
At step 632, the method 600 comprises discarding the received touch input when the obscuring UI element is still present after the estimated time period. In one embodiment, if it is determined that another UI element (e.g., a pop-up or an advertisement) is still present at the coordinates of the actual intended UI element, then the received touch input may be discarded. Discarding the received touch input on the obscuring UI element avoids an unnecessary execution of a touch input, which might otherwise have redirected the user to a link within an unintended advertisement or to a pop-up notification.
At step 634, the method 600 comprises executing the action corresponding to the received touch input on the actual intended UI element when the obscuring UI element disappears within the estimated time period. In one embodiment, if it is determined that the other UI element (e.g., a pop-up or an advertisement) has disappeared from the coordinates of the actual intended UI element, then the received touch input may be executed on the actual intended UI element. The execution may include, but is not limited to, a click, a touch, or navigating to the next page. In one embodiment, the execution of the received touch input may be performed in conjunction with the operating system or processor/controller of the mobile device, as performed conventionally, but on the actual intended UI element.
Figure 8 illustrates an exemplary use case for prevention from clicking on sudden pop-ups at a UI of a user device according to an embodiment of the present disclosure, as compared to the existing art.
Often, an advertisement pops up at the very place where the user intended to click a link. As depicted, here the user is trying to open a download link from the list of links in the display interface 802. However, in the meantime, an advertisement or an unwanted link may pop up while the user's finger is moving towards the display interface, as shown in 804. As per the existing art, the user may end up accidentally clicking that advertisement or unwanted link. A touch on the advertisement may open a link for downloading the advertised application instead of the actual link, as per the existing art, as depicted on the user interface 806.
The present disclosure avoids such scenarios of inadvertent touches on the mobile device. The present disclosure utilizes the CMSR scores over the duration from when the user decides to touch the intended list item and starts moving a finger towards its location. The inertial and pressure sensors start recording the movement at this time, Tintended (1.22 sec), as depicted in 808. Further, the mobile device detects the sensor reading pattern from Tintended to the actual touch time, as depicted in 810. This pattern is analyzed to obtain the intended touch coordinate at Tintended. Based on the identification of Tintended, the correct link is opened, as depicted in 812. Thus, with the system of the present disclosure, the user is not redirected to the advertisement link; instead, the correct link at the intended UI element is opened, even though the user inadvertently touched the advertisement link.
While the above steps are shown in Figures 6A-6C and described in a particular sequence, the steps may be performed in a different sequence in accordance with other embodiments.
The present disclosure provides various technical advancements over the existing art. First, the present disclosure provides an enhanced UI experience for users by blocking inadvertent touch inputs on advertisements and pop-ups that appear abruptly on the display interface of the mobile device. Since the present disclosure tracks sudden changes in the UI during the reaction time of the user, many errors due to the sudden appearance of UI elements on the native UI of devices are prevented.
Additionally, the present disclosure facilitates user privacy protection. By preventing unintended touches on suddenly appearing fraudulent links or scams, the present disclosure provides better privacy protection to smartphone users. Further, by avoiding unintended clicks and accurately determining the touches intended by the user, the present disclosure improves the user experience while interacting with the smartphone. Moreover, because user touch dynamics are used, touch recognition becomes more attuned to the user persona and to the way the user handles the mobile device. Furthermore, the present disclosure helps avoid clicking on scams, fraudulent links, and advertisements, so the user is better protected from fraudulent activities and unintended redirections.
While specific language has been used to describe the embodiments disclosed herein, no limitations arising on account thereof are intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment.

Claims (15)

  1. A method for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, the method comprising:
    receiving a touch input at the UI at a specific timestamp;
    extracting, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp;
    determining a probability of an intended touch input based on the extracted composite multi-sensor scores;
    comparing the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch;
    determining an actual touch intended timestamp, of the plurality of preceding timestamps, based on the extracted composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold; and
    identifying an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
  2. The method as claimed in claim 1, further comprising:
    determining the composite multi-sensor score at each of the plurality of preceding timestamps based on readings of a plurality of sensors recorded at each of the plurality of preceding timestamps, wherein the plurality of sensors comprises two or more of a pressure/grip sensor, an accelerometer, a gyroscope, and a magnetometer, and wherein the composite multi-sensor score is indicative of at least one of a movement of the user device and pressure at the user device; or
    determining the composite multi-sensor score at each of the plurality of preceding timestamps based on a normalized linear combination of readings of a plurality of sensors recorded at each of the plurality of preceding timestamps, wherein the composite multi-sensor score indicates an amplitude of the readings of the plurality of sensors.
  3. The method as claimed in claim 1, further comprising:
    discarding the received touch input when the probability of the intended touch is lesser than the user touch dynamics threshold,
    wherein the user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device, and
    wherein the user touch dynamics threshold is periodically updated based on user inputs at the UI.
  4. The method as claimed in claim 1, further comprising:
    storing, before receiving the actual input, a UI layout at each of the plurality of preceding timestamps, wherein each of the UI layout comprises at least one UI element.
  5. The method as claimed in claim 4, further comprising:
    extracting UI layouts for the specific timestamp and the actual touch intended timestamp; and
    executing the received touch input based on the UI layout at the specific timestamp, when the UI layout at the specific timestamp is same as the UI layout at the actual touch intended timestamp;
    determining an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp;
    redirecting an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp;
    estimating an amount of delay in time period required for the execution of an intended touch for the actual intended UI element based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI;
    discarding the received touch input when the obscuring another UI element is present after the estimated time period; and
    executing the action corresponding to the received touch input on the actual intended UI element when the obscuring another UI element disappears within the estimated time period.
  6. The method as claimed in claim 1, wherein determining the actual touch intended timestamp comprises determining a timestamp of an onset of a pattern of the composite multi-sensor scores that resulted in the probability of the intended touch being greater than the user touch dynamics threshold, and wherein determining the timestamp of the onset of a pattern of the composite multi-sensor scores is based on a neural network model.
  7. The method as claimed in claim 1, wherein identifying the intended UI element comprises determining at least one of an element ID of the intended UI element, a type of element, allowed actions, purpose, and whether the intended UI element is currently active.
  8. A system for identification of an unintended touch and an intended user-interface (UI) element at a UI of a user device, the system comprising:
    a receiving module configured to receive a touch input at the UI at a specific timestamp;
    an intended touch deriving module in communication with the receiving module and configured to:
    extract, from a database, a composite multi-sensor score for each of a plurality of timestamps preceding the specific timestamp;
    determine a probability of an intended touch input based on the extracted composite multi-sensor scores;
    compare the probability of the intended touch with a user touch dynamics threshold to identify the unintended touch;
    determine an actual touch intended timestamp, of the plurality of preceding timestamps, based on the extracted composite multi-sensor scores, when the probability of the intended touch is greater than the user touch dynamics threshold; and
    identify an intended UI element at a touch coordinate of the received touch input from a UI layout at the actual touch intended timestamp.
  9. The system as claimed in claim 8, further comprising:
    a layout and feature extraction module configured to:
    determine the composite multi-sensor score at each of the plurality of preceding timestamps based on readings of a plurality of sensors recorded at each of the plurality of preceding timestamps, wherein the plurality of sensors comprises two or more of a pressure/grip sensor, an accelerometer, a gyroscope, and a magnetometer, and wherein the composite multi-sensor score is indicative of at least one of a movement of the user device and pressure at the user device; or
    determine the composite multi-sensor score at each of the plurality of preceding timestamps based on a normalized linear combination of readings of the plurality of sensors recorded at each of the plurality of preceding timestamps, wherein the composite multi-sensor score indicates an amplitude of the readings of the plurality of sensors.
  10. The system as claimed in claim 8, wherein the intended touch deriving module is further configured to discard the received touch input when the probability of the intended touch is lesser than the user touch dynamics threshold, wherein the user touch dynamics threshold is indicative of touch characteristics associated with handling and movement of the user device, and wherein the user touch dynamics threshold is periodically updated based on user inputs at the UI.
  11. The system as claimed in claim 10, further comprising:
    a layout and feature extraction module configured to store, before receiving the actual input, a UI layout at each of the plurality of preceding timestamps, wherein each user-interface layout comprises at least one UI element,
    wherein the layout and feature extraction module configured to extract UI layouts at the specific timestamp and the actual touch intended timestamp; and
    wherein the system comprises an intended action execution module configured to:
    execute the received touch input based on the UI layout at the specific timestamp, when the UI layout at the specific timestamp is same as the UI layout at the actual touch intended timestamp;
    determine an actual intended UI element from the at least one UI element in the UI layout of the actual touch intended timestamp, when the UI layout at the specific timestamp is different from the UI layout at the actual touch intended timestamp; and
    redirect an action, corresponding to the received touch input, to the actual intended UI element at the touch coordinate instead of an obscuring UI element in the UI layout at the specific timestamp.
  12. The system as claimed in claim 11, wherein the intended touch deriving module is configured to estimate an amount of delay in time period required for the execution of an intended touch for the actual intended UI element based on a current UI state and further based on whether the actual intended UI element is currently obscured by another UI element on the current UI.
  13. The system as claimed in claim 12, wherein the intended action execution module is configured to:
    discard the received touch input when the obscuring another UI element is present after the estimated time period; and
    execute the action corresponding to the received touch input on the actual intended UI element when the obscuring another UI element disappears within the estimated time period.
  14. The system as claimed in claim 8, wherein the intended touch deriving module is further configured to determine a timestamp of an onset of a pattern of the composite multi-sensor scores that resulted in the probability of the intended touch being greater than the user touch dynamics threshold, and wherein determining the timestamp of the onset of the pattern of the composite multi-sensor scores is based on a neural network model.
  15. The system as claimed in claim 8, wherein the intended touch deriving module is further configured to determine at least one of an element ID of the intended UI element, a type of element, allowed actions, purpose, and whether the intended UI element is currently active.
PCT/KR2022/001984 2021-12-22 2022-02-09 Methods and systems for identification of an unintended touch at a user interface of a device WO2023120809A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202111060058 2021-12-22
IN202111060058 2021-12-22

Publications (1)

Publication Number Publication Date
WO2023120809A1 true WO2023120809A1 (en) 2023-06-29

Family

ID=86903127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/001984 WO2023120809A1 (en) 2021-12-22 2022-02-09 Methods and systems for identification of an unintended touch at a user interface of a device

Country Status (1)

Country Link
WO (1) WO2023120809A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120169646A1 (en) * 2010-12-29 2012-07-05 Microsoft Corporation Touch event anticipation in a computing device
US20130222287A1 (en) * 2012-02-29 2013-08-29 Pantech Co., Ltd. Apparatus and method for identifying a valid input signal in a terminal
US20140082490A1 (en) * 2012-09-18 2014-03-20 Samsung Electronics Co., Ltd. User terminal apparatus for providing local feedback and method thereof
US20160124571A1 (en) * 2011-09-30 2016-05-05 Intel Corporation Mobile device rejection of unintentional touch sensor contact
US20200064960A1 (en) * 2018-08-21 2020-02-27 Qeexo, Co. Recognizing and rejecting unintentional touch events associated with a touch sensitive device


Similar Documents

Publication Publication Date Title
WO2017135797A2 (en) Method and electronic device for managing operation of applications
US9916514B2 (en) Text recognition driven functionality
WO2011068374A2 (en) Method and apparatus for providing user interface of portable device
WO2014030902A1 (en) Input method and apparatus of portable device
WO2012153914A1 (en) Method and apparatus for providing graphic user interface having item deleting function
WO2019125060A1 (en) Electronic device for providing telephone number associated information, and operation method therefor
WO2019022567A2 (en) Method for automatically providing gesture-based auto-complete suggestions and electronic device thereof
EP3504619A1 (en) Apparatus and method for managing notification
US10775926B2 (en) Method of performing touch sensing and fingerprint sensing simultaneously and electronic device and system using the same
WO2019164119A1 (en) Electronic device and control method therefor
WO2018004200A1 (en) Electronic device and information providing method thereof
CN109670507A (en) Image processing method, device and mobile terminal
EP3942510A1 (en) Method and system for providing personalized multimodal objects in real time
WO2017026655A1 (en) User terminal device and control method therefor
WO2013191408A1 (en) Method for improving touch recognition and electronic device thereof
CN105320437A (en) Method of selecting character or image and computing device
TW202106041A (en) Electronic apparatus and automatic advertisement closing method thereof
WO2023120809A1 (en) Methods and systems for identification of an unintended touch at a user interface of a device
WO2019151689A1 (en) Electronic device and control method therefor
WO2020045909A1 (en) Apparatus and method for user interface framework for multi-selection and operation of non-consecutive segmented information
WO2020171613A1 (en) Method for displaying visual object regarding contents and electronic device thereof
WO2013118971A1 (en) Method and system for completing schedule information, and computer-readable recording medium having recorded thereon program for executing the method
WO2017131251A1 (en) Display device and touch input processing method therefor
EP3646150A1 (en) Method for providing cognitive semiotics based multimodal predictions and electronic device thereof
WO2020171574A1 (en) System and method for ai enhanced shutter button user interface

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22911449

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE