CA3147026A1 - Natural gesture detecting ring system for remote user interface control and text entry - Google Patents

Natural gesture detecting ring system for remote user interface control and text entry

Info

Publication number
CA3147026A1
Authority
CA
Canada
Prior art keywords
computing device
main computing
data
sensor platform
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CA3147026A
Other languages
French (fr)
Inventor
Jerry Lam
Zhe JIANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CA3147026A priority Critical patent/CA3147026A1/en
Publication of CA3147026A1 publication Critical patent/CA3147026A1/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014Hand-worn input/output arrangements, e.g. data gloves
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233Character input methods
    • G06F3/0236Character input methods using selection techniques to select from displayed items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Character Discrimination (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for using finger or hand mounted motion sensing platforms connected wirelessly to a main computing device to reliably recognize complex hand and finger movements is shown. This allows the user to control the computing device or other devices with intuitive gestures, and to enter characters using finger handwriting. A fitness tracking mode is also supported. The sensor platform and main computing device use a combination of conventional signal processing and deep neural networks to process data to determine the presence and classification of gesture motions and character entry. Preset tap patterns switch the device among gesture recognition, character recognition, fitness tracking, and sleep modes, while an inactivity timer also engages sleep mode. For character recognition, the invention displays the top detected character candidates and allows the user to select the correct one via swipe gestures, both to accommodate errors in character detection and to help build a library of recognized characters for improving future operation.

Description

This invention is in the field of wearable electronic devices. It deals with an interface for controlling the user interfaces of different types of electronic devices. These can include everyday portable electronic devices like a smartphone, tablet, laptop, or desktop computer. Another type of device is displays, such as television screens, smart TVs, and TV set-top boxes like Apple TV, Google Chromecast, Amazon Fire Stick, or Roku TV stick. Another type of device is smart home electronics such as robot vacuum cleaners, smart blinds, smart home lighting, or smart thermostats.
Still another type of device is virtual reality or augmented reality headsets. The use of this invention is not limited to these device categories, which are listed to give examples of applications for this invention.
Consumer electronic devices have advanced greatly in the past decade, taking on smaller, low-energy form factors with increased capabilities made possible by wireless technologies such as Bluetooth, IEEE 802.11 ("Wifi"), and cellular networks.
However, methods for controlling such devices are still limited to predominantly tactile means such as keyboards, mice, physical buttons, touchscreens, or pointers. Control when direct proximity is not available is limited to audio cues such as voice commands, or to rudimentary hand gestures such as waving or clapping. This invention expands upon these methods by using a finger-mounted wearable wireless accelerometer sensor to relay precise and complex finger gestures and interpret them using a combination of conventional signal processing and deep neural networks. This allows a wide array of fine, natural, and intuitive gestures to be detected and classified with high accuracy, allowing for a high degree of device control, including operation of devices and text entry. The invention allows for intuitive text entry and adds a method for easily correcting errors in text entry, while at the same time improving the gesture recognition capabilities of the invention.
The concept of the invention is shown in Figure 1. It makes use of a sensor platform: a compact wireless device with a built-in motion detector (such as an accelerometer or gyroscope) worn on the user's finger or hand, such as a ring or bracelet, or an implanted sensor, that can communicate wirelessly with the main computing device, such as a laptop computer or smartphone, using a wireless digital communications protocol such as Wifi or Bluetooth. The invention consists of the software operating on the wearable device and on the computing device, and the integration of the two, which allows the sensor platform and the main computing device to work together to recognize complex hand and finger motions, taking the user's finger inputs and acting accordingly (opening applications, inserting or modifying text, controlling an additional remote device, etc.).
In general operation, the wearable finger sensor platform's accelerometer data is sent to a low power microprocessor on the wearable sensor platform. The low power microprocessor takes this data and can perform some basic signal processing and decision making with it, such as waking from sleep mode or changing to another mode of operation. When appropriate, a stream of data, which can include the accelerometer data and data derived from it (such as filtered data, or data transformed into the frequency domain or via wavelet transformation), is sent to the main computing device, where more advanced signal processing takes place. This signal processing involves multiple layers of conventional signal processing (such as basic time or frequency domain filters, wavelet transforms, etc.) and deep neural networks (using algorithms such as convolutional neural networks, recurrent neural networks such as LSTM or GRU, as well as fully connected neural networks). A high-level decision is made based on the processing of this data, which can be to determine whether the accelerometer data represents a valid gesture or character that should be acted upon, to perform the necessary actions based on the gesture, and to learn from previous gestures to improve the future functionality of the device.
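As an illustration of this data path, the following is a minimal sketch in Python of the on-platform loop described above: polling the motion sensor, performing light on-device processing and wake/sleep decisions, and streaming raw and derived data to the main computing device. All function names and thresholds here are hypothetical stand-ins, not the patent's actual firmware interface.

```python
from collections import deque

WINDOW = 32  # samples buffered for on-device decisions (assumed size)

def ring_main_loop(read_accel, detect_tap_pattern, radio_send):
    """Poll the accelerometer, do light on-device processing, and
    stream raw plus derived data to the main computing device."""
    buffer = deque(maxlen=WINDOW)
    asleep = True
    while True:
        sample = read_accel()  # (ax, ay, az) in units of g
        buffer.append(sample)
        if asleep:
            # In sleep mode only tap-pattern detection runs.
            if detect_tap_pattern(buffer):
                asleep = False
            continue
        # Awake: derive a simple feature (magnitude) on-device and send
        # both the raw sample and the derived value to the host, where
        # the deeper processing stack runs.
        ax, ay, az = sample
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        radio_send({"raw": sample, "mag": magnitude})
        if _idle_too_long(buffer):
            asleep = True  # inactivity also returns the ring to sleep

def _idle_too_long(buffer):
    # Placeholder inactivity check: every buffered sample is near 1 g,
    # i.e. the hand has been at rest for the whole window.
    return len(buffer) == buffer.maxlen and all(
        abs((x * x + y * y + z * z) ** 0.5 - 1.0) < 0.05 for x, y, z in buffer
    )
```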
The hardware of the sensor platform is shown in Figure 2. It is a small finger- or hand-worn device which contains several electronic components: a motion detector (such as an accelerometer or gyroscope), a microprocessor, a power management circuit, on-board data storage (for storing data, access codes, and executable instructions), and a radio transmitter for sending the measured motion data and for operations such as firmware updates and device pairing.
The motion data is sent to the main computing device where it is analyzed and decisions are made based on the nature of the data. This includes the detection and classification of gestures and characters, with actions taken accordingly (such as the insertion of a character in a line of text, or the operation of a smart device). In order to make such a determination given the complex nature of the motion data, a deep processing stack is required. Parts of this stack may exist on the sensor platform and the main computing device.
A sample of the data processing stack that may be on either or both of the sensor platform and the main computing device is shown in Figure 3. Motion data taken from the motion sensor on the sensor platform is given to a preprocessing layer, which can perform basic operations such as scaling, interpolation, decimation, and normalization.
Additional data streams may be derived from this stream as well, which can include differentiation, integration, weighted averaging, etc. This data is fed into a processing stack which performs filtering operations on the data. These could be conventional signal processing filters such as IIR or FIR filters, FFT transforms, or non-linear operations (such as exponentiation, power, logarithmic, etc.). Alternatively, they can be neural network layers making up a deep neural network, which could include convolutional neural network (CNN) layers, recurrent neural network (RNN) layers such as Long Short-Term Memory (LSTM) or GRU units, as well as fully connected layers. These layers are given numerical weights, bias values, and a non-linear activation function, whose values are determined through mathematical training operations based on a large library of measured and labelled motion data (including gesture and character samples). Many of these processing layers are present to process the data, and the layers are connected one to the other until they reach the decision-making layer, which takes the result of the previous layers to make a determination based on them (such as the detection or classification of a gesture or character). These layers may be branched, interconnected, and combined as needed, which is determined by a mathematical training operation using the previously mentioned library of labelled motion data. To improve efficiency, lower-level layers can be shared to make different types of determinations, while the upper levels remain distinct.
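The following is a sketch of this kind of branched stack using PyTorch, with shared convolutional and recurrent lower layers feeding two distinct decision heads (gesture and character), as the shared-lower-layers idea above suggests. The layer sizes, class counts, and specific topology are illustrative assumptions; the patent does not specify a trained architecture.

```python
import torch
import torch.nn as nn

class MotionStack(nn.Module):
    def __init__(self, n_gestures=8, n_characters=36):
        super().__init__()
        # Shared lower layers: 1-D convolutions over the 3-axis stream.
        self.trunk = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Recurrent (LSTM) layer over the convolutional features.
        self.rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        # Distinct upper heads, one per type of determination.
        self.gesture_head = nn.Linear(64, n_gestures)
        self.character_head = nn.Linear(64, n_characters)

    def forward(self, x):
        # x: (batch, 3, time) raw/preprocessed accelerometer stream
        h = self.trunk(x)                   # (batch, 32, time)
        h, _ = self.rnn(h.transpose(1, 2))  # (batch, time, 64)
        h = h[:, -1, :]                     # last time step as a summary
        return self.gesture_head(h), self.character_head(h)

# Example: classify a 2-second window sampled at 64 Hz.
logits_g, logits_c = MotionStack()(torch.randn(1, 3, 128))
```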
The invention allows for 4 different modes of operation: a gesture detection mode, a character detection mode, a fitness tracking mode, and a low power sleep mode. Figure 4 shows a state diagram of these 4 modes and how the system may change from one state to another. One way state changes may occur is when predetermined tap patterns are entered: once detected, tap pattern "G" will cause the device to enter gesture detection mode, tap pattern "C" character detection mode, tap pattern "F" fitness tracking mode, and tap pattern "S" low power sleep mode. Other means of triggering transitions between modes of operation are possible. Sleep mode can be triggered by a long enough period of inactivity, while context sensitive cues can cause the device to move from character to gesture recognition mode (for instance, once a character is detected, several candidate characters are displayed for the user to select one, with the device changed to gesture recognition mode so the user can use directional gestures to select the desired character).
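A minimal sketch of this state machine follows, using the tap patterns named in the text ("G", "C", "F", "S"). The timeout value is an assumption, since the text only specifies "a long enough period of inactivity".

```python
import time

MODES = {"G": "gesture", "C": "character", "F": "fitness", "S": "sleep"}
INACTIVITY_TIMEOUT_S = 120  # assumed value

class ModeController:
    def __init__(self):
        self.mode = "sleep"
        self.last_activity = time.monotonic()

    def on_tap_pattern(self, pattern):
        """Preset tap patterns switch modes directly."""
        if pattern in MODES:
            self.mode = MODES[pattern]
            self.last_activity = time.monotonic()

    def on_motion(self):
        self.last_activity = time.monotonic()

    def tick(self):
        """The inactivity timer also drops the system into sleep mode."""
        if time.monotonic() - self.last_activity > INACTIVITY_TIMEOUT_S:
            self.mode = "sleep"

    def on_character_detected(self):
        # Context-sensitive cue: after a character is detected, the system
        # temporarily switches to gesture mode so the user can swipe-select
        # among the displayed candidates.
        self.mode = "gesture"
```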
In the low power sleep mode, most of the device functions, such as the radio transmitter, are disabled to save power and to prolong battery life. Limited functionality exists to detect tap patterns to wake the sensor platform and change it to another mode of operation.
In the fitness tracking mode, the device acts as a basic step counter to track steps or distance travelled. The motion detector on the sensor platform is used to monitor the motions of the user's arm and body to track motions associated with fitness activity such as walking or jogging. The signal processing block on the sensor platform can provide preliminary signal processing to keep track of detected steps while minimizing the number of radio transmissions to save power.
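A minimal step-counting sketch follows, using hysteresis thresholding on the acceleration magnitude. The thresholds and the algorithm itself are assumptions; the patent does not specify how steps are detected.

```python
def count_steps(magnitudes, high=1.3, low=1.05):
    """Count one step per high-threshold crossing, re-armed only after
    the signal falls back below the low threshold (hysteresis), so a
    single stride is not counted twice."""
    steps, armed = 0, True
    for m in magnitudes:
        if armed and m > high:
            steps += 1
            armed = False
        elif m < low:
            armed = True
    return steps
```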
In the gesture detection mode, gestures are detected as shown in the chart in Figure 5.
This allows the user to perform many different types of actions, such as the manipulation of remote user interfaces without the need for physical contact.
This is useful when physical contact is impractical, impossible, or undesirable (such as a screen that is far away or exists only virtually in a headset, or when the user's hands are full, wet, or dirty).
The invention can detect a gesture when a predetermined tap pattern is detected to start listening for a gesture. The gesture data will be the accelerometer data (direct and preprocessed) up until the end of the gesture is detected (such as when the accelerometer detects no further motion of the wearable sensor platform). This data is sent to the main computing device, which listens for this gesture and acts accordingly, such as opening or closing an application or dialog box, changing an active selection, or interacting with a UI element such as a slider. The gestures include a set to recognize commonly performed activities, such as a gesture to accept, enter, or start, a gesture for cancellation or exit, a set of directional swipe gestures for selection, and rotational gestures for incrementing or decrementing values.
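The segmentation described here (begin recording on the start tap pattern, stop when the sensor reports no further motion) might look like the following sketch; the rest threshold and quiet-window length are assumptions not stated in the text.

```python
REST_THRESHOLD = 0.05  # deviation from 1 g treated as "no motion" (assumed)
QUIET_SAMPLES = 16     # consecutive quiet samples that end the gesture (assumed)

def capture_gesture(sample_stream):
    """Collect magnitude samples from an iterable until the hand has
    been still for QUIET_SAMPLES samples; return the gesture segment."""
    segment, quiet = [], 0
    for magnitude in sample_stream:
        segment.append(magnitude)
        if abs(magnitude - 1.0) < REST_THRESHOLD:
            quiet += 1
            if quiet >= QUIET_SAMPLES:
                return segment[:-QUIET_SAMPLES]  # drop the trailing rest
        else:
            quiet = 0
    return segment
```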
The text entry mode operates similarly to the gesture detection mode. It is shown in the chart in Figure 6. To make the invention as intuitive to use as possible, the use of a virtual keyboard is not desirable. Instead, when the user wishes to enter a character (such as a letter, number, or punctuation symbol), the user should draw the character directly with their fingers, as if they were writing it in the air.
The invention can detect a character when a predetermined tap pattern is detected to start listening for a character. The character data will be the accelerometer data (direct and preprocessed) up until the end of the gesture is detected (such as when the motion detector detects no further motion of the wearable sensor platform). This data is sent to the main computing device, which listens for this character entry and acts accordingly.
Due to the larger set of characters that can be recognized, there will often be a larger margin of error in selecting the correct character. To accommodate this, after the character entry, the invention will display several top choices based on the character detection algorithm and allow the user to choose the desired one with a quick swipe gesture (similar to what was described in the gesture detection mode of operation). In addition to giving the user an opportunity to correct errors, this also gives the system a chance to save the recorded character entry and the user's actual intent to a library of prerecorded characters, for use in improving the character detection algorithm in the future, both for all users in general and for the current user in particular, for instance to recognize their handwriting style.
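A sketch of this correction-and-learning flow follows: display the top candidates, let a swipe gesture select one, and append the labelled recording to the character library. All function and store names are hypothetical.

```python
def handle_character_entry(segment, classifier, show_candidates,
                           await_swipe_selection, library, top_n=3):
    # Rank candidates by classifier score and display the best few.
    scores = classifier(segment)  # hypothetical: {char: score}
    candidates = sorted(scores, key=scores.get, reverse=True)[:top_n]
    show_candidates(candidates)
    # Temporarily in gesture mode: a directional swipe picks one candidate.
    chosen = candidates[await_swipe_selection(len(candidates))]
    # Save the recording with the user's confirmed intent so future models
    # can learn, e.g., this user's handwriting style.
    library.append({"motion": segment, "label": chosen})
    return chosen
```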

Claims (5)

Claims
1. A portable wireless electronic system allowing gesture-controlled functionality comprising of
a. a battery powered wireless motion detecting sensor platform
i. consisting of a motion detecting sensor, a digital microprocessor, an electronic battery, a power management control unit, a digital memory storage unit, a wireless radio transmitter, and a digital communications bus for the above to communicate
ii. with a physical apparatus to secure the device to a user's finger or hand
iii. with a wireless bidirectional link to the main computing device, allowing
= the sensor platform to send both digitally processed and original data from the motion detecting sensor
= the main computing device to change modes of operation, and to update or modify the firmware or other data storage on the sensor platform
b. a main computing device, a compact portable electronic device with digital storage, a digital microprocessor, a wireless connection to other smart devices, and a means to display information visually to the user via either a physical display on the main computing device, a wireless link to a display device, or a wireless link to a device connected to a display device
2. The combination in claim 1, where the motion data processing path comprises of
a. the motion detecting sensor on the sensor platform sending data to the microprocessor on the sensor platform, where it is processed using a digital algorithm
b. a digital algorithm consisting of a data preprocessing block which feeds data to parallel sets of sequential data processing blocks, which can consist of digital signal processing filters, convolutional neural networks, recurrent neural networks, or fully connected neural network layers
c. a decision-making algorithm block at the end of the parallel sets of processing blocks, where the processing blocks may branch in parallel paths or combine together into a common block
d. the number, types, and coefficients of the data processing blocks, as well as the algorithms and coefficients of the decision-making block, being adjustable, programmable, and reconfigurable based on a library of labelled, prerecorded motion sensor data corresponding to known and classified movements
e. the motion detecting apparatus sending data over the communications bus to the digital microprocessor, which processes it according to a preprogrammed digital algorithm stored on the digital memory storage unit, before sending the data to the main computing device
3.The combination in claim 2, where the main computing device receives processed or unprocessed sensor data from the sensor platform over the wireless link and processes it with a digital algorithm:
a. a digital algorithm consisting of a data preprocessing block which feeds data to parallel sets of sequential data processing blocks, which can consist of digital signal processing filters, convolutional neural networks, recurrent neural networks, or fully connected neural network layers
b. a decision-making algorithm block at the end of the parallel sets of processing blocks, where the processing blocks may branch in parallel paths or combine together into a common block
c. the number, types, and coefficients of the data processing blocks, as well as the algorithms and coefficients of the decision-making block, being adjustable, programmable, and reconfigurable based on a library of labelled, prerecorded motion sensor data corresponding to known and classified movements
d. the determinations made by the decision-making blocks being used by the digital algorithm stored on the digital memory storage on the main computing device, executed by the digital microprocessor on the main computing device, to determine the presence and type of relevant hand and finger gestures represented by the data received from the sensor platform.
e. the decision-making block being able to make determinations upon which the microprocessor is able to take actions on the main computing device itself, or on the plurality of smart devices that the main computing device is connected to, such as to remotely send or receive information or execute commands.
4. The combination in claim 1, wherein the operational mode of the system comprises of 4 different modes of operation
a. where the sensor platform and main computing device are able to change operational modes through the detection of predetermined tap patterns by means of the sensor platform motion detector, in addition to context changes through inactivity timers, context sensitive user interface cues, or through an external control.
b. a low power sleep mode, where the low power sleep operation mode comprises of having the sensor platform operate in a low energy state, with the microprocessor operating at a reduced operational frequency, with the radio transmitter operational for short and infrequent periods of time, and with the motion sensor operating at a reduced sample rate, communicating an indicator if a predetermined tapping pattern is received to indicate a transition into a different operational mode of the system.
c. a gesture detection mode, where the gesture detection mode comprises of the sensor and process combination of the sensor platform and main computing device to record motion data from the sensor platform, making use of the data processing stacks on the sensor platform and the main computing device to detect the presence of and make a determination of the type of gestures, and where the main computing device is able to take actions in response to detected gestures, such as the changing of the mode of operation, manipulating the user interface as displayed by the main computing device, and manipulating the user interface of a remote connected smart device (by means of the bidirectional communications link).
d. a text entry mode where the text entry mode comprises of the sensor and process combination of the sensor platform and main computing device to record motion data from the sensor platform, making use of the data processing stacks on the sensor platform and the main computing device to detect the presence of and make a determination of the most likely types of character entry motions.
e. a fitness tracking mode, where the fitness tracking mode comprises of the sensor and process combination of the sensor platform and main computing device which process motion data to detect steps, motion, or other fitness activity.
5. The combination in claim 4, wherein upon the detection of a character motion and upon the determination of the most likely types of character entry motions
a. the user interface of the main computing device displays the most likely character candidates and temporarily changes the mode of operation to the gesture detection operational mode to detect selection gestures
b. the main computing device is able to take actions in response to detected character entry motions in combination with the selection gesture, such as the insertion, deletion, or manipulation of text characters or the character entry point on the user interface as displayed by the main computing device, or on the user interface of a remote connected smart device (by means of the bidirectional communications link).
c. the main computing device is able to take actions in response to detected character entry motions in combination with the selection gesture by adding the detected character motion and the selected character to a library of labelled gesture and character data to be used in the improvement of future character motion detection.
CA3147026A 2022-01-28 2022-01-28 Natural gesture detecting ring system for remote user interface control and text entry Pending CA3147026A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA3147026A CA3147026A1 (en) 2022-01-28 2022-01-28 Natural gesture detecting ring system for remote user interface control and text entry

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA3147026A CA3147026A1 (en) 2022-01-28 2022-01-28 Natural gesture detecting ring system for remote user interface control and text entry

Publications (1)

Publication Number Publication Date
CA3147026A1 true CA3147026A1 (en) 2023-07-28

Family

ID=87424006

Family Applications (1)

Application Number Title Priority Date Filing Date
CA3147026A Pending CA3147026A1 (en) 2022-01-28 2022-01-28 Natural gesture detecting ring system for remote user interface control and text entry

Country Status (1)

Country Link
CA (1) CA3147026A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117786643A (en) * 2024-02-22 2024-03-29 清华大学 User identity authentication method and related equipment
CN117786643B (en) * 2024-02-22 2024-05-07 清华大学 User identity authentication method and related equipment

Similar Documents

Publication Publication Date Title
CN103262008B (en) Intelligent wireless mouse
AU2015314949B2 (en) Classification of touch input as being unintended or intended
US20180024643A1 (en) Gesture Based Interface System and Method
EP2555497B1 (en) Controlling responsiveness to user inputs
US20100117959A1 (en) Motion sensor-based user motion recognition method and portable terminal using the same
CN104731496B (en) Unlocking method and electronic device
KR20160039499A (en) Display apparatus and Method for controlling thereof
WO2009155981A1 (en) Gesture on touch sensitive arrangement
EP2752831B1 (en) Input device, display device and method of controlling thereof
CN110908513B (en) Data processing method and electronic equipment
CN109634438B (en) Input method control method and terminal equipment
CN108494962B (en) Alarm clock control method and terminal
CN109144377A (en) Operating method, smartwatch and the computer readable storage medium of smartwatch
CN107831987A (en) The error touch control method and device of anti-gesture operation
KR20100052372A (en) Method for recognizing motion based on motion sensor and mobile terminal using the same
CA3147026A1 (en) Natural gesture detecting ring system for remote user interface control and text entry
CN111367483A (en) Interaction control method and electronic equipment
CN111158487A (en) Man-machine interaction method for interacting with intelligent terminal by using wireless earphone
JP4784367B2 (en) Device control device and device control processing program
US20200341557A1 (en) Information processing apparatus, method, and program
WO2017165023A1 (en) Under-wrist mounted gesturing
CN107491141A (en) Include the intelligent watch of two table bodies
CN107807740B (en) Information input method and mobile terminal
CN104199555A (en) Terminal setting method and terminal setting device
CN110780784B (en) Display method and electronic equipment