CN111158487A - Man-machine interaction method for interacting with intelligent terminal by using wireless earphone - Google Patents

Man-machine interaction method for interacting with intelligent terminal by using wireless earphone

Info

Publication number
CN111158487A
Authority
CN
China
Prior art keywords
gesture
clicking
human
interaction method
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911408792.5A
Other languages
Chinese (zh)
Inventor
史海天
易鑫
徐栩海
史元春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201911408792.5A priority Critical patent/CN111158487A/en
Publication of CN111158487A publication Critical patent/CN111158487A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6058Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72415User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories for remote control of appliances

Abstract

A human-computer interaction method for interacting with a smart terminal through a wireless earphone is provided, comprising the following steps: establishing a wireless connection between the earphone and the smart terminal; the earphone detecting, with its built-in sensor, the sound signal of a contact gesture made by a human hand in the region above the neck, and transmitting the sound signal to the smart terminal; the smart terminal classifying and predicting the sound signal with a trained machine-learning-based classification model, and determining whether the sound signal indicates that the user has made a predetermined gesture type; and, in response to determining that a predetermined gesture type has been made, the smart terminal performing a predetermined control on the wireless earphone. With this human-computer interaction method, the user performs touch gestures on his or her own head, so the position to be touched is easy to find and the gesture corresponding to a specific operation is easy to remember; the interactive system can accurately detect whether a gesture action has occurred and accurately identify which gesture it is; and the response speed is markedly faster than the traditional way of operating a mobile phone.

Description

Man-machine interaction method for interacting with intelligent terminal by using wireless earphone
Technical Field
The present invention generally relates to human-computer interaction techniques using wireless headsets.
Background
Interaction with conventional devices takes place on the surface of the device itself, for example through a touch screen, physical buttons, or a camera. For example, existing wireless earphones such as the Samsung IconX, Apple AirPods, and FreeBuds detect vibration to implement interaction by touching or clicking the earphone.
This traditional touch-or-click interaction with a wireless earphone has the following disadvantages:
1. The wireless earphone is small and has a small surface area, so clicking or touching it requires the user to accurately locate the earphone and the surface to be clicked. According to user feedback, this process demands the user's attention and noticeably reduces operation accuracy (other positions may be touched by mistake).
2. The wireless earphone is light and not firmly fixed, so clicking and touching operations can shift its position in the ear, loosening it and even making it fall out.
At present, earphone devices are evolving toward being small, light, and wireless. Most existing earphones have no interaction functions, or only simple ones (such as volume adjustment), and all other functions must be operated on the mobile phone. This style of interaction has two problems: first, mobile phones are getting larger and larger, and frequently taking the phone out to operate it is neither convenient nor safe (for example while riding or driving); second, the lack of an input method directly limits the possibility of the earphone becoming a wearable device independent of the mobile phone. Although voice-interaction designs for earphones have appeared in recent years, given the current state of speech recognition and artificial intelligence such schemes cannot yet perform interaction tasks reliably and consistently, and some situations are simply not suitable for voice interaction. Closer to the present invention are emerging tapping and touch schemes, such as tapping the sides of the earphone (AirPods) or sliding on its surface (IconX). However, such schemes have two problems: first, directly touching the earphone can loosen it or even knock it out, degrading the user experience; second, the miniaturization of the earphone limits the area and extensibility of the interaction, and according to Fitts' law this lengthens operation time or reduces operation accuracy.
Disclosure of Invention
The present invention has been made in view of the above problems of the prior art.
According to one aspect of the invention, a human-computer interaction method for interacting with a smart terminal by using a wireless earphone is provided, comprising the following steps: establishing a wireless connection between the earphone and the smart terminal; under the condition that a user wears the wireless earphone, the earphone detecting, with a built-in sensor, a sound signal of a contact gesture made by a human hand in the area above the neck of the human body, and transmitting the sound signal to the smart terminal; the smart terminal classifying and predicting the sound signal with a trained machine-learning-based classification model, and determining whether the sound signal indicates that the user has made a predetermined gesture type; and, in response to determining that a predetermined gesture type has been made, the smart terminal performing a predetermined control on the wireless earphone.
Preferably, the gesture types include: clicking the temple, clicking the cheekbone, clicking the mandibular angle, clicking the occipital bone, clicking the upper edge of the auricle, clicking the middle of the auricle, clicking the lower edge of the auricle, double-clicking the temple, double-clicking the cheekbone, double-clicking the mandibular angle, double-clicking the occipital bone, double-clicking the upper edge of the auricle, double-clicking the lower edge of the auricle, sliding upward on the face, sliding downward on the face, sliding from back to front on the face, sliding from front to back on the face, sliding up and down at the mandibular angle, sliding up and down on the occipital bone, sliding upward on the auricle, sliding downward on the auricle, drawing a circle on the face, making a two-finger pinch (zoom-in) gesture on the face, and making a two-finger spread (zoom-out) gesture on the face.
Preferably, the gesture type is one of a single click, a double click, and a slide, performed on the face, the ear, or behind the ear.
Preferably, the human-computer interaction method further comprises: using stochastic gradient descent (SGD) as the optimization algorithm for training the classification model, and updating the learning rate by combining a linear warm-up method with a cosine annealing technique.
Preferably, in the optimization algorithm of the human-computer interaction method, the momentum is 0.9 to accelerate convergence, and the weight decay parameter is 0.0001 to prevent overfitting.
Preferably, in the human-computer interaction method, updating the learning rate by combining the linear warm-up method with the cosine annealing technique comprises: the learning rate starts at 0.01, rises to 0.1 over 20 epochs, and then decays along a cosine curve over the next 400 epochs.
Preferably, the human-computer interaction method comprises: augmenting the data set for training the model, in a manner that includes mixing office noise and street noise with the original audio data and time-shifting the audio signal.
Preferably, the human-computer interaction method further comprises: taking the 99th percentile of the duration of the sliding gesture as the original audio input length for gesture classification.
Preferably, the human-computer interaction method further comprises: using 1.2 seconds as the original audio input length for gesture classification, and segmenting each gesture with a 1.2-second audio data window centered on the midpoint of the labeled range.
Preferably, in the human-computer interaction method, the predetermined control includes: music playback, answering a phone call, and handling a notification message.
Preferably, the sound signal is represented by a signal envelope diagram and a time-frequency diagram.
According to another aspect of the present invention, there is provided a computer-readable medium having stored thereon computer-executable instructions which, when executed by a computer, perform a human-computer interaction method for interacting with a smart terminal by using a wireless earphone, comprising: establishing a wireless connection between the earphone and the smart terminal; under the condition that a user wears the wireless earphone, the earphone detecting, with a built-in sensor, a sound signal of a contact gesture made by a human hand in the area above the neck of the human body, and transmitting the sound signal to the smart terminal; the smart terminal classifying and predicting the sound signal with a trained machine-learning-based classification model, and determining whether the sound signal indicates that the user has made a predetermined gesture type; and, in response to determining that a predetermined gesture type has been made, the smart terminal performing a predetermined control on the wireless earphone.
With the human-computer interaction method of the wireless earphone, the user performs touch gestures on his or her own head, so the position to be touched is easy to find accurately and the gesture corresponding to a specific operation is easy to remember; the interactive system can accurately detect whether a gesture action has occurred and accurately identify which gesture it is; and in scenarios such as music playback, phone calls, and notification messages, the response of the interactive system of the embodiment of the invention is on average markedly faster than the traditional ways of taking the mobile phone out of a pocket or picking it up from a table.
Drawings
Fig. 1 is a schematic diagram illustrating a usage scenario of interacting with the wireless earphone through head touch, according to an embodiment of the present invention.
Fig. 2 shows an overall flowchart of a human-computer interaction method 200 for interacting with a smart terminal using a wireless headset according to an embodiment of the present invention.
FIG. 3 shows a signal envelope diagram and a time-frequency diagram for 27 gestures according to an embodiment of the invention.
FIG. 4 illustrates a process of sample acquisition, model training and testing according to an embodiment of the present invention.
FIG. 5 shows the overall classification accuracy on the test set at the end of training, after multiple training rounds, according to an embodiment of the invention.
FIG. 6 shows the three common scenarios simulated by the interaction tasks: music playback, phone calls, and notification message processing.
Fig. 7 shows the speed of the interactive system of an embodiment of the invention compared with the pick-up group and the take-out group in the three scenarios of music playback, phone calls, and notification messages.
Detailed Description
Various embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Unless specifically stated otherwise, "interaction mode" herein refers to a mode in which, while wearing the wireless earphone, the user controls the earphone by performing touch interactions (including sliding, clicking, tapping, and scratching) with a hand (including any part of the hand, such as a finger, the palm, or the back of the hand) on the head, i.e., the area above the neck.
The basic principle of the invention is to use sensors built into the earphone (such as a microphone, an IMU, or a radar) to collect the physical signals of the interaction gesture, and then to classify the gesture with machine-learning-related methods.
In view of the problems in the background art, the inventors propose a design in which human-computer interaction with a wireless earphone is performed through touch operations on the head and face region. The design comprises the following steps: the user makes a specific gesture on the head (including the face); physical signals (sound, vibration) are received using the earphone's own sensors (e.g., a microphone, an IMU); after simple signal processing, the physical signals are classified and predicted using a classification algorithm based mainly on machine learning; and a command is executed according to the prediction result.
Fig. 1 is a schematic diagram illustrating a usage scenario of interacting with the wireless earphone through head touch, according to an embodiment of the present invention. In the figure, a user wearing the wireless earphone touches the area of the face near the ear with a hand, thereby controlling the earphone volume and the like.
Fig. 2 shows an overall flowchart of a human-computer interaction method 200 for interacting with a smart terminal using a wireless headset according to an embodiment of the present invention.
As shown in fig. 2, in step S210, a wireless connection between the earphone and the smart terminal is established.
In step S220, with the user wearing the wireless earphone, the earphone detects, with a built-in sensor, the sound signal of a contact gesture made by the human hand in the area above the neck of the human body, and transmits the sound signal to the smart terminal.
The inventors have done a great deal of work with respect to the design and selection of gestures.
The inventors first designed a combinatorial gesture space with dimensions such as action type (click / double click / long press), finger type, number of fingers, finger position, head position, whether the fingernail is used, and left/right side; combining these dimensions yields a very large gesture set. From this set, 27 basic gestures were selected according to common gesture habits: clicking the temple, clicking the cheekbone, clicking the mandibular angle, clicking the occipital bone, clicking the upper edge of the auricle, clicking the middle of the auricle, clicking the lower edge of the auricle, double-clicking the temple, double-clicking the cheekbone, double-clicking the mandibular angle, double-clicking the occipital bone, double-clicking the upper edge of the auricle, double-clicking the lower edge of the auricle, sliding upward on the face, sliding downward on the face, sliding from back to front on the face, sliding from front to back on the face, sliding up and down at the mandibular angle, sliding up and down on the occipital bone, sliding upward on the auricle, sliding downward on the auricle, drawing a circle on the face, making a two-finger pinch (zoom-in) gesture on the face, and making a two-finger spread (zoom-out) gesture on the face, where each gesture can be performed on either the left or the right side.
FIG. 3 shows a signal envelope diagram and a time-frequency diagram for 27 gestures according to an embodiment of the invention.
Through extensive user questionnaires and analysis of signal recognizability, 8 core gestures that are easy to recognize and distinguish were further selected; their forms include single click, double click, and slide, and the locations include the face, the ear, and the area behind the ear.
In step S230, the smart terminal classifies and predicts the sound signal using the trained machine-learning-based classification model, and determines whether the sound signal indicates that the user has made a predetermined gesture type.
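Purely as an illustrative sketch of this classification step (not a limitation of the embodiment), the fragment below shows how a fixed-length audio window could be converted into a time-frequency representation and passed through a trained classifier. The use of PyTorch/torchaudio, the mel-spectrogram parameters, and the 16 kHz sampling rate are assumptions; the embodiment itself only requires a time-frequency input and a trained machine-learning classifier.

```python
# Illustrative sketch only; preprocessing parameters and the model interface
# are assumptions, not part of the described embodiment.
from typing import Tuple

import torch
import torchaudio

_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=1024,
                                            hop_length=256, n_mels=64)

def classify_window(model: torch.nn.Module,
                    audio_window: torch.Tensor) -> Tuple[int, float]:
    """Classify one 1.2 s audio window received from the earphone.

    Returns the predicted gesture index and the classifier confidence.
    """
    spec = _mel(audio_window).log1p()                       # (n_mels, time)
    x = spec.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)   # (1, 3, n_mels, time)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    conf, idx = probs.max(dim=1)
    return idx.item(), conf.item()
```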
The acquisition of the sample, the training of the model and the testing will be described in detail later.
In step S240, in response to determining that the sound signal indicates that a predetermined gesture type has been made, the smart terminal performs a predetermined control on the wireless earphone.
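For illustration only, the mapping from a recognized gesture to a predetermined control can be as simple as a lookup table gated by the classifier confidence. The gesture names, controls, and threshold below are hypothetical examples and are not prescribed by the embodiment.

```python
# Hypothetical gesture-to-command mapping; the names and threshold are assumptions.
from typing import Optional

ACTIONS = {
    "double_tap_cheek": "toggle_music_playback",
    "slide_face_up":    "volume_up",
    "slide_face_down":  "volume_down",
    "tap_behind_ear":   "answer_or_reject_call",
}

def dispatch(gesture: str, confidence: float, threshold: float = 0.8) -> Optional[str]:
    """Trigger a predetermined control only when the classifier is confident enough."""
    if confidence < threshold:
        return None          # treat low-confidence windows as non-gesture noise
    return ACTIONS.get(gesture)
```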
The acquisition of the sample, training of the model and testing are described in detail below with reference to fig. 4.
The inventors organized volunteers to collect tens of thousands of physical-signal samples (in this system, the sound signals received by the earphone microphone) for the different gestures.
In one example, after acquisition is complete, the data is cleaned: the time-frequency diagrams of all data are inspected, and the start and end times of each gesture are manually annotated. Samples affected by noise caused by hardware problems during data collection are deleted. After this process, 11147 gesture samples (77.4% of the collected samples) remain in the data set. Among the gestures, the sliding gesture takes the longest, with a 99th-percentile duration of 1.2 seconds, so this duration is used as the original audio input length for gesture classification. Finally, each gesture is segmented with a 1.2-second audio data window centered on the midpoint of the labeled range, to generate a data set for evaluating gesture detection and classification.
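A minimal sketch of this windowing step is given below; the 16 kHz sampling rate and the zero-padding at recording boundaries are assumptions not specified in the embodiment.

```python
# Sketch of cutting a fixed 1.2 s clip centered on the labeled gesture midpoint.
# The 16 kHz sampling rate is an assumption.
import numpy as np

def segment_gesture(audio: np.ndarray, start_s: float, end_s: float,
                    sample_rate: int = 16000, window_s: float = 1.2) -> np.ndarray:
    mid = int((start_s + end_s) / 2 * sample_rate)   # midpoint of the labeled range
    half = int(window_s * sample_rate / 2)
    lo, hi = mid - half, mid + half
    clip = audio[max(lo, 0):hi]
    # Zero-pad if the window runs past either end of the recording.
    pad_left = max(0, -lo)
    pad_right = max(0, hi - len(audio))
    return np.pad(clip, (pad_left, pad_right))
```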
Next, the data is augmented: the collected data set is small compared with what deep learning typically requires, so it is enlarged by generating similar variants of the collected examples. The main augmentation means include mixing the noise of two common scenes (office noise and street noise) with the original audio data, and time-shifting (translating) the audio signal.
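The two augmentation operations could be realized, for example, as follows; the signal-to-noise ratio and the maximum shift are illustrative assumptions.

```python
# Sketch of the two augmentation operations (noise mixing and time-shifting);
# the SNR value and shift range are assumptions.
import numpy as np

def mix_noise(audio: np.ndarray, noise: np.ndarray, snr_db: float = 10.0) -> np.ndarray:
    """Mix an office/street noise recording into a gesture clip at a given SNR."""
    noise = np.resize(noise, audio.shape)            # loop or trim the noise to clip length
    sig_power = np.mean(audio ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(sig_power / (noise_power * 10 ** (snr_db / 10)))
    return audio + scale * noise

def time_shift(audio: np.ndarray, max_shift: int = 1600) -> np.ndarray:
    """Translate the signal by a random number of samples (circular shift)."""
    shift = np.random.randint(-max_shift, max_shift + 1)
    return np.roll(audio, shift)
```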
After augmentation, each audio signal is labeled with its corresponding gesture.
Formal model training is performed after labeling: the augmented and labeled data are mixed together (with a training-set to test-set ratio of 8:2), converted into time-frequency diagrams as input, and used to train a pre-trained DenseNet.
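Purely as an illustration of this setup, a pre-trained DenseNet could be adapted to the gesture classes as sketched below. The DenseNet-121 variant, the torchvision weights, and the 8-class output head are assumptions; the embodiment only specifies "a pre-trained DenseNet".

```python
# Sketch of adapting a pre-trained DenseNet to gesture classification.
# The DenseNet-121 variant and the 8-class head are assumptions.
import torch
import torchvision

N_CLASSES = 8  # number of core gestures (per the screened gesture set above)

model = torchvision.models.densenet121(
    weights=torchvision.models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = torch.nn.Linear(model.classifier.in_features, N_CLASSES)
# Single-channel time-frequency diagrams can be repeated to 3 channels to match
# the pre-trained input format, as in the classification sketch earlier.
```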
Design of the optimization algorithm: the literature indicates that stochastic gradient descent (SGD) has better generalization performance than adaptive optimizers (such as Adam). SGD is therefore used as the training optimizer, with a momentum of 0.9 to accelerate convergence and a weight decay of 0.0001 to prevent overfitting. In addition, the learning rate is updated by combining a linear warm-up with cosine annealing: it starts at 0.01, rises to 0.1 over 20 epochs, and then decays along a cosine curve over the next 400 epochs. Such a learning-rate schedule converges both quickly (high learning rate at the start) and robustly (low learning rate at the end).
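The stated optimizer and learning-rate schedule could be expressed in PyTorch roughly as follows; the total of 420 epochs (20 warm-up plus 400 decay) follows the values above, while the scheduler wiring itself is an assumption of this sketch.

```python
# Sketch of SGD with momentum/weight decay plus linear warm-up and cosine decay,
# following the values stated above (0.01 -> 0.1 over 20 epochs, then 400 epochs of decay).
import math
import torch

def build_optimizer(model: torch.nn.Module,
                    warmup_epochs: int = 20,
                    total_epochs: int = 420,
                    base_lr: float = 0.01,
                    peak_lr: float = 0.1):
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                                momentum=0.9, weight_decay=1e-4)

    def lr_factor(epoch: int) -> float:
        if epoch < warmup_epochs:                       # linear warm-up
            lr = base_lr + (peak_lr - base_lr) * epoch / warmup_epochs
        else:                                           # cosine annealing
            t = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
            lr = peak_lr * 0.5 * (1.0 + math.cos(math.pi * t))
        return lr / base_lr                             # LambdaLR scales the initial lr

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_factor)
    return optimizer, scheduler
```

In this sketch, scheduler.step() would be called once per epoch.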
Through multiple rounds of training, the overall classification accuracy on the test set reaches 95.3% at the end of training, as shown in fig. 5.
In addition, an Android app that recognizes gestures and executes commands in real time was developed.
After designing the system, the inventors recruited 20 users to actually use the real-time interactive system and asked them to complete specific interaction tasks with different gestures. The interaction tasks simulate three common scenarios: music playback, answering phone calls, and handling notification messages, as shown in fig. 6.
As baselines for the gesture interaction system, each user also simulated two situations, with the mobile phone in a pocket (the "take-out group") and on a table (the "pick-up group"). The experimental results show that:
1. Users remembered the gestures corresponding to specific operations well (no incorrect gestures occurred);
2. The interactive system of the embodiment of the invention accurately detected the presence of gesture actions (detection accuracy 95.9%);
3. The interactive system of the embodiment of the invention accurately identified the gesture actions (classification accuracy 93.7%);
4. In the three scenarios of music playback, phone calls, and notification messages, the embodiment of the present invention was on average 33.9% faster than the pick-up group and 56.2% faster than the take-out group, as shown in fig. 7 ("EarBuddy" in fig. 7 denotes the interactive system according to an embodiment of the present invention).
The embodiments of the present invention have been described above; the foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A human-computer interaction method for interacting with an intelligent terminal by using a wireless earphone, comprising the following steps:
establishing a wireless connection between the earphone and the intelligent terminal;
under the condition that a user wears the wireless earphone, the earphone detecting, with a built-in sensor, a sound signal of a contact gesture made by a human hand in the area above the neck of the human body, and transmitting the sound signal to the intelligent terminal;
the intelligent terminal classifying and predicting the sound signal with a trained machine-learning-based classification model, and determining whether the sound signal indicates that the user has made a predetermined gesture type;
in response to determining that the sound signal indicates that a predetermined gesture type has been made, the intelligent terminal performing a predetermined control on the wireless earphone.
2. The human-computer interaction method of claim 1, the gesture types comprising:
clicking the temple, clicking the cheekbone, clicking the mandibular angle, clicking the occipital bone, clicking the upper edge of the auricle, clicking the middle of the auricle, clicking the lower edge of the auricle, double-clicking the temple, double-clicking the cheekbone, double-clicking the mandibular angle, double-clicking the occipital bone, double-clicking the upper edge of the auricle, double-clicking the lower edge of the auricle, sliding upward on the face, sliding downward on the face, sliding from back to front on the face, sliding from front to back on the face, sliding up and down at the mandibular angle, sliding up and down on the occipital bone, sliding upward on the auricle, sliding downward on the auricle, drawing a circle on the face, making a two-finger pinch (zoom-in) gesture on the face, and making a two-finger spread (zoom-out) gesture on the face.
3. The human-computer interaction method of claim 1, wherein the gesture type is one of a single click, a double click, and a slide, performed on the face, the ear, or behind the ear.
4. The human-computer interaction method of claim 1, further comprising:
using stochastic gradient descent (SGD) as the optimization algorithm for training the classification model, and updating the learning rate by combining a linear warm-up method with a cosine annealing technique.
5. The human-computer interaction method of claim 4, wherein:
the learning momentum is 0.9 to accelerate convergence and the weight decays to 0.0001 to prevent overfitting.
6. The human-computer interaction method of claim 4, wherein updating the learning rate by combining the linear warm-up method with the cosine annealing technique comprises: the learning rate starts at 0.01, rises to 0.1 over 20 epochs, and then decays along a cosine curve over the next 400 epochs.
7. The human-computer interaction method of claim 4, comprising: augmenting the data set for training the model, in a manner that includes mixing office noise and street noise with the original audio data and time-shifting the audio signal.
8. The human-computer interaction method of claim 1, further comprising:
taking the 99th percentile of the duration of the sliding gesture as the original audio input length for gesture classification.
9. The human-computer interaction method of claim 1, further comprising:
using 1.2 seconds as the original audio input length for gesture classification, and segmenting each gesture with a 1.2-second audio data window centered on the midpoint of the labeled range.
10. The human-computer interaction method of claim 1, said predetermined controlling comprising:
music playback, answering a phone call, and handling a notification message.
CN201911408792.5A 2019-12-31 2019-12-31 Man-machine interaction method for interacting with intelligent terminal by using wireless earphone Pending CN111158487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911408792.5A CN111158487A (en) 2019-12-31 2019-12-31 Man-machine interaction method for interacting with intelligent terminal by using wireless earphone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911408792.5A CN111158487A (en) 2019-12-31 2019-12-31 Man-machine interaction method for interacting with intelligent terminal by using wireless earphone

Publications (1)

Publication Number Publication Date
CN111158487A true CN111158487A (en) 2020-05-15

Family

ID=70559836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911408792.5A Pending CN111158487A (en) 2019-12-31 2019-12-31 Man-machine interaction method for interacting with intelligent terminal by using wireless earphone

Country Status (1)

Country Link
CN (1) CN111158487A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576578A (en) * 2013-11-05 2014-02-12 小米科技有限责任公司 Method, device and equipment for adopting earphone wire to control terminal
CN104410938A (en) * 2014-12-23 2015-03-11 上海斐讯数据通信技术有限公司 Intelligent headset and control method thereof
CN106155271A (en) * 2015-03-25 2016-11-23 联想(北京)有限公司 Earphone, electronic apparatus system, control instruction determine method and data processing unit
CN110069199A (en) * 2019-03-29 2019-07-30 中国科学技术大学 A kind of skin-type finger gesture recognition methods based on smartwatch

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112969116A (en) * 2021-02-01 2021-06-15 深圳市美恩微电子有限公司 Interactive control system of wireless earphone and intelligent terminal
CN113741703A (en) * 2021-11-08 2021-12-03 广东粤港澳大湾区硬科技创新研究院 Non-contact intelligent earphone or glasses interaction method
CN113825063A (en) * 2021-11-24 2021-12-21 珠海深圳清华大学研究院创新中心 Earphone voice recognition starting method and earphone voice recognition method
CN113825063B (en) * 2021-11-24 2022-03-15 珠海深圳清华大学研究院创新中心 Earphone voice recognition starting method and earphone voice recognition method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination