CN113989828A - Gesture recognition method and system based on portable intelligent device and ultrasonic signals - Google Patents

Gesture recognition method and system based on portable intelligent device and ultrasonic signals

Info

Publication number
CN113989828A
Authority
CN
China
Prior art keywords
model
ultrasonic
gesture
gesture recognition
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111092418.6A
Other languages
Chinese (zh)
Inventor
黄顺亮
唐德宾
赵国成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Sandu Intelligent Technology Co.,Ltd.
Original Assignee
Suzhou Shengying Space Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Shengying Space Intelligent Technology Co ltd filed Critical Suzhou Shengying Space Intelligent Technology Co ltd
Priority to CN202111092418.6A priority Critical patent/CN113989828A/en
Publication of CN113989828A publication Critical patent/CN113989828A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention discloses a gesture recognition method and system based on a portable intelligent device and ultrasonic signals. The method comprises: transmitting ultrasonic waves of a specific frequency; receiving the specific-frequency ultrasonic waves with a near-field attenuation mechanism; and recognizing gestures by fusing a deep learning model with a machine learning model. The system is composed of a smart phone, an installable APP, and a background model and service. The invention uses the ultrasonic distance measurement principle and the Doppler frequency shift effect to detect and recognize gestures, and further performs various multimedia controls through the gestures, making control convenient and simple.

Description

Gesture recognition method and system based on portable intelligent device and ultrasonic signals
Technical Field
The invention relates to the technical field of information, in particular to a gesture recognition method and system based on portable intelligent equipment and ultrasonic signals.
Background
In recent years, artificial intelligence technology has developed rapidly, and the industry is generally optimistic about its prospects. Applications in fields such as face recognition, human behavior recognition, object detection, object tracking, and speech recognition keep emerging and have become a driving force of new technological development. With the popularization of mobile and intelligent devices, people interact with machines more and more, so ever higher demands are placed on the modes of human-computer interaction.
From the traditional command-line interface (CLI) to the graphical user interface (GUI), the experience of interacting with devices keeps improving, but it still falls short of the ideal. Given the instincts and habits formed through human evolution, what people really want is human-computer interaction that matches natural human expression. Interaction modes based on sound, gestures, vision, and human posture/behavior have therefore gradually attracted researchers' attention.
Current mobile and intelligent devices, such as smart watches and smart phones, mainly rely on buttons and touch screens for human-computer interaction. This basically satisfies daily needs, but the experience is poor; in many situations, such as driving, operating a touch screen is very inconvenient and poses safety hazards. Although some devices or software implement partial voice control, it is strongly affected by the speaker's pronunciation and the surrounding noise environment, so the experience is still unsatisfactory. A new contactless sensing control mode is therefore needed, one that gives users a better experience and gives developers human-computer interaction options of more dimensions, which has high commercial and application value. Hence, the present gesture recognition method and system based on a portable intelligent device and ultrasonic signals are proposed.
Disclosure of Invention
The invention aims to provide a gesture recognition method based on a portable intelligent device and an ultrasonic signal to solve the problems in the prior art.
In order to achieve this purpose, the invention provides the following technical scheme. The gesture recognition method and system based on a portable intelligent device and ultrasonic signals specifically comprises the following steps:
S1: transmission of specific-frequency ultrasonic waves: software on the intelligent terminal device freely specifies the transmitted ultrasonic frequency, and a loudspeaker or loudspeaker array on the smart phone serves as the transmitting hardware;
S2: reception of the specific-frequency ultrasonic waves and a near-field attenuation mechanism: the ultrasonic waves are received by a microphone or microphone array of the smart phone, and the collected signal is the echo of the transmitted sound wave reflected by the person's gesture actions; to improve detection accuracy and reduce interference from the movement of surrounding objects, a near-field attenuation mechanism effectively attenuates echo signals from beyond a specified range, ensuring that only gesture actions within the specified distance range count as valid actions;
S3: gesture recognition by fusing a deep learning model and a machine learning model, comprising the following steps:
S31: first, training samples are collected with a smart phone, each sample comprising an ultrasonic echo signal with a gesture label;
S32: then the training samples are preprocessed, mainly in two ways:
S321: in the first way, the sound wave signals are processed into images, which serve as the input of a deep learning model, with the gesture label as the learning label, so as to train the deep learning model;
S322: in the second way, traditional digital signal processing is used to extract the variation features of the sound wave signal as the input of a machine learning model, with the gesture label as the learning label, so as to train the machine learning model;
S33: finally, the two models are fused, the fused result serves as the final prediction model, and the trained model is deployed as a gesture recognition application tool;
S4: in a specific application, gestures can be defined through the APP, i.e., the actual control meaning of each gesture is user-defined.
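Step S1 amounts to synthesizing a cosine carrier in software and playing it through the phone's speaker. The sketch below illustrates this under the example values given later in the embodiment (21 kHz carrier, 96 kHz sampling rate, amplitude 0.5); the function name and parameters are illustrative only, not part of the claimed method:

```python
import math

def make_tone(freq_hz=21000.0, fs=96000, duration_s=0.05, amplitude=0.5):
    """Synthesize a cosine ultrasonic carrier for speaker playback.

    Default values follow the embodiment's example settings (21 kHz
    carrier, 96 kHz sampling, volume 0.5); they are illustrative only.
    """
    n = int(fs * duration_s)
    return [amplitude * math.cos(2.0 * math.pi * freq_hz * i / fs)
            for i in range(n)]

tone = make_tone()  # 4800 samples = 50 ms at 96 kHz
```

In practice the sample list would be converted to PCM and handed to the platform's audio output API.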
As a preferred technical solution of the present invention, the software in S1 is an APP installed on a smart phone.
As a preferred embodiment of the present invention, the ultrasonic frequency in S1 is defined as a cosine-wave ultrasonic signal of 18 kHz or above.
As a preferred embodiment of the present invention, the variation features of the sound wave signal in S322 are the digital features of frequency variation, phase variation, amplitude, slope, distance, and distance variation.
As a preferred embodiment of the present invention, the image in S321 is a Doppler frequency shift map.
A gesture recognition system based on a portable intelligent device and ultrasonic signals comprises a smart phone, an installable APP, and a background model and service; the implementation can be divided into a model building phase and a model application phase.
As a preferred technical scheme of the invention, the model building stage comprises sample data collection, data processing, feature transformation, and model training;
a: sample data collection: the APP installed on the mobile phone can set the frequency and amplitude of the transmitted ultrasonic waves; after the APP is started, the system transmits ultrasonic signals while an operator performs the prescribed gesture actions, and the UBGR system samples the receivers at a sampling frequency of 96 kHz to receive the return signals of the ultrasonic signals after being affected by the gesture, thereby collecting a time-domain signal for each receiver;
b: data processing and feature transformation: on the one hand, the time-domain and frequency-domain features of the signal, including frequency, amplitude, variation indexes, and the calculated depth and distance, are extracted by signal processing; on the other hand, a Doppler frequency shift image is computed and used as the input for end-to-end learning;
c: model training: the model combines a traditional machine learning model with a deep learning model; the traditional machine learning model adopts XGBoost as the classifier, whose core is the boosting idea: a weak prediction model is generated at each step and added to the overall model with a weight, finally yielding a strong classifier.
As a preferred technical scheme of the invention, the model application stage comprises deployment and operation of the model; in the UBGR system, the model is deployed on a back-end server, and front-end data is acquired in the same way as sample data collection in the model training stage, except that the acquired ultrasonic echo signals are sent directly to the server, processed into suitable model input data, fed into the model, and the recognition result is output.
The invention has the following beneficial effects: using ultrasonic waves as the signal medium, and based on how the ultrasonic distance measurement principle and the Doppler effect sense changes in the position and motion state of an object, the invention establishes an ultrasound-based gesture recognition method and system combined with the latest deep learning models, while also considering the convenience, cost, and other practical factors of application.
The invention utilizes the ultrasonic distance measurement principle and the Doppler frequency shift effect to realize the detection and identification of gestures, and further completes various controls of multimedia through the gestures, so that the control is convenient and simple.
Drawings
FIG. 1 is a flow chart of a gesture recognition method of the present invention;
FIG. 2 is a block diagram of a gesture recognition system according to the present invention.
Detailed Description
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more readily understood by those skilled in the art and the scope of the invention is more clearly defined.
Example: referring to FIGS. 1-2, the present invention provides a technical solution. The gesture recognition method based on a portable intelligent device and ultrasonic signals comprises the following steps:
S1: transmission of specific-frequency ultrasonic waves: software on the intelligent terminal device freely specifies the transmitted ultrasonic frequency, and a loudspeaker or loudspeaker array on the smart phone serves as the transmitting hardware;
S2: reception of the specific-frequency ultrasonic waves and a near-field attenuation mechanism: the ultrasonic waves are received by a microphone or microphone array of the smart phone, and the collected signal is the echo of the transmitted sound wave reflected by the person's gesture actions; to improve detection accuracy and reduce interference from the movement of surrounding objects, a near-field attenuation mechanism effectively attenuates echo signals from beyond a specified range, ensuring that only gesture actions within the specified distance range count as valid actions;
S3: gesture recognition by fusing a deep learning model and a machine learning model, comprising the following steps:
S31: first, training samples are collected with a smart phone, each sample comprising an ultrasonic echo signal with a gesture label;
S32: then the training samples are preprocessed, mainly in two ways:
S321: in the first way, the sound wave signals are processed into images, which serve as the input of a deep learning model, with the gesture label as the learning label, so as to train the deep learning model;
S322: in the second way, traditional digital signal processing is used to extract the variation features of the sound wave signal as the input of a machine learning model, with the gesture label as the learning label, so as to train the machine learning model;
S33: finally, the two models are fused, the fused result serves as the final prediction model, and the trained model is deployed as a gesture recognition application tool;
S4: in a specific application, gestures can be defined through the APP, i.e., the actual control meaning of each gesture is user-defined.
The software in S1 is an APP on the smart phone; the ultrasonic frequency in S1 is defined as a cosine-wave ultrasonic signal of 18 kHz or above; the variation features of the sound wave signal in S322 are the digital features of frequency variation, phase variation, amplitude, slope, distance, and distance variation; the image in S321 is a Doppler frequency shift map.
A gesture recognition system based on a portable intelligent device and ultrasonic signals comprises a smart phone, an installable APP, and a background model and service; the implementation can be divided into a model building phase and a model application phase.
The model building stage comprises sample data collection, data processing, feature transformation, and model training;
a: sample data collection: the APP installed on the mobile phone can set the frequency and amplitude of the transmitted ultrasonic waves; after the APP is started, the system transmits ultrasonic signals while an operator performs the prescribed gesture actions, and the UBGR system samples the receivers at a sampling frequency of 96 kHz to receive the return signals of the ultrasonic signals after being affected by the gesture, thereby collecting a time-domain signal for each receiver;
b: data processing and feature transformation: on the one hand, the time-domain and frequency-domain features of the signal, including frequency, amplitude, variation indexes, and the calculated depth and distance, are extracted by signal processing; on the other hand, a Doppler frequency shift image is computed and used as the input for end-to-end learning;
c: model training: the model combines a traditional machine learning model with a deep learning model; the traditional machine learning model adopts XGBoost as the classifier, whose core is the boosting idea: a weak prediction model is generated at each step and added to the overall model with a weight, finally yielding a strong classifier.
The model application stage comprises deployment and operation of the model; in the UBGR system, the model is deployed on a back-end server, and front-end data is acquired in the same way as sample data collection in the model training stage, except that the acquired ultrasonic echo signals are sent directly to the server, processed into suitable model input data, fed into the model, and the recognition result is output.
The working principle is as follows: the method uses an existing portable device (a smart phone) to transmit and receive specific ultrasonic signals without adding any components; in particular, the invention detects ultrasonic signals within a limited range while attenuating surrounding ultrasonic signals. The method combines deep learning with traditional machine learning and achieves accurate gesture recognition through multi-model fusion; the input of the fused model is the signal obtained by feature extraction and transformation of the echo signal received by the ultrasonic receiving unit.
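The limited-range detection and attenuation of surrounding signals described above can be illustrated by simple time-gating: echoes that arrive later than the round-trip delay of the farthest allowed reflector are suppressed. The patent does not specify the exact attenuation mechanism, so this is only one plausible sketch (the function name is hypothetical; the 50 cm range follows the embodiment, and 343 m/s is the assumed speed of sound):

```python
def range_gate(echo, fs=96000, max_range_m=0.5, c=343.0):
    """Suppress echo samples whose round-trip delay exceeds the
    allowed range (near-field gating).

    `echo` is one frame of receiver samples aligned to the transmit
    instant; samples past the cutoff are zeroed rather than merely
    attenuated, for simplicity.
    """
    cutoff = int(fs * 2.0 * max_range_m / c)  # samples for the 2*R round trip
    return [s if i < cutoff else 0.0 for i, s in enumerate(echo)]
```

At 96 kHz and a 50 cm range, the cutoff works out to 279 samples (about 2.9 ms).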
The ultrasound-based gesture recognition system is composed of a smart phone, an installable APP, and a background model and service; the implementation can be divided into two stages: model building and model application.
In the model building stage, the core work is the acquisition of sample data and the model training;
(1) Sample data collection: the APP installed on the mobile phone (part of the system of the invention, developed separately) can set the frequency and amplitude of the transmitted ultrasonic waves. In this embodiment, the transmitted ultrasound is set to 21 kHz, the sampling frequency is 96 kHz, the speaker volume defaults to 0.5, and the effective range is a circle of 50 cm radius centered on the mobile phone. After the APP is started, the system transmits the ultrasonic signal while an operator performs the prescribed gesture actions (the invention takes 6 basic gestures as an example), which can be static or dynamic gestures. The UBGR system samples the receivers (microphones in this example) at a sampling frequency of 96 kHz to receive the return signals of the ultrasonic signals after being affected by the gesture, thereby collecting a time-domain signal for each receiver; a file of each gesture signal is stored on a background server together with its label (i.e., the gesture name);
(2) Data processing and feature transformation: the collected sample data first needs processing such as noise reduction, followed by two major kinds of processing: on the one hand, time-domain and frequency-domain features of the signals, including frequency, amplitude, variation indexes, and the calculated depth (distance), are extracted by signal processing methods such as the fast Fourier transform (FFT); on the other hand, a Doppler frequency shift image is computed and used as the input for end-to-end learning;
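One way to picture how the Doppler frequency shift image can be formed is to compute, frame by frame, the spectrum in a narrow band around the carrier: hand motion toward or away from the phone shifts energy into neighboring frequency bins, and stacking the per-frame columns yields an image. The direct-DFT sketch below is illustrative only; the band width, frame length, and function names are assumptions, and a real system would use an FFT as the text suggests:

```python
import cmath
import math

def bin_magnitude(frame, k, n):
    """Normalized magnitude of DFT bin k of an n-sample frame (direct DFT)."""
    acc = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
              for t in range(n))
    return abs(acc) / n

def doppler_column(frame, fs=96000, f0=21000.0, half_width_hz=400.0):
    """One column of a Doppler-shift image: bin magnitudes in a band
    around the carrier f0.  Stacking columns over successive frames
    yields the image fed to the deep model."""
    n = len(frame)
    k0 = round(f0 * n / fs)            # carrier bin
    kw = round(half_width_hz * n / fs)  # half band width in bins
    return [bin_magnitude(frame, k, n) for k in range(k0 - kw, k0 + kw + 1)]
```

With a stationary reflector the energy stays in the center bin; a moving hand spreads it to one side of the band.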
(3) Model training: the model combines a traditional machine learning model with a deep learning model. The traditional machine learning model adopts XGBoost as the classifier, whose core is the boosting idea: a weak prediction model is generated at each step and accumulated into the overall model with a weight, finally yielding a strong classifier;
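The boosting principle described here (a weak model per step, accumulated with a weight) can be illustrated with a toy AdaBoost-style loop over decision stumps. This is only the conceptual skeleton: XGBoost itself uses gradient-based objectives and regularized trees, and the data, weak learner, and round count below are illustrative:

```python
import math

def stump_predict(x, threshold, polarity):
    """Weak learner: a one-feature threshold rule returning +/-1."""
    return polarity if x >= threshold else -polarity

def train_boosted(xs, ys, n_rounds=3):
    """Each round fits the best stump on re-weighted data and adds it
    to the ensemble with weight alpha (the boosting idea)."""
    m = len(xs)
    w = [1.0 / m] * m
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for threshold in sorted(set(xs)):
            for polarity in (1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if stump_predict(xi, threshold, polarity) != yi)
                if best is None or err < best[0]:
                    best = (err, threshold, polarity)
        err, threshold, polarity = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)  # clamp away from 0 and 1
        alpha = 0.5 * math.log((1.0 - err) / err)
        ensemble.append((alpha, threshold, polarity))
        # Re-weight: misclassified samples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, threshold, polarity))
             for xi, yi, wi in zip(xs, ys, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def ensemble_predict(ensemble, x):
    """Weighted sum of the weak models, i.e. the accumulated strong classifier."""
    s = sum(a * stump_predict(x, t, p) for a, t, p in ensemble)
    return 1 if s >= 0 else -1
```

In the real system each sample would be the feature vector of step (2) rather than a single number, and XGBoost would replace this hand-rolled loop.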
the deep learning model in this example employs a convolutional neural network ("CNN") that includes a convolutional layer, which may include a convolutional sublayer, a rectifying linear unit ("ReLU") sublayer, and a max-pooling sublayer, followed by a fully-connected ("FC") layer. The frequency shift image and other related data are used as input of the CNN, the gesture type is used as a label, and a proper classifier can be obtained through multiple rounds of iterative training;
Finally, the classification result of the XGBoost model is fused with that of the CNN; the specific fusion mode can be chosen among different schemes such as mean fusion and voting fusion. In the application of this embodiment, the fused model improves by more than 30 percent over the best original single model;
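The mean-fusion and voting-fusion options mentioned above can be sketched as follows; tie-breaking and any per-model weighting are not specified by the patent, so the choices below are assumptions:

```python
def mean_fusion(probs_a, probs_b):
    """Average the class-probability vectors of the two models and
    return the arg-max class index (ties go to the lower index)."""
    fused = [(pa + pb) / 2.0 for pa, pb in zip(probs_a, probs_b)]
    return max(range(len(fused)), key=fused.__getitem__)

def vote_fusion(predictions):
    """Majority vote over hard labels from several models; ties are
    broken by the first-seen label."""
    counts = {}
    for p in predictions:
        counts[p] = counts.get(p, 0) + 1
    return max(counts, key=counts.get)
```

Mean fusion needs calibrated per-class probabilities from both models, while vote fusion only needs hard labels, which is why both appear as alternatives.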
in the model application phase, the deployment and operation of the model are mainly performed. In the UBGR system, the deployment of the model is put on a server at the rear end to operate, the acquisition of the front-end data is the same as the model training stage (1), only the acquired ultrasonic echo signals can be directly sent to the server end, and after the ultrasonic echo signals are processed into proper model input data, the model is sent to the server end, and then the recognition result is output.
The invention utilizes the ultrasonic distance measurement principle and the Doppler frequency shift effect to realize the detection and identification of gestures, and further completes various controls of multimedia through the gestures, so that the control is convenient and simple.
The above examples only illustrate some embodiments of the present invention, and although their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the scope of the present invention.

Claims (8)

1. A gesture recognition method based on a portable intelligent device and ultrasonic signals, characterized in that the method comprises the following steps:
S1: transmission of specific-frequency ultrasonic waves: software on the intelligent terminal device freely specifies the transmitted ultrasonic frequency, and a loudspeaker or loudspeaker array on the smart phone serves as the transmitting hardware;
S2: reception of the specific-frequency ultrasonic waves and a near-field attenuation mechanism: the ultrasonic waves are received by a microphone or microphone array of the smart phone, and the collected signal is the echo of the transmitted sound wave reflected by the person's gesture actions; to improve detection accuracy and reduce interference from the movement of surrounding objects, a near-field attenuation mechanism effectively attenuates echo signals from beyond a specified range, ensuring that only gesture actions within the specified distance range count as valid actions;
S3: gesture recognition by fusing a deep learning model and a machine learning model, comprising the following steps:
S31: first, training samples are collected with a smart phone, each sample comprising an ultrasonic echo signal with a gesture label;
S32: then the training samples are preprocessed, mainly in two ways:
S321: in the first way, the sound wave signals are processed into images, which serve as the input of a deep learning model, with the gesture label as the learning label, so as to train the deep learning model;
S322: in the second way, traditional digital signal processing is used to extract the variation features of the sound wave signal as the input of a machine learning model, with the gesture label as the learning label, so as to train the machine learning model;
S33: finally, the two models are fused, the fused result serves as the final prediction model, and the trained model is deployed as a gesture recognition application tool;
S4: in a specific application, gestures can be defined through the APP, i.e., the actual control meaning of each gesture is user-defined.
2. The gesture recognition method based on a portable intelligent device and ultrasonic signals of claim 1, characterized in that: the software in S1 is an APP on the smart phone.
3. The gesture recognition method based on a portable intelligent device and ultrasonic signals of claim 1, characterized in that: the ultrasonic frequency in S1 is defined as a cosine-wave ultrasonic signal of 18 kHz or above.
4. The gesture recognition method based on a portable intelligent device and ultrasonic signals of claim 1, characterized in that: the variation features of the sound wave signal in S322 are the digital features of frequency variation, phase variation, amplitude, slope, distance, and distance variation.
5. The gesture recognition method based on a portable intelligent device and ultrasonic signals of claim 1, characterized in that: the image in S321 is a Doppler frequency shift map.
6. A gesture recognition system based on a portable intelligent device and ultrasonic signals, characterized in that: the system comprises a smart phone, an installable APP, and a background model and service; the implementation can be divided into a model building phase and a model application phase.
7. The gesture recognition system based on a portable intelligent device and ultrasonic signals of claim 6, characterized in that: the model building stage comprises sample data collection, data processing, feature transformation, and model training;
a: sample data collection: the APP installed on the mobile phone can set the frequency and amplitude of the transmitted ultrasonic waves; after the APP is started, the system transmits ultrasonic signals while an operator performs the prescribed gesture actions, and the UBGR system samples the receivers at a sampling frequency of 96 kHz to receive the return signals of the ultrasonic signals after being affected by the gesture, thereby collecting a time-domain signal for each receiver;
b: data processing and feature transformation: on the one hand, the time-domain and frequency-domain features of the signal, including frequency, amplitude, variation indexes, and the calculated depth and distance, are extracted by signal processing; on the other hand, a Doppler frequency shift image is computed and used as the input for end-to-end learning;
c: model training: the model combines a traditional machine learning model with a deep learning model; the traditional machine learning model adopts XGBoost as the classifier, whose core is the boosting idea: a weak prediction model is generated at each step and added to the overall model with a weight, finally yielding a strong classifier.
8. The gesture recognition system based on a portable intelligent device and ultrasonic signals of claim 6, characterized in that: the model application stage comprises deployment and operation of the model; in the UBGR system, the model is deployed on a back-end server, and front-end data is acquired in the same way as sample data collection in the model training stage, except that the acquired ultrasonic echo signals are sent directly to the server, processed into suitable model input data, fed into the model, and the recognition result is output.
CN202111092418.6A 2021-09-17 2021-09-17 Gesture recognition method and system based on portable intelligent device and ultrasonic signals Pending CN113989828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092418.6A CN113989828A (en) 2021-09-17 2021-09-17 Gesture recognition method and system based on portable intelligent device and ultrasonic signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092418.6A CN113989828A (en) 2021-09-17 2021-09-17 Gesture recognition method and system based on portable intelligent device and ultrasonic signals

Publications (1)

Publication Number Publication Date
CN113989828A true CN113989828A (en) 2022-01-28

Family

ID=79736034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092418.6A Pending CN113989828A (en) 2021-09-17 2021-09-17 Gesture recognition method and system based on portable intelligent device and ultrasonic signals

Country Status (1)

Country Link
CN (1) CN113989828A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116032679A (en) * 2023-03-28 2023-04-28 合肥坤语智能科技有限公司 Intelligent host interaction control system for intelligent hotel
CN116032679B (en) * 2023-03-28 2023-05-30 合肥坤语智能科技有限公司 Intelligent host interaction control system for intelligent hotel

Similar Documents

Publication Publication Date Title
EP3923273B1 (en) Voice recognition method and device, storage medium, and air conditioner
CN107454508B (en) TV set and TV system of microphone array
Liu et al. Wavoice: A noise-resistant multi-modal speech recognition system fusing mmwave and audio signals
CN110364144B (en) Speech recognition model training method and device
EP3639051B1 (en) Sound source localization confidence estimation using machine learning
CN110780741B (en) Model training method, application running method, device, medium and electronic equipment
CN105744434B (en) A kind of intelligent sound box control method and system based on gesture identification
CN103730116B (en) Intelligent watch realizes the system and method that intelligent home device controls
US11152016B2 (en) Autonomous intelligent radio
CN111124108B (en) Model training method, gesture control method, device, medium and electronic equipment
Zhang et al. Endophasia: Utilizing acoustic-based imaging for issuing contact-free silent speech commands
CN105807923A (en) Ultrasonic wave based volley gesture identification method and system
CN108962241B (en) Position prompting method and device, storage medium and electronic equipment
CN103948398A (en) Heart sound location segmenting method suitable for Android system
CN104123930A (en) Guttural identification method and device
CN110865710A (en) Terminal control method and device, mobile terminal and storage medium
CN112735418A (en) Voice interaction processing method and device, terminal and storage medium
CN110519450A (en) Ultrasonic processing method, device, electronic equipment and computer-readable medium
CN113989828A (en) Gesture recognition method and system based on portable intelligent device and ultrasonic signals
CN105975220B (en) Voice printing auxiliary equipment and voice printing system
CN107452381B (en) Multimedia voice recognition device and method
Cao et al. ipand: Accurate gesture input with smart acoustic sensing on hand
CN111257890A (en) Fall behavior identification method and device
US20220252722A1 (en) Method and apparatus for event detection, electronic device, and storage medium
CN116129942A (en) Voice interaction device and voice interaction method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220606

Address after: Room 1505, block B, twin towers, Wuxi Software Park, No. 18-17, Zhenze Road, Xinwu District, Wuxi, Jiangsu 214115

Applicant after: Wuxi Sandu Intelligent Technology Co.,Ltd.

Address before: 200335 D01, 12th floor, building 11, Lingkong SOHO, No. 968, Jinzhong Road, Changning District, Shanghai

Applicant before: Suzhou Shengying Space Intelligent Technology Co.,Ltd.