CN110069199A - A skin-type finger gesture recognition method based on a smartwatch - Google Patents

A skin-type finger gesture recognition method based on a smartwatch

Info

Publication number
CN110069199A
CN110069199A (application CN201910248707.7A)
Authority
CN
China
Prior art keywords
gesture
finger
skin
acoustic signal
smartwatch
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910248707.7A
Other languages
Chinese (zh)
Other versions
CN110069199B (en)
Inventor
杨盘隆
曹书敏
李向阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201910248707.7A priority Critical patent/CN110069199B/en
Publication of CN110069199A publication Critical patent/CN110069199A/en
Application granted granted Critical
Publication of CN110069199B publication Critical patent/CN110069199B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a skin-type finger gesture recognition method based on a smartwatch, comprising: step 1, signal acquisition: collecting, through the smartwatch, the passive acoustic signal generated by friction between a finger and the skin of the back of the hand; step 2, data preprocessing: removing the noise signal by filtering to obtain an acoustic signal to be detected; step 3, gesture motion detection: segmenting, by gesture detection processing, the acoustic signal to be detected into multiple independent gesture acoustic signals; step 4, feature extraction: converting the multiple independent gesture acoustic signals into grayscale images of the time-frequency spectrogram and of the mel-frequency cepstral coefficients of each acoustic signal, used as feature values; step 5, gesture motion recognition: taking the feature values of each independent finger motion as input data and performing finger gesture recognition on them with a convolutional neural network model to obtain the corresponding finger gesture. The method extends the input area of a wearable device without any additional equipment, and has the advantages of simplicity and good real-time performance.

Description

A skin-type finger gesture recognition method based on a smartwatch
Technical field
The present invention relates to the application field of smartwatches, and in particular to a skin-type finger gesture recognition method based on a smartwatch.
Background technique
At present, many studies have developed new software for smartwatches and proposed new methods for interacting with them. Much of the earlier work relies on special hardware or sensors. For example, Google developed a dedicated chip in Project Soli that uses a 60 GHz radar to replace conventional input actions; WatchIt provides a prototype device that extends input to a smart wristband; Mole proposes using the motion sensors built into a watch to infer what the user is writing.
Human skin can serve as an always-available input surface, and many techniques have explored this direction, for instance by designing novel skin-worn hardware, or by using unconventional signals such as electrical signals, sound, and even optical projection. iSkin proposes input on the human body through a thin sensor coated with a biocompatible capacitive metal. SkinTrack uses a ring to emit an RF signal and tracks the finger by measuring the phase offset of the received signal. Skinput transmits bio-acoustic signals through the human body, turning the skin of the arm into an input surface equipped with a set of sensors. SkinButton embeds small projectors into a watch to project icons onto the skin.
However, these existing methods that use human skin for input either require specific hardware or suffer from complex system construction.
Summary of the invention
In view of the problems in the prior art, the object of the present invention is to provide a skin-type finger gesture recognition method based on a smartwatch, which uses the worn smartwatch to turn the back of the hand into an input surface for handwriting input.
The object of the present invention is achieved through the following technical solutions:
An embodiment of the present invention provides a skin-type finger gesture recognition method based on a smartwatch, comprising:
Step 1, signal acquisition: while a finger rubs a gesture on the skin of the back of the hand, collecting, through the microphone of the smartwatch, the passive acoustic signal generated by the friction between the finger and the skin of the back of the hand;
Step 2, data preprocessing: removing, by filtering, the noise signal in the passive acoustic signal acquired in step 1 to obtain an acoustic signal to be detected;
Step 3, gesture motion detection: segmenting, by gesture detection processing, the acoustic signal to be detected obtained after the preprocessing of step 2 into multiple independent gesture acoustic signals;
Step 4, feature extraction: converting the multiple independent gesture acoustic signals into grayscale images of the time-frequency spectrogram and of the mel-frequency cepstral coefficients of each acoustic signal, as the feature values of each independent finger motion;
Step 5, gesture motion recognition: taking the feature values of each independent finger motion extracted in step 4 as input data, and performing finger gesture recognition on the input data with a convolutional neural network model to obtain the corresponding finger gesture.
As can be seen from the above technical solution, the skin-type finger gesture recognition method based on a smartwatch provided by the embodiments of the present invention has the following advantages:
The microphone of the smartwatch records the friction sound of finger strokes written on the back of the hand, and the recorded sound is processed to recognize the written finger gesture. A wearable device can thus be operated interactively by finger input on the skin of the back of the hand, which extends the input area of the wearable device without carrying any other bulky equipment. The method is simple and performs well in real time.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the skin-type finger gesture recognition method based on a smartwatch provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the gesture set of the recognition method provided by an embodiment of the present invention;
Fig. 3 is an architecture diagram of an application system of the recognition method provided by an embodiment of the present invention;
Fig. 4 shows the acoustic signals processed in the recognition method provided by an embodiment of the present invention, where (1) is the original signal and (2) is the signal after bandpass filtering;
Fig. 5 shows the acoustic signal after wavelet transformation in the recognition method provided by an embodiment of the present invention;
Fig. 6 shows, for the four standard gestures of the recognition method provided by an embodiment of the present invention, the processed waveforms, the spectrograms, and the mel cepstrum images.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the specific content of the invention. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention. Content not described in detail in the embodiments of the present invention belongs to the prior art well known to professionals in the field.
As shown in Fig. 1, an embodiment of the present invention provides a skin-type finger gesture recognition method based on a smartwatch, comprising:
Step 1, signal acquisition: while a finger rubs a gesture on the skin of the back of the hand, collecting, through the microphone of the smartwatch, the passive acoustic signal generated by the friction between the finger and the skin of the back of the hand;
Step 2, data preprocessing: removing, by filtering, the noise signal in the passive acoustic signal acquired in step 1 to obtain an acoustic signal to be detected;
Step 3, gesture motion detection: segmenting, by gesture detection processing, the acoustic signal to be detected obtained after the preprocessing of step 2 into multiple independent gesture acoustic signals;
Step 4, feature extraction: converting the multiple independent gesture acoustic signals into grayscale images of the time-frequency spectrogram and of the mel-frequency cepstral coefficients of each acoustic signal, as the feature values of each independent finger motion;
Step 5, gesture motion recognition: taking the feature values of each independent finger motion extracted in step 4 as input data, and performing finger gesture recognition on the input data with a convolutional neural network model to obtain the corresponding finger gesture.
In step 2 of the above method, the filtering process comprises:
removing the low-frequency and high-frequency noise in the collected passive acoustic signal by FIR filtering (i.e., bandpass filtering), to obtain an acoustic signal to be detected that contains only the acoustic signal of finger gesture motions.
The removal of the low-frequency and high-frequency noise in the collected passive acoustic signal by FIR filtering is as follows: the filtered signal obtained at the FIR filter output is used as the acoustic signal to be detected, y[n], with out-of-band noise removed:
y[n] = Σ_{k=1}^{N+1} b_k · x[n+1−k];
b_k = b_{N+2−k}, k = 1, 2, ..., N+1;
where x[n] is the collected signal, N is the order of the FIR filter, set to N = 112, and b_i is the i-th filter coefficient; the symmetry of the coefficients makes the filter linear-phase. The filter parameters are set so that the two passband cutoff frequencies are 6000 and 14000 Hz and the two stopband cutoff frequencies are 5000 and 15000 Hz, and the sampling rate of the original signal is Fs = 44100 Hz.
In step 3 of the above method, segmenting the acoustic signal to be detected obtained after the preprocessing of step 2 into multiple independent gesture acoustic signals by gesture detection processing comprises:
detecting the start and end points of each independent finger gesture from the acoustic signal to be detected, extracting a valid segment between the start and end points of each independent finger gesture, and confirming the multiple independent gesture acoustic signals from the valid segments.
In the above method, the start and end points of each independent finger gesture are detected from the acoustic signal to be detected, and a valid segment is extracted between the start and end points of each independent finger gesture, as follows:
The acoustic signal to be detected y[n] is divided into segments by a sliding window, and the short-term average energy of each segment is computed as E[n] = (1/W) Σ_{i=0}^{W−1} y²[nS + i], where W is the window size, set to W = 882, and the step size S is set to 750;
the start point n* of a finger gesture is confirmed by judging whether the first difference D[n] = E[n] − E[n−1] of the average energy exceeds an empirical threshold γ, i.e., n* = min{n : D[n] > γ};
two guard intervals, Δ_pre and Δ_post, are set on the two sides of the estimated gesture input sound. D[n] is computed to obtain the candidates for the gesture start point; the candidate start-point set is {n1, n2, ..., nm}. Δ_pre is subtracted from these points to give the start points, and Δ_post is added to give the end points, i.e., the candidate set becomes {n1 − Δ_pre, ..., nm + Δ_post}.
In step 4 of the above method, the multiple independent gesture acoustic signals are converted into grayscale images to obtain the feature values of the finger motions as follows:
the time-frequency spectrogram is computed by applying the short-time Fourier transform to each independent gesture acoustic signal, and the image of the mel-frequency cepstral coefficients is computed by applying mel cepstrum analysis to each independent gesture sound signal; the spectrogram obtained by the short-time Fourier transform and the computed image of the mel cepstral coefficients are combined and then converted into a grayscale image, from which the feature values of the gesture acoustic signal are obtained;
wherein the short-time Fourier transform (STFT) is:
Y[m, ω] = Σ_n y[n] w[n − m] e^{−jωn};
where w[t] is the window function and Y[m, ω] is the Fourier transform of y[n]w[n − m]; a Hamming window of size 512 is used, with an FFT of length 512 and an overlap length of 256.
wherein the mel cepstral coefficients are computed from the mel scale:
mel(f) = 2595 log₁₀(1 + f/700);
where f is the frequency in Hz; the lower frequency bound is set to 100 Hz.
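As an illustration of how a mel-cepstral feature of this kind can be computed, the following is a minimal NumPy/SciPy sketch. The 26-filter bank is an assumption for illustration; the text only fixes the 12 output coefficients, the mel mapping, and the 100 Hz lower bound.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)   # mel(f) from the text

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs, f_lo, f_hi):
    # Triangular filters spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(f_lo), hz_to_mel(f_hi), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc_frame(frame, fs, n_filters=26, n_coeffs=12, f_lo=100.0):
    # Power spectrum of one windowed frame -> log mel energies -> DCT.
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    fb = mel_filterbank(n_filters, n_fft, fs, f_lo, fs / 2)
    logmel = np.log(fb @ spec + 1e-10)
    return dct(logmel, type=2, norm='ortho')[:n_coeffs]
```

Running `mfcc_frame` over successive 512-sample windows yields the 12-coefficient-per-frame image that, per the text, is rendered alongside the spectrogram.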
In step 5 of the above method, the convolutional neural network model used is as follows:
the structure of LeNet is used as the main structure, with the convolutional layers of AlexNet;
it comprises four convolutional layers and four pooling layers, followed by two fully connected layers and one output layer; wherein
the convolution kernel sizes are two of 11 × 11, one of 5 × 5 and one of 3 × 3, and the pooling size is 3 × 3 with a stride of 2.
In the convolutional neural network model of step 5 of the above method,
an L2 regularization term λΣ_w w² is added to the error function of the convolutional neural network model, applied only in the fully connected layers;
each layer of the convolutional neural network model uses dropout, whose probability is set to a fixed value p = 0.8 during training.
By extending the back of the hand into an input surface and using the friction sound of finger gestures sliding on the skin of the back of the hand to recognize the finger gestures input there, the method of the present invention solves the problems that the screens of existing smart wearable devices are small, that touch screens consume power, and that input is inconvenient.
The embodiments of the present invention are described in further detail below.
The skin-type finger gesture recognition method based on a smartwatch of the embodiment of the present invention builds, on the basis of a commercial smartwatch, a virtual handwriting keyboard on the back of the hand and realizes skin-type handwriting input by recognizing finger gestures. The recognition method collects the acoustic signal of a finger sliding and rubbing on the back of the hand with the microphone built into the smartwatch; the features of the collected acoustic signal are then extracted and used as the input of machine learning, thereby recognizing each motion. Unlike existing recognition methods, this method extends the input modality of a smart wearable device without requiring anything to be worn on the finger or any extra equipment on the hand.
The proposed skin-type finger gesture recognition method based on a smartwatch uses the microphone embedded in a smart wearable device (a smartwatch) to extract the friction sound between the finger and the back of the hand, thereby recognizing finger gestures in real time. The method rests on the fact that the microphone of a commercial smart wearable device can capture the faint friction sound of skin, so using the back of the hand as an extended gesture input surface is feasible. Considering the need to extend the input of wearable devices with small screens and few buttons, the present invention uses multi-finger gestures on the back of the hand to perform more useful operations. As shown in Fig. 2, four basic finger gestures can be defined, including sliding left, sliding right, pinching, and stretching, which are common gestures when a user interacts with a smart wearable device. To realize more useful action controls, these four types of gestures are extended to 12 pairs of finger gestures for two or more fingers. Multi-finger gestures give smart wearable developers the flexibility to select, from the specific multi-finger gesture set, the gestures best suited to their applications. These gestures involve only finger and hand movements, not body movements. The unique motion of each finger gesture introduces differences in the acoustic signal; these differences can be used to identify the finger gestures, thereby extending the skin of the back of the hand into an input surface for smart wearables.
The detailed flow of the recognition method of the present invention is as follows:
A. first, prepare a smartwatch or smart wristband running an application of the recognition method of the present invention, and wear it on the hand;
B. using the back of the hand that wears the watch as the extended input surface, perform the finger gesture motions shown in Fig. 2 on it;
C. tap the start button in the smartwatch or wristband interface to turn on the recording function, then perform finger gestures on the back of the hand; the smartwatch or wristband will give the corresponding response, that is, carry out the corresponding operation.
The recognition method of the present invention is applied in an acoustic sensing system that can recognize finger gestures and realizes human-computer interaction using the microphone of a commercial smart device. Fig. 3 depicts the system architecture, which mainly contains five processing modules, namely: signal acquisition, data preprocessing, gesture motion detection, feature extraction, and gesture motion recognition; wherein
in the first module, signal acquisition, the smartwatch collects the passive acoustic signal emitted by the finger gesture; then, because of its limited computing power, it sends the recorded signal over Bluetooth to a smartphone or another computing device (such as a tablet or computer) for further processing;
in the second module, data preprocessing, the interference of environmental noise is minimized by applying a bandpass filter to the original audio signal;
a gesture detection method is then used in the gesture motion detection module to extract, from the preprocessed sound signal, the parts in which a gesture motion is present;
in the feature extraction module, the acoustic signal is converted into a spectrogram and a mel cepstral coefficient image, as the features of each independent finger motion;
in the last module, gesture motion recognition, finger gestures are recognized using a convolutional neural network (CNN), where the spectrogram and the mel cepstral coefficients are converted into a visual image as the input of the CNN. Finally, based on the output of the CNN, the smartwatch or smartphone calls the corresponding function of each application to interact with the user.
When the recognition method is running, the smartwatch continuously monitors the gesture input on the back of the hand and transmits the recorded sound to the smartphone. The sampling rate of the microphone embedded in the smartwatch is 44100 Hz, sufficient to capture the surrounding sound. Fig. 5 shows a single finger gesture of sliding to the right.
The original sound captured by a commercial microphone inherently carries some noise, and the environment usually has a varying noise level; therefore, wavelet time-frequency analysis is used to determine the frequency range of the sound generated by the friction between the finger and the skin of the back of the hand. The discrete wavelet transform (DWT) of the received acoustic signal x[n] of time-varying length n is computed through a series of filters, i.e.,
x_{α,L}[n] = Σ_k x[k] g[2n − k], x_{α,H}[n] = Σ_k x[k] h[2n − k];
where x_{α,L}[n] and x_{α,H}[n] are the outputs of the low-pass filter g and the high-pass filter h, respectively. As shown in Fig. 4 (1) and Fig. 4 (2), there is a bright vertical line near 0.4 s, indicating that a finger gesture has occurred. The bright line occupies frequencies from 500 Hz to 20000 Hz, a range that ordinary noise cannot reach.
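The low-pass/high-pass filter pair above can be illustrated with the simplest wavelet, the Haar pair. This is a sketch of one decomposition level under that assumption; the text does not name the specific wavelet used.

```python
import numpy as np

def haar_dwt(x):
    """One level of the discrete wavelet transform with the Haar
    filter pair: the lowpass output is the approximation x_L[n],
    the highpass output is the detail x_H[n], both at half rate."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, 0.0)
    even, odd = x[0::2], x[1::2]
    lo = (even + odd) / np.sqrt(2.0)    # smoothed approximation
    hi = (even - odd) / np.sqrt(2.0)    # detail (rapid changes)
    return lo, hi
```

Because the Haar pair is orthonormal, the energy of the two half-rate outputs equals the energy of the input, which is what makes the bright gesture band in Fig. 4 stand out against the smooth background.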
To optimize the audio signal for finger gesture recognition, the proposed system first passes it through a bandpass filter to remove low-frequency and high-frequency noise. An FIR filter is a natural choice: it is inherently stable and can be designed to have a linear phase response. The output signal of the FIR filter is:
y[n] = Σ_{k=1}^{N+1} b_k · x[n+1−k];
b_k = b_{N+2−k}, k = 1, 2, ..., N+1;
where N is the order of the FIR filter and b_i is the i-th filter coefficient; the symmetry of the coefficients makes the filter linear-phase. The filter thus eliminates the out-of-band interference of the sound signal. As mentioned above, the frequency of the sound variations caused by skin friction usually lies within 5000-15000 Hz. The parameters of the filter are set so that the two passband cutoff frequencies are 6000 and 14000 Hz and the two stopband cutoff frequencies are 5000 and 15000 Hz, where the sampling rate of the original sound signal is Fs = 44100 Hz. N = 112 is set empirically to obtain the required denoising result. As shown in Fig. 4 (2), the FIR filter almost completely eliminates the out-of-band noise; Fig. 4 (1) shows the original sound signal.
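A filter of this kind can be sketched with SciPy's window-based FIR designer. Placing the -6 dB cutoffs at the midpoints of the 5000/6000 Hz and 14000/15000 Hz stopband/passband pairs is an assumption for illustration; the patent gives the band edges but not the design procedure.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 44100  # sampling rate used in the text

# Order-112 linear-phase FIR bandpass (113 symmetric taps), with
# cutoffs at the midpoints of the stated transition bands.
taps = firwin(113, [5500.0, 14500.0], pass_zero=False, fs=FS)

def bandpass(x):
    """Remove out-of-band noise from the recorded signal."""
    return lfilter(taps, 1.0, x)
```

The designed taps are symmetric (taps[k] == taps[112 - k]), which is exactly the b_k = b_{N+2-k} condition that guarantees linear phase: a 10 kHz in-band tone passes nearly unchanged while a 1 kHz tone is strongly attenuated.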
The method by which the proposed skin-type input system extracts the audio signal of each finger gesture is based on signal processing in the time domain. The friction sound between the finger and the back of the hand mainly affects the received sound signal through rising or falling edges. These variations are vital for detecting finger gestures, and the uniqueness of the different variation patterns is used to classify finger motions. To detect the start of a gesture, valid segments are extracted from the processed acoustic signal.
Inspired by constant false alarm rate (CFAR) detection, a method for detecting the start and end points of an input gesture is proposed. The proposed system divides the sound signal y[n] into segments with a sliding window and computes the short-term average energy of each segment, E[n] = (1/W) Σ_{i=0}^{W−1} y²[nS + i], where W is the window size. The window size is set to 882 and the step size S to 750, i.e., each segment contains 0.02 s of audio at a sampling rate of 44100 Hz. Since the signal has been processed by the bandpass filter, the short-term average energy differs only slightly between neighboring segments when no gesture is present. However, when a gesture input occurs, the average energy E[n] bursts suddenly and its first difference D[n] = E[n] − E[n−1] becomes much larger. The appearance of a gesture input makes the difference between E[n] and E[n−1] large enough to exceed an empirical threshold γ, which indicates the start point n* of the finger gesture, i.e., n* = min{n : D[n] > γ}.
During gesture detection, the system sets two guard intervals, Δ_pre and Δ_post, on the two sides of the estimated gesture input sound. The system computes D[n] to obtain the candidates for the gesture input start point; for example, the candidate start-point set is {n1, n2, ..., nm}. Δ_pre is subtracted from each of these points to give the start points and Δ_post is added to give the end points, i.e., the candidate set becomes {n1 − Δ_pre, ..., nm + Δ_post}. The purpose of this operation is to better extract the complete gesture input signal.
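The energy-difference detector can be sketched as follows. γ is the empirical threshold from the text; the data-driven fallback default here is my own guess, and the guard intervals are omitted for brevity.

```python
import numpy as np

def detect_gesture_start(y, w=882, step=750, gamma=None):
    """Short-term average energy over sliding windows; the gesture
    start is the first window whose energy jump (first difference)
    exceeds the threshold gamma.  Returns a sample index or None."""
    n_win = (len(y) - w) // step + 1
    E = np.array([np.mean(y[i * step : i * step + w] ** 2)
                  for i in range(n_win)])
    D = np.diff(E)                     # first difference D[n] = E[n] - E[n-1]
    if gamma is None:                  # crude data-driven fallback
        gamma = 5.0 * np.median(np.abs(D)) + 1e-12
    hits = np.where(D > gamma)[0]
    return None if len(hits) == 0 else (hits[0] + 1) * step
```

On a quiet recording with a short tone burst injected, the detector locates the burst onset to within a window or two of its true position.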
After a gesture input is detected, the system obtains the valid sound signal of each gesture. The first row of Fig. 6 shows the extracted sound signals of the gestures slide left, slide right, pinch, and stretch. Since using the time-domain signal alone is not sufficient, the system mainly extracts features based on time-frequency analysis.
However, the classical Fourier transform does not provide resolution in both time and frequency. In contrast, the short-time Fourier transform (STFT) improves on this limitation by dividing a long signal into shorter segments. The STFT is defined as:
Y[m, ω] = Σ_n y[n] w[n − m] e^{−jωn};
where w[t] is the window function and Y[m, ω] is essentially the Fourier transform of y[n]w[n − m]. The effectiveness of the STFT depends on choosing appropriate parameters. In this system, a Hamming window of size 512 is used, with an FFT of length 512 and an overlap length of 256. The second row of Fig. 6 plots the sound spectrograms of the gestures slide left, slide right, pinch, and stretch. Then 12 MFCC features are extracted; the mel cepstrum images are shown in the third row of Fig. 6. In the gesture extraction process, the system computes and combines the STFT and MFCC coefficients, then converts them into a grayscale image.
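With those parameters fixed, the spectrogram computation is a one-liner over SciPy's STFT. The dB conversion for the grayscale rendering is an assumption; the text only says the spectrogram is converted to a grayscale image.

```python
import numpy as np
from scipy.signal import stft

FS = 44100  # sampling rate from the text

def gesture_spectrogram(y):
    """STFT with the stated parameters: 512-sample Hamming window,
    512-point FFT, 256-sample overlap.  Returns frequencies, times,
    and the magnitude in dB (the quantity rendered as an image)."""
    f, t, Z = stft(y, fs=FS, window='hamming', nperseg=512,
                   noverlap=256, nfft=512)
    return f, t, 20.0 * np.log10(np.abs(Z) + 1e-10)
```

A pure 8 kHz tone, for instance, produces a spectrogram whose peak sits in the frequency bin nearest 8000 Hz (bin spacing is 44100/512 ≈ 86 Hz).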
The proposed skin-type input system recognizes different gesture inputs using a CNN model: the STFT and MFCC coefficients are computed and the results are converted into a grayscale image as the input of the CNN model. Obtaining a suitable input image is critical to guaranteeing the classification performance of the CNN, and the present invention designs the above method to obtain the input image.
In the recognition method of the present invention, a CNN structure suitable for running on mobile devices is designed with reference to two popular CNN structures (LeNet-5 and AlexNet) as the CNN model of the invention. The model combines the advantages of the LeNet-5 and AlexNet structures. Specifically, the structure of LeNet is chosen as the main structure, and the convolutional layers of AlexNet are used. Four convolutional layers and four pooling layers are used, followed by two fully connected layers and one output layer. The convolution kernel sizes are two of 11 × 11, one of 5 × 5 and one of 3 × 3, and the pooling size is 3 × 3 with a stride of 2. Meanwhile, the regularization and dropout mechanisms commonly used to handle overfitting are introduced into the recognition method. L2 regularization is realized by adding a term λΣ_w w² to the error function of the neural network, and is used only in the fully connected layers. Dropout is employed in every layer, with its probability set to a fixed value p = 0.8 during training.
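As a quick sanity check of that layer stack, the sketch below traces the spatial size of a hypothetical 128 × 128 grayscale input (the patent does not state the input resolution) through the four conv + pool stages, assuming stride-1 unpadded convolutions and the stated 3 × 3 / stride-2 pooling:

```python
def conv_out(n, k, s=1, p=0):
    # Output spatial size of a conv/pool layer: floor((n + 2p - k)/s) + 1
    return (n + 2 * p - k) // s + 1

def trace_shapes(n=128):
    """Trace an n x n input through kernels 11, 11, 5, 3 (stride 1,
    no padding), each followed by a 3x3 / stride-2 pool; returns the
    spatial size after each conv+pool stage."""
    sizes = [n]
    for k in (11, 11, 5, 3):
        n = conv_out(n, k)           # convolution
        n = conv_out(n, 3, s=2)      # max pooling
        sizes.append(n)
    return sizes
```

Under these assumptions a 128 × 128 image shrinks to 58, 23, 9, and finally 3 × 3 feature maps, a plausible size to flatten into the two fully connected layers.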
Those of ordinary skill in the art will appreciate that all or part of the process of the above embodiment methods can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), etc. That is, the method of the present invention can run on smartwatches, wristbands, and mobile phones as an application.
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that can easily be conceived by anyone skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A skin-type finger gesture recognition method based on a smartwatch, characterized by comprising:
step 1, signal acquisition: while a finger rubs a gesture on the skin of the back of the hand, collecting, through the microphone of the smartwatch, the passive acoustic signal generated by the friction between the finger and the skin of the back of the hand;
step 2, data preprocessing: removing, by filtering, the noise signal in the passive acoustic signal acquired in step 1 to obtain an acoustic signal to be detected;
step 3, gesture motion detection: segmenting, by gesture detection processing, the acoustic signal to be detected obtained after the preprocessing of step 2 into multiple independent gesture acoustic signals;
step 4, feature extraction: converting the multiple independent gesture acoustic signals into grayscale images of the time-frequency spectrogram and of the mel-frequency cepstral coefficients of each acoustic signal, as the feature values of each independent finger motion;
step 5, gesture motion recognition: taking the feature values of each independent finger motion extracted in step 4 as input data, and performing finger gesture recognition on the input data with a convolutional neural network model to obtain the corresponding finger gesture.
2. The skin-type finger gesture recognition method based on a smartwatch according to claim 1, characterized in that, in step 2 of the method, the filtering process comprises:
removing the low-frequency and high-frequency noise in the collected passive acoustic signal by FIR filtering, to obtain an acoustic signal to be detected that contains only the acoustic signal of finger gesture motions.
3. The skin-type finger gesture recognition method based on a smartwatch according to claim 2, wherein in the method, the low-frequency and high-frequency noise in the collected passive acoustic signal is removed by FIR filtering as follows:
the filtered output of the FIR filters is taken as the acoustic signal to be detected y[n] with out-of-band noise removed, where
y[n] = Σ_{k=1}^{N+1} b_k · x[n − k + 1], with b_k = b_{N+2−k}, k = 1, 2, ..., N + 1;
where N is the order of the FIR filter, set to N = 112; b_k is the k-th filter coefficient, determined by the frequency response of the filter; and x[n] is the input signal. The two FIR filters are configured with passband cutoff frequencies of 6000 Hz and 14000 Hz and stopband cutoff frequencies of 5000 Hz and 15000 Hz, and the sampling rate of the original signal is Fs = 44100 Hz.
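As a concrete illustration, the band-pass filtering of claim 3 can be sketched in Python with SciPy. The 112th-order linear-phase FIR design below uses the claimed sampling rate and band edges; the `firwin` cutoffs placed inside the claimed transition bands and the test tones are illustrative assumptions, not the patent's actual coefficients.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 44100   # sampling rate of the original signal (claim 3)
ORDER = 112  # FIR filter order N = 112, i.e. 113 symmetric taps

# Claim 3 gives passband 6000-14000 Hz and stopband edges 5000/15000 Hz.
# firwin takes a single cutoff per edge; splitting the transition band at
# 5500/14500 Hz is an assumption.
taps = firwin(ORDER + 1, [5500, 14500], pass_zero=False, fs=FS)

def bandpass(x):
    """Remove low- and high-frequency noise, keeping the friction band."""
    return lfilter(taps, 1.0, x)

# Example: a 3 kHz tone (out of band) is attenuated, a 10 kHz tone passes.
t = np.arange(FS) / FS
low = bandpass(np.sin(2 * np.pi * 3000 * t))
mid = bandpass(np.sin(2 * np.pi * 10000 * t))
print(np.abs(low[2000:]).max(), np.abs(mid[2000:]).max())
```

The symmetry `taps == taps[::-1]` is exactly the linear-phase condition b_k = b_{N+2−k} stated in the claim.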
4. The skin-type finger gesture recognition method based on a smartwatch according to any one of claims 1 to 3, wherein in Step 3 of the method, segmenting, by gesture detection processing, the acoustic signal to be detected obtained after the preprocessing of Step 2 into a plurality of independent gesture acoustic signals comprises:
detecting the start point and end point of each independent finger gesture from the acoustic signal to be detected, extracting the effective segment delimited by the start point and end point of each independent finger gesture, and confirming the plurality of independent gesture acoustic signals from the effective segments.
5. The skin-type finger gesture recognition method based on a smartwatch according to claim 4, wherein in the method, the start point and end point of each independent finger gesture are detected from the acoustic signal to be detected, and the effective segment delimited by the start point and end point of each independent finger gesture is extracted, as follows:
dividing the acoustic signal to be detected y[n] into a plurality of data segments with a sliding window, and computing the short-term average energy of each segment, E[n] = (1/W) Σ_{i=n}^{n+W−1} y[i]², where W is the window size, set to 882, and the step size is set to 750;
confirming the start point n of a finger gesture by judging whether the first-order difference of the average energy, D[n] = E[n] − E[n − 1], exceeds an empirical threshold γ, i.e., D[n] > γ;
setting two guard intervals W⁻ and W⁺ on the two sides of the estimated gesture input sound, and computing D[n] to obtain candidates for the gesture start point, the candidate start-point set being {n₁, n₂, ..., n_m}; subtracting W⁻ from these points to obtain the start points and adding W⁺ to obtain the end points, i.e., the candidate set becomes {(n₁ − W⁻, n₁ + W⁺), ..., (n_m − W⁻, n_m + W⁺)}.
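The energy-based segmentation of claim 5 can be sketched as follows. The window size (882) and step (750) come from the claim; the threshold `gamma` and the guard intervals `guard_pre`/`guard_post` (standing in for W⁻/W⁺) are illustrative values the claim leaves unspecified.

```python
import numpy as np

def detect_gestures(y, w=882, step=750, gamma=0.01, guard_pre=441, guard_post=441):
    """Energy-based gesture segmentation sketch (claim 5)."""
    # Short-term average energy E[n] = (1/W) * sum of y[i]^2 over each window.
    starts = np.arange(0, len(y) - w + 1, step)
    energy = np.array([np.mean(y[s:s + w] ** 2) for s in starts])

    # A gesture onset is a first-order energy difference exceeding gamma.
    diff = np.diff(energy, prepend=energy[0])
    onsets = starts[diff > gamma]

    # Pad each candidate with the guard intervals to get (start, end) segments.
    return [(max(0, n - guard_pre), n + guard_post) for n in onsets]

# Example: silence, then a burst of "friction" noise starting at sample 5000.
rng = np.random.default_rng(0)
sig = np.concatenate([np.zeros(5000), 0.5 * rng.standard_normal(5000)])
segments = detect_gestures(sig)
print(segments[0])
```

A real segmenter would also merge adjacent onsets belonging to one gesture; this sketch only shows the thresholded first-difference test.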
6. The skin-type finger gesture recognition method based on a smartwatch according to any one of claims 1 to 3, wherein in Step 4 of the method, the plurality of independent gesture acoustic signals are converted into grayscale images to obtain the feature values of the finger motion as follows:
computing the time-frequency spectrogram of each independent gesture acoustic signal by short-time Fourier transform, and computing the Mel-frequency cepstral coefficients of each independent gesture acoustic signal by Mel cepstral analysis to obtain an image of the coefficients; combining the time-frequency spectrogram obtained by the short-time Fourier transform with the image of the computed Mel-frequency cepstral coefficients, converting the combination into a grayscale image, and obtaining the feature value of the gesture acoustic signal from the grayscale image;
wherein the short-time Fourier transform STFT is: Y[m, ω] = Σ_{n=−∞}^{∞} y[n] w[n − m] e^{−jωn},
where w[t] is the window function and Y[m, ω] is the Fourier transform of y[n]w[n − m]; a Hamming window of size 512 is used, with an FFT length of 512 and an overlap length of 256.
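The STFT half of claim 6's feature can be sketched with SciPy using the claimed parameters (512-point Hamming window, 512-point FFT, 256-sample overlap). The log-magnitude scaling and 8-bit normalization are assumptions about the grayscale mapping, which the claim does not specify; the MFCC image would be computed analogously and stacked alongside.

```python
import numpy as np
from scipy.signal import stft

FS = 44100  # sampling rate from claim 3

def spectrogram_image(y):
    """Time-frequency grayscale image sketch for claim 6."""
    _, _, Y = stft(y, fs=FS, window="hamming", nperseg=512,
                   noverlap=256, nfft=512)
    mag = np.log1p(np.abs(Y))                       # log-magnitude spectrogram
    img = (255 * mag / mag.max()).astype(np.uint8)  # normalize to 8-bit gray
    return img

# Example: half a second of an 8 kHz tone.
t = np.arange(FS // 2) / FS
img = spectrogram_image(np.sin(2 * np.pi * 8000 * t))
print(img.shape)  # 257 frequency bins (nfft // 2 + 1) by time frames
```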
7. The skin-type finger gesture recognition method based on a smartwatch according to any one of claims 1 to 3, wherein in Step 5 of the method, the convolutional neural network model used is:
a model taking the structure of LeNet as the main structure, with convolutional layers as in AlexNet;
comprising four convolutional layers and four pooling layers, followed by two fully connected layers and one output layer; wherein
the convolution kernel sizes are two of 11 × 11, one of 5 × 5 and one of 3 × 3, the pooling size is 3 × 3, and the stride is 2.
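The dimensional bookkeeping of the claimed four-conv/four-pool stack can be checked with plain integer arithmetic. The kernel sizes (two 11 × 11, one 5 × 5, one 3 × 3) and the 3 × 3 / stride-2 pooling come from claim 7; the 227 × 227 input size and the convolution strides and paddings below are illustrative assumptions, since the claim does not state them.

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard conv output-size formula: floor((W + 2P - K) / S) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=3, stride=2):
    # Claim 7: 3x3 pooling with stride 2
    return (size - kernel) // stride + 1

size = 227  # assumed square grayscale input, AlexNet-style
sizes = []
for kernel, stride in [(11, 4), (11, 1), (5, 1), (3, 1)]:
    size = conv_out(size, kernel, stride, pad=kernel // 2)  # 'same'-ish pad
    size = pool_out(size)
    sizes.append(size)
print(sizes)  # feature-map side length after each conv+pool stage
```

The final feature map is then flattened into the two fully connected layers and the output layer.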
8. The skin-type finger gesture recognition method based on a smartwatch according to claim 7, wherein in the convolutional neural network model of Step 5 of the method,
a regularization term is added to the error function of the convolutional neural network model and is applied in the fully connected layers;
each layer of the convolutional neural network model uses a dropout mechanism, with the probability set to a fixed value p = 0.8 during training.
CN201910248707.7A 2019-03-29 2019-03-29 Skin type finger gesture recognition method based on smart watch Active CN110069199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910248707.7A CN110069199B (en) 2019-03-29 2019-03-29 Skin type finger gesture recognition method based on smart watch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910248707.7A CN110069199B (en) 2019-03-29 2019-03-29 Skin type finger gesture recognition method based on smart watch

Publications (2)

Publication Number Publication Date
CN110069199A true CN110069199A (en) 2019-07-30
CN110069199B CN110069199B (en) 2022-01-11

Family

ID=67366749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910248707.7A Active CN110069199B (en) 2019-03-29 2019-03-29 Skin type finger gesture recognition method based on smart watch

Country Status (1)

Country Link
CN (1) CN110069199B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885744A (en) * 2013-05-30 2014-06-25 美声克(成都)科技有限公司 Sound based gesture recognition method
US20140296935A1 (en) * 2013-03-29 2014-10-02 Neurometrix, Inc. Transcutaneous electrical nerve stimulator with user gesture detector and electrode-skin contact detector, with transient motion detector for increasing the accuracy of the same
CN106095203A (en) * 2016-07-21 2016-11-09 范小刚 Sensing touches the calculating Apparatus and method for that sound inputs as user's gesture
CN106919958A (en) * 2017-03-21 2017-07-04 电子科技大学 A kind of human finger action identification method based on intelligent watch

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110784788A (en) * 2019-09-18 2020-02-11 广东思派康电子科技有限公司 Gesture recognition method based on microphone
CN110751105B (en) * 2019-10-22 2022-04-08 珠海格力电器股份有限公司 Finger image acquisition method and device and storage medium
CN110751105A (en) * 2019-10-22 2020-02-04 珠海格力电器股份有限公司 Finger image acquisition method and device and storage medium
CN111158487A (en) * 2019-12-31 2020-05-15 清华大学 Man-machine interaction method for interacting with intelligent terminal by using wireless earphone
CN111929689A (en) * 2020-07-22 2020-11-13 杭州电子科技大学 Object imaging method based on sensor of mobile phone
CN111929689B (en) * 2020-07-22 2023-04-07 杭州电子科技大学 Object imaging method based on sensor of mobile phone
CN112364779A (en) * 2020-11-12 2021-02-12 中国电子科技集团公司第五十四研究所 Underwater sound target identification method based on signal processing and deep-shallow network multi-model fusion
CN112966662A (en) * 2021-03-31 2021-06-15 安徽大学 Short-range capacitive dynamic gesture recognition system and method
CN113126764A (en) * 2021-04-22 2021-07-16 中国水利水电科学研究院 Personal water volume detection method based on smart watch
CN113197569A (en) * 2021-04-23 2021-08-03 华中科技大学 Human body intention recognition sensor based on friction power generation and recognition method thereof
CN113849068A (en) * 2021-09-28 2021-12-28 中国科学技术大学 Gesture multi-mode information fusion understanding and interacting method and system
CN113849068B (en) * 2021-09-28 2024-03-29 中国科学技术大学 Understanding and interaction method and system for multi-modal information fusion of gestures
US20240037529A1 (en) * 2022-07-27 2024-02-01 Bank Of America Corporation System and methods for detecting and implementing resource allocation in an electronic network based on non-contact instructions
US11983691B2 (en) * 2022-07-27 2024-05-14 Bank Of America Corporation System and methods for detecting and implementing resource allocation in an electronic network based on non-contact instructions

Also Published As

Publication number Publication date
CN110069199B (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN110069199A (en) A kind of skin-type finger gesture recognition methods based on smartwatch
Mouawad et al. Robust detection of COVID-19 in cough sounds: using recurrence dynamics and variable Markov model
Bhat et al. A real-time convolutional neural network based speech enhancement for hearing impaired listeners using smartphone
CN101599127B (en) Method for extracting and identifying characteristics of electro-ocular signal
Zhao et al. Towards low-cost sign language gesture recognition leveraging wearables
CN106919958B (en) Human body finger action recognition method based on smart watch
WO2017152531A1 (en) Ultrasonic wave-based air gesture recognition method and system
CN113349752B (en) Wearable device real-time heart rate monitoring method based on sensing fusion
CN107928673A (en) Acoustic signal processing method, device, storage medium and computer equipment
CN103294199B (en) A kind of unvoiced information identifying system based on face's muscle signals
CN103413113A (en) Intelligent emotional interaction method for service robot
JP2003255993A (en) System, method, and program for speech recognition, and system, method, and program for speech synthesis
CN107300971A (en) The intelligent input method and system propagated based on osteoacusis vibration signal
Kim et al. Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors
CN111643098A (en) Gait recognition and emotion perception method and system based on intelligent acoustic equipment
WO2011092549A1 (en) Method and apparatus for assigning a feature class value
EP4098182A1 (en) Machine-learning based gesture recognition with framework for adding user-customized gestures
WO2017036147A1 (en) Bioelectricity-based control method, device and controller
CN114707562A (en) Electromyographic signal sampling frequency control method and device and storage medium
Zakaria et al. VGG16, ResNet-50, and GoogLeNet deep learning architecture for breathing sound classification: a comparative study
Lu et al. Detection of smoking events from confounding activities of daily living
CN107495939A (en) Live biometric monitoring method, device and system
Casaseca-de-la-Higuera et al. Effect of downsampling and compressive sensing on audio-based continuous cough monitoring
CN106323330A (en) Non-contact-type step count method based on WiFi motion recognition system
CN116027911B (en) Non-contact handwriting input recognition method based on audio signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant