CN115145399A - Deep learning assisted non-contact gesture recognition device and method - Google Patents


Info

Publication number
CN115145399A
CN115145399A (application CN202210877947.5A)
Authority
CN
China
Prior art keywords
deep learning
liquid metal
contact
gesture
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210877947.5A
Other languages
Chinese (zh)
Inventor
毛彦超
周豪
肖卓
李旺展
胡锦辉
朱鹏程
潘志峰
冯天星
吴竞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN202210877947.5A priority Critical patent/CN115145399A/en
Publication of CN115145399A publication Critical patent/CN115145399A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a deep learning assisted non-contact gesture recognition device and method. The device comprises a sensor array, a signal acquisition module, a data processing module, a deep learning module, and a communication module. The sensor array consists of a liquid metal conductive layer, a polymer elastic fiber layer, and a viscous hydrogel layer. The array senses the changes in the spatial electric field caused by different non-contact gestures; these field changes polarize the liquid metal conductive layer, producing distinct electrical signals. The signal acquisition module collects the signals generated by the non-contact gestures, and the data processing module filters, samples, and shapes them. The deep learning module predicts the gesture type in real time from the processed signals and then sends the corresponding action instruction to peripheral equipment through the communication module. With this device and method, the invention achieves non-contact recognition of complex gestures and offers simple operation, a good interactive experience, low sensitivity to the environment, and good hygiene.

Description

Deep learning assisted non-contact gesture recognition device and method
Technical Field
The invention relates to the technical field of human-machine interaction, and in particular to a deep learning assisted non-contact gesture recognition device and method.
Background
As a technique for exchanging information between humans and machines, human-computer interaction plays a very important role in daily life. With the spread of epidemic infectious diseases such as COVID-19, non-contact gesture recognition and control have attracted wide attention. Non-contact gesture recognition is hygienic, convenient, and pleasant to use; it reduces the risk of cross-infection, increases the freedom of human-machine interaction, and has unique application advantages in fields such as smart living, VR, and medical care.
Non-contact gesture recognition reported to date is mainly realized with sensors based on humidity, infrared, capacitance, magnetic induction, or electromagnetic effects, and with vision-based sensors. Humidity, infrared, and capacitive sensors can recognize only a limited set of simple non-contact gestures, such as approaching, departing, and sliding. Magnetic-induction sensors often require the user to wear complicated magnet accessories, which degrades the interaction experience. Sensing based on electromagnetic effects requires complex sensor structures and systems. Vision-based sensors do not work reliably in low-light environments. A non-contact gesture recognition device is therefore needed that is assisted by machine learning, has a simple structure, places few demands on the environment, and can accurately recognize a variety of complex three-dimensional gestures.
Disclosure of Invention
The invention aims to provide a deep learning assisted non-contact gesture recognition device and method that sense gesture actions through human-body electrostatic induction and predict them with a neural network, accurately recognizing a variety of complex non-contact gestures without relying on a complex structure and reducing environmental interference in the recognition process.
In order to achieve the above object, the present invention provides the following technical solution:
a deep learning assisted non-contact gesture recognition device, comprising:
a sensor array, in which liquid metal, based on the electrostatic induction principle, senses the changes in the spatial electric field caused by gestures;
a signal acquisition module for acquiring the Maxwell displacement current signals generated by the sensor array in response to gestures;
a data processing module for filtering, sampling, and shaping the Maxwell displacement current signals;
a deep learning module, consisting of a trained multilayer perceptron neural network, which judges the gesture type in real time from the processed Maxwell displacement current signals and determines the action instruction; and
a communication module for issuing the action instruction to peripheral equipment according to the TCP/IP protocol.
Preferably, the sensor array comprises, from top to bottom, a liquid metal conductive layer, a polymer elastic fiber layer, and a viscous hydrogel layer, the liquid metal conductive layer being composed of liquid metal conductive blocks arranged in a 3 × 3 array.
Corresponding to the above device, the present invention further provides a deep learning assisted non-contact gesture recognition method, comprising:
s1, inducing the change of a space electric field caused by gesture actions by liquid metal through polarization;
s2, acquiring Maxwell displacement current signals generated by polarization of the liquid metal;
s3, filtering, sampling and shaping the Maxwell displacement current signal;
s4, inputting the processed Maxwell displacement current signal into a multilayer perceptron neural network to predict the gesture type in real time and determine an action instruction corresponding to the gesture type;
and S5, issuing the action command to the peripheral equipment according to the TCP/IP protocol.
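Step S3 above can be sketched in code for a single channel. In this sketch the sampling rate, filter order, cutoff frequency, and window length are illustrative assumptions; the patent does not specify these values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(raw, fs=1000.0, cutoff=20.0, n_samples=100):
    """Filter, sample, and shape one channel of displacement-current data (step S3)."""
    # Low-pass filter to suppress mains interference and high-frequency noise.
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    filtered = filtfilt(b, a, raw)
    # Down-sample to a fixed length so every gesture window has the same shape.
    idx = np.linspace(0, len(filtered) - 1, n_samples).astype(int)
    sampled = filtered[idx]
    # Normalize ("shaping") so amplitude differences between users cancel out.
    return (sampled - sampled.mean()) / (sampled.std() + 1e-8)

window = np.random.randn(1000)   # stand-in for one channel's raw signal
features = preprocess(window)
```

For the full device, the same preprocessing would run on each of the nine channels before the results are concatenated into the neural network's input vector (step S4).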
In summary, the non-contact gesture recognition device and method of the present invention have the following advantages:
1. with the aid of a deep learning algorithm, complex three-dimensional gestures, including non-contact gestures such as making a fist, opening the hand, rotating the palm, and turning the palm over, can be recognized accurately;
2. gesture recognition relies on the charge naturally carried by the human body, without any external trigger for the sensor;
3. no complex sensing equipment (such as a smart glove) needs to be worn, so the interaction experience is good;
4. the demands on the environment are low: because the multichannel sensor array senses the electrical signals of the human hand, environmental changes (such as humidity and temperature) and gesture operating habits (such as distance and speed) affect all channels of the array equally and therefore do not reduce recognition accuracy.
Drawings
FIG. 1 is a flow chart of the recognition process of the non-contact gesture recognition apparatus according to the present invention;
FIG. 2 is a schematic structural diagram of the 3 × 3 sensor array according to the present invention.
Reference numerals: 1. a liquid metal conductive layer; 2. a polymeric elastic fiber layer; 3. a viscous hydrogel layer.
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings and the embodiments.
Example one
A sensor based on human-body static electricity is combined with deep learning to obtain a non-contact gesture recognition device that has a simple structure, places few demands on the environment, and can accurately recognize a variety of complex three-dimensional gestures.
The human body carries a certain amount of charge in daily life. Based on the electrostatic induction principle, the movement of a human hand in space changes the spatial electric field, which polarizes dielectric substances in that space; the directional movement of the polarized charges generates a corresponding Maxwell displacement current, enabling non-contact sensing. However, non-contact gesture recognition based on human-body static electricity has so far relied on manually extracted, shallow signal features (such as amplitude and frequency), resulting in low accuracy and reliability. Through fully automatic data analysis and deep feature extraction, deep learning has strong and unique potential for the comprehensive, fine-grained analysis of multi-channel signals. Once a neural network is constructed and suitably trained, a deep learning algorithm can autonomously and efficiently extract deep features from a large number of data samples for data processing and classification; the trained model can then predict the labels of new data quickly, achieving high recognition accuracy. Therefore, combining deep learning with a neural network that processes the sensor data can improve the accuracy of non-contact gesture recognition based on human-body static electricity.
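The training and real-time prediction workflow described above can be sketched with a multilayer perceptron on synthetic data. The nine channels match the 3 × 3 array, but the hidden-layer sizes, gesture count, window length, and the random data itself are placeholder assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_gestures, n_channels, samples_per_channel = 4, 9, 20

# Each row is one gesture window: nine channels' features, flattened.
X = rng.normal(size=(200, n_channels * samples_per_channel))
y = rng.integers(0, n_gestures, size=200)   # gesture-type labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multilayer perceptron as in the patent; layer sizes are assumed here.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=100, random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te[:1])   # real-time prediction on a single new window
```

In a real deployment, `X` would hold preprocessed displacement-current windows for labeled gestures, and `clf.predict` would run on each incoming window to drive the communication module.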
Specifically, the non-contact gesture recognition device of this scheme comprises a 3 × 3 sensor array, a signal acquisition module, a data processing module, a deep learning module, and a communication module. As shown in FIG. 2, the 3 × 3 sensor array is composed of a liquid metal conductive layer 1, a polymer elastic fiber layer 2, and a viscous hydrogel layer 3. Using the charge carried by the human body, the sensor array senses the changes in the spatial electric field caused by gestures; these field changes polarize the liquid metal conductive layer, and the directional movement of the resulting polarization charges generates Maxwell displacement current signals corresponding to the gestures, achieving non-contact sensing of complex gestures. The signal acquisition module is a nine-channel acquisition module that collects the electrical signals generated by the sensor array. The data processing module filters, samples, and shapes the multi-channel electrical signals collected by the acquisition module and generates, in real time, the data structure used by the deep learning module. The deep learning module consists of a trained multilayer perceptron neural network; it judges the gesture type in real time and sends the corresponding instruction through the communication module to peripheral equipment, such as a commercial robotic arm. The communication module is based on the TCP/IP protocol and can send, in real time, the instructions corresponding to different gestures to control the commercial robotic arm for interactive tasks, for example operating medical instruments at risk of bacterial or viral cross-contamination, collecting throat swabs, and performing other medical procedures that require non-contact control.
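The communication module's TCP/IP dispatch could look like the sketch below. The gesture-to-command mapping, command byte strings, host address, and port are hypothetical: the patent specifies only that action instructions are sent to the peripheral over TCP/IP, not the wire format.

```python
import socket

# Hypothetical mapping from recognized gesture to command bytes.
GESTURE_COMMANDS = {
    "fist": b"GRIP\n",
    "open_hand": b"RELEASE\n",
    "rotate_palm": b"ROTATE\n",
    "flip_palm": b"FLIP\n",
}

def send_command(gesture, host="192.168.1.10", port=5000):
    """Send the action instruction for a recognized gesture to the peripheral
    (e.g. a robotic-arm controller) over a TCP connection."""
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(GESTURE_COMMANDS[gesture])
```

A real controller would define its own protocol; the point here is only that one short TCP message per recognized gesture suffices for the interaction loop.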
In addition, the number of sensor units in the sensor array can be expanded further, with the signal acquisition channels expanded correspondingly. As the number of liquid metal conductive blocks per unit area of the sensor array increases, the recognition accuracy improves correspondingly.
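The array expansion described above can be sketched as follows; the row-major mapping of acquisition channels onto grid positions is an illustrative assumption, not a layout taken from the patent.

```python
import numpy as np

def make_frame(samples, grid=3):
    """Arrange per-channel samples into a grid x grid spatial frame.

    Expanding the array from 3 x 3 to, e.g., 4 x 4 only requires raising
    `grid` and providing grid * grid acquisition channels.
    """
    expected = grid * grid
    if len(samples) != expected:
        raise ValueError(f"need {expected} channels, got {len(samples)}")
    # Row-major ordering assumed: channel k -> row k // grid, column k % grid.
    return np.asarray(list(samples), dtype=float).reshape(grid, grid)

frame3 = make_frame(range(9))        # the 3 x 3 configuration of this scheme
frame4 = make_frame(range(16), 4)    # an expanded 4 x 4 configuration
```

Keeping the spatial arrangement explicit like this also makes it easy to feed the frames to architectures that exploit spatial structure, should the multilayer perceptron later be swapped for one.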
Example two
A deep learning assisted non-contact gesture recognition method comprises the following steps:
S1, sensing, through polarization of liquid metal, the changes in the spatial electric field caused by gesture actions;
S2, acquiring the Maxwell displacement current signals generated by the polarization of the liquid metal;
S3, filtering, sampling, and shaping the Maxwell displacement current signals;
S4, inputting the processed Maxwell displacement current signals into a multilayer perceptron neural network to predict the gesture type in real time and determine the action instruction corresponding to the gesture type; and
S5, issuing the action instruction to peripheral equipment according to the TCP/IP protocol.
The above is a specific embodiment of the present invention, but the scope of the present invention is not limited thereto. Any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention, which is therefore defined by the appended claims.

Claims (3)

1. A deep learning assisted non-contact gesture recognition device, comprising:
a sensor array, in which liquid metal, based on the electrostatic induction principle, senses the changes in the spatial electric field caused by different non-contact gestures;
a signal acquisition module for acquiring the different Maxwell displacement current signals generated by the sensor array in response to different non-contact gestures;
a data processing module for filtering, sampling, and shaping the different Maxwell displacement current signals;
a deep learning module, consisting of a trained multilayer perceptron neural network, which judges the gesture type in real time from the different processed Maxwell displacement current signals and determines the action instruction; and
a communication module for issuing the action instruction to peripheral equipment according to the TCP/IP protocol, so that the peripheral equipment is controlled through different non-contact gestures.
2. The non-contact gesture recognition device according to claim 1, wherein the sensor array comprises, from top to bottom, a liquid metal conductive layer, a polymer elastic fiber layer, and a viscous hydrogel layer, the liquid metal conductive layer being composed of liquid metal conductive blocks arranged in a 3 × 3 array.
3. A deep learning assisted non-contact gesture recognition method, characterized by comprising the following steps:
S1, sensing, through polarization of liquid metal, the changes in the spatial electric field caused by gesture actions;
S2, acquiring the Maxwell displacement current signals generated by the polarization of the liquid metal;
S3, filtering, sampling, and shaping the Maxwell displacement current signals;
S4, inputting the processed Maxwell displacement current signals into a multilayer perceptron neural network to predict the gesture type in real time and determine the action instruction corresponding to the gesture type; and
S5, issuing the action instruction to peripheral equipment according to the TCP/IP protocol.
CN202210877947.5A 2022-07-25 2022-07-25 Deep learning assisted non-contact gesture recognition device and method Pending CN115145399A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210877947.5A CN115145399A (en) 2022-07-25 2022-07-25 Deep learning assisted non-contact gesture recognition device and method


Publications (1)

Publication Number Publication Date
CN115145399A true CN115145399A (en) 2022-10-04

Family

ID=83414297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210877947.5A Pending CN115145399A (en) 2022-07-25 2022-07-25 Deep learning assisted non-contact gesture recognition device and method

Country Status (1)

Country Link
CN (1) CN115145399A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3020482A1 * 2014-04-29 2015-10-30 Orange Method for entering a code by micro-gestures
CN108181985A (en) * 2017-03-02 2018-06-19 北京理工大学 A kind of vehicle mounted multimedia gesture identifying device based on electrostatic detection
CN208283909U (en) * 2018-04-13 2018-12-25 刘禹欣 A kind of mouse system and device based on space gesture control
US20210256246A1 (en) * 2020-02-10 2021-08-19 Massachusetts Institute Of Technology Methods and apparatus for detecting and classifying facial motions


Similar Documents

Publication Publication Date Title
Zhu et al. Haptic-feedback smart glove as a creative human-machine interface (HMI) for virtual/augmented reality applications
Jin et al. Triboelectric nanogenerator sensors for soft robotics aiming at digital twin applications
Guo et al. Human-machine interaction sensing technology based on hand gesture recognition: A review
CN102334086A (en) Device and method for monitoring an object's behavior
US20160103500A1 (en) System and method for a human machine interface utilizing near-field quasi-state electrical field sensing technology
CN210402266U (en) Sign language translation system and sign language translation gloves
CN103853333A (en) Gesture control scheme for toy
Pan et al. State-of-the-art in data gloves: A review of hardware, algorithms, and applications
Bhattacharya et al. Surface-property recognition with force sensors for stable walking of humanoid robot
Prasad et al. A wireless dynamic gesture user interface for HCI using hand data glove
CN115145399A (en) Deep learning assisted non-contact gesture recognition device and method
TWI657352B (en) Three-dimensional capacitive wear human-computer interaction device and method thereof
Lin et al. An event-triggered low-cost tactile perception system for social robot’s whole body interaction
Zhao et al. An adaptive real-time gesture detection method using EMG and IMU series for robot control
Roshandel et al. Multi-sensor based gestures recognition with a smart finger ring
CN107247523B (en) Multi-array fingertip touch interaction device
CN113176825B (en) Large-area air-isolated gesture recognition system and method
CN116009695A (en) Dynamic gesture recognition method and system based on data glove
Prabhuraj et al. Gesture controlled home automation for differently challenged people
Wang et al. HSVTac: A High-speed Vision-based Tactile Sensor for Exploring Fingertip Tactile Sensitivity
CN111230872A (en) Object delivery intention recognition system and method based on multiple sensors
Abougarair et al. Analysis of Mobile Accelerometer Sensor Movement Using Machine Learning Algorithm
Geyik et al. Decoding human intent using a wearable system and multi-modal sensor data
Mali et al. Hand gestures recognition using inertial sensors through deep learning
Kadam et al. Gesture control of robotic arm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20221004