CN112114655A - Barrier-free man-machine interaction system and interaction method based on sensor fusion


Info

Publication number
CN112114655A
Authority
CN
China
Prior art keywords
data
module
sensor
fusion
control module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910532742.1A
Other languages
Chinese (zh)
Inventor
主轩铭
徐文超
李威君
齐平平
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University
Priority to CN201910532742.1A
Publication of CN112114655A
Legal status: Pending

Classifications

    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012: Head tracking input arrangements
    • G06F3/013: Eye tracking input arrangements
    • G06F3/015: Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command
    • G06F2203/011: Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a barrier-free man-machine interaction system based on sensor fusion, comprising a sensor fusion module, a control module, a data sending module and a data receiving module. The sensor fusion module comprises, connected in sequence, a six-axis sensor module for controlling cursor displacement, an electromyographic sensor module for cursor confirmation, a sound sensor module for control switching and mode conversion, and a data processing unit module for data preprocessing and data fusion. The control module makes a decision after receiving data from the sensor fusion module and is connected with the data receiving module through the data sending module; data are transmitted over a Bluetooth protocol. A user can thus interact with a computer, tablet or mobile phone through head movement, eye blinking and voice control. The system not only provides practical and effective man-machine interaction equipment for people with special needs, but also offers a new way for ordinary people to free their hands and prevent cervical spondylosis. The invention also discloses a barrier-free man-machine interaction method.

Description

Barrier-free man-machine interaction system and interaction method based on sensor fusion
Technical Field
The invention relates to the technical field of multi-sensor data fusion and attitude recognition algorithms, and discloses an attitude recognition method and a barrier-free human-computer interaction system based on multi-sensor fusion.
Background
With the development of MEMS and Internet-of-things technology, demand for human-computer interaction has reached an unprecedented peak; combining intelligent wearable devices with attitude recognition algorithms has become a new trend, and the application prospects of intelligent human-computer interaction devices grow ever wider.
The invention provides a barrier-free man-machine interaction system based on sensor fusion, which not only expands the application range of traditional man-machine interaction equipment, but also offers a new approach to reducing the incidence of "mouse hand" (repetitive strain injury) and to providing barrier-free man-machine interaction equipment for people with special needs.
Disclosure of Invention
Aiming at these needs, the invention provides a barrier-free man-machine interaction system based on sensor fusion, combining multi-sensor data fusion technology with an attitude recognition algorithm. The system's control module makes a decision after receiving data from the sensor fusion module and is connected with the data receiving module through the data sending module; data are transmitted over a Bluetooth protocol, and a user can interact with a computer, tablet or mobile phone through head movement, eye blinking and voice control. This not only provides practical and effective man-machine interaction equipment for people with special needs, but also offers a new way for ordinary people to free their hands and prevent cervical spondylosis.
The barrier-free man-machine interaction system based on sensor fusion comprises a sensor fusion module, a control module, a data sending module and a data receiving module.
The sensor fusion module is worn on the head and comprises: a six-axis sensor module for controlling cursor displacement, an electromyographic sensor module for cursor confirmation, a sound sensor module for control switching and mode conversion, and a data processing unit module for data preprocessing and data fusion. The control module makes a decision after receiving data from the sensor fusion module; the judgment process adopts a decoupled design comprising hardware decoupling and software decoupling. The control module is connected with the data receiving module through the data sending module and transmits data over a Bluetooth protocol; it can also be connected to a computer through USB. A user can thus interact with a computer, tablet or mobile phone (hereinafter, the intelligent terminal) through head movement, eye blinking and voice control.
In the system, the six-axis sensor module integrates a three-axis accelerometer and a three-axis gyroscope: the accelerometer detects linear acceleration and the gravity vector, the gyroscope measures angular velocity, and the data processing unit computes the head's attitude from these. The electromyographic sensor module suppresses noise using differential inputs and an analog filter circuit; the data processing unit filters, rectifies and integrates the raw surface EMG signal to obtain a surface EMG pulse signal, from which eye-blink data are derived. The sound sensor module integrates high-precision digital-to-analog and analog-to-digital interfaces, and the data processing unit performs dynamically editable keyword speech recognition for control switching and mode conversion.
In the system, the data processing unit in the fusion sensor fuses and processes the data of the sensor fusion module; the control module converts the result into human-computer interaction instructions and sends them through the data sending module; the intelligent terminal receives the data through the data receiving module, so that the control module controls the human-computer interaction of the intelligent terminal.
In the system, the data sending module transmits data to the data receiving module over a Bluetooth protocol, linking the control module and the data receiving module; the data receiving module is connected to the intelligent terminal through USB and calls mouse and keyboard instructions to realize human-computer interaction.
The system's decoupled design comprises hardware decoupling and software decoupling. Hardware decoupling means the system can communicate with the intelligent terminal either over Bluetooth or directly through USB; software decoupling means that operations such as click, double click, right click, open and close can be controlled by either the electromyographic sensor module or the sound sensor module, improving the human-computer interaction experience.
The invention provides a barrier-free man-machine interaction method based on sensor fusion, which comprises the following steps:
Step one: place the fusion sensor module on the head, with the six-axis sensor module on the top of the head, the microphone of the sound sensor module beside the mouth, and the electrodes of the electromyographic sensor module on the muscles at the outer corner of the eye, to collect head motion signals, sound signals and raw surface EMG signals respectively; the motion signals comprise three-dimensional acceleration signals and three-dimensional angular velocity signals;
Step two: the data processing unit in the fusion sensor module processes the raw data: it corrects and compensates errors in the three-dimensional acceleration and angular velocity signals to obtain the heading, roll and pitch angles of the head motion attitude; matches the sound signal against keywords programmed in a register to obtain matched voice data; and filters, rectifies and integrates the raw surface EMG signal to obtain a surface EMG pulse signal;
Step three: the signal sending module transmits the output of the fusion sensor module to the control module, which converts the heading, roll and pitch angles of the head motion attitude into cursor displacement data, the voice data into control-switching and mode-conversion data, and the surface EMG pulse signal into cursor confirmation data;
Step four: the data sending module transmits the data processed by the control module to the data receiving module over a Bluetooth protocol, in full-duplex mode;
Step five: after the data receiving module receives the data, the micro-processing module calls mouse and keyboard instructions to realize human-computer interaction on the intelligent terminal device.
In the method, the process by which the fusion sensor module acquires head motion attitude data comprises:
Step a1: place the six-axis sensor on top of the head and acquire three-axis acceleration and three-axis angular velocity signals, the three axes being mutually perpendicular X, Y and Z axes;
Step a2: the data processing unit constructs a vector matrix from the acquired acceleration and angular velocity signals; after Kalman filtering with a Gaussian white-noise covariance matrix, error compensation and correction, a quaternion is obtained;
Step a3: the data processing unit converts the quaternion into Euler angles, i.e. heading, roll and pitch angles, in a 3-D Cartesian coordinate system.
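Step a3's quaternion-to-Euler-angle conversion can be sketched as follows. This is a minimal illustration assuming the common aerospace Z-Y-X rotation order, which the patent does not specify:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion to (heading/yaw, roll, pitch) in degrees.

    Assumes the Z-Y-X (yaw-pitch-roll) convention; other conventions give
    different angles for the same quaternion.
    """
    # Roll: rotation about the X axis
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Pitch: rotation about the Y axis, clamped to avoid asin domain errors
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    # Heading (yaw): rotation about the Z axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return tuple(math.degrees(a) for a in (yaw, roll, pitch))
```

For instance, the identity quaternion (1, 0, 0, 0) maps to zero heading, roll and pitch.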
the invention provides a barrier-free man-machine interaction method based on sensor fusion, wherein the process of acquiring eye blinking data by a fusion sensor module comprises the following steps:
step b 1: placing an electromyographic sensor at an eye corner muscle, placing a middle electrode and a tail end electrode in the middle and at the tail end of the muscle, placing a reference electrode in cheek muscle, and collecting surface original muscle electric signals;
step b 2: the data processing unit is used for amplifying, high-pass filtering, power frequency trap wave and low-pass filtering the collected surface original electromyographic signals, wherein a 50Hz digital trap wave filter is adopted for filtering power frequency noise interference;
step b 3: the data processing unit rectifies the signals and acquires integral myoelectric values by adopting a sliding time window, so that eye blinking data, namely surface myoelectric pulse signals, are acquired.
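The rectify-and-integrate stage of steps b2-b3 can be sketched as follows. The upstream amplification and filtering are assumed to have been done, and the window length is a hypothetical choice:

```python
def emg_envelope(samples, window=20):
    """Rectify a (pre-filtered) surface-EMG trace and average it over a
    sliding time window to obtain the pulse envelope of step b3.

    `window` is the number of samples in the sliding time window; 20 is an
    illustrative value, not the patent's parameter.
    """
    rectified = [abs(s) for s in samples]  # full-wave rectification
    env, acc = [], 0.0
    for i, r in enumerate(rectified):
        acc += r
        if i >= window:                    # drop the sample leaving the window
            acc -= rectified[i - window]
        env.append(acc / min(i + 1, window))
    return env
```

A blink then appears as a burst in `env` that later stages can threshold.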
In the method, the process by which the control module converts head motion attitude data into cursor displacement comprises:
Step c1: the control module acquires the Euler angle data, sets the cursor movement rate, and maps the three-dimensional heading and pitch angles onto a two-dimensional plane;
Step c2: the instantaneous Euler-angle offset is calculated and converted into a cursor movement amount;
Step c3: the control module calls a cursor movement instruction to realize human-computer interaction control.
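The mapping of steps c1-c2 can be sketched as follows; the sensitivity value and the sign convention for pitch are illustrative assumptions, not the patent's parameters:

```python
def euler_to_cursor(prev, curr, rate=10.0):
    """Map the change in (heading, pitch) between two attitude samples to a
    2-D cursor displacement in pixels.

    `prev` and `curr` are (heading_deg, pitch_deg) pairs; `rate` is a
    hypothetical sensitivity in pixels per degree (the "cursor movement
    rate" of step c1).
    """
    dyaw = curr[0] - prev[0]        # instantaneous heading offset
    dpitch = curr[1] - prev[1]      # instantaneous pitch offset
    dx = round(dyaw * rate)         # turning the head moves the cursor sideways
    dy = round(-dpitch * rate)      # nodding down moves the cursor down
    return dx, dy
```

The control module would then issue a relative cursor-move instruction with (dx, dy).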
In the method, the process by which the control module converts sound signal data into control switching and mode conversion comprises:
Step d1: the data processing unit in the fusion sensor presets a keyword list as pinyin strings;
Step d2: the data processing unit performs spectrum analysis and feature extraction on the voice stream from the sound sensor, then matches it against the keywords to identify the voice information;
Step d3: system control switching and mode conversion are performed according to the recognized result.
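The keyword matching of steps d1-d2 can be sketched as follows. The pinyin strings and action names in the table are hypothetical examples, not the patent's actual keyword list, and the acoustic front end (spectrum analysis, feature extraction) is assumed to have already produced a pinyin stream:

```python
# Hypothetical keyword table: pinyin string -> system action, mimicking the
# dynamically edited keyword list preset in the recognizer's register.
KEYWORDS = {
    "da kai": "OPEN",        # "open"
    "guan bi": "CLOSE",      # "close"
    "qie huan": "SWITCH_MODE",  # "switch"
}

def match_keyword(pinyin_stream):
    """Return the action whose keyword occurs in the recognized pinyin
    stream, or None if nothing matches (step d2)."""
    for key, action in KEYWORDS.items():
        if key in pinyin_stream:
            return action
    return None
```

Step d3 then dispatches on the returned action to switch control or convert modes.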
In the method, the process by which the control module converts the surface EMG pulse signal into cursor confirmation comprises:
Step e1: the control module band-pass filters the surface EMG pulse signal it reads, then computes the mean square error of the EMG signal;
Step e2: the control module analyzes it with a moving time-window function and double-threshold judgment, recognizing eye blinks as different cursor confirmation events;
Step e3: left or right mouse-button instructions are called to realize man-machine interaction.
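The double-threshold judgment of step e2 can be sketched as follows. This is a minimal illustration, not the patent's algorithm: the threshold values and the mapping from burst counts to click types are assumptions:

```python
def classify_blink(envelope, low=0.2, high=0.6):
    """Classify a surface-EMG pulse envelope into a cursor-confirmation
    event with a double-threshold rule.

    `low` and `high` are hypothetical thresholds: `high` detects a strong
    blink burst, `low` marks its end and rejects noise-floor activity.
    """
    peak = max(envelope, default=0.0)
    if peak < low:
        return None                  # below noise floor: no intentional blink
    crossings, above = 0, False
    for v in envelope:
        if not above and v >= high:  # burst starts when the high threshold is crossed
            crossings += 1
            above = True
        elif above and v < low:      # burst ends when the signal drops below low
            above = False
    if crossings >= 2:
        return "DOUBLE_CLICK"        # two strong bursts: double blink
    if peak >= high:
        return "LEFT_CLICK"          # one strong burst: single blink
    return "RIGHT_CLICK"             # weak but deliberate activity
```

Step e3 would then call the corresponding mouse-button instruction for the returned event.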
In the system, the control module is provided with an anti-shake module that eliminates cursor jitter caused by slight head movements; the control module gives voice information the highest priority, enabling quick operations such as enabling the mouse, disabling the mouse, new, copy and paste; the cursor confirmation information includes single click, double click and right click.
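The priority scheme above, with voice information ranked highest, can be sketched as a small dispatcher that merges the three fused channels into one instruction. The field names and the ordering of the two lower-priority channels are assumptions for illustration:

```python
def fuse_and_decide(euler_delta, keyword_action, blink_event, rate=10.0):
    """Combine the three preprocessed channels into a single
    human-computer-interaction instruction.

    Voice (`keyword_action`) takes highest priority, as stated in the text;
    blink confirmation is assumed to rank above cursor motion. `rate` is a
    hypothetical pixels-per-degree sensitivity.
    """
    if keyword_action is not None:                       # highest priority: voice
        return {"type": "mode", "action": keyword_action}
    if blink_event is not None:                          # next: blink confirmation
        return {"type": "click", "button": blink_event}
    dx = round(euler_delta[0] * rate)                    # default: head motion
    dy = round(-euler_delta[1] * rate)
    return {"type": "move", "dx": dx, "dy": dy}
```

Because each channel is checked independently, either the EMG or the voice channel can drive a click-type operation, consistent with the software decoupling described earlier.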
The beneficial effects of the invention are as follows: the functional design fully considers modular hardware construction and the human-computer interaction needs of different user groups. With multi-sensor data fusion and attitude recognition, a user can interact with a computer, tablet or mobile phone through head movement, eye blinking and voice control. This expands the application range of traditional human-computer interaction equipment and has positive significance both for improving human health and for adapting traditional tools for disabled people. The system has good universality and can meet the needs of different user groups.
Drawings
FIG. 1 is an overall architecture diagram of a human-computer interaction system based on a fusion sensor according to the present invention;
FIG. 2 is a frame diagram of the fusion sensor for determining Euler angles in the present invention;
FIG. 3 is a block diagram of a fusion sensor speech recognition in accordance with the present invention;
FIG. 4 is a frame diagram of the electromyographic signal preprocessing of the fusion sensor in the invention;
FIG. 5 is a block diagram of the overall hardware system of the present invention;
FIG. 6 is an overall framework diagram of the software system of the present invention;
FIG. 7 is a block diagram of an extended Kalman filter update process in accordance with the present invention;
FIG. 8 is a simulation of the mouse movement process in the present invention;
FIG. 9 shows electromyograms for single click, double click, right click, etc. in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following specific examples and the accompanying drawings. Except for the contents specifically mentioned below, the procedures, conditions and experimental methods for carrying out the invention are common general knowledge in the art, and the invention is not particularly limited thereto.
The invention provides a barrier-free man-machine interaction system based on sensor fusion. The sensor fusion module comprises a six-axis sensor, a sound sensor, an electromyographic sensor, a data processing unit and other functional modules. The control module makes a decision after receiving data from the sensor fusion module and is connected with the data receiving module through the data sending module; data are transmitted over a Bluetooth protocol, and a user can interact with a computer, tablet or mobile phone through head movement, eye blinking and voice control. This not only provides practical and effective man-machine interaction equipment for people with special needs, but also offers a new way for ordinary people to free their hands and prevent cervical spondylosis.
One innovation of the invention is multi-sensor data fusion: modeling of three kinds of sensor data is combined with multi-channel intelligent control transmission to construct an attitude recognition method and a barrier-free man-machine interaction system based on multi-sensor fusion. In the attitude solution, an extended Kalman filter that linearizes about the current mean and covariance is used.
Another innovation is that the system maps the means of man-machine interaction from the conventional two-dimensional plane into three-dimensional space, improving the user experience: the user can interact with a computer, tablet or mobile phone through head movement, eye blinking and voice control. This expands the application range of the mouse and is a substantial step toward reducing the incidence of "mouse hand" and improving man-machine interaction equipment for disabled people.
A further innovation is the decoupled design, comprising hardware decoupling and software decoupling: hardware decoupling means the system can communicate with the intelligent terminal over Bluetooth or directly through USB; software decoupling means operations such as click, double click, right click, open and close can be controlled by either the electromyographic sensor module or the sound sensor module, improving the human-computer interaction experience.
After the system's fusion sensor module acquires the head attitude signal, sound signal and raw surface EMG signal, the data processing unit performs data fusion and processing and transmits the result to the control module; the control module converts it into human-computer interaction instructions and transmits them to the data receiving module over a Bluetooth protocol via the data sending module; after the data receiving module receives the data, the micro-processing module calls mouse and keyboard instructions to realize human-computer interaction on the intelligent terminal device.
The barrier-free man-machine interaction system based on sensor fusion adopts a modular design, is convenient to carry and highly extensible; it can in particular meet the man-machine interaction needs of people with special needs and has broad application prospects. The multi-sensor data fusion technology has the following characteristics:
1. Multidimensionality of data sources: the signals acquired by the fusion sensor comprise the three-dimensional signals measured by the six-axis sensor, the one-dimensional sound signal measured by the sound sensor and the one-dimensional EMG signal measured by the electromyographic sensor.
2. Complementarity of data processing: the signals measured by the sensors complement each other in the decision process and can also undergo information complementation and optimized combination during human-computer interaction, improving control accuracy.
3. Timeliness of data transmission: the processing of each sensor is independent of the others, so a parallel processing mechanism can be adopted throughout, giving the system a higher processing speed; parallelized information interaction greatly improves efficiency.
4. Necessity of data fusion: sensor data fusion exploits the advantage of multiple sensors operating cooperatively, mitigates the shortcomings of any single sensor, and improves the effectiveness of the sensor system.
The invention uses a modular hardware design, enhances applicability through the decoupled design, and establishes a multi-channel transmission system for data-fusion processing. An extended Kalman filter, linearizing about the current mean and covariance, is used in data processing to obtain more accurate attitude information. Different priorities are set for different commands in control, and the decoupled design enriches the user experience.
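The predict/update structure of such a filter can be illustrated with a simplified scalar (hence already linear) version that fuses a gyro rate with an accelerometer-derived angle. This is a sketch only: the actual system filters a quaternion state, and the noise variances q and r here are hypothetical:

```python
def kalman_step(x, p, gyro_rate, accel_angle, dt, q=0.001, r=0.03):
    """One predict/update cycle of a scalar Kalman filter for one attitude
    angle.

    x: current angle estimate; p: its error variance; q, r: hypothetical
    process and measurement noise variances.
    """
    # Predict: integrate the gyro rate, and let uncertainty grow
    x = x + gyro_rate * dt
    p = p + q
    # Update: blend in the accelerometer-derived angle measurement
    k = p / (p + r)                 # Kalman gain
    x = x + k * (accel_angle - x)   # correct the estimate
    p = (1.0 - k) * p               # shrink the uncertainty
    return x, p
```

With a large initial variance the filter trusts the accelerometer measurement heavily on the first update, then increasingly relies on the integrated gyro as p shrinks.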
In general, the system achieves the following five goals:
1. the fusion sensor can acquire information of head posture signals, sound signals and electromyographic signals;
2. the data processing unit in the fusion sensor can filter and reduce noise of the acquired original signals to extract useful information, so that data fusion is realized;
3. low-power consumption and high-stability Bluetooth transmission is realized, and a parallel processing mechanism is adopted in the transmission process;
4. the control module performs algorithm attitude identification on the extracted information and makes command judgment;
5. the data receiving module can receive data and can call a mouse and keyboard instruction to carry out man-machine interaction.
To realize the above functions, the system is divided into four main modules. The control module performs algorithmic processing on the multi-channel fusion sensor data to obtain human-computer interaction instructions and uploads them toward the data receiving module. The fusion sensor module acquires the head attitude signal, sound signal and raw surface EMG signal, performs data fusion and preprocessing, and transmits the result to the control module. The data sending module connects the control module and the data receiving module, transmitting data over a Bluetooth protocol. The data receiving module receives the human-computer interaction instructions sent by the data sending module and, through its processor, calls mouse and keyboard commands to realize the human-computer interaction function.
The whole system is designed from the aspects of system targets and functional modules, and then each functional module is realized one by one. The selection of components is important to ensure the system effect, and the components selected by the system will be described below.
In the system, the fusion sensor module is the core part, comprising an MPU6050 module, an LD3320A module, a MyoWare module and a data processing module. The MPU6050 integrates a three-axis gyroscope and a three-axis accelerometer; the data processing module applies extended Kalman filtering to the measured three-axis acceleration and angular velocity, obtains a quaternion and then the Euler angles, as shown in FIG. 2. The LD3320A integrates high-precision digital-to-analog and analog-to-digital interfaces and supports dynamically editable keyword recognition; the data processing unit presets the keyword list as pinyin strings so that speech recognition can drive control switching and mode conversion, as shown in FIG. 3. The MyoWare module collects the raw surface EMG signal; the data processing module amplifies the weak surface EMG signal of the human body, passes it through a high-pass filter, a 50 Hz digital notch filter and a low-pass filter, and integrates it with a sliding window function to obtain the surface EMG pulse signal, as shown in FIG. 4. The control module is an Arduino UNO, whose core processor is the ATmega328P; it provides abundant digital input/output pins, analog input pins, a crystal oscillator, a DC jack and an ICSP header, and converts the information collected by the fusion sensor into human-computer interaction instructions. The data sending module is an HC-05, which performs short-range point-to-point data transmission and reception over a Bluetooth protocol.
The receiving module adopts HC-05 and Arduino Leonardo modules: the HC-05 receives the data, and the Arduino Leonardo calls mouse and keyboard commands to convert the received data into human-computer interaction actions.
The hardware design of the system is shown in FIG. 5, and the software design in FIG. 6. The system constructs parallel effective information channels: data are collected from the fusion sensor module, preprocessed and fused; the control module runs its algorithms to convert the data into instructions; and the data receiving module calls mouse and keyboard commands to realize man-machine interaction. This information exchange channel is crucial, and the process comprises the following steps:
step 1: the fusion sensor module reads six-axis sensor data through an I2C bus protocol to obtain a three-dimensional acceleration signal and a three-dimensional angular velocity signal, the data processing unit performs an extended Kalman filtering process on the signals, and the heading angle, rolling angle and pitch angle data of the partial motion attitude data are obtained through error correction compensation;
step 2: the fusion sensor module reads the sound sensor data through the I2C bus protocol, and the data processing unit matches the sound signal against the keywords programmed in the register to obtain matched voice data;
step 3: the fusion sensor module reads the electromyographic sensor data through the I2C bus protocol; the data processing module amplifies the weak surface muscle electrical signals of the human body, passes them through a high-pass filter, a 50 Hz digital notch filter and a low-pass filter, and integrates the result with a sliding window function to obtain the surface electromyographic pulse signal;
step 4: the signal sending module transmits the signals output by the fusion sensor module to the control module; the control module processes the obtained signals, converting the heading angle, roll angle and pitch angle of the head motion attitude data into cursor displacement data, the voice data into control-switching and mode-switching data, and the surface electromyographic pulse signal into cursor confirmation data;
step 5: the data sending module transmits the data processed by the control module to the data receiving module through the Bluetooth protocol in full-duplex mode; after the data receiving module receives the data, the micro-processing module calls mouse and keyboard instructions to realize human-computer interaction on the intelligent terminal device.
Steps 1-3 are performed in parallel and the obtained results are fused. As shown in figure 1, the barrier-free human-computer interaction system based on sensor fusion automatically acquires the data of each sensor and then performs preprocessing and data fusion; the control module makes a decision after receiving the data from the sensor fusion module and transmits it through the Bluetooth protocol; after the data receiving module receives the data, the micro-processing module calls mouse and keyboard commands, so that the user can realize human-computer interaction with a computer, tablet or mobile phone through head movement, eye blinking and voice control.
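Steps 1-5 can be sketched as one decision function that fuses a frame of preprocessed sensor data into interaction commands; the channel names and command strings below are illustrative assumptions, not identifiers from the patent:

```python
# Minimal sketch of the parallel-channel fusion step: each call receives one
# preprocessed sample from the head-attitude, voice and EMG channels and emits
# the resulting interaction commands. Command strings are assumed for illustration.
def decide(head, voice, emg):
    """Fuse one sample from each channel into a list of interaction commands."""
    commands = []
    if voice is not None:
        commands.append(("MODE", voice))        # voice: control/mode switching
    if emg:                                     # EMG pulse detected: confirmation
        commands.append(("CLICK", "left"))
    dx, dy = head                               # head attitude: cursor movement
    if (dx, dy) != (0, 0):
        commands.append(("MOVE", (dx, dy)))
    return commands

# One fused frame: the head moved, no voice keyword matched, a blink was detected.
print(decide(head=(3, -1), voice=None, emg=True))
# -> [('CLICK', 'left'), ('MOVE', (3, -1))]
```

In the real system each channel is sampled and preprocessed independently (steps 1-3) before this fusion step, which is what makes the channels parallel.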
In step 1: to filter out signal noise, the invention adopts an extended Kalman filtering process; the process of calculating the Euler angles after the noise is filtered out is shown in figure 2. The extended Kalman filter proceeds as follows:
Assume the process has a state vector x ∈ R^n governed by the nonlinear stochastic difference equation

x_k = f(x_{k-1}, u_{k-1}, w_{k-1})     (1)

with a measurement z ∈ R^m:

z_k = h(x_k, v_k)     (2)

where w_k and v_k represent the process and measurement noise respectively, and f(·) and h(·) are both nonlinear functions.

In practice, the individual values of the noise w_k and v_k at each time step are not known. However, the state and measurement can be approximated without them:

x̃_k = f(x̂_{k-1}, u_{k-1}, 0),     z̃_k = h(x̃_k, 0)     (3)

where x̂_{k-1} is some a posteriori estimate of the state from the previous time step k-1.

To estimate a process with nonlinear difference and measurement relationships, the invention writes new governing equations that linearize about the estimates in (3):

x_k ≈ x̃_k + A (x_{k-1} - x̂_{k-1}) + W w_{k-1}
z_k ≈ z̃_k + H (x_k - x̃_k) + V v_k     (4)

where x_k and z_k are the true state and measurement vectors, x̃_k and z̃_k are the approximate state and measurement vectors, A is the Jacobian matrix of the partial derivatives of f with respect to x, W is the Jacobian of f with respect to w, H is the Jacobian of h with respect to x, and V is the Jacobian of h with respect to v.

The above is the linearization underlying the Kalman filter; with this derivation the Kalman filter is extended to the extended Kalman filter. At each time step the EKF updates the nonlinear estimate from the previous state vector and the measured state vector.

The update process is shown in fig. 7. Note that a superscript minus denotes the a priori estimate of the corresponding variable, e.g. x̂_k^- is the a priori estimate of x̂_k; Q_k and R_k represent the process-noise and measurement-noise covariances respectively at time step k.
In step 2: the data processing unit in the fusion sensor edits the keyword list in advance as pinyin strings; the sound sensor receives the voice stream signal, spectrum analysis and feature-value extraction are performed by Fourier transform, and the result is then matched against the keyword list in the voice recognition unit. The specific process is shown in fig. 3.
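Because recognition ultimately reduces to matching the recognized pinyin string against the preset list, the lookup itself can be sketched in a few lines. The pinyin entries and command names below are illustrative assumptions, not values from the patent:

```python
# Hypothetical preset keyword table mapping pinyin strings to commands.
KEYWORDS = {
    "da kai xi tong": "SYSTEM_ON",    # "open the system"
    "guan bi xi tong": "SYSTEM_OFF",  # "close the system"
    "fu zhi": "COPY",                 # "copy"
    "zhan tie": "PASTE",              # "paste"
}

def match_keyword(recognized):
    """Return the command for an exactly matched pinyin string, else None."""
    return KEYWORDS.get(recognized.strip().lower())

print(match_keyword("Fu Zhi"))      # COPY
print(match_keyword("mou ge ci"))   # None: not in the preset list
```

In the real device the LD3320A performs the acoustic matching in hardware against the registered pinyin list; only the matched entry reaches the control logic.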
In step 3: the amplitude of the raw surface electromyographic signal is only about 100-5000 μV, too small to use directly, so a preamplifier amplifies it by a factor of 1000; to suppress power-frequency interference, the amplifying circuit must have a high common-mode rejection ratio. Since the minimum frequency of the effective electromyographic signal is 20 Hz and most of its energy is distributed between 50 and 150 Hz, the signal is filtered by a high-pass filter with f_L = 10 Hz, then by a twin-T RC active notch filter with a 50 Hz notch frequency, amplified again, filtered by a low-pass filter with cutoff f_H, and finally integrated with a sliding window function to obtain the surface electromyographic pulse signal. The process is shown in figure 4.
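The final rectify-and-integrate stage of this chain can be sketched as follows; the analog amplification and filtering stages are omitted, and the window length and synthetic signal are illustrative assumptions:

```python
# Sketch of the last EMG stage: full-wave rectify the filtered signal, then
# integrate it over a sliding window to obtain the "EMG pulse" envelope.
def emg_envelope(samples, window=15):
    """Rectify, then moving-window integrate (sum of |x| over the last `window` samples)."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        out.append(sum(rect[lo:i + 1]))
    return out

# Synthetic signal: quiet baseline, then a short burst of muscle activity (a "blink").
signal = [0.01, -0.02, 0.01] * 10 + [0.8, -0.9, 1.0, -0.7, 0.85] * 6 + [0.01] * 20
env = emg_envelope(signal)
print(max(env[:30]) < 1.0, max(env) > 5.0)  # baseline stays low, burst stands out
```

The envelope suppresses the signal's sign and high-frequency ripple, so a simple threshold on it separates muscle activations from the resting baseline.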
In step 4: in the process of controlling cursor displacement, the three-dimensional heading angle and pitch angle are mapped onto a two-dimensional plane, and an anti-shake design is added to the displacement process. The specific judgment is as follows:
Set the moving speeds V_X = 0.06 and V_Y = 0.04; the Euler-angle read values are X = pitch() and Y = yaw(); the judgment thresholds are F_X and F_Y, and ΔX and ΔY are the Euler-angle variations.

The judgment criterion is

|ΔX| > F_X   or   |ΔY| > F_Y.

When this condition is satisfied the requirement for the cursor to start moving is met; otherwise the cursor does not move, which realizes the anti-shake effect, the shake error value being removed when the moving distance is long.
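A sketch of this dead-zone anti-shake check, with the speeds V_X = 0.06 and V_Y = 0.04 taken from the text and the threshold values F_X, F_Y chosen as illustrative assumptions:

```python
# Dead-zone mapping from Euler-angle changes to cursor displacement: the cursor
# only moves when the per-axis angle change exceeds its threshold.
VX, VY = 0.06, 0.04   # cursor speed gains from the text
FX, FY = 2.0, 2.0     # dead-zone thresholds in degrees (assumed values)

def cursor_delta(dX, dY):
    """Map angle changes (dX: pitch, dY: yaw) to cursor movement, suppressing small jitter."""
    mx = VX * dX if abs(dX) > FX else 0.0
    my = VY * dY if abs(dY) > FY else 0.0
    return mx, my

print(cursor_delta(0.5, 1.0))     # small tremor: (0.0, 0.0), cursor stays put
print(cursor_delta(10.0, -25.0))  # deliberate head move: nonzero displacement
```

The dead zone trades a little responsiveness for stability: involuntary head tremor below the thresholds never reaches the cursor.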
In the cursor confirmation judgment of the control module, a moving time window function plus double-threshold judgment method is used; the specific judgment process is as follows:
Take n = 15 sampling points; let data[i] be the electromyographic pulse signal value, Fdata[i] the sampling variation value, sum the sum of the signal values, F the standard-deviation threshold and T the time threshold.

With the window mean sum/n and Fdata[i] = data[i] - sum/n, the standard-deviation threshold is satisfied when

sqrt( (1/n) Σ_{i=1..n} Fdata[i]^2 ) > F,

and the time threshold compares the duration t of the detected activity with T. A short activation (t < T) produces a left click and a long activation (t ≥ T) produces a right click; a double click is two consecutive left clicks. The simulation result is shown in fig. 9.
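The moving-window, double-threshold judgment can be sketched as follows; the window length n = 15 follows the text, while the threshold values F and T are illustrative assumptions:

```python
import math

# Double-threshold blink classifier: the first threshold (on the window's
# standard deviation) detects a blink, the second (on its duration) decides
# between left and right click. Threshold values here are assumed.
N, F_STD, T_LONG = 15, 0.2, 8   # window samples, std-dev threshold, duration threshold

def window_std(data):
    mean = sum(data) / len(data)
    return math.sqrt(sum((x - mean) ** 2 for x in data) / len(data))

def classify(window, active_duration):
    """Return a cursor event for one 15-sample window, or None if no blink."""
    if window_std(window) <= F_STD:
        return None                      # below the first threshold: no blink
    return "RIGHT_CLICK" if active_duration >= T_LONG else "LEFT_CLICK"

quiet = [0.05] * 15
blink = [0.05] * 5 + [0.9, 1.0, 0.8, 0.95, 0.85] + [0.05] * 5
print(classify(quiet, 0))      # None
print(classify(blink, 3))      # LEFT_CLICK (short blink)
print(classify(blink, 10))     # RIGHT_CLICK (long blink)
```

A double click would be issued by the surrounding logic when two LEFT_CLICK events occur in quick succession.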
In step 5: the micro-processing module of the receiving module is an Arduino Leonardo, which supports the USB HID device protocol; since the operating system of the intelligent terminal device ships with an HID-class driver, communication is completed with ordinary API system calls, realizing human-computer interaction control of the intelligent terminal device.
Examples
In this embodiment, a user with special needs wears the device of the invention. When the fusion sensor detects the voice command to open the system, all modules are started. The user controls the mouse cursor by moving the head up, down, left and right: the user moves the cursor onto the IE browser and blinks to perform the double-click function and open the browser; then moves the cursor onto the maximize and minimize buttons and blinks once to maximize or minimize the window; performs a long blink on the desktop to trigger the right-click function; and can also perform copy, paste and similar functions by voice. In this way people with special needs can also enjoy the Internet.
The human-computer interaction system based on sensor fusion solves the problem that people with special needs cannot use a conventional mouse and keyboard: using the sensor data of the system, the user can accurately control the intelligent terminal and realize human-computer interaction.
The protection of the present invention is not limited to the above embodiments. Variations and advantages that may occur to those skilled in the art are intended to be included in the invention without departing from the spirit and scope of the inventive concept, the scope of protection being defined by the appended claims.

Claims (14)

1. An obstacle-free human-computer interaction system based on sensor fusion, comprising: a sensor fusion module, a control module, a data sending module and a data receiving module; wherein,
the sensor fusion module is arranged on the head and comprises: a six-axis sensor module for controlling cursor displacement, an electromyographic sensor module for cursor confirmation, a sound sensor module for control switching and mode conversion, and a data processing unit module for data preprocessing and data fusion;
the control module makes a decision after receiving the data of the sensor fusion module; the decision process adopts a decoupling design, including hardware decoupling and software decoupling; the control module is connected with the data receiving module through the data sending module and transmits data through the Bluetooth protocol; the control module is connected with the computer through USB, and the user can realize human-computer interaction with a computer, tablet or mobile phone through head movement, eye blinking and voice control.
2. The barrier-free human-computer interaction system based on sensor fusion of claim 1, wherein the six-axis sensor module integrates a three-axis accelerometer and a three-axis gyroscope: the accelerometer detects linear acceleration and the gravity vector, the gyroscope measures the rotational angular velocity, and the data processing unit obtains the attitude data of the head after calculation; the electromyographic sensor module suppresses noise by means of differential input and an analog filter circuit, and the data processing unit filters, rectifies and integrates the raw surface muscle electrical signal to obtain the surface electromyographic pulse signal and thus the eye-blink data; the sound sensor module integrates high-precision digital/analog and analog/digital interfaces, and the data processing unit performs voice recognition with dynamically editable keywords, thereby performing control switching and mode conversion.
3. The barrier-free human-computer interaction system based on sensor fusion as claimed in claim 1, wherein the data processing unit performs data preprocessing and data fusion on the data of the sensor fusion module, the control module converts the data into human-computer interaction instructions and sends the data through the data sending module, and the intelligent terminal receives the data through the data receiving module, thereby realizing the human-computer interaction control of the control module on the intelligent terminal.
4. The barrier-free human-computer interaction system based on sensor fusion of claim 1, wherein the data sending module transmits data to the receiving module via the Bluetooth protocol, connecting the control module and the data receiving module; the data receiving module is connected with the intelligent terminal through USB and calls mouse and keyboard instructions to realize human-computer interaction.
5. The barrier-free human-computer interaction system based on sensor fusion of claim 1, wherein the decoupling design comprises hardware decoupling and software decoupling: the hardware decoupling means that the system can communicate with the intelligent terminal through Bluetooth or directly through USB; the software decoupling means that the single-click, double-click, right-click, open and close operations in the human-computer interaction can be controlled by both the electromyographic sensor module and the sound sensor module, improving the human-computer interaction experience.
6. An unobstructed human-computer interaction method based on sensor fusion, characterized in that a human-computer interaction system according to any one of claims 1-5 is used, said method comprising the steps of:
step one: placing the fusion sensor module on the head, wherein the six-axis sensor module is placed on the top of the head, the microphone of the sound sensor module beside the mouth, and the electrodes of the electromyographic sensor module on the muscles at the outer corner of the eye; head attitude signals, sound signals and raw surface muscle electrical signals are respectively collected, the attitude signals comprising three-dimensional acceleration signals and three-dimensional angular velocity signals;
step two: the data processing unit in the fusion sensor module processes the raw data: an extended Kalman filtering process is performed on the three-dimensional acceleration and angular velocity signals, and error correction and compensation then yield the heading angle, roll angle and pitch angle of the head motion attitude data; the sound signal is matched against the keywords programmed in the register to obtain matched voice data; and the raw surface muscle electrical signal is filtered, rectified and integrated to obtain the surface electromyographic pulse signal;
step three: the data sending module transmits the signals output by the fusion sensor module to the control module, and the control module processes the obtained signals, converting the heading angle, roll angle and pitch angle of the head motion attitude data into cursor displacement data, the voice data into control-switching and mode-switching data, and the surface electromyographic pulse signal into cursor confirmation data;
step four: the data sending module transmits the data processed by the control module to the data receiving module through the Bluetooth protocol in full-duplex mode;
step five: after the data receiving module receives the data, the micro-processing module calls mouse and keyboard instructions to realize human-computer interaction on the intelligent terminal device.
7. The method of claim 6, wherein the process of acquiring head motion pose data by the fusion sensor comprises the steps of:
step a1: the six-axis sensor is placed on the top of the head, and three-axis acceleration and three-axis angular velocity signals are acquired, the three axes being the mutually perpendicular X, Y and Z axes;
step a2: the data processing unit constructs a vector matrix from the three-axis acceleration and angular velocity signals; after Kalman filtering with Gaussian white-noise covariance matrix processing and error compensation and correction, a quaternion is obtained;
step a3: the data processing unit converts the quaternion data into Euler angles, namely heading angle, roll angle and pitch angle, through a 3D Cartesian coordinate system.
8. The barrier-free human-computer interaction method based on sensor fusion of claim 6, wherein the process of the fusion sensor module acquiring eye-blink data comprises the following steps:
step b1: the electromyographic sensor is arranged at the muscle of the corner of the eye, with the middle electrode and the end electrode placed in the middle and at the end of the muscle and the reference electrode placed on the cheek muscle, and the raw surface muscle electrical signal is collected;
step b2: the data processing unit amplifies, high-pass filters, power-frequency notch filters and low-pass filters the collected raw surface electromyographic signal, wherein a 50 Hz digital notch filter is adopted to filter out power-frequency noise interference;
step b3: the data processing unit rectifies the signal and then uses a sliding time window to obtain the integrated electromyographic value within the window, thereby obtaining the eye-blink data, namely the surface electromyographic pulse signal.
9. The method of claim 6, wherein the process of the control module converting the head pose data into cursor displacement comprises the steps of:
step c1: the control module acquires the Euler angle data, sets the cursor movement rate, and maps the three-dimensional heading angle and pitch angle onto a two-dimensional plane;
step c2: calculating the instantaneous Euler-angle offset and converting the offset into the cursor movement amount;
step c 3: and the control module calls a cursor movement instruction to realize human-computer interaction control.
10. The method of claim 6, wherein the process of the control module converting the sound signal data into control switching and mode switching comprises the following steps:
step d 1: presetting a pinyin string as a keyword list by a data processing unit in the fusion sensor;
step d 2: the data processing unit carries out spectrum analysis and feature extraction on the voice stream input by the sound sensor, and then matches the voice stream with the key words to identify voice information;
step d 3: and carrying out system control switching and mode conversion on the identified result.
11. The barrier-free human-computer interaction method based on sensor fusion as claimed in claim 6, wherein the process of converting the surface electromyography pulse signal into the cursor confirmation by the control module comprises the following steps:
step e 1: the control module performs band-pass filtering on the read surface electromyographic pulse signals and then calculates the mean square error of the electromyographic signals;
step e2: the control module analyzes the signal using the moving time window function and double-threshold judgment, recognizing eye blinks as different cursor confirmation information;
step e 3: and calling a left and right mouse button instruction to realize man-machine interaction.
12. The barrier-free human-computer interaction method based on sensor fusion of claim 9, wherein the control module is provided with an anti-shaking algorithm to eliminate cursor shaking caused by slight head movement.
13. The method of claim 10, wherein the control module gives the voice information the highest priority and performs shortcut operations such as opening and closing the mouse, creating, copying and pasting.
14. The method of claim 11, wherein the control module uses a moving time window function plus double-threshold judgment for cursor confirmation, the confirmation information comprising single click, double click and right click.
CN201910532742.1A 2019-06-19 2019-06-19 Barrier-free man-machine interaction system and interaction method based on sensor fusion Pending CN112114655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910532742.1A CN112114655A (en) 2019-06-19 2019-06-19 Barrier-free man-machine interaction system and interaction method based on sensor fusion

Publications (1)

Publication Number Publication Date
CN112114655A true CN112114655A (en) 2020-12-22

Family

ID=73796669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910532742.1A Pending CN112114655A (en) 2019-06-19 2019-06-19 Barrier-free man-machine interaction system and interaction method based on sensor fusion

Country Status (1)

Country Link
CN (1) CN112114655A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112686132A (en) * 2020-12-28 2021-04-20 南京工程学院 Gesture recognition method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102835964A (en) * 2012-08-31 2012-12-26 漳州师范学院 Glasses for acquiring fatigue driving physiological signal transmitted via Bluetooth
CN102848918A (en) * 2012-08-31 2013-01-02 漳州师范学院 Fatigue driving detection control system based on physiological signal collection and control method thereof
CN202776330U (en) * 2012-08-31 2013-03-13 漳州师范学院 Glasses capable of acquiring fatigue driving physiological signals transmitted by USB (universal serial bus)
CN103513770A (en) * 2013-10-09 2014-01-15 中国科学院深圳先进技术研究院 Man-machine interface equipment and man-machine interaction method based on three-axis gyroscope
CN105988232A (en) * 2015-02-11 2016-10-05 贵州景浩科技有限公司 Electronic collimation device with wearable display device
CN205983391U (en) * 2016-06-23 2017-02-22 河北工业大学 Control device based on head and eye action
CN207367341U (en) * 2017-08-23 2018-05-15 中国人民解放军总医院 The glasses stimulated based on fatigue detecting with awakening
CN109364471A (en) * 2018-12-12 2019-02-22 歌尔科技有限公司 A kind of VR system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jiang Xiaoxu et al., "Multi-sensor data fusion algorithm based on quaternions and a Kalman filter", Metrology Science and Technology, pages 31-33 *
Wang Kunpeng et al., "Feature extraction method for surface electromyographic signals of the human leg", Journal of Chongqing University, vol. 40, no. 11, pages 83-90 *

Similar Documents

Publication Publication Date Title
US9299248B2 (en) Method and apparatus for analyzing capacitive EMG and IMU sensor signals for gesture control
CN212112406U (en) Driving device based on user EOG signal and head gesture
CN104134060B (en) Sign language interpreter and display sonification system based on electromyographic signal and motion sensor
CN104536558B (en) A kind of method of intelligence finger ring and control smart machine
JP6064280B2 (en) System and method for recognizing gestures
CN103336580B (en) A kind of cursor control method of head-wearing device
CN102411440B (en) Wireless head-controlled mouse based on accelerometer and gyro sensor
Sahadat et al. Simultaneous multimodal PC access for people with disabilities by integrating head tracking, speech recognition, and tongue motion
WO2015062320A1 (en) Human body coupled intelligent information input system and method
CN108453742A (en) Robot man-machine interactive system based on Kinect and method
CN106326881B (en) Gesture recognition method and gesture recognition device for realizing man-machine interaction
JPH07248873A (en) Controller using myoelectric signal
CN210402266U (en) Sign language translation system and sign language translation gloves
Sim et al. The head mouse—Head gaze estimation" In-the-Wild" with low-cost inertial sensors for BMI use
CN106227433A (en) A kind of based on mobile terminal the control method of PC, mobile terminal
Ruzaij et al. Multi-sensor robotic-wheelchair controller for handicap and quadriplegia patients using embedded technologies
CN105867595A (en) Human-machine interaction mode combing voice information with gesture information and implementation device thereof
CN112114655A (en) Barrier-free man-machine interaction system and interaction method based on sensor fusion
CN103501473B (en) Based on multifunctional headphone and the control method thereof of MEMS sensor
CN203552178U (en) Wrist strip type hand motion identification device
CN112911458A (en) Wireless earphone capable of being controlled by head movement
Kim et al. Development of a wearable HCI controller through sEMG & IMU sensor fusion
CN108127667B (en) Mechanical arm somatosensory interaction control method based on joint angle increment
Al-Wesabi et al. A Smart-hand Movement-based System to Control a Wheelchair Wirelessly.
Wang et al. AirMouse: Turning a pair of glasses into a mouse in the air

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination