CN117606540A - Multi-modal sensing device for human-machine interaction and data processing method

Multi-modal sensing device for human-machine interaction and data processing method

Info

Publication number: CN117606540A
Application number: CN202311361789.9A
Authority: CN (China)
Prior art keywords: machine interaction, sensor array, electrical impedance, user, signals
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 杨云杰 (Yang Yunjie)
Current/original assignee: Shenzhen Yuanlu Technology Co., Ltd.
Priority date / filing date: 2023-10-19
Publication date: 2024-02-27

Classifications

    • G01D21/02: Measuring two or more variables by means not covered by a single other subclass
    • A61B5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/053: Measuring electrical impedance or conductance of a portion of the body
    • A61B5/0531: Measuring skin impedance
    • A61B5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126: Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/6806: Sensor mounted on worn items; gloves
    • A61B5/681: Sensor mounted on worn items; wristwatch-type devices
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267: Classification of physiological signals or data involving training the classification device
    • G06F3/017: Gesture-based interaction, e.g. based on a set of recognized hand gestures
    • G06V40/28: Recognition of hand or arm movements, e.g. recognition of deaf sign language


Abstract

The invention discloses a multi-modal sensing device for human-machine interaction and a data processing method. The multi-modal sensing device comprises a wearable body, a multi-frequency electrical impedance imaging sensor array, and a flexible strain sensor array. The wearable body is configured to be worn on a user's hand; the multi-frequency electrical impedance imaging sensor array is arranged on the wearable body and configured to acquire electrical impedance signals caused by the user's gesture changes; and the flexible strain sensor array is arranged on the wearable body and configured to acquire electrical signals caused by deformation of the user's finger joints. The multi-modal sensing device of the present technical solution not only improves gesture-recognition accuracy but also enables richer data output, enhancing human-machine interaction functionality and experience.

Description

Multi-modal sensing device for human-machine interaction and data processing method
Technical Field
The invention relates to the technical field of virtual reality and augmented reality, and in particular to a multi-modal sensing device for human-machine interaction and a data processing method.
Background
Augmented and virtual reality (AR/VR) is an emerging human-computer interaction (HCI) paradigm that can overcome the physical limitations of real life through seamless integration with the digital world. One of the key tasks of immersive AR/VR is to quickly and accurately reconstruct the exact hand pose during dexterous movements. In addition, perceiving and exploiting human emotional characteristics is a precondition for more natural, human-like interaction.
Perceptual hand accessories (e.g., gloves, wristbands) for hand pose estimation have been a focus of research and product development over the past few decades. Some popular gesture reconstruction/recognition methods are based on skin electronics or on sensor arrays embedded in data gloves. For example, recent work has achieved gesture recognition with a single-layer electrode array on a conventional electrical impedance tomography (EIT) wristband operating at a single frequency.
However, most current attempts involve only a single measurement modality, which limits accuracy.
Disclosure of Invention
The main object of the present invention is to provide a multi-modal sensing device for human-machine interaction, aiming to improve the accuracy of gesture recognition during human-machine interaction.
To achieve the above object, the present invention provides a multi-modal sensing device for human-machine interaction, comprising:
a wearable body configured to be worn on a user's hand;
a multi-frequency electrical impedance imaging sensor array disposed on the wearable body, the multi-frequency electrical impedance imaging sensor array configured to acquire electrical impedance signals resulting from user gesture changes; and
a flexible strain sensor array disposed on the wearable body and configured to acquire electrical signals caused by deformation of the user's finger joints.
In some embodiments, the multi-frequency electrical impedance imaging sensor array has at least two electrode layers, each electrode layer comprising 8 electrodes.
In some embodiments, the wearable body includes a wrist portion, the wrist portion being annular;
the multi-frequency electrical impedance imaging sensor array is disposed on the wrist portion, the 8 electrodes of each electrode layer are arranged in a ring around the wrist portion, and the electrodes are exposed on the inner side of the wrist portion.
In some embodiments, the multi-frequency electrical impedance imaging sensor array further comprises a first external circuit interface formed at the wrist portion.
In some embodiments, the multi-frequency electrical impedance imaging sensor array is further configured to acquire bioelectric signals, including blood pulse wave signals and skin conductivity.
In some embodiments, the wearable body includes a finger portion having a fingertip;
the flexible strain sensor array includes a fingertip sensor disposed at the fingertip.
In some embodiments, the flexible strain sensor array further comprises a flexible conductor and a second external circuit interface formed on the wrist of the wearable body, the fingertip sensor being connected to the second external circuit interface by the flexible conductor.
In some embodiments, the flexible conductor includes a tube and a liquid metal flowing inside the tube.
The invention also provides a multi-modal data processing method for human-machine interaction, which uses any of the above multi-modal sensing devices to acquire signals, and comprises the following steps:
fusing the electrical impedance signals and the electrical signals acquired by the multi-modal sensing device, and inputting them into a pre-trained neural network to reconstruct user gestures;
acquiring physiological parameters of the user from the bioelectric signals acquired by the multi-modal sensing device, and identifying the user's health state from the physiological parameters; and
inputting the physiological parameters into a pre-trained neural network to identify the user's emotional state.
In some embodiments, the method further comprises:
the electrical impedance signals, the electrical signals and the physiological parameters acquired by the multi-modal sensing device are fused and input into a pre-trained deep learning network to realize user behavior understanding.
According to the multi-modal sensing device for human-machine interaction, the multi-frequency electrical impedance imaging sensor array collects electrical impedance signals and bioelectric signals caused by the user's gesture changes, and the flexible strain sensor array collects electrical signals caused by deformation of the user's finger joints, so that multi-modal signals can be collected during human-machine interaction. Based on the collected multi-modal signals, the data processing method of the multi-modal sensing device can accurately reconstruct the user's gestures and also realize health detection and emotion monitoring of the user, providing multi-modal signal output. Therefore, compared with a traditional human-machine interaction system using single-modality signals, the multi-modal sensing device and data processing method can collect and process multi-modal signals, improving gesture-recognition accuracy, enabling richer data output, and enhancing human-machine interaction functionality and experience.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the embodiments or the description of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a multi-modal sensing device for human-machine interaction according to an embodiment of the present invention;
FIG. 2 is a schematic view of the embodiment of FIG. 1 from another perspective;
FIG. 3 is a schematic view of a portion of the flexible conductor of the embodiment of FIG. 1;
FIG. 4 is a flowchart of a multi-modal data processing method for human-machine interaction according to an embodiment of the present invention;
FIG. 5 is a flowchart of another embodiment of the multi-modal data processing method for human-machine interaction according to the present invention.
Reference numerals illustrate:
10. a wearable body; 11. a wrist portion; 12. a palm portion; 13. a finger portion; 131. a fingertip; 20. a multi-frequency electrical impedance imaging sensor array; 21. an electrode; 22. a first external circuit interface; 30. a flexible strain sensor array; 31. a fingertip sensor; 32. a flexible conductor; 321. a tube body; 322. a liquid metal; 33. a second external circuit interface.
The objects, functional features, and advantages of the present invention are further described below with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear) are involved in the embodiments of the present invention, they are merely used to explain the relative positional relationships, movements, and the like of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indication changes accordingly.
In addition, if descriptions of "first", "second", etc. appear in the embodiments of the present invention, they are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. The meaning of "and/or" throughout covers three parallel schemes; for example, "A and/or B" includes scheme A, scheme B, or the scheme in which A and B are both satisfied. Moreover, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by a person skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered not to exist and not within the scope of protection claimed by the present invention.
The invention provides a multi-mode sensing device for man-machine interaction.
In an embodiment of the present invention, as shown in fig. 1 to 3, the multi-modal sensing apparatus for man-machine interaction includes a wearable body 10, a multi-frequency electrical impedance imaging sensor array 20, and a flexible strain sensor array 30.
In particular, the wearable body 10 is configured to be worn on a user's hand. In this embodiment, the wearable body 10 is provided in the form of a glove designed to fit the user's hand for ease of wearing and use. The glove material may be soft and comfortable, such as a woven fabric, rubber, or nylon, to ensure the wearer's comfort and freedom of movement. It will be appreciated that the glove form of the wearable body 10 provides a better fit and more accurate perception of hand movements and finger-joint deformations. This design makes the use of the multi-modal sensing device more natural and convenient for the user.
Of course, the design of the present application is not limited thereto, and in other embodiments, the wearable body may also be provided in a wristband form, a bracelet form, or the like.
Further, the wearable body 10 includes a wrist portion 11, a palm portion 12, and a finger portion 13, which together substantially follow the structure of the human hand. Specifically, the wrist portion 11 is the bottom-most part of the wearable body 10 and is worn at the user's wrist. The palm portion 12 is located above the wrist portion 11 and follows the shape and structure of the human palm. The finger portion 13 is located at the end of the palm portion 12 and follows the structure and length of the human fingers. The finger portion 13 also has a fingertip 131, i.e., the end of the finger.
In particular, a multi-frequency electrical impedance imaging (mfEIT) sensor array 20 is provided on the wearable body 10 and is configured to acquire electrical impedance signals caused by user gesture changes. Specifically, EIT is an imaging technique that uses the electrical characteristics of biological tissues and organs, and their patterns of variation, to extract biomedical information related to the physiological and pathological state of the human body.
Further, the multi-frequency electrical impedance imaging sensor array 20 generates electrical signals of different frequencies with a multi-frequency generator and injects these signals into the user's hand tissue through the array of electrodes 21. The impedance response of the hand tissue to the current at each frequency is then measured to collect the electrical impedance signal caused by the gesture change.
In this embodiment, the multi-frequency EIT sensor array 20 comprises two layers of electrodes 21, each layer comprising 8 electrodes 21. That is, the multi-frequency EIT sensor array 20 of this embodiment is a dual-layer multi-frequency EIT sensor array with 16 electrodes 21 in total. Each electrode layer can independently apply current signals at different frequencies; by driving each layer separately, multi-frequency excitation of the hand is achieved. By simultaneously measuring the voltage difference or current between one electrode layer and the other, the impedance response of the hand tissue to currents of different frequencies can be measured, and the electrical impedance response of each layer also provides information about conductivity changes in the hand.
It will be appreciated that, with the two-layer electrode structure and 8 electrodes 21 per layer, the multi-frequency EIT sensor array 20 can provide richer and more accurate electrical impedance measurements, helping to achieve finer gesture recognition. Independent control and measurement of each electrode layer enables the sensor to acquire impedance information of the hand at different frequencies simultaneously, enhancing its performance and flexibility.
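To make the drive-and-measure scheme concrete, the following sketch enumerates an adjacent-pattern measurement sweep for one 8-electrode layer. The adjacent protocol and the frequency list are illustrative assumptions; the patent does not fix a specific excitation protocol.

```python
# Illustrative enumeration of an adjacent drive/measure EIT protocol for one
# 8-electrode layer. Protocol and frequency list are assumptions, not values
# taken from the patent.

N_ELECTRODES = 8
FREQUENCIES_HZ = [10e3, 50e3, 100e3, 500e3, 1e6]  # within the stated 10 kHz-1 MHz range

def adjacent_patterns(n=N_ELECTRODES):
    """Yield (drive_pair, measure_pair) tuples for the adjacent protocol."""
    for d in range(n):
        drive = (d, (d + 1) % n)
        for m in range(n):
            measure = (m, (m + 1) % n)
            if set(drive) & set(measure):
                continue  # skip pairs that share an electrode with the drive pair
            yield drive, measure

patterns = list(adjacent_patterns())
print(len(patterns))  # 40 boundary voltages per layer per frequency (8 x 5)
print(len(patterns) * len(FREQUENCIES_HZ) * 2)  # 400 values per two-layer frame
```

Under these assumptions, each layer yields 8 x 5 = 40 boundary voltages per frequency, so one two-layer sweep over five frequencies produces a 400-value multi-frequency frame.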
Further, the multi-frequency electrical impedance imaging sensor array 20 is disposed on the wrist portion 11; the 8 electrodes 21 of each electrode layer are arranged in a ring around the wrist portion 11, and the electrodes 21 are exposed on the inner side of the wrist portion 11.
In particular, with the ring-shaped arrangement of electrodes 21 at the wrist portion 11, the multi-frequency electrical impedance imaging sensor array 20 can capture changes in the electrical impedance of the tissue around the wrist. Such a design provides a more comprehensive and accurate electrical impedance signal.
Further, the multi-frequency EIT sensor array 20 also comprises a first external circuit interface 22, the first external circuit interface 22 being formed at the wrist portion 11 of the wearable body 10.
Specifically, the first external circuit interface 22 is the part of the multi-frequency electrical impedance imaging sensor array 20 used for connection and communication with external circuits or devices. It provides an interface for transmitting electrical signals and exchanging data between the sensor and other devices. The first external circuit interface 22 is provided on the surface or side of the wrist portion 11 of the wearable body 10 for connection with external devices. This location is chosen to make it easy for the user to connect to the sensor while avoiding interference with hand movements and gestures.
Alternatively, the interface type of the first external circuit interface 22 may be a wired interface (e.g., USB, serial) or a wireless interface (e.g., Bluetooth, Wi-Fi).
Further, the multi-frequency EIT sensor array 20 may also be configured to acquire bioelectric signals, including blood pulse wave signals and skin conductivity. The blood pulse wave signal is generated by blood pulsation driven by the heartbeat and reflects the rate and rhythm of the heart. Skin conductivity is the skin's ability to conduct electrical current; it is affected by factors such as sweat secretion and skin temperature, and can therefore be used to assess the body's arousal state, mood changes, and stress responses. By measuring skin conductivity, the sensor can track changes in biological indices such as emotional arousal and the intensity of the physical stress response.
Alternatively, a polyethylene terephthalate (PET) sheet may be used as the substrate of the multi-frequency EIT sensor array 20, and its operating frequency may be configured between 10 kHz and 1 MHz.
It should be appreciated that the multi-frequency EIT sensor array 20 can be controlled to switch between two modes of operation, namely a multi-frequency three-dimensional tomographic mode and a physiological measurement mode, through a specific switching strategy and measurement sequence. In particular, in the multi-frequency three-dimensional tomographic mode, the multi-frequency EIT sensor array 20 is used to acquire electrical impedance signals caused by gesture changes; in the physiological measurement mode, it is used to acquire blood pulse wave signals and skin conductivity.
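The patent does not specify the switching strategy or measurement sequence; as one hedged illustration, the sketch below interleaves tomographic frames with physiological sampling slots at an assumed 10:2 duty cycle.

```python
from enum import Enum, auto
import itertools

class Mode(Enum):
    TOMOGRAPHY = auto()  # multi-frequency three-dimensional tomographic mode
    PHYSIOLOGY = auto()  # blood pulse wave + skin conductivity sampling

def mode_schedule(tomo_slots=10, physio_slots=2):
    """Endlessly cycle between the two modes; the 10:2 duty cycle is assumed."""
    while True:
        for _ in range(tomo_slots):
            yield Mode.TOMOGRAPHY
        for _ in range(physio_slots):
            yield Mode.PHYSIOLOGY

# First 14 slots of the interleaved schedule:
print([m.name for m in itertools.islice(mode_schedule(), 14)])
```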
In particular, a flexible strain sensor is a sensor capable of sensing and measuring the deformation or strain of an object. It is made of a soft, bendable, and stretchable material and has excellent flexibility and deformability. When the sensor is subjected to an external force or the object deforms, the material inside the sensor strains accordingly, changing the sensor's electrical, mechanical, or optical characteristics and producing a corresponding electrical signal or output.
In the present embodiment, the flexible strain sensor array 30 includes a fingertip sensor 31, the fingertip sensor 31 being provided at a fingertip 131 of the wearable body 10. In some embodiments, the fingertip sensor 31 may be integrated into the fabric of the wearable body so that sensing is achieved without the sensor being exposed or lying directly against the skin. In this way, the fingertip sensor 31 can be concealed to improve the user's wearing experience.
In some embodiments, the fingertip sensor 31 may be exposed on the inside of the fingertip 131. With this arrangement, the fingertip sensor 31 can directly sense and record the deformation and motion of the user's finger, capturing the degree of bending and deformation of the finger joints and outputting a corresponding electrical signal.
Further, each fingertip 131 is provided with two fingertip sensors 31, the two fingertip sensors 31 being spaced apart along the length of the finger portion 13. This design captures the deformation and motion of each finger more accurately, providing finer perception and measurement of finger movement.
It should be noted that the fingertip of every finger portion 13 carries two fingertip sensors 31.
Further, the flexible strain sensor array 30 also comprises a flexible conductor 32 and a second external circuit interface 33. The second external circuit interface 33 is formed at the wrist portion 11 of the wearable body 10, and the fingertip sensor 31 is connected to the second external circuit interface 33 through the flexible conductor 32.
Here, the flexible conductor 32 is a conductive element that accommodates the shape of the wearable body 10 and the movements of the hand while effectively transmitting electrical signals from the fingertip sensor 31 to the second external circuit interface 33.
Specifically, the flexible conductor 32 is composed of a tube body 321 and a liquid metal 322 flowing inside the tube body 321. The tube body 321 is flexible and bendable to accommodate changes in the user's gestures. The liquid metal 322 has good electrical conductivity and deformability and is well suited to transmitting electrical signals inside the flexible tube body 321. The fluid nature of the liquid metal 322 allows the flexible conductor 32 to maintain a continuous electrical connection while bending and stretching, giving the conductor a high degree of freedom in shape.
Alternatively, polydimethylsiloxane (PDMS) may be used as the substrate of the flexible strain sensor array 30, and eutectic gallium-indium (EGaIn) may be used as the liquid metal 322.
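As background on how such liquid-metal elements behave electrically: an incompressible liquid-metal channel stretched by a strain ε lengthens while its cross-section shrinks, so its resistance rises roughly as R(ε) = R0 * (1 + ε)². Whether the present device exploits this relation for sensing is not stated in the patent; the sketch below simply inverts it to recover strain from a resistance reading, assuming the unstrained baseline R0 is known from calibration.

```python
import math

def strain_from_resistance(r_measured, r0):
    """Invert R = R0 * (1 + strain)**2 for a volume-conserving
    (incompressible) liquid-metal channel; r0 is a calibration value."""
    if r_measured < r0:
        return 0.0  # below baseline: treat as unstrained / measurement noise
    return math.sqrt(r_measured / r0) - 1.0

# Example: a channel with a 2.00-ohm baseline reading 2.42 ohm -> 10% strain
print(round(strain_from_resistance(2.42, 2.00), 3))  # 0.1
```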
The second external circuit interface 33 is located at the wrist portion 11 of the wearable body 10 and connects the fingertip sensor 31 to external circuits. It receives and forwards the sensor signals so that the data of the fingertip sensor 31 can be transferred to an external device or system for further processing and analysis.
It will be appreciated that, through the combination of the tube body 321 and the liquid metal 322, the fingertip sensor 31 can establish an electrical connection with the second external circuit interface 33 via the flexible conductor 32. This design allows sensor signals to be transmitted efficiently to external circuitry and processed and integrated at the wrist portion 11 of the wearable body 10. The properties of the tube body 321 and the liquid metal 322 enable the sensor to maintain a continuous electrical connection during hand movements, with good flexibility and reliability.
Of course, the design of the present application is not limited thereto; in other embodiments, the flexible conductor 32 may instead be a wire, a flexible PCB, or the like.
As shown in FIG. 4, based on the multi-modal sensing device for human-machine interaction provided by the above embodiments, the technical solution of the present application further provides a multi-modal data processing method for human-machine interaction, comprising the following steps:
s10, fusing the electrical impedance signals and the electrical signals acquired by the multi-mode sensing device, and inputting the electrical signals into a pre-trained neural network to reconstruct gestures of a user.
In particular, the respective features may be extracted from the electrical impedance signal and the electrical signal and then fused. Signal processing and feature extraction techniques such as time-frequency analysis, wavelet transforms, and Fourier transforms may be used to extract features of the two signals, which are then combined or concatenated into a comprehensive feature vector representing the information of the multi-modal signal. Alternatively, a deep learning or machine learning model may be used to fuse the electrical impedance signals with the electrical signals: the two signals serve as inputs to the model, and fusion is achieved by constructing a multi-modal model, such as a multi-layer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), or attention mechanism. By training the model, it learns useful features and representations from the multi-modal signal, enabling higher-level fusion.
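The sketch below illustrates the concatenation approach; the sampling rate, window length, band edges, and channel counts are illustrative assumptions, not values from the patent.

```python
import numpy as np

def band_energies(window, fs, bands=((0.5, 4.0), (4.0, 15.0), (15.0, 45.0))):
    """Spectral energy in a few (assumed) frequency bands via the FFT."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

def fuse_features(eit_frame, strain_window, fs_strain=100.0):
    """Concatenate a raw multi-frequency EIT frame with per-channel time- and
    frequency-domain strain features into one comprehensive feature vector."""
    feats = [np.asarray(eit_frame, dtype=float).ravel()]
    for ch in np.atleast_2d(strain_window):  # one row per strain channel
        feats.append([ch.mean(), ch.std(), ch.max() - ch.min()])  # time-domain
        feats.append(band_energies(ch, fs_strain))                # frequency-domain
    return np.concatenate([np.ravel(f) for f in feats])

# Example: a 400-value EIT frame plus 10 strain channels of 100 samples each
vec = fuse_features(np.random.rand(400), np.random.rand(10, 100))
print(vec.shape)  # (460,) = 400 + 10 channels x 6 features
```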
Further, the fused multi-modal signal is input into a pre-trained neural network. This neural network may be a deep learning model, such as a convolutional neural network (CNN) or a recurrent neural network (RNN). Through the trained neural network, the multi-modal signal can be mapped to a representation of the user's gesture, achieving gesture reconstruction and recognition.
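A hedged sketch of such a network follows, assuming the 460-dimensional fused vector from the previous example and a hand pose expressed as 21 joint angles; both dimensions are illustrative, and the patent does not specify an architecture.

```python
import torch
import torch.nn as nn

class GestureMLP(nn.Module):
    """Maps a fused multi-modal feature vector to hand joint angles
    (a regression formulation; all dimensions are assumptions)."""
    def __init__(self, in_dim=460, n_joints=21):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, n_joints),  # one predicted angle per joint
        )

    def forward(self, x):
        return self.net(x)

model = GestureMLP()
fused = torch.randn(32, 460)                 # a batch of fused feature vectors
joint_angles = model(fused)                  # (32, 21) predicted joint angles
target = torch.zeros(32, 21)                 # placeholder ground-truth poses
loss = nn.functional.mse_loss(joint_angles, target)  # training objective sketch
```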
In this way, accurate reconstruction of user gestures can be achieved from the multi-modal signals.
S20: acquiring physiological parameters of the user from the bioelectric signals acquired by the multi-modal sensing device, and identifying the user's health state from the physiological parameters.
In particular, the bioelectric signals acquired by the multi-modal sensing device, such as blood pulse wave signals and skin conductivity, can be used to compute physiological parameters of the user, including but not limited to heart rate, blood pressure, and skin conductivity. The blood pulse wave signal can be used to assess heart rate and cardiovascular function, while skin conductivity combined with other physiological parameters can be used for emotion recognition and analysis.
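As a simple example of this step, heart rate can be estimated from a pulse-wave segment by peak detection. The sampling rate and minimum peak spacing below are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(pulse_wave, fs=100.0):
    """Estimate heart rate by peak counting; assumes successive pulse peaks
    are at least 0.4 s apart (i.e. heart rate below ~150 bpm)."""
    peaks, _ = find_peaks(pulse_wave, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return float("nan")  # not enough peaks in the segment
    mean_interval_s = np.mean(np.diff(peaks)) / fs
    return 60.0 / mean_interval_s

# Example: a synthetic 1.2 Hz pulse (72 bpm) sampled at 100 Hz for 10 s
t = np.arange(0.0, 10.0, 0.01)
print(round(heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t)), 1))  # ~72.0
```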
Because the user's health state is specifically associated with these physiological parameters, a dedicated deep learning model can be trained on labeled initial data to derive the user's health state from the physiological parameters.
S30: inputting the physiological parameters into a pre-trained neural network to identify the user's emotional state.
In real life, the user's emotional state is strongly correlated with physiological parameters, so a neural network model, such as a deep-learning-based emotion classifier, can be trained with training and test data sets. Based on the trained model, the user's emotional state can be analyzed and identified to understand the user's emotions, emotional tendencies, or psychological state.
It can be understood that, by processing the multi-modal signals acquired by the multi-modal sensing device with this data processing method, the user's gestures can be accurately reconstructed while the user's emotional and health states are obtained at the same time, realizing multi-modal data output and interaction and enhancing the human-machine interaction experience.
As shown in FIG. 5, in some embodiments, the multi-modal data processing method of the present application further includes the following step:
s40, fusing the electrical impedance signals, the electrical signals and the physiological parameters acquired by the multi-modal sensing device, and inputting the electrical impedance signals, the electrical signals and the physiological parameters into a pre-trained deep learning network to realize user behavior understanding.
In particular, features may be extracted from the signals of each modality and fused, or a deep learning or machine learning model may be used to fuse the multi-modal data. The data of each modality serve as inputs to the model, and fusion is achieved by constructing a multi-modal model, such as a multi-layer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), or attention mechanism. By training the model, it learns useful features and representations from the multi-modal data, enabling higher-level fusion.
Further, the fused multi-modal signal is input into a pre-trained deep learning network, which may likewise be a multi-layer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), or attention-based model. With this pre-trained network, user behavior can be understood and recognized: for example, the user's gestures and hand movements can be identified from the electrical impedance signals and electrical signals, while the user's physiological state and emotions can be identified from the physiological parameters. By analyzing and integrating this information, the user's behavioral intent, emotional state, psychological needs, and the like can be inferred.
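A late-fusion sketch follows; the per-modality encoder widths, input dimensions, and number of behavior classes are all illustrative assumptions, since the patent does not specify an architecture.

```python
import torch
import torch.nn as nn

class BehaviorNet(nn.Module):
    """Late fusion: one encoder per modality, concatenated and classified
    into (assumed) behavior categories. All dimensions are assumptions."""
    def __init__(self, eit_dim=400, strain_dim=20, physio_dim=4, n_behaviors=8):
        super().__init__()
        self.enc_eit = nn.Sequential(nn.Linear(eit_dim, 64), nn.ReLU())
        self.enc_strain = nn.Sequential(nn.Linear(strain_dim, 32), nn.ReLU())
        self.enc_physio = nn.Sequential(nn.Linear(physio_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 32 + 16, n_behaviors)

    def forward(self, eit, strain, physio):
        z = torch.cat([self.enc_eit(eit), self.enc_strain(strain),
                       self.enc_physio(physio)], dim=-1)
        return self.head(z)  # behavior-class logits

model = BehaviorNet()
logits = model(torch.randn(1, 400), torch.randn(1, 20), torch.randn(1, 4))
behavior = logits.argmax(dim=-1)  # index of the predicted behavior class
```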
This method aims to comprehensively analyze and understand user behavior by fusing the signals and physiological parameters acquired by the multi-modal sensing device and using a pre-trained deep learning network. Such behavior understanding can be applied to intelligent assistance systems, personalized recommendation, human-machine interaction, and other fields, providing a more intelligent, personalized, and adaptive user experience.
It can be understood that the multi-modal sensing device for human-machine interaction of the present technical solution uses the multi-frequency electrical impedance imaging sensor array 20 to collect electrical impedance signals and bioelectric signals caused by the user's gesture changes, and uses the flexible strain sensor array 30 to collect electrical signals caused by deformation of the user's finger joints, so that multi-modal signals can be collected during human-machine interaction. Based on the collected multi-modal signals, the data processing method of the invention can accurately reconstruct the user's gestures and also realize health detection and emotion monitoring of the user, providing multi-modal signal output. Therefore, compared with a traditional human-machine interaction system using single-modality signals, the multi-modal sensing device and data processing method can collect and process multi-modal signals, improving gesture-recognition accuracy, enabling richer data output, and enhancing human-machine interaction functionality and experience.
It should be noted that, since the multi-modal data processing method of the present application adopts all the technical solutions of the multi-modal sensing device embodiments above, it has at least all the beneficial effects brought by those technical solutions, which are not repeated here.
The foregoing description covers only preferred embodiments of the present invention and does not limit its scope; all equivalent structural changes made using the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the invention.

Claims (10)

1. A multi-modal sensing apparatus for human-machine interaction, comprising:
a wearable body configured to be worn on a user's hand;
a multi-frequency electrical impedance imaging sensor array disposed on the wearable body, the multi-frequency electrical impedance imaging sensor array configured to acquire electrical impedance signals resulting from user gesture changes; and
a flexible strain sensor array disposed on the wearable body and configured to acquire electrical signals caused by deformation of the user's finger joints.
2. The multi-modal sensing apparatus for human-machine interaction of claim 1, wherein the multi-frequency electrical impedance imaging sensor array has at least two electrode layers, each electrode layer comprising 8 electrodes.
3. The multi-modal sensing apparatus for human-machine interaction of claim 2, wherein the wearable body includes a wrist portion, the wrist portion being annular;
the multi-frequency electrical impedance imaging sensor array is disposed on the wrist portion, the 8 electrodes of each electrode layer are arranged in a ring around the wrist portion, and the electrodes are exposed on the inner side of the wrist portion.
4. The multi-modal sensing apparatus for human-machine interaction of claim 3, wherein the multi-frequency electrical impedance imaging sensor array further comprises a first external circuit interface formed on the wrist portion.
5. The multi-modal sensing apparatus for human-machine interaction of claim 1, wherein the multi-frequency electrical impedance imaging sensor array is further configured to acquire bioelectric signals including blood pulse wave signals and skin conductivity.
6. The multi-modal sensing apparatus for human-machine interaction of claim 1, wherein the wearable body comprises a finger portion having a fingertip;
the flexible strain sensor array includes a fingertip sensor disposed at the fingertip.
7. The multi-modal sensing apparatus for human-machine interaction of claim 6, wherein the flexible strain sensor array further comprises a flexible conductor and a second external circuit interface formed on the wrist portion of the wearable body, the fingertip sensor being connected to the second external circuit interface by the flexible conductor.
8. The multi-modal sensing apparatus for human-machine interaction of claim 7, wherein the flexible conductor comprises a tube body and a liquid metal flowing inside the tube body.
9. A multi-modal data processing method for human-machine interaction, using the multi-modal sensing apparatus according to any one of claims 1 to 8 for signal acquisition, characterized in that the data processing method comprises:
fusing the electrical impedance signals and the electrical signals acquired by the multi-modal sensing apparatus, and inputting them into a pre-trained neural network to reconstruct user gestures;
acquiring physiological parameters of the user from the bioelectric signals acquired by the multi-modal sensing apparatus, and identifying the user's health state from the physiological parameters; and
inputting the physiological parameters into a pre-trained neural network to identify the user's emotional state.
10. The multi-modal data processing method for human-machine interaction of claim 9, further comprising:
fusing the electrical impedance signals, the electrical signals, and the physiological parameters acquired by the multi-modal sensing apparatus, and inputting them into a pre-trained deep learning network to achieve user behavior understanding.
Priority Applications (1)

Application number: CN202311361789.9A; priority date: 2023-10-19; filing date: 2023-10-19; title: Multi-modal sensing device for human-machine interaction and data processing method

Publications (1)

Publication number: CN117606540A; publication date: 2024-02-27; status: Pending

Family

ID=89950416

Country Status (1)

CN: CN117606540A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination