CN113221671A - Environment-independent action identification method and system based on gradient and wireless signal - Google Patents


Info

Publication number
CN113221671A
CN113221671A
Authority
CN
China
Prior art keywords
environment
gradient
motion
independent
recognizer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110437246.5A
Other languages
Chinese (zh)
Inventor
刘建伟
何映晖
韩劲松
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202110437246.5A priority Critical patent/CN113221671A/en
Publication of CN113221671A publication Critical patent/CN113221671A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an environment-independent motion recognition method and system based on gradients and wireless signals. Wireless signals are used to collect user action signal samples across multiple environments, and an environment-recognition deep neural network is trained on them. The trained environment-recognition network is then used to compute the gradient of each input signal sample; the gradient is multiplied by a weight and added to the original signal sample to reduce the influence of environmental interference. The processed signal samples can be used to train an environment-independent motion classifier to perform motion recognition. The invention also provides a motion recognition system comprising a signal acquisition module, an environment recognizer, a data processing module, and a motion recognizer. The method trains once and can then be used permanently: after an environment recognizer and a motion classifier have been trained on signal samples collected in the environments where motion recognition is needed, the trained environment recognizer and motion classifier can be used indefinitely to recognize new actions.

Description

Environment-independent action identification method and system based on gradient and wireless signal
Technical Field
The invention belongs to the field of motion recognition, and particularly relates to an environment-independent motion recognition method that captures user motion information with wireless signals and removes environment-dependent components from signal samples using the gradient array of an environment recognizer.
Background
Motion recognition technology is the driving core of many human-computer interaction applications. For example, in virtual/augmented reality the user's action must be determined before the next operation can be executed; in a smart home environment the user's command can only be inferred from the user's action; and a fall detection system must raise an alarm promptly when the user falls. Conventional motion recognition methods, such as camera-based, wearable-device-based, and sonar/radar-based methods, each have their own drawbacks, such as leakage of visual privacy, the inconvenience of wearing a device, and a narrow sensing range. To address these deficiencies, wireless signals (e.g., WiFi signals) are used to capture and perceive user motion information.
However, conventional wireless-signal-based motion recognition is affected by environmental changes: wireless signal samples collected across multiple environments cannot be used for high-accuracy motion recognition. To address this problem, adversarial networks and limb-velocity features have been used to extract environment-independent motion features. However, these earlier solutions have their own drawbacks: adversarial learning achieves limited accuracy (below 90%), and limb-velocity features cannot be used for user gesture recognition. There is therefore an urgent need for a high-accuracy wireless-signal-based technique that can be used for both motion recognition and gesture recognition.
With the development of deep learning and adversarial learning, adversarial examples are used to perturb the recognition result of a deep network. It is therefore possible to design an environment recognizer and suppress the influence of the environment by means of the gradients of that recognizer.
Disclosure of Invention
In order to solve the problems in the prior art, the invention multiplies the gradient array of an environment recognizer by a coefficient and adds the result to the original signal sample to suppress the influence of environmental change, thereby providing a wireless-signal-based, environment-independent motion recognition method built on the gradient array of an environment recognizer.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
an environment-independent motion recognition method based on gradient and wireless signals comprises two stages:
a training stage:
Collect wireless signal samples containing multiple actions in multiple environments, and label each signal sample with its environment label and action label.
Train an environment recognizer using the collected signal samples and environment labels.
For each signal sample, compute a gradient array using the environment recognizer, multiply the gradient array by a coefficient, and add the result to the original signal sample to obtain an environment-independent signal sample.
Train a motion classifier using the environment-independent signal samples and the corresponding action labels.
a recognition stage:
Process newly acquired wireless signal samples into environment-independent signal samples using the trained environment recognizer, then input them into the trained motion classifier to achieve environment-independent motion recognition.
Further, when collecting multi-action signal samples in multiple environments, every action should be collected in every environment; that is, each environment contains samples of all actions.
Further, the environment recognizer is a multi-layer convolutional neural network.
Further, the gradient array is a gradient sign matrix obtained by back propagation through the environment recognizer, and all elements in the matrix are 0 or 1.
Further, the value of the coefficient multiplied by the gradient array is between 0 and 1.
Further, the action classifier is a support vector machine or a neural network classifier.
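To make the two-stage workflow concrete, the following is a minimal, self-contained numeric sketch. It is an illustration under heavy simplifications, not the patented implementation: a linear least-squares model stands in for the CNN environment recognizer (the gradient of a linear model with respect to its input is simply its weight vector), a nearest-centroid rule stands in for the SVM or neural-network motion classifier, and all data, names, and the coefficient value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each sample = action pattern + environment offset + noise.
actions = np.array([[1.0, 0.0], [0.0, 1.0]])   # two action patterns
env_offsets = [0.5, -0.5]                       # two environments
X, y_act, y_env = [], [], []
for a_id, a_vec in enumerate(actions):
    for e_id, e_off in enumerate(env_offsets):
        for _ in range(20):
            X.append(a_vec + e_off + rng.normal(0.0, 0.05, size=2))
            y_act.append(a_id)
            y_env.append(e_id)
X, y_act, y_env = np.array(X), np.array(y_act), np.array(y_env)

# "Environment recognizer": linear least-squares fit to environment labels
# (a stand-in for the deep network trained in the second step).
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, y_env, rcond=None)

# Third step: add the weighted gradient-sign array to every sample.
# For a linear model the input gradient is just the weight vector w[:2].
a_coef = 0.25                       # empirical coefficient in (0, 1)
Xp = X + a_coef * np.sign(w[:2])    # "environment-independent" samples

# "Motion classifier": nearest class centroid on the processed samples.
centroids = np.array([Xp[y_act == k].mean(axis=0) for k in range(2)])

def classify(x):
    """Return the action label whose centroid is nearest to x."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
```

With this toy setup, `classify(Xp[0])` recovers the action label of the first sample; a real system would replace the linear model and centroid rule with the trained convolutional network and SVM described in the detailed embodiment.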
Based on the same idea, the invention also provides a motion recognition system, which comprises:
the signal acquisition module is used for acquiring a wireless signal sample;
and the environment identifier is used for solving the gradient array corresponding to the wireless signal sample through back propagation.
And the data processing module is used for multiplying the gradient array by a coefficient and then adding the multiplied gradient array to the original wireless signal sample to obtain an environment-independent wireless signal sample.
And the action recognizer is used for carrying out action recognition on the wireless signal samples which are irrelevant to the environment.
Wherein the environment recognizer is a multilayer convolutional neural network. The motion classifier is preferably a support vector machine or a neural network classifier.
Compared with existing wireless-signal-based motion recognition techniques, the invention achieves high-accuracy motion recognition across multiple environments. The invention trains an environment recognizer on wireless signal samples collected in multiple environments, obtains a gradient array by back propagation through the environment recognizer, and uses the weighted gradient array to dilute the environmental interference in the original data, so that the processed data is less susceptible to environmental change. Finally, a motion classifier trained on the environment-independent signal samples performs environment-independent motion recognition on new signal samples.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of wireless signal sample acquisition;
FIG. 3 is a schematic diagram of an environment classifier;
FIG. 4 is a schematic diagram of environment-independent processing;
FIG. 5 is a graph of experimental results;
Detailed Description
Existing wireless-signal-based motion recognition systems cannot achieve high-accuracy recognition across multiple environments. The invention addresses this by using the gradient array of an environment recognizer to suppress the influence of environmental change and thereby achieve high-accuracy motion recognition.
The method of the invention is further illustrated with reference to the accompanying drawings and specific examples:
a method for recognizing actions based on gradient and wireless signal independent of environment is disclosed, whose brief flow is shown in FIG. 1, and includes two stages:
the training phase comprises the following steps:
step 1) collecting signal samples containing all actions to be recognized in a plurality of environments, taking WiFi signals as an example, a schematic diagram of collecting signal samples in a laboratory environment is shown in fig. 2. And marking each signal sample with a corresponding environment label and an action label.
The signal samples of the multi-action collected in a plurality of environments are collected, and each action is used for collecting data in each environment. For example, in a WiFi signal-based motion recognition system deployed in a laboratory, home, and classroom environment, if three motions "walking", "standing", and "falling" need to be recognized, a user needs to collect samples of walking, standing, and falling signals in the laboratory, home, and classroom environments, respectively.
Preferably, the environment tag and the behavior tag are integers starting from 0. For example, the environment labels corresponding to the three environments of the laboratory, the family and the classroom can be 0, 1 and 2. The behavior labels corresponding to walking, standing and falling are respectively 0, 1 and 2.
Step 2) Train a deep-neural-network-based environment recognizer using the signal samples and environment labels. For example, a deep neural network with three convolutional layers and two fully connected layers is used as the environment recognizer; its structure is shown in FIG. 3. Training uses a loss function over the environment labels, for example the L1 loss:
$$L = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - y_i' \right|$$
where $n$ is the number of signal samples, and $y_i$ and $y_i'$ are the true environment label of the $i$-th signal sample and the label predicted by the environment recognizer, respectively.
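The L1 loss used for training the environment recognizer is straightforward to compute; as a sanity check, the sketch below evaluates it with numpy, with all prediction values invented for illustration:

```python
import numpy as np

def l1_loss(y_true, y_pred):
    """Mean absolute error between the true environment labels y_i
    and the environment recognizer's predicted labels y_i'."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Labels 0/1/2 for laboratory, home, classroom; predictions are invented.
loss = l1_loss([0, 1, 2, 1], [0.2, 1.0, 1.5, 1.1])   # ≈ 0.2
```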
Step 3) Use the environment recognizer trained in step 2) to compute, by back propagation, a gradient array for each signal sample; multiply the gradient array by a coefficient and add the result to the original signal sample to obtain an environment-independent signal sample. The processing is shown schematically in FIG. 4. Denoting the signal sample by $x$, the coefficient by $a$, and the environment recognizer by $f(\cdot)$, the environment-independent processing can be written as
$$x' = x + a \cdot \mathrm{sign}\!\left(\nabla_x f(x)\right)$$
where $\mathrm{sign}(\nabla_x f(x))$ denotes the gradient array, i.e. the gradient sign matrix obtained by back propagation through the environment recognizer, all elements of which are 0 or 1. The coefficient $a$ by which the gradient array is multiplied is an empirically chosen value, typically between 0 and 1.
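The transform of step 3) can be sketched numerically. For illustration, the back-propagated gradient is replaced by a central-difference numerical gradient and the environment recognizer by a toy differentiable score; these stand-ins, the sample values, and the coefficient are assumptions of the example, not part of the patent:

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Central-difference gradient of f at x: a stand-in for the
    gradient obtained by back propagation through the recognizer."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2.0 * eps)
    return g

def env_independent(x, f, a=0.05):
    """x' = x + a * sign(grad_x f(x)): the weighted gradient-sign
    array added to the original signal sample."""
    return x + a * np.sign(numerical_gradient(f, x))

f = lambda x: float(np.sum(x ** 2))      # toy "environment score"
x = np.array([1.0, -2.0, 0.5])
x_prime = env_independent(x, f, a=0.05)  # ≈ [1.05, -2.05, 0.55]
```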
Step 4) Train a motion classifier using the environment-independent signal samples. The motion classifier may be any machine learning classifier, such as a support vector machine. Denoting the motion classifier by $R(\cdot)$, the action label predicted for a new signal sample $x'$ can be written as
$$\hat{y} = R\!\left(x' + a \cdot \mathrm{sign}\!\left(\nabla_{x'} f(x')\right)\right)$$
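The composition of the environment-independent transform and the motion classifier can be expressed directly. In the sketch below the classifier R is a placeholder (a simple threshold rule) and the gradient-sign array and sample values are invented for illustration:

```python
import numpy as np

def predict(x_new, grad_sign, R, a=0.1):
    """Predict the action label for a new sample: first apply the
    environment-independent transform x' = x + a * grad_sign, then
    apply the trained motion classifier R()."""
    return R(x_new + a * grad_sign)

# Placeholder classifier: label 1 if the sample mean is positive.
R = lambda x: int(np.mean(x) > 0)

grad_sign = np.array([1.0, -1.0, 1.0])   # from the environment recognizer
label = predict(np.array([0.2, -0.1, 0.4]), grad_sign, R, a=0.1)
```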
The recognition stage:
When a new signal sample needs to be recognized, it is first processed into an environment-independent signal sample via step 3), and the processed sample is then input into the trained motion classifier to obtain the action label of the new action.
In this embodiment, a volunteer was asked to collect WiFi signal data for five actions (squat, turn, wave, kick, and bend) in three different environments, with 50 signal samples collected per action. A deep-convolutional-neural-network-based environment recognizer was then trained using the cross-entropy loss and the signal samples, and all signal samples were processed into environment-independent samples. 80% of the environment-independent signal samples were used to train a support-vector-machine-based motion classifier, and the remaining 20% were used for motion recognition testing. As shown in FIG. 5, for different coefficients a, the motion recognition accuracy in all three environments exceeds 90%, illustrating the high accuracy of the method. The method trains once and can then be used permanently: after signal samples are collected in the environments where motion recognition is needed and the environment recognizer and motion classifier are trained, the trained environment recognizer and motion classifier can be used indefinitely to recognize new signal samples.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. It is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications may be made without departing from the scope of the invention.

Claims (8)

1. An environment-independent motion recognition method based on gradient and wireless signals is characterized by comprising two stages:
a training stage:
collecting wireless signal samples containing different actions in different environments, and marking each signal sample with a corresponding environment label and an action label;
training an environment recognizer based on a deep neural network by using all signal samples and environment labels;
and (3) calculating a gradient array corresponding to each signal sample by utilizing the back propagation of the trained environment recognizer, multiplying the gradient array by a coefficient, and adding the multiplied gradient array to the original signal sample to obtain the environment-independent signal sample.
A motion classifier is trained using environment-independent signal samples and motion labels.
And (3) identification:
and processing the acquired unknown wireless signal samples into environment-independent signal samples by using a trained environment recognizer, and inputting the environment-independent signal samples into a trained motion classifier to realize environment-independent motion recognition.
2. The environment-independent motion recognition method based on gradient and wireless signals of claim 1, wherein the environment recognizer is a multi-layer convolutional neural network.
3. The environment-independent motion recognition method based on gradient and wireless signals of claim 1, wherein the gradient array is a gradient sign matrix obtained by back propagation through the environment recognizer, and all elements in the matrix are 0 or 1.
4. The environment-independent motion recognition method based on gradient and wireless signals of claim 1, wherein the coefficient by which the gradient array is multiplied is between 0 and 1.
5. The environment-independent motion recognition method based on gradient and wireless signals of claim 1, wherein the motion classifier is a support vector machine or a neural network classifier.
6. A motion recognition system based on the motion recognition method according to claim 1, comprising:
the signal acquisition module is used for acquiring a wireless signal sample;
and the environment identifier is used for solving the gradient array corresponding to the wireless signal sample through back propagation.
And the data processing module is used for multiplying the gradient array by a coefficient and then adding the multiplied gradient array to the original wireless signal sample to obtain an environment-independent wireless signal sample.
And the action recognizer is used for carrying out action recognition on the wireless signal samples which are irrelevant to the environment.
7. The motion recognition system of claim 6, wherein the context recognizer is a multi-layered convolutional neural network.
8. The motion recognition system of claim 6, wherein the motion classifier is a support vector machine or a neural network classifier.
CN202110437246.5A 2021-04-22 2021-04-22 Environment-independent action identification method and system based on gradient and wireless signal Pending CN113221671A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110437246.5A CN113221671A (en) 2021-04-22 2021-04-22 Environment-independent action identification method and system based on gradient and wireless signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110437246.5A CN113221671A (en) 2021-04-22 2021-04-22 Environment-independent action identification method and system based on gradient and wireless signal

Publications (1)

Publication Number Publication Date
CN113221671A true CN113221671A (en) 2021-08-06

Family

ID=77088666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110437246.5A Pending CN113221671A (en) 2021-04-22 2021-04-22 Environment-independent action identification method and system based on gradient and wireless signal

Country Status (1)

Country Link
CN (1) CN113221671A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457732A (en) * 2022-08-24 2022-12-09 电子科技大学 Fall detection method based on sample generation and feature separation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629380A (en) * 2018-05-11 2018-10-09 西北大学 A kind of across scene wireless signal cognitive method based on transfer learning
WO2018196396A1 (en) * 2017-04-24 2018-11-01 清华大学 Person re-identification method based on consistency constraint feature learning
US20200397345A1 (en) * 2019-06-19 2020-12-24 University Of Southern California Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018196396A1 (en) * 2017-04-24 2018-11-01 清华大学 Person re-identification method based on consistency constraint feature learning
CN108629380A (en) * 2018-05-11 2018-10-09 西北大学 A kind of across scene wireless signal cognitive method based on transfer learning
US20200397345A1 (en) * 2019-06-19 2020-12-24 University Of Southern California Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu, Jianwei et al., "Adversary Helps: Gradient-based Device-Free Domain-Independent Gesture Recognition", arXiv:2004.03961v1 [cs.CV], 8 April 2020, pages 3-5 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457732A (en) * 2022-08-24 2022-12-09 电子科技大学 Fall detection method based on sample generation and feature separation
CN115457732B (en) * 2022-08-24 2023-09-01 电子科技大学 Fall detection method based on sample generation and feature separation

Similar Documents

Publication Publication Date Title
CN106960206B (en) Character recognition method and character recognition system
Xiang et al. Cross-modality person re-identification based on dual-path multi-branch network
CN112233664A (en) Network training method, device, equipment and storage medium
CN105117708A (en) Facial expression recognition method and apparatus
CN116204266A (en) Remote assisted information creation operation and maintenance system and method thereof
Xiao et al. Multi-sensor data fusion for sign language recognition based on dynamic Bayesian network and convolutional neural network
CN116245513B (en) Automatic operation and maintenance system and method based on rule base
CN111444850B (en) Picture detection method and related device
Chaudhary et al. Depth‐based end‐to‐end deep network for human action recognition
Silanon Thai Finger‐Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features
CN112861808A (en) Dynamic gesture recognition method and device, computer equipment and readable storage medium
CN113221671A (en) Environment-independent action identification method and system based on gradient and wireless signal
CN116894210B (en) Electronic device comprising force sensor and data processing method
CN116719419B (en) Intelligent interaction method and system for meta universe
Kammoun et al. ArSign: Toward a mobile based Arabic sign language translator using LMC
CN115143128B (en) Fault diagnosis method and system for small-sized submersible electric pump
Hassan et al. Intelligent sign language recognition using enhanced fourier descriptor: a case of Hausa sign language
CN114627312B (en) Zero sample image classification method, system, equipment and storage medium
CN116502700A (en) Skin detection model training method, skin detection device and electronic equipment
CN115937993A (en) Living body detection model training method, living body detection device and electronic equipment
CN115331311A (en) Gesture recognition and position classification combined deep learning method
Melnyk et al. Towards computer assisted international sign language recognition system: a systematic survey
Velmula et al. Indian Sign Language Recognition Using Convolutional Neural Networks
Li et al. YOLOv3 target detection algorithm based on channel attention mechanism
CN112016540B (en) Behavior identification method based on static image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210806)