CN115457732A - Fall detection method based on sample generation and feature separation - Google Patents

Fall detection method based on sample generation and feature separation

Info

Publication number
CN115457732A
Authority
CN
China
Prior art keywords
falling
action
data
detection
feature
Prior art date
Legal status
Granted
Application number
CN202211018371.3A
Other languages
Chinese (zh)
Other versions
CN115457732B (en)
Inventor
Luo Yue
Zhou Rui
Zhang Ziruo
Cheng Yu
Zhang Hong
Wang Jiahao
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202211018371.3A
Publication of CN115457732A
Application granted
Publication of CN115457732B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02: Alarms for ensuring the safety of persons
    • G08B21/04: Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438: Sensor means for detecting
    • G08B21/0469: Presence detectors to detect unsafe condition, e.g. infrared sensor, microphone
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W84/00: Network topologies
    • H04W84/02: Hierarchically pre-organised networks, e.g. paging networks, cellular networks, WLAN [Wireless Local Area Network] or WLL [Wireless Local Loop]
    • H04W84/10: Small scale networks; Flat hierarchical networks
    • H04W84/12: WLAN [Wireless Local Area Networks]
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fall detection method based on sample generation and feature separation, which extracts the amplitude of each subcarrier from the channel state information (CSI) of WiFi signals to perform fall detection. To address the shortage of fall samples and the dependence of fall detection models on the environment, the fall action samples in the source domain data are reconstructed with added noise to obtain virtual fall action samples, and a feature extractor extracts feature vectors from the source domain data and the virtual fall data. The neurons of the feature vector layer produced by the feature extractor are divided into two parts: the upper half is brought into the training of a fall/non-fall binary classifier, and the lower half into the training of a domain classifier. During training, the upper half gradually retains information related to fall and non-fall actions while the lower half gradually retains environment-related information, so that the binary classifier distinguishes fall from non-fall actions more accurately and the generalization of fall detection across environments is enhanced.

Description

Fall detection method based on sample generation and feature separation
Technical Field
The invention relates to WiFi signal-based behavior detection technology, in particular to fall action detection.
Background
With the development of the Internet of Things and artificial intelligence, action recognition based on wireless signals has become possible. Since falls are a major cause of injury among the elderly, indoor fall detection has become an urgent need. Existing fall detection technologies are mostly based on wearable devices or video surveillance. However, wearable-device-based fall detection requires the device to be worn, while video-based fall detection depends heavily on indoor lighting and poses a risk of privacy invasion. These limitations make it difficult to deploy fall detection systems widely in residential environments.
To improve on current fall detection methods and overcome the drawbacks of existing detection techniques, the invention performs fall detection based on WiFi signals. Compared with traditional video-based detection, WiFi penetrates obstacles, works normally in weak light, darkness, and occlusion, and involves almost no privacy invasion. Compared with wearable devices, WiFi fall detection is contactless: the user need not carry any equipment, which greatly improves convenience. Thanks to the popularity and wide coverage of WiFi, WiFi-based fall detection is also extremely low cost. WiFi therefore has high potential value and promising prospects for indoor fall detection.
Currently, WiFi-based fall detection methods generally obtain channel state information (CSI) through a wireless network card and related tools to perform fall action recognition. Indoor WiFi propagation is a superposition of multiple paths, including the direct path and obstacle-reflected paths. CSI is stable and reflects the channel state well. Different human actions affect the propagation of WiFi signals differently and thereby change the CSI, which makes fall actions detectable. The method collects CSI for daily actions (including falls), uses a machine learning algorithm to train a classification model that can recognize fall actions, and inputs real-time CSI data into the model for classification to detect whether a fall has occurred.
Fall detection is a binary classification problem, usually addressed with supervised learning. To ensure detection accuracy, a large amount of labeled data for different actions (including fall actions) must be collected to train the model, and collecting and labeling such data is costly and labor-intensive. Moreover, because environmental changes have large and hard-to-predict effects on WiFi signals, the prediction accuracy of a model trained by supervised learning drops sharply in a changed environment, making the approach unsuitable for practical applications.
Disclosure of Invention
Aiming at the defect that existing WiFi-based fall detection methods cannot adapt well to environmental changes, which reduces detection accuracy, the technical problem to be solved by the invention is to provide a fall detection method that makes the features of generated samples as independent of the environment as possible, so as to reduce the influence of environmental changes on detection.
The technical scheme adopted by the invention is a fall detection method based on sample generation and feature separation, comprising the following steps:
1) Fall detection environment deployment: deploying a WiFi transmitter and a WiFi receiver in the detection environment so that the WiFi signal covers the whole detection area;
2) CSI data acquisition step: setting the action categories, c actions in total, comprising fall actions and non-fall actions; setting the environment categories, wherein each user repeatedly performs the c actions in each environment, each action multiple times; collecting the channel state information (CSI) data in the detection area each time an action is performed, and extracting the subcarrier amplitudes from the CSI data corresponding to one performed action as an action sample;
3) Data division: taking each action sample with its corresponding action label and environment label as source domain data, the source domain data corresponding to all actions performed in all environments forming the source domain dataset;
4) Using the action labels of the source domain dataset, adding random Gaussian noise to the fall action samples of the source domain data and reconstructing them with an auto-encoder to generate virtual fall action samples, t × N in total, and using the virtual fall action samples to generate virtual fall data in the same format as the source domain data;
5) Inputting the source domain data and the virtual fall data into a feature extractor for feature separation training, then inputting the separated features output by the feature extractor into a fall/non-fall binary classifier and a multi-domain classifier respectively for detection training, specifically:
5-1) using the feature extractor to perform action/environment separation extraction on the source domain data and the virtual fall data to obtain the feature vector $F$, the feature extractor completing training with the goal of separating the environment-related domain information into the lower half of $F$ and storing the action information in the upper half of $F$;
5-2) dividing the feature vector $F$ into an upper half $F_{up}$ and a lower half $F_{down}$, bringing the upper half $F_{up}$ of the feature vector layer into the training of the fall/non-fall binary classifier and the lower half $F_{down}$ into the training of the multi-domain classifier; the fall/non-fall binary classifier receives the upper half $F_{up}$ of the feature vector layer and completes training with the goal of using the information related to fall and non-fall actions to make the fall/non-fall binary decision; the multi-domain classifier receives the lower half $F_{down}$ of the feature vector layer and completes training with the goal of retaining the environment-related domain information;
6) Detection step: inputting the subcarrier amplitudes of the CSI data to be classified into the trained feature extractor, and inputting the upper-half features of the feature vector output by the feature extractor into the trained fall/non-fall binary classifier to complete fall detection.
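To make the feature separation in steps 5) and 6) concrete, the following is a minimal PyTorch sketch, not the patent's implementation: the layer sizes, the stand-in extractor, and the two small classifier heads are illustrative assumptions. It shows the core mechanism: one shared feature vector whose upper half feeds the fall/non-fall binary classifier and whose lower half feeds the multi-domain classifier.

```python
# Minimal sketch of the feature-separation forward pass; all sizes are
# illustrative assumptions, not the dimensions specified in the patent.
import torch
import torch.nn as nn

m = 64  # half-length of the feature vector (the patent ties this to the antenna count)

extractor = nn.Sequential(nn.Flatten(), nn.LazyLinear(2 * m))  # stand-in feature extractor
fall_head = nn.Sequential(nn.Linear(m, 32), nn.ReLU(), nn.Linear(32, 2))     # fall / non-fall
domain_head = nn.Sequential(nn.Linear(m, 32), nn.ReLU(), nn.Linear(32, 12))  # environment classes

x = torch.randn(8, 300, 270)        # batch of CSI amplitude samples (packets x subcarriers)
feat = extractor(x)                 # feature vector layer of length 2m
upper, lower = feat[:, :m], feat[:, m:]
fall_logits = fall_head(upper)      # upper half: fall/non-fall decision
domain_logits = domain_head(lower)  # lower half: environment (domain) decision
```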
According to the different influences that different actions performed by an experimenter in the detection area have on the WiFi signal, the amplitude information of each subcarrier is extracted from the CSI data of the WiFi signal for fall detection. To address the shortage of fall samples and the fall detection model's dependence on the environment, Gaussian noise is added to the fall action samples in the source domain data and an auto-encoder reconstructs them into virtual fall action samples; a feature extractor then extracts feature vectors from the source domain data and the virtual fall data. The neurons of the feature vector layer obtained by the feature extractor are divided into two parts: the upper half is brought into the training of the fall/non-fall binary classifier and the lower half into the training of the domain classifier. During training, the upper half gradually retains information related to fall and non-fall actions while the lower half gradually retains environment-related information, so that the fall/non-fall binary classifier can better distinguish fall from non-fall actions.
The beneficial effect of the method is that, through the reconstruction of fall-category samples, the problem of the model over-fitting to non-fall data caused by insufficient fall samples is alleviated. To further improve environment-independent fall detection accuracy, the invention performs feature separation on the feature vectors obtained by the feature extractor and brings the two separated parts into the fall/non-fall binary classifier and the domain classifier respectively for training, thereby enhancing the generalization capability of fall detection across environments.
Drawings
Fig. 1 is a schematic diagram of an experimental scenario.
Fig. 2 is an overall framework diagram.
Detailed Description
The method comprises the following specific steps:
1) Fall detection environment deployment: the invention must be carried out in an environment covered by WiFi; a WiFi transmitter and a WiFi receiver are deployed in the detection environment, as shown in FIG. 1;
2) CSI data acquisition step: set the action categories; the c actions must include fall actions and non-fall actions. Specifically, the c actions comprise 1 fall action $c_1$ and c−1 non-fall actions $c_i$ (i = 2, …, c); other allocations of fall and non-fall actions may be configured as required. Set the environment categories, t environments in total. In each environment each user repeatedly performs the c actions, each action N times, and the channel state information (CSI) data in the detection area is collected each time an action is performed; the subcarrier amplitudes extracted from the collected CSI data serve as the action samples;
3) Data division: the CSI data collected in the t different environments form the source domain dataset $X_S = \{(x_{c_i}^{t_l}, y_{c_i}, d_{t_l})\}$ (i = 1, …, c; l = 1, …, t), where $x_{c_i}^{t_l}$ denotes source domain CSI data whose action class is $c_i$ and whose environment class is $t_l$, $t_l$ representing the l-th environment class; $y_{c_i}$ denotes the category label of action class $c_i$, the fall action $c_1$ carrying one category label and the non-fall actions $c_i$ (i ≠ 1) sharing the other category label; and $d_{t_l}$ denotes the domain label of environment class $t_l$;
4) Using the action labels in the source domain dataset, generate virtual fall action samples from the fall action samples of the source domain data, t × N in total, so that the virtual samples together with the real fall action samples match the number of non-fall action samples and the classes are balanced in the subsequent binary classification training; use the virtual fall action samples to generate virtual fall data in the same format as the source domain data. One virtual fall action sample is generated as follows:
4-1) from the source domain data whose action label is fall in the source domain dataset $X_S$, select one piece of source domain data $x_{c_1}^{t_l}$; each sample of the source domain data is a combination of data acquired by multiple transmit-receive antenna pairs, where $a_i$ denotes antenna pair i. The action sample in the source domain data is divided into n groups of data acquired by the different antenna pairs, $x_{c_1}^{t_l} = [x_{a_1}, x_{a_2}, \ldots, x_{a_n}]$, where n denotes the number of antenna pairs;
4-2) use a combined auto-encoder to reconstruct the n groups of data $x_{a_1}, \ldots, x_{a_n}$ acquired by the different antenna pairs and generate the n corresponding groups of intermediate-layer vectors $v_{a_1}, \ldots, v_{a_n}$; there are n combined auto-encoders, corresponding to the number of antenna pairs. Each generating encoder corresponds to one group of data acquired by the same antenna pair; the n generating encoders perform feature extraction on the n groups of data respectively, obtaining the intermediate-layer vectors $v_{a_i} = f(x_{a_i} + b)$, where b is random Gaussian noise and the function f denotes the generating encoder. In this embodiment each generating encoder is a neural network comprising 4 fully-connected layers; other numbers of fully-connected layers and other network structures may also be used;
4-3) each generating decoder corresponds to one group of intermediate-layer vectors; n generating decoders reconstruct the n groups of intermediate-layer vectors respectively, obtaining the n groups of reconstructed virtual data $\tilde{x}_{a_i} = g(v_{a_i})$ for the data acquired by the different antenna pairs. The structure of the generating decoder mirrors that of the generating encoder, a 4-layer fully-connected neural network; the function g denotes the generating decoder;
4-4) splice the n groups of reconstructed virtual data for the n groups of data acquired by the different antenna pairs into one complete virtual fall action sample of the same form as the original action sample $x_{c_1}^{t_l}$: $\tilde{x}_{c_1}^{t_l} = [\tilde{x}_{a_1}, \tilde{x}_{a_2}, \ldots, \tilde{x}_{a_n}]$.
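The generation procedure of steps 4-1) to 4-4) can be sketched as follows, under stated assumptions: one 4-layer fully-connected encoder f and decoder g per antenna pair, random Gaussian noise b added before encoding, and the n reconstructions spliced into one virtual fall sample. The hidden widths are invented for illustration; the per-pair input and intermediate sizes follow the embodiment (300 packets × 30 subcarriers, code size 64 × 30).

```python
# Sketch of virtual fall sample generation with the combined auto-encoder.
# Hidden widths are assumptions; per-pair sizes follow the embodiment below.
import torch
import torch.nn as nn

def fc4(dims):
    """4 fully-connected layers with ReLU between them (last layer linear)."""
    layers = []
    for i in range(4):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < 3:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

n = 9                             # antenna pairs (3 Tx x 3 Rx antennas)
d_in, d_mid = 300 * 30, 64 * 30   # flattened per-pair sample size and code size
encoders = nn.ModuleList([fc4([d_in, 4096, 2048, 1024, d_mid]) for _ in range(n)])  # f
decoders = nn.ModuleList([fc4([d_mid, 1024, 2048, 4096, d_in]) for _ in range(n)])  # g

def generate_virtual_sample(groups):
    """groups: list of n flattened per-antenna-pair tensors x_{a_i}."""
    parts = []
    for x_a, f, g in zip(groups, encoders, decoders):
        b = torch.randn_like(x_a)       # random Gaussian noise b
        v = f(x_a + b)                  # intermediate-layer vector v_{a_i} = f(x_{a_i} + b)
        parts.append(g(v))              # reconstruction g(v_{a_i})
    return torch.cat(parts, dim=-1)     # splice the n groups into one virtual fall sample

virtual = generate_virtual_sample([torch.randn(d_in) for _ in range(n)])
```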
5) Feature separation, comprising the steps of:
5-1) use the feature extractor to extract action features and environment features from the source domain data $x_S$ and the virtual fall data $\tilde{x}_{c_1}$, obtaining the feature vector layer $F$. The feature extractor consists of 5 convolution blocks and one fully-connected layer; each convolution block comprises 1 convolution layer and 1 pooling layer, and the 5 convolution blocks adopt convolution kernels of 200, 150, 100, 20, and 10 respectively. The feature vector layer $F$ has dimension (m × 2, 1), where the value of m is adjusted dynamically according to the number of antenna pairs; the feature vector layer $F$ is divided into the two parts $F_{up}$ and $F_{down}$, where $F_{up}$ has dimension (m, 1) and $F_{down}$ has dimension (m, 1);
5-2) bring the upper half $F_{up}$ of the feature vector layer into the training of the fall/non-fall binary classifier and the lower half $F_{down}$ into the training of the multi-domain classifier, the fall/non-fall binary classifier being composed of a 4-layer fully-connected neural network and the multi-domain classifier likewise of a 4-layer fully-connected neural network. During training, the upper half $F_{up}$ of the feature vector layer gradually retains the information related to fall and non-fall actions, while the lower half $F_{down}$ gradually retains the environment-related information;
The feature extractor, the fall/non-fall binary classifier, and the multi-domain classifier are not limited to the structures described above, as long as the chosen structure allows them to meet the training targets:
the feature extractor receives the source domain data and the virtual fall data and outputs the feature vector $F$, completing training with the goal of separating the environment-related domain information into the lower half of $F$ and storing the action information in the upper half of $F$;
the fall/non-fall binary classifier receives the upper half $F_{up}$ of the feature vector layer and completes training with the goal of using the information related to fall and non-fall actions to make the fall/non-fall binary decision;
the multi-domain classifier receives the lower half $F_{down}$ of the feature vector layer and completes training with the goal of retaining the environment-related domain information;
6) input the amplitudes of the CSI data to be classified into the trained feature extractor to obtain the feature vector layer $F$; after feature separation of $F$ into the two parts $F_{up}$ and $F_{down}$, bring the upper-half features $F_{up}$ into the trained fall/non-fall binary classifier to complete fall detection.
Experimental verification
A WiFi transmitter and a WiFi receiver are deployed in the detection environment; the transmitter is an ordinary commercial router, and the receiver is a wireless network card fitted with an Intel WiFi Link 5300. The transmitter and receiver each have 3 antennas, forming 9 links. CSI is acquired from the Intel WiFi Link 5300 wireless network card with the CSI Tool package; each antenna pair yields 30 groups of subcarrier information, for a total of 270 groups of subcarrier information per data packet.
The specific implementation steps are as follows:
step 1: a pair of WiFi transmitter and WiFi receiver is deployed in the detection area, wiFi signals are required to cover the whole detection area, and the experimental environment is schematically shown in FIG. 1.
Step 2: collect CSI action data in the detection area. Several time periods are selected as different environments; in each environment each user performs the relevant actions, specifically standing fall, sitting fall, squatting, standing, walking, jumping, etc., each action repeated multiple times. The sampling frequency is set to 100 Hz.
Step 3: the action data from the different environments serve as source domains and form the source domain dataset $X_S$; each user performs each action N times in each environment.
A CSI subcarrier can be described in complex form as $H_i = |H_i| e^{j \angle H_i}$; taking the absolute value of $H_i$ yields the amplitude dataset $X_a = \{|H_i|\}$. Variance-threshold clipping is applied to the extracted amplitudes so that each action sample has size 300 × 270, and the sampled data can be represented as
x = [x_1, x_2, …, x_300]
where 300 is the number of packets per action and x_i = [h_1, h_2, …, h_270] is one data packet containing 270 subcarrier amplitudes.
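The shaping above can be illustrated with a small NumPy sketch; the raw-CSI layout used here is an assumption (real CSI from the receiver arrives as per-packet complex matrices that must first be unrolled into the 270 subcarrier values):

```python
# Shaping CSI into action samples: amplitude |H_i| of each complex subcarrier,
# 300 packets x 270 subcarriers per sample. The raw data layout is an assumption.
import numpy as np

packets, subcarriers = 300, 270   # 9 links x 30 subcarriers = 270
rng = np.random.default_rng(0)
raw = rng.standard_normal((packets, subcarriers)) \
      + 1j * rng.standard_normal((packets, subcarriers))

amplitude = np.abs(raw)           # X_a = {|H_i|}: keep amplitude, drop phase
assert amplitude.shape == (300, 270)

# x = [x_1, ..., x_300], each x_i a packet of 270 subcarrier amplitudes
sample = amplitude.astype(np.float32)
```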
Step 4: a combined auto-encoder is adopted to generate the virtual fall action samples, so that the sample count of the fall category in the training set matches the sample count of the non-fall categories, as shown in FIG. 2:
step 4-1: selecting a source domain data set x S Data set with fall category as middle action category
Figure BDA0003813192470000071
Source domain data
Figure BDA0003813192470000072
Data in which each sample is formed by a plurality of transmit-receive antenna pairs
Figure BDA0003813192470000073
In combination wherein a i Representing antenna pair i, source domain data
Figure BDA0003813192470000074
Each sample is divided into n sets of data acquired by different antenna pairs:
Figure BDA0003813192470000075
where n represents the number of antenna pairs, in this test n has a value of 9, i.e. there are 9 different sets of antenna pairs; a is i Representing different pairs of antennas, i.e. source domain data
Figure BDA0003813192470000076
Each sample in the array is divided into a plurality of groups of data collected by different antenna pairs;
step 4-2: using a combined auto-encoder to n sets of data acquired by different antenna pairs
Figure BDA0003813192470000077
Reconstructing to generate n sets of corresponding intermediate layer vectors
Figure BDA0003813192470000078
The number of the combined self-encoders is n, and the number of the combined self-encoders corresponds to the number of different antenna pairs;
each generating coder corresponds to a group of antenna pair data, and n generating coders are adopted to respectively pair n groups of data acquired by different antenna pairs
Figure BDA0003813192470000079
Feature extraction is performed, thereby obtaining n sets of intermediate layer vectors
Figure BDA00038131924700000710
Each generating coder is a neural network comprising 4 fully-connected layers, the input dimension is 300 multiplied by 30, 30 is the number of subcarriers contained in each signal link, b is random noise with the dimension of 300 multiplied by 30, and the output dimension is 64 multiplied by 30; in an embodiment, the number n of antenna pairs is 9, then a total of 9 sets of intermediate layer vectors are generated; the function f represents the generating encoder;
Step 4-3: each generating decoder corresponds to one group of intermediate-layer vectors; n generating decoders reconstruct the n groups of intermediate-layer vectors respectively, obtaining the n groups of reconstructed virtual data for the data acquired by the different antenna pairs: $\tilde{x}_{a_i} = g(v_{a_i})$. The generating decoder and the generating encoder have mirrored structures, each a 4-layer fully-connected neural network; the input dimension is 64 × 30 and the output dimension is 300 × 30. In the embodiment of the invention, with the number of antenna pairs n being 9, 9 groups of virtual data are generated in total, each group corresponding to a different antenna pair; the function g denotes the generating decoder;
Each pair of generating encoder and generating decoder adopts the mean square error (MSE) as the loss function during training, each pair taking the fall action samples $x_{a_i}$ collected by its antenna pair and the corresponding virtual fall action samples $\tilde{x}_{a_i}$ as the MSE inputs; the loss function can be expressed as
$L_{MSE} = \lVert x_{a_i} - \tilde{x}_{a_i} \rVert^2$
Training the weights of the encoder and decoder makes the reconstructed data $\tilde{x}_{a_i}$ obtained for each antenna pair approach the source domain fall-category data $x_{a_i}$ of the corresponding antenna pair.
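A sketch of one training step under this MSE objective follows, reusing the `encoders` and `decoders` from the earlier auto-encoder sketch; the optimizer choice and learning rate are assumptions:

```python
# One MSE training step for the n per-antenna-pair auto-encoders
# (reuses `encoders`/`decoders` from the sketch above; optimizer settings assumed).
import torch

opt = torch.optim.Adam(list(encoders.parameters()) + list(decoders.parameters()), lr=1e-3)

def train_step(groups):
    """groups: list of n real fall-sample tensors x_{a_i}, one per antenna pair."""
    opt.zero_grad()
    loss = 0.0
    for x_a, f, g in zip(groups, encoders, decoders):
        b = torch.randn_like(x_a)
        x_rec = g(f(x_a + b))                         # reconstruction of x_{a_i}
        loss = loss + torch.mean((x_a - x_rec) ** 2)  # MSE(x_{a_i}, reconstruction)
    loss.backward()
    opt.step()
    return float(loss)

loss = train_step([torch.randn(300 * 30) for _ in range(9)])
```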
Step 4-4: splice the n groups of reconstructed virtual data for the n groups of data acquired by the different antenna pairs into one complete virtual fall action sample of the same form as the original fall action sample: $\tilde{x}_{c_1}^{t_l} = [\tilde{x}_{a_1}, \ldots, \tilde{x}_{a_n}]$. The transceivers adopted by the invention each have 3 antennas, forming 9 signal links in total, i.e. n = 9.
Step 5: use the feature extractor to perform feature separation on the source domain data $x_S$ and the virtual fall data $\tilde{x}_{c_1}$, separating the action information from the environment information, and bring the action-information features and the environment-information features into their respective classifiers for training;
Step 5-1: the feature extractor is composed of 5 convolution blocks (Conv1d) and one fully-connected layer; each convolution block comprises 1 convolution layer and 1 pooling layer. The input dimension of the feature extractor is 300 × 270 × 1 and the output dimension is (m × 2) × 1: $F = E(x)$, where x is the source domain data $x_S$ or the virtual fall data $\tilde{x}_{c_1}$, E denotes the feature extractor, and $F$ is the feature vector layer. The feature vector layer $F$ has dimension (m × 2, 1), where the value of m is adjusted dynamically according to the number of antenna pairs; in the embodiment of the invention m is 1025. The feature vector layer $F$ is divided into the two parts $F_{up}$ and $F_{down}$, each of dimension (m, 1): $F = [F_{up}; F_{down}]$.
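A sketch of a feature extractor with this shape follows, with an interpretive assumption flagged: the listed 200/150/100/20/10 are read as the filter counts of the five Conv1d blocks (the text is ambiguous between filter counts and kernel sizes), and the kernel size, pooling, and final flattening are invented details chosen so that a 300 × 270 amplitude sample maps to a feature vector of length 2m.

```python
# Sketch of the 5-conv-block feature extractor; per-block filter counts follow the
# text's 200/150/100/20/10 (an interpretation), other hyperparameters are assumed.
import torch
import torch.nn as nn

m = 1025  # half-length of the feature vector layer in the embodiment

def conv_block(c_in, c_out):
    # one convolution layer + one pooling layer, as in the described block
    return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                         nn.ReLU(),
                         nn.MaxPool1d(2))

extractor = nn.Sequential(
    conv_block(270, 200),   # channels = 270 subcarriers, sequence length = 300 packets
    conv_block(200, 150),
    conv_block(150, 100),
    conv_block(100, 20),
    conv_block(20, 10),
    nn.Flatten(),
    nn.LazyLinear(2 * m),   # fully-connected layer producing the (m * 2, 1) feature vector
)

x = torch.randn(4, 270, 300)         # batch of amplitude samples as (subcarriers, packets)
F = extractor(x)                     # shape (4, 2 * m)
F_up, F_down = F[:, :m], F[:, m:]    # upper half (action), lower half (environment)
```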
Step 5-2:
the upper half $F_{up}$ is brought into the fall/non-fall binary classifier, formed by a 2-layer fully-connected neural network, for training; the binary classifier outputs a probability distribution, which is combined with the true action labels to compute the cross-entropy loss $L_1$:
$L_1 = -\frac{1}{M} \sum_{i=1}^{M} [\, y_i \log p_i + (1 - y_i) \log(1 - p_i) \,]$
where M is the number of samples in each training batch, $y_i$ is the label of sample i (1 for a fall action, 0 for a non-fall action), and $p_i$ is the predicted probability that sample i is a fall.
Step 5-3: will be provided with
the lower half $F_{down}$ is brought into the multi-domain classifier, formed by a 2-layer fully-connected neural network, for training; the multi-domain classifier outputs a probability distribution, which is combined with the true domain labels to compute the cross-entropy loss $L_2$:
$L_2 = -\frac{1}{M} \sum_{i=1}^{M} \sum_{d=1}^{K} z_{id} \log p_{id}$
where K is the number of domain classes, $z_{id}$ is the true domain label (0 or 1): $z_{id}$ is 1 if the true class of sample i equals d and 0 otherwise, and $p_{id}$ is the predicted probability that sample i belongs to domain class d. The per-class accuracy of the multi-domain classifier is not the focus, and the multi-domain classifier is not used for detection after training; it serves to assist the binary classifier in completing training and obtaining an accurate binary detection result.
Step 5-4: the final objective function L is
$L = L_1 + L_2$
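One joint training step for this objective can be sketched as follows, reusing `extractor` and `m` from the feature-extractor sketch above and assuming 2-layer heads as in steps 5-2 and 5-3; the optimizer settings, hidden width, and batch construction are assumptions:

```python
# One joint step: cross-entropy on the upper half (fall/non-fall, L1) plus
# cross-entropy on the lower half (domain, L2); final objective L = L1 + L2.
# Reuses `extractor` and `m` from the feature-extractor sketch above.
import torch
import torch.nn as nn

K = 12  # number of source-domain environments
fall_head = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, 2))
domain_head = nn.Sequential(nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, K))
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(extractor.parameters()) + list(fall_head.parameters())
                       + list(domain_head.parameters()), lr=1e-3)

def joint_step(x, fall_labels, domain_labels):
    opt.zero_grad()
    F = extractor(x)
    L1 = ce(fall_head(F[:, :m]), fall_labels)      # upper half: fall/non-fall loss
    L2 = ce(domain_head(F[:, m:]), domain_labels)  # lower half: domain loss
    L = L1 + L2                                    # final objective
    L.backward()
    opt.step()
    return float(L)

x = torch.randn(4, 270, 300)
L = joint_step(x, torch.randint(0, 2, (4,)), torch.randint(0, K, (4,)))
```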
Step 6: after training of the fully-connected neural networks is complete, the amplitudes of CSI data from the new environment to be classified are input directly into the trained feature extractor; the feature extractor extracts the fall action information and the environment information of the new-environment data into a feature vector, the environment-related information being gradually separated into the lower half of the feature vector layer and the required fall/non-fall action information being stored in the upper half. The upper half of the feature vector, which retains the fall-action-related information, is then input into the trained fall/non-fall binary classifier to complete fall detection.
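Detection in a new environment then reduces to the following sketch, reusing the trained `extractor`, `fall_head`, and `m` from the sketches above; only the upper half of the feature vector is consulted, and the domain head is discarded after training.

```python
# Inference sketch: extract features, keep only the upper half, classify
# fall vs. non-fall. The domain head is not used at test time.
import torch

@torch.no_grad()
def detect_fall(csi_amplitude):                      # tensor shaped like the training input
    F = extractor(csi_amplitude.unsqueeze(0))        # add a batch dimension
    logits = fall_head(F[:, :m])                     # upper half only
    return bool(logits.argmax(dim=1).item() == 1)    # label 1 = fall, 0 = non-fall

is_fall = detect_fall(torch.randn(270, 300))
```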
In the verification experiment, data from 12 different environments are used as source domain data and data from 2 new environments are input into the trained model; the average fall detection accuracy in the new environments exceeds 87%, the precision exceeds 86%, and the recall exceeds 88%.

Claims (7)

1. A fall detection method based on sample generation and feature separation, comprising the steps of:
1) Fall detection environment deployment: deploying a WiFi transmitter and a WiFi receiver in the detection environment, the WiFi signal covering the whole detection area;
2) CSI data acquisition step: setting the action categories, c actions in total, comprising fall actions and non-fall actions; setting the environment categories, wherein each user repeatedly performs the c actions in each environment, each action multiple times; collecting the channel state information (CSI) data in the detection area each time an action is performed, and extracting the subcarrier amplitudes from the CSI data corresponding to one performed action as an action sample;
3) Data division: taking each action sample with its corresponding action label and environment label as source domain data, the source domain data corresponding to all actions performed in all environments forming the source domain dataset;
4) Performing noise-added reconstruction on the action samples of the source domain data whose action label is fall in the source domain dataset to generate virtual fall action samples, t × N in total, and using the virtual fall action samples to generate virtual fall data in the same format as the source domain data;
5) Inputting the source domain data and the virtual fall data into a feature extractor for feature separation training, then inputting the separated features output by the feature extractor into a fall/non-fall binary classifier and a multi-domain classifier respectively for detection training, specifically:
5-1) using the feature extractor to perform action/environment separation extraction on the source domain data and the virtual fall data to obtain the feature vector $F$, the feature extractor completing training with the goal of separating the environment-related domain information into the lower half of $F$ and storing the action information in the upper half of $F$;
5-2) dividing the feature vector $F$ into an upper half $F_{up}$ and a lower half $F_{down}$, bringing the upper half $F_{up}$ of the feature vector layer into the training of the fall/non-fall binary classifier and the lower half $F_{down}$ into the training of the multi-domain classifier; the fall/non-fall binary classifier receives the upper half $F_{up}$ of the feature vector layer and completes training with the goal of using the information related to fall and non-fall actions to make the fall/non-fall binary decision; the multi-domain classifier receives the lower half $F_{down}$ of the feature vector layer and completes training with the goal of retaining the environment-related domain information;
6) Detection step: inputting the subcarrier amplitudes of the CSI data to be classified into the trained feature extractor, and inputting the upper-half features of the feature vector output by the feature extractor into the trained fall/non-fall binary classifier to complete fall detection.
2. The method as claimed in claim 1, wherein the virtual fall action samples in step 4) are generated as follows:
4-1) grouping the action samples of the source domain data whose action label is fall in the source domain dataset according to the number of WiFi transmit-receive antenna pairs;
4-2) adding noise to the grouped action samples and reconstructing them with a combined auto-encoder to generate virtual fall samples; the combined auto-encoder consists of generating encoders and generating decoders, the number of generating encoders being the same as the number of antenna pairs and the number of generating decoders likewise; during reconstruction, each generating encoder performs feature extraction on one group of the corresponding action sample to obtain one group of intermediate-layer vectors;
4-3) reconstructing the intermediate-layer vectors with the generating decoders; after all intermediate-layer vectors have been reconstructed, as many groups of reconstructed virtual data are obtained as there are antenna pairs;
4-4) splicing the groups of reconstructed virtual data, equal in number to the antenna pairs, into one virtual fall action sample.
3. The method of claim 2, wherein the generating decoder is a 4-layer fully-connected neural network whose structure mirrors that of the generating encoder.
4. The method as claimed in claim 1, wherein grouping the action samples of one piece of source domain data whose action label is fall according to the number of WiFi transmit-receive antenna pairs in step 4-1) is specifically expressed as: $x_{c_1}^{t_l} = [x_{a_1}, x_{a_2}, \ldots, x_{a_n}]$, where $x_{c_1}^{t_l}$ is source domain data whose action label is fall, $c_1$ is the action label of a fall, $a_i$ denotes the i-th antenna pair, and the total number of WiFi transmit-receive antenna pairs is n.
5. The method as claimed in claim 1, wherein the c actions set in step 2) specifically comprise 1 fall action and c−1 non-fall actions; the environment categories are set with t environments in total; within each environment, each user repeatedly performs the c actions, each action being performed N times; and step 4) generates t × N virtual fall action samples.
6. The method of claim 1, wherein the feature extractor adopts a structure consisting of 5 convolution blocks and a fully-connected network, each convolution block comprising 1 convolution layer and 1 pooling layer; the dimension of the feature vector $F$ is (m × 2, 1), where the value of the dimension parameter m is adjusted dynamically according to the number of antenna pairs, and $F_{up}$ and $F_{down}$ each have dimension (m, 1).
7. The method of claim 1, wherein the fall/non-fall binary classifier is formed by a 2-layer fully-connected neural network, and the domain classifier is formed by a 2-layer fully-connected neural network.
CN202211018371.3A 2022-08-24 2022-08-24 Fall detection method based on sample generation and feature separation Active CN115457732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211018371.3A CN115457732B (en) 2022-08-24 2022-08-24 Fall detection method based on sample generation and feature separation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211018371.3A CN115457732B (en) 2022-08-24 2022-08-24 Fall detection method based on sample generation and feature separation

Publications (2)

Publication Number Publication Date
CN115457732A true CN115457732A (en) 2022-12-09
CN115457732B CN115457732B (en) 2023-09-01

Family

ID=84298354

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211018371.3A Active CN115457732B (en) 2022-08-24 2022-08-24 Fall detection method based on sample generation and feature separation

Country Status (1)

Country Link
CN (1) CN115457732B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210041548A1 (en) * 2019-08-08 2021-02-11 Syracuse University Motion detection and classification using ambient wireless signals
EP4027879A1 (en) * 2019-09-13 2022-07-20 ResMed Sensor Technologies Limited Systems and methods for detecting movement
CN111597877A (en) * 2020-04-02 2020-08-28 浙江工业大学 Fall detection method based on wireless signals
CN111488850A (en) * 2020-04-17 2020-08-04 电子科技大学 Neural network-based old people falling detection method
CN112346050A (en) * 2020-10-23 2021-02-09 清华大学 Fall detection method and system based on Wi-Fi equipment
CN113221671A (en) * 2021-04-22 2021-08-06 浙江大学 Environment-independent action identification method and system based on gradient and wireless signal
CN114781463A (en) * 2022-06-16 2022-07-22 深圳大学 Cross-scene robust indoor tumble wireless detection method and related equipment

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Daniel Konings et al., "WiFi OR CSI OR channel state information", IEEE Sensors Letters
Hong Zhang et al., "The Sensorless Nursing System Based on 5G Internet of Things", 2021 3rd International Symposium on Smart and Healthy Cities
Rui Zhou et al., "Device-free Localization Based on CSI Fingerprints and Deep Neural Networks", 2018 15th Annual IEEE International Conference on Sensing, Communication, and Networking
Xin Wen et al., "A Multi-class Dataset Expansion Method for Wi-Fi-Based Fall Detection", 2022 IEEE MTT-S International Microwave Biomedical Conference
Zhou Rui et al., "Indoor localization by fusing WiFi and PDR based on Kalman filtering", Journal of University of Electronic Science and Technology of China
Du Yufeng et al., "Indoor personnel behavior monitoring technology based on WiFi information entropy feature analysis", Journal of Nankai University

Also Published As

Publication number Publication date
CN115457732B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN108446716B PolSAR image classification method based on FCN and sparse-low-rank subspace representation fusion
CN104376326B Feature extraction method for image scene recognition
CN110796199B (en) Image processing method and device and electronic medical equipment
CN112101184B (en) Wireless cross-domain action identification method based on semi-supervised learning
Obinata et al. Temporal extension module for skeleton-based action recognition
CN112799128B (en) Method for seismic signal detection and seismic phase extraction
CN111428819A (en) CSI indoor positioning method based on stacked self-coding network and SVM
CN104463194A (en) Driver-vehicle classification method and device
KR20210095671A (en) Image processing method and related device
CN111881802A (en) Traffic police gesture recognition method based on double-branch space-time graph convolutional network
CN115438708A (en) Classification and identification method based on convolutional neural network and multi-mode fusion
CN107066980A Image deformation detection method and device
Atikuzzaman et al. Human activity recognition system from different poses with CNN
CN115035381A SN-YOLOv5 lightweight object detection network and crop picking detection method
CN114943245A (en) Automatic modulation recognition method and device based on data enhancement and feature embedding
CN112766201A (en) Behavior cross-domain identification model establishing and identifying method and system based on CSI data
CN113837122A (en) Wi-Fi channel state information-based non-contact human body behavior identification method and system
CN111652132B (en) Non-line-of-sight identity recognition method and device based on deep learning and storage medium
CN115826042B (en) Edge cloud combined distributed seismic data processing method and device
CN115457732A (en) Fall detection method based on sample generation and feature separation
CN116311357A (en) Double-sided identification method for unbalanced bovine body data based on MBN-transducer model
CN114444534A (en) Cross-domain wireless action identification method based on sample generation and modal fusion
CN116340849B (en) Non-contact type cross-domain human activity recognition method based on metric learning
CN110555342A (en) image identification method and device and image equipment
Sun et al. A CNN based localization and activity recognition algorithm using multi-receiver CSI measurements and decision fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Luo Yue

Inventor after: Zhou Rui

Inventor after: Zhang Ziruo

Inventor after: Cheng Yu

Inventor after: Zhang Hongwang

Inventor after: Wang Jiahao

Inventor before: Luo Yue

Inventor before: Zhou Rui

Inventor before: Zhang Ziruo

Inventor before: Cheng Yu

Inventor before: Zhang Hong

Inventor before: Wang Jiahao

GR01 Patent grant