CN114626419A - Motion recognition method based on channel state information in WIFI and improved convolutional neural network

Info

Publication number: CN114626419A; granted publication: CN114626419B
Application number: CN202210278541.5A
Authority: CN (China)
Inventors: Pan Su (潘甦), Jin Lücheng (金律成)
Application filed by: Nanjing University of Posts and Telecommunications
Current assignee: Nanjing University of Posts and Telecommunications
Other languages: Chinese (zh)
Legal status: Granted; Active

Classifications

    • G06F 2218/02, G06F 2218/04 - Pattern recognition adapted for signal processing: preprocessing; denoising
    • G06F 18/214 - Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 - Pattern recognition: classification techniques
    • G06F 18/253 - Pattern recognition: fusion techniques of extracted features
    • G06N 3/045 - Neural networks: combinations of networks
    • H04B 17/309 - Monitoring/testing of propagation channels: measuring or estimating channel quality parameters


Abstract

The invention discloses a motion recognition method based on channel state information (CSI) in WIFI and an improved convolutional neural network. The method comprises the following steps: data acquisition; data preprocessing; construction of the improved Sha-CNN; addition of a batch regularization mechanism; feature extraction on the training set; and training of an action classification model using the improved Sha-CNN with the batch regularization mechanism. By integrating a batch regularization mechanism into the convolutional layers of the improved Sha-CNN, the invention accelerates network convergence and prevents the loss of generalization performance caused by differing distributions of training and test data. On the premise of improving human action recognition accuracy, the computational cost is reduced as far as possible by reducing the data dimensionality and compressing the network model, thereby increasing the recognition speed.

Description

Motion recognition method based on channel state information in WIFI and improved convolutional neural network
Technical Field
The invention relates to the technical field of communication, in particular to an action identification method based on channel state information in WIFI and an improved convolutional neural network.
Background
Military training inevitably places a physical burden on the body and carries a risk of injury. Besides unreasonable training volume and intensity, non-standard movements are a major cause of injury. Correctly performed, standardized movements promote the contraction and extension of muscles and the rotation of joints, so that training strengthens the body as intended. However, the number of trainees in military training activities is large: an instructor cannot attend to every trainee's movements, and trainees themselves often lack the expertise to confirm that their movements meet the training standard. Under such haphazard guidance, trainees may suffer injuries of varying severity, from muscle soreness to accidental harm, defeating the original purpose of the training. It is therefore necessary to accurately monitor and recognize trainees' movements, assess their accuracy and conformity to the standard, and use this assessment as a basis for instructor guidance and trainee correction, so as to improve training outcomes and ensure the safety of military training activities.
Human action recognition refers to identifying the motion state of a monitored subject using specific equipment and detection algorithms. Common approaches include video-based methods and wearable-device-based methods. Video-based methods acquire data through a camera, analyze the visual motion characteristics of the human body in the video, and extract human-body features to complete action recognition. Wearable-device-based methods require the subject to wear sensing devices such as accelerometers, inertial navigation units, and barometers; features such as the mean, variance, standard deviation, entropy, and waveform correlation are extracted from the sensor data and fed to a classifier. Key information may be lost or blurred during feature extraction, however, causing large fluctuations in recognition performance. Moreover, the intrusiveness of the devices cannot be eliminated, which harms the user experience. Recognition based on WiFi signals is a more recently proposed human action recognition approach: it analyzes the Received Signal Strength Indicator (RSSI) or Channel State Information (CSI) at the receiving device and cannot capture the user's external visual appearance, so it avoids the risk of privacy disclosure, rules out covert filming, and removes the resistance users feel toward being monitored.
Compared with cameras and wearable devices, the hardware cost of CSI-based wireless sensing is lower and user comfort is higher. Finally, CSI-based wireless sensing exploits the multipath effect: by receiving the channel state information of radio waves after scattering, diffraction, and other propagation phenomena, it imposes few constraints on indoor furniture or decoration, does not suffer from the occlusion problem that is difficult for video recognition to solve, and achieves high recognition capability. However, traditional Wi-Fi-based human action recognition methods feed the extracted feature sequences into a classification algorithm for recognition; the classification result is strongly affected by the choice and number of features, computational efficiency is low, and dynamic monitoring and recognition of human actions cannot be achieved. The present invention addresses these problems.
Disclosure of Invention
The invention aims to provide a shallow convolutional neural network model, Sha-CNN, based on Wi-Fi signal data. On the premise of improving human action recognition accuracy, the computational cost is reduced as far as possible by reducing the data dimensionality and compressing the network model, thereby increasing the human action recognition speed.
In order to achieve the purpose, the invention is realized by the following technical scheme:
the invention relates to a motion identification method based on channel state information in WIFI and an improved convolutional neural network, which comprises the following steps:
step 1, data acquisition: collecting human body action WI-FI signals in an indoor sparse environment, and collecting channel state information at a receiving end;
step 2, data preprocessing: extracting amplitude information in a human body action WI-FI signal, removing high-frequency noise generated by disturbing factors through a low-pass filter, extracting effective data by using a threshold-based method, sampling uniform data size through a sliding window, simultaneously adding a label, and then dividing a training set and a test set;
step 3, constructing an improved Sha-CNN, extracting features of the training set with the improved Sha-CNN, and obtaining global features of the human action image by adopting a local feature fusion strategy;
and 4, training an action classification model by using the improved Sha-CNN, inputting the test set to perform action classification prediction, and comparing the predicted labels against the actual action classes.
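The windowing and train/test split of steps 1-2 can be sketched as follows. This is a minimal illustration only: the window length of 700 samples (borrowed from the 700-packet dimension given later in the description), the 50% overlap, and the placeholder labels are all assumptions not fixed by the patent.

```python
import numpy as np

def sliding_windows(amplitude, win_len=700, step=350):
    """Cut a (time, subcarrier) amplitude array into uniform-size windows."""
    windows = []
    for start in range(0, amplitude.shape[0] - win_len + 1, step):
        windows.append(amplitude[start:start + win_len])
    return np.stack(windows)

def split_70_30(samples, labels, seed=0):
    """Hold-out split: 70% training set, 30% test set (the ratio stated in the patent)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(0.7 * len(samples))
    tr, te = idx[:cut], idx[cut:]
    return samples[tr], labels[tr], samples[te], labels[te]

# toy example: 2800 time samples x 30 subcarriers, one label per window
amp = np.random.rand(2800, 30)
X = sliding_windows(amp)            # shape (7, 700, 30)
y = np.zeros(len(X), dtype=int)     # placeholder action labels
Xtr, ytr, Xte, yte = split_70_30(X, y)
```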
The invention is further improved in that: the labeled data is divided into a training set (70%) and a test set (30%).
The invention is further improved in that: the local feature fusion strategy operates as follows. The extracted action characteristics are decomposed into two or more independent actions, and the modal features of the independent-action parameter models $A_1, A_2, \ldots, A_n$ are imported into a Mix module. The Mix module performs a mixed-characteristic-parameter operation on the characteristic parameters of the independent-action parameter models according to

$$s_k^{i+1} = \sum_{n=1}^{N} a_n^i \, s_{k,n}^i$$

where $a_n^i$ is the weighting factor of the $i$-th layer of the Mix model for action model $A_n$, with value range $(0, 1)$; $s_{k,n}^i$ is the action characteristic parameter of the $k$-th action of the $i$-th layer of action model $A_n$; the weighting factors of one layer satisfy $\sum_{n=1}^{N} a_n^i = 1$; and $N$ is the number of classes of the action-model classification.
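The layer-wise mixing above can be sketched numerically as follows; the two-model case and the equal weight values are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def mix_layer(features, weights):
    """Mix per-model feature parameters s_{k,n}^i into the next-layer value s_k^{i+1}.

    features: (N, K) array - K feature parameters from each of N action models
    weights:  (N,) array   - layer weighting factors a_n^i, each in (0,1), summing to 1
    """
    weights = np.asarray(weights, dtype=float)
    assert np.all((weights > 0) & (weights < 1)) and np.isclose(weights.sum(), 1.0)
    return weights @ np.asarray(features, dtype=float)

# two independent action models (N = 2), three feature parameters each
s_i = np.array([[1.0, 2.0, 3.0],    # model A1, layer i
                [3.0, 2.0, 1.0]])   # model A2, layer i
s_next = mix_layer(s_i, [0.5, 0.5])  # weighted sum sent to the next layer
```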
The invention is further improved in that: the improved Sha-CNN used in step 4 comprises convolutional, pooling, and nonlinear layers.
The invention is further improved in that: the size of the convolutional layer is 1 × 1, the depth is n, a convolutional kernel with the size of 3 × 3 is adopted for conv1 and conv2, the step size is 2, the size of the pooling layer is 2 × 2, and the step size movement is set to be 2; conv3 convolution kernel size was set to 2 × 20, vertical convolution step was set to 20, horizontal convolution step was set to 1, pooling layer size was set to 2 × 10, and step was set to 1.
The invention is further improved in that: the input to batch regularization is a mini-batch of dimension m, B = {x_1, x_2, ..., x_m}, together with the parameters γ and β to be learned. First the mean μ_B of the mini-batch is calculated:

$$\mu_B = \frac{1}{m}\sum_{\omega=1}^{m} x_\omega$$

where x_ω denotes the ω-th item of the mini-batch B = {x_1, x_2, ..., x_m}, ω ∈ [1, m], and x_m is the last item of B.

The variance σ_B² of the mini-batch is then calculated:

$$\sigma_B^2 = \frac{1}{m}\sum_{\omega=1}^{m}\left(x_\omega - \mu_B\right)^2$$

A normalization operation then produces the normalized value x̂_ω and the output y_ω:

$$\hat{x}_\omega = \frac{x_\omega - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_\omega = \gamma\,\hat{x}_\omega + \beta$$

where γ and β are obtained by model training and are fixed within each training run. Through this calculation, the forward computation path is shortened without increasing the number of network layers, effectively saving training time.
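The batch regularization computation above can be sketched as follows; the epsilon value and the one-dimensional mini-batch are illustrative assumptions.

```python
import numpy as np

def batch_regularize(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch x = {x_1..x_m} (one feature), then scale and shift."""
    mu = x.mean()                          # mini-batch mean mu_B
    var = x.var()                          # mini-batch variance sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized value x_hat
    return gamma * x_hat + beta            # learned scale gamma and shift beta

batch = np.array([1.0, 2.0, 3.0, 4.0])
out = batch_regularize(batch, gamma=2.0, beta=0.5)
```

After this transform the output has mean beta and standard deviation approximately gamma, so each layer sees inputs with a stable feature distribution.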
The invention has the beneficial effects that: 1. The convolution kernels, strides, and pooling-layer sizes are optimized to better fit the shape characteristics of the action matrix data; the global and local features of the human action image are fused; the parameter learning capability of the convolutional layers is improved with fewer model parameters; and the parameter learning burden of the fully connected layer is reduced.
2. A batch regularization layer is integrated into the convolutional layers, accelerating network convergence and preventing the loss of generalization performance caused by differing distributions of training and test data. On the premise of improving human action recognition accuracy, the computational cost is reduced as far as possible by reducing the data dimensionality and compressing the network model, thereby increasing the human action recognition speed.
Drawings
Fig. 1 is a flowchart of the human body motion recognition task of the present invention.
FIG. 2 is a flow diagram of motion recognition model training based on the modified Sha-CNN.
Fig. 3 is a schematic diagram of a human body motion collection environment.
Fig. 4 is a comparison graph of the channel state information data characteristic before and after filtering.
FIG. 5 is a graph of the convergence rate of the motion recognition model of the training sample set according to the present invention.
FIG. 6 shows the accuracy of action type recognition under the present invention.
FIG. 7 is a schematic structural diagram of a modified Sha-CNN.
Fig. 8 is a schematic diagram of the Mix module.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
FIG. 1 is a flowchart of the human action recognition task, which classifies and recognizes trainees' actions: the same action performed by different trainees is assigned to the same class, and the different actions of one person are assigned to their correct classes. A classifier automatically discovers features in the samples and classifies previously unseen samples; the task is essentially a multi-class classification problem.
The method comprises the following specific steps:
step 1, data acquisition. Data are acquired with a commercial router and an industrial personal computer in an indoor sparse environment; the line connecting the transmitting and receiving ends intersects the center line of the room perpendicularly at its midpoint. The effective measurement range is about 1 m wide and 3 m long. When the action data are measured, the volunteer remains relatively stationary at the center of the measurement range and performs the action entirely in place.
And 2, preprocessing data. Amplitude information is extracted from the signal, and, to better observe the waveform characteristics, the high-frequency noise produced by interference factors such as the environment and irrelevant channels is removed with a Butterworth low-pass filter. As shown in FIG. 4, after filtering the pass-band noise and burrs essentially disappear, the fluctuation of the frequency-response curve is reduced and the curve is maximally smoothed, the stop-band frequency response falls slowly to zero, and amplitude data free of interference such as noise are obtained. The Butterworth filtering is implemented in MATLAB; considering the noise introduced into high-frequency data by various external factors, a 9th-order Butterworth low-pass filter with a pass band of 3-200 Hz is adopted to eliminate it. In FIG. 4 the upper image shows the data before filtering and the lower image the data after filtering; allowing for the filter delay, the amplitude characteristics before and after filtering are consistent, and comparing the CSI amplitude waveforms before and after Butterworth filtering shows that the filtered waveform is smoother. Effective data are then extracted with a threshold-based method, samples of uniform size are obtained through a sliding window, and labels are added. Finally, the training set and test set are divided by the hold-out method in a 7:3 ratio.
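A hedged sketch of the filtering step in Python (the patent implements it in MATLAB): the filter order of 9 and 200 Hz cutoff follow the text, while the 1 kHz sampling rate is an assumption; a zero-phase filter is used here to sidestep the delay issue mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_csi_amplitude(amplitude, fs=1000.0, cutoff=200.0, order=9):
    """Remove high-frequency noise from a CSI amplitude series with a
    Butterworth low-pass filter (forward-backward, so zero phase delay)."""
    b, a = butter(order, cutoff / (fs / 2.0), btype='low')
    return filtfilt(b, a, amplitude)

# toy signal: a slow motion component plus high-frequency noise
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.3 * np.sin(2 * np.pi * 400 * t)
filtered = denoise_csi_amplitude(noisy)   # 400 Hz component heavily attenuated
```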
And 3, designing a model. A convolutional neural network with a relatively shallow number of layers (Sha-CNN) is designed according to the WI-FI signal characteristics of human actions; its structure is shown schematically in FIG. 7. The network consists of convolutional, pooling, and nonlinear layers and is used to extract amplitude features of the WI-FI signal: the lower-layer feature maps have small receptive fields and focus on learning simple characteristics of the amplitude information, while the subsequent convolutional layers are responsible for extracting complex features and learning the semantic characteristics of the amplitude information. As the number of layers increases, the receptive field of the convolution kernels in the feature maps grows correspondingly, and the global feature representation capability is enhanced. To obtain sufficiently rich human action image features and thereby a good classification effect, the invention adopts a local feature fusion strategy, adding two relatively independent single-channel networks together to obtain the global features of the human action image. (For example, the marching step in military training can be regarded as two independent features, the right hand swung across the chest and the left foot moved forward; the two are temporally correlated, so their features stand in a relationship of mutual supervision or reference.) An action can thus be decomposed into two or more independent actions, and since the action-model channels are independent single channels, the modal features of the independent parameter models A_1, A_2, ..., A_n can be imported into a Mix module. A schematic diagram of the Mix module of this embodiment is shown in FIG. 8; the Mix module to the left of the classifier in the figure performs the mixed-characteristic-parameter operation on the characteristic parameters extracted from action models A_1 and A_2. In the figure, s_{k,n}^i is the action characteristic parameter of the k-th action of the i-th layer of model A_n; at each summation node a weight value is set, and the weighted result is sent to the next layer, so that the parameters are propagated through to the last layer to obtain the mixed feature s^{N_max}. The mixed feature is sent to the classifier for classification, and the resulting action classification effect is better than that of an independent classifier. The mixed characteristic parameter of each layer is computed layer by layer as

$$s_k^{i+1} = \sum_{n=1}^{N} a_n^i \, s_{k,n}^i$$

where a_n^i is the weighting factor of the i-th layer of the Mix model for action model A_n, s_{k,n}^i is the action characteristic parameter of the k-th action of the i-th layer of A_n, and N is the number of classes of the action-model classification; in this example only two independent actions are distinguished, so N = 2. The weighting factors a_1^i and a_2^i may be chosen freely in the range (0, 1), subject to the relationship a_1^i + a_2^i = 1.
To accelerate network convergence and prevent the loss of generalization performance caused by differing distributions of training and test data, a batch regularization mechanism is added to the convolutional layers of the improved Sha-CNN. Batch regularization standardizes the input values and reduces their spread: on the one hand this improves the convergence of the gradient and speeds up training; on the other hand each layer faces inputs with as similar a feature distribution as possible, which reduces the uncertainty caused by distribution changes, lessens the influence on the following weight layers, and makes the parameter adjustment of each weight layer relatively independent. The loss function is the cross-entropy function and the activation function is the ReLU, chosen to preserve training speed and avoid the slowdown caused by the possible saturation of the Sigmoid and Tanh functions at their extremes. The input to batch regularization is a mini-batch data set B = {x_1, x_2, ..., x_m} of dimension m, together with the parameters γ and β to be learned.

First, the mean μ_B of the mini-batch is calculated:

$$\mu_B = \frac{1}{m}\sum_{\omega=1}^{m} x_\omega$$

where x_ω denotes the ω-th item of the mini-batch B = {x_1, x_2, ..., x_m}, ω ∈ [1, m], and x_m is the last item of B.

The variance σ_B² of the mini-batch B is then calculated:

$$\sigma_B^2 = \frac{1}{m}\sum_{\omega=1}^{m}\left(x_\omega - \mu_B\right)^2$$

A normalization operation is then performed to standardize the input distribution, which otherwise changes at every network layer: the input data are normalized to the standard normal distribution, so that the inputs of each layer fall into the region where the nonlinear transformation function is most sensitive to its input. This overcomes the vanishing-gradient problem and accelerates the convergence of network training. The normalized value x̂_ω and the output y_ω are computed as

$$\hat{x}_\omega = \frac{x_\omega - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_\omega = \gamma\,\hat{x}_\omega + \beta$$

where γ and β are obtained by model training and are fixed within each training run. Through this calculation, the forward computation path is shortened without increasing the number of network layers, effectively saving training time.
The specific process is as follows: when data are transmitted between hidden layers, the convolution result of a hidden layer is first standardized per batch, and after scaling, shifting, and related processing it is passed through the activation function to the next layer of the network. The cross-entropy function serves as the loss function and the ReLU as the activation function, preserving training speed and avoiding the slowdown caused by the possible saturation of the Sigmoid and Tanh functions at their extremes.
The input raw data are action-data feature maps, i.e., amplitude sequences of uniform size obtained through sliding-window sampling. Denote the amplitude feature map of the action data by X_j and the labeled amplitude data set by D_2 = {(X_j, Y_j)}, j = 1, ..., J; D_2 contains J CSI sequences and their labels. For each j, X_j denotes the j-th sequence and Y_j is the classification value, here the action label. The action recognition problem is to build a model that predicts the label Y from the input X.
The invention uses one antenna pair when collecting the action data. After compression, the data dimension for one antenna pair is 700 × 30 × 1, where 700 is the number of data packets, 30 the number of subcarriers, and 1 the antenna pair. The input to conv1 is therefore a 30 × 90 matrix, where 90 is the time dimension and 30 the subcarrier dimension. The recognition module first processes the obtained feature map further, then performs regression on each feature point of the feature map to predict whether an action is contained and its category. The pooling layer zero-pads the feature map to keep the feature-map sizes consistent before and after the convolution operation.
The specific design and explanation are as follows:
first, in designing convolutional layers, this example uses convolution kernels of 3 × 3 size for conv1 and conv2 (since the convolution recognition efficiency is highest when the motion sample is 3 as verified by experiments) with reference to VGGNet-16, the step size is 2, the pooling layer size is 2 × 2, and the step movement is set to 2. The conv3 convolution kernel size is set to 2 × 20, the vertical convolution step is set to 20, the horizontal convolution step is set to 1, the pooling layer size is set to 2 × 10, and the step is set to 1, so as to better fit the shape characteristics of the motion matrix data, improve the parameter learning capability of the convolution layer, and reduce the parameter learning burden of the following all-connected layer.
And 4, training the model, then recognizing and classifying. The action training set divided in step 2 is fed into the designed Sha-CNN for action classification model training; once training is finished, the test set is input for action classification prediction, the predicted labels are compared against the actual action classes, and the human action recognition accuracy is studied. Training is divided into two stages. First, the data set is input into the network model; after the per-unit operation "convolution → batch regularization → ReLU activation" and max pooling inside each convolutional block, the corresponding weights are output, the bias computed by the hidden neurons is added, and the result is passed through the fully connected computation to obtain the output value. Second, the deviation between the output value and the target value is computed and fed into a gradient descent procedure: the loss function and the loss of randomly selected sample points are obtained, the gradient vector of the sample-point loss function is computed, the function parameters are initialized to obtain the corresponding gradient vector, and gradient descent iterates with the chosen step size until one round of iterative training is completed. During training, all training data are iterated 40 times, i.e., Epoch is set to 40; according to the data volume, each iteration is divided into 9 batches, so model training requires 360 batches in total, and the test-set accuracy and Loss value are output every ten batches.
Training of the recognition model ends when the convergence condition is reached (the descending gradient reaches the initially set convergence threshold of 0.001, or the number of iterations exceeds the initially set maximum of 40). After 40 iterations, the convergence value reached 0.00186918.
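The two-stage loop described above (forward pass, then gradient descent on the cross-entropy loss) can be sketched for a single linear softmax layer. This is a toy stand-in for the Sha-CNN: the 8 classes and 40 epochs follow the embodiment, while the feature dimension, learning rate, and random data are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    """Mean cross-entropy loss of predicted probabilities p against labels y."""
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 16))              # toy features: 90 samples, 16 dims
y = rng.integers(0, 8, size=90)            # 8 action classes, as in the embodiment
W = np.zeros((16, 8))                      # single linear layer standing in for the CNN

losses = []
for epoch in range(40):                    # Epoch = 40, as in the patent
    p = softmax(X @ W)                     # stage 1: forward pass
    losses.append(cross_entropy(p, y))
    grad = X.T @ (p - np.eye(8)[y]) / len(y)   # stage 2: gradient of the loss
    W -= 0.5 * grad                        # gradient descent step
```

At initialization the predictions are uniform over the 8 classes, so the first loss equals ln 8; the loss then decreases as training proceeds.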
Verification of the action-type recognition accuracy is shown in FIG. 6. Using the trained action-type recognition model, action prediction is performed on the CSI test set collected and preprocessed while a volunteer performed 8 actions in front of the antennas (waving, sitting down, bending at the waist, stepping, leaning back, standing, stepping forward, and turning); the predicted human actions are compared with the actual actions of the test set, and the recognition accuracy of each action is calculated. Averaging the recognition accuracy over all actions gives a total action-type recognition rate of 92.62%, and a confusion matrix is drawn for the recognition of the different actions to visually reflect the accuracy of the human action-type recognition model. Experiments show that the model's recognition accuracy is high for the four actions of sitting down, standing up, stepping forward, and turning back, at 93.24%, 93.06%, 93.65%, and 93.74% respectively, and lower for the two actions of waving and bending back, at 91.76% and 91.18%; the overall recognition accuracy of the model reaches 92.63%, showing that the human action-type recognition model designed and trained by the invention recognizes the volunteers' different actions with high accuracy.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A motion recognition method based on channel state information in WIFI and an improved convolutional neural network, characterized in that the method comprises the following steps:
step 1, data acquisition: collecting human-action WI-FI signals in a sparse indoor environment, and collecting channel state information at the receiving end;
step 2, data preprocessing: extracting the amplitude information from the human-action WI-FI signals, removing high-frequency noise caused by interference with a low-pass filter, extracting valid data with a threshold-based method, sampling the data to a uniform size with a sliding window while adding labels, and then dividing the data into a training set and a test set;
step 3, constructing an improved sha-CNN with an added batch regularization mechanism, extracting features from the training set, and obtaining global features of the human-action image with a local feature fusion strategy;
step 4, training an action classification model with the improved sha-CNN, inputting the test set for action classification prediction, and comparing the predicted labels with the actual action classes.
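The preprocessing of step 2 can be sketched as below, under stated assumptions: the low-pass filter is approximated by a moving average (the claim does not fix a filter design), valid-data extraction uses a simple amplitude threshold, and fixed-size windows are cut with a sliding window so every sample fed to the network has a uniform size. All sizes and thresholds here are illustrative.

```python
import numpy as np

def lowpass(x, k=5):
    # Moving-average stand-in for the low-pass filter in step 2.
    return np.convolve(x, np.ones(k) / k, mode="same")

def segment(x, thresh):
    # Threshold-based extraction of "active" (valid-data) sample indices.
    return np.flatnonzero(np.abs(x - x.mean()) > thresh)

def sliding_windows(x, size, stride):
    # Cut fixed-size windows so every training sample has a uniform shape.
    return np.stack([x[i:i + size]
                     for i in range(0, len(x) - size + 1, stride)])

# Synthetic CSI amplitude trace: quiet, an "action" burst, quiet again.
amp = np.concatenate([np.zeros(50), np.sin(np.linspace(0, 6, 100)), np.zeros(50)])
filtered = lowpass(amp)
active = segment(filtered, thresh=0.1)
windows = sliding_windows(filtered, size=40, stride=20)
```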
2. The method of claim 1, wherein the labeled data is divided into a 70% training set and a 30% test set.
3. The method of claim 1, wherein the local feature fusion strategy operates as follows: the extracted action features are decomposed into two or more independent actions, parameter models A_1, A_2, …, A_N of the independent actions are obtained, and their respective modal features are imported into a Mix module; the Mix module performs a mixed-feature-parameter operation on the feature parameters of the independent-action parameter models according to the formula:

â_{i,k} = Σ_{n=1}^{N} λ_i^{(n)} · a_{i,k}^{(n)}

where λ_i^{(n)} is the weighting factor of the i-th layer of the Mix model, a_{i,k}^{(n)} is the action feature parameter of the k-th action at the i-th layer of action model A_n, â_{i,k} is the resulting mixed feature parameter, and N is the number of categories classified by the action model.
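Claim 3 describes combining the feature parameters of N independent-action models with per-layer weighting factors; a minimal NumPy sketch of such a layer-wise weighted fusion follows. The array shapes, the uniform weights, and the exact combination rule are illustrative assumptions, since the claim's original equation images are not reproduced here.

```python
import numpy as np

# Weighted fusion of per-layer feature parameters from N independent
# action models: features[n, i, k] is the feature parameter of the k-th
# action at the i-th layer of model A_n, and weights[n, i] is the
# weighting factor of the i-th layer for model A_n.
def mix(features, weights):
    # Sum over the model axis n: result[i, k] = sum_n w[n,i] * f[n,i,k]
    return np.einsum("nik,ni->ik", features, weights)

feats = np.ones((3, 2, 4))            # N=3 models, 2 layers, 4 actions
lam = np.full((3, 2), 1.0 / 3.0)      # uniform layer weights (sum to 1)
fused = mix(feats, lam)               # mixed feature parameters, shape (2, 4)
```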
4. The method of claim 1, wherein the improved sha-CNN in step 4 comprises convolutional, pooling and nonlinear layers, with a batch regularization mechanism added to the convolutional layers.
5. The method of claim 4, wherein the convolutional layer size is 1 × 1 with depth n; conv1 and conv2 use 3 × 3 convolution kernels with stride 2, and the corresponding pooling layers are 2 × 2 with stride 2; the conv3 kernel size is set to 2 × 20 with a vertical convolution stride of 20 and a horizontal convolution stride of 1, and the corresponding pooling layer is 2 × 10 with stride 1.
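The layer geometry in claim 5 can be checked with a small "valid"-convolution size calculation. The input size below is a hypothetical CSI image (packets × subcarriers); the claim does not specify it, so 500 × 500 is chosen purely so that every layer produces a positive output size.

```python
# Size-only walk through the claim-5 geometry (no learning involved):
# conv1/conv2: 3x3 kernel, stride 2, each followed by 2x2 pooling, stride 2;
# conv3: 2x20 kernel, vertical stride 20, horizontal stride 1,
# followed by 2x10 pooling with stride 1.
def conv_out(n, k, s):
    return (n - k) // s + 1          # "valid" convolution output length

h, w = 500, 500                       # hypothetical CSI "image" size
for k, s in [(3, 2), (2, 2), (3, 2), (2, 2)]:   # conv1, pool1, conv2, pool2
    h, w = conv_out(h, k, s), conv_out(w, k, s)
h, w = conv_out(h, 2, 20), conv_out(w, 20, 1)   # conv3: 2x20 kernel
h, w = conv_out(h, 2, 1), conv_out(w, 10, 1)    # pool3: 2x10, stride 1
```

For this input the feature map shrinks to 1 × 2 after pool3, showing how the strongly anisotropic conv3/pool3 stage collapses the vertical (packet) axis while retaining a little horizontal (subcarrier) extent.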
6. The method of claim 1, wherein the input data of batch regularization is a mini-batch of size m, B = {x₁, x₂, …, x_m}, with parameters γ and β to be learned; first the mean μ_B of the mini-batch is calculated:

μ_B = (1/m) Σ_{ω=1}^{m} x_ω

where x_ω denotes the ω-th data item in the mini-batch data set B = {x₁, x₂, …, x_m}, ω ∈ [1, m], and x_m is the last data item in B;

then the variance σ_B² of the mini-batch is calculated:

σ_B² = (1/m) Σ_{ω=1}^{m} (x_ω − μ_B)²

then the normalization operation is performed to obtain the normalized value x̂_ω, and the output is computed with the learned scale and shift:

x̂_ω = (x_ω − μ_B) / √(σ_B² + ε)

y_ω = γ · x̂_ω + β

where γ and β are obtained through model data training and are fixed values within each model training.
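The batch regularization (batch normalization) formulas of claim 6 translate directly to NumPy. The small epsilon under the square root for numerical stability and the toy mini-batch values are standard-practice assumptions for illustration.

```python
import numpy as np

# Mini-batch mean, variance, normalization, then the learned scale
# (gamma) and shift (beta), exactly as in the claim-6 formulas.
def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # mini-batch mean  mu_B
    var = x.var(axis=0)                    # mini-batch variance sigma_B^2
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalized value x_hat
    return gamma * x_hat + beta            # y = gamma * x_hat + beta

B = np.array([[1.0], [2.0], [3.0], [4.0]])   # toy mini-batch, m = 4
y = batch_norm(B, gamma=2.0, beta=0.5)
```

After normalization the batch has zero mean and unit variance, so the output's mean equals beta and its standard deviation equals (approximately) gamma.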
CN202210278541.5A 2022-03-21 Action recognition method based on channel state information in WIFI and improved convolutional neural network Active CN114626419B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210278541.5A CN114626419B (en) 2022-03-21 Action recognition method based on channel state information in WIFI and improved convolutional neural network

Publications (2)

Publication Number Publication Date
CN114626419A true CN114626419A (en) 2022-06-14
CN114626419B CN114626419B (en) 2024-06-28

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117812552A * 2023-12-27 2024-04-02 Shaoguan University WiFi signal human behavior recognition method and system based on a data packet compression network
CN117951614A * 2024-03-21 2024-04-30 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center (Anhui Artificial Intelligence Laboratory) Object activity recognition method and device based on channel state information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657584A * 2018-12-10 2019-04-19 Chang'an University Improved LeNet-5 fusion network traffic sign recognition method for assisted driving
CN112308133A * 2020-10-29 2021-02-02 Chengdu Mingjie Technology Co., Ltd. Modulation identification method based on convolutional neural network
JP6980958B1 * 2021-06-23 2021-12-15 Northwest Institute of Eco-Environment and Resources, Chinese Academy of Sciences Deep-learning-based classified garbage identification method for rural areas

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHI Cuiping; TAN Cong; ZUO Jiang; ZHAO Kexin: "Facial expression recognition based on an improved AlexNet convolutional neural network", Telecommunication Engineering, no. 09, 27 September 2020 (2020-09-27), pages 11 - 18 *


Similar Documents

Publication Publication Date Title
CN108629380B (en) Cross-scene wireless signal sensing method based on transfer learning
CN111027487B (en) Behavior recognition system, method, medium and equipment based on multi-convolution kernel residual error network
CN104063719B (en) Pedestrian detection method and device based on depth convolutional network
CN107007257B Automatic grading method and apparatus for the degree of unnaturalness of a face
CN103294199B Silent speech recognition system based on facial muscle signals
CN110070029A Gait recognition method and device
CN107290741A Indoor human posture recognition method based on weighted joint distance and time-frequency transformation
CN111597991A (en) Rehabilitation detection method based on channel state information and BilSTM-Attention
CN110852382A (en) Behavior recognition system based on space-time multi-feature extraction and working method thereof
CN106529377A (en) Age estimating method, age estimating device and age estimating system based on image
CN111466878A (en) Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition
CN107609477A Fall detection method based on deep learning combined with a smart bracelet
CN106667506A Lie detection method and device based on electrodermal response and pupil change
CN107967941A UAV health monitoring method and system based on intelligent visual reconstruction
CN113069117A (en) Electroencephalogram emotion recognition method and system based on time convolution neural network
CN111262637A (en) Human body behavior identification method based on Wi-Fi channel state information CSI
Wang et al. Distortion recognition for image quality assessment with convolutional neural network
CN109740418B (en) Yoga action identification method based on multiple acceleration sensors
CN112364770B (en) Commercial Wi-Fi-based human activity recognition and action quality evaluation method
CN111652132B (en) Non-line-of-sight identity recognition method and device based on deep learning and storage medium
Fan et al. Real-time machine learning-based recognition of human thermal comfort-related activities using inertial measurement unit data
CN112132788B (en) Bone age assessment method based on characteristic region grade identification
CN114626419A (en) Motion recognition method based on channel state information in WIFI and improved convolutional neural network
Jakkala et al. Deep CSI learning for gait biometric sensing and recognition
Zhou et al. Deep-WiID: WiFi-based contactless human identification via deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant