CN113837122A - Wi-Fi channel state information-based non-contact human body behavior identification method and system - Google Patents

Info

Publication number: CN113837122A (granted as CN113837122B)
Application number: CN202111143085.5A
Authority: CN (China)
Prior art keywords: signal, behavior recognition, recognition result, data, behavior
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 程克非, 徐家顺, 张亮, 陈京浩, 罗维
Original and current assignee: Chongqing University of Post and Telecommunications

Classifications

    • G06N3/045 Combinations of networks (G06N3/02 Neural networks; G06N3/04 Architecture)
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • H04B17/30 Monitoring; testing of propagation channels
    • G06F2218/08 Feature extraction (pattern recognition adapted for signal processing)
    • G06F2218/12 Classification; matching
    • Y02D30/70 Reducing energy consumption in wireless communication networks


Abstract

The invention belongs to the technical field of behavior recognition, and in particular relates to a non-contact human body behavior recognition method and system based on Wi-Fi channel state information. The method comprises: acquiring raw channel state information data for different actions in real time, preprocessing the acquired data, and extracting features from the preprocessed data; the extracted features are then input into a trained machine learning model and a classification model to obtain the final behavior recognition result. By using a parallel classification model that combines a GRU and a CNN, the invention establishes a strong correlation between human actions and CSI signal variation patterns, achieves good human action recognition, and improves the accuracy of recognizing human actions from Wi-Fi signals.

Description

Wi-Fi channel state information-based non-contact human body behavior identification method and system
Technical Field
The invention belongs to the technical field of behavior recognition, and particularly relates to a non-contact human body behavior recognition method and system based on Wi-Fi channel state information.
Background
There are three representative approaches to human behavior recognition: video-based methods, wearable-device-based methods, and Wi-Fi-signal-based methods. Video-based methods are easily affected by light intensity and risk invading privacy; wearable-device-based methods require the person to wear additional hardware, and such devices are expensive.
With the development and maturation of wireless network technology, Wi-Fi equipment such as wireless routers is installed in many homes and public places; such equipment is easy to carry and install, widely distributed, and low-cost, forming one of the largest sensing networks. Wi-Fi uses OFDM (orthogonal frequency-division multiplexing) to transmit wireless signals, dividing the signal into many concurrent subcarriers in the frequency domain. For signals transmitted over OFDM, two acquisition modes exist: the conventional RSSI (received signal strength indication) technique and the CSI (channel state information) technique. RSSI reflects only the total amplitude of the multipath superposition, whereas CSI presents the amplitude and phase of multipath propagation at different frequencies (corresponding to different subcarriers), and thus characterizes a channel with frequency-selective fading more accurately. CSI describes the fading factor of the signal on each transmission path, i.e. the value of each element in the channel gain matrix, covering effects such as signal scattering, environmental fading, and distance attenuation. Because the sensitivity of CSI is far higher than that of RSSI, the pattern of human-body interference with the channel can be identified from these physical electrical characteristics, completing the recognition.
Most existing human behavior recognition approaches suffer from low recognition accuracy and poor robustness, caused by inadequate data denoising, behavior recognition limited to a single target and scene, too few effective extracted features, poor generalization capability, high learning and training cost, and a lack of fine-grained spatio-temporal modeling of human behavior in the wireless-signal space.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a non-contact human behavior identification method based on Wi-Fi channel state information. Channel state information data acquisition equipment is deployed at a signal source, and the position of each data acquisition device serves as a monitoring point. The method comprises the following specific steps:
S1: acquiring CSI subcarrier data of different actions transmitted by a signal source in real time to obtain a first signal;
S2: preprocessing the first signal at the monitoring point to obtain a second signal;
S3: extracting a feature vector of the second signal, and selecting the maximum component related to the time sequence as the main feature variable of the data to obtain a third signal;
S4: extracting a feature vector of the second signal, and selecting the maximum component related to the phase and the amplitude as the main feature variable of the data to obtain a fourth signal;
S5: inputting the third signal and the fourth signal at each monitoring point into the trained machine learning model to obtain a first behavior recognition result and a second behavior recognition result;
S6: inputting the first behavior recognition result and the second behavior recognition result into a processing function to obtain the third human behavior recognition result of the monitoring point;
S7: inputting the third human behavior recognition result into the classification model to obtain the final behavior recognition result.
Preferably, the signal source is a Wi-Fi signal source.
Preferably, preprocessing the first signal includes performing data denoising, data smoothing and data dimension reduction on the first signal.
Preferably, obtaining the third signal comprises: extracting features of the second signal with a time-series mining analysis method, and screening out from the extracted features the maximum component related to the time series as the third signal.
Preferably, the obtaining of the fourth signal includes performing feature extraction on the second signal by using a time domain and frequency combination analysis method, so as to obtain amplitude and phase information of the CSI subcarrier changing along with the human body behavior; and screening the extracted information by adopting a sliding window method to obtain a maximum component related to the phase and the amplitude, and taking the component as a fourth signal.
Preferably, the machine learning model comprises a gated cyclic unit and a convolutional neural network; performing behavior recognition on the third signal by adopting a trained gating cycle unit to obtain a first behavior recognition result; and performing behavior recognition on the fourth signal by adopting the trained convolutional neural network to obtain a second behavior recognition result.
Preferably, the processing function is a Softmax function.
Preferably, the classification model is a K-nearest neighbor algorithm model.
A system for contactless human behavior recognition based on Wi-Fi channel state information, the system comprising: the system comprises a data acquisition module, a data processing module, a front-end server and a cloud platform server;
the data acquisition module is channel state information data acquisition equipment which is used for receiving a CSI subcarrier signal to obtain a first signal;
the data processing module comprises a denoising module and a feature extraction module;
the denoising module is used for denoising, smoothing and data dimension reduction processing on the first signal to obtain a second signal;
the characteristic extraction module is used for extracting characteristics of the second signal to obtain a third signal and a fourth signal, and sending the third signal and the fourth signal to the front-end server;
the front-end server is used for identifying the behaviors of the third signal and the fourth signal to obtain a first behavior identification result and a second behavior identification result; sending the first behavior recognition result and the second behavior recognition result to a cloud platform server;
The cloud platform server computes a prediction from the first behavior recognition result and the second behavior recognition result to obtain a third behavior recognition result, and classifies the third behavior recognition result to obtain the final recognition result.
Further, the formula for obtaining the third behavior recognition result is as follows:
pred_i = w_1 · pred_GRU + w_2 · pred_CNN
the invention has the beneficial effects that: the extracted features are respectively input into GRU and CNN parallel models, then the identification result of the current monitoring node is obtained by utilizing a softmax function, as a plurality of monitoring nodes are deployed in the current environment, each node can obtain a predicted result through the method, then the predicted results of all the nodes are subjected to a KNN algorithm to obtain the final identification result, and the identification effect is good.
Drawings
FIG. 1 is a general flow diagram of the present invention for Wi-Fi CSI based human body motion recognition;
FIG. 2 is a flow chart of CSI data preprocessing according to the present invention;
FIG. 3 is a schematic diagram of a GRU unit structure according to the present invention;
FIG. 4 is a flow chart of the KNN algorithm of the present invention;
FIG. 5 is a schematic structural diagram of a Wi-Fi channel state information-based non-contact human behavior recognition system according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A non-contact human behavior identification method based on Wi-Fi channel state information comprises the steps that channel state information data acquisition equipment is deployed at a signal source, the position of the data acquisition equipment is used as a monitoring point, and as shown in figure 1, the method comprises the following specific steps:
s1: and acquiring CSI subcarrier data of different actions transmitted by a signal source in real time to obtain a first signal. The signal source is a Wi-Fi signal source, and the deployed channel state information data acquisition equipment is CSI subcarrier data acquisition equipment.
In a preferred embodiment, the wireless signal monitoring device carries an Intel 5300 wireless network card supporting the IEEE 802.11a/b/g/n protocols, together with the Linux 802.11n CSI Tool, the open-source software package released by Halperin et al. The Intel 5300 wireless network card can collect samples of the channel frequency response (CFR) on 30 OFDM subcarriers within the operating bandwidth, where the CSI signal corresponding to each subcarrier is:
CSI_k = H(f_k) = ||H_k|| · e^{j∠H_k}

where CSI_k denotes the channel state information of the k-th subcarrier, f_0 denotes the center frequency, f_k denotes the k-th subcarrier frequency, and ||H_k|| denotes the amplitude of the k-th subcarrier. CSI data, i.e. the first signal, can be collected at each node by the deployed monitoring devices.
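As a hedged illustration of this representation, the sketch below splits complex CSI samples into the amplitude and phase components described above; the function name and the toy channel-gain matrix are illustrative and do not come from the patent.

```python
import numpy as np

def csi_amplitude_phase(csi):
    """Split complex CSI samples (packets x subcarriers) into an
    amplitude matrix ||H_k|| and a phase-angle matrix angle(H_k)."""
    csi = np.asarray(csi, dtype=complex)
    return np.abs(csi), np.angle(csi)

# Toy example: 2 packets x 3 subcarriers of complex channel gains.
H = np.array([[1 + 1j, 0 + 2j, -1 + 0j],
              [3 + 4j, 1 - 1j, 0 - 2j]])
amp, phase = csi_amplitude_phase(H)
```

In practice `amp` would feed the amplitude-based features of S4 and `phase` the phase-based ones.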
S2: the monitoring point preprocesses the first signal to obtain a second signal. Preprocessing the first signal comprises data denoising, data smoothing, and data dimension reduction.
Since the first signal contains various environmental noises and high-frequency noises, redundant information in the first signal needs to be removed through preliminary filtering and noise reduction filtering, and effective information is reserved for subsequent operations.
The invention mainly uses a finite impulse response (FIR) digital filter to perform preliminary denoising of the CSI data. The window-function method is a common way to design an FIR digital filter. Because it operates in the time domain, the ideal filter's frequency response H_d(e^{jω}) is first used to derive its unit impulse response h_d(n), and then h_d(n) is approximated by the filter's unit impulse response h(n). From the ideal frequency response H_d(e^{jω}), the ideal unit impulse response h_d(n) is obtained via the inverse Fourier transform, with the following formula:
h_d(n) = (1/(2π)) ∫_{−π}^{π} H_d(e^{jω}) e^{jωn} dω
The unit impulse response h(n) of the filter is the product, in the time domain, of the ideal unit impulse response h_d(n) and a window function W_d(n), as shown in the following equation:
h(n) = h_d(n) × W_d(n)
in the present invention, the window function used is the hamming window. The time domain expression of the hamming window is:
W_d(n) = [0.54 − 0.46·cos(2πn/(N−1))] · R_N(n)
where N denotes the size of the window and R_N(n) denotes the time-domain expression of the rectangular window inside the window function, specifically:
R_N(n) = 1 for 0 ≤ n ≤ N−1, and R_N(n) = 0 otherwise
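The window-design procedure above (ideal low-pass impulse response multiplied by a Hamming window) can be sketched as follows. The tap count and normalized cutoff frequency are illustrative assumptions, not values specified in the patent.

```python
import numpy as np

def hamming_lowpass(num_taps, cutoff):
    """FIR low-pass taps: ideal sinc impulse response h_d(n)
    multiplied by a Hamming window W(n). `cutoff` is the cutoff
    frequency as a fraction of the sampling rate, in (0, 0.5)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h_d = 2 * cutoff * np.sinc(2 * cutoff * n)        # ideal low-pass h_d(n)
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(num_taps) / (num_taps - 1))
    h = h_d * w                                        # h(n) = h_d(n) * W(n)
    return h / h.sum()                                 # normalize to unity DC gain

taps = hamming_lowpass(31, 0.1)
# Smooth a noisy constant stream, as the preliminary CSI denoising would.
noisy = np.ones(100) + 0.01 * np.sin(np.arange(100))
smoothed = np.convolve(noisy, taps, mode="same")
```

The normalization step preserves the DC level of the CSI stream while attenuating high-frequency noise.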
according to the method, the preliminary filtering of the CSI data is realized, and then the data is subjected to denoising processing by using a PCA algorithm. The specific implementation steps are as follows:
(1) input sample set D ═ x1,x2,…,xnAnd centering all samples minus the mean, as shown below:
x_i ← x_i − (1/n) Σ_{j=1}^{n} x_j
(2) calculating the covariance matrix ZZ of the samplesTAnd decomposing the covariance matrix to obtain each eigenvalue lambdaiAnd a feature vector wi
(3) According to a preset reconstruction threshold t, select the smallest d′ satisfying the following formula as the dimension of the projection space:
(Σ_{i=1}^{d′} λ_i) / (Σ_{i=1}^{d} λ_i) ≥ t
where d is the dimension of the original sample space.
(4) Arrange the eigenvalues λ_i in descending order, then take the eigenvectors corresponding to the first d′ eigenvalues to form the projection matrix W = (w_1, w_2, …, w_{d′}), which is the solution of the principal component analysis.
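Steps (1) to (4) admit a compact sketch. The threshold t = 0.95 and the toy data are illustrative assumptions only.

```python
import numpy as np

def pca_reduce(D, t=0.95):
    """Center the samples, eigendecompose the covariance matrix,
    pick the smallest d' whose eigenvalue mass reaches the
    reconstruction threshold t, and project onto those directions."""
    Z = D - D.mean(axis=0)                        # (1) centering
    C = Z.T @ Z / len(Z)                          # (2) covariance matrix
    vals, vecs = np.linalg.eigh(C)                # eigh returns ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]        # sort descending
    ratio = np.cumsum(vals) / vals.sum()
    d_prime = int(np.searchsorted(ratio, t) + 1)  # (3) threshold criterion
    W = vecs[:, :d_prime]                         # (4) projection matrix
    return Z @ W, d_prime

# Toy data: two perfectly correlated columns plus a constant column,
# so a single principal component should explain all the variance.
x = np.arange(100, dtype=float)
D = np.column_stack([x, 2 * x + 3, np.ones(100)])
reduced, d_prime = pca_reduce(D)
```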
As shown in fig. 2, the data in the channel link between each pair of antennas in a series of CSI data streams is convolved with the Hamming-window low-pass filter for preliminary denoising. PCA is then used to reduce the dimensionality of the CSI data and remove redundancy, discarding the portion of the information related to noise, finally yielding an ideal smooth waveform that represents how the CSI power value changes with time under the corresponding action, i.e. the second signal.
S3: extracting the feature vector of the second signal, and selecting the maximum component related to the time sequence as the main feature variable of the data to obtain a third signal.
To better distinguish the various action behaviors and resolve the frequency similarity between different behaviors, finer-grained feature information needs to be extracted.
In the present invention, time-series data features are extracted automatically with the Python tsfresh package. tsfresh is an open-source Python package for extracting time-series features; more than 64 features can be extracted, yielding the third signal.
S4: extracting a feature vector of the second signal, and selecting the maximum component related to the phase and the amplitude as the main feature variable of the data to obtain a fourth signal.
A sliding-window method divides the behavior data into different sets along the time sequence; feature values are extracted from the data in each set to form a data set with more samples, so that a given behavior is represented more comprehensively.
The sliding-window method involves two key variables: the window size and the sliding step. The window size is the unit amount of data used in each feature extraction; the sliding step is the number of samples the window advances after each feature-extraction pass, and indirectly reflects the amount of behavior data. In practice, to meet the data requirements of the signal during time and frequency sampling, the sliding step is generally set equal to the sampling frequency of the signal, and the window size depends on the sampling frequency according to the following formula:
w = 2^{⌈log₂ f⌉}
where f is the sampling frequency of the signal.
Feature data are extracted according to the selected window size and sliding step. For time-domain features, the mean and the standard deviation are chosen as feature values. For frequency-domain features, the FFT is used to analyze the signal, extracting the DC component, the five largest FFT values and their corresponding frequencies, the signal energy, amplitude statistical features, and shape statistical features. The last two feature groups describe the energy distribution of the data in the frequency domain; the amplitude statistical features comprise the mean, standard deviation, skewness, and kurtosis of the frequency-domain signal within each window. Skewness describes the degree and direction of asymmetry, and is calculated as:
Skew = (1/N) Σ_{i=1}^{N} [(C(i) − μ_amp) / σ_amp]^3
where N represents the number of data points in each window, C(i) represents the frequency amplitude of the i-th sample in the window, and μ_amp and σ_amp are the mean and standard deviation of the sample amplitudes within the window, respectively.
The kurtosis is used to judge whether the data distribution is steeper or gentler relative to the normal distribution, and the calculation formula is:
Kurt = (1/N) Σ_{i=1}^{N} [(C(i) − μ_amp) / σ_amp]^4
and performing feature extraction on the preprocessed behavior data. In the present invention, given a window size of 128 and a sliding step size of 50, each behavior datum will result in a feature matrix of 20 × 27, i.e. the signal after feature extraction, i.e. the fourth signal, is obtained. And finally, sending the third signal and the fourth signal obtained in the two steps to a cloud platform server through a front-end server.
S5: each monitoring point inputs the third signal and the fourth signal, respectively, into the trained machine learning model to obtain a first behavior recognition result and a second behavior recognition result.
The machine learning model comprises a gate control cycle unit and a convolution neural network; performing behavior recognition on the third signal by adopting a trained gating cycle unit to obtain a first behavior recognition result; and performing behavior recognition on the fourth signal by adopting the trained convolutional neural network to obtain a second behavior recognition result.
As shown in fig. 3, a schematic diagram of the unit structure of a GRU (gated recurrent unit), the GRU has two gate structures, a reset gate and an update gate, shown as the green and blue parts in the figure and corresponding to r and z respectively. They replace the three gate structures of an LSTM, giving a more streamlined structure: only two gating signals need to be computed.
The state value update in the GRU can be described by the following formula:
r_t = σ(U_r·x_t + W_r·h_{t−1} + b_r)
z_t = σ(U_z·x_t + W_z·h_{t−1} + b_z)
c_t = tanh(U_c·x_t + W_c·(r_t × h_{t−1}) + b_c)
h_t = z_t × h_{t−1} + (1 − z_t) × c_t
where r_t denotes the reset gate, σ denotes the sigmoid function, U_r, U_z and U_c denote the input weight parameters of the respective nodes, x_t denotes the input at the current time step t, W_r, W_z and W_c denote the hidden weight parameters of the respective nodes, h_{t−1} denotes the hidden state at the previous time step t−1, b_r, b_z and b_c denote the bias values of the respective nodes, z_t denotes the update gate, c_t denotes the candidate state at the current time step, tanh denotes the activation function, and h_t denotes the hidden state at the current time step.
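The four state-update equations can be checked with a small sketch of a single GRU step. The weights here are random placeholders, not trained parameters, and the dimensions are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    """One GRU update following the four equations above: reset gate
    r_t, update gate z_t, candidate state c_t, new hidden state h_t.
    P holds the input weights U_*, hidden weights W_* and biases b_*."""
    r = sigmoid(P["Ur"] @ x_t + P["Wr"] @ h_prev + P["br"])
    z = sigmoid(P["Uz"] @ x_t + P["Wz"] @ h_prev + P["bz"])
    c = np.tanh(P["Uc"] @ x_t + P["Wc"] @ (r * h_prev) + P["bc"])
    return z * h_prev + (1 - z) * c

rng = np.random.default_rng(0)
dim_in, dim_h = 4, 3
P = {k: rng.standard_normal((dim_h, dim_in)) for k in ("Ur", "Uz", "Uc")}
P.update({k: rng.standard_normal((dim_h, dim_h)) for k in ("Wr", "Wz", "Wc")})
P.update({k: np.zeros(dim_h) for k in ("br", "bz", "bc")})
h = gru_step(rng.standard_normal(dim_in), np.zeros(dim_h), P)
```

Starting from a zero hidden state, the new state is a convex-like mix of 0 and the tanh candidate, so every component stays inside (−1, 1).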
In one embodiment, inputting the extracted signal feature values into the GRU model yields the monitoring node's prediction pred_GRU for the captured behavior data, i.e. the first behavior recognition result.
For behavior recognition of the fourth signal with a convolutional neural network, the network comprises convolutional layers, pooling layers and fully connected layers. The convolutional layers pair with the pooling layers to form several convolution groups that extract features layer by layer, and classification is finally completed by several fully connected layers. In an alternative embodiment, the specific CNN parameter settings are as shown in the following tables.
TABLE 1 neural network convolutional layer parameters
TABLE 2 neural network pooling layer parameters
The GRU model and the CNN model are trained by using the collected human body behavior data, and the training process comprises the following steps:
various human behavior data packets are collected by utilizing the existing Wi-Fi infrastructure to obtain a human behavior data packet set, and the characteristics of the data packets are extracted from the human behavior data packet set, wherein the characteristics of the data packets are mainly analyzed and extracted from a time sequence and a time domain. Respectively inputting the extracted behavior data packet feature sets into GRU models and CNN models, and respectively obtaining a behavior recognition result by using the GRU models and the CNN models; using the error of the recognition result as a loss function and using a back propagation algorithm BP of the CNN to train parameters; after the parameter training is finished, the softmax function is used for activation, and a trained classification model can be obtained.
In one embodiment, inputting the fourth signal, obtained by extracting the signal feature values, into the trained CNN model yields the monitoring node's prediction pred_CNN for the captured behavior data, i.e. the second behavior recognition result.
S6: inputting the first behavior recognition result and the second behavior recognition result into a processing function to obtain the third human behavior recognition result of the monitoring point.
In specific implementation, the first behavior recognition result and the second behavior recognition result obtained through the GRU and the CNN are input into the softmax function, and then a prediction result of the monitoring node on the current behavior is obtained. The definition of the softmax function is as follows:
S_i = e^{z_i} / Σ_{c=1}^{C} e^{z_c}
where z_i is the output value of the i-th node and C is the number of output nodes, i.e. the number of classes of the classifier.
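A sketch of the softmax computation. The max-shift is a standard numerical-stability detail that the patent does not state; the input logits are illustrative.

```python
import numpy as np

def softmax(z):
    """S_i = exp(z_i) / sum_c exp(z_c), computed in the shifted
    form exp(z_i - max(z)) to avoid overflow for large logits."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Three-class logits standing in for the fused GRU/CNN outputs.
p = softmax(np.array([2.0, 1.0, 0.1]))
```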
S7: inputting the third human behavior recognition result into the classification model to obtain the final behavior recognition result.
In a specific embodiment, a plurality of monitoring nodes are deployed in a monitoring range, and meanwhile, behavior change occurring in the range is monitored and behavior identification is carried out. In this step, the classification model used above is a K-nearest neighbor classification algorithm.
As shown in fig. 4, the KNN algorithm proceeds as follows: first, the original data set is split in a fixed ratio into a training set for fitting the KNN model and a test set for verifying its accuracy; then the training and test sets are normalized; the KNN model is trained on the training set; the optimal hyperparameters are found by grid search with cross-validation; finally, the prediction accuracy of the KNN model is measured on the test set. This process yields a trained KNN classification model.
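A minimal stand-in for the KNN stage, written in plain NumPy rather than as the full split/scale/grid-search pipeline; the min-max scaling, the value of k, and the toy data are illustrative assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain K-nearest-neighbour majority vote after min-max
    scaling fitted on the training set."""
    lo = X_train.min(axis=0)
    span = np.ptp(X_train, axis=0) + 1e-12        # avoid division by zero
    Xtr, Xte = (X_train - lo) / span, (X_test - lo) / span
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)       # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]      # labels of k nearest
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[counts.argmax()])       # majority vote
    return np.array(preds)

# Two well-separated toy classes standing in for per-node predictions.
Xtr = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
ytr = np.array([0, 0, 1, 1])
pred = knn_predict(Xtr, ytr, np.array([[0.2, 0.3], [5.1, 5.2]]), k=3)
```

In the described system, a scikit-learn-style implementation with grid search over k would replace this hand-rolled version.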
In the invention, the data obtained by each monitoring node is processed and then sent through the front-end server to the cloud platform server. Relying on the cloud platform server's strong computing capacity, the received data of each monitoring node is processed in the corresponding way to obtain each node's recognition result; the per-node recognition results are then fed into the KNN model to obtain the final behavior recognition result, which is sent back to the front-end server and presented to the user.
A system for contactless human behavior recognition based on Wi-Fi channel state information, as shown in fig. 5, the system comprising: the system comprises a data acquisition module, a data processing module, a front-end server and a cloud platform server;
the data acquisition module is channel state information data acquisition equipment which is used for receiving a CSI subcarrier signal to obtain a first signal;
the data processing module comprises a denoising module and a feature extraction module;
the denoising module is used for denoising, smoothing and data dimension reduction processing on the first signal to obtain a second signal;
the characteristic extraction module is used for extracting characteristics of the second signal to obtain a third signal and a fourth signal, and sending the third signal and the fourth signal to the front-end server;
the front-end server is used for identifying the behaviors of the third signal and the fourth signal to obtain a first behavior identification result and a second behavior identification result; sending the first behavior recognition result and the second behavior recognition result to a cloud platform server;
The cloud platform server computes a prediction from the first behavior recognition result and the second behavior recognition result to obtain a third behavior recognition result, and classifies the third behavior recognition result to obtain the final recognition result.
The formula for obtaining the third behavior recognition result is as follows:
pred_i = w_1 · pred_GRU + w_2 · pred_CNN
where w_1 denotes the weight of the gated recurrent unit, w_2 denotes the weight of the convolutional neural network, pred_GRU denotes the first behavior recognition result predicted by the gated recurrent unit, and pred_CNN denotes the second behavior recognition result predicted by the convolutional neural network.
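The weighted fusion formula above can be sketched directly; the weights w_1 = 0.6, w_2 = 0.4 and the class-probability vectors are illustrative values, not taken from the patent.

```python
import numpy as np

def fuse(pred_gru, pred_cnn, w1=0.5, w2=0.5):
    """pred_i = w1 * pred_GRU + w2 * pred_CNN, applied elementwise
    to the two per-node class-probability vectors before the KNN stage."""
    return w1 * np.asarray(pred_gru) + w2 * np.asarray(pred_cnn)

# Three-class probability vectors from the GRU and CNN branches.
fused = fuse([0.7, 0.2, 0.1], [0.5, 0.4, 0.1], w1=0.6, w2=0.4)
```

When w1 + w2 = 1 and both inputs are probability vectors, the fused vector also sums to 1.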
Embodiments of the system of the present invention are similar to embodiments of the method.
The above-mentioned embodiments, which further illustrate the objects, technical solutions and advantages of the present invention, should be understood that the above-mentioned embodiments are only preferred embodiments of the present invention, and should not be construed as limiting the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A non-contact human behavior recognition method based on Wi-Fi channel state information, in which channel state information (CSI) data acquisition devices are deployed around a signal source and the position of each data acquisition device serves as a monitoring point, characterized by comprising the following steps:
S1: acquiring, in real time, CSI subcarrier data of different actions transmitted by the signal source to obtain a first signal;
S2: preprocessing the first signal at the monitoring point to obtain a second signal;
S3: extracting a feature vector from the second signal and selecting the largest component correlated with the time series as the main feature variable of the data to obtain a third signal;
S4: extracting a feature vector from the second signal and selecting the largest component correlated with the phase and the amplitude as the main feature variable of the data to obtain a fourth signal;
S5: at each monitoring point, inputting the third signal and the fourth signal into trained machine learning models respectively to obtain a first behavior recognition result and a second behavior recognition result;
S6: inputting the first behavior recognition result and the second behavior recognition result into a processing function to obtain a third behavior recognition result for the monitoring point;
S7: inputting the third behavior recognition result into a classification model to obtain a final behavior recognition result.
2. The Wi-Fi channel state information-based non-contact human behavior recognition method according to claim 1, wherein the signal source is a Wi-Fi signal source.
3. The method according to claim 1, wherein preprocessing the first signal comprises performing data denoising, data smoothing, and data dimension reduction on the first signal.
4. The method according to claim 1, wherein obtaining the third signal comprises: extracting features from the second signal using a time-series mining analysis method, and screening out, from the extracted features, the largest component correlated with the time series as the third signal.
5. The method according to claim 1, wherein obtaining the fourth signal comprises: extracting features from the second signal using a combined time-domain and frequency-domain analysis method to obtain the amplitude and phase information of the CSI subcarriers as they vary with human behavior; and screening the extracted information with a sliding-window method to obtain the largest component correlated with the phase and the amplitude, which is taken as the fourth signal.
6. The Wi-Fi channel state information-based non-contact human behavior recognition method according to claim 1, wherein the machine learning models comprise a gated recurrent unit (GRU) and a convolutional neural network (CNN); the trained GRU performs behavior recognition on the third signal to obtain the first behavior recognition result; and the trained CNN performs behavior recognition on the fourth signal to obtain the second behavior recognition result.
7. The method according to claim 1, wherein the processing function is a Softmax function.
8. The method according to claim 1, wherein the classification model is a K-nearest neighbor algorithm model.
9. A non-contact human behavior recognition system based on Wi-Fi channel state information, characterized by comprising: a data acquisition module, a data processing module, a front-end server, and a cloud platform server;
the data acquisition module is a channel state information data acquisition device, which is used for receiving CSI subcarrier signals to obtain a first signal;
the data processing module comprises a denoising module and a feature extraction module;
the denoising module is used for performing denoising, smoothing, and dimension reduction on the first signal to obtain a second signal;
the feature extraction module is used for performing feature extraction on the second signal to obtain a third signal and a fourth signal, and for sending the third signal and the fourth signal to the front-end server;
the front-end server is used for performing behavior recognition on the third signal and the fourth signal to obtain a first behavior recognition result and a second behavior recognition result, and for sending the first behavior recognition result and the second behavior recognition result to the cloud platform server;
and the cloud platform server performs weighted prediction on the first behavior recognition result and the second behavior recognition result to obtain a third behavior recognition result, and classifies the third behavior recognition result to obtain the final recognition result.
10. The system according to claim 9, wherein the third behavior recognition result is obtained by the following formula:

pred_i = w1 * pred_GRU + w2 * pred_CNN

where w1 is the weight of the gated recurrent unit (GRU), w2 is the weight of the convolutional neural network (CNN), pred_GRU is the first behavior recognition result predicted by the GRU, and pred_CNN is the second behavior recognition result predicted by the CNN.
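Claim 8 names a K-nearest-neighbor model as the final classification step. The following is a minimal sketch under the assumption that the fused third behavior recognition results are treated as feature vectors and matched against labeled reference samples; the reference data and the choice k = 3 are illustrative, not taken from the patent.

```python
import numpy as np

def knn_predict(query, train_x, train_y, k=3):
    """Majority vote among the k nearest labeled reference vectors."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Two behavior classes in a 2-D fused-score space (hypothetical data).
train_x = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
                    [0.1, 0.9], [0.2, 0.8], [0.15, 0.85]])
train_y = np.array([0, 0, 0, 1, 1, 1])
final_label = knn_predict(np.array([0.88, 0.12]), train_x, train_y, k=3)
```

The query vector lies near the class-0 cluster, so the majority vote among its three nearest neighbors yields class 0 as the final behavior recognition result.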
CN202111143085.5A 2021-09-28 2021-09-28 Wi-Fi channel state information-based contactless human body behavior recognition method and system Active CN113837122B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111143085.5A CN113837122B (en) 2021-09-28 2021-09-28 Wi-Fi channel state information-based contactless human body behavior recognition method and system


Publications (2)

Publication Number Publication Date
CN113837122A true CN113837122A (en) 2021-12-24
CN113837122B CN113837122B (en) 2023-07-25

Family

ID=78966980

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111143085.5A Active CN113837122B (en) 2021-09-28 2021-09-28 Wi-Fi channel state information-based contactless human body behavior recognition method and system

Country Status (1)

Country Link
CN (1) CN113837122B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114465678A (en) * 2022-04-13 2022-05-10 齐鲁工业大学 Complex activity WIFI perception method based on deep learning
CN116304888A (en) * 2023-05-17 2023-06-23 山东海看新媒体研究院有限公司 Continuous human activity perception recognition method and system based on channel state information

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288018A (en) * 2019-06-24 2019-09-27 桂林电子科技大学 A kind of WiFi personal identification method merging deep learning model
WO2020022869A1 (en) * 2018-07-27 2020-01-30 Samsung Electronics Co., Ltd. Method and apparatus for intelligent wi-fi connection management
US20200188718A1 (en) * 2018-08-17 2020-06-18 Johnson Controls Technology Company Systems and methods for detecting building conditions based on wireless signal degradation
CN111556453A (en) * 2020-04-27 2020-08-18 南京邮电大学 Multi-scene indoor action recognition method based on channel state information and BilSTM
WO2020170221A1 (en) * 2019-02-22 2020-08-27 Aerial Technologies Inc. Handling concept drift in wi-fi-based localization
CN111797804A (en) * 2020-07-16 2020-10-20 西安交通大学 Channel state information human activity recognition method and system based on deep learning
CN111914709A (en) * 2020-07-23 2020-11-10 河南大学 Action segmentation framework construction method based on deep learning and aiming at WiFi signal behavior recognition
CN111954250A (en) * 2020-08-12 2020-11-17 郑州大学 Lightweight Wi-Fi behavior sensing method and system
CN112101235A (en) * 2020-09-16 2020-12-18 济南大学 Old people behavior identification and detection method based on old people behavior characteristics
CN112733609A (en) * 2020-12-14 2021-04-30 中山大学 Domain-adaptive Wi-Fi gesture recognition method based on discrete wavelet transform



Also Published As

Publication number Publication date
CN113837122B (en) 2023-07-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant