CN108965585B - User identity recognition method based on smart phone sensor - Google Patents

User identity recognition method based on smart phone sensor

Info

Publication number
CN108965585B
CN108965585B (application CN201810657431.3A)
Authority
CN
China
Prior art keywords
data
time
sensor
hidden layer
time step
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810657431.3A
Other languages
Chinese (zh)
Other versions
CN108965585A (en)
Inventor
秦臻
胡凌舟
丁熠
秦志光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Boyoi Technology Co ltd
University of Electronic Science and Technology of China
Original Assignee
Chengdu Boyoi Technology Co ltd
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Boyoi Technology Co ltd and University of Electronic Science and Technology of China
Priority to CN201810657431.3A
Publication of CN108965585A
Application granted
Publication of CN108965585B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a user identity recognition method based on smart phone sensors. A TensorFlow deep learning framework is used, and the sensor data collected while the user moves with the smart phone are analyzed by a combination of a convolutional neural network and a recurrent neural network to identify the user. The accuracy for conscious behaviors reaches 91.45%; for the user's unconscious daily behaviors, the accuracies for walking, riding, going upstairs, going downstairs, standing and sitting reach 100%, 91.61%, 97.58%, 97.59%, 98.08% and 93.81% respectively, so the recognition accuracy is high.

Description

User identity recognition method based on smart phone sensor
Technical Field
The invention relates to the technical field of human-computer interaction, in particular to a user identity recognition method based on smart phone sensors.
Background
With the rapid development of the mobile internet, smart mobile devices, especially smart phones, have become increasingly indispensable. Beyond basic communication functions, the smart phone's portability and convenience have made it the largest medium for people's social activities. In addition, cashless transactions are becoming more popular, and people increasingly prefer to complete payments via smart phone. The data and information stored in smart phones have therefore become especially important.
Nowadays, smart phones carry more and more accurate sensors. By combining sensor data with neural networks and deep learning, different phone users can be distinguished and a user's identity recognized, so as to protect the privacy of the phone's owner. However, the recognition accuracy of existing identity recognition methods is not high.
Disclosure of Invention
The invention aims to solve this technical problem by providing a user identity recognition method based on smart phone sensors, addressing the low recognition accuracy of existing identity recognition methods.
The technical scheme for solving the above technical problem is as follows: a user identity recognition method based on smart phone sensors comprises the following steps:
s1, dividing the mobile phone sensor data into segments according to the sampling frequency, wherein the length of one segment is 10S, and removing the noise of the initial segment and the final segment to obtain the preprocessed sensor time domain data;
s2, subdividing the segments into small windows, wherein the length of one small window is one time step, and converting the preprocessed sensor time domain data into sensor frequency domain data through fast Fourier transform;
s3, performing double-bundle convolution and fusion convolution calculation on the time domain data and the frequency domain data of the sensor through a double-bundle convolution neural network to obtain the time-frequency characteristics of the multi-sensor at each time step;
s4, inputting the time-frequency characteristics of the multi-sensor at each time step into the CW-RNN, and calculating to obtain hidden layer information at the t-th time step;
s5, triangularizing the hidden layer information matrix to realize that the communication between the hidden layer groups points from a high period to a low period, dividing the hidden layer information into 4 groups, wherein the period T of each groupi1,2,4,8 in order, will satisfy tmodT at each time stepiThe group of 0 participates in the update operation;
s6, traversing all time steps, and calculating and updating to obtain a state tensor containing all time step information;
s7, averaging the state tensor, connecting with a full connection layer, and outputting a fraction tensor;
s8, classifying the fraction tensors through a softmax function, mapping the fraction tensors belonging to each class into probabilities, and obtaining the probability tensor of the whole network;
s9, dividing conscious action data and unconscious action data into a test set and a training set through a Tenssorflow training frame and the test data, wherein the training set is used for training a double-bundle convolutional neural network and a CW-RNN;
s10, classifying the probability tensor of the test data in the test set through a network to obtain a label vector of the test set;
and S11, outputting the test-set label vector and comparing it with the true labels of the test-set data; where the values at the same index are equal, the element at that index is taken as the user identity.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, the mobile phone sensor data comprises acceleration data and angular velocity data.
Further, the calculation formula of the fast fourier transform in step S2 is:
X_k = Σ_{n=0}^{N-1} x_n e^{-i2πkn/N}  (1)
in formula (1), x_n is the information of the n-th frame in the time domain, X_k is the frequency-domain information of the k-th frame after conversion, and N is the total number of frames of the sensor data to be converted.
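For illustration, formula (1) corresponds to a standard FFT call; a minimal NumPy sketch follows, where the window length, sampling rate and per-axis layout are assumptions, not values given by the invention:

```python
import numpy as np

def window_to_frequency_domain(window):
    """Formula (1): X_k = sum_{n=0}^{N-1} x_n * exp(-i*2*pi*k*n/N),
    computed per axis with the fast Fourier transform; the magnitude
    spectrum is kept as the frequency-domain representation."""
    return np.abs(np.fft.fft(window, axis=0))

# hypothetical example: one 1 s window (50 frames at 50 Hz) of 3-axis data
window = np.random.randn(50, 3)
freq = window_to_frequency_domain(window)   # shape (50, 3)
```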
Further, the calculation formula of the time-frequency characteristic in step S3 is as follows:
o_m = w_t o_t + w_f o_f  (2)
in formula (2), o_m is the time-frequency feature, w_t and w_f are weight parameters, and o_t and o_f are the convolution results in the time domain and the frequency domain, respectively.
Further, the formula for calculating the hidden layer information at the t-th time step in step S4 is as follows:
y^(t) = σ(W_H y^(t-1) + W_I x^(t))  (3)
in formula (3), x^(t) is the multi-sensor time-frequency feature at the t-th time step, y^(t-1) is the hidden-layer information at the (t-1)-th time step, W_H and W_I are the hidden-layer matrix and the input matrix, respectively, and σ is the activation function, typically the hyperbolic tangent.
Further, 80% of the conscious-action data is used as the training set and 20% as the test set, and the unconscious actions comprise 6 behaviors: walking, riding, going upstairs, going downstairs, standing and sitting; 1 of the unconscious behaviors is selected as the first behavior and the other 5 as the second behavior, 50% of the first-behavior data together with the second-behavior data forms the training set, and the remaining 50% of the first-behavior data forms the test set.
The invention has the following beneficial effects: a TensorFlow deep learning framework is used, and the sensor data collected while the user moves with the smart phone are analyzed by a combination of a convolutional neural network and a recurrent neural network to identify the user. The accuracy for conscious behaviors reaches 91.45%; for the user's unconscious daily behaviors, the accuracies for walking, riding, going upstairs, going downstairs, standing and sitting reach 100%, 91.61%, 97.58%, 97.59%, 98.08% and 93.81% respectively, so the recognition accuracy is high.
Drawings
FIG. 1 is a flow chart of the steps of the present invention;
fig. 2 is a schematic coordinate diagram of a mobile phone sensor according to the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a user identity recognition method based on smart phone sensors comprises the following steps:
s1, dividing the mobile phone sensor data into segments according to sampling frequency, wherein the length of one segment is 10S, removing noise of an initial segment and a final segment to obtain preprocessed sensor time domain data, acquiring the sensor data based on a standard mobile phone three-dimensional coordinate system, and defining the coordinate system relative to the equipment, as shown in FIG. 2, when the equipment is placed horizontally, the positive direction of an X axis is horizontal to the right, the positive direction of a Y axis is vertical to the upper, and the positive direction of a Z axis is vertical to the upward direction of a mobile phone screen;
s2, subdividing the segments into small windows, wherein the length of one small window is one time step, and converting the preprocessed sensor time domain data into sensor frequency domain data through fast Fourier transform;
s3, performing double-bundle convolution and fusion convolution calculation on the time domain data and the frequency domain data of the sensor through a double-bundle convolution neural network to obtain the time-frequency characteristics of the multi-sensor at each time step, and considering the comprehensive time domain and frequency domain characteristics to enhance the discrimination between users;
s4, inputting the time-frequency characteristics of the multi-sensor at each time step into the CW-RNN, and calculating to obtain hidden layer information at the t-th time step;
s5, dividing the hidden layer information of time step into 4 groups, each group having a period Ti1,2,4,8 in order, will satisfy tmodTiThe group of 0 participates in the update operation, where the mod function is a remainder function. By grouping the neurons of the hidden layer, the number of neurons in each group is the same, while setting different clock cycles for each group. At different time steps, the hidden layer groups participating in the calculation are different. By setting the hidden layer matrix as a triangular matrix, the communication between the hidden layer groups is directed from a high period (low frequency) to a low period (high frequency) so as to explore the dependency relationship inside each time step and among each time step and improve the training accuracy;
s6, traversing all time steps, and calculating and updating to obtain a state tensor containing all time step information;
s7, averaging the state tensor, connecting with a full connection layer, and outputting a fraction tensor;
s8, classifying the fraction tensors through a softmax model, mapping the fraction tensors belonging to each class into probabilities to obtain the probability tensor of the whole network, and calculating cross entropy as a loss function of training;
s9, dividing conscious action data and unconscious action data into a test set and a training set through a Tenssorflow training frame and the test data, wherein the training set is used for training a double-bundle convolutional neural network and a CW-RNN, the adopted loss function is cross entropy, the training optimization algorithm is Adam, the learning rate is set to be 0.0001, and l2 norm is used for regularization;
s10, classifying the probability tensor of the test data in the test set through the network, and returning the index of the maximum probability of each line (corresponding to one sample), namely the label of the class classified by the network, to obtain the label vector of the test set;
and S11, outputting the test-set label vector and comparing it with the true labels of the test-set data; where the values at the same index are equal, the element at that index is taken as the user identity.
In an embodiment of the invention, the handset sensor data comprises acceleration data and angular velocity data.
In the embodiment of the present invention, the calculation formula of the fast fourier transform in step S2 is:
X_k = Σ_{n=0}^{N-1} x_n e^{-i2πkn/N}  (1)
in formula (1), x_n is the information of the n-th frame in the time domain, X_k is the frequency-domain information of the k-th frame after conversion, and N is the total number of frames of the sensor data to be converted.
In the embodiment of the present invention, in step S3, the calculation formula of the time-frequency characteristic is:
o_m = w_t o_t + w_f o_f  (2)
in formula (2), o_m is the time-frequency feature, w_t and w_f are weight parameters, and o_t and o_f are the convolution results in the time domain and the frequency domain, respectively.
In this embodiment of the present invention, the calculation formula of the hidden layer information at the t-th time step in step S4 is as follows:
y^(t) = σ(W_H y^(t-1) + W_I x^(t))  (3)
in formula (3), x^(t) is the multi-sensor time-frequency feature at the t-th time step, y^(t-1) is the hidden-layer information at the (t-1)-th time step, W_H and W_I are the hidden-layer matrix and the input matrix, respectively, and σ is the activation function, typically the hyperbolic tangent. The update in step S6 is also calculated with formula (3).
In the embodiment of the present invention, 80% of the conscious-action data is used as the training set and 20% as the test set, and the unconscious actions comprise 6 behaviors: walking, riding, going upstairs, going downstairs, standing and sitting; 1 of the unconscious behaviors is selected as the first behavior and the other 5 as the second behavior, 50% of the first-behavior data together with the second-behavior data forms the training set, and the remaining 50% of the first-behavior data forms the test set.
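A minimal sketch of the data split, assuming a simple random permutation (the text does not specify how samples are shuffled):

```python
import numpy as np

def split_data(data, labels, train_frac, seed=0):
    """Random split used here: train_frac=0.8 for conscious-action data
    (80/20), train_frac=0.5 for the first-behavior data of unconscious
    actions (50/50)."""
    idx = np.random.default_rng(seed).permutation(len(data))
    cut = int(train_frac * len(data))
    return (data[idx[:cut]], labels[idx[:cut]],   # training set
            data[idx[cut:]], labels[idx[cut:]])   # test set
```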
When the elements at the same index are equal, the sample is recognized correctly and the judgment tensor takes the value 1; otherwise the sample is recognized incorrectly and the judgment tensor takes the value 0. The mean of the judgment tensor is computed as the test accuracy.
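A sketch of the evaluation in S10-S11 and the judgment tensor described above:

```python
import numpy as np

def test_accuracy(prob_tensor, true_labels):
    """S10-S11 evaluation: the per-row argmax of the probability tensor is
    the test-set label vector; the judgment tensor is 1 where prediction
    and true label agree and 0 otherwise, and its mean is the accuracy."""
    pred_labels = np.argmax(prob_tensor, axis=1)          # label vector
    judgment = (pred_labels == np.asarray(true_labels)).astype(np.float32)
    return judgment.mean()                                # test accuracy
```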
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A user identity recognition method based on a smart phone sensor is characterized by comprising the following steps:
s1, dividing the mobile phone sensor data into segments according to the sampling frequency, wherein the length of one segment is 10S, and removing the noise of the initial segment and the final segment to obtain the preprocessed sensor time domain data;
s2, subdividing the segments into small windows, wherein the length of one small window is one time step, and converting the preprocessed sensor time domain data into sensor frequency domain data through fast Fourier transform;
the calculation formula of the fast fourier transform in step S2 is:
X_k = Σ_{n=0}^{N-1} x_n e^{-i2πkn/N}  (1)
in formula (1), x_n is the information of the n-th frame in the time domain, X_k is the frequency-domain information of the k-th frame after conversion, and N is the total number of frames of the sensor data to be converted;
s3, performing double-bundle convolution and fusion convolution calculation on the time domain data and the frequency domain data of the sensor through a double-bundle convolution neural network to obtain the time-frequency characteristics of the multi-sensor at each time step;
the calculation formula of the time-frequency characteristics in step S3 is as follows:
o_m = w_t o_t + w_f o_f  (2)
in formula (2), o_m is the time-frequency feature, w_t and w_f are weight parameters, and o_t and o_f are the convolution results in the time domain and the frequency domain, respectively;
s4, inputting the time-frequency characteristics of the multi-sensor at each time step into the CW-RNN, and calculating to obtain hidden layer information at the t-th time step;
hidden layer information at the t-th time step in the step S4
Figure FDA0002730520610000012
The calculation formula of (2) is as follows:
Figure FDA0002730520610000013
in the formula (3), x(t)Multi-sensor time-frequency characteristics for the t-th time step, y(t-1)Hidden layer information for the t-1 time step, WHAnd WIRespectively a hidden layer matrix and an input matrix, wherein sigma is an activation function;
s5, triangularizing the hidden layer information matrix to realize that the communication between the hidden layer groups points from a high period to a low period, dividing the hidden layer information into 4 groups, wherein the period T of each groupi1,2,4,8 in order, will satisfy tmodT at each time stepiThe group of 0 participates in the update operation;
s6, traversing all time steps, and calculating and updating to obtain a state tensor containing all time step information;
s7, averaging the state tensor, connecting with a full connection layer, and outputting a fraction tensor;
s8, classifying the fraction tensors through a softmax function, mapping the fraction tensors belonging to each class into probabilities, and obtaining the probability tensor of the whole network;
s9, dividing conscious action data and unconscious action data into a test set and a training set through a Tenssorflow training frame and the test data, wherein the training set is used for training a double-bundle convolutional neural network and a CW-RNN;
s10, classifying the probability tensor of the test data in the test set through a network to obtain a label vector of the test set;
and S11, outputting the test-set label vector and comparing it with the true labels of the test-set data; where the values at the same index are equal, the element at that index is taken as the user identity.
2. The smart phone sensor based user identity recognition method of claim 1, wherein the phone sensor data comprises acceleration data and angular velocity data.
3. The smart phone sensor based user identity recognition method according to claim 1, wherein 80% of the conscious-action data is used as the training set and 20% as the test set, and the unconscious actions comprise 6 behaviors: walking, riding, going upstairs, going downstairs, standing and sitting; 1 of the unconscious behaviors is selected as the first behavior and the other 5 as the second behavior, 50% of the first-behavior data together with the second-behavior data forms the training set, and the remaining 50% of the first-behavior data forms the test set.
CN201810657431.3A 2018-06-22 2018-06-22 User identity recognition method based on smart phone sensor Active CN108965585B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810657431.3A CN108965585B (en) 2018-06-22 2018-06-22 User identity recognition method based on smart phone sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810657431.3A CN108965585B (en) 2018-06-22 2018-06-22 User identity recognition method based on smart phone sensor

Publications (2)

Publication Number Publication Date
CN108965585A CN108965585A (en) 2018-12-07
CN108965585B 2021-01-26

Family

ID=64486168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810657431.3A Active CN108965585B (en) 2018-06-22 2018-06-22 User identity recognition method based on smart phone sensor

Country Status (1)

Country Link
CN (1) CN108965585B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164644A2 (en) * 2019-02-14 2020-08-20 上海寒武纪信息科技有限公司 Neural network model splitting method, apparatus, computer device and storage medium
US12019720B2 (en) 2020-12-16 2024-06-25 International Business Machines Corporation Spatiotemporal deep learning for behavioral biometrics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107592422A (en) * 2017-09-20 2018-01-16 上海交通大学 A kind of identity identifying method and system based on gesture feature
US10558804B2 (en) * 2015-04-16 2020-02-11 Cylance Inc. Recurrent neural networks for malware analysis

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299012B (en) * 2014-10-28 2017-06-30 银河水滴科技(北京)有限公司 A kind of gait recognition method based on deep learning
CN106980826A * 2017-03-16 2017-07-25 天津大学 A kind of action identification method based on neural network
CN106971203B (en) * 2017-03-31 2020-06-09 中国科学技术大学苏州研究院 Identity recognition method based on walking characteristic data
CN107507286B (en) * 2017-08-02 2020-09-29 五邑大学 Bimodal biological characteristic sign-in system based on face and handwritten signature

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10558804B2 (en) * 2015-04-16 2020-02-11 Cylance Inc. Recurrent neural networks for malware analysis
CN107592422A (en) * 2017-09-20 2018-01-16 上海交通大学 A kind of identity identifying method and system based on gesture feature

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Clockwork RNN; Jan Koutnik, Klaus Greff, Faustino Gomez, Jürgen Schmidhuber; ICML 2014; 2014-02-14; vol. 2014 *

Also Published As

Publication number Publication date
CN108965585A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN106846729B (en) Tumble detection method and system based on convolutional neural network
Zhang et al. A comprehensive study of smartphone-based indoor activity recognition via Xgboost
CN108229268A (en) Expression Recognition and convolutional neural networks model training method, device and electronic equipment
CN110674875A (en) Pedestrian motion mode identification method based on deep hybrid model
CN108965585B (en) User identity recognition method based on smart phone sensor
CN104484644A (en) Gesture identification method and device
CN110197224A (en) Aerial hand-written character track restoration methods based on the confrontation study of feature space depth
CN109976526A (en) A kind of sign Language Recognition Method based on surface myoelectric sensor and nine axle sensors
CN113435335B (en) Microscopic expression recognition method and device, electronic equipment and storage medium
CN111797861A (en) Information processing method, information processing apparatus, storage medium, and electronic device
JP2022120775A (en) On-device activity recognition
CN107609501A (en) The close action identification method of human body and device, storage medium, electronic equipment
Zhu et al. Deep ensemble learning for human activity recognition using smartphone
CN111753683A (en) Human body posture identification method based on multi-expert convolutional neural network
Kim et al. Activity recognition using fully convolutional network from smartphone accelerometer
CN110516569B (en) Pedestrian attribute identification method based on identity and non-identity attribute interactive learning
CN108052960A (en) Method, model training method and the terminal of identification terminal grip state
CN111291804A (en) Multi-sensor time series analysis model based on attention mechanism
Shi et al. Sensor‐based activity recognition independent of device placement and orientation
Wang et al. Intelligent scene recognition based on deep learning
CN109567814B (en) Classification recognition method, computing device, system and storage medium for tooth brushing action
CN105184275B (en) Infrared local face key point acquisition method based on binary decision tree
Tao et al. Attention-based convolutional neural network and bidirectional gated recurrent unit for human activity recognition
Wang Data feature extraction method of wearable sensor based on convolutional neural network
US20230154077A1 (en) Training method for character generation model, character generation method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant