WO2019192235A1 - User identity authentication method and system based on mobile device - Google Patents

User identity authentication method and system based on mobile device

Info

Publication number
WO2019192235A1
WO2019192235A1 (application PCT/CN2019/070668)
Authority
WO
WIPO (PCT)
Prior art keywords
mobile device
user identity
identity authentication
signal
authentication method
Prior art date
Application number
PCT/CN2019/070668
Other languages
French (fr)
Chinese (zh)
Inventor
伍楷舜
赵猛
王丹
邹永攀
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Publication of WO2019192235A1 publication Critical patent/WO2019192235A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/316User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • Anyone who has used these smart devices knows that the fastest authentication flow currently available on a smart device takes two steps: first pick up the device (step one), then place a finger on the fingerprint sensor for authentication (step two). Most current authentication flows, however, take three steps: first pick up the smart device to be used (step one), then press its power button (step two), and finally place a finger on the sensor for verification (step three). Verification therefore takes at least two steps, and three-step verification is what is most commonly used.
  • the technical problem to be solved by the present invention is to provide a mobile device-based user identity authentication method that uses user behavior as a feature, thereby simplifying the user identity authentication process and improving the user experience, and to provide a user identity authentication system that employs this mobile device-based user identity authentication method.
  • the present invention provides a mobile device-based user identity authentication method, including the following steps:
  • Step S1: collecting, by the mobile device, the sensor signal of the mobile device from the pick-up position to the stationary state;
  • Step S2: filtering the sensor signal;
  • Step S3: performing signal preprocessing on the filtered sensor signal, and extracting time domain features and frequency domain features of the pick-up event of the mobile device;
  • Step S4: putting the time domain features and frequency domain features into a neural network recognition algorithm for training, so as to recognize and judge the user data.
  • a further improvement of the present invention is that the method further includes a step S5 of placing the training samples whose confidence reaches a preset confidence threshold into the data set used for training in the machine learning algorithm, and upgrading the original one-class recognition model to a two-class recognition model.
  • a further improvement of the present invention is that, in step S5, the correctly predicted training samples are sorted in ascending order and the first 5% of them are selected as positive samples of the legitimate user in the two-class recognition model; the incorrectly predicted training samples are sorted in descending order and 5% of them are selected as negative samples of illegitimate users in the two-class recognition model.
  • step S2 comprises the following substeps:
  • Step S201: analyzing the time domain and frequency domain characteristics of the sensor signal to obtain the distribution of the sensor signal energy over the frequency range;
  • Step S202: filtering the signal with a band-pass filter according to the frequency range obtained in step S201.
  • step S3 comprises the following substeps:
  • Step S301 performing event detection on the filtered sensor signal
  • Step S302 performing signal preprocessing on the obtained event signal of the mobile device, where the signal preprocessing includes framing and windowing processing;
  • Step S303 acquiring an event signal, and extracting time domain features and frequency domain features of each event signal respectively.
  • a further improvement of the present invention is that, in step S301, the filtered sensor signal is first windowed, and then the mean and standard deviation are calculated for each windowed segment, so that the event signal of the mobile device is detected and intercepted.
  • a further improvement of the present invention is that, in step S302, the event signal obtained in step S301 is framed and then windowed again by means of a Hamming window, wherein the overlap duration between adjacent frames is less than one half of the duration of each frame.
  • a further improvement of the present invention is that, in step S303, the time domain features and frequency domain features of each event signal are first extracted separately, and then the time domain features and frequency domain features of each event signal are combined into a feature vector; the extracted time domain features of each event signal include: mean, variance, standard deviation, maximum, minimum, number of zero crossings, difference between maximum and minimum, and mode; the extracted frequency domain features of each event signal include: the DC component; the mean, variance, standard deviation, slope and kurtosis of the graph; the mean, variance, standard deviation, slope and kurtosis of the amplitude; and the energy.
  • a further improvement of the present invention is that, in step S4, the time domain features and frequency domain features are put into an Autoencoder neural network recognition algorithm for training; the algorithm model of the Autoencoder neural network recognition algorithm uses a three-layer neural network, the number of nodes in the hidden layer is 10, the encoding function used is the satlin function, and the decoding function used is the purelin function; in the training of the Autoencoder neural network recognition algorithm, the maximum error over the training samples is used as the threshold for judging the recognized user data.
  • the present invention also provides a mobile device-based user identity authentication system that employs a mobile device-based user identity authentication method as described above.
  • FIG. 1 is a schematic diagram of a workflow of an embodiment of the present invention.
  • this example provides a mobile device-based user identity authentication method, including the following steps:
  • Step S1: collecting, by the mobile device, the sensor signal of the mobile device from the pick-up position to the stationary state;
  • Step S2: filtering the sensor signal;
  • Step S3: performing signal preprocessing on the filtered sensor signal, and extracting time domain features and frequency domain features of the pick-up event of the mobile device;
  • the mobile device described in this example refers to an existing mobile smart device, including: a smart phone, a smart watch, a smart tablet, and the like. Before use, take out the mobile device from the pick-up position.
  • the pick-up position described in this example is defined by default as one of three locations: the user's trouser pocket (definable according to the user's habits), a table, or a bag.
  • for different users, the motion of bringing the mobile device from the pick-up position to a position where it can be viewed comfortably differs; this example therefore first uses the acceleration sensor and the gyroscope sensor in the mobile device to collect the sensor signal of the device from the pick-up position to the stationary state, then performs a series of data processing and analysis and trains a neural network recognition algorithm, obtaining a behavior-feature-based authentication model corresponding to the user's identity.
  • thereafter, the user's identity can be recognized by checking, each time the mobile device is picked up, whether the sensor signal collected between the pick-up position and the stationary state matches the recognition model already trained by the neural network recognition algorithm; the recognition accuracy is related to the threshold set in the recognition model, which can be set and adjusted according to actual needs.
  • the stationary state in step S1 of this example refers to the state in which the instantaneous data of the acceleration sensor and the gyroscope sensor are 0; of course, in practical applications the instantaneous data do not always stay exactly at 0, so a floating range can be set, that is, the device is regarded as stationary by default when the instantaneous data of the acceleration sensor and the gyroscope sensor fall within a preset floating range, and this floating range can be customized according to the user's actual situation.
  • the example further includes a step S5, in which the training samples whose confidence reaches a preset confidence threshold are placed into the data set used for training in the machine learning algorithm, and the original one-class recognition model is upgraded to a two-class recognition model.
  • the confidence threshold is a preset standard value for determining the confidence level, and the confidence threshold can be adjusted and set according to actual conditions.
  • step S5 in this example is necessary because, due to a small amount of additional noise, the accuracy of putting the user's data into a one-class recognition algorithm model may not be very high; therefore, based on the confidence evaluation of the recognition model used in the Autoencoder neural network recognition algorithm, this example places some training samples with high confidence (such as training samples whose confidence reaches the preset confidence threshold) into the data set used for training and upgrades the original one-class recognition model to a two-class recognition model, which can effectively improve the recognition accuracy.
  • the Autoencoder neural network recognition algorithm is a deep neural learning algorithm: a neural network with a total of three layers and 13 nodes per layer.
  • in step S5 of this example, the correctly predicted training samples are sorted in ascending order and the first 5% of them are selected as positive samples of the legitimate user in the two-class recognition model; the incorrectly predicted training samples are sorted in descending order and 5% of them are selected as negative samples of illegitimate users in the two-class recognition model.
  • placing the positive and negative samples into the two-class recognition model can further improve the accuracy, for example by means of a Decision Tree algorithm; the Decision Tree algorithm here uses the CART algorithm, which can distinguish legitimate users from illegitimate users very well; the CART algorithm is a decision tree algorithm.
  • Step S2 described in this example includes the following sub-steps:
  • Step S201: analyzing the time domain and frequency domain characteristics of the sensor signals collected by the acceleration sensor and the gyroscope sensor to obtain the distribution of the sensor signal energy over the frequency range, i.e. using time domain analysis and frequency domain analysis to find in which frequency range the energy is mainly concentrated and thus the distribution of energy with frequency; in this example the frequency range is preferably set to 10 Hz to 60 Hz; put simply, time domain and frequency domain analysis of the sensor signals collected by the acceleration sensor and the gyroscope sensor yields the frequency band in which the signal is concentrated;
  • Step S202: filtering the signal with a band-pass filter according to the frequency range obtained in step S201, for example a Butterworth band-pass filter; the preferred pass band is 10 Hz to 60 Hz, which can effectively remove noise interference.
  • Step S3 described in this example includes the following sub-steps:
  • Step S301 performing event detection on the filtered sensor signal
  • Step S302 performing signal preprocessing on the obtained event signal of the mobile device, where the signal preprocessing includes framing and windowing processing;
  • Step S303 acquiring an event signal, and extracting time domain features and frequency domain features of each event signal respectively.
  • event detection on the filtered signal is preferably performed using a CFAR algorithm.
  • the CFAR algorithm is a constant false alarm rate algorithm.
  • the CFAR algorithm requires several parameters, such as Winsize (the length of the window), step (the time length from the current window to the next window), Mu (the average of the data within the window), Sigma (the variance of the data within the window), lamda (the first function) and lamda2 (the second function).
  • the CFAR algorithm, i.e. the constant false alarm rate algorithm, is an algorithm commonly used in radar.
  • in this patent, its role is to detect the start position and the end position of an event.
  • the purpose of step is as follows: since the signals being processed are discrete, and in order to ensure short-term stability of the signal, this example adds a window (the Winsize mentioned above); when the data points of the next window are processed, this example needs to move the current window backwards by a small amount of data to continue processing the data points in that window, and step is the length by which the window is moved; Mu is the average of the data in the window, and Sigma is the variance of the data in the window.
  • in step S301 of this example, the CFAR algorithm can be used to perform event detection on the filtered sensor signal, as follows: since the collected signal is a time-varying signal it is not easy to process directly, but over a short time it can by default be regarded as time-invariant, so that a series of operations can be performed on it; to obtain such a short-time-invariant signal, a window function is applied, i.e. only one segment of the signal is processed at a time, which is the concept of a window; this example then separately calculates the mean and standard deviation of the signal within the window (the defining formulas are given as equation images in the original publication), where μ(i) is the mean within the window, σ(i) is the variance within the window, W is the window size, A(i) and B(i) are intermediate quantities whose defining equations are likewise given as images, S(k) is the original signal within the window, and k is the sample point.
  • events are then detected based on the quantities calculated above: if |S(i)|·|S(i)| > μ(i) + γ1·σ(i), this example regards the point as the starting point of an event; if the requirement of a second expression (given as an equation image in the original publication) is met, the point is regarded as the end point; γ1 and γ2 are constant parameters that depend on the noise, Winsize is the length of this window, and step is the time length from the current window to the next window.
  • in step S302, the event signal obtained in step S301 is framed and then windowed again by means of a Hamming window, wherein the overlap duration between adjacent frames is less than one half of the duration of each frame; that is, signal preprocessing, including framing and windowing, is performed on the obtained motion-signal event.
  • each frame of the signal is 5 milliseconds long; in order to prevent a motion-signal feature from spanning two frames, this example uses a frame-to-frame overlap of 2 milliseconds, achieving effective feature extraction; the window function applied in this example is a Hamming window.
  • Step S4 in this example takes the event-related features obtained above and puts them into a machine learning recognition algorithm, such as an Autoencoder neural network recognition algorithm, to recognize and judge the user data.
  • in step S4 of this example, the time domain features and frequency domain features are put into the Autoencoder neural network recognition algorithm for training; the algorithm model of the Autoencoder neural network recognition algorithm uses a three-layer neural network, the number of nodes in the hidden layer is 10, the encoding function used is the satlin function, and the decoding function used is the purelin function; in the training of the Autoencoder neural network recognition algorithm, the maximum error over the training samples is used as the threshold for judging the recognized user data; that is, if the error is smaller than this maximum training-sample error, the user is judged to be a legitimate user, otherwise an illegitimate user.
  • the Autoencoder neural network recognition algorithm in this example uses a three-layer neural network model whose purpose is to reconstruct the input data, i.e. given the input (2.1, 3, 5.5, 2), the weights are continuously adjusted so that these data points can finally be reconstructed, e.g. the reconstructed data might be (2, 3.1, 5.3, 2.1).
  • reconstructing the data points in fact means re-encoding the original data through an encoding function and then decoding the result with a decoding function; the encoding and decoding functions are those mentioned above.
  • the weights between the layers are continuously adjusted to minimize this reconstruction error; in the recognition and judgment, an L2 parameter can be used as the desired precision parameter, and the weight of the L2 parameter is preferably 0.01.
  • This example also provides a mobile device-based user identity authentication system that employs a mobile device-based user identity authentication method as described above.
  • this example provides a new non-intrusive, behavioral activity-based user identity authentication method that enhances the user experience and implements mobile device data privacy protection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Social Psychology (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Provided are a user identity authentication method and system based on a mobile device. The user identity authentication method comprises the following steps: step S1, collecting, by means of the mobile device, a sensor signal of the mobile device from the pick-up position to the stationary state; step S2, filtering the sensor signal; step S3, pre-processing the filtered sensor signal and extracting time domain features and frequency domain features of the pick-up event of the mobile device; and step S4, putting the time domain features and frequency domain features into a neural network recognition algorithm for training, so as to recognize and determine the user data. According to the present invention, an existing mobile device extracts the behavior features of a user, and the neural network recognition algorithm of a machine learning recognition model is used to recognize the user's identity, which effectively simplifies the process of user identity authentication and greatly improves the user experience without adding any hardware cost. The system is simple, effective, convenient to use and highly practical.

Description

User identity authentication method and system based on a mobile device
Technical Field
The present invention relates to a user identity authentication method, in particular to a mobile device-based user identity authentication method, and to a user identity authentication system that employs the mobile device-based user identity authentication method.
Background Art
With the continuous advance of the intelligent society and the continuous development of emerging smart devices, smart devices are becoming more and more important in daily life, but data privacy protection still suffers from many insecurities and inconveniences. The ways of unlocking today's smart devices can be roughly divided into the following categories.
Traditional protection methods include the following: on smartphones, PIN codes, passwords, patterns and fingerprint recognition are commonly used; other approaches use technologies such as fingerprint recognition, vein recognition, iris recognition and face recognition.
First, according to surveys by authoritative institutions, the commonly used PIN code, password, pattern and fingerprint authentication schemes are rather insecure and inconvenient. These protection measures are very easily observed and recorded by a shoulder-surfer, so the data privacy they protect is far from safe. At the same time, as everything becomes intelligent, everyone has many passwords to remember and they are easily forgotten, causing a series of troubles. Most inconveniently, if the hands are dirty, for example stained with oil or water, or the skin on the fingers is peeling badly, the authentication process on the smart device is seriously affected and verification is very likely to fail.
Anyone who has used these smart devices knows that the fastest authentication flow currently available on a smart device takes two steps: first pick up the device (step one), then place a finger on the fingerprint sensor for authentication (step two). Most current authentication flows, however, take three steps: first pick up the smart device to be used (step one), then press its power button (step two), and finally place a finger on the sensor for verification (step three). Verification therefore takes at least two steps, and three-step verification is what is most commonly used.
In addition, there are biometric technologies at home and abroad with relatively high recognition accuracy, but this accuracy comes at a cost. Vein recognition and iris recognition likewise capture physiological signals that differ from person to person, subject those signals to a series of analysis and processing, and finally achieve recognition. Both technologies can reach very high recognition accuracy, but they also have drawbacks: in vein recognition, the veins on the back of the hand may still change with age and physiology, so their permanence has not been proven; because the acquisition method is limited by its own characteristics, the products are difficult to miniaturize and cannot be mass-produced on smart devices; and the acquisition equipment has special requirements, so the design is relatively complicated and the manufacturing cost is high.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a mobile device-based user identity authentication method that uses user behavior as a feature, thereby simplifying the user identity authentication process and improving the user experience, and to provide a user identity authentication system that employs this mobile device-based user identity authentication method.
To this end, the present invention provides a mobile device-based user identity authentication method, including the following steps:
Step S1: collecting, by the mobile device, the sensor signal of the mobile device from the pick-up position to the stationary state;
Step S2: filtering the sensor signal;
Step S3: performing signal preprocessing on the filtered sensor signal, and extracting time domain features and frequency domain features of the pick-up event of the mobile device;
Step S4: putting the time domain features and frequency domain features into a neural network recognition algorithm for training, so as to recognize and judge the user data.
A further improvement of the present invention is that the method further includes a step S5 of placing the training samples whose confidence reaches a preset confidence threshold into the data set used for training in the machine learning algorithm, and upgrading the original one-class recognition model to a two-class recognition model.
A further improvement of the present invention is that, in step S5, the correctly predicted training samples are sorted in ascending order and the first 5% of them are selected as positive samples of the legitimate user in the two-class recognition model; the incorrectly predicted training samples are sorted in descending order and 5% of them are selected as negative samples of illegitimate users in the two-class recognition model.
A further improvement of the present invention is that step S2 includes the following sub-steps:
Step S201: analyzing the time domain and frequency domain characteristics of the sensor signal to obtain the distribution of the sensor signal energy over the frequency range;
Step S202: filtering the signal with a band-pass filter according to the frequency range obtained in step S201.
A further improvement of the present invention is that step S3 includes the following sub-steps:
Step S301: performing event detection on the filtered sensor signal;
Step S302: performing signal preprocessing on the obtained event signal of the mobile device, the signal preprocessing including framing and windowing;
Step S303: acquiring the event signals and extracting the time domain features and frequency domain features of each event signal respectively.
A further improvement of the present invention is that, in step S301, the filtered sensor signal is first windowed, and then the mean and standard deviation are calculated for each windowed segment, so that the event signal of the mobile device is detected and intercepted.
A further improvement of the present invention is that, in step S302, the event signal obtained in step S301 is framed and then windowed again by means of a Hamming window, wherein the overlap duration between adjacent frames is less than one half of the duration of each frame.
A further improvement of the present invention is that, in step S303, the time domain features and frequency domain features of each event signal are first extracted separately, and then the time domain features and frequency domain features of each event signal are combined into a feature vector; the extracted time domain features of each event signal include: mean, variance, standard deviation, maximum, minimum, number of zero crossings, difference between maximum and minimum, and mode; the extracted frequency domain features of each event signal include: the DC component; the mean, variance, standard deviation, slope and kurtosis of the graph; the mean, variance, standard deviation, slope and kurtosis of the amplitude; and the energy.
A further improvement of the present invention is that, in step S4, the time domain features and frequency domain features are put into an Autoencoder neural network recognition algorithm for training; the algorithm model of the Autoencoder neural network recognition algorithm uses a three-layer neural network, the number of nodes in the hidden layer is 10, the encoding function used is the satlin function, and the decoding function used is the purelin function; in the training of the Autoencoder neural network recognition algorithm, the maximum error over the training samples is used as the threshold for judging the recognized user data.
The present invention also provides a mobile device-based user identity authentication system that employs the mobile device-based user identity authentication method described above.
Compared with the prior art, the beneficial effects of the present invention are as follows: the behavior features of the user are extracted by an existing mobile device, and the neural network recognition algorithm of a machine learning recognition model is used to recognize the user's identity. This process is in fact completed while the user picks up the mobile device, which effectively simplifies the user identity authentication process, greatly improves the user experience, and does not interfere with the user's normal use of the mobile device while protecting data privacy. Moreover, the present invention requires no additional hardware cost; the system is simple, effective and convenient to use, can accurately distinguish legitimate users from illegitimate users, and has good practicability. The present invention provides a novel, non-intrusive user identity authentication method characterized by behavioral activity, which improves the user experience and achieves data privacy protection on mobile devices.
Brief Description of the Drawings
FIG. 1 is a schematic workflow diagram of an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, this embodiment provides a mobile device-based user identity authentication method, including the following steps:
Step S1: collecting, by the mobile device, the sensor signal of the mobile device from the pick-up position to the stationary state;
Step S2: filtering the sensor signal;
Step S3: performing signal preprocessing on the filtered sensor signal, and extracting time domain features and frequency domain features of the pick-up event of the mobile device;
Step S4: putting the time domain features and frequency domain features into a neural network recognition algorithm for training, so as to recognize and judge the user data.
The mobile device described in this embodiment refers to an existing mobile smart device, including smart phones, smart watches, smart tablets and the like. Before use, the mobile device is taken out from the pick-up position. The pick-up position in this embodiment is by default defined as one of three locations: the user's trouser pocket (left or right, definable according to the user's habits), a table, or a bag. The process of picking up the mobile device is as follows: first grasp the mobile device, then bring it at normal speed to a position where the user can view it comfortably; the normal range of this movement is 20 cm to 40 cm.
For different users, the motion of bringing the mobile device from the pick-up position to a position where it can be viewed comfortably differs. Therefore, this embodiment first uses the acceleration sensor and the gyroscope sensor in the mobile device to collect the sensor signal of the device from the pick-up position to the stationary state, then performs a series of data processing and analysis and trains a neural network recognition algorithm, obtaining a behavior-feature-based authentication model in one-to-one correspondence with the user's identity. Thereafter, the user's identity can be recognized simply by checking, each time the mobile device is picked up, whether the sensor signal collected between the pick-up position and the stationary state matches the recognition model already trained by the neural network recognition algorithm. Of course, the recognition accuracy is related to the threshold set in the recognition model, which can be set and adjusted according to actual needs.
The stationary state in step S1 of this embodiment refers to the state in which the instantaneous data of the acceleration sensor and the gyroscope sensor are 0. Of course, in practical applications the instantaneous data of these sensors do not always stay exactly at 0, so a floating range can be set; that is, the device is regarded as stationary by default when the instantaneous data of the acceleration sensor and the gyroscope sensor fall within a preset floating range, and this floating range can be customized according to the user's actual situation.
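A minimal sketch of how such a floating-range stationarity test might look is given below; the sensor-reading layout, the tolerance values and the function name are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def is_stationary(accel_xyz, gyro_xyz, accel_tol=0.05, gyro_tol=0.05):
    """Treat the device as stationary when the instantaneous accelerometer
    (gravity-removed) and gyroscope readings stay within a preset floating
    range around zero. The tolerances here are placeholders."""
    return (np.all(np.abs(accel_xyz) < accel_tol) and
            np.all(np.abs(gyro_xyz) < gyro_tol))

# Example: one instant of linear acceleration and angular velocity.
print(is_stationary(np.array([0.01, -0.02, 0.03]), np.array([0.00, 0.01, -0.01])))
```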
As shown in FIG. 1, this embodiment further includes a step S5, in which the training samples whose confidence reaches a preset confidence threshold are placed into the data set used for training in the machine learning algorithm, and the original one-class recognition model is upgraded to a two-class recognition model. The confidence threshold is a preset standard value used to judge the confidence, and it can be adjusted and set according to the actual situation.
Step S5 of this embodiment is necessary because, due to a small amount of additional noise, the accuracy of putting the user's data into a one-class recognition algorithm model may not be very high. Therefore, based on the confidence evaluation of the recognition model used in the Autoencoder neural network recognition algorithm, this embodiment places some training samples with high confidence (such as training samples whose confidence reaches the preset confidence threshold) into the data set used for training, and upgrades the original one-class recognition model to a two-class recognition model, which can effectively improve the recognition accuracy. The Autoencoder neural network recognition algorithm is a deep neural learning algorithm: a neural network with a total of three layers and 13 nodes per layer.
More preferably, in step S5 of this embodiment, the correctly predicted training samples are sorted in ascending order and the first 5% of them are selected as positive samples of the legitimate user in the two-class recognition model; the incorrectly predicted training samples are sorted in descending order and 5% of them are selected as negative samples of illegitimate users in the two-class recognition model. Placing the positive and negative samples into the two-class recognition model can further improve the accuracy, for example by means of a Decision Tree algorithm; the Decision Tree algorithm here uses the CART algorithm, which can distinguish legitimate users from illegitimate users very well (CART is a decision tree algorithm).
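A rough sketch of how this one-class-to-two-class upgrade could be coded is shown below. The embodiment does not specify the confidence measure or the implementation; here the reconstruction error of the one-class model is assumed to stand in for confidence, and scikit-learn's DecisionTreeClassifier (an optimized CART variant) stands in for the Decision Tree/CART step. Both choices are assumptions for illustration only.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def build_two_class_model(features, errors, predicted_ok, ratio=0.05):
    """features: (n, d) feature vectors; errors: per-sample reconstruction error
    (used here as a stand-in confidence score); predicted_ok: boolean mask of
    samples the one-class model predicted correctly."""
    ok_idx, bad_idx = np.where(predicted_ok)[0], np.where(~predicted_ok)[0]

    # Correctly predicted samples, sorted ascending: take the first 5% as
    # positive samples of the legitimate user.
    pos = ok_idx[np.argsort(errors[ok_idx])][:max(1, int(ratio * len(ok_idx)))]
    # Incorrectly predicted samples, sorted descending: take 5% as negative
    # samples of illegitimate users.
    neg = bad_idx[np.argsort(errors[bad_idx])[::-1]][:max(1, int(ratio * len(bad_idx)))]

    X = np.vstack([features[pos], features[neg]])
    y = np.concatenate([np.ones(len(pos)), np.zeros(len(neg))])
    # CART-style decision tree separating legitimate (1) from illegitimate (0).
    return DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
```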
Step S2 of this embodiment includes the following sub-steps:
Step S201: analyzing the time domain and frequency domain characteristics of the sensor signals collected by the acceleration sensor and the gyroscope sensor to obtain the distribution of the sensor signal energy over the frequency range, i.e. using time domain analysis and frequency domain analysis to find in which frequency range the energy is mainly concentrated, and thus the distribution of energy with frequency; in this embodiment the frequency range is preferably set to 10 Hz to 60 Hz. Put simply, time domain and frequency domain analysis of the sensor signals collected by the acceleration sensor and the gyroscope sensor yields the frequency band in which the signal is concentrated;
Step S202: filtering the signal with a band-pass filter according to the frequency range obtained in step S201, for example a Butterworth band-pass filter; the preferred pass band is 10 Hz to 60 Hz, which can effectively remove noise interference.
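A minimal SciPy sketch of such a 10 Hz to 60 Hz Butterworth band-pass filter is shown below; the filter order and the 200 Hz sampling rate are assumptions for illustration, since the embodiment only fixes the pass band.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_10_60(signal, fs=200.0, order=4):
    """Zero-phase Butterworth band-pass filter with a 10-60 Hz pass band.
    fs (sampling rate) and order are illustrative assumptions."""
    b, a = butter(order, [10.0, 60.0], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

# Example: filter one accelerometer axis sampled at the assumed 200 Hz.
t = np.arange(0, 2.0, 1.0 / 200.0)
raw = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)  # 30 Hz motion + 2 Hz drift
clean = bandpass_10_60(raw)
```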
Step S3 of this embodiment includes the following sub-steps:
Step S301: performing event detection on the filtered sensor signal;
Step S302: performing signal preprocessing on the obtained event signal of the mobile device, the signal preprocessing including framing and windowing;
Step S303: acquiring the event signals and extracting the time domain features and frequency domain features of each event signal respectively.
Event detection on the filtered signal is preferably performed using a CFAR algorithm, i.e. a constant false alarm rate algorithm. The CFAR algorithm requires several parameters, such as Winsize (the length of the window), step (the time length from the current window to the next window), Mu (the average of the data within the window), Sigma (the variance of the data within the window), lamda (the first function) and lamda2 (the second function). According to the characteristics of the user behavior signal, the parameters adopted in this embodiment are: Winsize = 256, step = 1, Mu and Sigma computed from the known parameters, lamda = 2 and lamda2 = 5. Using this CFAR algorithm, the pick-up event in the motion signal can be effectively detected and intercepted.
The CFAR algorithm, i.e. the constant false alarm rate algorithm, is an algorithm commonly used in radar; in this patent its role is to detect the start position and the end position of an event. The purpose of step is as follows: since the signals being processed are discrete, and in order to ensure short-term stability of the signal, this embodiment adds a window (the Winsize mentioned above); when the data points of the next window are processed, the current window needs to be moved backwards by a small amount of data so as to continue processing the data points in that window, and step is the length by which the window is moved. Mu is the average of the data in the window, and Sigma is the variance of the data in the window.
In step S301 of this embodiment, the CFAR algorithm can be used to perform event detection on the filtered sensor signal, as follows. Since the signal collected in this embodiment is a time-varying signal, it is not easy to process directly; over a short time, however, the signal can by default be regarded as time-invariant, and a series of operations can then be performed on it. To obtain such a short-time-invariant signal, a window function is applied to the signal, i.e. only one segment of the signal is processed at a time, which is the concept of a window. This embodiment separately calculates the mean and the standard deviation of the signal within the window, with the following formulas:
(The defining equations for the windowed mean μ(i) and the windowed standard deviation σ(i) are given as equation images PCTCN2019070668-appb-000001 and -000002 in the original publication.)
where i is each sample point, μ(i) is the mean within the window, σ(i) is the variance within the window, and W is the window size; A(i) and B(i) are calculated as follows:
(The defining equations for the intermediate quantities A(i) and B(i) are given as equation images PCTCN2019070668-appb-000003 and -000004 in the original publication.)
where S(k) is the original signal within this window and k is the sample point.
This embodiment then detects events based on the quantities calculated above:
|S(i)|·|S(i)| > μ(i) + γ1·σ(i)
If the signal satisfies the above expression, this embodiment regards the point as the starting point of an event. If the requirement of the following expression is met, the point is regarded as the end point:
(The end-of-event condition is given as equation image PCTCN2019070668-appb-000005 in the original publication.)
γ1 and γ2 are constant parameters that depend on the noise, Winsize is the length of this window, and step is the time length from the current window to the next window.
Of course, step S301 above uses the CFAR algorithm only as an example of how event detection can be implemented; in practical applications, other algorithms or approaches may also be used, as long as event detection is achieved. For event detection the key point is how to obtain the start and end of an event so that the event can be effectively intercepted, and there are many ways to do this that are not limited to the CFAR algorithm. For example, in step S301 of this embodiment, the filtered sensor signal may first be windowed and then the mean and standard deviation calculated for each windowed segment, so as to detect and intercept the event signal of the mobile device; this approach is also entirely feasible.
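The sketch below approximates this CFAR-style detection with the quoted parameter values (Winsize = 256, step = 1, γ1 = 2, γ2 = 5). Because the exact formulas for μ(i), σ(i), A(i), B(i) and the end-of-event test are published only as images, the windowed statistics and the end rule used here are assumptions rather than the embodiment's own formulas.

```python
import numpy as np

def cfar_pickup_events(signal, winsize=256, step=1, gamma1=2.0, gamma2=5.0):
    """Approximate CFAR-style detection of pick-up events.

    mu0/sigma0 are the mean and standard deviation of the squared signal over
    the preceding `winsize` samples, frozen at event onset; the start test
    follows the text above, while the end test (the forward-window average
    falling back below mu0 + gamma2*sigma0) is an assumption."""
    power = np.abs(np.asarray(signal, dtype=float)) ** 2
    events, start = [], None
    mu0 = sigma0 = 0.0
    for i in range(winsize, len(power) - winsize, step):
        if start is None:
            ref = power[i - winsize:i]              # noise estimate before any event
            mu0, sigma0 = ref.mean(), ref.std()
            if power[i] > mu0 + gamma1 * sigma0:    # start-of-event test from the text
                start = i
        else:
            ahead = power[i:i + winsize].mean()     # forward-looking average (assumed)
            if ahead < mu0 + gamma2 * sigma0:
                events.append((start, i))           # (start index, end index)
                start = None
    return events
```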
In step S302 of this embodiment, the event signal obtained in step S301 is framed and then windowed again by means of a Hamming window, wherein the overlap duration between adjacent frames is less than one half of the duration of each frame.
Preferably, in step S302, signal preprocessing, including framing and windowing, is performed on the obtained motion-signal event as described above. In this embodiment each frame of the signal is 5 milliseconds long; in order to prevent a motion-signal feature from spanning two frames, a frame-to-frame overlap of 2 milliseconds is used, achieving effective feature extraction. The window function applied in this embodiment is a Hamming window.
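One way the 5 ms framing with 2 ms overlap and Hamming windowing might be implemented is sketched below; the 1 kHz sampling rate is an illustrative assumption chosen so that the 5 ms and 2 ms figures map to whole sample counts.

```python
import numpy as np

def frame_event(event_signal, fs=1000.0, frame_ms=5.0, overlap_ms=2.0):
    """Split an event signal into 5 ms frames with 2 ms overlap and apply a
    Hamming window to each frame. fs is an illustrative assumption."""
    frame_len = int(round(frame_ms * fs / 1000.0))            # samples per frame
    hop = frame_len - int(round(overlap_ms * fs / 1000.0))    # advance between frames
    window = np.hamming(frame_len)
    frames = [event_signal[s:s + frame_len] * window
              for s in range(0, len(event_signal) - frame_len + 1, hop)]
    return np.array(frames)
```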
In step S303 of this embodiment, the time domain features and frequency domain features of each event signal are first extracted separately, and then the time domain features and frequency domain features of each event signal are combined into a feature vector. The extracted time domain features of each event signal include: mean, variance, standard deviation, maximum, minimum, number of zero crossings, difference between maximum and minimum, and mode. The extracted frequency domain features of each event signal include: the DC component; the mean, variance, standard deviation, slope and kurtosis of the graph; the mean, variance, standard deviation, slope and kurtosis of the amplitude; and the energy. Combined, these form a 20-dimensional feature vector.
That is to say, after each event is acquired, this embodiment preferably computes these feature values over the whole signal segment within each event and then concatenates the values, giving a total length of 21. Of course, the more time domain and frequency domain features there are, the more precise the obtained feature vector; the most important features include the mean and the variance.
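A sketch of how such a feature vector could be assembled is given below. It covers only a representative subset of the listed frequency-domain statistics, interprets "slope" as skewness, and approximates the mode by rounding the continuous samples; all three choices are assumptions for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def event_features(x):
    """Build a feature vector for one event signal (assumed 1-D array)."""
    x = np.asarray(x, dtype=float)
    vals, counts = np.unique(np.round(x, 2), return_counts=True)  # approximate mode
    time_feats = [x.mean(), x.var(), x.std(), x.max(), x.min(),
                  int(np.sum(np.diff(np.sign(x)) != 0)),          # zero crossings
                  x.max() - x.min(),
                  vals[np.argmax(counts)]]
    spec = np.abs(np.fft.rfft(x))                                  # magnitude spectrum
    freq_feats = [spec[0],                                         # DC component
                  spec.mean(), spec.var(), spec.std(),
                  skew(spec), kurtosis(spec),
                  np.sum(spec ** 2)]                               # spectral energy
    return np.array(time_feats + freq_feats)
```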
In step S4 of this embodiment, the event-related features obtained above are put into a machine learning recognition algorithm, such as an Autoencoder neural network recognition algorithm, to recognize and judge the user data.
More specifically, in step S4 of this embodiment, the time domain features and frequency domain features are put into the Autoencoder neural network recognition algorithm for training. The algorithm model of the Autoencoder neural network recognition algorithm uses a three-layer neural network, the number of nodes in the hidden layer is 10, the encoding function used is the satlin function, and the decoding function used is the purelin function. In the training of the Autoencoder neural network recognition algorithm, the maximum error over the training samples is used as the threshold for judging the recognized user data; that is, if the error is smaller than this maximum training-sample error, the user is judged to be a legitimate user, otherwise an illegitimate user.
The Autoencoder neural network recognition algorithm in this embodiment uses a three-layer neural network model whose purpose is to reconstruct the input data. For example, given the input (2.1, 3, 5.5, 2), the weights are continuously adjusted so that these data points can finally be reconstructed, e.g. the reconstructed data might be (2, 3.1, 5.3, 2.1). Reconstructing the data points in fact means re-encoding the original data through an encoding function and then decoding the result with a decoding function; the encoding and decoding functions are those mentioned above.
Experiments show that the reconstruction is very close to the input, with a small error. Based on these errors, this embodiment continuously adjusts the weights between the layers to minimize the error. There are three layers in total: the first layer is the input data, and the weights between the layers are continuously adjusted to dynamically reduce the error, so that the model finally outputs the reconstructed data points. This embodiment then selects the largest of these errors as the decision threshold: a sample with an error greater than this threshold is considered not to come from the legitimate user, and a sample with an error smaller than this threshold is considered to come from the legitimate user. In the recognition and judgment, an L2 parameter can be used as the desired precision parameter, and the weight of the L2 parameter is preferably 0.01.
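The following compact numpy sketch illustrates an autoencoder of this kind: a 10-unit hidden layer with a satlin-style (saturating linear) encoder, a purelin (identity) decoder, an L2 weight penalty of 0.01, and the maximum training reconstruction error used as the accept/reject threshold. The training loop, learning rate, initialization and the feature normalization it assumes are illustrative choices, not the embodiment's implementation.

```python
import numpy as np

def satlin(z):                                # saturating linear: clip to [0, 1]
    return np.clip(z, 0.0, 1.0)

def train_autoencoder(X, hidden=10, l2=0.01, lr=0.01, epochs=500, seed=0):
    """X: (n, d) feature vectors of the legitimate user, assumed min-max scaled.
    Returns the learned weights and the decision threshold."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1, b1 = rng.normal(0, 0.1, (hidden, d)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 0.1, (d, hidden)), np.zeros(d)
    for _ in range(epochs):
        Z = X @ W1.T + b1                     # pre-activation of the hidden layer
        H = satlin(Z)                         # satlin encoding
        Y = H @ W2.T + b2                     # purelin (linear) decoding
        E = Y - X                             # reconstruction error, shape (n, d)
        dW2 = E.T @ H / n + l2 * W2
        db2 = E.mean(axis=0)
        dZ = (E @ W2) * ((Z > 0) & (Z < 1))   # back-prop through the satlin clip
        dW1 = dZ.T @ X / n + l2 * W1
        db1 = dZ.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    recon_err = np.linalg.norm(satlin(X @ W1.T + b1) @ W2.T + b2 - X, axis=1)
    return (W1, b1, W2, b2), recon_err.max()  # threshold = max training error

def is_legitimate(x, params, threshold):
    W1, b1, W2, b2 = params
    err = np.linalg.norm(satlin(x @ W1.T + b1) @ W2.T + b2 - x)
    return err <= threshold                   # below threshold: legitimate user
```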
This embodiment also provides a mobile device-based user identity authentication system that employs the mobile device-based user identity authentication method described above.
In summary, this embodiment extracts the behavior features of the user through an existing mobile device and uses the neural network recognition algorithm of a machine learning recognition model to recognize the user's identity. This process is in fact completed while the user picks up the mobile device, which effectively simplifies the user identity authentication process, greatly improves the user experience, and does not interfere with the user's normal use of the mobile device while protecting data privacy. Moreover, the invention requires no additional hardware cost; the system is simple, effective and convenient to use, can accurately distinguish legitimate users from illegitimate users, and has good practicability.
In other words, this embodiment provides a novel, non-intrusive user identity authentication method characterized by behavioral activity, which improves the user experience and achieves data privacy protection on mobile devices.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention should not be considered to be limited to this description. Those of ordinary skill in the art to which the present invention belongs may make a number of simple deductions or substitutions without departing from the concept of the present invention, and all of these should be regarded as falling within the protection scope of the present invention.

Claims (10)

  1. A mobile device-based user identity authentication method, characterized in that it comprises the following steps:
    Step S1: collecting, by the mobile device, the sensor signal of the mobile device from the pick-up position to the stationary state;
    Step S2: filtering the sensor signal;
    Step S3: performing signal preprocessing on the filtered sensor signal, and extracting time domain features and frequency domain features of the pick-up event of the mobile device;
    Step S4: putting the time domain features and frequency domain features into a neural network recognition algorithm for training, so as to recognize and judge the user data.
  2. The mobile device-based user identity authentication method according to claim 1, further comprising a step S5 of placing the training samples whose confidence reaches a preset confidence threshold into the data set used for training in the machine learning algorithm, and upgrading the original one-class recognition model to a two-class recognition model.
  3. The mobile device-based user identity authentication method according to claim 2, wherein in step S5, the correctly predicted training samples are sorted in ascending order and the first 5% of them are selected as positive samples of the legitimate user in the two-class recognition model; the incorrectly predicted training samples are sorted in descending order and 5% of them are selected as negative samples of illegitimate users in the two-class recognition model.
  4. The user identity authentication method based on a mobile device according to any one of claims 1 to 3, wherein step S2 comprises the following sub-steps:
    Step S201: analyzing the time-domain and frequency-domain characteristics of the sensor signals to obtain the distribution of the sensor signal energy over a frequency range;
    Step S202: filtering the sensor signals with a band-pass filter according to the frequency range obtained in step S201 (an illustrative sketch is given after the claims).
  5. The user identity authentication method based on a mobile device according to any one of claims 1 to 3, wherein step S3 comprises the following sub-steps:
    Step S301: performing event detection on the filtered sensor signals;
    Step S302: performing signal preprocessing on the obtained event signals of the mobile device, the signal preprocessing comprising framing and windowing;
    Step S303: acquiring the event signals and extracting the time-domain features and frequency-domain features of each event signal.
  6. The user identity authentication method based on a mobile device according to claim 5, wherein in step S301 the filtered sensor signals are first windowed, and the mean and standard deviation of the signal within each window are then calculated, so as to detect and intercept the event signal of the mobile device (an illustrative sketch is given after the claims).
  7. The user identity authentication method based on a mobile device according to claim 5, wherein in step S302 the event signal obtained in step S301 is divided into frames and windowing is then applied again by means of a Hamming window, the overlap between adjacent frames being less than one half of the duration of each frame (an illustrative sketch is given after the claims).
  8. The user identity authentication method based on a mobile device according to claim 5, wherein in step S303 the time-domain features and frequency-domain features of each event signal are first extracted separately and then combined into a feature vector; the time-domain features extracted from each event signal comprise the mean, variance, standard deviation, maximum, minimum, number of zero crossings, difference between the maximum and the minimum, and mode; the frequency-domain features extracted from each event signal comprise the DC component; the mean, variance, standard deviation, skewness and kurtosis of the spectral shape; the mean, variance, standard deviation, skewness and kurtosis of the amplitude; and the energy (an illustrative sketch is given after the claims).
  9. The user identity authentication method based on a mobile device according to any one of claims 1 to 3, wherein in step S4 the time-domain features and frequency-domain features are fed into an Autoencoder neural network recognition algorithm for training; the algorithm model of the Autoencoder neural network recognition algorithm uses a three-layer neural network with 10 nodes in the hidden layer, the satlin function as the encoding function and the purelin function as the decoding function; and during training of the Autoencoder neural network recognition algorithm, the maximum error among the training samples is used as the threshold for judging the recognized user data (an illustrative sketch is given after the claims).
  10. A user identity authentication system based on a mobile device, characterized in that it employs the user identity authentication method based on a mobile device according to any one of claims 1 to 9.
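The selection rule of claim 3 can be illustrated with a short sketch. The code below is only an assumption-laden illustration, not the patented implementation: it assumes each training sample carries a scalar confidence score and a flag marking whether its prediction was correct, and it sorts by that confidence because the claim does not name the sort key; all function and variable names are invented for the example.

import numpy as np

def select_two_class_samples(features, confidences, correct_mask, ratio=0.05):
    # features: (n_samples, n_features); confidences: (n_samples,);
    # correct_mask: boolean array, True where the prediction was correct.
    # Correctly predicted samples, sorted in ascending order of confidence;
    # the first 5% become extra positive samples for the legitimate user.
    correct_idx = np.flatnonzero(correct_mask)
    correct_sorted = correct_idx[np.argsort(confidences[correct_idx])]
    n_pos = max(1, int(ratio * len(correct_sorted)))
    positives = features[correct_sorted[:n_pos]]
    # Incorrectly predicted samples, sorted in descending order of confidence;
    # 5% of them become negative samples representing illegitimate users.
    wrong_idx = np.flatnonzero(~correct_mask)
    wrong_sorted = wrong_idx[np.argsort(confidences[wrong_idx])[::-1]]
    n_neg = max(1, int(ratio * len(wrong_sorted)))
    negatives = features[wrong_sorted[:n_neg]]
    return positives, negatives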
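For the band-pass filtering of claim 4, a minimal sketch is given below. The claim only requires a band-pass filter over the frequency range found in step S201; the Butterworth design, the filter order and the zero-phase filtering via SciPy are assumptions made for illustration.

from scipy.signal import butter, filtfilt

def bandpass_filter(signal, sample_rate, low_hz, high_hz, order=4):
    # Step S202: band-pass filtering over the frequency range in which the
    # pick-up motion concentrates its energy (determined empirically in S201).
    nyquist = 0.5 * sample_rate
    b, a = butter(order, [low_hz / nyquist, high_hz / nyquist], btype='bandpass')
    # Zero-phase filtering so that the event boundaries are not shifted in time.
    return filtfilt(b, a, signal, axis=0)

Zero-phase filtering is chosen here so that the boundaries of the pick-up event are not shifted in time, which matters for the windowed event detection of claim 6.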
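The event detection of claim 6 windows the filtered signal and computes the mean and standard deviation of each window. The sketch below assumes a 3-axis accelerometer, a fixed window length and fixed thresholds on both statistics; these concrete values and the exact decision rule are assumptions, since the claim does not specify them.

import numpy as np

def detect_pickup_event(filtered, sample_rate, win_s=0.2, mean_thr=0.05, std_thr=0.1):
    # filtered: array of shape (n_samples, n_axes), e.g. a 3-axis accelerometer.
    # Window the filtered signal, compute the mean and standard deviation of each
    # window, and use them to detect and cut out the pick-up event.
    win = max(1, int(win_s * sample_rate))
    magnitude = np.linalg.norm(filtered, axis=1)
    active = []
    for start in range(0, len(magnitude) - win + 1, win):
        segment = magnitude[start:start + win]
        if segment.mean() > mean_thr and segment.std() > std_thr:
            active.append(start)
    if not active:
        return None
    # Intercept the event from the first to the last active window.
    return filtered[active[0]:active[-1] + win]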
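The framing and windowing of claim 7 can be sketched as follows; the frame length of 0.25 s and the overlap of 0.10 s are illustrative values chosen only to satisfy the claimed constraint that the overlap stays below half of the frame duration.

import numpy as np

def frame_and_window(event, sample_rate, frame_s=0.25, overlap_s=0.10):
    # Divide the event signal into frames and apply a Hamming window to each
    # frame; the overlap between adjacent frames is less than half a frame.
    frame_len = int(frame_s * sample_rate)
    hop = frame_len - int(overlap_s * sample_rate)
    window = np.hamming(frame_len)[:, None]          # broadcast over sensor axes
    frames = [event[i:i + frame_len] * window
              for i in range(0, len(event) - frame_len + 1, hop)]
    return np.stack(frames) if frames else np.empty((0, frame_len, event.shape[1]))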
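Claim 8 lists the time-domain and frequency-domain features that form the feature vector. The sketch below computes one feature set per axis of one frame; interpreting the claimed spectral shape as the normalized magnitude spectrum and rounding the samples before taking the mode are assumptions of the example.

import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(frame):
    # frame: one axis of one windowed frame (1-D array).
    zero_crossings = int(np.count_nonzero(np.diff(np.signbit(frame))))
    values, counts = np.unique(np.round(frame, 2), return_counts=True)
    mode_value = values[np.argmax(counts)]           # mode of the rounded samples
    return np.array([frame.mean(), frame.var(), frame.std(),
                     frame.max(), frame.min(), zero_crossings,
                     frame.max() - frame.min(), mode_value])

def frequency_domain_features(frame):
    magnitude = np.abs(np.fft.rfft(frame))
    shape = magnitude / (magnitude.sum() + 1e-12)    # normalized spectral shape
    return np.array([magnitude[0],                                  # DC component
                     shape.mean(), shape.var(), shape.std(),
                     skew(shape), kurtosis(shape),
                     magnitude.mean(), magnitude.var(), magnitude.std(),
                     skew(magnitude), kurtosis(magnitude),
                     np.sum(magnitude ** 2)])                       # energy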
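Claim 9 specifies the recognizer: a three-layer Autoencoder with 10 hidden nodes, a satlin encoding function, a purelin decoding function, and the maximum training error as the decision threshold. satlin and purelin are MATLAB-style transfer functions; the NumPy re-implementation below, together with the learning rate, epoch count and weight initialization, is only a sketch under those assumptions, not the patented implementation.

import numpy as np

def satlin(x):
    # MATLAB-style saturating linear transfer function: clips to the range [0, 1].
    return np.clip(x, 0.0, 1.0)

def satlin_grad(x):
    # Derivative of satlin: 1 inside the linear region, 0 where the output saturates.
    return ((x > 0.0) & (x < 1.0)).astype(float)

class PickupAutoencoder:
    # Three-layer autoencoder: input layer, 10 hidden nodes with a satlin encoder,
    # and a purelin (linear) decoder.
    def __init__(self, n_features, n_hidden=10, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_features))
        self.b2 = np.zeros(n_features)
        self.lr = lr
        self.threshold = None

    def _forward(self, x):
        z1 = x @ self.w1 + self.b1
        hidden = satlin(z1)                   # encoder: satlin
        out = hidden @ self.w2 + self.b2      # decoder: purelin (linear)
        return z1, hidden, out

    def fit(self, X, epochs=200):
        n = len(X)
        for _ in range(epochs):
            z1, hidden, out = self._forward(X)
            err = out - X                     # reconstruction error per sample
            # Gradient descent on the mean-squared reconstruction loss.
            d_hidden = (err @ self.w2.T) * satlin_grad(z1)
            self.w2 -= self.lr * (hidden.T @ err) / n
            self.b2 -= self.lr * err.mean(axis=0)
            self.w1 -= self.lr * (X.T @ d_hidden) / n
            self.b1 -= self.lr * d_hidden.mean(axis=0)
        # The maximum reconstruction error over the training samples becomes
        # the acceptance threshold.
        self.threshold = float(self.reconstruction_error(X).max())

    def reconstruction_error(self, X):
        X = np.atleast_2d(X)
        _, _, out = self._forward(X)
        return np.mean((out - X) ** 2, axis=1)

    def is_legitimate(self, x):
        # Accept the pick-up when its error does not exceed the training maximum.
        return bool(self.reconstruction_error(x)[0] <= self.threshold)

In use, such a model would be trained only on the legitimate user's pick-up feature vectors; a new pick-up is then accepted when its reconstruction error does not exceed the stored maximum training error.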
PCT/CN2019/070668 2018-04-04 2019-01-07 User identity authentication method and system based on mobile device WO2019192235A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810299974.2 2018-04-04
CN201810299974.2A CN108537014B (en) 2018-04-04 2018-04-04 User identity authentication method and system based on mobile equipment

Publications (1)

Publication Number Publication Date
WO2019192235A1 (en)

Family

ID=63483115

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2019/070668 WO2019192235A1 (en) 2018-04-04 2019-01-07 User identity authentication method and system based on mobile device
PCT/CN2019/073512 WO2019192253A1 (en) 2018-04-04 2019-01-28 Mobile device-based user identity authentication method and system

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073512 WO2019192253A1 (en) 2018-04-04 2019-01-28 Mobile device-based user identity authentication method and system

Country Status (2)

Country Link
CN (1) CN108537014B (en)
WO (2) WO2019192235A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116861217A (en) * 2023-07-18 2023-10-10 菏泽学院 Identity recognition method and system for mobile terminal

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537014B (en) * 2018-04-04 2020-03-20 深圳大学 User identity authentication method and system based on mobile equipment
CN109189895B (en) * 2018-09-26 2021-06-04 杭州大拿科技股份有限公司 Question correcting method and device for oral calculation questions
CN109284355B (en) * 2018-09-26 2020-09-22 杭州大拿科技股份有限公司 Method and device for correcting oral arithmetic questions in test paper
CN110321689A (en) * 2019-07-08 2019-10-11 深圳大学 A kind of personal identification method and system based on snap
CN110324350B (en) * 2019-07-09 2021-12-07 中国工商银行股份有限公司 Identity authentication method and server based on mobile terminal non-sensitive sensor data
CN110837130B (en) * 2019-11-22 2021-08-17 中国电子科技集团公司第四十一研究所 Target automatic detection algorithm based on millimeter wave/terahertz wave radiation
CN113630484B (en) * 2020-05-07 2024-03-19 Oppo广东移动通信有限公司 Equipment control method and device, storage medium and electronic equipment
CN113259368B (en) * 2021-06-01 2021-10-12 北京芯盾时代科技有限公司 Identity authentication method, device and equipment
CN113626785B (en) * 2021-07-27 2023-10-27 武汉大学 Fingerprint authentication security enhancement method and system based on user fingerprint pressing behavior

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105373712A (en) * 2015-10-22 2016-03-02 上海斐讯数据通信技术有限公司 Mobile terminal unlocking system and mobile terminal unlocking method based on neural network
US20170300124A1 (en) * 2017-03-06 2017-10-19 Microsoft Technology Licensing, Llc Ultrasonic based gesture recognition
CN107273728A (en) * 2017-05-05 2017-10-20 西安交通大学苏州研究院 Intelligent watch unblock and authentication method based on motion-sensing behavioural characteristic
CN107609501A (en) * 2017-09-05 2018-01-19 东软集团股份有限公司 The close action identification method of human body and device, storage medium, electronic equipment
CN108537014A (en) * 2018-04-04 2018-09-14 深圳大学 A kind of method for authenticating user identity and system based on mobile device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7119716B2 (en) * 2003-05-28 2006-10-10 Legalview Assets, Limited Response systems and methods for notification systems for modifying future notifications
CN105023581A (en) * 2015-07-24 2015-11-04 南京工程学院 Audio tampering detection device based on time-frequency domain joint features
CN105224104B (en) * 2015-09-01 2018-06-19 电子科技大学 Pedestrian movement's state identification method based on smart mobile phone grip mode
CN105279411A (en) * 2015-09-22 2016-01-27 电子科技大学 Gait bio-feature based mobile device identity recognition method
CN106899968B (en) * 2016-12-29 2020-04-24 南京航空航天大学 Active non-contact identity authentication method based on WiFi channel state information
CN106971203B (en) * 2017-03-31 2020-06-09 中国科学技术大学苏州研究院 Identity recognition method based on walking characteristic data
CN107026928A (en) * 2017-05-24 2017-08-08 武汉大学 A kind of behavioural characteristic identification authentication method and device based on mobile phone sensor
CN107341379A (en) * 2017-06-13 2017-11-10 广东欧珀移动通信有限公司 Unlocking screen method and device, electronic installation and computer-readable recording medium
CN107315354A (en) * 2017-06-27 2017-11-03 苏州楚博生物技术有限公司 A kind of intelligent domestic system based on radio sensing network
CN107592422B (en) * 2017-09-20 2019-07-02 上海交通大学 A kind of identity identifying method and system based on gesture feature

Also Published As

Publication number Publication date
WO2019192253A1 (en) 2019-10-10
CN108537014A (en) 2018-09-14
CN108537014B (en) 2020-03-20

Similar Documents

Publication Publication Date Title
WO2019192235A1 (en) User identity authentication method and system based on mobile device
Sun et al. Accelerometer-based speed-adaptive gait authentication method for wearable IoT devices
CN108925144B (en) Identity authentication method and communication terminal
WO2019109433A1 (en) Identity authentication method and device based on gait recognition
Nickel et al. Classification of acceleration data for biometric gait recognition on mobile devices
CN111178155B (en) Gait feature extraction and gait recognition method based on inertial sensor
WO2018223491A1 (en) Identification method and system based on tooth occlusion sound
KR20210047539A (en) EMG-based user authentication device and authentication method
US9223297B2 (en) Systems and methods for identifying a user of an electronic device
CN109256139A (en) A kind of method for distinguishing speek person based on Triplet-Loss
Al-Naffakh et al. Continuous user authentication using smartwatch motion sensor data
Hestbek et al. Biometric gait recognition for mobile devices using wavelet transform and support vector machines
EP3140765B1 (en) User authentication based on body tremors
CN108306736A (en) Identity authentication method and equipment are carried out using electrocardiosignal
CN111371951B (en) Smart phone user authentication method and system based on electromyographic signals and twin neural network
Mufandaidza et al. Continuous user authentication in smartphones using gait analysis
Beton et al. Biometric secret path for mobile user authentication: A preliminary study
CN106971203B (en) Identity recognition method based on walking characteristic data
CN112069483A (en) User identification and authentication method of intelligent wearable device
CN108737623A (en) The method for identifying ID of position and carrying mode is carried based on smart mobile phone
US11790073B2 (en) Vibration signal-based smartwatch authentication method
Chakraborty et al. An approach for designing low cost deep neural network based biometric authentication model for smartphone user
CN115248910A (en) Identity authentication method and device applied to mobile terminal
CN113627238B (en) Biological identification method, device, equipment and medium based on vibration response characteristics of hand structure
CN113126794A (en) Abnormal operation identification method, abnormal operation identification device and computer-readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19781919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/01/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19781919

Country of ref document: EP

Kind code of ref document: A1