CN107122641B - Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit - Google Patents


Info

Publication number
CN107122641B
CN107122641B (application CN201710276970.8A)
Authority
CN
China
Prior art keywords
identification
owner
acceleration
motion
motion parameters
Prior art date
Legal status
Active
Application number
CN201710276970.8A
Other languages
Chinese (zh)
Other versions
CN107122641A (en)
Inventor
陈焰
朱添田
Current Assignee
Hangzhou Yidun Information Technology Co., Ltd.
Original Assignee
Hangzhou Yidun Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Yidun Information Technology Co Ltd
Priority to CN201710276970.8A
Publication of CN107122641A
Application granted
Publication of CN107122641B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/316User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4014Identity check for transactions

Abstract

The invention discloses a usage-habit-based smart device owner identification method and identification device. Identification proceeds as follows: under a predetermined trigger condition, motion parameters of the smart device, comprising acceleration and motion direction, are collected within a sampling time at a preset sampling frequency; the motion parameters are segmented in time order, and a corresponding feature vector is extracted for each segment; all segment feature vectors are input into a pre-trained recognition model to obtain the probability that the smart device is being operated by its owner. If the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation. The invention overcomes the privacy-disclosure, data-sharing, and specific-training limitations of existing risk control systems, and the owner identification method achieves high identification accuracy at high speed, enabling real-time identification.

Description

Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit
Technical Field
The invention relates to the technical field of smart device security, and in particular to a usage-habit-based smart device owner identification method and identification device.
Background
Due to the rapid development of smart devices (including portable mobile communication devices such as smartphones and tablet computers), mobile payment has become the mainstream payment method. Mobile payment vendors such as Alipay and WeChat Pay typically require the user to bind a bank card to a local application to enable quick payment. During a transaction, the merchant can use a POS system to let the user make a proximity payment, or let the user complete the payment directly through a payment interface on the smartphone. Behind these conveniences lies a series of risks: if the user's phone is stolen, an attacker has the opportunity to pay with the victim's phone. Conventional authentication typically requires the user to establish a credential, such as a password, that only the user possesses. But this approach only verifies whether the operator of the phone knows the corresponding credential; it fundamentally cannot verify the user's identity, i.e., whether he or she is the owner of the smartphone. In view of these issues, risk control strategies such as face recognition and location recognition have been integrated into mobile payment. Such features can effectively characterize the smartphone owner and are difficult for an attacker to circumvent, thereby improving the security of mobile payment.
Although risk control policies are used in mobile application payments, these authentication approaches have several disadvantages that prevent any of them from providing continuous protection for users:
(1) existing risk control strategies require the user to provide some private information and to grant privacy-related permissions; extracting this information poses a potential threat to the smartphone user;
(2) each mobile application server has its own data collection scheme, and different applications cannot share data, so user information is uploaded repeatedly and becomes redundant;
(3) the training and verification mechanisms in risk control strategies sometimes require the user to perform a specific action, such as placing the face within range of the smartphone camera or placing the smartphone on a fixed part of the body.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a usage-habit-based smart device owner identification method and identification device, which can effectively solve the privacy disclosure problem of existing identification methods.
A smart device owner identification method based on use habits comprises the following steps:
s1, collecting motion parameters of the intelligent equipment in sampling time according to a preset sampling frequency under a preset trigger condition, wherein the motion parameters comprise acceleration and a motion direction;
s2, carrying out segmentation processing on the motion parameters according to a time sequence, and respectively extracting corresponding feature vectors for each segment obtained by the segmentation processing;
s3, inputting all segment feature vectors into a pre-trained recognition model to obtain the probability that the smart device is being operated by its owner; if the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation.
Aiming at the limitations of the prior art, the invention provides a usage-habit-based smart device owner identification method that identifies the owner and user in real time and is applicable to most smart devices. Features are extracted from the motion parameters produced when the owner uses the smart device, a behavioral feature model (i.e., the recognition model) is created for the owner of each device, and the data collected during subsequent use of the device is then authenticated against the existing model. This realizes the owner identification function and enables effective risk control for the smart device, without requiring any extra permissions or privacy-related data.
The invention directly uses the acceleration sensor and gyroscope sensor built into the smart device to acquire the device's acceleration and motion direction, respectively.
The trigger condition predetermined in step S1 is that the smart device transitions from the dormant state to the active state. Generally, when the smart device is not in use it is in a dormant state: its screen is dark and it is not being operated. In the invention, the smart device is considered to have switched from the dormant state to the active state when either of the following occurs:
(1) the screen of the smart device is illuminated;
(2) the intelligent device has a new application opened in the foreground.
It should be noted that the sampling time and sampling frequency are set according to the actual application requirements; preferably, in the present invention, the sampling time is 3 s and the sampling frequency is 50 Hz, i.e., 50 samples are taken per second.
In the step S2, when the motion parameters are segmented in time sequence, any two adjacent segments obtained have an overlapping portion.
Each collection of motion parameters has a corresponding sampling moment, and segmentation proceeds in time order. Preferably, in step S2 a sliding window is used for segmentation, with the window displacement smaller than the window size, so that adjacent segments overlap. In a specific implementation, the motion parameters collected at the various moments (i.e., sampling times) within the sampling time are sorted in time order and then segmented with the sliding window; at each move, the window displacement is smaller than the window size. Further preferably, the window size of the sliding window is 0.2 s and the window displacement is 0.1 s.
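The sliding-window segmentation described above can be sketched as follows (a minimal illustration, assuming the preferred 50 Hz rate, 0.2 s window, and 0.1 s displacement; the function and parameter names are our own):

```python
def segment(samples, window=10, step=5):
    """Split a time-ordered list of samples into overlapping segments.

    At a 50 Hz sampling rate, window=10 corresponds to the 0.2 s
    window size and step=5 to the 0.1 s displacement described above,
    giving a 50% overlap between adjacent segments.
    """
    segments = []
    start = 0
    while start + window <= len(samples):
        segments.append(samples[start:start + window])
        start += step
    return segments

# 3 s of data at 50 Hz -> 150 samples -> 29 overlapping segments
segs = segment(list(range(150)))
```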
Preferably, in step S2, the following feature parameters are extracted for each segment to form the corresponding feature vector: the mean, standard deviation, median, skewness, kurtosis, minimum, maximum, zero-crossing rate, and root-mean-square amplitude in each of the X, Y, and Z directions, together with the sum square root of the acceleration and the sum square root of the motion direction.
In the invention, the recognition model is obtained by training through the following method:
the feature vector of the intelligent equipment operated by the owner of the non-owner and the feature vector of the intelligent equipment operated by the owner are obtained, a training set is formed, and the ratio of the number of the feature vectors of the owner to the number of the feature vectors of the non-owner is 1: n, wherein n is 3-6;
and training by using a support vector machine and a training set to obtain the recognition model.
The training set comprises 2000 to 5000 feature vectors. The feature vectors corresponding to the owner are obtained by collecting motion parameters during the initial period of use; the feature vectors corresponding to non-owners are obtained from a third party (most large Internet companies hold motion parameter data for a large number of users), using stratified sampling.
In order to improve the identification precision, the intelligent equipment owner identification method further comprises the step of updating the identification model until the identification precision of the updated identification model meets the following requirements:
α_new > A and Var(α) < V (i.e., both conditions α_new > A and Var(α) < V hold simultaneously),
wherein A is the identification accuracy threshold, V is the variance threshold of the identification accuracy, and Var(α) is the variance of the identification accuracies of all recognition models obtained before the current update together with that of the model obtained by the current update;
when the recognition model is updated, the following operations are carried out:
adding the feature vectors extracted from one part of motion parameters to the training set to obtain a new training set every time the motion parameters are collected, and forming a test set by using the feature vectors extracted from the other part of motion parameters;
training by using a new training set to obtain a new recognition model, and testing the recognition accuracy of the new recognition model and the old recognition model by using a test set; and judging whether the recognition accuracy of the new recognition model and the old recognition model meets the following conditions:
λ·α_new + (1 − λ)·α_old > α_old − β,
and if so, the old recognition model is replaced with the new one to complete the update; otherwise it is not replaced. Here α_new and α_old are the identification accuracies of the new and old recognition models respectively, λ is the weight of the new recognition model (taking a value between 0 and 1), and β is a correction factor.
In the invention, updating of the recognition model starts only after the first trained recognition model is obtained. When the model is updated, the training set gradually gains new feature vectors while retaining the feature vectors it held at the first training; the test set, by contrast, is re-formed at each update operation (i.e., feature vectors from the previous update's test set are not retained).
Without special explanation, in the present invention, an old recognition model is understood as a recognition model before the current updating operation, and a new recognition model is understood as a recognition model obtained again in the process of the current updating operation.
In the present invention, the values of λ, β, a and V are adjusted according to the specific application requirements, and preferably, λ is 0.8, β is 0.05, a is 0.8, and V is 0.1.
It should be noted that in the smart device identification method of the invention, when the recognition model is trained with motion parameters determined to come from the owner's use, the model's identification accuracy further serves as its performance index, and the model is updated using this accuracy during the initial period of use so as to improve identification accuracy.
Preferably, the motion parameters further include the gravitational acceleration, and step S1 further includes screening the collected motion parameters with the gravitational acceleration to retain valid data; steps S2 to S3 are then performed on the screened acceleration and motion direction.
In the invention, when the collected motion parameters are screened using the gravitational acceleration to retain valid acceleration and motion-direction data, the corresponding acceleration and motion direction are considered invalid and deleted if the gravitational acceleration satisfies the following condition; otherwise they are considered valid and retained:
{X_min < X_gr(m) < X_max} ∪ {Y_min < Y_gr(m) < Y_max} ∪ {Z_min < |Z_gr(m)| < Z_max},
wherein X_gr(m), Y_gr(m), and Z_gr(m) are the components in the X, Y, and Z directions of the gravitational acceleration collected at the m-th sampling; X_min and X_max are the minimum and maximum thresholds in the X direction; Y_min and Y_max are those in the Y direction; Z_min and Z_max are those in the Z direction; and m ranges from 1 to M, where M is the total number of motion parameter collections within the sampling time.
In the invention, X_min = −1.5, X_max = 1.5; Y_min = −1.5, Y_max = 1.5; Z_min = 9, Z_max = 10. It should be noted that the maximum and minimum thresholds of the gravitational acceleration in each direction can be adjusted according to the actual use situation.
The user may not be holding the smart device while using it; it may instead be lying on a flat surface, in which case the collected motion parameters are invalid. Screening the data with the gravitational acceleration effectively removes a large amount of such invalid data and helps improve owner identification accuracy.
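A minimal sketch of this gravity-based screening, implementing the union condition exactly as stated above (the field names such as `gx` and the dict layout are illustrative):

```python
def is_invalid(gx, gy, gz,
               x_rng=(-1.5, 1.5), y_rng=(-1.5, 1.5), z_rng=(9.0, 10.0)):
    """Return True when the gravity reading suggests the device is
    lying flat, so the sample should be discarded.

    Implements the union condition from the text: the sample is
    dropped when any of the three per-axis conditions holds.
    """
    return (x_rng[0] < gx < x_rng[1]
            or y_rng[0] < gy < y_rng[1]
            or z_rng[0] < abs(gz) < z_rng[1])

def screen(samples):
    # samples: list of dicts holding the gravity components together
    # with the acceleration/direction payload to keep or drop.
    return [s for s in samples if not is_invalid(s["gx"], s["gy"], s["gz"])]
```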
In the present invention, the direction X, Y, Z is defined as X representing the acceleration of the mobile phone moving left and right, Y representing the acceleration of the mobile phone moving back and forth, and Z representing the acceleration of the mobile phone in the vertical direction.
Preferably, the motion parameters further include the gravitational acceleration, and step S1 further includes determining the current relative state of the smart device from the Euclidean distance between the gravitational acceleration and the acceleration within the sampling time. Correspondingly, the recognition models include a relatively-static recognition model and a relatively-moving recognition model, and in step S3 the feature vectors of all segments are input to the recognition model corresponding to the relative state for owner recognition.
The method for determining the relative state of the intelligent device according to the relation between the gravity acceleration and the acceleration in the motion parameters comprises the following steps:
and respectively calculating the average values of the gravity acceleration and the acceleration in the motion parameters in the sampling time, and respectively calculating the Euclidean distance between the two average values, wherein if the Euclidean distance is greater than a preset distance threshold value, the intelligent equipment is considered to be in a relative motion state, and otherwise, the intelligent equipment is considered to be in a relative static state.
The method for training the relative stationary recognition model and the relative motion recognition model is the same as the method for training the recognition model, except that:
the feature vectors contained in the training set adopted when the relatively static recognition model is trained are all in a relatively static state;
the feature vectors contained in the training set used in training the relative motion recognition model are all in relative motion state.
The invention also provides an intelligent equipment owner identification device based on the use habit, which comprises:
the data acquisition module is used for acquiring motion parameters of the intelligent equipment within sampling time according to a preset sampling frequency under a preset trigger condition, wherein the motion parameters comprise acceleration and a motion direction;
the data processing module is used for carrying out segmentation processing on the motion parameters according to a time sequence and extracting corresponding feature vectors for each segment obtained by the segmentation processing;
the recognition module is used for inputting all segment feature vectors into a pre-trained recognition model to obtain the probability that the smart device is being operated by its owner; if the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation.
Compared with the prior art, the intelligent equipment owner identification method and the intelligent equipment owner identification device based on the use habits can effectively realize the owner identification of the intelligent equipment and effectively avoid the risk of revealing the privacy of the user.
Drawings
Fig. 1 is a flowchart of an intelligent device owner identification method based on usage habits according to this embodiment;
fig. 2A, 2B and 2C are schematic diagrams of fluctuation of a difference value (D-value) between an acceleration and a gravitational acceleration of the smart device in a relative motion state in three directions of X, Y and Z, respectively;
fig. 3A, 3B and 3C are schematic diagrams illustrating fluctuation of the difference (D-value) between the acceleration and the gravitational acceleration of the smart device in the relatively stationary state in three directions X, Y and Z, respectively.
Detailed Description
The invention will be described in detail below with reference to the drawings and specific embodiments.
As shown in fig. 1, the method for identifying the owner of the smart device based on the usage habit according to the embodiment includes the following steps:
s1, acquiring motion parameters of the intelligent equipment in sampling time according to a preset sampling frequency under a preset trigger condition, wherein the motion parameters comprise acceleration and a motion direction;
generally, when the smart device is not used, the smart device is in a dormant state, and at the moment, the screen of the smart device is darkened or no new application program is started in the smart device. In this embodiment, the predetermined trigger condition is that the intelligent device is switched from the dormant state to the active state, and when any one of the following conditions occurs in the intelligent device, the intelligent device is considered to be switched from the dormant state to the active state:
(1) the screen of the smart device is illuminated;
(2) the intelligent device has a new application opened in the foreground.
The sampling frequency when acquiring motion parameters is 50 Hz, i.e., 50 collections per second; each collection reads the values on the x, y, and z axes of the 3 sensors, 9 values in total. The sampling time for each acquisition is 3 seconds.
S2, carrying out segmentation processing on the motion parameters according to the time sequence, and respectively extracting corresponding feature vectors for each segment obtained by the segmentation processing;
in this embodiment, when the motion parameters are segmented in time sequence, any two adjacent segments have an overlapping portion.
The overlap rate between adjacent segments is usually between 30% and 80% and can be adjusted according to the actual application requirements. Each collection of motion parameters has a corresponding sampling moment, and segmentation proceeds in time order. As one implementation, in step S2 the motion parameters are segmented in time order using a sliding window, with the window displacement smaller than the window size. In a specific implementation, the motion parameters collected at the various moments (i.e., sampling times) within the sampling time are sorted in time order and then segmented with the sliding window; at each move, the window displacement is smaller than the window size.
In this embodiment, the window size of the sliding window is 0.2 s (each segment therefore contains 10 sets of motion parameters, a set being the combined result of one collection: at 50 collections per second, one set is acquired every 0.02 s, so 0.2 s covers 10 sets), and the window displacement is 0.1 s, i.e., the data overlap rate between two adjacent segments is 50%.
It should be noted that, if the motion parameters include an acceleration and a motion direction, a group of motion parameters acquired at one time should be a combination of the acquired acceleration and motion direction; similarly, if the motion parameters include acceleration, motion direction, and gravitational acceleration, the set of motion parameters acquired at one time should be a combination of the acquired acceleration, motion direction, and gravitational acceleration.
The feature vector is a feature representation of the user's operation and captures the user's operating habits. In this embodiment, the following feature parameters are extracted for each segment to form the corresponding feature vector: the mean, standard deviation, median, skewness, kurtosis, minimum, maximum, zero-crossing rate, and root-mean-square amplitude in each of the X, Y, and Z directions, together with the sum square root of the acceleration and the sum square root of the motion direction. The following description takes the feature values of the acceleration in the X direction as an example:
1. Mean:
μ = (1/K) Σ_{k=1..K} x(k)
2. Standard deviation:
σ = sqrt( (1/K) Σ_{k=1..K} (x(k) − μ)² )
3. Median: the middle value of x(1), …, x(K) after sorting by size
4. Skewness:
S = (1/K) Σ_{k=1..K} ((x(k) − μ)/σ)³
5. Kurtosis:
T = (1/K) Σ_{k=1..K} ((x(k) − μ)/σ)⁴
6. Minimum: L = min{ x(k), k = 1, …, K }
7. Maximum: H = max{ x(k), k = 1, …, K }
8. Zero-crossing rate:
Z = (1/(2(K − 1))) Σ_{k=1..K−1} | sgn(x(k+1)) − sgn(x(k)) |
9. Root-mean-square amplitude:
R = sqrt( (1/K) Σ_{k=1..K} x(k)² )
10. Sum square root:
ARSS = (1/K) Σ_{k=1..K} sqrt( x(k)² + y(k)² + z(k)² )
wherein x(k), y(k), and z(k) are the components in the X, Y, and Z directions of the acceleration in the motion parameters collected at the k-th sampling; sgn[x(k)] and sgn[x(k+1)] are the sign function of the acceleration at the k-th and (k+1)-th samplings; and K is the total number of motion parameter sets contained in the segment. K depends on the sampling frequency and the window size; in this embodiment K = 10.
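The per-axis statistics and the sum square root can be sketched in code as below. The third feature is reconstructed here as the median, and the aggregation in `sum_square_root` as a per-segment average; both are assumptions, since the original formula images are unavailable:

```python
import math

def axis_features(x):
    """The nine per-axis statistics listed above, for one axis of one
    segment. Returns [mean, std, median, skewness, kurtosis, min,
    max, zero-crossing rate, RMS amplitude]."""
    k = len(x)
    mean = sum(x) / k
    std = math.sqrt(sum((v - mean) ** 2 for v in x) / k)
    xs = sorted(x)
    median = xs[k // 2] if k % 2 else (xs[k // 2 - 1] + xs[k // 2]) / 2
    skew = sum((v - mean) ** 3 for v in x) / (k * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (k * std ** 4) if std else 0.0
    # Fraction of adjacent pairs whose signs differ.
    zcr = sum(1 for i in range(k - 1)
              if (x[i] >= 0) != (x[i + 1] >= 0)) / (k - 1)
    rms = math.sqrt(sum(v * v for v in x) / k)
    return [mean, std, median, skew, kurt, min(x), max(x), zcr, rms]

def sum_square_root(xs, ys, zs):
    # ARSS-style feature: mean of sqrt(x^2 + y^2 + z^2) over the segment.
    return sum(math.sqrt(a * a + b * b + c * c)
               for a, b, c in zip(xs, ys, zs)) / len(xs)
```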
S3, inputting all segment feature vectors into the pre-trained recognition model to obtain the probability that the smart device is being operated by its owner; if the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation.
In this embodiment, the recognition model is obtained by training as follows:
the feature vector of the intelligent equipment operated by the owner of the non-owner and the feature vector of the intelligent equipment operated by the owner are obtained, a training set is formed, and the ratio of the number of the feature vectors of the owner to the number of the feature vectors of the non-owner is 1: n, wherein n is 3 to 6 (in the embodiment, n is 5);
and training by using a support vector machine and a training set to obtain the recognition model.
RBF (radial basis function) kernel parameters were set during training:
(i) the penalty coefficient C (value range 1 to 90000, preferred value 100);
(ii) γ, which governs the distribution of the data after mapping to the new feature space (value range 0 to 0.1, preferred value 0.01).
As a preferred embodiment, after the training to obtain the recognition model, the method further includes updating the recognition model until the recognition accuracy of the updated recognition model satisfies:
α_new > A and Var(α) < V,
wherein A is the identification accuracy threshold, V is the variance threshold of the identification accuracy, and Var(α) is the variance of the identification accuracies of all recognition models obtained before the current update together with that of the model obtained by the current update;
when the recognition model is updated, the following operations are carried out:
adding the feature vectors extracted from one part of motion parameters to the training set to obtain a new training set every time the motion parameters are collected, and forming a test set by using the feature vectors extracted from the other part of motion parameters;
training by using a new training set to obtain a new recognition model, and testing the recognition accuracy of the new recognition model and the old recognition model by using a test set; and judging whether the recognition accuracy of the new recognition model and the old recognition model meets the following conditions:
λ·α_new + (1 − λ)·α_old > α_old − β,
and if so, the old recognition model is replaced with the new one to complete the update; otherwise it is not replaced. Here α_new and α_old are the identification accuracies of the new and old recognition models respectively, λ is the weight of the new recognition model (taking a value between 0 and 1), and β is a correction factor.
In this embodiment, the training set preferably includes 4000 feature vectors during the first training, and then, in the process of updating the recognition model, the newly acquired feature vectors are added to the training set each time, that is, the training set is continuously expanded.
In this embodiment, λ ranges over 0 to 1 and is preferably 0.8; β ranges over 0 to 0.1 and is preferably 0.05; A ranges over 0.7 to 1 and is preferably 0.8; and V ranges over 0 to 1 and is preferably 0.1.
The trained classifier evaluates each new feature vector and outputs a value between 0 and 1 indicating whether the feature vector is closer to the owner or to others. Let p be the probability output by the recognition model for a feature vector; the per-segment result is then owner operation if p > γ, and non-owner operation otherwise, where γ is a decision threshold between 0 and 1. Finally, the results for all input feature vectors are counted to judge whether the owner is using the smart device.
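A sketch of this final counting step; aggregating the per-segment votes by simple majority is an assumption, as the text only says that all segment results are counted to reach a decision:

```python
def identify_owner(probabilities, gamma=0.5):
    """Aggregate per-segment classifier outputs into a final verdict.

    Each probability is the classifier output in [0, 1] for one
    segment; a segment votes "owner" when its output exceeds the
    decision threshold gamma. gamma=0.5 is an illustrative value.
    """
    votes = sum(1 for p in probabilities if p > gamma)
    return votes > len(probabilities) / 2
```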
The training method of this embodiment uses a support vector machine (SVM) for classification learning and adopts semi-supervised online learning:
first, the owner's motion parameters must be trained on. If n groups of feature vectors are obtained when one user uses the smart device, and there are p users in total, then n × p groups of feature vectors form the training set. The set formed from the owner's motion parameters is labeled 1, and the data set formed from the motion parameters of people other than the owner is labeled 0. When p is large, the owner data and the non-owner data are unbalanced.
Further, this embodiment solves the above problem with stratified sampling. Analyzing the 56 features with principal component analysis shows that the 10th feature, the average root sum square of the acceleration (ARSSA), has the most significant influence on the result. The feature vectors of all other users are therefore sorted by ARSSA value, the ARSSA range is divided into five equal intervals from small to large, and the same amount of data is then selected from each interval. To ensure temporal continuity, the 99 consecutive samples following each selected sample (feature vector) are also added to the training set.
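The stratified sampling over ARSSA intervals might look like the following sketch (bin construction and parameter names are our own; the run of 100 consecutive samples corresponds to each selected anchor plus the 99 that follow it):

```python
import random

def stratified_sample(vectors, arssa, per_bin, bins=5, run=100, seed=0):
    """Stratified sampling of non-owner feature vectors by ARSSA value.

    The ARSSA range is split into `bins` equal-width intervals; the
    same number of anchor samples is drawn from each interval, and
    each anchor is extended to a run of `run` consecutive samples to
    preserve temporal continuity.
    """
    rng = random.Random(seed)
    lo, hi = min(arssa), max(arssa)
    width = (hi - lo) / bins or 1.0
    chosen = []
    for b in range(bins):
        left = lo + b * width
        right = hi if b == bins - 1 else left + width
        idxs = [i for i, v in enumerate(arssa)
                if left <= v < right or (b == bins - 1 and v == hi)]
        for i in rng.sample(idxs, min(per_bin, len(idxs))):
            chosen.extend(vectors[i:i + run])
    return chosen
```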
When in use, the smart device may be lying on a flat surface rather than held in the hand; motion parameters acquired in that case are invalid data. To avoid degrading the identification accuracy, this embodiment uses the gravitational acceleration to judge the validity of each acquired set of motion parameters and deletes invalid data (i.e., motion parameters judged invalid). The specific steps are as follows:
the motion parameters further include the gravitational acceleration; step S1 further includes screening the collected motion parameters using the gravitational acceleration so as to retain valid data, and steps S2 to S3 are performed on the screened acceleration and motion direction. When screening the collected motion parameters with the gravitational acceleration, if the gravitational acceleration satisfies the following condition, the corresponding acceleration and motion direction are considered invalid and deleted; otherwise they are considered valid and retained:
{X_min < X_gr(m) < X_max} ∪ {Y_min < Y_gr(m) < Y_max} ∪ {Z_min < |Z_gr(m)| < Z_max},
where X_gr(m), Y_gr(m), and Z_gr(m) are the components in the X, Y, and Z directions of the gravitational acceleration acquired at the m-th sampling; X_min and X_max are the minimum and maximum thresholds in the X direction; Y_min and Y_max are the minimum and maximum thresholds in the Y direction; Z_min and Z_max are the minimum and maximum thresholds in the Z direction; m ranges from 1 to M, where M is the total number of motion-parameter acquisitions within the sampling time.
In this example, X_min = -1.5, X_max = 1.5, Y_min = -1.5, Y_max = 1.5, Z_min = 9, Z_max = 10, and M = 150.
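The validity screen can be sketched as below with the example thresholds. The condition is implemented with the union (logical OR) exactly as the patent writes it, though with these threshold values a conjunction (a device lying flat satisfies all three intervals at once) may be what is physically intended:

```python
def is_invalid(gx, gy, gz):
    """Screen one gravity reading (gx, gy, gz). Returns True when the
    reading matches the patent's invalidity condition, i.e. any of the
    three interval conditions holds (union, as written in the formula).
    Invalid readings are deleted before segmentation."""
    X_MIN, X_MAX = -1.5, 1.5
    Y_MIN, Y_MAX = -1.5, 1.5
    Z_MIN, Z_MAX = 9.0, 10.0
    return ((X_MIN < gx < X_MAX)
            or (Y_MIN < gy < Y_MAX)
            or (Z_MIN < abs(gz) < Z_MAX))
```

A device lying flat on a table (gravity almost entirely on the Z axis, e.g. (0.2, 0.1, 9.8)) is judged invalid, while a tilted handheld reading such as (3.0, -2.0, 8.5) is retained.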
A change in the user's motion state while using the smart device can strongly influence the judgment. In this embodiment, the motion state is divided into a relatively stationary state and a relative motion state. In the relative motion state, the difference (D-value) between the acceleration of the smart device and the gravitational acceleration fluctuates much more than in the relatively stationary state. Fig. 2A, 2B, and 2C show the fluctuation of this difference in the X, Y, and Z directions, respectively, in the relative motion state; Fig. 3A, 3B, and 3C show the corresponding fluctuation in the relatively stationary state.
To improve the recognition accuracy in view of the different fluctuation behavior in the two relative states, this embodiment provides a preferred implementation:
the motion parameters further include the gravitational acceleration, and step S1 further includes determining the current relative state of the smart device according to the Euclidean distance between the gravitational acceleration and the acceleration within the sampling time; correspondingly, the recognition models include a relatively stationary recognition model and a relative motion recognition model, and in step S3 the feature vectors of all segments are input to the recognition model corresponding to the relative state for owner identification. The relative state of the smart device is determined from the relation between the gravitational acceleration and the acceleration in the motion parameters as follows:
the average values of the gravitational acceleration and of the acceleration in the motion parameters over the sampling time are computed, and the Euclidean distance between the two averages is calculated; if the Euclidean distance is greater than a preset distance threshold, the smart device is considered to be in the relative motion state, otherwise in the relatively stationary state.
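The relative-state determination above can be sketched as follows; the threshold value and function names are illustrative (the patent only says "a preset distance threshold"):

```python
from math import sqrt

def relative_state(gravity_readings, accel_readings, threshold=1.0):
    """Average each 3-axis signal over the sampling window, then
    compare the Euclidean distance between the two mean vectors
    against a preset threshold (the value 1.0 is an assumption)."""
    def mean_vec(readings):
        n = len(readings)
        return [sum(r[i] for r in readings) / n for i in range(3)]
    g = mean_vec(gravity_readings)
    a = mean_vec(accel_readings)
    dist = sqrt(sum((g[i] - a[i]) ** 2 for i in range(3)))
    return "moving" if dist > threshold else "stationary"
```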
The relation between the gravitational acceleration and the acceleration in each acquired set of motion parameters is used to judge the relative state of the smart device at acquisition time; the relative state is divided into two categories, relative motion and relatively stationary, and each category is identified by its own classifier (i.e., recognition model).
When a new motion parameter needs to be identified, the relative state (relative motion or relatively stationary) of the smart device at acquisition time is determined first, and the feature vector is then input to the classifier corresponding to that state for identification.
The feature vectors contained in the training set adopted when the relatively static recognition model is trained are all in a relatively static state;
the feature vectors contained in the training set used in training the relative motion recognition model are all in relative motion state.
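The two-classifier routing described above can be sketched as follows; the callable-model interface, the state labels, and the threshold are assumptions for illustration:

```python
def identify(feature_vector, state, stationary_model, moving_model,
             gamma=0.5):
    """Route the feature vector to the classifier matching the
    relative state, then threshold the returned owner probability
    at gamma. Each model is assumed to be a callable returning the
    probability that the operator is the owner."""
    model = stationary_model if state == "stationary" else moving_model
    p = model(feature_vector)
    return "owner" if p > gamma else "non-owner"
```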
The smart device owner identification method runs on an identification system comprising the smart device and a server communicatively connected to the smart device.
The method can use either an offline identification mode or an online identification mode:
(a) in an offline recognition mode:
the user downloads a pre-trained recognition model from the server over a network connection; while the smart device is in use, the motion parameters are acquired locally, and data processing (including segmentation and feature-vector extraction) and final identification are performed on the smart device;
(b) in an online identification mode:
the server obtains the motion parameters of the smart device, performs the data processing (including segmentation and feature-vector extraction), and carries out the final identification.
For ease of implementation, in one implementation of the online identification mode, the motion parameters are acquired locally by the smart device and then sent to the server.
The owner identification method of this embodiment is described below using a smartphone as a preferred implementation. Each smartphone has a unique device number serving as its identifier.
Before the owner identification is carried out:
the smartphone must install and deploy a corresponding application program; under the preset trigger condition, the application deployed on the phone takes the values acquired by the acceleration sensor, the gyroscope sensor, and the gravity sensor as motion parameters, and uploads them, together with the phone's device number, to a remote server (i.e., the server) connected to the smartphone.
After the server side obtains the motion parameters, the following preprocessing is carried out:
the obtained motion parameters are screened using the gravitational acceleration, the screened parameters are segmented, and a feature vector is extracted for each segment;
and determining the relative state corresponding to each motion parameter according to the relation between the gravity acceleration and the acceleration.
After preprocessing, the server trains the recognition models from the feature vectors corresponding to the screened motion parameters; the recognition models include a relatively stationary recognition model and a relative motion recognition model, trained as follows:
the training method of the relative static recognition model comprises the following steps:
the feature vectors in the relatively stationary state, together with non-owner feature vectors obtained from a third party, form a training set from which the recognition model is trained; the obtained recognition model is then updated.
The relative motion recognition model is trained in the same way as the relatively stationary recognition model, except that the feature vectors in the training set are those in the relative motion state.
After updating is complete, owner identification can be performed. In offline identification: the updated recognition model is first downloaded from the server; the acquired motion parameters are preprocessed locally on the smart device to determine the relative state at acquisition time and the feature vectors of the motion parameters; the feature vectors are then input to the recognition model corresponding to the determined relative state.
In online identification, the acquired motion parameters are sent directly to the server, which performs the preprocessing and identification; the details are not repeated here.
The identification method can be integrated directly with the mobile payment applications installed on the smart device: when a mobile payment provider receives a transaction request from the phone, it only needs to query the server as to whether the owner is operating the smartphone, achieving the purpose of security protection. This verification can unify the previously separate authentication schemes of the various mobile payment providers; moreover, the verification data only needs to be uploaded once, rather than separately to each provider's authentication server.
The owner identification method of this embodiment mainly addresses shortcomings in existing risk-control mechanisms for smartphones. A dedicated classifier is learned from the user's habits in operating the smart device, so that whenever the smartphone is used, the method can judge whether the current user is the authenticated owner. Only data from motion sensors, which are not sensitive to user privacy, needs to be collected, making the method easy to offer as a third-party service without additional support. This sensor-based real-time owner identification can therefore strengthen the security of any mobile application requiring user authentication. For example, suppose Alice and Bob are good friends and Alice leaves her phone at Bob's home; if Facebook on Alice's phone logs in automatically, Bob can view Alice's private Facebook information without her consent. If the sensor-based real-time owner identification method is deployed on Alice's phone, Bob's operations as a non-owner will be flagged, enabling follow-up protective actions such as notifying the owner by e-mail or redirecting to a blank page.
In practical applications, to further improve security, the owner can be notified, for example by e-mail, when it is identified that a non-owner is operating the device.
The above-mentioned embodiments are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only the most preferred embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions, equivalents, etc. made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A smart device owner identification method based on use habits is characterized by comprising the following steps:
s1, collecting motion parameters of the intelligent equipment in sampling time according to a preset sampling frequency under a preset trigger condition, wherein the motion parameters comprise acceleration and a motion direction;
s2, carrying out segmentation processing on the acceleration and the motion direction according to the time sequence, and respectively extracting corresponding feature vectors for each segment obtained by the segmentation processing;
s3, inputting all segmented feature vectors into a pre-trained recognition model to obtain the probability that the operation on the smart device is by the owner; if the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation;
the motion parameters further include a gravitational acceleration, and the step S1 further includes determining the current relative state of the smart device according to the Euclidean distance between the gravitational acceleration and the acceleration within the sampling time; correspondingly, the identification models comprise a relatively stationary recognition model and a relative motion recognition model, and in step S3 the feature vectors of all segments are input to the recognition model corresponding to the relative state for owner identification;
the method for determining the relative state of the intelligent device according to the relation between the gravity acceleration and the acceleration in the motion parameters comprises the following steps:
the average values of the gravitational acceleration and of the acceleration in the motion parameters over the sampling time are computed, and the Euclidean distance between the two averages is calculated; if the Euclidean distance is greater than a preset distance threshold, the smart device is considered to be in the relative motion state, otherwise in the relatively stationary state.
2. The intelligent device owner identification method based on usage habits of claim 1, wherein the trigger condition predetermined by the step S1 is that the intelligent device is changed from a dormant state to an active state.
3. The intelligent device owner identification method based on usage habits of claim 1, wherein the identification model is trained by the following method:
the feature vectors of non-owner operation of the intelligent equipment and the feature vectors of owner operation of the intelligent equipment are obtained to form a training set, wherein the ratio of the number of owner feature vectors to the number of non-owner feature vectors is 1: n, and n is 3-6; the recognition model is obtained by training a support vector machine on the training set.
4. The intelligent device owner identification method based on usage habits of claim 3, further comprising updating the identification model until the identification accuracy of the updated identification model satisfies:
α_new > A ∩ Var(α) < V

stopping, wherein A is an identification accuracy threshold, V is a variance threshold of the identification accuracy, and Var(α) is the variance of the identification accuracies of all recognition models obtained before the current update together with the recognition model obtained by the current update;
when the recognition model is updated, the following operations are carried out:
each time motion parameters are collected, the feature vectors extracted from one part of the motion parameters are added to the training set to obtain a new training set, and the feature vectors extracted from the other part form a test set;
training by using a new training set to obtain a new recognition model, and testing the recognition accuracy of the new recognition model and the old recognition model by using a test set; and judging whether the recognition accuracy of the new recognition model and the old recognition model meets the following conditions:
λα_new + (1 - λ)α_old > α_old - β,
if so, the old recognition model is replaced with the new recognition model to complete the update; otherwise it is not replaced; wherein α_new and α_old are respectively the identification accuracies of the new and old recognition models, λ is the weight of the new recognition model with a value of 0-1, and β is a correction factor.
5. The intelligent equipment owner identification method based on usage habits of any one of claims 1 to 4, wherein the step S1 further comprises screening the collected motion parameters using the gravitational acceleration so as to retain valid data, and steps S2-S3 are performed on the screened acceleration and motion direction.
6. The intelligent device owner identification method based on usage habits of claim 5, wherein when the collected motion parameters are screened by the gravitational acceleration to retain valid acceleration and motion-direction data, if the gravitational acceleration satisfies the following condition, the corresponding acceleration and motion direction are considered invalid and deleted; otherwise they are considered valid and retained:
{X_min < X_gr(m) < X_max} ∪ {Y_min < Y_gr(m) < Y_max} ∪ {Z_min < |Z_gr(m)| < Z_max},
where X_gr(m), Y_gr(m), and Z_gr(m) are the components in the X, Y, and Z directions of the gravitational acceleration acquired at the m-th sampling; X_min and X_max are the minimum and maximum thresholds in the X direction; Y_min and Y_max are the minimum and maximum thresholds in the Y direction; Z_min and Z_max are the minimum and maximum thresholds in the Z direction; m ranges from 1 to M, where M is the total number of motion-parameter acquisitions within the sampling time.
7. The intelligent device owner identification method based on usage habits of claim 6, wherein the feature vectors contained in the training set adopted in training the relatively stationary identification model are all in a relatively stationary state; the feature vectors contained in the training set used in training the relative motion recognition model are all in relative motion state.
8. A smart device owner identification device based on usage habits, characterized by comprising:
the data acquisition module is used for acquiring motion parameters of the intelligent equipment in sampling time according to a preset sampling frequency under a preset trigger condition, wherein the motion parameters comprise acceleration, a motion direction and gravitational acceleration;
the data processing module is used for carrying out segmentation processing on the acceleration and the motion direction according to a time sequence and extracting corresponding feature vectors for each segment obtained by the segmentation processing;
the state judgment module is used for judging the current relative state of the intelligent equipment according to the gravity acceleration and the Euclidean distance between the accelerations in the sampling time;
the motion state comprises a relatively stationary state and a relative motion state; if the Euclidean distance between the average values of the gravitational acceleration and of the acceleration in the motion parameters over the sampling time is greater than a preset distance threshold, the smart device is considered to be in the relative motion state, otherwise in the relatively stationary state; the identification module is used for inputting all segmented feature vectors into the pre-trained recognition model corresponding to the relative state to obtain the probability that the operation on the smart device is by the owner; if the probability is greater than a preset threshold, the identification result is owner operation; otherwise, the identification result is non-owner operation; the recognition models include a relatively stationary recognition model and a relative motion recognition model.
CN201710276970.8A 2017-04-25 2017-04-25 Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit Active CN107122641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710276970.8A CN107122641B (en) 2017-04-25 2017-04-25 Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710276970.8A CN107122641B (en) 2017-04-25 2017-04-25 Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit

Publications (2)

Publication Number Publication Date
CN107122641A CN107122641A (en) 2017-09-01
CN107122641B true CN107122641B (en) 2020-06-16

Family

ID=59725823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710276970.8A Active CN107122641B (en) 2017-04-25 2017-04-25 Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit

Country Status (1)

Country Link
CN (1) CN107122641B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214447B (en) * 2018-08-27 2021-10-29 郑州云海信息技术有限公司 Disk life prediction method and device
CN109977639B (en) * 2018-10-26 2021-05-04 招商银行股份有限公司 Identity authentication method and device and computer readable storage medium
CN110223672B (en) * 2019-05-16 2021-04-23 九牧厨卫股份有限公司 Offline multi-language voice recognition method
CN110795722A (en) * 2019-10-25 2020-02-14 支付宝(杭州)信息技术有限公司 Deployment method and device of security authentication model and electronic equipment
CN112784224B (en) * 2019-11-08 2024-01-30 中国电信股份有限公司 Terminal safety protection method, device and system
CN111062353B (en) * 2019-12-25 2023-04-28 每日互动股份有限公司 Method and server for acquiring gait feature data of terminal user based on mobile terminal data
CN111126294B (en) * 2019-12-25 2023-06-16 每日互动股份有限公司 Method and server for identifying gait of terminal user based on mobile terminal data
CN111061376B (en) * 2019-12-25 2023-07-18 每日互动股份有限公司 Method and server for identifying terminal user machine changing based on mobile terminal data
CN110990819B (en) * 2019-12-25 2023-04-21 每日互动股份有限公司 Method and server for acquiring gait feature data of terminal user based on mobile terminal data
CN111142688B (en) * 2019-12-25 2023-05-12 每日互动股份有限公司 Method and server for identifying terminal user machine changing based on mobile terminal data
CN111062352B (en) * 2019-12-25 2023-07-14 每日互动股份有限公司 Method and server for identifying gait of terminal user based on mobile terminal data
CN111404941B (en) * 2020-03-17 2022-08-09 广东九联科技股份有限公司 Network security protection method and network security protection device
CN111626769B (en) * 2020-04-30 2021-04-06 北京芯盾时代科技有限公司 Man-machine recognition method and device and storage medium
CN111490995A (en) * 2020-06-12 2020-08-04 支付宝(杭州)信息技术有限公司 Model training method and device for protecting privacy, data processing method and server
CN112989980A (en) * 2021-03-05 2021-06-18 华南理工大学 Target detection system and method based on web cloud platform

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101365193A (en) * 2007-08-09 2009-02-11 财团法人Seoul大学校产学协力财团 System and method for customer authentication execution based on customer behavior mode
CN103077356A (en) * 2013-01-11 2013-05-01 中国地质大学(武汉) Protecting and tracking method for primary information of mobile terminal based on user behavior pattern
CN103530546A (en) * 2013-10-25 2014-01-22 东北大学 Identity authentication method based on mouse behaviors of user
CN103530543A (en) * 2013-10-30 2014-01-22 无锡赛思汇智科技有限公司 Behavior characteristic based user recognition method and system
CN103533546A (en) * 2013-10-29 2014-01-22 无锡赛思汇智科技有限公司 Implicit user verification and privacy protection method based on multi-dimensional behavior characteristics
CN103699822A (en) * 2013-12-31 2014-04-02 同济大学 Application system and detection method for users' abnormal behaviors in e-commerce based on mouse behaviors
CN103927471A (en) * 2014-04-18 2014-07-16 电子科技大学 Authentication method and device
CN104268481A (en) * 2014-10-10 2015-01-07 中国联合网络通信集团有限公司 Method and device for realizing early warning of smart phone
CN105389486A (en) * 2015-11-05 2016-03-09 同济大学 Authentication method based on mouse behavior
CN105528613A (en) * 2015-11-30 2016-04-27 南京邮电大学 Behavior identification method based on GPS speed and acceleration data of smart phone

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9743279B2 (en) * 2014-09-16 2017-08-22 Samsung Electronics Co., Ltd. Systems and methods for device based authentication


Also Published As

Publication number Publication date
CN107122641A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
CN107122641B (en) Intelligent equipment owner identification method and intelligent equipment owner identification device based on use habit
CN110163611B (en) Identity recognition method, device and related equipment
CN106022030B (en) A kind of identity authorization system and method based on user's acquired behavior feature
EP1783650B1 (en) Method and communication system for comparing biometric data obtained by means of biometric sensors with reference data
CN1972186B (en) A mobile identity authentication system and its authentication method
CN106156702A (en) Identity identifying method and equipment
CN106127130A (en) The notice system and method based on living things feature recognition being managed
CN108920921B (en) Sustainable identity authentication method for smart phone sensitive APP
Witte et al. Context-aware mobile biometric authentication based on support vector machines
CN111625792B (en) Identity recognition method based on abnormal behavior detection
CN110991249A (en) Face detection method, face detection device, electronic equipment and medium
US11695746B2 (en) Multi-layer user authentication with live interaction
CN108595923A (en) Identity identifying method, device and terminal device
CN110324350A (en) Identity identifying method and server based on the non-sensitive sensing data in mobile terminal
CN106228133A (en) User authentication method and device
CN112861082A (en) Integrated system and method for passive authentication
CN107316023A (en) A kind of face identification system for being used to share equipment
US20240028698A1 (en) System and method for perfecting and accelerating biometric identification via evolutionary biometrics via continual registration
US20210192032A1 (en) Dual-factor identification system and method with adaptive enrollment
CN110546638A (en) Improvements in biometric authentication
CN113096291A (en) Regional personnel management and control method, system, machine readable medium and equipment
CN110121174B (en) Implicit identity authentication method of mobile intelligent terminal
CN108508862A (en) A kind of authentication system and vehicle for vehicle
EP2202699A2 (en) Identity database bureau
CN112261222A (en) System-level user identity continuous authentication method on smart phone

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200211

Address after: Room 431, Building 7, No. 5, No. 3 Road, Genshan Branch, Jianggan District, Hangzhou City, Zhejiang Province, 310004

Applicant after: Hangzhou Yidun Information Technology Co., Ltd.

Address before: 311231, room 3, building 2, building 28, 301 poly Road, Hangzhou, Zhejiang, Binjiang District, -271

Applicant before: Hangzhou Anshi Information Technology Co. Ltd.

GR01 Patent grant