CN107592422B - A kind of identity identifying method and system based on gesture feature - Google Patents

Identity authentication method and system based on gesture features

Info

Publication number: CN107592422B
Application number: CN201710848687.8A
Authority: CN (China)
Prior art keywords: acceleration data, gesture, resampling, data, environment
Legal status: Active (an assumption, not a legal conclusion)
Other versions: CN107592422A (original application, Chinese)
Inventors: 易平, 吴琰磊, 谢谦, 张维, 徐超
Original and current assignee: Shanghai Jiaotong University
Application filed by Shanghai Jiaotong University

Landscapes

  • Collating Specific Patterns (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an identity authentication method and system based on gesture features. The method comprises: collecting acceleration data of a user gesture; resampling the acceleration data to obtain processed acceleration data; performing environment denoising on the processed acceleration data to obtain denoised acceleration data; and identifying the correctness of the denoised acceleration data with a support vector machine, where identity authentication succeeds if the recognition result is correct. The identity authentication method and system based on gesture features provided by the invention can, on the one hand, better ensure the integrity of the collected data and, on the other hand, take the influence of environmental factors into account, so that they still maintain a good recognition rate under different environmental scenes.

Description

Identity authentication method and system based on gesture characteristics
Technical Field
The invention relates to the field of identity recognition, in particular to an identity authentication method and system based on gesture features.
Background
With the continuous development of smart phones and Internet technologies, the mobile phone has become an indispensable part of people's lives. People use mobile phones as an information processing platform, and phones have penetrated every aspect of our lives, from chat, communication and payment to banking, learning and office work. With this growing convenience, all kinds of private information are inevitably stored on the phone, and if the phone is lost, the consequences can be disastrous. As the first barrier protecting this private information, the identity authentication of the mobile phone is very important.
The identity authentication methods commonly used by mobile phones on the market at present fall into two main types. The first is authentication based on private information: the user inputs a preset password or draws a screen-lock pattern to complete identity authentication. The security of such password information is low and it is easily obtained by others; if the password is stolen, irreparable loss can be caused once the mobile phone is also stolen. The second is authentication based on biological characteristics: fingerprint unlocking, iris unlocking and the like are already in use, but this mode requires additional hardware support from the mobile phone, which undoubtedly and greatly increases its cost.
In many cases, the traditional mobile phone identity authentication modes are not convenient enough: when a user is driving or carrying things, only one hand may be free to operate the mobile phone, and it is difficult to conveniently complete the authentication process with one hand.
At present, gesture authentication is not widely applied in the market. Two methods are used in related papers: one matches test samples against template data with the Dynamic Time Warping (DTW) algorithm; the other uses a naive algorithm to extract gesture track angles and features and feeds the extracted features into a support vector machine classification model.
Because of the discreteness and uncertainty of gesture data, both algorithms inevitably lose part of the information. Furthermore, gesture recognition in the prior art does not take the influence of environmental factors into account.
Disclosure of Invention
The invention aims to provide an identity authentication method and system based on gesture features which, on the one hand, can better guarantee the integrity of the collected data and, on the other hand, by considering the influence of environmental factors, can still keep a good recognition rate under different environmental scenes.
In order to achieve the purpose, the invention provides the following scheme:
a method of identity authentication based on gesture features, the method comprising:
acquiring acceleration data of a user gesture;
resampling the acceleration data to obtain the processed acceleration data;
carrying out environment denoising on the processed acceleration data to obtain denoised acceleration data;
and identifying the correctness of the denoised acceleration data by adopting a support vector machine, and if the identification result is correct, successfully authenticating the identity.
Optionally, the resampling the acceleration data to obtain the processed acceleration data specifically includes:
and resampling the acceleration data by adopting a mixed sampling mode combining linear resampling and Bézier resampling.
Optionally, the resampling of the acceleration data by the mixed sampling mode combining linear resampling and Bézier resampling specifically includes:
carrying out N-point linear resampling on the collected acceleration data according to time to obtain a linear resampled acceleration data point sequence Pl(ti'), i = 0, 1, …, N-1, wherein ti' is the ith sampling time point, ti' = t0 + i(tN-1 - t0)/(N-1), t0 is the first sampling instant of the linear resampling and tN-1 is the last sampling instant of the linear resampling; the collected acceleration data of the user gesture are recorded as the original samples, P(tj) is the acceleration data acquired at the original sampling instant tj, P(tj+1) is the acceleration data acquired at the original sampling instant tj+1, tj is the largest sampling time point in the original sampling sequence that is less than ti', tj+1 is the smallest sampling time point in the original sampling sequence that is greater than ti', and the original sampling timing is the sampling timing adopted in the original sampling;
carrying out N-point Bézier resampling on the acquired acceleration data according to time to obtain a Bézier resampled acceleration data point sequence Pb(ti'), i = 0, 1, …, N-1,
wherein tj+2 is the next sampling time point after tj+1 in the original sampling timing and P(tj+2) is the acceleration data collected at the moment tj+2;
according to the formula P(ti') = klPl(ti') + kbPb(ti'), calculating to obtain the resampled acceleration data point sequence P(ti'), wherein kl is the weighting coefficient of the linear resampling and kb is the weighting coefficient of the Bézier resampling.
Optionally, the performing environment denoising on the processed acceleration data to obtain the denoised acceleration data specifically includes:
training the convolutional neural network by adopting the characteristic information of the set environment scene to obtain a convolutional neural network model;
when the user gesture is made, collecting environmental characteristic information of the user to obtain gesture environmental information;
classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
determining environmental noise data when the user gesture is made according to the set environmental scene to which the gesture environmental information belongs;
and removing the environmental noise data from the processed acceleration data to obtain the de-noised acceleration data.
Optionally, the determining, according to the set environment scene to which the gesture environment information belongs, the environment noise data when the user gesture is made specifically includes:
acquiring a preset noise prediction function N (t) corresponding to the set environment scene to which the gesture environment information belongs;
and substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
Optionally, the identifying the correctness of the denoised acceleration data by using the support vector machine specifically includes:
respectively training a support vector machine by adopting a positive sample and a negative sample to obtain a trained support vector, wherein the positive sample is the acceleration data corresponding to the correct gesture of the user, and the negative sample is the acceleration data corresponding to the preset irregular gesture and the forest line data generated by the data curve of the positive sample;
and carrying out classification and identification on the denoised acceleration data by adopting the trained support vector.
The invention also provides an identity authentication system based on the gesture characteristics, which comprises:
the data acquisition module is used for acquiring the acceleration data of the user gesture;
the preprocessing module is used for resampling the acceleration data to obtain the processed acceleration data;
the environment denoising module is used for carrying out environment denoising on the processed acceleration data to obtain the denoised acceleration data;
and the classification identification module is used for identifying the correctness of the denoised acceleration data by adopting a support vector machine, and if the identification result is correct, the identity authentication is successful.
Optionally, the preprocessing module specifically includes:
and the data resampling unit is used for resampling the acceleration data by adopting a mixed sampling mode combining linear resampling and Bézier resampling.
The data resampling unit specifically includes:
a linear resampling subunit, configured to perform N-point linear resampling on the acquired acceleration data according to time to obtain a linear resampled acceleration data point sequence Pl(ti'), i = 0, 1, …, N-1, wherein ti' is the ith sampling time point, ti' = t0 + i(tN-1 - t0)/(N-1), t0 is the first sampling instant of the linear resampling and tN-1 is the last sampling instant of the linear resampling; the collected acceleration data of the user gesture are recorded as the original samples, P(tj) is the acceleration data acquired at the original sampling instant tj, P(tj+1) is the acceleration data acquired at the original sampling instant tj+1, tj is the largest sampling time point in the original sampling sequence that is less than ti', tj+1 is the smallest sampling time point in the original sampling sequence that is greater than ti', and the original sampling time sequence is the sampling time sequence adopted in the original sampling;
a Bézier resampling subunit, configured to perform N-point Bézier resampling on the acquired acceleration data according to time to obtain a Bézier resampled acceleration data point sequence Pb(ti'), i = 0, 1, …, N-1,
wherein tj+2 is the next sampling time point after tj+1 in the original sampling time sequence and P(tj+2) is the acceleration data collected at the moment tj+2;
a resampling mixing subunit, configured to calculate, according to the formula P(ti') = klPl(ti') + kbPb(ti'), the resampled acceleration data point sequence P(ti'), wherein kl is the weighting coefficient of the linear resampling and kb is the weighting coefficient of the Bézier resampling.
Optionally, the environment denoising module specifically includes:
the convolutional neural network training unit is used for training the convolutional neural network by adopting the characteristic information of the set environment scene to obtain a convolutional neural network model;
the gesture environment information acquisition unit is used for acquiring environment characteristic information of the user when the user gesture is made to obtain gesture environment information;
the environment scene determining unit is used for classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
the environment noise data determining unit is used for determining environment noise data when the user gesture is made according to the set environment scene to which the gesture environment information belongs;
the environment denoising unit is used for removing the environment noise data from the processed acceleration data to obtain the denoised acceleration data;
the ambient noise data determination unit specifically includes:
a preset noise prediction function obtaining subunit, configured to obtain a preset noise prediction function corresponding to the set environment scene to which the gesture environment information belongs;
and the environmental noise data determining subunit is used for substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
Optionally, the classification identifying module specifically includes:
the support vector machine training unit is used for respectively training a support vector machine by adopting a positive sample and a negative sample to obtain a trained support vector, wherein the positive sample is the acceleration data corresponding to the correct gesture of the user, and the negative sample is the acceleration data corresponding to the preset irregular gesture and the forest line data generated by the data curve of the positive sample;
and the classification and identification unit is used for performing classification and identification on the denoised acceleration data by adopting the trained support vector.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects: the identity authentication method and system based on gesture features resample the collected acceleration data of the user gesture in a mixed sampling mode combining linear resampling and Bézier resampling, which guarantees the information content of the feature mapping and preserves the integrity of the acceleration data as far as possible.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow chart illustrating an identity authentication method based on gesture features according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a gesture entry state according to an embodiment of the present invention;
fig. 3 is a structural diagram of an identity authentication system based on gesture features according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide an identity authentication method and system based on gesture features which, on the one hand, can better guarantee the integrity of the collected data and, on the other hand, by considering the influence of environmental factors, can still keep a good recognition rate under different environmental scenes.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a schematic flow chart of an identity authentication method based on gesture features according to an embodiment of the present invention, and as shown in fig. 1, the identity authentication method based on gesture features provided by the present invention includes the following steps:
step 101: acquiring acceleration data of a user gesture; the sensors used in acquiring the acceleration data of the user gesture comprise an acceleration sensor, a gravity sensor, a linear acceleration sensor and a rotation vector sensor, and the time sequence of the data acquired by the sensors is a vector V;
step 102: resampling the acceleration data to obtain the processed acceleration data;
step 103: carrying out environment denoising on the processed acceleration data to obtain denoised acceleration data;
step 104: and identifying the correctness of the denoised acceleration data by adopting a support vector machine, and if the identification result is correct, successfully authenticating the identity.
Wherein, step 102 specifically comprises:
and resampling the acceleration data by adopting a mixed sampling mode combining linear resampling and Bézier resampling, mapping the acceleration data to a fixed-dimension space and thereby preparing for the later classification and recognition by the support vector machine classifier. The method specifically comprises the following steps:
carrying out N-point linear resampling on the collected acceleration data according to time to obtain a linear resampled acceleration data point sequence Pl(ti'), i = 0, 1, …, N-1, wherein ti' is the ith sampling time point, ti' = t0 + i(tN-1 - t0)/(N-1), t0 is the first sampling instant of the linear resampling and tN-1 is the last sampling instant of the linear resampling; the collected acceleration data of the user gesture are recorded as the original samples, P(tj) is the acceleration data acquired at the original sampling instant tj, P(tj+1) is the acceleration data acquired at the original sampling instant tj+1, tj is the largest sampling time point in the original sampling sequence that is less than ti', tj+1 is the smallest sampling time point in the original sampling sequence that is greater than ti', and the original sampling time sequence is the sampling time sequence adopted in the original sampling;
carrying out N-point Bézier resampling on the acquired acceleration data according to time to obtain a Bézier resampled acceleration data point sequence Pb(ti'), i = 0, 1, …, N-1, wherein tj+2 is the next sampling time point after tj+1 in the original sampling time sequence and P(tj+2) is the acceleration data collected at the moment tj+2;
according to the formula P(ti') = klPl(ti') + kbPb(ti'), calculating to obtain the resampled acceleration data point sequence P(ti'), wherein kl is the weighting coefficient of the linear resampling and kb is the weighting coefficient of the Bézier resampling.
Step 103 specifically comprises:
training the convolutional neural network by adopting the characteristic information of the set environment scenes to obtain a convolutional neural network model; a set environment scene is a preset scene, such as running, walking, going upstairs, going downstairs, or riding a vehicle, and each such scene has a preset environmental noise function describing the periodic variation of the environmental noise value over time; this environmental noise function is recorded as the preset noise prediction function;
when the user gesture is made, collecting environmental characteristic information of the user to obtain gesture environment information; the user gesture is a shake or a specific gesture action made while holding the mobile terminal, and since the mobile terminal continuously collects the user's acceleration, the acceleration data collected in a short period before the gesture is made is taken as the environmental acceleration data at the time the gesture is made;
classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
determining environmental noise data when the user gesture is made according to the set environmental scene to which the gesture environmental information belongs;
and removing the environmental noise data from the processed acceleration data to obtain the de-noised acceleration data.
Preferably, the determining, according to the set environment scene to which the gesture environment information belongs, the environment noise data when the user gesture is made specifically includes:
acquiring a preset noise prediction function N (t) corresponding to the set environment scene to which the gesture environment information belongs;
and substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
Step 104 specifically includes:
respectively training a support vector machine by adopting a positive sample and a negative sample to obtain a trained support vector, wherein the positive sample is the acceleration data corresponding to the correct gesture of the user, and the negative sample is the acceleration data corresponding to the preset irregular gesture and the forest line data generated by the data curve of the positive sample;
and carrying out classification and identification on the denoised acceleration data by adopting the trained support vector.
The identity authentication method based on gesture features provided by the invention resamples the collected acceleration data of the user gesture in a mixed sampling mode combining linear resampling and Bézier resampling, which guarantees the information content of the feature mapping and preserves the integrity of the acceleration data as far as possible.
As still another embodiment of the present invention, user gestures are characterized by the user's strength, shape of action, habits and the like. The invention adopts a user-triggered method to delimit the gesture range: as shown in fig. 2, in the ready-to-enter state, pressing the screen starts recording the gesture and lifting the finger finishes it. When training the gesture, the user repeats this process a certain number of times; when unlocking is needed, the user performs one more acquisition, and once it is judged to pass, the device is unlocked.
The identity recognition method provided by the invention comprises four stages of user gesture acceleration data acquisition, acceleration data preprocessing, environment noise reduction and classification recognition. When a user inputs a set gesture, three stages of gesture acceleration data acquisition, acceleration data preprocessing and environmental noise reduction which are the same as the recognition process are also needed. And in the classification identification stage, whether the unlocking gesture of the user is consistent with the input set gesture or not is compared, and if the unlocking gesture of the user is consistent with the input set gesture, the user passes identity identification.
A user gesture acceleration data acquisition stage:
based on the android platform hardware standard, the standard sensor is used for tracking and monitoring aiming at short gestures input by a user, so that the gesture information amount is kept as far as possible.
The specific tracking sensor data TYPE is TYPE _ accelerome, TYPE _ category, TYPE _ GYROSCOPE, TYPE _ ROTATION _ VECTOR.
The feedback data frequency specified by the Android sensor standard is at most fmax = 20 Hz, with a standard frequency fnormal; under test, power consumption at the normal frequency is very small. Therefore, the sensor data acquisition is divided into two modes: first, the mobile phone background acquires environmental data over a long period for environmental noise reduction, using the standard frequency fnormal = 5 Hz; second, when a short gesture occurs, in order to keep as much gesture information as possible, the highest frequency fmax = 20 Hz is adopted.
The raw sampling-point data collected by the sensors is P(t) = {ax, ay, az, rx, ry, rz}, where t is time, A(t) = {ax, ay, az} is the three-dimensional linear acceleration relative to the mobile phone coordinate system, and R(t) = {rx, ry, rz} is the rotation angle vector of the mobile phone relative to the terrestrial coordinate system. The original data points collected are P(ti), i = 0, 1, 2, …, n-1.
Acceleration data preprocessing stage:
assuming that the time length of the same gesture is basically consistent, therefore, the dimension-fixed mapping of data is realized by using a resampling mode: and a linear resampling-Bessel resampling mixed sampling mode is adopted.
Linear resampling
Linear resampling, i.e. the data points are directly connected, and the points on the connecting line are taken as new sampled data points:
B(t) = (1 - t)P0 + tP1, t ∈ [0, 1], where P0 and P1 are two adjacent data points and B(t) is any point between P0 and P1.
In the invention, the original data is linearly resampled at 1000 points according to the time point. The time points of the linear resampling are ti' = t0 + i(tn-1 - t0)/999, i = 0, 1, …, 999. According to the linear superposition principle, the data points after linear resampling are
Pl(ti') = P(tj) + (ti' - tj)(P(tj+1) - P(tj))/(tj+1 - tj),
where tj is the largest time point in the original time sequence that is less than ti', and tj+1 is the smallest time point in the original time sequence that is greater than ti'.
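The 1000-point linear resampling above can be sketched in a few lines of Python; this is only a sketch for one sensor channel, and the function name is illustrative (NumPy's `interp` performs exactly the two-point linear interpolation described):

```python
import numpy as np

def linear_resample(t, p, n=1000):
    """N-point linear resampling of an irregularly sampled 1-D signal.

    t: original sample times (increasing); p: sampled values.
    Returns n evenly spaced times over [t[0], t[-1]] and the
    linearly interpolated values Pl(ti') at those times.
    """
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    t_new = np.linspace(t[0], t[-1], n)   # ti' = t0 + i*(tn-1 - t0)/(n-1)
    # np.interp interpolates between the neighbours P(tj), P(tj+1)
    return t_new, np.interp(t_new, t, p)
```

In practice one such call would be made per channel of the 6-dimensional sample vector.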
Bessel resampling
The lines between the points are smoothed using Bézier curves and resampled. For a quadratic Bézier curve, the track is
B(t) = (1 - t)²P0 + 2t(1 - t)P1 + t²P2, t ∈ [0, 1],
where P0, P1, P2 are 3 adjacent data points and B(t) is any point between P0 and P2.
In the present invention, a short gesture is resampled at 1000 points; the data point Pb(ti') after Bézier resampling is obtained by evaluating the quadratic Bézier curve through P(tj), P(tj+1) and P(tj+2), where tj is the largest time point in the original time sequence that is less than ti', tj+1 is the smallest time point in the original time sequence that is greater than ti', and tj+2 is the next sampling time point after tj+1 in the original sampling time sequence.
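A minimal sketch of the Bézier resampling step follows. The 3-point window selection and the local parameter u are assumptions about details the text leaves implicit, and the function names are illustrative:

```python
import numpy as np

def quad_bezier(p0, p1, p2, u):
    """Quadratic Bezier curve B(u) = (1-u)^2 P0 + 2u(1-u) P1 + u^2 P2."""
    return (1 - u) ** 2 * p0 + 2 * u * (1 - u) * p1 + u ** 2 * p2

def bezier_resample(t, p, n=1000):
    """N-point Bezier resampling sketch: each new time ti' falls inside a
    window of three consecutive original samples, and the value is read
    off the quadratic Bezier through them, smoothing the polyline."""
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    t_new = np.linspace(t[0], t[-1], n)
    out = np.empty(n)
    for k, tk in enumerate(t_new):
        # index of the left end of the 3-point window containing tk
        j = min(np.searchsorted(t, tk, side="right") - 1, len(t) - 3)
        j = max(j, 0)
        u = (tk - t[j]) / (t[j + 2] - t[j])   # local parameter in [0, 1]
        out[k] = quad_bezier(p[j], p[j + 1], p[j + 2], u)
    return t_new, out
```

For collinear samples the Bézier curve degenerates to the straight line, so smooth input passes through unchanged.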
Weighted hybrid sampling
Since linear resampling averages the data between two points, it preserves the characteristics of the original data well, without information attenuation. However, data resampled up to 1000 points is highly redundant, its features are not obvious, and the effect of using an SVM on it directly is not particularly good. A Bézier curve can better restore smoothly changing data and reduces the sampling distortion of smooth gestures, but it loses a certain amount of information, and in particular restores abrupt changes poorly.
Thus, in the practical implementation of the present invention, two resampling techniques are weighted and mixed:
P(ti') = klPl(ti') + kbPb(ti') is obtained, where P(ti') is the data point sequence after the resampling processing and kl, kb are the weighting coefficients of the linear and Bézier resampling, respectively.
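The weighted mix of the two resampling techniques might look like the following sketch; the weights kl = kb = 0.5 are purely illustrative, since the patent does not fix their values:

```python
import numpy as np

def hybrid_resample(t, p, n=1000, kl=0.5, kb=0.5):
    """P(ti') = kl*Pl(ti') + kb*Pb(ti'): weighted mix of linear
    resampling and a 3-point-window quadratic Bezier resampling."""
    t, p = np.asarray(t, dtype=float), np.asarray(p, dtype=float)
    t_new = np.linspace(t[0], t[-1], n)
    pl = np.interp(t_new, t, p)                       # linear resampling
    pb = np.empty(n)                                  # Bezier resampling
    for k, tk in enumerate(t_new):
        j = max(min(np.searchsorted(t, tk, side="right") - 1, len(t) - 3), 0)
        u = (tk - t[j]) / (t[j + 2] - t[j])
        pb[k] = (1 - u)**2 * p[j] + 2*u*(1 - u) * p[j + 1] + u**2 * p[j + 2]
    return t_new, kl * pl + kb * pb
```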
And (3) an environment noise reduction stage:
short gesture trajectories are largely influenced by the environment in which the user is located. Moreover, because the gesture duration is short, the periodicity of the environment is not well reflected. Thus, in fact many prior art techniques employ dtw algorithms or moving average filtering that do not filter the ambient components well. Therefore, if the user is in a mobile state, the success rate of the identity authentication is greatly reduced.
In the invention, the environmental noise reduction is carried out by using coordinate transformation and noise periodicity characteristics as parameters, so that the success rate of identity authentication of a user in various environments is ensured.
Training of the CNN network: the rotation angle vector of the mobile phone is known, R = {rx, ry, rz}; from it the unit quaternion Q = {w, x, y, z} of the mobile phone orientation can be obtained, where x = rx, y = ry, z = rz and w = √(1 - rx² - ry² - rz²).
The rotation matrix M can be derived from the quaternion Q:
M = [[1 - 2(y² + z²), 2(xy - wz), 2(xz + wy)],
[2(xy + wz), 1 - 2(x² + z²), 2(yz - wx)],
[2(xz - wy), 2(yz + wx), 1 - 2(x² + y²)]]
Using the rotation matrix, the linear acceleration vector A can be converted from the mobile phone coordinate system to the world coordinate system:
A' = MA
Also, because the user may be in different orientations, the angle around the world coordinate system z-axis is disregarded and the acceleration vector A' is reduced from 3 dimensions to 2: A'' = {a0, a1}, where a0 is the magnitude of the horizontal component of A' and a1 is its vertical component.
can be represented by A' and rx,ryAnd carrying out scene recognition training on the CNN network. Meanwhile, a ″ in an environment steady state will be the environment noise N ═ N0.n1}。
The invention presets several environmental scenes, each with a corresponding preset noise curve function; for example, the following scenes in daily activities: 1. normal walking, 2. going downstairs, 3. going upstairs, 4. riding a bicycle, 5. running, 6. taking a vehicle (such as a subway). The scene classification set can be denoted as Cstage = {Lwalk, Ldownstairs, Lupstairs, Lbike, Lrun, Ltransportation}. The information characteristics of each type of scene are unique and significantly different from those of the other scenes. The processed time-domain sample data are converted into frequency-domain data by FFT. Every T points of the frequency-domain data are grouped into one scene feature sample, so the data format of one scene feature sample is T × the number of channels. In the environmental noise reduction process, environmental characteristic information is collected, the trained CNN network is adopted to identify and classify it, and once its category is obtained, the environmental noise is solved from the preset noise curve function corresponding to that category.
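Turning a time-domain noise sequence into frequency-domain scene-feature samples, as described above, could look like this sketch for a single channel (T = 64 is an illustrative choice; the patent leaves T unspecified):

```python
import numpy as np

def scene_feature_samples(signal, T=64):
    """FFT-based scene features: transform the time-domain noise sequence
    to the frequency domain, then chop the magnitude spectrum into
    feature samples of T points each (one row per sample)."""
    spectrum = np.abs(np.fft.rfft(np.asarray(signal, dtype=float)))
    n_samples = len(spectrum) // T
    return spectrum[: n_samples * T].reshape(n_samples, T)
```

For multi-channel data the same transform would be applied per channel, giving the T × channels sample format mentioned in the text.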
And (3) solving the environmental noise category: after the CNN classifier network is trained, when the trained CNN classifier network is needed to be used for identifying environmental noise, firstly, a background sensor collects the environment all the time, the environmental scene where the user gesture is made is considered to be consistent with the environmental scene where the user gesture is made in a short time before the user gesture is made, a noise data sequence N in the short time before the user gesture is made is classified by the trained CNN classifier network, the environmental scene category to which the noise data sequence N belongs is identified, a preset noise function corresponding to the environment scene to which the noise data sequence N belongs is called, and the noise data sequence N meets the function:
N_T(t) denotes the noise data at time t within one period, where the two components {n_0, n_1} of the ambient-noise vector at time t correspond to {a'_0, a'_1} in the dimension-reduced two-dimensional acceleration vector A'.

N_T(φ) denotes the noise data at phase φ within the period, where {n_0, n_1} are the two components of the ambient-noise vector at phase φ, again corresponding to {a'_0, a'_1} in the dimension-reduced two-dimensional acceleration vector A'.

The actual time point t_0 corresponding to the zero phase of the period is recorded.

Assuming that the user's behavior is already in a steady state, i.e. the noise remains well periodic, the period T and phase φ of the noise can be identified quickly. When processing the gesture data, only the time t' corresponding to each gesture data point needs to be substituted into the phase equation

    φ = (t' - t_0) mod T

so the noise at the moment the user gesture is made can be predicted as

    N(t') = N_T((t' - t_0) mod T).

That is, (t' - t_0) mod T gives the phase within the period corresponding to the current time t', and hence the matching time point (-t_0 + t') mod T within the recorded period.
Noise reduction: A'' - N = {a_0 - n_0, a_1 - n_1} expresses the dimension-reduced value of the denoised acceleration in the terrestrial coordinate system. This value must be transformed back to the handset coordinate system. Let C = {a_0 - n_0, 0, a_1 - n_1}; then, in the handset coordinate system:

    C' = M⁻¹ C
and obtaining the denoised 6-dimensional data { C ', R'.
Classification and identification stage:
After the preceding preprocessing and noise reduction, the data input to the classifier is a fixed-dimension gesture vector.
With a binary SVM classifier, a good result can be achieved even with extremely few positive training samples. The Gaussian (RBF) kernel is selected as the kernel function.
To effectively prevent illegal unlocking, negative samples must also be designed. One part of the negative samples is pre-made irregular gesture data; the other part is Bollinger-band data generated from the positive-sample data curves.
The Bollinger (Boll) band is a sequence-generation algorithm over the original time series that describes the upper and lower bound curves of the series well.
It is calculated by the formulas:

    Boll_up = MA + α · STD
    Boll_down = MA − α · STD

where Boll, MA, and STD are numerical sequences of equal length: MA is the moving average of the original sequence X over a preset window of length L(MA), STD is the moving standard deviation of X over a preset window of length L(STD), and α is a positive real number indicating the width of the interval. Bollinger bands are usually used to describe the upper and lower trend bounds of a time sequence; here they describe the acceptable bounds for distinguishing gestures.
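The band computation MA ± α·STD can be sketched directly. The window lengths and α below are illustrative defaults, not values from the patent:

```python
import numpy as np

def bollinger_bounds(x, l_ma=5, l_std=5, alpha=2.0):
    """Upper/lower Bollinger-style bounds of a 1-D sequence: a moving
    average over a window of length l_ma, plus/minus alpha times the
    moving standard deviation over a window of length l_std."""
    x = np.asarray(x, dtype=float)
    ma = np.array([x[max(0, i - l_ma + 1):i + 1].mean() for i in range(len(x))])
    std = np.array([x[max(0, i - l_std + 1):i + 1].std() for i in range(len(x))])
    return ma + alpha * std, ma - alpha * std   # (Boll_up, Boll_down)
```

Applied to a positive-sample gesture curve, the two returned sequences serve as synthetic near-boundary negative samples.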
The positive and negative samples, each with a corresponding label, form the training set with which the SVM classifier is trained. When the user attempts to unlock, the trained SVM classifier classifies the user's gesture data; an output of 1 indicates acceptance and an output of 0 indicates rejection.
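The accept/reject classifier described above can be sketched with scikit-learn, assuming it is available. The 8-dimensional toy vectors stand in for the fixed-dimension gesture vectors; all data here is synthetic for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
pos = rng.normal(0.0, 0.1, size=(10, 8))   # few positive (correct-gesture) samples
neg = rng.normal(1.0, 0.1, size=(40, 8))   # pre-made irregular + band-derived negatives
X = np.vstack([pos, neg])
y = np.array([1] * len(pos) + [0] * len(neg))  # 1 = accept, 0 = reject

clf = SVC(kernel="rbf")                    # Gaussian (RBF) kernel, as in the text
clf.fit(X, y)
accepted = clf.predict(pos[:1])[0]         # classify a gesture at unlock time
```

In practice only the `predict` call runs at unlock time; training happens once during enrollment.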
The gesture-feature-based identity authentication method provided by the invention resamples the acquired acceleration data of the user gesture with a hybrid sampling scheme combining linear resampling and Bézier resampling, which preserves the information content of the feature mapping and keeps the acceleration data as complete as possible.
The invention also provides an identity authentication system based on gesture features. Fig. 3 is a structural diagram of the gesture-feature-based identity authentication system in an embodiment of the invention; as shown in Fig. 3, the system includes:
the data acquisition module 301 is used for acquiring acceleration data of the user gesture;
the preprocessing module 302 is configured to resample the acceleration data to obtain processed acceleration data;
the environment denoising module 303 is configured to perform environment denoising on the processed acceleration data to obtain denoised acceleration data;
and the classification identification module 304 is configured to identify the correctness of the denoised acceleration data by using a support vector machine, and if the identification result is correct, the identity authentication is successful.
The preprocessing module 302 specifically includes:
and the data resampling unit is used for resampling the acceleration data in a hybrid sampling mode combining linear resampling and Bézier resampling.
The data resampling unit specifically includes:
a linear resampling subunit, configured to perform N-point linear resampling of the acquired acceleration data over time to obtain the linearly resampled acceleration data point sequence P_l(t'_i), where t'_i is the i-th resampling time point, t_0 is the first and t_(N-1) the last sampling moment of the linear resampling; the acquired acceleration data of the user gesture is recorded as the original samples, and P_l(t'_i) is obtained by linear interpolation between the original acceleration data acquired at time t_a and at time t_b, where t_a is the largest sampling time point in the original sampling time sequence that does not exceed t'_i, t_b is the smallest sampling time point that is greater than t'_i, and the original sampling time sequence is the sequence of sampling times used in the original sampling;
a Bézier resampling subunit, configured to perform N-point Bézier resampling of the acquired acceleration data over time to obtain the Bézier-resampled acceleration data point sequence P_b(t'_i), constructed from the original acceleration data around t'_i, where t_b is the sampling time point that follows t_a in the original sampling time sequence;
a resampling mixing subunit, configured to compute the resampled acceleration data point sequence according to the formula P(t'_i) = k_l·P_l(t'_i) + k_b·P_b(t'_i), where k_l is the weighting coefficient of the linear resampling and k_b is the weighting coefficient of the Bézier resampling.
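The mixing formula P(t'_i) = k_l·P_l(t'_i) + k_b·P_b(t'_i) can be sketched as below. The linear part is plain interpolation between bracketing original samples; the patent's exact Bézier construction is not recoverable from the garbled formula, so the quadratic Bézier with a tangent-based control point used here is an illustrative assumption, as are the function name and the default k_l = k_b = 0.5:

```python
import numpy as np

def hybrid_resample(ts, xs, new_ts, k_l=0.5, k_b=0.5):
    """Hybrid resampling P(t') = k_l * P_l(t') + k_b * P_b(t') of one
    acceleration channel xs sampled at times ts, evaluated at new_ts.
    Typically k_l + k_b = 1."""
    ts, xs = np.asarray(ts, float), np.asarray(xs, float)
    p_lin = np.interp(new_ts, ts, xs)              # P_l: linear resampling
    p_bez = np.empty(len(new_ts))
    for i, t in enumerate(new_ts):
        # bracketing original index, kept one step from the sequence ends
        k = int(np.clip(np.searchsorted(ts, t, side="right") - 1, 1, len(ts) - 2))
        u = (t - ts[k]) / (ts[k + 1] - ts[k])
        # quadratic Bezier: endpoints xs[k], xs[k+1]; control point on the
        # central-difference tangent (an assumed, not patented, choice)
        c = xs[k] + (xs[k + 1] - xs[k - 1]) / 4.0
        p_bez[i] = (1 - u) ** 2 * xs[k] + 2 * u * (1 - u) * c + u ** 2 * xs[k + 1]
    return k_l * p_lin + k_b * p_bez               # mixed sequence P(t')
```

On data that is already linear, both components reduce to linear interpolation, so the hybrid reproduces the input exactly.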
The environment denoising module 303 specifically includes:
the convolutional neural network training unit is used for training the convolutional neural network by adopting the characteristic information of the set environment scene to obtain a convolutional neural network model;
the gesture environment information acquisition unit is used for acquiring environment characteristic information of the user when the user gesture is made to obtain gesture environment information;
the environment scene determining unit is used for classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
the environment noise data determining unit is used for determining environment noise data when the user gesture is made according to the set environment scene to which the gesture environment information belongs;
the environment denoising unit is used for removing the environment noise data from the processed acceleration data to obtain the denoised acceleration data;
the ambient noise data determination unit specifically includes:
a preset noise prediction function obtaining subunit, configured to obtain a preset noise prediction function corresponding to the set environment scene to which the gesture environment information belongs;
and the environmental noise data determining subunit is used for substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
The classification identifying module 304 specifically includes:
the support vector machine training unit is used for training a support vector machine with positive samples and negative samples to obtain a trained support vector machine, wherein the positive samples are the acceleration data corresponding to the user's correct gesture, and the negative samples are the acceleration data corresponding to preset irregular gestures together with the Bollinger-band data generated from the positive-sample data curves;
and the classification and identification unit is used for classifying and identifying the denoised acceleration data with the trained support vector machine.
The gesture-feature-based identity authentication system resamples the acquired acceleration data of the user gesture with a hybrid sampling scheme combining linear resampling and Bézier resampling, which preserves the information content of the feature mapping and keeps the acceleration data as complete as possible.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. An identity authentication method based on gesture features, the method comprising:
acquiring acceleration data of a user gesture;
resampling the acceleration data to obtain the processed acceleration data;
carrying out environment denoising on the processed acceleration data to obtain denoised acceleration data;
identifying the correctness of the denoised acceleration data by adopting a support vector machine, and if the identification result is correct, successfully authenticating the identity;
wherein resampling the acceleration data to obtain the processed acceleration data specifically comprises: resampling the acceleration data in a hybrid sampling mode combining linear resampling and Bézier resampling.
2. The gesture-feature-based identity authentication method of claim 1, wherein resampling the acceleration data in a hybrid sampling mode combining linear resampling and Bézier resampling specifically comprises:
performing N-point linear resampling of the acquired acceleration data over time to obtain the linearly resampled acceleration data point sequence P_l(t'_i), where t'_i is the i-th resampling time point, t_0 is the first and t_(N-1) the last sampling moment of the linear resampling; the acquired acceleration data of the user gesture is recorded as the original samples, and P_l(t'_i) is obtained by linear interpolation between the original acceleration data acquired at time t_a and at time t_b, where t_a is the largest sampling time point in the original sampling time sequence that does not exceed t'_i, t_b is the smallest sampling time point that is greater than t'_i, and the original sampling time sequence is the sequence of sampling times used in the original sampling;
performing N-point Bézier resampling of the acquired acceleration data over time to obtain the Bézier-resampled acceleration data point sequence P_b(t'_i), constructed from the original acceleration data around t'_i, where t_b is the sampling time point that follows t_a in the original sampling time sequence;
computing the resampled acceleration data point sequence according to the formula P(t'_i) = k_l·P_l(t'_i) + k_b·P_b(t'_i), where k_l is the weighting coefficient of the linear resampling and k_b is the weighting coefficient of the Bézier resampling.
3. The identity authentication method based on the gesture features as claimed in claim 1, wherein the performing environment denoising on the processed acceleration data to obtain the denoised acceleration data specifically comprises:
training the convolutional neural network by adopting the characteristic information of the set environment scene to obtain a convolutional neural network model;
when the user gesture is made, collecting environmental characteristic information of the user to obtain gesture environmental information;
classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
determining environmental noise data when the user gesture is made according to the set environmental scene to which the gesture environmental information belongs;
and removing the environmental noise data from the processed acceleration data to obtain the de-noised acceleration data.
4. The identity authentication method based on the gesture features as claimed in claim 3, wherein the determining the environmental noise data when the user gesture is made according to the setting environmental scene to which the gesture environmental information belongs specifically includes:
acquiring a preset noise prediction function N (t) corresponding to the set environment scene to which the gesture environment information belongs;
and substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
5. The identity authentication method based on the gesture features as claimed in claim 1, wherein the identifying the correctness of the denoised acceleration data by using a support vector machine specifically comprises:
training a support vector machine with positive samples and negative samples to obtain a trained support vector machine, wherein the positive samples are the acceleration data corresponding to the user's correct gesture, and the negative samples are the acceleration data corresponding to preset irregular gestures together with the Bollinger-band data generated from the positive-sample data curves;
and classifying and identifying the denoised acceleration data with the trained support vector machine.
6. An identity authentication system based on gesture features, the system comprising:
the data acquisition module is used for acquiring the acceleration data of the user gesture;
the preprocessing module is used for resampling the acceleration data to obtain the processed acceleration data;
the environment denoising module is used for carrying out environment denoising on the processed acceleration data to obtain the denoised acceleration data;
the classification identification module is used for identifying the correctness of the denoised acceleration data by adopting a support vector machine, and if the identification result is correct, the identity authentication is successful;
wherein, the preprocessing module specifically comprises:
the data resampling unit is used for resampling the acceleration data in a hybrid sampling mode combining linear resampling and Bézier resampling;
the data resampling unit specifically includes:
a linear resampling subunit, configured to perform N-point linear resampling of the acquired acceleration data over time to obtain the linearly resampled acceleration data point sequence P_l(t'_i), where t'_i is the i-th resampling time point, t_0 is the first and t_(N-1) the last sampling moment of the linear resampling; the acquired acceleration data of the user gesture is recorded as the original samples, and P_l(t'_i) is obtained by linear interpolation between the original acceleration data acquired at time t_a and at time t_b, where t_a is the largest sampling time point in the original sampling time sequence that does not exceed t'_i, t_b is the smallest sampling time point that is greater than t'_i, and the original sampling time sequence is the sequence of sampling times used in the original sampling;
a Bézier resampling subunit, configured to perform N-point Bézier resampling of the acquired acceleration data over time to obtain the Bézier-resampled acceleration data point sequence P_b(t'_i), constructed from the original acceleration data around t'_i, where t_b is the sampling time point that follows t_a in the original sampling time sequence;
a resampling mixing subunit, configured to compute the resampled acceleration data point sequence according to the formula P(t'_i) = k_l·P_l(t'_i) + k_b·P_b(t'_i), where k_l is the weighting coefficient of the linear resampling and k_b is the weighting coefficient of the Bézier resampling.
7. The gesture-feature-based identity authentication system of claim 6, wherein the environment denoising module specifically comprises:
the convolutional neural network training unit is used for training the convolutional neural network by adopting the characteristic information of the set environment scene to obtain a convolutional neural network model;
the gesture environment information acquisition unit is used for acquiring environment characteristic information of the user when the user gesture is made to obtain gesture environment information;
the environment scene determining unit is used for classifying and identifying the gesture environment information by adopting the convolutional neural network model to obtain the set environment scene to which the gesture environment information belongs;
the environment noise data determining unit is used for determining environment noise data when the user gesture is made according to the set environment scene to which the gesture environment information belongs;
the environment denoising unit is used for removing the environment noise data from the processed acceleration data to obtain the denoised acceleration data;
the ambient noise data determination unit specifically includes:
a preset noise prediction function obtaining subunit, configured to obtain a preset noise prediction function corresponding to the set environment scene to which the gesture environment information belongs;
and the environmental noise data determining subunit is used for substituting the gesture making time of the user into the preset noise prediction function to obtain the environmental noise data of the gesture making time of the user.
8. The system according to claim 6, wherein the classification recognition module specifically includes:
the support vector machine training unit is used for training a support vector machine with positive samples and negative samples to obtain a trained support vector machine, wherein the positive samples are the acceleration data corresponding to the user's correct gesture, and the negative samples are the acceleration data corresponding to preset irregular gestures together with the Bollinger-band data generated from the positive-sample data curves;
and the classification and identification unit is used for classifying and identifying the denoised acceleration data with the trained support vector machine.
CN201710848687.8A 2017-09-20 2017-09-20 A kind of identity identifying method and system based on gesture feature Active CN107592422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710848687.8A CN107592422B (en) 2017-09-20 2017-09-20 A kind of identity identifying method and system based on gesture feature

Publications (2)

Publication Number Publication Date
CN107592422A CN107592422A (en) 2018-01-16
CN107592422B true CN107592422B (en) 2019-07-02

Family

ID=61048347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710848687.8A Active CN107592422B (en) 2017-09-20 2017-09-20 A kind of identity identifying method and system based on gesture feature

Country Status (1)

Country Link
CN (1) CN107592422B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108537014B (en) * 2018-04-04 2020-03-20 深圳大学 User identity authentication method and system based on mobile equipment
CN108965585B (en) * 2018-06-22 2021-01-26 成都博宇科技有限公司 User identity recognition method based on smart phone sensor
CN112067015B (en) * 2020-09-03 2022-11-22 青岛歌尔智能传感器有限公司 Step counting method and device based on convolutional neural network and readable storage medium
CN114935721B (en) * 2022-05-30 2023-03-24 深圳先进技术研究院 Lithium ion battery state-of-charge estimation method based on fiber bragg grating sensor

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103530543A (en) * 2013-10-30 2014-01-22 无锡赛思汇智科技有限公司 Behavior characteristic based user recognition method and system
CN105447506A (en) * 2015-11-05 2016-03-30 广东省自动化研究所 Gesture recognition method based on interval distribution probability characteristics
CN105530357A (en) * 2015-12-02 2016-04-27 武汉理工大学 Gesture identity authentication system and method based on sensor on mobile phone
CN106648068A (en) * 2016-11-11 2017-05-10 哈尔滨工业大学深圳研究生院 Method for recognizing three-dimensional dynamic gesture by two hands
CN107037878A (en) * 2016-12-14 2017-08-11 中国科学院沈阳自动化研究所 A kind of man-machine interaction method based on gesture

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
KR102433382B1 (en) * 2014-12-08 2022-08-16 로힛 세스 Wearable wireless hmi device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant