CN108536314A - User identity identification method and device - Google Patents


Info

Publication number
CN108536314A
Authority
CN
China
Prior art keywords
stroke
information
track
user
dimensional coordinate
Legal status
Pending
Application number
CN201710128556.2A
Other languages
Chinese (zh)
Inventor
张军平
黄晨宇
张黔
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710128556.2A priority Critical patent/CN108536314A/en
Priority to PCT/CN2018/078139 priority patent/WO2018161893A1/en
Publication of CN108536314A publication Critical patent/CN108536314A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30: Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31: User authentication
    • G06F 21/32: User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014: Hand-worn input/output arrangements, e.g. data gloves
    • G06F 3/03: Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033: Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/0346: Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a user identity identification method and device. The method includes: constructing a trajectory of a user gesture according to the rotation angular velocity measured in real time by a gyroscope in a terminal device, the trajectory being formed by the three-dimensional coordinate points of the user's arm at each moment; performing stroke segmentation on the trajectory and extracting feature information from each stroke segment; and performing user identity identification according to the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in at least one group of pre-stored feature information templates, each group of pre-stored feature information templates including the feature information of each stroke segment corresponding to at least one trajectory. The posture in which the user holds the terminal device is thus not restricted, which improves identification accuracy and user experience.

Description

User identity identification method and device
Technical Field
The present application relates to the field of communications technologies, and in particular, to a user identity identification method and apparatus.
Background
With the popularization of smart devices, most families have more and more smart devices, such as smart watches, smart bracelets, smart oximeters, and smart blood pressure meters. These devices sometimes need to record sensitive data (such as medical data or exercise data) that the user does not want others to see, so it is desirable for a smart device to be able to identify its current user. However, unlike a mobile phone, on which a user can enter a password, most smart devices have no touch screen; moreover, to reduce cost, most of them do not add an extra, expensive sensor (such as a fingerprint reader) for identification, and most have limited computing power.
In the related art, air signature (AirSig) is an identity authentication method based on gesture recognition. AirSig can be used to enter a password on smart devices that lack a touch screen, have no expensive sensors, and have limited computing resources: the user holds the smart device and makes an in-air gesture, and the smart device identifies the initiator of the gesture. AirSig uses the accelerometer and gyroscope installed in the smart device; when the user holds the device and makes an in-air gesture, the user's identity is recognized by comparing the real-time data output by the accelerometer and gyroscope with pre-stored feature information template data.
In the related art, because the real-time data output by the accelerometer and the gyroscope is compared directly with the pre-stored feature information template data, identification is accurate only if the posture in which the user holds the smart device when producing the real-time data matches the posture used when the pre-stored feature information template was recorded. The posture in which the user holds the smart device is therefore restricted.
Disclosure of Invention
The present application provides a user identity identification method and device, to solve the related-art problem that the posture in which a user holds a smart device is restricted.
In a first aspect, the present application provides a user identity identification method, including:
constructing a track of a user gesture according to a rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of the user arm at each moment; performing stroke segmentation processing on the track, and extracting characteristic information of each segment of strokes; and identifying the identity of the user according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in at least one group of pre-stored characteristic information templates, wherein each group of pre-stored characteristic information templates comprises the characteristic information of each stroke corresponding to at least one track.
A trajectory of the user gesture is constructed according to the rotation angular velocity measured in real time by the gyroscope in the terminal device, stroke segmentation is performed on the trajectory, feature information is extracted from each stroke segment, and finally user identity identification is performed according to the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in at least one group of pre-stored feature information templates. The user's identity can thus still be identified whatever posture the user and the terminal device are in, such as standing, sitting, or lying down, which improves identification accuracy and user experience.
In one possible design, the characteristic information includes shape information and velocity information, or the characteristic information includes length information, angle information, and velocity information, or the characteristic information includes shape information, velocity information, and acceleration information.
In one possible design, the constructing the trajectory of the user gesture according to the rotation angular velocity measured by the gyroscope in real time includes:
calculating a rotation matrix C_t of the attitude change of the terminal device from the previous moment to the current moment according to the rotation angular velocity at the current moment;
calculating a three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = C_t · P_{t-}, thereby obtaining a three-dimensional coordinate point of the user's arm at each moment;
wherein P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point of the user's starting gesture is the origin.
In one possible design, the performing stroke segmentation processing on the track includes:
determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke;
and carrying out size normalization and rotation normalization on each stroke.
In one possible design, the normalizing the size of each stroke includes:
dividing the size of each stroke by the length of the track;
the performing rotation normalization on each stroke includes:
and rotating the axis of each stroke segment to be parallel to the X axis of an initial absolute coordinate system, wherein the axis of a stroke segment is the line segment from its starting point to its end point, the three coordinate axes of the initial absolute coordinate system coincide with those of the terminal device coordinate system at the starting moment of the user's gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
In one possible design, the extracting feature information of each stroke includes:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
In one possible design, the extracting feature information of each stroke includes:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment;
calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain speed information of each stroke segment;
and calculating the acceleration information of each stroke according to the speed information of each stroke.
In one possible design, the performing stroke segmentation processing on the track includes:
performing rotation normalization and size normalization on the track;
and determining segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
In one possible design, the rotation normalizing the trajectory includes:
determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory;
finding the axis of minimum moment of inertia of the trajectory, and rotating the trajectory so that this axis is parallel to the Y axis of the Y-Z plane projection;
calculating the center of gravity of the trajectory, (ū, v̄) = (1/N) · Σ_{i=1…N} (u(i), v(i));
computing the covariance matrix of the trajectory, M = [[μ_20, μ_11], [μ_11, μ_02]], wherein μ_pq = Σ_{i=1…N} (u(i) − ū)^p · (v(i) − v̄)^q;
and multiplying all three-dimensional coordinate points constituting the trajectory by I, the rotation matrix obtained from this covariance analysis;
the size normalization of the track includes:
calculating the width W and height H of the trajectory, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
In one possible design, the extracting feature information of each stroke includes:
calculating the length of each stroke segment to obtain the length information of each stroke segment;
calculating the included angle between two consecutive stroke segments, namely the angle between their respective axes of minimum moment of inertia, to obtain the angle information between the two consecutive stroke segments;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
In a possible design, the performing user identification according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one group of pre-stored feature information templates includes:
for each group of pre-stored feature information templates, calculating the dynamic time warping (DTW) distance between the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in that group of pre-stored feature information templates;
and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value.
In one possible design, the preset threshold is the mean of the DTW distances between the feature information of each stroke segment corresponding to all trajectories in a group of pre-stored feature information templates, plus the standard deviation of those DTW distances.
In one possible design, each set of pre-stored characteristic information templates carries a user identifier.
In a second aspect, the present application provides a user identification apparatus, including:
the track building module is used for building a track of a user gesture according to a rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of each moment of an arm of the user; the stroke segmentation processing module is used for performing stroke segmentation processing on the track; the information extraction module is used for extracting the characteristic information of each stroke; and the identification module is used for carrying out user identity identification according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in at least one group of pre-stored characteristic information templates, and each group of pre-stored characteristic information templates comprises the characteristic information of each stroke corresponding to at least one track.
A trajectory of the user gesture is constructed according to the rotation angular velocity measured in real time by the gyroscope in the terminal device, stroke segmentation is performed on the trajectory, feature information is extracted from each stroke segment, and finally user identity identification is performed according to the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in at least one group of pre-stored feature information templates. The user's identity can thus still be identified whatever posture the user and the terminal device are in, such as standing, sitting, or lying down, which improves identification accuracy and user experience.
In one possible design, the characteristic information includes shape information and velocity information, or the characteristic information includes length information, angle information, and velocity information, or the characteristic information includes shape information, velocity information, and acceleration information.
In one possible design, the trajectory construction module is specifically configured to:
calculating a rotation matrix C_t of the attitude change of the terminal device from the previous moment to the current moment according to the rotation angular velocity at the current moment;
calculating a three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = C_t · P_{t-}, thereby obtaining a three-dimensional coordinate point of the user's arm at each moment;
wherein P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point of the user's starting gesture is the origin.
In one possible design, the stroke segmentation processing module includes:
the first determining unit is used for determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke;
and the first normalization unit is used for carrying out size normalization and rotation normalization on each stroke.
In a possible design, the first normalization unit is specifically configured to:
dividing the size of each stroke by the length of the track;
and rotating the axis of each stroke segment to be parallel to the X axis of an initial absolute coordinate system, wherein the axis of a stroke segment is the line segment from its starting point to its end point, the three coordinate axes of the initial absolute coordinate system coincide with those of the terminal device coordinate system at the starting moment of the user's gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
In one possible design, the information extraction module is specifically configured to:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
In one possible design, the information extraction module is specifically configured to:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment;
calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain speed information of each stroke segment;
and calculating the acceleration information of each stroke according to the speed information of each stroke.
In one possible design, the stroke segmentation processing module includes:
the second normalization unit is used for carrying out rotation normalization and size normalization on the track;
and the second determining unit is used for determining the segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
In a possible design, the second normalization unit is specifically configured to:
determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory;
finding the axis of minimum moment of inertia of the trajectory, and rotating the trajectory so that this axis is parallel to the Y axis of the Y-Z plane projection;
calculating the center of gravity of the trajectory, (ū, v̄) = (1/N) · Σ_{i=1…N} (u(i), v(i));
computing the covariance matrix of the trajectory, M = [[μ_20, μ_11], [μ_11, μ_02]], wherein μ_pq = Σ_{i=1…N} (u(i) − ū)^p · (v(i) − v̄)^q;
multiplying all three-dimensional coordinate points constituting the trajectory by I, the rotation matrix obtained from this covariance analysis;
calculating the width W and height H of the trajectory, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
In one possible design, the information extraction module is specifically configured to:
calculating the length of each stroke segment to obtain the length information of each stroke segment;
calculating the included angle between two consecutive stroke segments, namely the angle between their respective axes of minimum moment of inertia, to obtain the angle information between the two consecutive stroke segments;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
In one possible design, the identification module is specifically configured to:
for each group of pre-stored feature information templates, calculating the dynamic time warping (DTW) distance between the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in that group of pre-stored feature information templates;
and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value.
In one possible design, the preset threshold is the mean of the DTW distances between the feature information of each stroke segment corresponding to all trajectories in a group of pre-stored feature information templates, plus the standard deviation of those DTW distances.
In one possible design, each set of pre-stored characteristic information templates carries a user identifier.
Drawings
FIG. 1 is a flowchart of a first embodiment of a user identification method according to the present application;
FIG. 2 is a schematic diagram of a training process;
FIG. 3 is a schematic diagram of an authentication process;
FIG. 4 is a flowchart of a second embodiment of a user identification method of the present application;
FIG. 5 is a flowchart of a third embodiment of a user identification method of the present application;
FIG. 6 is a flowchart of a fourth embodiment of a user identification method of the present application;
FIG. 7 is a schematic structural diagram of a first embodiment of a user identification apparatus according to the present application;
FIG. 8 is a schematic structural diagram of a second embodiment of a user identification device according to the present application;
fig. 9 is a schematic structural diagram of a third embodiment of a user identification device according to the present application.
Detailed Description
The user identity identification method and device of the present application are applicable to various terminal devices, such as mobile phones, smart watches, and smart blood pressure meters, for identifying a user's identity. The terminal device does not need a touch screen: the user holds the terminal device and makes an in-air gesture, and the terminal device identifies the initiator of the in-air gesture. The user's identity can still be identified whatever posture the user and the terminal device are in, such as standing, sitting, or lying down. The technical solution of the present application is described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a first embodiment of a user identity identification method according to the present application, and as shown in fig. 1, the method according to the present embodiment may include:
s101, constructing a gesture track of the user according to a rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the gesture track is formed by three-dimensional coordinate points of the arm of the user at each moment.
Specifically, a gyroscope sensor is installed in the terminal device. After wearing the terminal device on the wrist, the user can write in the air with the hand, or with a terminal device equipped with a gyroscope, using the hand trajectory as input to the terminal device, and the gyroscope acquires the hand trajectory. Even when making the same gesture, different users produce different gesture trajectories.
To construct the trajectory of a user gesture, two coordinate systems are designed: a device coordinate system and an initial absolute coordinate system. The device coordinate system is the coordinate system of the terminal device and is defined by the terminal device. The initial absolute coordinate system has the same three coordinate axes as the device coordinate system at the starting moment of the user's gesture, and its origin is the three-dimensional coordinate point of the position of the user's elbow. To construct the trajectory of the user's gesture, the human arm must first be modeled. The arm is divided into an upper arm connecting the elbow and the shoulder, and a forearm connecting the wrist and the elbow. When the user writes in the air, the elbow does not move much, and the displacement mainly comes from the motion of the wrist relative to the elbow; the process can therefore be regarded as a rigid body (the forearm) rotating around the elbow (the fulcrum). The gyroscope records the angular velocity of this rotation, so the motion trajectory can be calculated as long as the length of the rigid body is known.
The track of the user gesture is constructed according to the rotation angular velocity measured by the gyroscope in real time, and specifically, the track can be as follows:
calculating a rotation matrix C_t of the attitude change of the terminal device from the previous moment to the current moment according to the rotation angular velocity at the current moment, and calculating the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = C_t · P_{t-}, thereby obtaining a three-dimensional coordinate point of the user's arm at each moment, where P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment and the three-dimensional coordinate point of the user's starting gesture is the origin.
On the premise that the user's elbow does not move much during the gesture, the user's forearm can be regarded as a rigid body rotating around the elbow, i.e., a line segment rotating around the origin of the initial absolute coordinate system. The gyroscope provides the angular velocity of the device's current rotation, from which the rotation matrix of the device attitude change from the previous moment to the current moment can be calculated; this rotation matrix also describes the attitude change of the user's arm. The rotation matrix C_t at the current moment is:

C_t = C_{t-} · (I + Ω_t · Δt), with Ω_t = [[0, −ω_z, ω_y], [ω_z, 0, −ω_x], [−ω_y, ω_x, 0]]

where (ω_x, ω_y, ω_z) is the rotation angular velocity at the current moment t obtained from the gyroscope, C_{t-} is the rotation matrix at the previous moment t-, I is the identity matrix, and Δt is the interval between the two moments.
Obtaining the three-dimensional coordinate point of the user's arm at each moment yields the trajectory of the user's gesture.
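For illustration only, the following Python sketch shows one way such a trajectory could be accumulated from gyroscope samples. The sampling period, the forearm length, the starting wrist position on the X axis, and all function names are assumptions for this example, not part of the present application; the first-order attitude update is a simplification of the rotation-matrix formula above.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of an angular velocity vector (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def build_trajectory(gyro_samples, dt=0.01, forearm_length=0.3):
    """Integrate gyroscope angular velocities into wrist positions.

    gyro_samples: (T, 3) array of angular velocities in rad/s.
    The elbow is taken as the origin of the initial absolute coordinate
    system, and the wrist is assumed to start on the X axis at a distance
    equal to the forearm length (a simplification for this sketch).
    """
    p = np.array([forearm_length, 0.0, 0.0])
    points = [p.copy()]
    for w in gyro_samples:
        # Simplified first-order attitude-change update over one sample,
        # then P_t = C_t @ P_{t-} as in the formula above.
        C_t = np.eye(3) + skew(w) * dt
        p = C_t @ p
        points.append(p.copy())
    return np.array(points)  # three-dimensional coordinate point per moment
```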
S102, stroke segmentation processing is carried out on the track, and characteristic information extraction is carried out on each segment of strokes.
Specifically, after the trajectory of the user's gesture is obtained, it is segmented. According to kinematics, when a person writes in the air, the brain plans the whole trajectory as a sequence of segments, each of which approximates an arc; the trajectory therefore needs to be segmented, with the principle that each stroke segment approximates an arc. Distinguishing different people requires extracting the features hidden in each stroke segment. For a trajectory, the segmentation points are the joints between two stroke segments, i.e., the turning points between two strokes: according to the shape characteristics of the trajectory, the turning point of each stroke is found and the strokes are segmented. After stroke segmentation, considering that the size and speed of the gesture are not exactly the same each time the user makes it, this embodiment performs normalization on each trajectory and then extracts the personal features of each stroke segment.
As an implementable manner, the stroke segmentation processing is performed on the trajectory, which may specifically be:
and determining segmentation points in the track according to the three-dimensional curvature to obtain at least one stroke, and then carrying out size normalization and rotation normalization on each stroke.
In this embodiment, because the curvature at a stroke segmentation point is much larger than the curvature of the surrounding points, a threshold is set to determine whether a point is a segmentation point. The size normalization of each stroke segment may specifically be: dividing the size of each stroke segment by the length of the trajectory, so that the trajectory length of the whole gesture becomes unit 1. The rotation normalization of each stroke segment may specifically be: rotating the axis of each stroke segment to be parallel to the X axis of the initial absolute coordinate system, where the axis of a stroke segment is the line segment from its starting point to its end point, the initial absolute coordinate system has the same three coordinate axes as the terminal device coordinate system at the starting moment of the user's gesture, and its origin is the three-dimensional coordinate point of the position of the user's elbow.
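As a rough sketch of this segmentation step, discrete three-dimensional curvature can be estimated by finite differences and thresholded. The curvature formula, the threshold, and the minimum segment length below are illustrative choices, not values prescribed by the present application.

```python
import numpy as np

def curvature_3d(points, dt):
    """Discrete curvature kappa = |v x a| / |v|^3 via finite differences."""
    v = np.gradient(points, dt, axis=0)   # first derivative (velocity)
    a = np.gradient(v, dt, axis=0)        # second derivative (acceleration)
    num = np.linalg.norm(np.cross(v, a), axis=1)
    den = np.linalg.norm(v, axis=1) ** 3 + 1e-12
    return num / den

def segment_strokes(points, dt, threshold, min_len=5):
    """Cut the trajectory at local curvature maxima above the threshold."""
    k = curvature_3d(points, dt)
    cuts = [i for i in range(1, len(points) - 1)
            if k[i] > threshold and k[i] >= k[i - 1] and k[i] >= k[i + 1]]
    strokes, start = [], 0
    for c in cuts:
        if c - start >= min_len:          # skip spuriously short segments
            strokes.append(points[start:c + 1])
            start = c
    strokes.append(points[start:])
    return strokes
```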
The characteristic information comprises shape information and speed information, or the characteristic information comprises length information, angle information and speed information, or the characteristic information comprises shape information, speed information and acceleration information.
1. When the feature information includes shape information and speed information, the feature information extraction for each stroke segment may specifically be:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment; and calculating the first derivative with respect to time of the three-dimensional positions at equal distance intervals in each stroke segment to obtain the speed information of each stroke segment. Here P_t is the current position, P_{t-} is the position at the previous moment, t is the current moment, and t- is the previous moment; the speed information v_t is calculated as:

v_t = (P_t − P_{t-}) / (t − t-)
2. When the feature information includes shape information, speed information, and acceleration information, the feature information extraction for each stroke segment may specifically be:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment; calculating the first derivative with respect to time of the three-dimensional positions at equal distance intervals in each stroke segment to obtain the speed information of each stroke segment; and calculating the acceleration information of each stroke segment from its speed information. Here v_t is the speed at the current moment and v_{t-} is the speed at the previous moment; the acceleration information a_t is calculated as:

a_t = (v_t − v_{t-}) / (t − t-)
3. When the feature information includes length information, angle information, and speed information, the feature information extraction for each stroke segment may specifically be:
calculating the length of each stroke segment to obtain the length information of each stroke segment; calculating the included angle between two consecutive stroke segments, namely the angle between their respective axes of minimum moment of inertia, to obtain the angle information between the two consecutive stroke segments; and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
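A minimal Python sketch of these kinds of feature information on one stroke segment follows, assuming the trajectory and strokes from the sketches above. The resampling count and the use of a covariance eigenvector for the minimum-inertia axis are assumptions for this example.

```python
import numpy as np

def stroke_features(stroke, dt, n=32):
    """Shape, speed, acceleration, and length features of one stroke."""
    idx = np.linspace(0, len(stroke) - 1, n).round().astype(int)
    shape = stroke[idx]                      # coordinates at equal intervals
    speed = np.diff(stroke, axis=0) / dt     # v_t = (P_t - P_{t-}) / (t - t-)
    accel = np.diff(speed, axis=0) / dt      # a_t = (v_t - v_{t-}) / (t - t-)
    length = np.linalg.norm(np.diff(stroke, axis=0), axis=1).sum()
    return shape, speed, accel, length

def min_inertia_axis(stroke):
    """Axis of minimum moment of inertia: the direction of maximum variance
    of the centered points (eigenvector of the largest eigenvalue)."""
    c = stroke - stroke.mean(axis=0)
    _, vecs = np.linalg.eigh(c.T @ c)
    return vecs[:, -1]

def angle_between_strokes(s1, s2):
    """Included angle between the minimum-inertia axes of two strokes."""
    a, b = min_inertia_axis(s1), min_inertia_axis(s2)
    cos = abs(float(np.dot(a, b)))
    return float(np.arccos(np.clip(cos, 0.0, 1.0)))
```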
As another implementable manner, the stroke segmentation processing may be performed on the trajectory, and specifically, the stroke segmentation processing may be:
carrying out rotation normalization and size normalization on the track; and determining segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
The trajectory is subjected to rotation normalization, which specifically may be:
First, a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, is determined from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory. Because the plane in which a person writes in the air is parallel to the plane of the person's body, i.e., the Y-Z plane of the initial absolute coordinate system, the three-dimensional coordinate points are projected onto the Y-Z two-dimensional coordinate plane.
Then, the axis of minimum moment of inertia of the trajectory is found, and the trajectory is rotated so that this axis is parallel to the Y axis of the Y-Z plane projection.
The center of gravity of the trajectory is calculated as (ū, v̄) = (1/N) · Σ_{i=1…N} (u(i), v(i)).
The covariance matrix of the trajectory is computed as M = [[μ_20, μ_11], [μ_11, μ_02]], where μ_pq = Σ_{i=1…N} (u(i) − ū)^p · (v(i) − v̄)^q.
Finally, all three-dimensional coordinate points constituting the trajectory are multiplied by I, the rotation matrix obtained from this covariance analysis.
The track is subjected to size normalization, which specifically may be:
The width W and height H of the trajectory are calculated, u(i) in the two-dimensional coordinate sequence is divided by W, and v(i) in the two-dimensional coordinate sequence is divided by H.
On the normalized track, strokes are divided according to curvature, and for each stroke, the characteristic information of the stroke is extracted.
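A sketch of this two-dimensional rotation and size normalization follows. How the rotation matrix is derived from the covariance matrix is an assumption here (principal-axis alignment), since the application itself only states that the points are multiplied by the resulting matrix.

```python
import numpy as np

def normalize_trajectory_2d(points):
    """Project onto the Y-Z plane, align the minimum-inertia axis with Y,
    then divide u by the width W and v by the height H."""
    uv = points[:, 1:3].astype(float)          # [u(i), v(i)] from (Y, Z)
    g = uv.mean(axis=0)                        # center of gravity (u_bar, v_bar)
    c = uv - g
    cov = c.T @ c                              # [[mu20, mu11], [mu11, mu02]]
    _, vecs = np.linalg.eigh(cov)
    ax = vecs[:, -1]                           # minimum-inertia axis direction
    ang = np.arctan2(ax[1], ax[0])             # its angle to the u (Y) axis
    R = np.array([[np.cos(-ang), -np.sin(-ang)],
                  [np.sin(-ang),  np.cos(-ang)]])
    uv = c @ R.T + g                           # rotate every point
    W = uv[:, 0].max() - uv[:, 0].min() + 1e-12   # width of the trajectory
    H = uv[:, 1].max() - uv[:, 1].min() + 1e-12   # height of the trajectory
    return np.column_stack([uv[:, 0] / W, uv[:, 1] / H])
```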
S103, identifying the identity of the user according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in at least one group of pre-stored characteristic information templates, wherein each group of pre-stored characteristic information templates comprises the characteristic information of each stroke corresponding to at least one track.
Specifically, a pre-stored feature information template includes the feature information of each stroke segment corresponding to at least one trajectory. After the feature information of the trajectory of the user gesture is obtained, user identity identification is performed. If the terminal device has not previously been trained on the user's gestures, it prompts the user to perform multiple training rounds (for example, 5 to 10); while in the training state, the terminal device extracts the feature information corresponding to each training gesture trajectory according to the feature extraction process above, and stores the feature information of these gesture trajectories as the pre-stored feature information template. When training is complete, the device can enter the authentication state. One group of pre-stored feature information templates corresponds to one user; when the device is used by multiple users, multiple groups of pre-stored feature information templates can be stored, and each group can carry a user identifier to distinguish different users.
The user identity recognition is performed according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one group of pre-stored feature information templates, and specifically, the user identity recognition may be:
for each group of pre-stored characteristic information templates, calculating the Dynamic Time Warping (DTW) distance between the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in the group of pre-stored characteristic information templates, and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value.
Taking feature information consisting of shape information and speed information as an example: after the feature information of each stroke segment of the gesture trajectory is obtained, the DTW distances of the shape information and of the speed information between each stroke segment and the template are calculated separately. Because there are multiple stroke segments, the DTW distances of corresponding stroke segments in the gesture trajectory and the template are calculated for each kind of feature information, and the DTW distances of all stroke segments are added to give the DTW distance of one feature between the two trajectories. Note that the two kinds of feature information, speed and shape, have different physical units, so when the overall DTW distance is calculated, the two DTW distances are each normalized to the range 0-1 before being added. After the DTW distances between all gesture trajectories in a group of templates and the gesture trajectory to be recognized are obtained, each gesture trajectory template judges whether to accept the current user according to a preset threshold, and the user is accepted when more than half of the templates are within the preset threshold of the trajectory to be authenticated. The preset threshold is set, for example, to the mean of the DTW distances between the feature information of each stroke segment corresponding to all trajectories in the group of pre-stored feature information templates, plus their standard deviation.
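The matching step might look like the following sketch: a textbook DTW over per-stroke feature sequences, summed across strokes and compared against the threshold by majority vote. The 0-1 normalization of the two feature distances is omitted for brevity, and all names are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (one row per time step), with Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def trajectory_distance(sample_strokes, template_strokes):
    """Sum the DTW distances of corresponding strokes of two trajectories."""
    return sum(dtw_distance(s, t)
               for s, t in zip(sample_strokes, template_strokes))

def accept_user(sample_strokes, template_trajectories, threshold):
    """Accept when more than half of the templates fall within the threshold."""
    hits = sum(trajectory_distance(sample_strokes, tpl) < threshold
               for tpl in template_trajectories)
    return hits > len(template_trajectories) / 2
```

Under the same assumptions, the preset threshold could be derived during training as the mean plus the standard deviation of the pairwise trajectory distances among the stored templates.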
Taking feature information consisting of shape information, speed information, and acceleration information as an example: after the feature information of each stroke segment of the gesture trajectory is obtained, the DTW distances between the three kinds of feature information of each stroke segment and the corresponding feature information in the template are calculated, these distances are used as the distance in a k-nearest-neighbor (KNN) classifier, and the current user is determined by the KNN algorithm with k = 3.
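For the multi-user variant, a sketch of the KNN decision over the same DTW distances follows, reusing trajectory_distance from the sketch above; the tie-breaking rule is an illustrative choice.

```python
def knn_identify(sample_strokes, user_templates, k=3):
    """user_templates: dict mapping a user identifier to that user's list of
    template trajectories. Returns the majority user among the k nearest."""
    scored = sorted(
        (trajectory_distance(sample_strokes, tpl), uid)
        for uid, trajectories in user_templates.items()
        for tpl in trajectories)
    top = [uid for _, uid in scored[:k]]
    return max(set(top), key=top.count)   # majority vote among k nearest
```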
In this embodiment, the user can start the gesture recognition process of the terminal device actively or passively. After the terminal device prompts that the recognition process has started, the user can begin making a gesture, and the terminal device starts recording the gyroscope data during the gesture. The user can customize the style of the gesture, such as a five-pointed star or Chinese characters. After finishing the gesture, the user can end recognition actively or passively. When the device detects that the gesture has ended, it starts trajectory construction and the extraction and recognition of the user's personal feature information. If the user exists in the database, the identified user is shown on the display screen and the user is accepted to log in to the terminal device; if the user does not exist in the database and cannot be recognized, login is prohibited.
According to the user identity identification method provided by this embodiment, a trajectory of the user gesture is constructed according to the rotation angular velocity measured in real time by the gyroscope in the terminal device, stroke segmentation is performed on the trajectory, feature information is extracted from each stroke segment, and finally user identity identification is performed according to the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to a trajectory in at least one group of pre-stored feature information templates. The user's identity can thus still be identified whatever posture the user and the terminal device are in, such as standing, sitting, or lying down, which improves identification accuracy and user experience.
The technical solution of the embodiment of the method shown in fig. 1 will be described in detail below by using several specific examples.
User identity recognition is divided into two processes: a training process, which stores the feature information of the user's gesture trajectories as verification templates, and a recognition process, which is started for a received gesture. Fig. 2 is a schematic diagram of the training process, as shown in fig. 2, including:
s201, after the gesture of the user is determined to be finished, the rotation angular velocity measured by the gyroscope is obtained.
S202, constructing a track of the user gesture according to the rotation angular velocity measured by the gyroscope, carrying out stroke segmentation processing on the track, and extracting characteristic information of each segment of the stroke.
And S203, storing the extracted characteristic information.
Generally, the terminal device prompts the user to perform multiple training rounds (for example, 5 to 10), and stores the feature information of these gesture trajectories as the pre-stored feature information template.
Fig. 3 is a schematic diagram of an authentication identification process, as shown in fig. 3, including:
S301, detecting whether the user triggers the authentication process. If yes, go to step S302.
And S302, after the gesture of the user is determined to be finished, acquiring the rotation angular velocity measured by the gyroscope.
S303, constructing a track of the user gesture according to the rotation angular velocity measured by the gyroscope, carrying out stroke segmentation processing on the track, and extracting characteristic information of each segment of the stroke.
S304, carrying out user identity recognition according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in the pre-stored characteristic information template, determining whether the extracted characteristic information is matched with the pre-stored characteristic information template, if so, executing S305, and otherwise, ending.
S305, authenticating the user successfully and executing a corresponding command.
The user identification method of the present application is described in detail below in three different usage scenarios.
Firstly, a single user identification scenario is presented, fig. 4 is a flowchart of a second embodiment of the user identification method of the present application, and with reference to fig. 4, the method of the present embodiment may include:
s401, constructing a track of the gesture of the user according to the rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of the arm of the user at each moment.
Specifically, a rotation matrix C_t of the attitude change of the terminal device from the previous moment to the current moment is calculated according to the rotation angular velocity at the current moment, and the three-dimensional coordinate point P_t of the user's arm at the current moment is calculated according to the formula P_t = C_t · P_{t-}, thereby obtaining a three-dimensional coordinate point of the user's arm at each moment, where P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment and the three-dimensional coordinate point of the user's starting gesture is the origin.
On the premise that the user's elbow does not move much during the gesture, the user's forearm can be regarded as a rigid body rotating around the elbow, i.e., a line segment rotating around the origin of the initial absolute coordinate system. The gyroscope provides the angular velocity of the device's current rotation, from which the rotation matrix of the device attitude change from the previous moment to the current moment can be calculated; this rotation matrix also describes the attitude change of the user's arm. The rotation matrix C_t at the current moment is:

C_t = C_{t-} · (I + Ω_t · Δt), with Ω_t = [[0, −ω_z, ω_y], [ω_z, 0, −ω_x], [−ω_y, ω_x, 0]]

where (ω_x, ω_y, ω_z) is the rotation angular velocity at the current moment t obtained from the gyroscope, C_{t-} is the rotation matrix at the previous moment t-, I is the identity matrix, and Δt is the interval between the two moments.
Obtaining the three-dimensional coordinate point of the user's arm at each moment yields the trajectory of the user's gesture.
S402, determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke.
And S403, carrying out size normalization and rotation normalization on each stroke.
Specifically, the size of each stroke segment is divided by the length of the trajectory, so that the trajectory length of the entire gesture becomes unit 1. The axis of each stroke segment is rotated to be parallel to the X axis of the initial absolute coordinate system, where the axis of a stroke segment is the line segment from its starting point to its end point.
S404, extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment; and calculating the first derivative with respect to time of the three-dimensional positions at equal distance intervals in each stroke segment to obtain the speed information of each stroke segment. Here P_t is the current position, P_{t-} is the position at the previous moment, t is the current moment, and t- is the previous moment; the speed information v_t is calculated as:

v_t = (P_t − P_{t-}) / (t − t-)
s405, identifying the identity of the user according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to each track in the pre-stored characteristic information template.
Specifically, the DTW distances of the shape information and of the speed information between each stroke segment and the template are calculated separately. Because there are multiple stroke segments, the DTW distances of corresponding stroke segments in the gesture trajectory and the template are calculated for each kind of feature information, and the DTW distances of all stroke segments are added to give the DTW distance of one feature between the two trajectories. Note that the two kinds of feature information, speed and shape, have different physical units, so when the overall DTW distance is calculated, the two DTW distances are each normalized to the range 0-1 before being added. After the DTW distances between all gesture trajectories in a group of templates and the gesture trajectory to be recognized are obtained, each gesture trajectory template judges whether to accept the current user according to a preset threshold, and the user is accepted when more than half of the templates are within the preset threshold of the trajectory to be authenticated. The preset threshold is set, for example, to the mean of the DTW distances between the feature information of each stroke segment corresponding to all trajectories in the group of pre-stored feature information templates, plus their standard deviation.
This is followed by a multi-user identification scenario, i.e., the case of identifying which user is present when the same device can be used by multiple users. Fig. 5 is a flowchart of a third embodiment of the user identity identification method of the present application, and with reference to fig. 5, the method of this embodiment may include:
s501, constructing a gesture track of the user according to the rotation angular velocity measured by the gyroscope in the terminal equipment in real time, wherein the gesture track is formed by three-dimensional coordinate points of the arm of the user at each moment.
The specific process is the same as S401, and is not described here again.
And S502, carrying out rotation normalization and size normalization on the track.
The trajectory is subjected to rotation normalization, which specifically may be:
First, a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, is determined from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory. Because the plane in which a person writes in the air is parallel to the plane of the person's body, i.e., the Y-Z plane of the initial absolute coordinate system, the three-dimensional coordinate points are projected onto the Y-Z two-dimensional coordinate plane.
Then, the axis of minimum moment of inertia of the trajectory is found, and the trajectory is rotated so that this axis is parallel to the Y axis of the Y-Z plane projection.
The center of gravity of the trajectory is calculated as (ū, v̄) = (1/N) · Σ_{i=1…N} (u(i), v(i)).
The covariance matrix of the trajectory is computed as M = [[μ_20, μ_11], [μ_11, μ_02]], where μ_pq = Σ_{i=1…N} (u(i) − ū)^p · (v(i) − v̄)^q.
Finally, all three-dimensional coordinate points constituting the trajectory are multiplied by I, the rotation matrix obtained from this covariance analysis.
The track is subjected to size normalization, which specifically may be:
The width W and height H of the trajectory are calculated, u(i) in the two-dimensional coordinate sequence is divided by W, and v(i) in the two-dimensional coordinate sequence is divided by H.
S503, determining segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
S504, calculating the length of each stroke segment to obtain the length information of each stroke segment; calculating the included angle between two consecutive stroke segments, namely the angle between their respective axes of minimum moment of inertia, to obtain the angle information between the two consecutive stroke segments; and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
And S505, for each group among the multiple groups of pre-stored feature information templates, performing user identity identification according to the extracted feature information of each stroke segment and the feature information of each stroke segment corresponding to each trajectory in that group of pre-stored feature information templates.
Specifically, the DTW distances to the templates corresponding to each user are calculated from the length information, the angle information, and the speed information, and majority voting over the templates corresponding to each user finally determines whether the current user is one of the owners and, if so, which one. In this embodiment, each group of pre-stored feature information templates may carry a user identifier for distinguishing different users.
Next is a multi-user scenario on a shared terminal device. This is a weak-security scenario whose aim is to distinguish different users by their gesture trajectories, for example the users of a family's shared blood pressure meter: when a user uses the blood pressure meter, the device determines who the current user is and retrieves that user's history. Fig. 6 is a flowchart of a fourth embodiment of the user identity identification method of the present application, and with reference to fig. 6, the method of this embodiment may include:
s601, constructing a track of the gesture of the user according to the rotation angular velocity measured by the gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of the arm of the user at each moment.
The specific process is the same as S401, and is not described here again.
S602, determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke.
And S603, carrying out size normalization and rotation normalization on each segment of stroke.
Specifically, the size of each stroke segment is divided by the length of the trajectory, so that the trajectory length of the entire gesture becomes unit 1. The axis of each stroke segment is rotated to be parallel to the X axis of the initial absolute coordinate system, where the axis of a stroke segment is the line segment from its starting point to its end point.
S604, extracting a three-dimensional coordinate sequence at equal time intervals in each stroke segment to obtain the shape information of each stroke segment; and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment. Let P_t be the current position, P_(t-) the position at the previous time, t the current time, and t- the previous time; the speed information v_t is then calculated as v_t = (P_t - P_(t-)) / (t - t-).
and calculating the acceleration information of each stroke segment according to its speed information. Let v_t be the speed at the current moment and v_(t-) the speed at the previous moment; the acceleration information a_t is then calculated as a_t = (v_t - v_(t-)) / (t - t-).
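Both formulas are plain finite differences, as the short sketch below illustrates; ordered timestamps and numpy arrays are assumptions of the sketch.

```python
import numpy as np

def speed_and_acceleration(p, t):
    """Finite-difference speed v_t = (P_t - P_(t-)) / (t - t-) and
    acceleration a_t = (v_t - v_(t-)) / (t - t-), for positions
    p: (N, 3) sampled at times t: (N,)."""
    dt = np.diff(t)[:, None]
    v = np.diff(p, axis=0) / dt        # (N-1, 3) velocity samples
    a = np.diff(v, axis=0) / dt[1:]    # (N-2, 3) acceleration samples
    return v, a
```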
S605, for each group of templates in the plurality of groups of pre-stored characteristic information templates, carrying out user identity identification according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to each track in that group of pre-stored characteristic information templates.
Specifically, the DTW distance between the feature information of each stroke in each group of pre-stored feature information templates and the feature information of the corresponding stroke of the gesture to be recognized is calculated; this distance is used as the distance metric in a KNN classification algorithm, and the current user is determined by the KNN algorithm with k = 3.
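Combining the two, a sketch of the KNN decision with k = 3 might read as follows; it reuses the dtw_distance sketch above, and the (user_id, feature_sequence) template layout is an assumption of the sketch.

```python
from collections import Counter

def knn_identify(sample, templates, k=3):
    """KNN classification using DTW distance (the dtw_distance sketch
    above). templates: list of (user_id, feature_sequence) pairs.
    Returns the user receiving the most votes among the k nearest
    templates."""
    ranked = sorted(templates, key=lambda ut: dtw_distance(sample, ut[1]))
    votes = Counter(user for user, _ in ranked[:k])
    return votes.most_common(1)[0][0]
```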
Fig. 7 is a schematic structural diagram of a first embodiment of a user identification apparatus according to the present application, where the user identification apparatus may be implemented as part or all of a terminal device through software, hardware, or a combination of the software and the hardware, as shown in fig. 7, the apparatus of the present embodiment may include: a track building module 11, a stroke segmentation processing module 12, an information extraction module 13 and a recognition module 14, wherein,
the trajectory construction module 11 is configured to construct a trajectory of a user gesture according to a rotational angular velocity measured by a gyroscope in the terminal device in real time, where the trajectory is formed by three-dimensional coordinate points of the user arm at each moment.
The stroke segmentation processing module 12 is configured to perform stroke segmentation processing on the trajectory.
The information extraction module 13 is configured to perform feature information extraction on each stroke.
The identification module 14 is configured to perform user identity identification according to the extracted feature information of each stroke and feature information of each stroke corresponding to one track in at least one group of pre-stored feature information templates, where each group of pre-stored feature information templates includes feature information of each stroke corresponding to at least one track.
The characteristic information comprises shape information and speed information, or the characteristic information comprises length information, angle information and speed information, or the characteristic information comprises shape information, speed information and acceleration information.
Further, the trajectory construction module 11 is specifically configured to:
calculating a rotation matrix C_t of the attitude change of the terminal equipment from the previous moment to the current moment according to the rotation angular velocity at the current moment, and calculating, according to the formula P_t = P_(t-) * C_t, the three-dimensional coordinate point P_t of the user arm at the current moment, thereby obtaining a three-dimensional coordinate point of the user arm at each moment; wherein P_(t-) is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
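For illustration, chaining P_t = P_(t-) * C_t from gyroscope samples can be sketched as below; the small-angle Rodrigues integration and the initial arm vector p0 are assumptions of the sketch, as the patent fixes only the starting point of the gesture as the origin.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Rotation matrix C_t for angular velocity omega (rad/s, shape (3,))
    over one sample interval dt, via Rodrigues' formula."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = omega / np.linalg.norm(omega)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def build_trajectory(gyro, dt, p0=None):
    """Chain P_t = P_(t-) * C_t over all samples; gyro: (N, 3) angular
    velocities, dt: sampling interval in seconds. p0 is an assumed
    initial arm vector from the elbow (unit length here)."""
    p = np.array([0.0, 1.0, 0.0]) if p0 is None else p0
    points = [p]
    for omega in gyro:
        p = p @ rotation_from_gyro(omega, dt)
        points.append(p)
    return np.array(points)
```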
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle thereof is similar, which is not described herein again.
The user identification device provided by this embodiment constructs a trajectory of a user gesture according to the rotational angular velocity measured in real time by a gyroscope in the terminal device, performs stroke segmentation processing on the trajectory, extracts the characteristic information of each segment of stroke, and performs user identification according to the extracted characteristic information of each segment of stroke and the characteristic information of each segment of stroke corresponding to one trajectory in at least one group of pre-stored characteristic information templates. User identity can thus still be identified in various postures, such as standing, sitting, and lying, improving both the accuracy of identification and the user experience.
Fig. 8 is a schematic structural diagram of a second embodiment of the user identification apparatus of the present application, as shown in fig. 8, based on the apparatus structure shown in fig. 7, the apparatus of the present embodiment further includes a first determining unit 121 and a first normalizing unit 122, where the first determining unit 121 is configured to determine a segmentation point in the trajectory according to the three-dimensional curvature to obtain at least one segment of a stroke, and the first normalizing unit 122 is configured to perform size normalization and rotation normalization on each segment of the stroke.
Further, the first normalization unit 122 is specifically configured to:
dividing the size of each stroke by the length of the track, rotating the axis of each stroke to be parallel to the X axis of an initial absolute coordinate system, wherein the axis of each stroke is a line segment from a starting point to an ending point, the initial absolute coordinate system is the same as the three coordinate axes of the coordinate system of the terminal equipment at the starting moment of the gesture of the user, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the elbow of the user.
Optionally, the information extraction module 13 is specifically configured to:
extracting a three-dimensional coordinate sequence of equal time intervals in each stroke section to obtain shape information of each stroke section, and calculating a first derivative sequence of three-dimensional positions of equal distance intervals in each stroke section with respect to time to obtain speed information of each stroke section.
Optionally, the information extraction module 13 is specifically configured to:
extracting a three-dimensional coordinate sequence with equal time intervals in each stroke section to obtain shape information of each stroke section, calculating a first derivative sequence of three-dimensional positions with equal distance intervals in each stroke section with respect to time to obtain speed information of each stroke section, and calculating acceleration information of each stroke section according to the speed information of each stroke section.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle thereof is similar, which is not described herein again.
Fig. 9 is a schematic structural diagram of a third embodiment of the user identification apparatus of the present application, and as shown in fig. 9, based on the apparatus structure shown in fig. 7, in the apparatus of the present embodiment, further, the stroke segmentation processing module 12 includes a second normalization unit 123 and a second determination unit 124, where the second normalization unit 123 is configured to perform rotation normalization and size normalization on a trajectory. The second determining unit 124 is configured to determine, according to the two-dimensional curvature, a segmentation point in the trajectory after the rotation normalization and the size normalization, and obtain at least one segment of the stroke.
Further, the second normalization unit 123 is specifically configured to:
determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory;
searching for the axis of minimum moment of inertia of the trajectory, and rotating the trajectory so that this axis, projected onto the Y-Z plane, is parallel to the Y axis;
calculating the center of gravity (ū, v̄) of the trajectory, where ū = (1/N)·Σu(i) and v̄ = (1/N)·Σv(i); computing the covariance matrix I of the trajectory from the centered coordinates (u(i) − ū, v(i) − v̄); and multiplying all three-dimensional coordinate points forming the trajectory by I;
the width W and height H of the trajectory are calculated, u(i) in the two-dimensional coordinate sequence is divided by W, and v(i) in the two-dimensional coordinate sequence is divided by H.
Optionally, the information extraction module 13 is specifically configured to:
calculating the length of each stroke to obtain length information of each stroke, calculating an included angle between two continuous strokes, wherein the included angle is the included angle of the minimum rotational inertia axis of the two continuous strokes to obtain angle information between the two continuous strokes, and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke with respect to time to obtain speed information of each stroke.
In the foregoing embodiment, further, the identification module 14 is specifically configured to:
and for each group of pre-stored characteristic information templates, calculating the dynamic time warping (DTW) distance between the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in the group of pre-stored characteristic information templates, and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value. The preset threshold is the sum of the average value and the standard deviation of the DTW distances between the characteristic information of each stroke corresponding to all tracks in a group of pre-stored characteristic information templates.
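A sketch of this threshold test follows; it reuses the dtw_distance sketch above, and it reads "the DTW distances between the characteristic information corresponding to all tracks" as the pairwise distances within the group, which is an assumption of the sketch.

```python
import numpy as np

def acceptance_threshold(template_group):
    """Preset threshold = mean + standard deviation of the pairwise DTW
    distances among the tracks of one pre-stored template group; the
    group is assumed to contain at least two tracks."""
    d = [dtw_distance(a, b)
         for i, a in enumerate(template_group)
         for b in template_group[i + 1:]]
    return np.mean(d) + np.std(d)

def accept(sample, template_group):
    """Accept the gesture if its best DTW match within the group falls
    below the group's preset threshold."""
    best = min(dtw_distance(sample, t) for t in template_group)
    return best <= acceptance_threshold(template_group)
```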
Further, each group of pre-stored characteristic information templates carries a user identifier.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle thereof is similar, which is not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.

Claims (26)

1. A user identity recognition method is characterized by comprising the following steps:
constructing a track of a user gesture according to a rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of the user arm at each moment;
performing stroke segmentation processing on the track, and extracting characteristic information of each segment of strokes;
and identifying the identity of the user according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in at least one group of pre-stored characteristic information templates, wherein each group of pre-stored characteristic information templates comprises the characteristic information of each stroke corresponding to at least one track.
2. The method of claim 1,
the characteristic information includes shape information and speed information, or,
the characteristic information includes length information, angle information, and velocity information, or,
the characteristic information includes shape information, velocity information, and acceleration information.
3. The method of claim 1, wherein constructing the trajectory of the user gesture from rotational angular velocities measured in real time by a gyroscope comprises:
calculating a rotation matrix C_t of the attitude change of the terminal equipment from the previous moment to the current moment according to the rotation angular velocity at the current moment;
according to the formula P_t = P_(t-) * C_t, calculating a three-dimensional coordinate point P_t of the user arm at the current moment, and obtaining a three-dimensional coordinate point of the user arm at each moment;
wherein P_(t-) is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
4. The method of claim 3, wherein the stroke segmentation processing of the trajectory comprises:
determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke;
and carrying out size normalization and rotation normalization on each stroke.
5. The method of claim 4, wherein the normalizing the size of each stroke segment comprises:
dividing the size of each stroke by the length of the track;
the performing rotation normalization on each stroke includes:
and rotating the axis of each stroke segment to be parallel to the X axis of an initial absolute coordinate system, wherein the axis of each stroke segment is a line segment from a starting point to an end point, the initial absolute coordinate system is the same as the three coordinate axes of the coordinate system of the terminal equipment at the starting moment of the gesture of the user, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the elbow of the user.
6. The method according to claim 4 or 5, wherein the extracting feature information of each stroke includes:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke section to obtain shape information of each stroke section;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
7. The method according to claim 4 or 5, wherein the extracting feature information of each stroke includes:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke section to obtain shape information of each stroke section;
calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain speed information of each stroke segment;
and calculating the acceleration information of each stroke according to the speed information of each stroke.
8. The method of claim 3, wherein the stroke segmentation processing of the trajectory comprises:
performing rotation normalization and size normalization on the track;
and determining segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
9. The method of claim 8, wherein the rotation normalizing the trajectory comprises:
determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory;
searching for the axis of minimum moment of inertia of the trajectory, and rotating the trajectory so that this axis, projected onto the Y-Z plane, is parallel to the Y axis;
calculating the center of gravity (ū, v̄) of the trajectory, where ū = (1/N)·Σu(i) and v̄ = (1/N)·Σv(i);
computing the covariance matrix I of the trajectory from the centered coordinates (u(i) − ū, v(i) − v̄);
multiplying all three-dimensional coordinate points constituting the trajectory by I;
the size normalization of the track includes:
calculating the width W and height H of the track, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
10. The method according to claim 6 or 7, wherein the extracting feature information of each stroke includes:
calculating the length of each stroke segment to obtain the length information of each stroke segment;
calculating an included angle between two continuous strokes, wherein the included angle is the included angle of the minimum moment of inertia axis of each of the two continuous strokes, and obtaining angle information between the two continuous strokes;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
11. The method according to claim 1, wherein the performing user identification according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one set of pre-stored feature information templates comprises:
for each group of pre-stored characteristic information templates, calculating the dynamic time warping (DTW) distance between the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in the group of pre-stored characteristic information templates;
and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value.
12. The method according to claim 11, wherein the preset threshold is a sum of an average value of DTW distances between feature information of each stroke corresponding to all tracks in a set of pre-stored feature information templates and a standard deviation of DTW distances between feature information of each stroke corresponding to all tracks in the set of pre-stored feature information templates.
13. The method of claim 1, wherein each set of pre-stored characteristic information templates carries a user identification.
14. A user identification apparatus, comprising:
the track building module is used for building a track of a user gesture according to a rotation angular velocity measured by a gyroscope in the terminal equipment in real time, wherein the track is formed by three-dimensional coordinate points of each moment of an arm of the user;
the stroke segmentation processing module is used for performing stroke segmentation processing on the track;
the information extraction module is used for extracting the characteristic information of each stroke;
and the identification module is used for carrying out user identity identification according to the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in at least one group of pre-stored characteristic information templates, and each group of pre-stored characteristic information templates comprises the characteristic information of each stroke corresponding to at least one track.
15. The apparatus of claim 14,
the characteristic information includes shape information and speed information, or,
the characteristic information includes length information, angle information, and velocity information, or,
the characteristic information includes shape information, velocity information, and acceleration information.
16. The apparatus of claim 14, wherein the trajectory construction module is specifically configured to:
calculating a rotation matrix C_t of the attitude change of the terminal equipment from the previous moment to the current moment according to the rotation angular velocity at the current moment;
according to the formula P_t = P_(t-) * C_t, calculating a three-dimensional coordinate point P_t of the user arm at the current moment, and obtaining a three-dimensional coordinate point of the user arm at each moment;
wherein P_(t-) is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
17. The apparatus of claim 16, wherein the stroke segmentation processing module comprises:
the first determining unit is used for determining segmentation points in the track according to the three-dimensional curvature to obtain at least one segment of stroke;
and the first normalization unit is used for carrying out size normalization and rotation normalization on each stroke.
18. The apparatus according to claim 17, wherein the first normalization unit is specifically configured to:
dividing the size of each stroke by the length of the track;
and rotating the axis of each stroke segment to be parallel to the X axis of an initial absolute coordinate system, wherein the axis of each stroke segment is a line segment from a starting point to an end point, the initial absolute coordinate system is the same as the three coordinate axes of the coordinate system of the terminal equipment at the starting moment of the gesture of the user, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the elbow of the user.
19. The apparatus according to claim 17 or 18, wherein the information extraction module is specifically configured to:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke section to obtain shape information of each stroke section;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
20. The apparatus according to claim 17 or 18, wherein the information extraction module is specifically configured to:
extracting a three-dimensional coordinate sequence at equal time intervals in each stroke section to obtain shape information of each stroke section;
calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain speed information of each stroke segment;
and calculating the acceleration information of each stroke according to the speed information of each stroke.
21. The apparatus of claim 16, wherein the stroke segmentation processing module comprises:
the second normalization unit is used for carrying out rotation normalization and size normalization on the track;
and the second determining unit is used for determining the segmentation points in the track after the rotation normalization and the size normalization according to the two-dimensional curvature to obtain at least one segment of stroke.
22. The apparatus according to claim 21, wherein the second normalization unit is specifically configured to:
determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the Y-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, v(i) is the Z-axis value of the i-th three-dimensional coordinate point projected onto the Y-Z plane, and N is the number of three-dimensional coordinate points constituting the trajectory;
searching for the axis of minimum moment of inertia of the trajectory, and rotating the trajectory so that this axis, projected onto the Y-Z plane, is parallel to the Y axis;
calculating the center of gravity (ū, v̄) of the trajectory, where ū = (1/N)·Σu(i) and v̄ = (1/N)·Σv(i);
computing the covariance matrix I of the trajectory from the centered coordinates (u(i) − ū, v(i) − v̄);
multiplying all three-dimensional coordinate points constituting the trajectory by I;
calculating the width W and height H of the track, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
23. The apparatus according to claim 19 or 20, wherein the information extraction module is specifically configured to:
calculating the length of each stroke segment to obtain the length information of each stroke segment;
calculating an included angle between two continuous strokes, wherein the included angle is the included angle of the minimum moment of inertia axis of each of the two continuous strokes, and obtaining angle information between the two continuous strokes;
and calculating a first derivative sequence of three-dimensional positions at equal distance intervals in each stroke segment with respect to time to obtain the speed information of each stroke segment.
24. The apparatus of claim 14, wherein the identification module is specifically configured to:
for each group of pre-stored characteristic information templates, calculating the dynamic time warping (DTW) distance between the extracted characteristic information of each stroke and the characteristic information of each stroke corresponding to one track in the group of pre-stored characteristic information templates;
and determining whether to accept the user initiating the user gesture according to the calculated DTW distance and a preset threshold value.
25. The apparatus according to claim 24, wherein the preset threshold is a sum of an average value of DTW distances between feature information of each stroke corresponding to all tracks in a set of pre-stored feature information templates and a standard deviation of DTW distances between feature information of each stroke corresponding to all tracks in the set of pre-stored feature information templates.
26. The apparatus of claim 14, wherein each set of pre-stored characteristic information templates carries a user identifier.
CN201710128556.2A 2017-03-06 2017-03-06 Method for identifying ID and device Pending CN108536314A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710128556.2A CN108536314A (en) 2017-03-06 2017-03-06 Method for identifying ID and device
PCT/CN2018/078139 WO2018161893A1 (en) 2017-03-06 2018-03-06 User identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710128556.2A CN108536314A (en) 2017-03-06 2017-03-06 Method for identifying ID and device

Publications (1)

Publication Number Publication Date
CN108536314A true CN108536314A (en) 2018-09-14

Family

ID=63447267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710128556.2A Pending CN108536314A (en) 2017-03-06 2017-03-06 Method for identifying ID and device

Country Status (2)

Country Link
CN (1) CN108536314A (en)
WO (1) WO2018161893A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117633276B (en) * 2024-01-25 2024-06-07 江苏欧帝电子科技有限公司 Writing track recording and broadcasting method, system and terminal


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034429A (en) * 2011-10-10 2013-04-10 北京千橡网景科技发展有限公司 Identity authentication method and device for touch screen
US9129478B2 (en) * 2013-05-20 2015-09-08 Microsoft Corporation Attributing user action based on biometric identity
CN103295028B (en) * 2013-05-21 2018-09-04 深圳Tcl新技术有限公司 gesture operation control method, device and intelligent display terminal
CN103631501B (en) * 2013-10-11 2017-03-01 金硕澳门离岸商业服务有限公司 Data transmission method based on gesture control

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2428802A (en) * 2005-07-30 2007-02-07 Peter Mccarthy Wearable motion sensor device with RFID tag
CN102810008A (en) * 2012-05-16 2012-12-05 北京捷通华声语音技术有限公司 Air input system, method and air input acquisition equipment
CN103257711A (en) * 2013-05-24 2013-08-21 河南科技大学 Space gesture input method
CN103679213A (en) * 2013-12-13 2014-03-26 电子科技大学 3D gesture recognition method
CN103927532A (en) * 2014-04-08 2014-07-16 武汉汉德瑞庭科技有限公司 Handwriting registration method based on stroke characteristics
CN103984416A (en) * 2014-06-10 2014-08-13 北京邮电大学 Gesture recognition method based on acceleration sensor
WO2016182361A1 (en) * 2015-05-12 2016-11-17 Samsung Electronics Co., Ltd. Gesture recognition method, computing device, and control device
CN105630174A (en) * 2016-01-22 2016-06-01 上海斐讯数据通信技术有限公司 Intelligent terminal dialing system and method
CN105912910A (en) * 2016-04-21 2016-08-31 武汉理工大学 Cellphone sensing based online signature identity authentication method and system

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409316A (en) * 2018-11-07 2019-03-01 极鱼(北京)科技有限公司 Aerial endorsement method and device
CN109409316B (en) * 2018-11-07 2022-04-01 极鱼(北京)科技有限公司 Over-the-air signature method and device
CN110490059A (en) * 2019-07-10 2019-11-22 广州幻境科技有限公司 A kind of gesture identification method, system and the device of wearable intelligent ring
CN110942042A (en) * 2019-12-02 2020-03-31 深圳棒棒帮科技有限公司 Three-dimensional handwritten signature authentication method, system, storage medium and equipment
CN110942042B (en) * 2019-12-02 2022-11-08 深圳棒棒帮科技有限公司 Three-dimensional handwritten signature authentication method, system, storage medium and equipment
CN112598424A (en) * 2020-12-29 2021-04-02 武汉天喻聚联科技有限公司 Authentication method and system based on action password
CN116630993A (en) * 2023-05-12 2023-08-22 北京竹桔科技有限公司 Identity information recording method, device, computer equipment and storage medium
CN116630993B (en) * 2023-05-12 2024-09-06 北京竹桔科技有限公司 Identity information recording method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2018161893A1 (en) 2018-09-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination