WO2018161893A1 - User identification method and device

User identification method and device

Info

Publication number
WO2018161893A1
Authority
WO
WIPO (PCT)
Prior art keywords
stroke
trajectory
feature information
information
user
Prior art date
Application number
PCT/CN2018/078139
Other languages
French (fr)
Chinese (zh)
Inventor
张军平 (Zhang Junping)
黄晨宇 (Huang Chenyu)
张黔 (Zhang Qian)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2018161893A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 Hand-worn input/output arrangements, e.g. data gloves

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a user identity identification method and apparatus.
  • smart devices such as smart watches, smart bracelets, smart oximeters, and smart blood pressure monitors.
  • such devices sometimes need to record sensitive data, such as the user's own medical data and motion data.
  • the smart device is required to recognize the identity of the current user.
  • most smart devices have no touch screen, omit additional expensive sensors (such as fingerprint readers) for identification in order to reduce cost, and have limited computing power.
  • AirSig is an identity authentication method based on gesture recognition.
  • AirSig allows a user to enter a password on a smart device that lacks a touch screen, expensive sensors, and abundant computing resources: the user writes a gesture in the air while holding the smart device.
  • the smart device can identify the identity of the air gesture initiator.
  • AirSig identifies the user's identity by installing an accelerometer and a gyroscope in the smart device.
  • the real-time data output by the accelerometer and the gyroscope is compared with the pre-stored feature information template data.
  • the real-time data output by the accelerometer and the gyroscope is directly compared with the pre-stored feature information template data to identify the user. Therefore, the posture in which the user holds the smart device when producing the real-time data must match the posture used when the pre-stored feature information template was recorded; recognition accuracy is thus limited by the posture in which the user holds the smart device.
  • the present application provides a user identification method and apparatus to remove the related art's restriction on the posture in which the user holds the smart device.
  • the application provides a user identity identification method, including:
  • constructing a trajectory of the user gesture, where the trajectory consists of the three-dimensional coordinate points of the user's arm at each moment; segmenting the trajectory into strokes and extracting feature information from each stroke; and identifying the user according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one set of pre-stored feature information templates, where each set of pre-stored feature information templates includes the feature information of each stroke of at least one trajectory.
  • The trajectory of the user gesture is constructed according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory is segmented into strokes, and feature information is extracted from each stroke; finally, the user is identified according to the extracted feature information of each stroke and the feature information of each corresponding stroke of a trajectory in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared with the pre-stored templates, recognition of the user identity is not limited by the posture in which the user holds the terminal device; the user can still be identified in various postures, such as sitting, standing, and lying down, improving recognition accuracy and user experience.
  • the feature information includes shape information and speed information, or the feature information includes length information, angle information, and speed information, or the feature information includes shape information, speed information, and acceleration information.
  • the trajectory of the user gesture is constructed according to the rotational angular velocity measured by the gyroscope in real time, including:
  • the current position is computed as P_t = P_{t-} · C_t, where P_t is the three-dimensional coordinate point at the current moment, P_{t-} is the three-dimensional coordinate point at the previous moment, and C_t is the rotation matrix of the posture change from the previous moment to the current moment; the three-dimensional coordinate point at which the user gesture starts is taken as the origin.
  • the segmenting processing of the trajectory includes:
  • the size of each stroke is normalized, including:
  • the rotation normalization of each stroke includes:
  • the axis of each stroke is the line segment from its starting point to its ending point; the three coordinate axes of the initial absolute coordinate system coincide with those of the terminal device coordinate system at the moment the user gesture starts, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
  • the feature information extraction is performed for each stroke, including:
  • a sequence of the first derivatives with respect to time of the three-dimensional positions of each stroke, sampled at equal intervals, is calculated to obtain the velocity information of each stroke.
  • the feature information extraction is performed for each stroke, including:
  • the acceleration information of each stroke is calculated according to the speed information of each stroke.
  • the segmenting processing of the trajectory includes:
  • the segmentation points in the trajectory after the rotation normalization and the size normalization are determined according to the two-dimensional curvature, and at least one stroke is obtained.
  • the rotation normalization of the trajectory includes:
  • the normalizing the size of the trajectory includes:
  • the width W and height H of the trajectory are calculated; each u(i) in the two-dimensional coordinate sequence is divided by W, and each v(i) is divided by H.
  • the feature information extraction is performed for each stroke, including:
  • a sequence of the first derivatives with respect to time of the three-dimensional positions of each stroke, sampled at equal intervals, is calculated to obtain the velocity information of each stroke.
  • the user identity is determined according to the feature information of each of the extracted strokes and the feature information of each of the corresponding strokes in the at least one set of pre-stored feature information templates, including:
  • whether to accept the user who initiated the user gesture is determined according to the calculated DTW distance and a preset threshold.
  • the preset threshold is the sum of the average and the standard deviation of the DTW distances between the feature information of corresponding strokes across all trajectories in a set of pre-stored feature information templates.
  • each set of pre-stored feature information templates carries a user identifier.
  • the application provides a user identity identification device, including:
  • a trajectory construction module configured to construct a trajectory of the user gesture according to a real-time measured rotational angular velocity of the gyroscope in the terminal device, where the trajectory is composed of three-dimensional coordinate points of the user's arm at each moment;
  • a stroke segmentation processing module, configured to segment the trajectory into strokes;
  • the information extraction module is configured to extract feature information for each segment of the stroke;
  • an identification module, configured to identify the user according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one set of pre-stored feature information templates, where each set of pre-stored feature information templates includes the feature information of each stroke of at least one trajectory.
  • The trajectory of the user gesture is constructed according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory is segmented into strokes, and feature information is extracted from each stroke; finally, the user is identified according to the extracted feature information of each stroke and the feature information of each corresponding stroke of a trajectory in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared with the pre-stored templates, recognition of the user identity is not limited by the posture in which the user holds the terminal device; the user can still be identified in various postures, such as sitting, standing, and lying down, improving recognition accuracy and user experience.
  • the feature information includes shape information and speed information, or the feature information includes length information, angle information, and speed information, or the feature information includes shape information, speed information, and acceleration information.
  • the trajectory building module is specifically configured to:
  • the current position is computed as P_t = P_{t-} · C_t, where P_t is the three-dimensional coordinate point at the current moment, P_{t-} is the three-dimensional coordinate point at the previous moment, and C_t is the rotation matrix of the posture change from the previous moment to the current moment; the three-dimensional coordinate point at which the user gesture starts is taken as the origin.
  • the stroke segmentation processing module includes:
  • a first determining unit configured to determine a segment point in the trajectory according to the three-dimensional curvature, to obtain at least one stroke
  • the first normalization unit is used for size normalization and rotation normalization of each stroke.
  • the first normalization unit is specifically configured to:
  • the axis of each stroke is the line segment from its starting point to its ending point; the three coordinate axes of the initial absolute coordinate system coincide with those of the terminal device coordinate system at the moment the user gesture starts, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
  • the information extraction module is specifically configured to:
  • a sequence of the first derivatives with respect to time of the three-dimensional positions of each stroke, sampled at equal intervals, is calculated to obtain the velocity information of each stroke.
  • the information extraction module is specifically configured to:
  • the acceleration information of each stroke is calculated according to the speed information of each stroke.
  • the stroke segmentation processing module includes:
  • a second normalization unit configured to perform rotation normalization and size normalization on the trajectory
  • a second determining unit configured to determine a segmentation point in the trajectory after the rotation normalization and the size normalization according to the two-dimensional curvature, to obtain at least one stroke.
  • the second normalization unit is specifically configured to:
  • the width W and height H of the trajectory are calculated; each u(i) in the two-dimensional coordinate sequence is divided by W, and each v(i) is divided by H.
  • the information extraction module is specifically configured to:
  • a sequence of the first derivatives with respect to time of the three-dimensional positions of each stroke, sampled at equal intervals, is calculated to obtain the velocity information of each stroke.
  • the identification module is specifically used to:
  • whether to accept the user who initiated the user gesture is determined according to the calculated DTW distance and a preset threshold.
  • the preset threshold is the sum of the average and the standard deviation of the DTW distances between the feature information of corresponding strokes across all trajectories in a set of pre-stored feature information templates.
  • each set of pre-stored feature information templates carries a user identifier.
  • the application provides a user identity recognition apparatus, including:
  • the memory is used to store program instructions
  • the processor is configured to perform the user identification method of the first aspect or any possible design of the first aspect.
  • the present application provides a readable storage medium storing an execution instruction; when at least one processor of the user identification device executes the instruction, the user identification device performs the user identification method of the first aspect or any possible design of the first aspect.
  • the present application provides a program product comprising an execution instruction stored in a readable storage medium.
  • At least one processor of the user identification device can read the execution instruction from the readable storage medium, and the at least one processor executes the instruction so that the user identification device implements the user identification method of the first aspect or any possible design of the first aspect.
  • FIG. 1 is a flowchart of Embodiment 1 of the user identification method of the present application;
  • FIG. 2 is a schematic diagram of the training process;
  • FIG. 3 is a schematic diagram of the authentication and identification process;
  • FIG. 4 is a flowchart of Embodiment 2 of the user identification method of the present application;
  • FIG. 5 is a flowchart of Embodiment 3 of the user identification method of the present application;
  • FIG. 6 is a flowchart of Embodiment 4 of the user identification method of the present application;
  • FIG. 7 is a schematic structural diagram of Embodiment 1 of the user identification apparatus of the present application;
  • FIG. 8 is a schematic structural diagram of Embodiment 2 of the user identification apparatus of the present application;
  • FIG. 9 is a schematic structural diagram of Embodiment 3 of the user identification apparatus of the present application.
  • The user identification method and device provided by this application can be applied to various terminal devices, such as a mobile phone, a smart watch, or a smart blood pressure monitor, to identify a user. The terminal device does not need a touch screen; the user writes a gesture in the air while holding the terminal device.
  • the terminal device can identify the identity of the air gesture initiator.
  • The present application draws the user gesture trajectory by means of the gyroscope installed in the terminal device, extracts feature information from the trajectory, and compares it with pre-stored feature templates to recognize the user's identity.
  • Unlike approaches that directly compare the real-time accelerometer and gyroscope output with pre-stored feature information template data, this method is not limited by the posture in which the user holds the terminal device.
  • The user and the terminal device can still recognize the user identity in various postures, such as sitting, standing, and lying down.
  • FIG. 1 is a flowchart of Embodiment 1 of a user identification method of the present application. As shown in FIG. 1 , the method in this embodiment may include:
  • The terminal device is equipped with a gyroscope sensor. The user can hold the terminal device, or wear a device containing the gyroscope on the wrist, and then write in the air as input to the terminal device; the gyroscope captures the rotational angular velocity of the hand.
  • Even when performing the same gesture trajectory, each user's trajectory is different. According to the principles of kinesiology, when a person wants to make a gesture, the brain first segments the gesture, and each segment is an arc. Because the nerves control the muscles differently from person to person, no segment is a perfect arc, and the velocity from start to finish within each stroke follows a roughly normal distribution. These imperfections carry personal characteristics; human writing is based on the same principle.
  • To construct the trajectory of the user gesture, two coordinate systems are designed: the device coordinate system and the initial absolute coordinate system. The device coordinate system is the coordinate system of the terminal device itself, defined by the terminal device.
  • the initial absolute coordinate system is the same as the three coordinate axes of the terminal device coordinate system at the start of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position where the user's elbow is located.
  • The human arm is divided into an upper arm and a forearm: the upper arm connects the elbow and the shoulder, and the forearm connects the wrist and the elbow. When the user writes in the air, the upper arm does not move much.
  • the main part of the displacement is the movement of the wrist relative to the elbow.
  • This process can be regarded as the rigid body (forearm) moving around the elbow (the fulcrum).
  • the gyroscope records the angular velocity of the rotation, so the motion trajectory can be calculated as long as the length of the rigid body is known.
  • the trajectory of the user gesture is constructed according to the rotational angular velocity measured by the gyroscope in real time, which may be:
  • the current position is computed as P_t = P_{t-} · C_t, where P_t is the three-dimensional coordinate point at the current moment, P_{t-} is the three-dimensional coordinate point at the previous moment, and C_t is the rotation matrix of the posture change from the previous moment to the current moment; the three-dimensional coordinate point at which the user gesture starts is taken as the origin.
  • the user's forearm can be regarded as a rigid body moving around the elbow, that is, a straight line moves around the origin in the initial absolute coordinate system.
  • the gyroscope can obtain the angular velocity of the current device rotation, and thus can calculate the rotation matrix of the device posture change from the previous moment to the current moment, and this rotation matrix also describes the posture change of the user's arm.
  • the current rotation matrix C_t is: C_t = C_{t-} · (I + Ω_t · Δt)
  • Ω_t is the skew-symmetric matrix formed from (ω_x, ω_y, ω_z), the rotational angular velocity at the current time t obtained by the gyroscope
  • C_{t-} is the rotation matrix of the previous time t-
  • I is the identity matrix and Δt is the gyroscope sampling interval.
  • the three-dimensional coordinate point of each moment of the user's arm is obtained, that is, the trajectory of the user's gesture is obtained
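The rigid-body integration described above can be sketched as follows. This is an illustrative sketch rather than the patent's implementation: the sampling interval `dt`, the forearm length, the initial wrist direction along X, and the first-order rotation-matrix update are all assumptions.

```python
import numpy as np

def build_trajectory(angular_velocities, dt=0.01, arm_length=0.3):
    """Integrate gyroscope angular velocities into a 3-D wrist trajectory.

    angular_velocities: iterable of (wx, wy, wz) readings in rad/s.
    The forearm is modeled as a rigid rod of length arm_length rotating
    about the elbow, which is the origin of the initial absolute
    coordinate system; the initial wrist direction is assumed along X.
    """
    C = np.eye(3)                            # rotation matrix at gesture start
    arm = np.array([arm_length, 0.0, 0.0])   # forearm vector at the start
    trajectory = [arm.copy()]
    for wx, wy, wz in angular_velocities:
        # Skew-symmetric matrix of the angular velocity
        omega = np.array([[0.0, -wz,  wy],
                          [ wz, 0.0, -wx],
                          [-wy,  wx, 0.0]])
        # First-order update: C_t = C_{t-} (I + Omega * dt)
        C = C @ (np.eye(3) + omega * dt)
        trajectory.append(C @ arm)           # wrist position at time t
    return np.array(trajectory)
```

A longer arm or finer `dt` simply rescales and smooths the resulting point sequence; the stroke segmentation that follows does not depend on either choice.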
  • S102 Perform stroke segmentation processing on the trajectory, and perform feature information extraction on each stroke.
  • The trajectory needs to be segmented; the principle of segmentation is that each stroke approximates an arc. Distinguishing between different people requires extracting the features hidden in each stroke. For a trajectory, a segmentation point is the junction of two strokes, that is, the turning point between them.
  • Normalization is then performed on each stroke, and the personal characteristics of each stroke are extracted.
  • the stroke processing is performed on the trajectory, which may specifically be:
  • the segmentation points in the trajectory are determined according to the three-dimensional curvature, at least one stroke is obtained, and then each of the strokes is subjected to size normalization and rotation normalization.
  • A curvature threshold is set to determine whether a point is a segmentation point.
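One way to realize the curvature-based segmentation is sketched below; the turning-angle approximation of the three-dimensional curvature and the threshold value are assumptions, not the patent's exact formula.

```python
import numpy as np

def segment_strokes(points, curvature_threshold=1.0):
    """Split a trajectory into strokes at high-curvature points.

    points: (N, 3) array. Curvature is approximated here by the turning
    angle between consecutive segments divided by the local arc length,
    a discrete stand-in for the three-dimensional curvature.
    """
    cut_indices = [0]
    for i in range(1, len(points) - 1):
        a = points[i] - points[i - 1]
        b = points[i + 1] - points[i]
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na == 0 or nb == 0:
            continue
        cos_angle = np.clip(np.dot(a, b) / (na * nb), -1.0, 1.0)
        angle = np.arccos(cos_angle)
        curvature = angle / (0.5 * (na + nb))  # turning angle per unit length
        if curvature > curvature_threshold:
            cut_indices.append(i)
    cut_indices.append(len(points) - 1)
    return [points[s:e + 1] for s, e in zip(cut_indices[:-1], cut_indices[1:])]
```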
  • The size normalization of each stroke can be: dividing each stroke by the total length of the trajectory, so that the trajectory length of the entire gesture becomes unity.
  • The rotation normalization of each stroke can be: rotating the axis of each stroke to be parallel to the X axis of the initial absolute coordinate system.
  • The axis of each stroke is the line segment from its starting point to its ending point.
  • The initial absolute coordinate system shares its three coordinate axes with the terminal device coordinate system at the start of the user gesture, and its origin is the three-dimensional coordinate point of the position of the user's elbow.
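The two normalization steps can be sketched as follows, assuming Rodrigues' rotation formula to align each stroke axis with the X axis; the helper names are illustrative, not from the patent.

```python
import numpy as np

def rotation_to_x(v):
    """Rotation matrix that maps direction v onto the X axis (Rodrigues' formula)."""
    v = v / np.linalg.norm(v)
    x = np.array([1.0, 0.0, 0.0])
    axis = np.cross(v, x)
    s = np.linalg.norm(axis)        # sin of the rotation angle
    c = np.dot(v, x)                # cos of the rotation angle
    if s < 1e-12:                   # already parallel or anti-parallel to X
        return np.eye(3) if c > 0 else np.diag([-1.0, -1.0, 1.0])
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def normalize_strokes(strokes):
    """Size- and rotation-normalize each stroke as described above."""
    # Total gesture length: sum of segment lengths over all strokes
    total = sum(np.sum(np.linalg.norm(np.diff(s, axis=0), axis=1)) for s in strokes)
    out = []
    for s in strokes:
        s = s / total                   # trajectory length of the gesture -> 1
        R = rotation_to_x(s[-1] - s[0]) # stroke axis: start -> end
        out.append((s - s[0]) @ R.T)    # rotate axis parallel to X
    return out
```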
  • the feature information includes shape information and speed information, or the feature information includes length information, angle information, and speed information, or the feature information includes shape information, speed information, and acceleration information.
  • feature information is extracted for each stroke, which may be:
  • the three-dimensional coordinate sequence of each stroke is sampled at equal time intervals to obtain the shape information of each stroke; the sequence of first derivatives of the three-dimensional positions with respect to time is calculated to obtain the velocity information of each stroke.
  • the velocity information is v_t = (P_t - P_{t-}) / (t - t-), where P_t is the current position, P_{t-} is the position at the previous moment, t is the current moment, and t- is the previous moment.
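A minimal sketch of this feature extraction, assuming a fixed sampling interval `dt` and an illustrative resampling count for the shape information:

```python
import numpy as np

def velocity_features(stroke, dt=0.01):
    """v_t = (P_t - P_{t-}) / (t - t-): first derivative of position w.r.t. time."""
    return np.diff(stroke, axis=0) / dt

def shape_features(stroke, n_samples=16):
    """Resample the stroke's 3-D coordinates at equal intervals (shape information)."""
    idx = np.linspace(0, len(stroke) - 1, n_samples).round().astype(int)
    return stroke[idx]
```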
  • Feature information extraction for each stroke may further include: calculating the acceleration information of each stroke from the velocity information of each stroke.
  • the stroke processing of the trajectory may be specifically:
  • performing rotation normalization and size normalization on the trajectory, and then determining the segmentation points in the normalized trajectory according to the two-dimensional curvature to obtain at least one stroke.
  • the rotation of the trajectory is normalized, which may be:
  • u(i) is the value on the Y axis of the projections onto the Y-Z plane of the three-dimensional coordinate points constituting the trajectory
  • v(i) is the value of the Z-axis projected on the YZ plane of the three-dimensional coordinate point constituting the trajectory
  • N is the number of three-dimensional coordinate points constituting the trajectory.
  • the rotation axis with the minimum moment of inertia of the trajectory is found, and the trajectory projected onto the Y-Z plane is rotated until this axis is parallel to the Y axis.
  • the size normalization of the trajectory may be: calculating the width W and height H of the projected trajectory, dividing each u(i) in the two-dimensional coordinate sequence by W, and dividing each v(i) by H.
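The projection-based normalization can be sketched as follows; finding the minimum-moment-of-inertia axis via the covariance eigenvectors is an assumed realization of the axis search described above.

```python
import numpy as np

def normalize_projection(points):
    """Rotation- and size-normalize a trajectory projected onto the Y-Z plane.

    u(i), v(i) are the Y- and Z-axis values of each point's projection. The
    axis of minimum moment of inertia (the principal axis of the 2-D point
    cloud, found here via the covariance eigenvectors) is rotated to lie
    along the Y axis; then u is divided by the width W and v by the height H.
    """
    uv = points[:, 1:3].astype(float)
    uv = uv - uv.mean(axis=0)                       # centre the projection
    eigvals, eigvecs = np.linalg.eigh(np.cov(uv.T))
    principal = eigvecs[:, np.argmax(eigvals)]      # min-inertia axis direction
    theta = np.arctan2(principal[1], principal[0])  # its angle to the u axis
    c, s = np.cos(-theta), np.sin(-theta)
    uv = uv @ np.array([[c, -s], [s, c]]).T         # rotate axis onto u
    W = np.ptp(uv[:, 0]) or 1.0                     # width of the trajectory
    H = np.ptp(uv[:, 1]) or 1.0                     # height of the trajectory
    return uv / np.array([W, H])
```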
  • the feature information of the stroke is extracted for each stroke.
  • the pre-stored feature information template includes feature information of each segment of the stroke corresponding to the at least one track.
  • User identification is then performed. If the terminal device has not previously been trained with the user's gestures, the terminal device will prompt the user to perform multiple training rounds (for example, 5 to 10 times).
  • During training, the terminal device extracts the feature information of each user gesture trajectory according to the feature extraction process described above, and stores the feature information of the multiple gesture trajectories as a set of pre-stored feature information templates.
  • the device can enter the authentication state.
  • A set of pre-stored feature information templates distinguishes one user; when a device is used by multiple users, multiple sets of pre-stored feature information templates can be stored, each carrying a user identifier to distinguish the users.
  • User identification according to the extracted feature information of each stroke and the feature information of each corresponding stroke in the pre-stored feature information template may be performed as follows:
  • DTW: dynamic time warping.
  • the DTW distance between the shape information and the velocity information between each stroke and the template is calculated separately.
  • Notably, the two kinds of feature information, velocity information and shape information, have different physical units; when calculating the overall DTW distance, the DTW distances of the two kinds of feature information are therefore normalized so that their values fall in the range 0 to 1, and are then added.
  • Each gesture trajectory template determines, according to the preset threshold, whether to accept the current user; the user is accepted when the DTW distance between the currently authenticated trajectory and more than half of the templates is less than the preset threshold.
  • The preset threshold is set, for example, to the average of the DTW distances between the feature information of corresponding strokes across all trajectories in the pre-stored feature information template set, plus one standard deviation.
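The DTW comparison can be sketched as follows. The `x / (1 + x)` squashing used to bring each feature distance into the 0-1 range is an assumption; the text only states that the two distances are normalized before being added.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gesture_distance(strokes_a, strokes_b):
    """Sum per-stroke DTW distances for each feature, normalize the two
    feature distances to the 0-1 range, and add them.

    strokes_a / strokes_b: lists of (shape_sequence, speed_sequence) pairs,
    one pair per corresponding stroke.
    """
    shape = sum(dtw_distance(sa, sb) for (sa, _), (sb, _) in zip(strokes_a, strokes_b))
    speed = sum(dtw_distance(va, vb) for (_, va), (_, vb) in zip(strokes_a, strokes_b))
    # Assumed squashing into [0, 1) so distances in different units can be added.
    return shape / (1.0 + shape) + speed / (1.0 + speed)
```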
  • Taking feature information including shape information, velocity information, and acceleration information as an example:
  • the user can start the gesture recognition process of the terminal device in an active or passive manner.
  • the terminal device prompts the recognition process to start
  • the user can start to make a gesture
  • the terminal device starts to record the gyroscope data in the gesture process.
  • Users can customize the style of gestures, such as five-pointed stars, Chinese characters, and so on.
  • the recognition may be ended in an active or passive manner.
  • When the device recognizes that the gesture has ended, trajectory construction, personal feature information extraction, and recognition are started. If the user exists in the database, the display screen prompts the identified user and the user is accepted to log in to the terminal device; if the user does not exist in the database, login is forbidden.
  • The user identification method provided in this embodiment constructs the trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, segments the trajectory into strokes, and extracts feature information from each stroke; finally, the user is identified according to the extracted feature information of each stroke and the feature information of each corresponding stroke of a trajectory in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared with the pre-stored templates, recognition of the user identity is not limited by the posture in which the user holds the terminal device; the user can still be identified in various postures, such as sitting, standing, and lying down, improving recognition accuracy and user experience.
  • FIG. 2 is a schematic diagram of the training process, as shown in Figure 2, including:
  • the terminal device prompts the user to perform multiple trainings (for example, 5 to 10 times), and stores feature information of multiple gesture trajectories as a pre-stored feature information template.
  • FIG 3 is a schematic diagram of the authentication identification process, as shown in Figure 3, including:
  • If authentication succeeds, the corresponding command is executed.
  • The following describes the user identification method of the present application in detail under three different usage scenarios.
  • FIG. 4 is a flowchart of Embodiment 2 of the user identification method of the present application. As shown in FIG. 4, the method in this embodiment may include:
  • the rotation matrix C t of the terminal device posture change from the previous moment to the current moment is calculated according to the rotational angular velocity of the current time
  • P_t = P_{t-} · C_t
  • the user's forearm can be regarded as a rigid body moving around the elbow, that is, a straight line moves around the origin in the initial absolute coordinate system.
  • the gyroscope can obtain the angular velocity of the current device rotation, and thus can calculate the rotation matrix of the device posture change from the previous moment to the current moment, and this rotation matrix also describes the posture change of the user's arm.
  • the current rotation matrix C t is:
  • (ω x , ω y , ω z ) is the rotational angular velocity at the current time t obtained by the gyroscope
  • C t- is the rotation matrix at the previous time t-
  • I is the identity matrix.
  • the three-dimensional coordinate point of each moment of the user's arm is obtained, that is, the trajectory of the user's gesture is obtained
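The trajectory-construction step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the fixed sampling interval `dt`, the unit forearm length, and the use of Rodrigues' formula for the per-sample rotation matrix are all assumptions.

```python
import numpy as np

def rotation_from_gyro(omega, dt):
    """Rotation matrix for the small rotation omega*dt (Rodrigues' formula);
    omega is the angular-velocity vector (wx, wy, wz) in rad/s."""
    w = np.asarray(omega, dtype=float)
    theta = np.linalg.norm(w) * dt
    if theta < 1e-12:
        return np.eye(3)
    k = w / np.linalg.norm(w)                       # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])              # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def gesture_trajectory(gyro_samples, dt=0.01, arm=(1.0, 0.0, 0.0)):
    """Integrate gyroscope samples into the 3-D trajectory of the arm tip.
    The forearm is treated as a rigid rod rotating about the elbow (the
    origin), as the text describes; each point is the rotated arm vector."""
    a = np.asarray(arm, dtype=float)
    C = np.eye(3)                                   # accumulated orientation
    points = [a.copy()]
    for omega in gyro_samples:
        C = C @ rotation_from_gyro(omega, dt)       # C_t from C_{t-}
        points.append(C @ a)                        # arm tip at time t
    return np.array(points)
```

Because the arm is modeled as a rigid rod about a fixed elbow, every trajectory point keeps the same distance from the origin.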
  • S402. Determine a segmentation point in the trajectory according to the three-dimensional curvature, and obtain at least one stroke.
  • the size of each stroke is divided by the length of the trajectory,
  • so that the track length of the entire gesture becomes unity.
  • the axis of each stroke is a line segment from its starting point to its ending point.
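A rough sketch of curvature-based stroke segmentation. The discrete-curvature approximation (the turn angle between consecutive segments) and the threshold value are illustrative assumptions; the patent does not give its exact curvature formula.

```python
import numpy as np

def curvature_segments(points, thresh=1.0):
    """Split a trajectory at high-curvature samples.  Curvature is
    approximated by the turn angle at each interior point; points whose
    turn angle exceeds `thresh` (radians) become segmentation points."""
    pts = np.asarray(points, dtype=float)
    cuts = [0]
    for i in range(1, len(pts) - 1):
        a, b = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na == 0 or nb == 0:
            continue
        cos = np.clip(a @ b / (na * nb), -1.0, 1.0)
        if np.arccos(cos) > thresh:        # sharp turn => segmentation point
            cuts.append(i)
    cuts.append(len(pts) - 1)
    return [pts[s:e + 1] for s, e in zip(cuts, cuts[1:])]
```

An L-shaped trajectory, for instance, splits into two strokes at the corner, while a straight line remains a single stroke.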
  • the DTW distances for the shape information and the velocity information between each stroke and the template are calculated separately. For each type of feature information, since there are multiple strokes, the DTW distance between each stroke of the gesture track and the corresponding stroke in the template is calculated, and the DTW distances of all strokes are summed as the DTW distance of that feature between the two tracks. It is worth noting that the two types of feature information, speed information and shape information, have different physical units. Therefore, when calculating the overall DTW distance, the DTW distances of the two types of feature information need to be normalized so that their values fall in the range 0-1 before being added.
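The per-feature DTW computation described above can be sketched as follows. The `shape`/`speed` field names and the max-based 0-1 normalisation are assumptions for illustration; the patent only states that the two feature distances are normalised to the 0-1 range before being added.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gesture_distance(strokes, template_strokes):
    """Sum per-stroke DTW distances for each feature, scale both feature
    totals into 0-1 (dividing by the larger total -- an assumed scheme),
    then add.  Each stroke is a dict with 'shape' and 'speed' sequences
    (hypothetical field names)."""
    totals = {}
    for feat in ('shape', 'speed'):
        totals[feat] = sum(dtw(s[feat], t[feat])
                           for s, t in zip(strokes, template_strokes))
    scale = max(totals.values()) or 1.0   # avoid dividing by zero
    return sum(v / scale for v in totals.values())
```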
  • each gesture trajectory template determines whether to accept the current user according to the preset threshold: the user is accepted when, for more than half of the templates, the DTW distance to the currently authenticated trajectory is less than the preset threshold.
  • the preset threshold is set, for example, to the average of the DTW distances between the feature information of each stroke corresponding to all the tracks in the pre-stored feature information template, plus one standard deviation.
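A sketch of the acceptance rule, assuming the threshold is the mean of the stored templates' pairwise DTW distances plus one (population) standard deviation, and a strict-majority vote over the templates:

```python
import numpy as np

def accept_threshold(pairwise_dtw):
    """Preset threshold: mean of the DTW distances between the stored
    templates plus one standard deviation, as the text suggests."""
    d = np.asarray(pairwise_dtw, dtype=float)
    return d.mean() + d.std()

def accept_user(distances_to_templates, threshold):
    """Majority vote: accept when more than half of the templates are
    closer to the current gesture than the threshold."""
    votes = sum(1 for d in distances_to_templates if d < threshold)
    return votes > len(distances_to_templates) / 2
```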
  • FIG. 5 is a flowchart of Embodiment 3 of the user identification method of the present application. As shown in FIG. 5, the method in this embodiment may include:
  • the rotation of the trajectory is normalized, which may be:
  • u(i) is the Y-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory
  • v(i) is the Z-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory
  • N is the number of three-dimensional coordinate points constituting the trajectory.
  • the rotation axis of the minimum moment of inertia of the trajectory is searched, and the trajectory is rotated to a position where the rotation axis of the minimum moment of inertia is parallel to the Y axis projected to the Y-Z plane.
  • the size of the trajectory is normalized, which may be:
  • each group of pre-stored feature information templates may carry a user identifier for distinguishing different users.
  • FIG. 6 is a flowchart of Embodiment 4 of the user identification method of the present application. As shown in FIG. 6, the method in this embodiment may include:
  • S601. Construct a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory being composed of the three-dimensional coordinate points of the user's arm at each moment.
  • S602. Determine a segmentation point in the trajectory according to the three-dimensional curvature, and obtain at least one stroke.
  • the size of each stroke is divided by the length of the trajectory,
  • so that the track length of the entire gesture becomes unity.
  • the axis of each stroke is a line segment from its starting point to its ending point.
  • the acceleration information of each stroke is calculated according to the speed information of each stroke, as the first-order derivative with respect to time of the speed: from the speed at the current moment and the speed at the previous moment,
  • the acceleration is their difference divided by the time interval.
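Since the patent's acceleration formula is rendered as an image in the original, the sketch below assumes a simple first-order finite difference of the velocity sequence:

```python
import numpy as np

def acceleration(speed, dt):
    """First-order finite difference of a stroke's velocity sequence:
    a_t = (v_t - v_{t-}) / dt.  The discrete form is an assumption
    standing in for the patent's (image-only) formula."""
    v = np.asarray(speed, dtype=float)
    return np.diff(v, axis=0) / dt
```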
  • FIG. 7 is a schematic structural diagram of Embodiment 1 of a user identity identification apparatus according to the present application.
  • the user identity identification apparatus may be implemented as part or all of a terminal device by using software, hardware, or a combination of the two.
  • as shown in FIG. 7, the apparatus of this embodiment may include: a trajectory construction module 11, a stroke segmentation processing module 12, an information extraction module 13, and an identification module 14, wherein
  • the trajectory construction module 11 is configured to construct a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory being composed of the three-dimensional coordinate points of the user's arm at each moment.
  • the stroke segmentation processing module 12 is configured to perform stroke segmentation processing on the trajectory.
  • the information extraction module 13 is configured to perform feature information extraction for each piece of strokes.
  • the identification module 14 is configured to perform user identification according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates, where each set of pre-stored feature information templates includes the feature information of each stroke corresponding to at least one track.
  • the feature information includes shape information and speed information, or the feature information includes length information, angle information, and speed information, or the feature information includes shape information, speed information, and acceleration information.
  • trajectory construction module 11 is specifically configured to:
  • P t- is the three-dimensional coordinate point at the moment before the current moment
  • the three-dimensional coordinate point at which the user gesture starts is the origin.
  • the device in this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 1 , and the implementation principle is similar, and details are not described herein again.
  • the user identity recognition apparatus constructs a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, performs stroke segmentation processing on the trajectory, extracts feature information for each stroke, and finally
  • identifies the user by comparing the extracted feature information of each stroke with the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared against the pre-stored feature information templates,
  • identification is not limited by the posture in which the user holds the terminal device. The user identity can still be recognized in various postures, such as standing or lying, which improves recognition accuracy and the user experience.
  • FIG. 8 is a schematic structural diagram of Embodiment 2 of the user identity identification apparatus of the present application. As shown in FIG. 8, the apparatus of this embodiment is based on the apparatus structure shown in FIG. 7. Further, the stroke segmentation processing module 12 includes a first determining unit 121 and a first normalization unit 122. The first determining unit 121 is configured to determine segmentation points in the trajectory according to the three-dimensional curvature, to obtain at least one stroke, and the first normalization unit 122 is configured to perform size normalization and rotation normalization on each stroke.
  • the first normalization unit 122 is specifically configured to:
  • the axis of each stroke is a line segment from its starting point to its ending point; the three coordinate axes of the initial absolute coordinate system are the same as those of the terminal device coordinate system at the start of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
  • the information extraction module 13 is specifically configured to:
  • the three-dimensional coordinate sequence of each stroke is extracted at equal time intervals to obtain the shape information of each stroke, and the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke at equal distance intervals is calculated to obtain the velocity information of each stroke.
  • the information extraction module 13 is specifically configured to:
  • extract the three-dimensional coordinate sequence of each stroke at equal time intervals to obtain the shape information of each stroke, calculate the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke to obtain the speed information of each stroke, and
  • calculate the acceleration information of each stroke according to the speed information of each stroke.
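A sketch of the shape/speed extraction performed by the information extraction module. Resampling at `n` equal time steps and differentiating on the resampled grid are simplifying assumptions (the patent differentiates at equal distance intervals):

```python
import numpy as np

def stroke_features(points, times, n=32):
    """Resample a stroke at n equal time steps for the shape feature, and
    take the first derivative of position w.r.t. time for the speed
    feature (on the same resampled grid, as a simplification)."""
    t = np.asarray(times, dtype=float)
    p = np.asarray(points, dtype=float)
    grid = np.linspace(t[0], t[-1], n)
    shape = np.column_stack([np.interp(grid, t, p[:, k])
                             for k in range(p.shape[1])])
    speed = np.diff(shape, axis=0) / np.diff(grid)[:, None]
    return shape, speed
```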
  • the device in this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 1 , and the implementation principle is similar, and details are not described herein again.
  • FIG. 9 is a schematic structural diagram of Embodiment 3 of the user identity identification apparatus of the present application.
  • the apparatus of this embodiment is based on the apparatus structure shown in FIG. 7. Further, the stroke segmentation processing module 12 includes a second
  • normalization unit 123 and a second determining unit 124. The second normalization unit 123 is configured to perform rotation normalization and size normalization on the trajectory, and
  • the second determining unit 124 is configured to determine segmentation points in the rotation-normalized and size-normalized trajectory according to the two-dimensional curvature, to obtain at least one stroke.
  • the second normalization unit 123 is specifically configured to:
  • u(i) is the Y-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory
  • v(i) is the Z-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory
  • N is the number of three-dimensional coordinate points constituting the trajectory
  • the information extraction module 13 is specifically configured to:
  • calculate the length of each stroke to obtain the length information of each stroke; calculate the angle between two consecutive strokes, the angle being the angle between the respective minimum-moment-of-inertia axes of the two strokes, to obtain the angle information between the two consecutive strokes; and
  • calculate the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke at equal distance intervals, to obtain the velocity information of each stroke.
  • the identification module 14 is specifically configured to:
  • for each set of pre-stored feature information templates, the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in the set is calculated, and according to the calculated DTW distance and
  • the preset threshold, it is determined whether to accept the user who initiated the user gesture.
  • the preset threshold is the average of the DTW distances between the feature information of each stroke corresponding to all the tracks in the pre-stored feature information template, plus one standard deviation.
  • each set of pre-stored feature information templates carries a user identifier.
  • the device in this embodiment may be used to implement the technical solution of the method embodiment shown in FIG. 1 , and the implementation principle is similar, and details are not described herein again.
  • the application also provides a user identity recognition device, including: a memory and a processor;
  • the memory is used to store program instructions
  • the processor is configured to execute the user identity identification method in the foregoing method embodiment.
  • the application further provides a readable storage medium storing execution instructions, and when at least one processor of the user identification device executes the execution instructions, the user identification device performs the user identification method in the foregoing method embodiments.
  • the application also provides a program product comprising an execution instruction stored in a readable storage medium.
  • At least one processor of the user identification device can read the execution instruction from a readable storage medium, and the at least one processor executes the execution instruction such that the user identification device implements the user identification method in the above method embodiment.
  • the aforementioned program can be stored in a computer readable storage medium.
  • the program, when executed, performs the steps of the foregoing method embodiments; and the foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Abstract

Disclosed are a user identification method and device. The method comprises: constructing a trajectory for a user gesture according to an angular rotational speed measured in real time by a gyroscope of a terminal device, wherein the trajectory consists of three-dimensional coordinates of an arm of a user at each moment (S101); performing writing stroke segmentation processing on the trajectory and extracting feature information from each segment of a writing stroke (S102); and performing user identification according to the extracted feature information from each of the segments of the writing stroke and feature information of each segment of a writing stroke corresponding to a trajectory in at least one group of pre-stored feature information templates, wherein each group of pre-stored feature information templates comprise feature information of each segment of a writing stroke corresponding to at least one trajectory (S103). As a result, user identification is not limited by a handheld terminal device with respect to gestures recognized, thus increasing the accuracy of identification and enhancing the user experience.

Description

User identification method and device

This application claims priority to Chinese Patent Application No. 201710128556.2, filed with the Chinese Patent Office on March 6, 2017 and entitled "User identification method and device", which is incorporated herein by reference in its entirety.
Technical Field

The present application relates to the field of communications technologies, and in particular, to a user identification method and apparatus.
Background

With the popularization of smart devices and the enrichment of their functions, most households own more and more smart devices, such as smart watches, smart bracelets, smart oximeters, and smart blood pressure monitors. These smart devices sometimes need to record sensitive data (such as the user's medical data, motion data, etc.) that the user does not want others to see, so the smart device needs to be able to recognize the identity of the current user. However, unlike entering a password on a mobile phone, most smart devices do not have a touch screen; in addition, to reduce cost, most smart devices do not add extra expensive sensors (such as fingerprint readers) for identification, and most smart devices do not have strong computing power.

In the related art, air signature (AirSig) is an identity authentication method based on gesture recognition. AirSig can serve as password input for user identification on smart devices that lack a touch screen, expensive sensors, and abundant computing resources: the user holds the smart device and performs an in-air gesture, and the smart device identifies the initiator of the gesture. AirSig relies on an accelerometer and a gyroscope installed in the smart device; when the user holds the smart device and performs an in-air gesture, the real-time data output by the accelerometer and the gyroscope is compared with pre-stored feature information template data to identify the user.

In the above related art, the real-time data output by the accelerometer and the gyroscope is directly compared with the pre-stored feature information template data to identify the user. Therefore, the posture in which the user holds the smart device when producing the real-time data must be consistent with the posture corresponding to the pre-stored feature information template to ensure recognition accuracy, so the method is limited by the posture in which the user holds the smart device.
Summary of the Invention

The present application provides a user identification method and apparatus, to solve the problem in the related art that the posture in which the user holds the smart device is restricted.

In a first aspect, the present application provides a user identification method, including:

constructing a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, where the trajectory is composed of the three-dimensional coordinate points of the user's arm at each moment; performing stroke segmentation processing on the trajectory, and extracting feature information for each stroke; and performing user identification according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates, where each set of pre-stored feature information templates includes the feature information of each stroke corresponding to at least one track.

A trajectory of the user gesture is constructed according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, stroke segmentation processing is performed on the trajectory, feature information is extracted for each stroke, and finally user identification is performed according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared with the pre-stored feature information templates to identify the user, the method is not limited by the posture in which the user holds the terminal device. The user identity can still be recognized in various postures, such as standing or lying, which improves recognition accuracy and the user experience.
In a possible design, the feature information includes shape information and speed information; or the feature information includes length information, angle information, and speed information; or the feature information includes shape information, speed information, and acceleration information.

In a possible design, the constructing a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope includes:
calculating a rotation matrix C t of the terminal device posture change from the previous moment to the current moment according to the rotational angular velocity at the current moment;

calculating the three-dimensional coordinate point P t of the user's arm at the current moment according to the formula P t =P t-*C t , to obtain the three-dimensional coordinate point of the user's arm at each moment;

where P t- is the three-dimensional coordinate point at the previous moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
In a possible design, the performing stroke segmentation processing on the trajectory includes:

determining segmentation points in the trajectory according to the three-dimensional curvature, to obtain at least one stroke; and

performing size normalization and rotation normalization on each stroke.

In a possible design, the performing size normalization on each stroke includes:

dividing the size of each stroke by the length of the trajectory;

and the performing rotation normalization on each stroke includes:

rotating the axis of each stroke to be parallel to the X axis of the initial absolute coordinate system, where the axis of each stroke is a line segment from its starting point to its ending point, the three coordinate axes of the initial absolute coordinate system are the same as those of the terminal device coordinate system at the start of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
In a possible design, the extracting feature information for each stroke includes:

extracting the three-dimensional coordinate sequence of each stroke at equal time intervals, to obtain the shape information of each stroke; and

calculating the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke at equal distance intervals, to obtain the velocity information of each stroke.

In a possible design, the extracting feature information for each stroke includes:

extracting the three-dimensional coordinate sequence of each stroke at equal time intervals, to obtain the shape information of each stroke;

calculating the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke at equal distance intervals, to obtain the velocity information of each stroke; and

calculating the acceleration information of each stroke according to the speed information of each stroke.
In a possible design, the performing stroke segmentation processing on the trajectory includes:

performing rotation normalization and size normalization on the trajectory; and

determining segmentation points in the rotation-normalized and size-normalized trajectory according to the two-dimensional curvature, to obtain at least one stroke.

In a possible design, the performing rotation normalization on the trajectory includes:

determining a two-dimensional coordinate sequence [u(i),v(i)], i=1...N according to the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory, v(i) is the Z-axis value of the projection, on the Y-Z plane, of the three-dimensional coordinate points constituting the trajectory, and N is the number of three-dimensional coordinate points constituting the trajectory; and

finding the rotation axis of minimum moment of inertia of the trajectory, and rotating the trajectory to a position where its minimum-moment-of-inertia rotation axis is parallel to the Y axis of the projection on the Y-Z plane;
calculating the center of gravity of the trajectory (formulas shown in the original as images PCTCN2018078139-appb-000001 and PCTCN2018078139-appb-000002);

calculating the covariance matrix of the trajectory (formula shown in the original as image PCTCN2018078139-appb-000003), where the entries are given by images PCTCN2018078139-appb-000004 and PCTCN2018078139-appb-000005;
multiplying all three-dimensional coordinate points constituting the trajectory by I;

and the performing size normalization on the trajectory includes:

calculating the width W and the height H of the trajectory, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
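The rotation and size normalization above can be sketched with PCA, since the axis of minimum moment of inertia of a 2-D point cloud is its principal direction of maximum spread. The eigen-decomposition route is an assumption standing in for the patent's covariance-matrix formulas (which are images in the original):

```python
import numpy as np

def rotation_normalize(u, v):
    """Rotate a 2-D trajectory so that its minimum-moment-of-inertia axis
    is vertical.  That axis is the eigenvector of the centred point
    cloud's scatter matrix with the largest eigenvalue (least inertia
    about the direction of maximum spread)."""
    pts = np.column_stack([u, v]).astype(float)
    pts -= pts.mean(axis=0)                   # move the centroid to the origin
    eigvals, eigvecs = np.linalg.eigh(pts.T @ pts)
    axis = eigvecs[:, -1]                     # direction of maximum spread
    angle = np.arctan2(axis[0], axis[1])      # rotate that direction onto Y
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T

def size_normalize(pts):
    """Divide u by the width and v by the height of the trajectory
    (guarding against zero extent for degenerate strokes)."""
    w = np.ptp(pts[:, 0]) or 1.0
    h = np.ptp(pts[:, 1]) or 1.0
    return np.column_stack([pts[:, 0] / w, pts[:, 1] / h])
```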
In a possible design, the extracting feature information for each stroke includes:

calculating the length of each stroke, to obtain the length information of each stroke;

calculating the angle between two consecutive strokes, where the angle is the angle between the respective minimum-moment-of-inertia axes of the two strokes, to obtain the angle information between the two consecutive strokes; and

calculating the first-order derivative sequence with respect to time of the three-dimensional positions of each stroke at equal distance intervals, to obtain the velocity information of each stroke.
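A sketch of the angle feature between two consecutive strokes, again using PCA for the minimum-inertia axis; treating the axes as unsigned directions (taking the absolute dot product) is an assumption:

```python
import numpy as np

def min_inertia_axis(pts):
    """Unit vector of a stroke's minimum-moment-of-inertia axis, i.e. the
    principal direction of the centred point cloud."""
    p = np.asarray(pts, dtype=float)
    p = p - p.mean(axis=0)
    _, vecs = np.linalg.eigh(p.T @ p)
    return vecs[:, -1]                        # largest-eigenvalue eigenvector

def stroke_angle(s1, s2):
    """Angle between the minimum-inertia axes of two consecutive strokes."""
    a, b = min_inertia_axis(s1), min_inertia_axis(s2)
    cos = abs(a @ b)                          # axes carry no sign
    return float(np.arccos(np.clip(cos, 0.0, 1.0)))
```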
In a possible design, the performing user identification according to the extracted feature information of each stroke and the feature information of each corresponding stroke in at least one set of pre-stored feature information templates includes:

for each set of pre-stored feature information templates, calculating the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in the set of pre-stored feature information templates; and

determining, according to the calculated DTW distance and a preset threshold, whether to accept the user who initiated the user gesture.

In a possible design, the preset threshold is the sum of the average of the DTW distances between the feature information of each stroke corresponding to all the tracks in a set of pre-stored feature information templates and the standard deviation of those DTW distances.

In a possible design, each set of pre-stored feature information templates carries a user identifier.
In a second aspect, the present application provides a user identification apparatus, including:

a trajectory construction module, configured to construct a trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, where the trajectory is composed of the three-dimensional coordinate points of the user's arm at each moment; a stroke segmentation processing module, configured to perform stroke segmentation processing on the trajectory; an information extraction module, configured to extract feature information for each stroke; and an identification module, configured to perform user identification according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates, where each set of pre-stored feature information templates includes the feature information of each stroke corresponding to at least one track.

A trajectory of the user gesture is constructed according to the rotational angular velocity measured in real time by the gyroscope in the terminal device, stroke segmentation processing is performed on the trajectory, feature information is extracted for each stroke, and finally user identification is performed according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user gesture trajectory and compared with the pre-stored feature information templates to identify the user, the method is not limited by the posture in which the user holds the terminal device. The user identity can still be recognized in various postures, such as standing or lying, which improves recognition accuracy and the user experience.
在一种可能的设计中,所述特征信息包括形状信息和速度信息,或者,所述特征信息包括长度信息、角度信息和速度信息,或者,所述特征信息包括形状信息、速度信息和加速度信息。In one possible design, the feature information includes shape information and speed information, or the feature information includes length information, angle information, and speed information, or the feature information includes shape information, speed information, and acceleration information. .
In a possible design, the trajectory construction module is specifically configured to:
calculate, from the rotational angular velocity at the current moment, the rotation matrix C_t describing the change in the terminal device's attitude from the previous moment to the current moment; and
calculate the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = P_(t-) · C_t, thereby obtaining the arm's three-dimensional coordinate point at every moment,
where P_(t-) is the three-dimensional coordinate point at the previous moment and the three-dimensional coordinate point at which the user gesture starts is the origin.
In a possible design, the stroke segmentation processing module includes:
a first determining unit, configured to determine the segmentation points in the trajectory according to three-dimensional curvature, obtaining at least one stroke; and
a first normalization unit, configured to perform size normalization and rotation normalization on each stroke.
In a possible design, the first normalization unit is specifically configured to:
divide the size of each stroke by the length of the trajectory; and
rotate the axis of each stroke to be parallel to the X axis of the initial absolute coordinate system, where the axis of a stroke is the line segment from its starting point to its end point, the three coordinate axes of the initial absolute coordinate system coincide with those of the terminal device coordinate system at the moment the user gesture starts, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the user's elbow.
In a possible design, the information extraction module is specifically configured to:
extract a sequence of three-dimensional coordinates at equal time intervals from each stroke to obtain the shape information of the stroke; and
calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along each stroke to obtain the speed information of the stroke.
In a possible design, the information extraction module is specifically configured to:
extract a sequence of three-dimensional coordinates at equal time intervals from each stroke to obtain the shape information of the stroke;
calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along each stroke to obtain the speed information of the stroke; and
calculate the acceleration information of each stroke from its speed information.
In a possible design, the stroke segmentation processing module includes:
a second normalization unit, configured to perform rotation normalization and size normalization on the trajectory; and
a second determining unit, configured to determine the segmentation points in the rotation-normalized and size-normalized trajectory according to two-dimensional curvature, obtaining at least one stroke.
In a possible design, the second normalization unit is specifically configured to:
determine a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of the projection of a constituent three-dimensional coordinate point onto the Y-Z plane, v(i) is the Z-axis value of that projection, and N is the number of three-dimensional coordinate points constituting the trajectory;
find the rotation axis of minimum moment of inertia of the trajectory, and rotate the trajectory to the position where this axis is parallel to the Y axis of the Y-Z plane;
calculate the center of gravity of the trajectory, ū = (1/N) Σ_{i=1..N} u(i) and v̄ = (1/N) Σ_{i=1..N} v(i);
calculate the covariance matrix of the trajectory,
M = [ μ20  μ11 ; μ11  μ02 ],
where μ20 = Σ_{i=1..N} (u(i) − ū)², μ02 = Σ_{i=1..N} (v(i) − v̄)², and μ11 = Σ_{i=1..N} (u(i) − ū)(v(i) − v̄), and obtain from its eigenvectors the rotation matrix I that aligns the trajectory's minimum-moment-of-inertia axis with the Y axis;
multiply all three-dimensional coordinate points constituting the trajectory by I; and
calculate the width W and height H of the trajectory, divide u(i) in the two-dimensional coordinate sequence by W, and divide v(i) in the two-dimensional coordinate sequence by H.
In a possible design, the information extraction module is specifically configured to:
calculate the length of each stroke to obtain its length information;
calculate the angle between each pair of consecutive strokes, the angle being that between the strokes' respective minimum-moment-of-inertia axes, to obtain the angle information between consecutive strokes; and
calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along each stroke to obtain the speed information of the stroke.
In a possible design, the identification module is specifically configured to:
for each set of pre-stored feature information templates, calculate the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke of a trajectory in that set; and
determine, from the calculated DTW distances and a preset threshold, whether to accept the user who initiated the gesture.
In a possible design, the preset threshold is the sum of the mean and the standard deviation of the DTW distances between the per-stroke feature information of all trajectories in a set of pre-stored feature information templates.
In a possible design, each set of pre-stored feature information templates carries a user identifier.
In a third aspect, the present application provides a user identification apparatus, including:
a memory and a processor;
the memory is configured to store program instructions; and
the processor is configured to perform the user identification method of the first aspect or of any possible design of the first aspect.
In a fourth aspect, the present application provides a readable storage medium storing execution instructions; when at least one processor of a user identification apparatus executes these instructions, the apparatus performs the user identification method of the first aspect or of any possible design of the first aspect.
In a fifth aspect, the present application provides a program product including execution instructions stored in a readable storage medium. At least one processor of a user identification apparatus can read the execution instructions from the readable storage medium and execute them, causing the apparatus to implement the user identification method of the first aspect or of any possible design of the first aspect.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of Embodiment 1 of the user identification method of the present application;
FIG. 2 is a schematic diagram of the training process;
FIG. 3 is a schematic diagram of the authentication and identification process;
FIG. 4 is a flowchart of Embodiment 2 of the user identification method of the present application;
FIG. 5 is a flowchart of Embodiment 3 of the user identification method of the present application;
FIG. 6 is a flowchart of Embodiment 4 of the user identification method of the present application;
FIG. 7 is a schematic structural diagram of Embodiment 1 of the user identification apparatus of the present application;
FIG. 8 is a schematic structural diagram of Embodiment 2 of the user identification apparatus of the present application;
FIG. 9 is a schematic structural diagram of Embodiment 3 of the user identification apparatus of the present application.
DETAILED DESCRIPTION
The user identification method and apparatus provided by this application are applicable to various terminal devices, such as mobile phones, smart watches, and smart blood pressure monitors, for identifying users. The terminal device does not need a touch screen: the user holds the device and makes an in-air gesture, and the device identifies the person who made it. This application draws the user's gesture trajectory with the help of the gyroscope installed in the terminal device, extracts feature information from that trajectory, and compares it with pre-stored feature templates to identify the user. Compared with related techniques that identify the user by directly comparing the real-time accelerometer and gyroscope output with pre-stored template data, this approach is not limited by the posture in which the user holds the device: the user can still be identified in various postures, such as standing or lying down. The technical solution of this application is described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of Embodiment 1 of the user identification method of this application. As shown in FIG. 1, the method of this embodiment may include:
S101. Construct the trajectory of the user gesture from the rotational angular velocity measured in real time by the gyroscope in the terminal device; the trajectory consists of the three-dimensional coordinate points of the user's arm at each moment.
Specifically, a gyroscope sensor is installed in the terminal device. The user holds the device, or wears a device containing a gyroscope on the wrist, and writes in the air as a form of input; the gyroscope captures the hand's trajectory. Even when performing the same gesture, every user's trajectory differs. According to the principles of human kinematics, when a person makes a gesture, the brain first divides it into segments, each approximating an arc. Because each person's nervous system controls the muscles differently, the speed within each segment follows a normal distribution from start to finish, so no segment is a perfect arc and each carries personal characteristics; handwriting rests on the same principle.
Constructing the trajectory of the user gesture involves two coordinate systems: the device coordinate system and the initial absolute coordinate system. The device coordinate system is the terminal device's own coordinate system, defined by the device itself. The three axes of the initial absolute coordinate system coincide with those of the device coordinate system at the moment the gesture starts, and its origin is the three-dimensional coordinate point of the user's elbow. To construct the trajectory, the human arm is first modelled. The arm consists of the upper arm, connecting elbow and shoulder, and the forearm, connecting wrist and elbow. When the user writes in the air, the motion is not very large, and the displacement comes mainly from the motion of the wrist relative to the elbow; this process can be treated as a rigid body (the forearm) rotating about a pivot (the elbow). The gyroscope records the rotational angular velocity, so the motion trajectory can be computed once the length of the rigid body is known.
The trajectory of the user gesture may be constructed from the angular velocity measured in real time by the gyroscope as follows:
calculate, from the rotational angular velocity at the current moment, the rotation matrix C_t describing the change in the device's attitude from the previous moment to the current moment, and compute the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = P_(t-) · C_t, obtaining the arm's three-dimensional coordinate point at every moment, where P_(t-) is the three-dimensional coordinate point at the previous moment and the three-dimensional coordinate point at which the user gesture starts is the origin.
Provided the motion about the user's elbow while gesturing is not very large, the forearm can be treated as a rigid body rotating about the elbow, i.e., a line segment rotating about the origin of the initial absolute coordinate system. The gyroscope provides the current angular velocity of the device's rotation, from which the rotation matrix describing the device's attitude change from the previous moment to the current moment can be computed; this matrix also describes the attitude change of the user's arm. The rotation matrix C_t at the current moment is

C_t = C_(t-) · (I + Ω_t · (t − t-)),

where Ω_t is the skew-symmetric matrix of the rotational angular velocity (ω_x, ω_y, ω_z) obtained by the gyroscope at the current moment t, C_(t-) is the rotation matrix at the previous moment t-, and I is the identity matrix.
Obtaining the three-dimensional coordinate point of the user's arm at every moment yields the trajectory of the user gesture.
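As an illustration only, the attitude integration described above can be sketched as follows. This is a minimal, noise-free sketch assuming gyroscope samples at a fixed interval dt and a unit forearm length; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of angular velocity w = (wx, wy, wz)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

def build_trajectory(omega_samples, dt, arm_length=1.0):
    """Integrate gyroscope angular-velocity samples into a 3-D wrist trajectory.

    The forearm is modelled as a rigid rod of length arm_length rotating about
    the elbow; the small-angle update C_t = C_(t-) (I + skew(w) * dt) accumulates
    the device attitude, and the wrist point is the rotated rod tip, shifted so
    that the gesture's starting point is the origin.
    """
    base = np.array([arm_length, 0.0, 0.0])  # assumed initial forearm direction
    C = np.eye(3)
    points = [base.copy()]
    for w in omega_samples:
        C = C @ (np.eye(3) + skew(np.asarray(w, dtype=float)) * dt)
        points.append(C @ base)
    points = np.array(points)
    return points - points[0]
```

With zero angular velocity the wrist never leaves the origin of the gesture; any rotation about the Z axis moves the tip within the X-Y plane, matching the rigid-rod model.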
S102. Perform stroke segmentation on the trajectory and extract feature information from each stroke.
Specifically, after the gesture trajectory is obtained: according to kinematic principles, when a person writes in the air the brain writes the trajectory segment by segment in order, each segment approximating an arc. Because each person's muscles and nerves respond differently, the shape actually drawn for each segment deviates from the intended arc, and this deviation follows a normal distribution. The trajectory therefore needs to be segmented, the principle of segmentation being that each stroke approximates one arc; distinguishing different people requires extracting the features hidden in each stroke. For a trajectory, a segmentation point is the junction of two strokes, i.e., the turning point between them; the turning points are found from the shape of the trajectory and the strokes are split there. Since the size and speed of the user's gesture cannot be exactly the same every time, in this embodiment each stroke is normalized after segmentation, and the personal features are extracted from the normalized strokes.
In one implementation, the stroke segmentation may be performed as follows:
determine the segmentation points in the trajectory according to three-dimensional curvature to obtain at least one stroke, and then apply size normalization and rotation normalization to each stroke.
Because the curvature at a stroke boundary is much larger than the curvature at the surrounding points, this embodiment sets a threshold to decide whether a point is a segmentation point. Size normalization of a stroke divides the size of each stroke by the length of the trajectory, so that the length of the whole gesture trajectory becomes unity. Rotation normalization of a stroke rotates the axis of each stroke to be parallel to the X axis of the initial absolute coordinate system, the axis of a stroke being the line segment from its starting point to its end point; the three axes of the initial absolute coordinate system coincide with those of the device coordinate system at the moment the gesture starts, and its origin is the three-dimensional coordinate point of the user's elbow.
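The curvature-threshold segmentation can be sketched as follows, using a simple discrete turning-angle estimate of curvature over the polyline; the threshold value and names are illustrative:

```python
import numpy as np

def discrete_curvature(points):
    """Curvature at the interior points of a 3-D polyline, estimated as the
    turning angle between consecutive segments divided by the local arc length."""
    d1 = points[1:-1] - points[:-2]
    d2 = points[2:] - points[1:-1]
    sin_a = np.linalg.norm(np.cross(d1, d2), axis=1)
    cos_a = np.sum(d1 * d2, axis=1)
    angle = np.arctan2(sin_a, cos_a)
    ds = 0.5 * (np.linalg.norm(d1, axis=1) + np.linalg.norm(d2, axis=1))
    return angle / np.maximum(ds, 1e-12)

def split_strokes(points, threshold):
    """Split the trajectory at interior points whose curvature exceeds threshold."""
    k = discrete_curvature(points)
    cuts = sorted({0, len(points) - 1} | {i + 1 for i in np.nonzero(k > threshold)[0]})
    return [points[a:b + 1] for a, b in zip(cuts[:-1], cuts[1:]) if b > a]
```

An L-shaped polyline, for instance, has one sharp turning point and is split into two strokes sharing that corner point.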
The feature information includes shape information and speed information; or length information, angle information, and speed information; or shape information, speed information, and acceleration information.
1. When the feature information includes shape information and speed information, the feature extraction for each stroke may be: extract the sequence of three-dimensional coordinates at equal time intervals within the stroke to obtain its shape information, and calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along the stroke to obtain its speed information. With P_t the current position, P_(t-) the position at the previous moment, t the current moment, and t- the previous moment, the speed is

v_t = (P_t − P_(t-)) / (t − t-).
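The first-derivative speed feature can be sketched as follows; this is a minimal sketch assuming each position sample carries a timestamp, with the resampling at equal distance intervals omitted for brevity:

```python
import numpy as np

def speed_features(points, times):
    """First derivative of 3-D position with respect to time,
    v_t = (P_t - P_(t-)) / (t - t-), one sample per interval."""
    dp = np.diff(points, axis=0)          # P_t - P_(t-) for each interval
    dt = np.diff(times)[:, None]          # t - t- for each interval
    return dp / dt
```

For a point moving along the X axis at a constant 2 units per second, every speed sample is (2, 0, 0).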
2. When the feature information includes shape information, speed information, and acceleration information, the feature extraction for each stroke may be: extract the sequence of three-dimensional coordinates at equal time intervals within the stroke to obtain its shape information; calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along the stroke to obtain its speed information; and calculate the acceleration information of the stroke from its speed information. With v_t the speed at the current moment and v_(t-) the speed at the previous moment, the acceleration is

a_t = (v_t − v_(t-)) / (t − t-).
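The acceleration feature is a further difference quotient of the speed samples; the sketch below assumes each speed sample is stamped at the end of its interval (an assumption about the timestamp alignment, not stated in the text):

```python
import numpy as np

def acceleration_features(speeds, times):
    """Second-order feature a_t = (v_t - v_(t-)) / (t - t-), given speed samples
    stamped at the end times of their intervals (times[1:])."""
    dv = np.diff(speeds, axis=0)
    dt = np.diff(times[1:])[:, None]
    return dv / dt
```

Under constant acceleration the speed samples grow linearly, and every acceleration sample equals the same constant.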
3. When the feature information includes length information, angle information, and speed information, the feature extraction for each stroke may be: calculate the length of each stroke to obtain its length information; calculate the angle between each pair of consecutive strokes, the angle being that between the strokes' respective minimum-moment-of-inertia axes, to obtain the angle information between consecutive strokes; and calculate the sequence of first derivatives with respect to time of the three-dimensional positions at equal distance intervals along each stroke to obtain its speed information.
In another implementation, the stroke segmentation may be performed as follows:
apply rotation normalization and size normalization to the trajectory, then determine the segmentation points in the normalized trajectory according to two-dimensional curvature to obtain at least one stroke.
The rotation normalization of the trajectory may proceed as follows:
First, determine the two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of a constituent point's projection onto the Y-Z plane, v(i) is the Z-axis value of that projection, and N is the number of constituent points. Because the plane in which people write in the air is in front of the body and parallel to the body plane, i.e., the Y-Z plane of the initial absolute coordinate system, the three-dimensional points are projected onto the Y-Z plane.
Next, find the rotation axis of minimum moment of inertia of the trajectory, and rotate the trajectory to the position where this axis is parallel to the Y axis of the Y-Z plane.
Calculate the center of gravity of the trajectory:

ū = (1/N) Σ_{i=1..N} u(i), v̄ = (1/N) Σ_{i=1..N} v(i).
Calculate the covariance matrix of the trajectory,

M = [ μ20  μ11 ; μ11  μ02 ],

where μ20 = Σ_{i=1..N} (u(i) − ū)², μ02 = Σ_{i=1..N} (v(i) − v̄)², and μ11 = Σ_{i=1..N} (u(i) − ū)(v(i) − v̄), and obtain from its eigenvectors the rotation matrix I that aligns the trajectory's minimum-moment-of-inertia axis with the Y axis.
Finally, multiply all the three-dimensional coordinate points constituting the trajectory by I.
The size normalization of the trajectory may be performed as follows:
calculate the width W and height H of the trajectory, divide u(i) in the two-dimensional coordinate sequence by W, and divide v(i) in the two-dimensional coordinate sequence by H.
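One standard way to realize this rotation-plus-scale normalization is through the eigenvectors of the point covariance matrix. Since the patent's own rotation matrix is published only as an equation image, the sketch below is an assumed equivalent that aligns the principal (minimum-moment-of-inertia) axis with the vertical axis and then scales width and height to unity:

```python
import numpy as np

def normalize_trajectory_2d(uv):
    """Rotate the projected 2-D trajectory so its minimum-moment-of-inertia
    axis is vertical, then scale its width and height to 1."""
    centered = uv - uv.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # The axis of minimum moment of inertia is the direction of maximum spread,
    # i.e. the eigenvector of the largest eigenvalue; rotate it onto the v axis.
    major = eigvecs[:, np.argmax(eigvals)]
    theta = np.arctan2(major[0], major[1])   # angle of the major axis from v
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rotated = centered @ R.T
    w = max(np.ptp(rotated[:, 0]), 1e-9)     # width  W (floored to avoid /0)
    h = max(np.ptp(rotated[:, 1]), 1e-9)     # height H
    return rotated / np.array([w, h])
```

Points lying on a 45° line, for example, end up on the vertical axis with unit height after normalization.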
On the normalized trajectory, strokes are split according to curvature, and for each stroke the stroke's feature information is extracted.
S103. Perform user identification according to the extracted feature information of each stroke and the feature information of each stroke of a trajectory in at least one set of pre-stored feature information templates, each set of pre-stored feature information templates including the per-stroke feature information of at least one trajectory.
Specifically, a pre-stored feature information template contains the per-stroke feature information of at least one trajectory. After the feature information of the gesture trajectory is obtained, user identification is performed. If the terminal device has not previously been trained with the user's gesture, it prompts the user to train several times (for example, 5 to 10 times); in the training state, the device extracts the feature information of each gesture trajectory by the extraction process described above and stores the feature information of the repeated gesture trajectories as the pre-stored feature information template. When training is complete, the device can enter the authentication state. One set of pre-stored feature information templates corresponds to one user; when there are multiple users, multiple sets of templates can be stored, each carrying a user identifier to distinguish the users.
The user identification based on the extracted per-stroke feature information and the corresponding per-stroke feature information in at least one set of pre-stored templates may be performed as follows:
for each set of pre-stored feature information templates, calculate the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke of a trajectory in that set, and determine from the calculated DTW distances and a preset threshold whether to accept the user who initiated the gesture.
Taking feature information consisting of shape information and speed information as an example: after obtaining the per-stroke feature information of the gesture trajectory, the DTW distances of the shape information and of the speed information between each stroke and the template are calculated separately. For each feature there are multiple strokes, so the DTW distance between the gesture trajectory and the corresponding stroke of the template is computed for every stroke, and the squared per-stroke DTW distances are summed to give the DTW distance of that feature between the two trajectories. Note that the two features, speed and shape, have different physical units, so when computing the overall DTW distance the two feature distances are normalized to the range 0-1 before being added. After the DTW distances between all gesture trajectories in a template set and the trajectory currently being identified are obtained, each gesture trajectory template decides whether to accept the current user according to the preset threshold; the user is accepted when more than half of the templates are within the preset threshold of the trajectory being authenticated. The preset threshold may be set, for example, to the mean of the DTW distances between the per-stroke feature information of all trajectories in the template set plus one standard deviation.
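The comparison step can be sketched with the classic DTW recursion and the majority vote described above. The squared-sum combination of per-stroke distances follows the text; the names, the single-feature layout, and the fixed threshold are illustrative simplifications:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two feature
    sequences (rows are time samples, columns are feature dimensions)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def accept_user(candidate_strokes, template_trajectories, threshold):
    """Accept the gesture when more than half of the stored template
    trajectories lie within `threshold` of the candidate, where a
    trajectory distance is the sum of squared per-stroke DTW distances."""
    votes = 0
    for tmpl_strokes in template_trajectories:
        per_stroke = [dtw_distance(c, t)
                      for c, t in zip(candidate_strokes, tmpl_strokes)]
        votes += float(np.sum(np.square(per_stroke))) < threshold
    return votes > len(template_trajectories) / 2
```

A candidate identical to every template is accepted at any positive threshold, while a clearly different gesture collects no votes.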
Taking feature information consisting of shape information, speed information, and acceleration information as an example: after obtaining the per-stroke feature information of the gesture trajectory, the DTW distances between the three features of each stroke and the corresponding three features of the template are calculated, and these DTW distances are used as the distance in a k-nearest-neighbor (KNN) algorithm; the specific current user is then decided by KNN with k = 3.
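The KNN decision can be sketched generically, with the trajectory distance (e.g., the DTW distance above) injected as the distance function; k = 3 follows the text, while the names are illustrative:

```python
from collections import Counter

def knn_identify(candidate, labelled_templates, dist_fn, k=3):
    """Identify the user as the majority label among the k templates
    nearest to the candidate under the supplied distance function.

    labelled_templates is a sequence of (user_label, template) pairs.
    """
    ranked = sorted(labelled_templates, key=lambda lt: dist_fn(candidate, lt[1]))
    top_labels = [label for label, _ in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

With templates for two users and a candidate close to one user's templates, the majority of the three nearest neighbors determines the identity.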
In this embodiment, the user can start the gesture recognition process of the terminal device actively or passively. After the device indicates that the recognition process has started, the user can begin the gesture while the device records the gyroscope data during the gesture. The user can customize the style of the gesture, such as a five-pointed star or a Chinese character. When the gesture is finished, recognition can be ended actively or passively; once the device detects the end of the gesture, it performs trajectory construction, personal feature extraction, and identification. If the user exists in the database, the identified user is finally shown on the display and allowed to log in to the device; if the user is not in the database and cannot be identified, login is refused.
本实施例提供的用户身份识别方法,通过根据终端设备中陀螺仪实时测得的旋转角速度构建用户手势的轨迹,对轨迹进行笔画分段处理,并对每一段笔画进行特征信息提取,最后根据提取的每一段笔画的特征信息与至少一组预存特征信息模板中一个轨迹对应的每一段笔画的特征信息进行用户身份识别,由于是在用户手势轨迹的基础上提取特征信息并与预存的特征信息模板比较来识别用户身份,不会受到用户手握终端设备的姿势的限制。用户和终端设备在各种姿势下,比如站、立、躺的情况下,依然可以识别用户身份。提高了识别的准确性和用户体验。The user identification method provided in this embodiment constructs a trajectory of the user gesture according to the rotational angular velocity measured by the gyroscope in the terminal device in real time, performs segmentation processing on the trajectory, and extracts feature information for each segment of the stroke, and finally extracts the feature information according to the extraction. The feature information of each stroke of each stroke is identified by the feature information of each stroke corresponding to one track in at least one set of pre-stored feature information templates, because the feature information is extracted on the basis of the user gesture track and the pre-stored feature information template is extracted. The comparison to identify the user's identity is not limited by the user's posture of holding the terminal device. The user and the terminal device can still recognize the user identity in various postures, such as standing, standing, and lying. Improved recognition accuracy and user experience.
下面采用几个具体的实施例,对图1所示方法实施例的技术方案进行详细说明。The technical solutions of the method embodiment shown in FIG. 1 are described in detail below by using several specific embodiments.
对于用户身份识别,分为两个过程,一是训练过程存储用户的手势轨迹的特征信息作为验证模板,二是针对接收到的手势开始识别。图2为训练流程示意图,如图2所示,包括:For user identification, there are two processes. One is that the training process stores the feature information of the user's gesture track as a verification template, and the other is to start recognition for the received gesture. Figure 2 is a schematic diagram of the training process, as shown in Figure 2, including:
S201. After determining that the user gesture has ended, obtain the rotational angular velocity measured by the gyroscope.
S202. Construct the trajectory of the user gesture from the rotational angular velocity measured by the gyroscope, segment the trajectory into strokes, and extract feature information for each stroke.
S203. Store the extracted feature information.
Generally, the terminal device prompts the user to perform the training several times (for example, 5 to 10 times) and stores the feature information of the resulting gesture trajectories as the pre-stored feature information templates.
FIG. 3 is a schematic diagram of the authentication and identification process, which, as shown in FIG. 3, includes:
S301. Detect whether the user has triggered the authentication process. If so, perform S302.
S302. After determining that the user gesture has ended, obtain the rotational angular velocity measured by the gyroscope.
S303. Construct the trajectory of the user gesture from the rotational angular velocity measured by the gyroscope, segment the trajectory into strokes, and extract feature information for each stroke.
S304. Perform user identification by comparing the extracted per-stroke feature information with the per-stroke feature information of a trajectory in the pre-stored feature information templates, and determine whether it matches a pre-stored template. If so, perform S305; otherwise, end.
S305. Authenticate the user successfully and execute the corresponding command.
The user identification method of the present application is described in detail below for three different usage scenarios.
The first is the single-user identification scenario. FIG. 4 is a flowchart of Embodiment 2 of the user identification method of the present application. As shown in FIG. 4, the method of this embodiment may include:
S401. Construct the trajectory of the user gesture from the rotational angular velocity measured in real time by the gyroscope in the terminal device; the trajectory consists of the three-dimensional coordinate points of the user's arm at each moment.
Specifically, the rotation matrix C_t describing the attitude change of the terminal device from the previous moment to the current moment is calculated from the rotational angular velocity at the current moment, and the three-dimensional coordinate point P_t of the user's arm at the current moment is calculated according to the formula P_t = P_{t-} · C_t, yielding the three-dimensional coordinate point of the user's arm at each moment. Here P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is taken as the origin.
Under the premise that the motion at the user's elbow is not very large while the gesture is made, the user's forearm can be regarded as a rigid body rotating around the elbow, i.e. a line segment rotating around the origin of the initial absolute coordinate system. The gyroscope provides the angular velocity of the current device rotation, from which the rotation matrix describing the device attitude change from the previous moment to the current moment can be calculated; this rotation matrix also describes the attitude change of the user's arm. The rotation matrix C_t at the current moment is obtained by the first-order update:
C_t = C_{t-} · (I + Ω_t · Δt)
where Ω_t is the skew-symmetric matrix formed from (ω_x, ω_y, ω_z), the rotational angular velocity at the current moment t obtained by the gyroscope, C_{t-} is the rotation matrix at the previous moment t-, I is the identity matrix, and Δt is the sampling interval from t- to t.
Obtaining the three-dimensional coordinate point of the user's arm at each moment yields the trajectory of the user gesture.
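As a rough illustration of this dead-reckoning step, the sketch below integrates gyroscope samples with the first-order attitude update and rotates a fixed forearm vector around the elbow origin. The forearm length, the sampling interval and the function names are illustrative assumptions, not part of the embodiment.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of an angular-velocity sample (rad/s)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def build_trajectory(omegas, dt, forearm=(0.3, 0.0, 0.0)):
    """Accumulate rotation matrices C_t and return the arm-tip points,
    shifted so that the gesture's starting point is the origin."""
    arm = np.asarray(forearm, dtype=float)
    C = np.eye(3)
    points = [np.zeros(3)]
    for w in omegas:
        C = C @ (np.eye(3) + skew(w) * dt)  # first-order update C_t = C_{t-}(I + Ω dt)
        points.append(C @ arm - arm)
    return np.array(points)
```

A real device would also have to compensate gyroscope bias and drift, which this sketch omits.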
S402. Determine the segmentation points in the trajectory according to the three-dimensional curvature, obtaining at least one stroke.
S403. Then perform size normalization and rotation normalization on each stroke.
Specifically, the size of each stroke is divided by the length of the whole trajectory, so that the trajectory length of the entire gesture becomes unit 1. The axis of each stroke is then rotated to be parallel to the X axis of the initial absolute coordinate system, the axis of a stroke being the line segment from its starting point to its end point.
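A minimal sketch of this per-stroke normalization, under the assumption that the whole-gesture length is known and that the axis alignment is performed with a Rodrigues rotation about the stroke's start point (the alignment method is not specified in the text, so this choice is illustrative):

```python
import numpy as np

def normalize_stroke(stroke, gesture_length):
    """Scale a stroke by the whole-gesture length, then rotate its
    start-to-end axis (about the start point) onto the X axis."""
    s = np.asarray(stroke, dtype=float) / gesture_length
    axis = s[-1] - s[0]
    norm = np.linalg.norm(axis)
    if norm == 0.0:
        return s
    a = axis / norm
    x = np.array([1.0, 0.0, 0.0])
    v = np.cross(a, x)                      # rotation axis
    c = float(np.dot(a, x))                 # cosine of rotation angle
    s2 = float(np.dot(v, v))
    if s2 < 1e-12:                          # already parallel to the X axis
        return s if c > 0 else s[0] + (s - s[0]) * np.array([-1.0, -1.0, 1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    R = np.eye(3) + K + K @ K * ((1.0 - c) / s2)  # Rodrigues' formula
    return (s - s[0]) @ R.T + s[0]
```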
S404. Extract the three-dimensional coordinate sequence at equal time intervals in each stroke to obtain the shape information of each stroke; calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke. Let P_t be the current position, P_{t-} the position at the previous moment, t the current moment and t- the previous moment; the velocity v_t is then calculated as:
v_t = (P_t - P_{t-}) / (t - t-)
S405. Perform user identification by comparing the extracted per-stroke feature information with the per-stroke feature information of each trajectory in the pre-stored feature information templates.
Specifically, the DTW distances of the shape information and of the velocity information between each stroke and the template are calculated separately. For each kind of feature information there are multiple strokes, so the DTW distance between each stroke of the gesture trajectory and the corresponding stroke of the template is calculated, and the squared DTW distances of all strokes are summed to give the DTW distance of that feature between the two trajectories. Note that the two kinds of feature information, velocity and shape, have different physical units; when computing the overall DTW distance, the DTW distances of the two features are therefore normalized to the range 0-1 before being added. After the DTW distances between all gesture trajectories in a set of templates and the gesture trajectory currently to be identified are obtained, each gesture trajectory template decides, according to a preset threshold, whether it accepts the current user: the user is accepted when more than half of the templates have a DTW distance to the trajectory being authenticated that is smaller than the preset threshold. The preset threshold is set, for example, to the mean DTW distance between the per-stroke feature information of all trajectories in the set of pre-stored feature information templates, plus one standard deviation.
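The acceptance rule described in this paragraph — per-template thresholds set to the mean intra-template DTW distance plus one standard deviation, combined with a majority vote — can be sketched as below. Using min-max scaling to bring the two features into the 0-1 range, and all function names, are assumptions for illustration:

```python
import numpy as np

def minmax(values):
    """Scale a set of DTW distances into the 0-1 range so that features
    with different physical units can be added."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def template_threshold(intra_template_dists):
    """Preset threshold: mean pairwise DTW distance among a user's
    stored trajectories plus one standard deviation."""
    d = np.asarray(intra_template_dists, dtype=float)
    return float(d.mean() + d.std())

def accept(query_dists, threshold):
    """Accept the user when more than half of the stored templates are
    closer to the query than the preset threshold."""
    votes = sum(d < threshold for d in query_dists)
    return votes > len(query_dists) / 2
```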
Next is the multi-user identification scenario, i.e. how to identify the user when the same device may be used by multiple users. FIG. 5 is a flowchart of Embodiment 3 of the user identification method of the present application. As shown in FIG. 5, the method of this embodiment may include:
S501. Construct the trajectory of the user gesture from the rotational angular velocity measured in real time by the gyroscope in the terminal device; the trajectory consists of the three-dimensional coordinate points of the user's arm at each moment.
The specific process is the same as S401 and is not repeated here.
S502. Perform rotation normalization and size normalization on the trajectory.
The rotation normalization of the trajectory may specifically be performed as follows:
First, a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, is determined from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of a three-dimensional trajectory point projected onto the Y-Z plane, v(i) is the Z-axis value of that projected point, and N is the number of three-dimensional points constituting the trajectory. Because the plane in which a person writes in the air is in front of the body and parallel to the body plane, i.e. the Y-Z plane of the initial absolute coordinate system, the three-dimensional coordinate points are projected onto the Y-Z coordinate plane.
Next, the rotation axis of minimum moment of inertia of the trajectory is found, and the trajectory is rotated to the position where its axis of minimum moment of inertia is parallel to the Y axis of the Y-Z plane.
The centroid of the trajectory is calculated as:
ū = (1/N) · Σ u(i),  v̄ = (1/N) · Σ v(i), with the sums taken over i = 1…N.
The covariance matrix of the trajectory, denoted I in the following, is then calculated; its entries are the averages over i = 1…N of (u(i) - ū)², (u(i) - ū)(v(i) - v̄) and (v(i) - v̄)², where ū and v̄ are the centroid coordinates computed above.
Finally, all three-dimensional coordinate points constituting the trajectory are multiplied by I.
The size normalization of the trajectory may specifically be performed as follows:
The width W and height H of the trajectory are calculated, u(i) in the two-dimensional coordinate sequence is divided by W, and v(i) in the two-dimensional coordinate sequence is divided by H.
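The whole Y-Z-plane normalization of S502 — projection, alignment of the minimum-inertia axis with the Y axis, then division by width and height — might look as follows. Computing the axis from the principal eigenvector of the covariance matrix is an assumption consistent with the formulas above (whose images were lost in the original), and the names are illustrative:

```python
import numpy as np

def normalize_trajectory(points3d):
    """Project onto the Y-Z plane, rotate the axis of minimum moment of
    inertia onto the Y axis, then scale by the bounding width and height."""
    p = np.asarray(points3d, dtype=float)
    u = p[:, 1] - p[:, 1].mean()            # centered Y-Z projection
    v = p[:, 2] - p[:, 2].mean()
    cov = np.array([[u @ u, u @ v],
                    [u @ v, v @ v]]) / len(u)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]      # direction of maximum spread
    theta = np.pi / 2 - np.arctan2(major[1], major[0])  # rotate onto Y axis
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta), np.cos(theta)]])
    uv = np.column_stack([u, v]) @ R.T
    w = np.ptp(uv[:, 0]) or 1.0             # guard degenerate trajectories
    h = np.ptp(uv[:, 1]) or 1.0
    return np.column_stack([uv[:, 0] / w, uv[:, 1] / h])
```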
S503. Determine the segmentation points in the rotation-normalized and size-normalized trajectory according to the two-dimensional curvature, obtaining at least one stroke.
S504. Calculate the length of each stroke to obtain its length information; calculate the angle between each pair of consecutive strokes, the angle being that between the two strokes' respective axes of minimum moment of inertia, to obtain the angle information between consecutive strokes; and calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke.
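A sketch of the length and inter-stroke-angle features of S504. Taking each stroke's minimum-inertia axis as its first principal direction (via SVD) is an illustrative assumption, as is treating the axis as unsigned when measuring the angle:

```python
import numpy as np

def stroke_length(stroke):
    """Sum of the segment lengths along a stroke's coordinate sequence."""
    d = np.diff(np.asarray(stroke, dtype=float), axis=0)
    return float(np.linalg.norm(d, axis=1).sum())

def principal_axis(stroke):
    """First principal direction of the stroke's points, used here as
    the axis of minimum moment of inertia."""
    p = np.asarray(stroke, dtype=float)
    _, _, vt = np.linalg.svd(p - p.mean(axis=0), full_matrices=False)
    return vt[0]

def inter_stroke_angle(s1, s2):
    """Angle between the axes of two consecutive strokes (axes are
    unsigned, so the angle is folded into [0, pi/2])."""
    cosang = abs(float(np.dot(principal_axis(s1), principal_axis(s2))))
    return float(np.arccos(np.clip(cosang, 0.0, 1.0)))
```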
S505. For each set of pre-stored feature information templates among the multiple sets, perform user identification by comparing the extracted per-stroke feature information with the per-stroke feature information of each trajectory in that set.
Specifically, the DTW distances to each user's templates are calculated from the length, angle and velocity information, and finally a majority vote over each user's templates determines whether the current user is one of the owners and, if so, which owner it is. In this embodiment, each set of pre-stored feature information templates may carry a user identifier to distinguish different users.
Next is the multi-user scenario on a shared terminal device. This is a weak-security scenario intended to distinguish different users by their gesture trajectories, for example the users of a blood pressure monitor shared within a family: when a user operates the monitor, the current user is determined and that user's history records are retrieved. FIG. 6 is a flowchart of Embodiment 4 of the user identification method of the present application. As shown in FIG. 6, the method of this embodiment may include:
S601. Construct the trajectory of the user gesture from the rotational angular velocity measured in real time by the gyroscope in the terminal device; the trajectory consists of the three-dimensional coordinate points of the user's arm at each moment.
The specific process is the same as S401 and is not repeated here.
S602. Determine the segmentation points in the trajectory according to the three-dimensional curvature, obtaining at least one stroke.
S603. Then perform size normalization and rotation normalization on each stroke.
Specifically, the size of each stroke is divided by the length of the whole trajectory, so that the trajectory length of the entire gesture becomes unit 1. The axis of each stroke is then rotated to be parallel to the X axis of the initial absolute coordinate system, the axis of a stroke being the line segment from its starting point to its end point.
S604. Extract the three-dimensional coordinate sequence at equal time intervals in each stroke to obtain the shape information of each stroke; calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke. Let P_t be the current position, P_{t-} the position at the previous moment, t the current moment and t- the previous moment; the velocity v_t is then calculated as:
v_t = (P_t - P_{t-}) / (t - t-)
The acceleration information of each stroke is then calculated from its velocity information. Let v_t be the velocity at the current moment and v_{t-} the velocity at the previous moment; the acceleration a_t is calculated as:
a_t = (v_t - v_{t-}) / (t - t-)
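The velocity and acceleration formulas of S604 reduce to first- and second-order difference quotients over the sampled positions; a minimal sketch, assuming a uniform sampling interval `dt`:

```python
import numpy as np

def velocity_acceleration(points, dt):
    """v_t = (P_t - P_{t-}) / dt and a_t = (v_t - v_{t-}) / dt,
    applied along a stroke's sampled 3-D positions."""
    p = np.asarray(points, dtype=float)
    v = (p[1:] - p[:-1]) / dt   # first-order difference: velocity
    a = (v[1:] - v[:-1]) / dt   # second-order difference: acceleration
    return v, a
```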
S605. For each set of pre-stored feature information templates among the multiple sets, perform user identification by comparing the extracted per-stroke feature information with the per-stroke feature information of each trajectory in that set.
Specifically, the DTW distance between the feature information of each stroke in each set of pre-stored feature information templates and the feature information of the corresponding stroke of the gesture currently to be identified is calculated, and this distance is used as the distance metric in a KNN classifier; the current user is determined by the KNN algorithm with k = 3.
FIG. 7 is a schematic structural diagram of Embodiment 1 of the user identification apparatus of the present application. The user identification apparatus may be implemented as part or all of a terminal device by software, hardware, or a combination of the two. As shown in FIG. 7, the apparatus of this embodiment may include a trajectory construction module 11, a stroke segmentation processing module 12, an information extraction module 13 and an identification module 14, wherein:
the trajectory construction module 11 is configured to construct the trajectory of the user gesture from the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory consisting of the three-dimensional coordinate points of the user's arm at each moment;
the stroke segmentation processing module 12 is configured to perform stroke segmentation processing on the trajectory;
the information extraction module 13 is configured to extract feature information for each stroke; and
the identification module 14 is configured to perform user identification according to the extracted per-stroke feature information and the per-stroke feature information of a trajectory in at least one set of pre-stored feature information templates, each set of pre-stored feature information templates including the per-stroke feature information of at least one trajectory.
The feature information includes shape information and velocity information; or length information, angle information and velocity information; or shape information, velocity information and acceleration information.
Further, the trajectory construction module 11 is specifically configured to:
calculate, from the rotational angular velocity at the current moment, the rotation matrix C_t describing the attitude change of the terminal device from the previous moment to the current moment, and calculate the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = P_{t-} · C_t, obtaining the three-dimensional coordinate point of the user's arm at each moment, where P_{t-} is the three-dimensional coordinate point at the moment preceding the current moment and the three-dimensional coordinate point at which the user gesture starts is the origin.
The apparatus of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 1; its implementation principle is similar and is not repeated here.
In the user identification apparatus provided by this embodiment, the trajectory of the user gesture is constructed from the rotational angular velocity measured in real time by the gyroscope in the terminal device, the trajectory is segmented into strokes, feature information is extracted for each stroke, and the user is identified by comparing the extracted per-stroke feature information with the per-stroke feature information of a trajectory in at least one set of pre-stored feature information templates. Because the feature information is extracted from the user's gesture trajectory and compared with pre-stored templates, identification is not constrained by the posture in which the user holds the terminal device: the user can still be identified in various postures, such as standing or lying down. This improves recognition accuracy and user experience.
FIG. 8 is a schematic structural diagram of Embodiment 2 of the user identification apparatus of the present application. As shown in FIG. 8, the apparatus of this embodiment is based on the apparatus structure shown in FIG. 7. Further, the stroke segmentation processing module 12 includes a first determining unit 121 and a first normalization unit 122; the first determining unit 121 is configured to determine the segmentation points in the trajectory according to the three-dimensional curvature to obtain at least one stroke, and the first normalization unit 122 is configured to perform size normalization and rotation normalization on each stroke.
Further, the first normalization unit 122 is specifically configured to:
divide the size of each stroke by the length of the trajectory, and rotate the axis of each stroke to be parallel to the X axis of the initial absolute coordinate system, the axis of a stroke being the line segment from its starting point to its end point; the axes of the initial absolute coordinate system coincide with the three coordinate axes of the terminal device coordinate system at the starting moment of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point at which the user's elbow is located.
Optionally, the information extraction module 13 is specifically configured to:
extract the three-dimensional coordinate sequence at equal time intervals in each stroke to obtain the shape information of each stroke, and calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke.
Optionally, the information extraction module 13 is specifically configured to:
extract the three-dimensional coordinate sequence at equal time intervals in each stroke to obtain the shape information of each stroke, calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke, and calculate the acceleration information of each stroke from its velocity information.
The apparatus of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 1; its implementation principle is similar and is not repeated here.
FIG. 9 is a schematic structural diagram of Embodiment 3 of the user identification apparatus of the present application. As shown in FIG. 9, the apparatus of this embodiment is based on the apparatus structure shown in FIG. 7. Further, the stroke segmentation processing module 12 includes a second normalization unit 123 and a second determining unit 124; the second normalization unit 123 is configured to perform rotation normalization and size normalization on the trajectory, and the second determining unit 124 is configured to determine the segmentation points in the rotation-normalized and size-normalized trajectory according to the two-dimensional curvature to obtain at least one stroke.
Further, the second normalization unit 123 is specifically configured to:
determine a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, where u(i) is the Y-axis value of a three-dimensional trajectory point projected onto the Y-Z plane, v(i) is the Z-axis value of that projected point, and N is the number of three-dimensional points constituting the trajectory;
find the rotation axis of minimum moment of inertia of the trajectory and rotate the trajectory to the position where its axis of minimum moment of inertia is parallel to the Y axis of the Y-Z plane;
calculate the centroid of the trajectory: ū = (1/N) · Σ u(i), v̄ = (1/N) · Σ v(i), with the sums taken over i = 1…N;
calculate the covariance matrix I of the trajectory, whose entries are the averages over i = 1…N of (u(i) - ū)², (u(i) - ū)(v(i) - v̄) and (v(i) - v̄)², where ū and v̄ are the centroid coordinates;
multiply all three-dimensional coordinate points constituting the trajectory by I; and
calculate the width W and height H of the trajectory, divide u(i) in the two-dimensional coordinate sequence by W, and divide v(i) in the two-dimensional coordinate sequence by H.
Optionally, the information extraction module 13 is specifically configured to:
calculate the length of each stroke to obtain its length information, calculate the angle between each pair of consecutive strokes (the angle between the two strokes' respective axes of minimum moment of inertia) to obtain the angle information between consecutive strokes, and calculate the first-order derivative sequence with respect to time of the three-dimensional positions at equal distance intervals in each stroke to obtain the velocity information of each stroke.
In the above embodiments, further, the identification module 14 is specifically configured to:
for each set of pre-stored feature information templates, calculate the dynamic time warping (DTW) distance between the extracted per-stroke feature information and the per-stroke feature information of a trajectory in that set, and determine, from the calculated DTW distance and a preset threshold, whether to accept the user who initiated the gesture, the preset threshold being the mean DTW distance between the per-stroke feature information of all trajectories in the set of pre-stored feature information templates plus one standard deviation.
Further, each set of pre-stored feature information templates carries a user identifier.
The apparatus of this embodiment may be used to execute the technical solution of the method embodiment shown in FIG. 1; its implementation principle is similar and is not repeated here.
本申请还提供一种用户身份识别装置,包括:存储器和处理器;The application also provides a user identity recognition device, including: a memory and a processor;
存储器用于存储程序指令;The memory is used to store program instructions;
处理器用于执行上述方法实施例中的用户身份识别方法。The processor is configured to execute the user identity identification method in the foregoing method embodiment.
本申请还提供一种可读存储介质,可读存储介质中存储有执行指令,当用户身份识别装置的至少一个处理器执行该执行指令时,用户身份识别装置执行上述方法实施例中的用户身份识别方法。The application further provides a readable storage medium, where the execution instruction is stored, and when the at least one processor of the user identification device executes the execution instruction, the user identification device performs the user identity in the foregoing method embodiment. recognition methods.
The present application further provides a program product, including execution instructions stored in a readable storage medium. At least one processor of the user identification device may read the execution instructions from the readable storage medium, and execution of the instructions by the at least one processor causes the user identification device to implement the user identification method in the foregoing method embodiments.
A person of ordinary skill in the art will appreciate that all or some of the steps of the foregoing method embodiments may be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The foregoing storage medium includes any medium capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.

Claims (26)

  1. A user identification method, comprising:
    constructing a trajectory of a user gesture according to a rotational angular velocity measured in real time by a gyroscope in a terminal device, the trajectory being composed of three-dimensional coordinate points of a user's arm at each moment;
    performing stroke segmentation on the trajectory, and extracting feature information for each stroke;
    performing user identification according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one trajectory in at least one set of pre-stored feature information templates, wherein each set of pre-stored feature information templates comprises feature information of each stroke corresponding to at least one trajectory.
  2. The method according to claim 1, wherein
    the feature information comprises shape information and velocity information; or
    the feature information comprises length information, angle information, and velocity information; or
    the feature information comprises shape information, velocity information, and acceleration information.
  3. The method according to claim 1, wherein constructing the trajectory of the user gesture according to the rotational angular velocity measured in real time by the gyroscope comprises:
    calculating, according to the rotational angular velocity at a current moment, a rotation matrix C_t describing the attitude change of the terminal device from the previous moment to the current moment;
    calculating the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = P_(t-) * C_t, to obtain the three-dimensional coordinate points of the user's arm at each moment;
    wherein P_(t-) is the three-dimensional coordinate point at the moment immediately preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
  4. The method according to claim 3, wherein performing stroke segmentation on the trajectory comprises:
    determining segmentation points in the trajectory according to three-dimensional curvature, to obtain at least one stroke;
    performing size normalization and rotation normalization on each stroke.
  5. The method according to claim 4, wherein performing size normalization on each stroke comprises:
    dividing the size of each stroke by the length of the trajectory;
    and wherein performing rotation normalization on each stroke comprises:
    rotating the axis of each stroke until it is parallel to the X axis of an initial absolute coordinate system, wherein the axis of each stroke is the line segment from its start point to its end point, the three coordinate axes of the initial absolute coordinate system are the same as those of the terminal device coordinate system at the start of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
  6. The method according to claim 4 or 5, wherein extracting feature information for each stroke comprises:
    extracting a sequence of three-dimensional coordinates sampled at equal time intervals along each stroke, to obtain shape information of each stroke;
    calculating the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke.
  7. The method according to claim 4 or 5, wherein extracting feature information for each stroke comprises:
    extracting a sequence of three-dimensional coordinates sampled at equal time intervals along each stroke, to obtain shape information of each stroke;
    calculating the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke;
    calculating acceleration information of each stroke according to the velocity information of each stroke.
  8. The method according to claim 3, wherein performing stroke segmentation on the trajectory comprises:
    performing rotation normalization and size normalization on the trajectory;
    determining segmentation points in the rotation-normalized and size-normalized trajectory according to two-dimensional curvature, to obtain at least one stroke.
  9. The method according to claim 8, wherein performing rotation normalization on the trajectory comprises:
    determining a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the value on the Y axis of the projection of the i-th three-dimensional coordinate point onto the Y-Z plane, v(i) is the value on the Z axis of that projection, and N is the number of three-dimensional coordinate points constituting the trajectory;
    finding the axis of minimum moment of inertia of the trajectory, and rotating the trajectory to the position where its minimum-moment-of-inertia axis is parallel to the Y axis of the projection onto the Y-Z plane;
    calculating the center of gravity of the trajectory
    Figure PCTCN2018078139-appb-100001
    calculating the covariance matrix of the trajectory
    Figure PCTCN2018078139-appb-100002
    wherein
    Figure PCTCN2018078139-appb-100003
    Figure PCTCN2018078139-appb-100004
    multiplying all three-dimensional coordinate points constituting the trajectory by I;
    and wherein performing size normalization on the trajectory comprises:
    calculating the width W and height H of the trajectory, dividing u(i) in the two-dimensional coordinate sequence by W, and dividing v(i) in the two-dimensional coordinate sequence by H.
  10. The method according to claim 6 or 7, wherein extracting feature information for each stroke comprises:
    calculating the length of each stroke, to obtain length information of each stroke;
    calculating the angle between each pair of consecutive strokes, wherein the angle is the angle between the minimum-moment-of-inertia axes of the two strokes, to obtain angle information between consecutive strokes;
    calculating the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke.
  11. The method according to claim 1, wherein performing user identification according to the extracted feature information of each stroke and the corresponding feature information of each stroke in the at least one set of pre-stored feature information templates comprises:
    for each set of pre-stored feature information templates, calculating the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke corresponding to one trajectory in the set of pre-stored feature information templates;
    determining, according to the calculated DTW distance and a preset threshold, whether to accept the user who initiated the user gesture.
  12. The method according to claim 11, wherein the preset threshold is the sum of the average of the DTW distances between the feature information of each stroke corresponding to all trajectories in a set of pre-stored feature information templates and the standard deviation of the DTW distances between the feature information of each stroke corresponding to all trajectories in the set of pre-stored feature information templates.
  13. The method according to claim 1, wherein each set of pre-stored feature information templates carries a user identifier.
  14. A user identification device, comprising:
    a trajectory construction module, configured to construct a trajectory of a user gesture according to a rotational angular velocity measured in real time by a gyroscope in a terminal device, the trajectory being composed of three-dimensional coordinate points of a user's arm at each moment;
    a stroke segmentation processing module, configured to perform stroke segmentation on the trajectory;
    an information extraction module, configured to extract feature information for each stroke;
    an identification module, configured to perform user identification according to the extracted feature information of each stroke and the feature information of each stroke corresponding to one trajectory in at least one set of pre-stored feature information templates, wherein each set of pre-stored feature information templates comprises feature information of each stroke corresponding to at least one trajectory.
  15. The device according to claim 14, wherein
    the feature information comprises shape information and velocity information; or
    the feature information comprises length information, angle information, and velocity information; or
    the feature information comprises shape information, velocity information, and acceleration information.
  16. The device according to claim 14, wherein the trajectory construction module is specifically configured to:
    calculate, according to the rotational angular velocity at a current moment, a rotation matrix C_t describing the attitude change of the terminal device from the previous moment to the current moment;
    calculate the three-dimensional coordinate point P_t of the user's arm at the current moment according to the formula P_t = P_(t-) * C_t, to obtain the three-dimensional coordinate points of the user's arm at each moment;
    wherein P_(t-) is the three-dimensional coordinate point at the moment immediately preceding the current moment, and the three-dimensional coordinate point at which the user gesture starts is the origin.
  17. The device according to claim 16, wherein the stroke segmentation processing module comprises:
    a first determining unit, configured to determine segmentation points in the trajectory according to three-dimensional curvature, to obtain at least one stroke;
    a first normalization unit, configured to perform size normalization and rotation normalization on each stroke.
  18. The device according to claim 17, wherein the first normalization unit is specifically configured to:
    divide the size of each stroke by the length of the trajectory; and
    rotate the axis of each stroke until it is parallel to the X axis of an initial absolute coordinate system, wherein the axis of each stroke is the line segment from its start point to its end point, the three coordinate axes of the initial absolute coordinate system are the same as those of the terminal device coordinate system at the start of the user gesture, and the origin of the initial absolute coordinate system is the three-dimensional coordinate point of the position of the user's elbow.
  19. The device according to claim 17 or 18, wherein the information extraction module is specifically configured to:
    extract a sequence of three-dimensional coordinates sampled at equal time intervals along each stroke, to obtain shape information of each stroke;
    calculate the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke.
  20. The device according to claim 17 or 18, wherein the information extraction module is specifically configured to:
    extract a sequence of three-dimensional coordinates sampled at equal time intervals along each stroke, to obtain shape information of each stroke;
    calculate the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke;
    calculate acceleration information of each stroke according to the velocity information of each stroke.
  21. The device according to claim 16, wherein the stroke segmentation processing module comprises:
    a second normalization unit, configured to perform rotation normalization and size normalization on the trajectory;
    a second determining unit, configured to determine segmentation points in the rotation-normalized and size-normalized trajectory according to two-dimensional curvature, to obtain at least one stroke.
  22. The device according to claim 21, wherein the second normalization unit is specifically configured to:
    determine a two-dimensional coordinate sequence [u(i), v(i)], i = 1…N, from the three-dimensional coordinate points constituting the trajectory, wherein u(i) is the value on the Y axis of the projection of the i-th three-dimensional coordinate point onto the Y-Z plane, v(i) is the value on the Z axis of that projection, and N is the number of three-dimensional coordinate points constituting the trajectory;
    find the axis of minimum moment of inertia of the trajectory, and rotate the trajectory to the position where its minimum-moment-of-inertia axis is parallel to the Y axis of the projection onto the Y-Z plane;
    calculate the center of gravity of the trajectory
    Figure PCTCN2018078139-appb-100005
    calculate the covariance matrix of the trajectory
    Figure PCTCN2018078139-appb-100006
    wherein
    Figure PCTCN2018078139-appb-100007
    Figure PCTCN2018078139-appb-100008
    multiply all three-dimensional coordinate points constituting the trajectory by I; and
    calculate the width W and height H of the trajectory, divide u(i) in the two-dimensional coordinate sequence by W, and divide v(i) in the two-dimensional coordinate sequence by H.
  23. The device according to claim 19 or 20, wherein the information extraction module is specifically configured to:
    calculate the length of each stroke, to obtain length information of each stroke;
    calculate the angle between each pair of consecutive strokes, wherein the angle is the angle between the minimum-moment-of-inertia axes of the two strokes, to obtain angle information between consecutive strokes;
    calculate the sequence of first-order derivatives with respect to time of the three-dimensional positions sampled at equal distance intervals along each stroke, to obtain velocity information of each stroke.
  24. The device according to claim 14, wherein the identification module is specifically configured to:
    for each set of pre-stored feature information templates, calculate the dynamic time warping (DTW) distance between the extracted feature information of each stroke and the feature information of each stroke corresponding to one trajectory in the set of pre-stored feature information templates;
    determine, according to the calculated DTW distance and a preset threshold, whether to accept the user who initiated the user gesture.
  25. The device according to claim 24, wherein the preset threshold is the sum of the average of the DTW distances between the feature information of each stroke corresponding to all trajectories in a set of pre-stored feature information templates and the standard deviation of the DTW distances between the feature information of each stroke corresponding to all trajectories in the set of pre-stored feature information templates.
  26. The device according to claim 14, wherein each set of pre-stored feature information templates carries a user identifier.
PCT/CN2018/078139 2017-03-06 2018-03-06 User identification method and device WO2018161893A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710128556.2 2017-03-06
CN201710128556.2A CN108536314A (en) 2017-03-06 2017-03-06 Method for identifying ID and device

Publications (1)

Publication Number Publication Date
WO2018161893A1 true WO2018161893A1 (en) 2018-09-13

Family

ID=63447267

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/078139 WO2018161893A1 (en) 2017-03-06 2018-03-06 User identification method and device

Country Status (2)

Country Link
CN (1) CN108536314A (en)
WO (1) WO2018161893A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117633276A (en) * 2024-01-25 2024-03-01 江苏欧帝电子科技有限公司 Writing track recording and broadcasting method, system and terminal

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409316B (en) * 2018-11-07 2022-04-01 极鱼(北京)科技有限公司 Over-the-air signature method and device
CN110490059A (en) * 2019-07-10 2019-11-22 广州幻境科技有限公司 A kind of gesture identification method, system and the device of wearable intelligent ring
CN110942042B (en) * 2019-12-02 2022-11-08 深圳棒棒帮科技有限公司 Three-dimensional handwritten signature authentication method, system, storage medium and equipment
CN112598424A (en) * 2020-12-29 2021-04-02 武汉天喻聚联科技有限公司 Authentication method and system based on action password
CN116630993A (en) * 2023-05-12 2023-08-22 北京竹桔科技有限公司 Identity information recording method, device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103034429A (en) * 2011-10-10 2013-04-10 北京千橡网景科技发展有限公司 Identity authentication method and device for touch screen
CN103295028A (en) * 2013-05-21 2013-09-11 深圳Tcl新技术有限公司 Gesture operation control method, gesture operation control device and intelligent display terminal
CN103631501A (en) * 2013-10-11 2014-03-12 金硕澳门离岸商业服务有限公司 Data transmission method based on gesture control
CN105431813A (en) * 2013-05-20 2016-03-23 微软技术许可有限责任公司 Attributing user action based on biometric identity

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0515796D0 (en) * 2005-07-30 2005-09-07 Mccarthy Peter A motion capture and identification device
CN102810008B (en) * 2012-05-16 2016-01-13 北京捷通华声语音技术有限公司 A kind of air input, method and input collecting device in the air
CN103257711B (en) * 2013-05-24 2016-01-20 河南科技大学 space gesture input method
CN103679213B (en) * 2013-12-13 2017-02-08 电子科技大学 3D gesture recognition method
CN103927532B (en) * 2014-04-08 2017-11-03 武汉汉德瑞庭科技有限公司 Person's handwriting method for registering based on stroke feature
CN103984416B (en) * 2014-06-10 2017-02-08 北京邮电大学 Gesture recognition method based on acceleration sensor
KR20160133305A (en) * 2015-05-12 2016-11-22 삼성전자주식회사 Gesture recognition method, a computing device and a control device
CN105630174A (en) * 2016-01-22 2016-06-01 上海斐讯数据通信技术有限公司 Intelligent terminal dialing system and method
CN105912910A (en) * 2016-04-21 2016-08-31 武汉理工大学 Cellphone sensing based online signature identity authentication method and system



Also Published As

Publication number Publication date
CN108536314A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
WO2018161893A1 (en) User identification method and device
Tian et al. KinWrite: Handwriting-Based Authentication Using Kinect.
US9965608B2 (en) Biometrics-based authentication method and apparatus
CN107679446B (en) Human face posture detection method, device and storage medium
KR102466995B1 (en) Method and device for authenticating user
US9355236B1 (en) System and method for biometric user authentication using 3D in-air hand gestures
Jain et al. Exploring orientation and accelerometer sensor data for personal authentication in smartphones using touchscreen gestures
US20150177842A1 (en) 3D Gesture Based User Authorization and Device Control Methods
TWI569176B (en) Method and system for identifying handwriting track
JP5919944B2 (en) Non-contact biometric authentication device
CN109829368B (en) Palm feature recognition method and device, computer equipment and storage medium
CN104850773B (en) Method for authenticating user identity for intelligent mobile terminal
CN105980973A (en) User-authentication gestures
CN107545252A (en) Face identification method and device in video based on multi-pose Face model
Lu et al. Gesture on: Enabling always-on touch gestures for fast mobile access from the device standby mode
US20160291704A1 (en) Disambiguation of styli by correlating acceleration on touch inputs
WO2014169837A1 (en) Method and system for online handwriting authentication on the basis of palm side surface information
Mendels et al. User identification for home entertainment based on free-air hand motion signatures
Li et al. Handwritten signature authentication using smartwatch motion sensors
Zhang et al. Fine-grained and real-time gesture recognition by using IMU sensors
US10180717B2 (en) Information processing device, information processing method, and program
Ciuffo et al. Smartwatch-based transcription biometrics
KR102629007B1 (en) Method and ststem for user authentication
Iyer et al. Generalized hand gesture recognition for wearable devices in IoT: Application and implementation challenges
Nugrahaningsih et al. Soft biometrics through hand gestures driven by visual stimuli

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18764482

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18764482

Country of ref document: EP

Kind code of ref document: A1