CN114272612A - Identity recognition method, identity recognition device, storage medium and terminal - Google Patents


Info

Publication number
CN114272612A
Authority
CN
China
Prior art keywords
data
target
touch operation
identity recognition
touch
Prior art date
Legal status
Pending
Application number
CN202111537298.6A
Other languages
Chinese (zh)
Inventor
杨明慧
王欢
邱若男
潘蓝兰
王安宇
Current Assignee
Hangzhou Douku Software Technology Co Ltd
Original Assignee
Hangzhou Douku Software Technology Co Ltd
Application filed by Hangzhou Douku Software Technology Co Ltd
Priority to CN202111537298.6A
Publication of CN114272612A

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an identity recognition method, an identity recognition device, a storage medium and a terminal. The method includes: collecting target touch operation data and target gesture action data of a target user while the user operates a terminal display screen; obtaining first identity recognition data based on the target touch operation data and a first identity recognition model; obtaining second identity recognition data based on the target gesture action data and a second identity recognition model; and finally performing identity recognition on the target user based on the first identity recognition data and the second identity recognition data. Identity recognition of the target user is thus achieved without acquiring the user's private information.

Description

Identity recognition method, identity recognition device, storage medium and terminal
Technical Field
The present application relates to the field of identity recognition technologies, and in particular, to an identity recognition method, an identity recognition device, a storage medium, and a terminal.
Background
Online games, as an important category of the online culture industry, have developed rapidly and continuously in recent years and have become an important form of entertainment. Both the size of the gaming industry and its number of users have increased year by year, and the users of some games even include many children.
However, as a form of entertainment in cyberspace, an online game provides rich social experiences that go somewhat beyond simple entertainment, and child users who become addicted to online games suffer effects that are very unfavorable to their healthy physical and mental development.
Disclosure of Invention
The embodiments of the application provide an identity recognition method, an identity recognition device, a storage medium and a terminal, by which the identity of a user can be confirmed from data collected while the user interacts with the terminal. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides an identity identification method, where the method includes:
acquiring target touch operation data and target gesture action data of a target user during operation on a terminal display screen;
obtaining first identity recognition data based on the target touch operation data and a first identity recognition model;
obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
and performing identity recognition on the target user based on the first identity recognition data and the second identity recognition data.
In a second aspect, an embodiment of the present application provides an identity recognition apparatus, where the identity recognition apparatus includes:
the target data acquisition module is used for acquiring target touch operation data and target gesture action data when a target user operates a terminal display screen;
the first data generation module is used for obtaining first identity recognition data based on the target touch operation data and a first identity recognition model;
the second data generation module is used for obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
and the identity confirmation module is used for carrying out identity recognition on the target user based on the first identity recognition data and the second identity recognition data.
In a third aspect, embodiments of the present application provide a storage medium having at least one instruction stored thereon, where the at least one instruction is adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides a terminal, which may include: a processor and a memory; wherein the memory stores at least one instruction adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
By adopting the identity recognition method provided by the embodiments of the application, target touch operation data and target gesture action data are collected while a target user operates the terminal display screen. First identity recognition data are then obtained based on the target touch operation data and a first identity recognition model, and second identity recognition data are obtained based on the target gesture action data and a second identity recognition model. Finally, identity recognition is performed on the target user based on the first and second identity recognition data. Identity recognition of the target user is thus achieved without acquiring the user's private information. Moreover, because two models separately predict identity from the touch operation data and the gesture action data, and the final recognition is based on the two resulting sets of identity recognition data, model complexity and training complexity are reduced while the accuracy of the recognition result is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of an identity recognition method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of an identity recognition method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of an identity recognition method according to an embodiment of the present application;
FIG. 4 provides an exemplary schematic diagram of data acquisition according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an example of a first touch feature vector and a first touch operation vector sequence according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating an example of data enhancement according to an embodiment of the present application;
FIG. 7 provides an exemplary diagram of data concatenation according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating exemplary gesture data preprocessing according to an embodiment of the present disclosure;
fig. 9 is a schematic flowchart of an identity recognition method according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an identification device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an identity confirmation module according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of an identification device according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a touch model training module according to an embodiment of the present disclosure;
fig. 14 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it is noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In the prior art, in order to realize anti-addiction protection for child users, game-related enterprises have proposed applying new technologies to anti-addiction work, for example, child identification based on face recognition and child identification based on voice recognition, which judge whether an operator is a child by acquiring the operator's personal information, such as face and voice. However, information such as face and voice belongs to sensitive personal information, and using it for identification may reveal the user's privacy.
On this basis, the application provides an identity recognition method, which includes: collecting target touch operation data and target gesture action data of a target user while the user operates a terminal display screen; obtaining first identity recognition data based on the target touch operation data and a first identity recognition model; obtaining second identity recognition data based on the target gesture action data and a second identity recognition model; and finally performing identity recognition on the target user based on the first identity recognition data and the second identity recognition data, so that the identity of the target user is recognized without acquiring the user's private information.
The following is a detailed description with reference to specific embodiments. The embodiments described below do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the application, as detailed in the appended claims. The flow diagrams depicted in the figures are merely exemplary, and the steps need not be performed in the order shown. For example, some steps may be executed in parallel because there is no strict logical order between them, so the actual execution order may vary.
Referring to fig. 1, a schematic flow chart of an identity recognition method is provided in an embodiment of the present application. In a specific embodiment, the identity recognition method is applied to an identity recognition device and a terminal configured with the identity recognition device. The specific process of this embodiment will be described below by taking a terminal as an example, and it is understood that the terminal applied in this embodiment may be a smart phone, a tablet computer, a desktop computer, a wearable device, and the like, which is not limited herein. As will be described in detail with respect to the flow shown in fig. 1, the identity recognition method may specifically include the following steps:
s101, acquiring target touch operation data and target gesture action data of a target user during operation on a terminal display screen;
in one embodiment, when a target user operates a terminal display screen, the terminal acquires target touch operation data corresponding to touch operation of the target user according to a preset frequency, and simultaneously acquires target gesture action data corresponding to a sensor of the terminal when the target user operates the terminal display screen according to the same preset frequency.
The target touch operation data refers to action data of a target user when the target user operates a terminal display screen, the target gesture action data refers to data of a sensor when the target user operates the terminal display screen, and the sensor can be an accelerometer sensor, a gyroscope sensor and the like in the terminal.
It is understood that when the user operates the display screen of the terminal, the movement of the hand of the user may cause the terminal to follow the movement of the hand of the user, and at this time, the values of the sensors in the terminal may change, such as the accelerometer sensor, the gyroscope sensor, and the like, and the values of the sensors correspond to the movement of the hand of the user at the same time.
In one embodiment, the target touch operation data collected at a certain time may include: a timestamp of the current moment, coordinates touched by the finger of the target user, a screen direction flag, a finger number, and the like.
In one embodiment, the target gesture motion data collected at a time may include: a timestamp of the current time, accelerometer data, gyroscope data, etc.
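To make the two data streams concrete, the following minimal sketch shows how one sample of each type could be represented; the field names and types are illustrative assumptions, not a schema defined by the application.

```python
from dataclasses import dataclass

@dataclass
class TouchSample:
    """One collected target touch operation datum (fields assumed)."""
    timestamp_ms: int                      # timestamp of the current moment
    x: float                               # coordinates touched by the finger
    y: float
    orientation: int                       # screen direction flag
    finger_id: int                         # finger number (multi-touch)

@dataclass
class GestureSample:
    """One collected target gesture action datum (fields assumed)."""
    timestamp_ms: int
    accel: tuple[float, float, float]      # accelerometer x / y / z
    gyro: tuple[float, float, float]       # gyroscope x / y / z
```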
It should be understood that, when a target user performs a series of touch operations on a terminal, those operations carry operation habit characteristics unique to that user, because every user's usage habits differ. The target touch operation data and the target gesture action data are the real-time data corresponding to the target user's touch operations, and therefore contain the target user's unique operation habit characteristics.
S102, obtaining first identity recognition data based on the target touch operation data and a first identity recognition model;
the first identity recognition model is a pre-trained neural network model based on a full-connection module, and can output first identity recognition data for predicting the user identity of the target user according to the target touch operation data.
In one embodiment, the first identity recognition data may be a probability that the target user is a user of a particular identity (for example, a user of a given age group).
In one embodiment, the target touch operation data is preprocessed and converted into a format suitable for model input; the converted data is then input into the first identity recognition model, which, after computing on the input, outputs first identity recognition data for predicting the target user's identity.
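The application only specifies that the first model is a pre-trained neural network based on a full-connection module that maps the preprocessed touch data to identity recognition data. As one possible reading, a minimal PyTorch sketch is shown below; the framework, the layer widths and the sigmoid output are all assumptions.

```python
import torch
import torch.nn as nn

class FirstIdentityModel(nn.Module):
    """Fully-connected sketch; layer sizes are illustrative assumptions."""
    def __init__(self, num_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                  # 2D feature vector sequence -> 1D
            nn.Linear(num_features, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),                  # probability-style output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```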
S103, obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
the second identity recognition model may be a pre-trained convolutional neural network model based on a full-connection module and a convolutional module, and the second identity recognition model may output second identity recognition data for predicting the identity of the user according to the target gesture motion data.
In one embodiment, the second identity recognition data may be a probability that the target user is a user of a particular identity (for example, a user of a given age group).
In one embodiment, the target gesture motion data is preprocessed and converted into a format suitable for input to the second identity recognition model; the converted data is then input into the second identity recognition model, which, after computing on the input, outputs second identity recognition data for predicting the target user's identity.
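Likewise, the second model is described only as a convolutional neural network combining a convolution module and a full-connection module. A minimal sketch, assuming six sensor channels (accelerometer and gyroscope, three axes each) convolved over the time axis, could be:

```python
import torch
import torch.nn as nn

class SecondIdentityModel(nn.Module):
    """Convolution + fully-connected sketch; shapes are assumptions."""
    def __init__(self, in_channels: int = 6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),       # collapse the time axis
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- one continuous time window per item
        return self.fc(self.conv(x))
```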
S104, identifying the target user based on the first identification data and the second identification data.
In one embodiment, the first identification data and the second identification data are weighted and summed according to a preset weight, and the identity of the user is finally determined according to the weighted and summed result.
In the embodiment of the application, when a target user performs touch operation on a display screen, data correspondingly generated by a terminal is divided into target touch operation data and target gesture action data. The target touch operation data and the target gesture action data are data generated when a target user performs touch operation on the display screen, and both the data have the operation habit characteristics of the target user. Therefore, in one or more embodiments of the present application, the identity of the target user may be predicted by combining the first identification data corresponding to the target touch operation data and the second identification data corresponding to the target gesture action data, or may be predicted by using one of the target touch operation data and the target gesture action data alone.
Optionally, in an embodiment, target touch operation data of a target user during operation on a display screen of the terminal is collected, first identity identification data is generated based on the target touch operation data and the first identity identification model, and the identity of the target user is determined based on the first identity identification data.
Optionally, in an embodiment, the target gesture action data of the target user when operating the terminal display screen is collected, second identification data is generated based on the target gesture action data and the second identification model, and the identity of the target user is determined based on the second identification data.
By adopting the identity recognition method provided by the embodiments of the application, the target touch operation data of a target user operating the terminal display screen and the target gesture action data corresponding to the touch operations are collected. First identity recognition data for predicting the user's identity are then generated using the target touch operation data and the pre-trained first identity recognition model, and second identity recognition data for predicting the user's identity are generated using the target gesture action data and the pre-trained second identity recognition model. The identity of the target user is finally determined by combining the first and second identity recognition data. Identity recognition of the target user is thus achieved without acquiring the user's private information. Moreover, because two models separately predict identity from the touch operation data and the gesture action data, and the final recognition is based on the two resulting sets of identity recognition data, model complexity and training complexity are reduced while the accuracy of the recognition result is ensured.
Referring to fig. 2, another embodiment of the present application provides a flowchart illustrating an identity recognition method. As shown in fig. 2, the identity recognition method may include the following steps:
the method comprises the steps of S201, obtaining sample touch operation data and sample gesture action data, wherein the sample touch operation data comprises first sample touch operation data of a first age group user when operating a terminal display screen and second sample touch operation data of a second age group user when operating the terminal display screen, and the sample gesture action data comprises first sample gesture action data of the first age group user when operating the terminal display screen and second sample gesture action data of the second age group user when operating the terminal display screen;
in one embodiment, the user group is divided into two user groups of different age groups according to needs, including users of a first age group and users of a second age group, first sample touch operation data and first sample gesture action data corresponding to touch operation of a certain number of users of the first age group during operation of the terminal display screen are collected, and second sample touch operation data and second sample gesture action data corresponding to touch operation of a certain number of users of the second age group during operation of the terminal display screen are collected. The first sample touch operation data and the second sample touch operation data may include: a timestamp of the current moment, coordinates touched by the user's finger, a screen direction flag, a finger number, the user's age, the user's ID, and the like; the first sample gesture motion data and the second sample gesture motion data may include: a timestamp of the current time, accelerometer data, gyroscope data, etc.
The first sample touch operation data and the second sample touch operation data are used for training a first identity recognition model, and the first sample gesture action data and the second sample gesture action data are used for training a second identity recognition model.
In one embodiment, a first preset number of first age group users are selected, first sample touch operation data of touch operation of the first age group users on a display screen of the terminal and first sample gesture action data corresponding to the touch operation are collected respectively within a certain time, then a second preset number of second age group users are selected, second sample touch operation data of touch operation of the second age group users on the display screen of the terminal and second sample gesture action data corresponding to the touch operation are collected respectively within a certain time.
In one embodiment, the first age group users may be underage users aged 0-18 and the second age group users may be adult users aged 18-70; alternatively, the first age group users may be child users aged 0-14 and the second age group users non-child users aged 14-70. The age division between the first age group and the second age group can be set as required, and the embodiments of the application do not specifically limit it.
Optionally, the user group of each age group has an operation habit feature that is unique to the age group and different from the operation habit features of users of other age groups, so that the embodiment of the application is not limited to the user group division of the users of the first age group and the users of the second age group, and may further include users of the third age group and users of the fourth age group, and the like. For example, the first age group of users are children users in 0-14 years of age, the second age group of users are teenagers users in 14-20 years of age, the third age group of users are young users in 20-35 years of age, and the fourth age group of users are middle-aged users in 35-50 years of age.
S202, training a first identity recognition model based on the first sample touch operation data and the second sample touch operation data;
the first sample touch operation data is touch operation data correspondingly generated by users of a first age group when the display screen performs touch operation, and the second sample touch operation data is touch operation data correspondingly generated by users of a second age group when the display screen performs touch operation.
In one embodiment, the first sample touch operation data and the second sample touch operation data are used for training the first identity recognition model, so that the first identity recognition model can accurately judge whether the user is a user of a first age group or a user of a second age group according to the touch operation data of any user. The first identity recognition model can be a neural network model based on a full-connection module, and can also be other neural network models based on deep learning.
It is to be understood that the first sample touch operation data and the second sample touch operation data cannot be directly input to the neural network model as raw data, and therefore, in the process of training the first identity recognition model based on the first sample touch operation data and the second sample touch operation data, a data processing process is further included before the first sample touch operation data and the second sample touch operation data are input to the model.
Illustratively, on the basis of fig. 2, as shown in fig. 3, step S202 may include step S2021, step S2022, and step S2023.
S2021, extracting features of the first sample touch operation data according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to the first age group users in each preset dimension, and extracting features of the second sample touch operation data according to a plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in each preset dimension;
In an embodiment, the first sample touch operation data and the second sample touch operation data are first subjected to data preprocessing, yielding a first touch operation vector sequence for each first age group user and a second touch operation vector sequence for each second age group user. Feature extraction is then performed on the first touch operation vectors in each first touch operation vector sequence according to a plurality of preset dimensions, giving the first touch feature vectors of each first age group user in each preset dimension, and finally the same feature extraction is performed on the second touch operation vectors in each second touch operation vector sequence, giving the second touch feature vectors of each second age group user in each preset dimension.
It should be understood that, in the embodiments of the present application, the first sample touch operation data and the second sample touch operation data are collected at a preset frequency. The information collected each time includes a timestamp of the current moment, the coordinates touched by the user's finger, a screen direction flag, a finger number, the user's age, the user's ID, and so on. Since a single collection cannot reflect the user's operation habit characteristics, consecutive touch operation data collected multiple times are integrated into one touch operation vector in timestamp order. For example, suppose that in a sample data acquisition period a user performs 4 touch operations with a single finger, and 20 valid touch operation data are collected at the preset frequency; all 20 collected touch operation data carry corresponding timestamp marks and can be divided according to those marks into 4 groups of continuous touch operation data, where each group corresponds to 1 touch operation. Referring to fig. 4, an exemplary schematic diagram of collecting touch operation data is provided for an embodiment of the present application.
As shown in fig. 4, fig. 4 shows a data acquisition situation when a user performs one touch operation, and as shown in the figure, in one sliding touch operation, five touch operation data are acquired according to a preset frequency, and the five touch operation data are integrated according to a sequence of timestamps, so that a touch operation vector corresponding to the touch operation as shown in the figure can be obtained.
In addition, in the sample data acquisition process, each user executes multiple touch operations, and all touch operation data (including the first sample touch operation data and the second sample touch operation data) corresponding to each user are respectively subjected to integration preprocessing, so that a touch operation vector sequence containing a plurality of touch operation vectors corresponding to each user is obtained. In other words, the first sample touch operation data and the second sample touch operation data are respectively subjected to data preprocessing, so as to obtain a first touch operation vector sequence corresponding to each first age group user and a second touch operation vector sequence corresponding to each second age group user.
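A minimal sketch of this grouping step is given below; the gap threshold that separates two touch operations is an assumed parameter, since the application only states that the split follows the timestamp marks.

```python
def group_touch_operations(samples, max_gap_ms=100):
    """Split time-ordered touch samples into one group per touch operation.

    Any object with a ``timestamp_ms`` attribute works; a gap larger than
    ``max_gap_ms`` between consecutive samples is treated as the boundary
    between two operations (the threshold itself is an assumption).
    """
    samples = sorted(samples, key=lambda s: s.timestamp_ms)
    groups, current = [], [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        if cur.timestamp_ms - prev.timestamp_ms > max_gap_ms:
            groups.append(current)          # close the finished operation
            current = []
        current.append(cur)
    groups.append(current)
    return groups                           # each group -> one operation vector
```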
And then extracting the characteristics of each touch operation vector in the touch operation vector sequence from a plurality of preset dimensions to obtain action characteristic vectors corresponding to the preset dimensions. Specifically, feature extraction processing is performed on the first touch operation vectors in each first touch operation vector sequence according to a plurality of preset dimensions, so as to obtain first touch feature vectors corresponding to the first age group users in each preset dimension, and finally, feature extraction processing is performed on the second touch operation vectors in each second touch operation vector sequence according to a plurality of preset dimensions, so as to obtain second touch feature vectors corresponding to the second age group users in each preset dimension. Wherein the preset dimensions include, but are not limited to: a duration of time; the relative distance of the coordinates in the entire sliding motion; the range of coordinates in the entire sliding motion; the maximum and minimum values of the coordinates in the entire sliding motion; vector distance; a track distance; the direction of the trajectory; the mean value of the contact area; variance of contact area; vector velocity; a trajectory speed; an accelerometer mean value; an accelerometer variance; the mean value of the pitch angles; mean value of yaw angle; the mean value of the roll angle; variance of pitch angle; variance of yaw angle; variance of roll angle; minimum value of pitch angle; minimum value of yaw angle; the minimum value of the roll angle; maximum value of pitch angle; maximum value of yaw angle; the maximum value of the roll angle; root mean square of pitch angle; root mean square of yaw angle; root mean square of the roll angle; average deviation of pitch angle and average deviation of yaw angle; average deviation of the roll angle, etc. Each touch operation vector in the touch operation vector sequence corresponding to each user can extract an action feature vector corresponding to the dimension.
Referring to fig. 5, an exemplary diagram of a first touch feature vector and a first touch operation vector sequence is provided for an embodiment of the present application. As shown in fig. 5, the first touch operation vector sequence corresponding to a user of the first age group includes first touch operation vector 1, first touch operation vector 2, first touch operation vector 3, and so on, where feature extraction is performed on first touch operation vector 1 to obtain m first touch feature vectors corresponding to m preset dimensions.
Further, the relationship between the second touch characteristic vector, the second touch operation vector sequence and the second age group user can be similar to fig. 5, which is not repeated herein.
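As an illustration of this feature extraction, the sketch below computes a handful of the preset dimensions listed above (duration, coordinate range, vector distance, track distance, track speed) for one touch operation; it covers only a small sample of the full dimension list.

```python
import numpy as np

def touch_features(op):
    """Extract a few preset dimensions from one group of touch samples."""
    t = np.array([s.timestamp_ms for s in op], dtype=float)
    xy = np.array([(s.x, s.y) for s in op], dtype=float)
    duration = t[-1] - t[0]
    vector_dist = np.linalg.norm(xy[-1] - xy[0])            # start to end
    track_dist = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()
    coord_range = xy.max(axis=0) - xy.min(axis=0)           # x and y range
    track_speed = track_dist / duration if duration > 0 else 0.0
    return np.array([duration, vector_dist, track_dist,
                     coord_range[0], coord_range[1], track_speed])
```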
In one embodiment, after the feature vectors are extracted, feature vectors with significantly abnormal data are removed.
Optionally, in an embodiment, after the feature vectors are extracted, data equalization processing is performed on the first touch feature vector of each user in the first age group and the second touch feature vector of each user in the second age group.
It should be understood that, in the process of collecting sample data, the numbers of first age group users and second age group users used for collection may differ, and the numbers of touch operations performed by the first age group users and the second age group users within the preset collection period may also differ. Therefore, data equalization processing is performed on the first touch characteristic vectors of the first age group users and the second touch characteristic vectors of the second age group users, so that the amounts of sample data corresponding to the two age groups are balanced.
Optionally, data equalization processing is performed on each first touch feature vector and each second touch feature vector based on the Synthetic Minority Oversampling Technique (SMOTE). Optionally, other data equalization algorithms may also be used, which is not limited in the embodiments of the present application.
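A minimal sketch of the SMOTE balancing step, using the third-party imbalanced-learn package, could look as follows; the feature matrix and label layout are assumptions for illustration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE   # pip install imbalanced-learn

# X: stacked touch feature vectors; y: 0 = first age group, 1 = second
X = np.random.rand(200, 6)
y = np.array([0] * 150 + [1] * 50)         # deliberately imbalanced

X_balanced, y_balanced = SMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y_balanced))             # both classes now equal in size
```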
Optionally, in an embodiment, after the data equalization, data enhancement processing is performed on the first touch feature vector corresponding to the first age group of users and the second touch feature vector corresponding to the second age group of users according to a preset data enhancement magnification, so as to obtain the first touch feature vector corresponding to the first age group of users and the second touch feature vector corresponding to the second age group of users after the data enhancement.
Specifically, the data enhancement processing is performed on a first touch characteristic vector corresponding to a first age group user and a second touch characteristic vector corresponding to a second age group user according to a preset data enhancement magnification, and the data enhancement processing includes: calculating the variance of each action characteristic vector under each preset dimension of each user, constructing a Gaussian distribution with the mean value of 0 based on the calculated variances, collecting numerical values in a specific interval of the Gaussian distribution as newly generated noise under the preset dimension, and superposing the newly generated noise to each corresponding preset dimension to synthesize enhanced data.
Optionally, the preset data enhancement magnification and the specific interval may be freely adjusted according to different tasks.
Referring to fig. 6, an exemplary diagram of data enhancement is provided according to an embodiment of the present application. As shown in fig. 6, a first touch operation vector sequence corresponding to a user of the first age group includes first touch operation vector 1, first touch operation vector 2, and so on, and from each first touch operation vector a first touch feature vector corresponding to a preset dimension can be extracted. New noise is sampled from a Gaussian distribution whose mean is 0 and whose variance is the variance of the first touch feature vectors in that preset dimension, and the sampled noise is superimposed on the corresponding preset dimension to synthesize enhanced data.
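The noise-based enhancement just described can be sketched as below; the clipping interval and the enhancement magnification are the freely adjustable parameters mentioned above, with the concrete defaults assumed here.

```python
import numpy as np

def augment(features, magnification=2, clip=2.0, rng=None):
    """Synthesize extra feature vectors with zero-mean Gaussian noise.

    Per preset dimension, the noise variance equals the variance of that
    dimension across ``features``; sampled noise is kept within ``clip``
    standard deviations (interval and magnification are assumptions).
    """
    rng = rng if rng is not None else np.random.default_rng()
    std = features.std(axis=0)                       # per-dimension std
    synthetic = []
    for _ in range(magnification):
        noise = rng.normal(0.0, std, size=features.shape)
        noise = np.clip(noise, -clip * std, clip * std)
        synthetic.append(features + noise)
    return np.concatenate([features] + synthetic, axis=0)
```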
S2022, performing array cascade on the first touch characteristic vectors corresponding to the first age group users to obtain first touch characteristic vector sequences corresponding to the first age group users, and performing array cascade on the second touch characteristic vectors corresponding to the second age group users to obtain second touch characteristic vector sequences corresponding to the second age group users;
In one embodiment, array cascading is performed on the first touch characteristic vectors of the first age group users in each preset dimension to obtain a two-dimensional first touch characteristic vector sequence, wherein each first touch characteristic vector subjected to array cascading is extracted from the same first touch operation vector; and array cascading is performed on the second touch characteristic vectors of the second age group users in each preset dimension to obtain a two-dimensional second touch characteristic vector sequence, wherein each second touch characteristic vector subjected to array cascading is extracted from the same second touch operation vector.
Specifically, please refer to fig. 7, which provides an exemplary diagram of data concatenation according to an embodiment of the present application. As shown in fig. 7, after feature extraction, the collected first touch operation vector sequence of a certain first age group user yields the first touch feature vectors corresponding to each preset dimension as shown in the figure, and first touch feature vector 1, first touch feature vector 2, …, and first touch feature vector n are subjected to array cascade to obtain the two-dimensional first touch feature vector sequence shown in the figure.
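The array cascade itself amounts to stacking the per-dimension feature vectors into a two-dimensional sequence, for example (placeholder sizes assumed):

```python
import numpy as np

# One first touch feature vector per preset dimension (placeholder values);
# cascading stacks them into a two-dimensional feature vector sequence.
feature_vectors = [np.random.rand(8) for _ in range(5)]   # 5 dims, length 8
sequence = np.stack(feature_vectors)                      # shape (5, 8)
```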
S2023, training the first identity recognition model based on each of the first touch feature vector sequences and each of the second touch feature vector sequences.
Specifically, the first touch characteristic vector sequences corresponding to the first age group users and the second touch characteristic vector sequences corresponding to the second age group users are sequentially input into the first identity recognition model in preset rounds, and the first identity recognition model is trained so that it learns the respective operation habit characteristics of the first age group users and the second age group users.
S203, training a second identity recognition model based on the first sample gesture motion data and the second sample gesture motion data;
the first sample gesture action data is gesture action data correspondingly generated by users of a first age group when the display screen performs touch operation, and the second sample gesture action data is gesture action data correspondingly generated by users of a second age group when the display screen performs touch operation.
In one embodiment, the first sample gesture motion data and the second sample gesture motion data are used for training the second identity recognition model, so that the second identity recognition model can accurately judge whether a user belongs to the first age group or the second age group according to the gesture motion data of any user. The second identity recognition model can be a convolutional neural network model based on a full-connection module and a convolution module, or another deep-learning-based neural network model.
The first sample gesture action data and the second sample gesture action data are data of the sensors on the terminal, acquired while a user operates the terminal display screen.
It is understood that the first sample gesture motion data and the second sample gesture motion data cannot be directly input to the neural network model as raw data, and therefore, in the process of training the second identity recognition model based on the first sample gesture motion data and the second sample gesture motion data, a data processing process is further included before the first sample gesture motion data and the second sample gesture motion data are input to the model.
Illustratively, on the basis of fig. 2, as shown in fig. 3, step S203 may include step S2031, step S2032, and step S2033.
S2031, respectively preprocessing the first sample gesture motion data and the second sample gesture motion data to obtain a first gesture motion vector sequence corresponding to each first age group user and a second gesture motion vector sequence corresponding to each second age group user;
Specifically, the first sample gesture motion data is divided into a plurality of time periods according to the timestamps, the sample gesture motion data within each time period is averaged, and the per-sensor averages of each time period are used as a first gesture motion vector, so that each time period corresponds to one first gesture motion vector and the plurality of time periods correspond to a first gesture motion vector sequence. Likewise, the second sample gesture motion data is divided into a plurality of time periods according to the timestamps, the sample gesture motion data within each time period is averaged, and the per-sensor averages of each time period are used as a second gesture motion vector, so that each time period corresponds to one second gesture motion vector and the plurality of time periods correspond to a second gesture motion vector sequence.
Referring to fig. 8, an exemplary schematic diagram of gesture data preprocessing is provided for the embodiments of the present application. As shown in fig. 8, gesture data collected while one user performs a touch operation is shown, where each line of data in the middle represents the gesture data collected under one timestamp; for example, X1, Y1, Z1, X11, Y11 and Z11 are gesture data collected under the same timestamp. A time period is then divided off; as shown in the figure, the period contains the data of three timestamps, and averaging the three values of each type of gesture motion data across those timestamps yields the first gesture motion vector shown in the figure. The first gesture motion vectors obtained by averaging over multiple time periods are cascaded to obtain a first gesture motion vector sequence. Here the gesture motion data are data of the sensors on the terminal.
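A minimal sketch of this averaging step, assuming each raw sample is a (timestamp, six-channel sensor reading) pair and a fixed period length:

```python
import numpy as np

def gesture_vector_sequence(samples, period_ms=50):
    """Average raw sensor samples over fixed time periods (length assumed).

    ``samples``: time-ordered (timestamp_ms, [ax, ay, az, gx, gy, gz]) pairs;
    each period's per-channel mean becomes one gesture motion vector.
    """
    samples = sorted(samples, key=lambda s: s[0])
    t0 = samples[0][0]
    periods = {}
    for ts, reading in samples:
        periods.setdefault((ts - t0) // period_ms, []).append(reading)
    return np.array([np.mean(periods[k], axis=0) for k in sorted(periods)])
```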
S2032, converting each first gesture motion vector sequence into a first three-dimensional vector matrix respectively based on the set continuous time window, and converting each second gesture motion vector sequence into a second three-dimensional vector matrix respectively based on the set continuous time window;
Specifically, by setting a fixed continuous time window, each two-dimensional first gesture motion vector sequence is converted into a first three-dimensional vector matrix, and each two-dimensional second gesture motion vector sequence is converted into a second three-dimensional vector matrix.
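The window conversion can be sketched as a sliding slice over the two-dimensional sequence; the window length and step are assumed parameters.

```python
import numpy as np

def to_three_dim(sequence, window=8, step=1):
    """Slice a 2D gesture motion vector sequence into continuous windows,
    yielding a 3D matrix of shape (n_windows, window, n_channels)."""
    slices = [sequence[i:i + window]
              for i in range(0, len(sequence) - window + 1, step)]
    return np.stack(slices)                # requires at least one full window
```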
S2033, training the second identity recognition model based on each of the first three-dimensional vector matrices and each of the second three-dimensional vector matrices.
Specifically, the first three-dimensional vector matrices corresponding to the first age group users and the second three-dimensional vector matrices corresponding to the second age group users are sequentially input into the second identity recognition model in preset rounds, and the second identity recognition model is trained so that it learns the respective operation habit characteristics of the first age group users and the second age group users.
S204, acquiring target touch operation data and target gesture action data of a target user during operation on a terminal display screen;
S205, obtaining first identity recognition data based on the target touch operation data and the first identity recognition model;
illustratively, on the basis of fig. 2, as shown in fig. 9, step S205 may include step S2051, step S2052, and step S2053.
S2051, extracting features of the target touch operation data according to a plurality of preset dimensions to obtain target touch feature vectors corresponding to the target touch operation data in each preset dimension;
specifically, the step of extracting the features of the target touch operation data in step S2051 is the same as the step of extracting the features of the sample touch operation data in step S2021, and for a specific explanation of this step, reference may be made to the specific explanation in step S2021, which is not repeated herein.
S2052, performing array cascade on each target touch characteristic vector to obtain a target touch characteristic vector sequence;
specifically, the process of the array cascade in step S2052 is the same as the process of the array cascade in step S2022, and for the specific explanation of this step, reference may be made to the specific explanation in step S2022, which is not described herein again.
S2053, inputting the target touch feature vector sequence into the first identity recognition model to obtain first identity recognition data.
Specifically, a two-dimensional target touch characteristic vector sequence is input into a trained first identity recognition model, and the first identity recognition model can output first identity recognition data for determining the identity of a user corresponding to the target touch characteristic vector sequence according to feature information of each dimension included in the target touch characteristic vector sequence.
S206, obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
illustratively, on the basis of fig. 2, as shown in fig. 9, step S206 may include step S2061, step S2062 and step S2063.
S2061, performing data preprocessing on the target gesture action data to obtain a target gesture action vector sequence corresponding to the target gesture action data;
specifically, the detailed explanation of this step may refer to the detailed explanation in step S2031, and is not repeated here.
S2062, converting the target gesture motion vector sequence into a target three-dimensional vector matrix based on the set continuous time window;
specifically, the detailed explanation of this step may refer to the detailed explanation in step S2032, and is not repeated here.
S2063, inputting the target three-dimensional vector matrix into a second identity recognition model to obtain second identity recognition data.
Specifically, the target three-dimensional vector matrix is input into the trained second identity recognition model, and the second identity recognition model can output second identity recognition data for determining the identity of the user corresponding to the target three-dimensional vector matrix according to the feature information of each dimension contained in the matrix.
S207, carrying out weighted summation on the first identity recognition data and the second identity recognition data according to preset weights to obtain target identity recognition data;
it is understood that the first identification data is identification data generated based on target touch operation data, and the second identification data is identification data generated based on target gesture action data. The target touch operation data and the target gesture action data can both express the operation habit characteristics of the target user. In one embodiment, the final target identification data may be obtained by performing weighted summation according to weights preset for the first identification data and the second identification data.
In one embodiment, the first identity recognition data and the second identity recognition data are given the same weight.
S208, determining the identity of the target user based on the target identity recognition data and a preset recognition data threshold value.
Specifically, if the target identity recognition data is greater than or equal to the preset recognition data threshold, the target user is determined to be a first age group user; if the target identity recognition data is smaller than the preset recognition data threshold, the target user is determined to be a second age group user, the second age group user and the first age group user being users of different age groups.
For example, if the target identity recognition data is a probability indicating whether the target user is a first age group user, and the preset recognition threshold is assumed to be 80%, then a value of 85% means the target user is a first age group user, while a value of 60% means the target user is a second age group user.
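Putting S207 and S208 together, the fusion and decision could be sketched as follows; the equal weights and the 80% threshold mirror the example above and are configurable in practice.

```python
def identify(p_touch, p_gesture, w_touch=0.5, w_gesture=0.5, threshold=0.8):
    """Fuse both model outputs and decide the target user's age group."""
    score = w_touch * p_touch + w_gesture * p_gesture
    return "first age group" if score >= threshold else "second age group"

print(identify(0.9, 0.8))   # 0.85 >= 0.80 -> first age group
print(identify(0.7, 0.5))   # 0.60 <  0.80 -> second age group
```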
In the embodiments of the application, sample touch operation data and sample gesture action data of two age groups are acquired and used to train two neural network models respectively, which reduces model complexity and improves model performance. In the training of the first identity recognition model, data equalization and data enhancement techniques are applied to the sample touch operation data, expanding its volume while preserving its dispersion and variety; this ensures both the quantity and the quality of the training data and thus the performance of the first identity recognition model. In the training of the second identity recognition model, setting a continuous time window introduces time-series information into the model and raises the data dimensionality, so that the second identity recognition model can mine richer and deeper information, ensuring its performance.
By adopting the identity recognition method provided by the embodiments of the application, target touch operation data and target gesture action data are collected while a target user operates the terminal display screen. First identity recognition data for predicting the user's identity are then generated using the target touch operation data and the pre-trained first identity recognition model, and second identity recognition data for predicting the user's identity are generated using the target gesture action data and the pre-trained second identity recognition model. The identity of the target user is finally determined by combining the first and second identity recognition data. Identity recognition of the target user is thus achieved without acquiring the user's private information, and because two models separately predict identity from the touch operation data and the gesture action data, with the final recognition based on the two resulting sets of identity recognition data, model complexity and training complexity are reduced while the accuracy of the recognition result is ensured.
In one embodiment, when user identity needs to be recognized and confirmed for a specific scenario, transfer learning can be applied to the trained models, without repeatedly collecting a large amount of data to train the first identity recognition model and the second identity recognition model from scratch. For example, when identity recognition and confirmation are required for the users of a particular game, only the touch operation data and gesture action data of a small number of game users need to be collected to further train the already-trained first and second identity recognition models and fine-tune the network parameters.
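One common transfer-learning recipe consistent with this idea (though not prescribed by the application) is to freeze the trained layers and fine-tune only the output head on the small scenario-specific dataset, for example:

```python
import torch.nn as nn

def prepare_for_finetune(model: nn.Module) -> nn.Module:
    """Freeze everything except the last Linear layer before fine-tuning.

    This is one common recipe; the application only states that the trained
    models are further trained and their parameters finely adjusted.
    """
    for p in model.parameters():
        p.requires_grad = False
    last_linear = [m for m in model.modules() if isinstance(m, nn.Linear)][-1]
    for p in last_linear.parameters():
        p.requires_grad = True
    return model
```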
Fig. 10 is a schematic structural diagram of an identification device according to an embodiment of the present application. As shown in fig. 10, the identification apparatus 1 may be implemented by software, hardware or a combination of both as all or a part of a terminal. According to some embodiments, the identity recognition apparatus 1 includes a target data acquisition module 11, a first data generation module 12, a second data generation module 13, and an identity confirmation module 14, and specifically includes:
the target data acquisition module 11 is used for acquiring target touch operation data and target gesture action data when a target user operates a terminal display screen;
a first data generation module 12, configured to obtain first identity recognition data based on the target touch operation data and a first identity recognition model;
the second data generation module 13 is configured to obtain second identity recognition data based on the target gesture action data and a second identity recognition model;
an identity confirmation module 14, configured to perform identity recognition on the target user based on the first identity data and the second identity data.
Optionally, the first data generating module 12 is specifically configured to:
extracting the characteristics of the target touch operation data according to a plurality of preset dimensions to obtain target touch characteristic vectors corresponding to the target touch operation data in each preset dimension;
carrying out array cascade on each target touch characteristic vector to obtain a target touch characteristic vector sequence;
and inputting the target touch characteristic vector sequence into the first identity recognition model to obtain first identity recognition data.
Optionally, the second data generating module 13 is specifically configured to:
performing data preprocessing on the target gesture action data to obtain a target gesture action vector sequence corresponding to the target gesture action data;
converting the target gesture motion vector sequence into a target three-dimensional vector matrix based on the set continuous time window;
and inputting the target three-dimensional vector matrix into a second identity recognition model to obtain second identity recognition data.
Optionally, please refer to fig. 11, which provides a schematic structural diagram of an identity confirmation module according to an embodiment of the present application. As shown in fig. 11, the identity confirmation module 14 includes:
the data fusion unit 141 is configured to perform weighted summation on the first identity recognition data and the second identity recognition data according to preset weights to obtain target identity recognition data;
an identity confirmation unit 142, configured to determine the identity of the target user based on the target identity recognition data and a preset recognition data threshold.
Optionally, the identity confirmation unit 142 is specifically configured to:
if the target identity identification data is larger than or equal to the preset identification data threshold value, determining that the target user is a first age group user;
and if the target identification data is smaller than the preset identification data threshold value, determining that the target user is a second age group user, wherein the second age group user and the first age group user are users in different age groups.
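Concretely, the fusion and thresholding described above reduce to a weighted sum compared against a threshold. A minimal sketch follows, assuming both models output a probability-like score in [0, 1]; the preset weight and threshold values are illustrative, not taken from the patent:

```python
def recognize_age_group(score_touch: float,
                        score_gesture: float,
                        w_touch: float = 0.6,
                        threshold: float = 0.5) -> str:
    """Weighted-sum fusion of the two identity recognition scores,
    then a threshold decision between the two age groups."""
    target_score = w_touch * score_touch + (1.0 - w_touch) * score_gesture
    if target_score >= threshold:
        return "first age group"   # users in one age group
    return "second age group"      # users in the other age group

print(recognize_age_group(0.82, 0.47))  # fused score 0.68 -> "first age group"
```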
Optionally, please refer to fig. 12, which provides a schematic structural diagram of an identity recognition apparatus according to an embodiment of the present application. As shown in fig. 12, the apparatus further includes:
the sample data acquisition module 15 is configured to acquire sample touch operation data and sample gesture action data, where the sample touch operation data includes first sample touch operation data of a first age group user when operating the terminal display screen and second sample touch operation data of a second age group user when operating the terminal display screen, and the sample gesture action data includes first sample gesture action data of the first age group user when operating the terminal display screen and second sample gesture action data of the second age group user when operating the terminal display screen;
a touch model training module 16, configured to train a first identity recognition model based on the first sample touch operation data and the second sample touch operation data;
a sensor model training module 17, configured to train a second identity recognition model based on the first sample gesture motion data and the second sample gesture motion data.
Optionally, please refer to fig. 13, which provides a schematic structural diagram of a touch model training module according to an embodiment of the present application. As shown in fig. 13, the touch model training module 16 includes:
a feature extraction unit 161, configured to extract features of the first sample touch operation data according to multiple preset dimensions, to obtain first touch feature vectors corresponding to the first age group users in the preset dimensions, respectively, and extract features of the second sample touch operation data according to the multiple preset dimensions, to obtain second touch feature vectors corresponding to the second age group users in the preset dimensions, respectively;
the data conversion unit 162 is configured to perform array cascade on the first touch characteristic vectors corresponding to the first age group users to obtain first touch characteristic vector sequences corresponding to the first age group users, and perform array cascade on the second touch characteristic vectors corresponding to the second age group users to obtain second touch characteristic vector sequences corresponding to the second age group users;
a model training unit 163 configured to train the first identity recognition model based on each of the first touch feature vector sequences and each of the second touch feature vector sequences.
Optionally, as shown in fig. 13, the touch model training module 16 further includes:
the data processing unit 164 is configured to perform data equalization processing on each of the first touch characteristic vectors and each of the second touch characteristic vectors and/or perform data enhancement processing on each of the first touch characteristic vectors and each of the second touch characteristic vectors to obtain each of the processed first touch characteristic vectors and each of the processed second touch characteristic vectors.
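The patent names data equalization and data enhancement without fixing an algorithm. One common reading, sketched here purely as an assumption, is random oversampling of the under-represented age group for equalization, and additive Gaussian jitter on the feature vectors for enhancement:

```python
import numpy as np

rng = np.random.default_rng(0)

def equalize(minority: np.ndarray, majority: np.ndarray):
    """Random oversampling: resample the smaller group's feature
    vectors so both age groups contribute equally many samples."""
    idx = rng.integers(0, len(minority), size=len(majority))
    return minority[idx], majority

def enhance(features: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Data enhancement via small Gaussian jitter on each touch
    feature vector (the sigma value is an illustrative choice)."""
    return features + rng.normal(0.0, sigma, size=features.shape)
```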
Optionally, the feature extraction unit 161 is specifically configured to:
respectively performing data preprocessing on the first sample touch operation data and the second sample touch operation data to obtain a first touch operation vector sequence corresponding to each first age group user and a second touch operation vector sequence corresponding to each second age group user;
performing feature extraction processing on the first touch operation vectors in each first touch operation vector sequence according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to users of each first age group in each preset dimension;
and respectively performing feature extraction processing on the second touch operation vectors in each second touch operation vector sequence according to a plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in each preset dimension.
Optionally, the sensor model training module 17 is specifically configured to:
respectively carrying out data preprocessing on the first sample gesture motion data and the second sample gesture motion data to obtain a first gesture motion vector sequence corresponding to each first age user and a second gesture motion vector sequence corresponding to each second age user;
respectively converting each first gesture motion vector sequence into a first three-dimensional vector matrix based on the set continuous time window, and respectively converting each second gesture motion vector sequence into a second three-dimensional vector matrix based on the set continuous time window;
and training the second identity recognition model based on each first three-dimensional vector matrix and each second three-dimensional vector matrix.
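Since the second identity recognition model consumes three-dimensional vector matrices built from continuous time windows, a convolutional network over the (window × feature) planes is a natural fit. The sketch below trains such a classifier in PyTorch under that assumption; the architecture and toy data are illustrative, not the patent's:

```python
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    """Illustrative CNN over (window, feature) planes: one input
    channel, binary output for the two age groups."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 2),
        )

    def forward(self, x):                 # x: (N, window, feats)
        return self.net(x.unsqueeze(1))   # add channel dim -> (N, 1, W, F)

model = GestureNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for the 3-D matrices of the two age groups.
first_mats = torch.randn(32, 50, 6)
second_mats = torch.randn(32, 50, 6)
x = torch.cat([first_mats, second_mats])
y = torch.cat([torch.zeros(32, dtype=torch.long),
               torch.ones(32, dtype=torch.long)])

for _ in range(3):                        # toy training loop
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```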
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The identity recognition apparatus described above achieves the same beneficial effects as the identity recognition method embodiments set forth earlier, which are not repeated here.
An embodiment of the present application further provides a computer storage medium. The computer storage medium may store a plurality of instructions suitable for being loaded by a processor to execute the identity recognition method of the embodiments shown in fig. 1 to 9; for the specific execution process, reference may be made to the descriptions of those embodiments, which are not repeated here.
The present application further provides a computer program product storing at least one instruction, where the at least one instruction is loaded by the processor to execute the identity recognition method of the embodiments shown in fig. 1 to 9; for the specific execution process, reference may be made to the descriptions of those embodiments, which are not repeated here.
Referring to fig. 14, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. A terminal in the present application may include one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
Processor 110 may include one or more processing cores. The processor 110 connects various parts of the terminal using various interfaces and lines, and performs the various functions of the terminal 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in the form of at least one of a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), and a Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets.
The input device 130 is used for receiving input instructions or data, and the input device 130 includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and the output device 140 includes, but is not limited to, a display device, a speaker, and the like. In the embodiment of the present application, the input device 130 may be a temperature sensor for acquiring an operating temperature of the terminal. The output device 140 may be a speaker for outputting audio signals.
In addition, those skilled in the art will appreciate that the terminal structures illustrated in the above figures do not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, may combine some components, or may use a different arrangement of components. For example, the terminal may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a bluetooth module, and other components, which are not described herein again.
In the embodiments of the present application, the execution subject of each step may be the terminal described above. Optionally, the execution subject of each step is the operating system of the terminal. The operating system may be an Android system, an iOS system, or another operating system, which is not limited in the embodiments of the present application.
In the terminal of fig. 14, the processor 110 may be configured to call the identification program stored in the memory 120 and execute the program to implement the identification method according to the various method embodiments of the present application.
The terminal, by executing the identity recognition program, likewise achieves the beneficial effects of the identity recognition method embodiments described above, which are not repeated here.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The terms "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA) or an Integrated Circuit (IC).
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some service interfaces, devices or units, and may be an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is only an exemplary embodiment of the present application, and the scope of the present application is not limited thereto. That is, all equivalent changes and modifications made in accordance with the teachings of this application are intended to be included within the scope thereof. Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (13)

1. An identity recognition method, the method comprising:
acquiring target touch operation data and target gesture action data of a target user during operation on a terminal display screen;
obtaining first identity identification data based on the target touch operation data and a first identity identification model;
obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
and identifying the target user based on the first identification data and the second identification data.
2. The method of claim 1, wherein obtaining first identity recognition data based on the target touch operation data and a first identity recognition model comprises:
extracting the characteristics of the target touch operation data according to a plurality of preset dimensions to obtain target touch characteristic vectors corresponding to the target touch operation data in each preset dimension;
carrying out array cascade on each target touch characteristic vector to obtain a target touch characteristic vector sequence;
and inputting the target touch characteristic vector sequence into the first identity recognition model to obtain first identity recognition data.
3. The method of claim 1, wherein obtaining second identification data based on the target gesture motion data and a second identification model comprises:
performing data preprocessing on the target gesture action data to obtain a target gesture action vector sequence corresponding to the target gesture action data;
converting the target gesture motion vector sequence into a target three-dimensional vector matrix based on the set continuous time window;
and inputting the target three-dimensional vector matrix into a second identity recognition model to obtain second identity recognition data.
4. The method of claim 1, wherein said identifying the target user based on the first identification data and the second identification data comprises:
carrying out weighted summation on the first identity identification data and the second identity identification data according to preset weight to obtain target identity identification data;
and determining the identity of the target user based on the target identity recognition data and a preset recognition data threshold value.
5. The method of claim 4, wherein determining the identity of the target user based on the target identification data and a preset identification data threshold comprises:
if the target identity identification data is larger than or equal to the preset identification data threshold value, determining that the target user is a first age group user;
and if the target identification data is smaller than the preset identification data threshold value, determining that the target user is a second age group user, wherein the second age group user and the first age group user are users in different age groups.
6. The method according to claim 1, wherein before collecting target touch operation data and target gesture action data of a target user during operation on a terminal display screen, the method further comprises:
the method comprises the steps of obtaining sample touch operation data and sample gesture action data, wherein the sample touch operation data comprise first sample touch operation data of a first age group user when operating a terminal display screen and second sample touch operation data of a second age group user when operating the terminal display screen, and the sample gesture action data comprise first sample gesture action data of the first age group user when operating the terminal display screen and second sample gesture action data of the second age group user when operating the terminal display screen;
training a first identity recognition model based on the first sample touch operation data and the second sample touch operation data;
training a second identity recognition model based on the first sample gesture motion data and the second sample gesture motion data.
7. The method of claim 6, wherein training a first identity recognition model based on the first sample touch operation data and the second sample touch operation data comprises:
extracting features of the first sample touch operation data according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to the first age group users in the preset dimensions respectively, and extracting features of the second sample touch operation data according to the plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in the preset dimensions respectively;
performing array cascade on the first touch characteristic vectors corresponding to the users in each first age group to obtain first touch characteristic vector sequences corresponding to the users in each first age group, and performing array cascade on the second touch characteristic vectors corresponding to the users in each second age group to obtain second touch characteristic vector sequences corresponding to the users in each second age group;
training the first identity recognition model based on each of the first touch feature vector sequences and each of the second touch feature vector sequences.
8. The method according to claim 7, wherein after the extracting features of the first sample touch operation data according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to the first age group users in the preset dimensions respectively, and extracting features of the second sample touch operation data according to the plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in the preset dimensions respectively, the method further comprises:
and performing data equalization processing on each first touch characteristic vector and each second touch characteristic vector and/or performing data enhancement processing on each first touch characteristic vector and each second touch characteristic vector to obtain each processed first touch characteristic vector and each processed second touch characteristic vector.
9. The method according to claim 7, wherein the extracting features of the first sample touch operation data according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to the first age group users in the preset dimensions respectively, and extracting features of the second sample touch operation data according to the plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in the preset dimensions respectively comprises:
respectively performing data preprocessing on the first sample touch operation data and the second sample touch operation data to obtain a first touch operation vector sequence corresponding to each first age group user and a second touch operation vector sequence corresponding to each second age group user;
performing feature extraction processing on the first touch operation vectors in each first touch operation vector sequence according to a plurality of preset dimensions to obtain first touch feature vectors corresponding to users of each first age group in each preset dimension;
and respectively performing feature extraction processing on the second touch operation vectors in each second touch operation vector sequence according to a plurality of preset dimensions to obtain second touch feature vectors corresponding to the second age group users in each preset dimension.
10. The method of claim 6, wherein training a second identity recognition model based on the first sample gesture motion data and the second sample gesture motion data comprises:
respectively carrying out data preprocessing on the first sample gesture motion data and the second sample gesture motion data to obtain a first gesture motion vector sequence corresponding to each first age user and a second gesture motion vector sequence corresponding to each second age user;
respectively converting each first gesture motion vector sequence into a first three-dimensional vector matrix based on the set continuous time window, and respectively converting each second gesture motion vector sequence into a second three-dimensional vector matrix based on the set continuous time window;
and training the second identity recognition model based on each first three-dimensional vector matrix and each second three-dimensional vector matrix.
11. An identification device, the device comprising:
the target data acquisition module is used for acquiring target touch operation data and target gesture action data when a target user operates a terminal display screen;
the first data generation module is used for obtaining first identity identification data based on the target touch operation data and a first identity identification model;
the second data generation module is used for obtaining second identity recognition data based on the target gesture action data and a second identity recognition model;
and the identity confirmation module is used for carrying out identity recognition on the target user based on the first identity recognition data and the second identity recognition data.
12. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1 to 10.
13. A terminal, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the steps of the method according to any of claims 1-10.
CN202111537298.6A 2021-12-14 2021-12-14 Identity recognition method, identity recognition device, storage medium and terminal Pending CN114272612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111537298.6A CN114272612A (en) 2021-12-14 2021-12-14 Identity recognition method, identity recognition device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111537298.6A CN114272612A (en) 2021-12-14 2021-12-14 Identity recognition method, identity recognition device, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN114272612A true CN114272612A (en) 2022-04-05

Family

ID=80872620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111537298.6A Pending CN114272612A (en) 2021-12-14 2021-12-14 Identity recognition method, identity recognition device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN114272612A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115081334A (en) * 2022-06-30 2022-09-20 支付宝(杭州)信息技术有限公司 Method, system, apparatus and medium for predicting age bracket or gender of user
CN115718913A (en) * 2023-01-09 2023-02-28 荣耀终端有限公司 User identity identification method and electronic equipment


Similar Documents

Publication Publication Date Title
CN108154398B (en) Information display method, device, terminal and storage medium
CN108108821B (en) Model training method and device
US11052321B2 (en) Applying participant metrics in game environments
CN107169454B (en) Face image age estimation method and device and terminal equipment thereof
CN114272612A (en) Identity recognition method, identity recognition device, storage medium and terminal
US11452941B2 (en) Emoji-based communications derived from facial features during game play
CN104778173B (en) Target user determination method, device and equipment
CN110390704A (en) Image processing method, device, terminal device and storage medium
CN101853259A (en) Methods and device for adding and processing label with emotional data
CN110765939B (en) Identity recognition method and device, mobile terminal and storage medium
CN109189544A (en) Method and apparatus for generating dial plate
CN112329816A (en) Data classification method and device, electronic equipment and readable storage medium
CN109766683B (en) Protection method for sensor fingerprint of mobile intelligent device
CN108573306A (en) Export method, the training method and device of deep learning model of return information
CN112215238A (en) Method, system and device for constructing general feature extraction model
CN111078742A (en) User classification model training method, user classification method and device
CN111191503A (en) Pedestrian attribute identification method and device, storage medium and terminal
CN107491991A (en) Based on the man-machine recognition methods rocked and apply its advertisement placement method and system
CN114223139B (en) Interface switching method and device, wearable electronic equipment and storage medium
CN105302559B (en) User behavior real-time processing method
CN110874609B (en) User clustering method, storage medium, device and system based on user behaviors
CN106730834A (en) Game data processing method and device
CN116580208A (en) Image processing method, image model training method, device, medium and equipment
CN112163571B (en) Method, device, equipment and storage medium for identifying attribute of electronic equipment user
CN116129534A (en) Image living body detection method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination