CN111382403A - Training method, device, equipment and storage medium of user behavior recognition model - Google Patents


Info

Publication number
CN111382403A
CN111382403A (application CN202010184727.5A)
Authority
CN
China
Prior art keywords
user
training
model
recognition model
behavior data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010184727.5A
Other languages
Chinese (zh)
Inventor
邱君华
李宏宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongdun Holdings Co Ltd
Original Assignee
Tongdun Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongdun Holdings Co Ltd filed Critical Tongdun Holdings Co Ltd
Priority to CN202010184727.5A priority Critical patent/CN111382403A/en
Publication of CN111382403A publication Critical patent/CN111382403A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/316 User authentication by observing the pattern of computer usage, e.g. typical user behaviour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods


Abstract

The invention discloses a training method, apparatus, device, and storage medium for a user behavior recognition model. The training method comprises: acquiring at least one piece of behavior data of a user; and performing model update training on a pre-trained user behavior recognition model based on the behavior data of the user and a plurality of pieces of behavior data of a plurality of sample users. With the training method provided by the invention, a personalized model that recognizes the user's behavior with high precision can be obtained by optimizing a base model with only a small amount of behavior data of the user to be recognized, effectively safeguarding the user's personal information.

Description

Training method, device, equipment and storage medium of user behavior recognition model
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for training a user behavior recognition model, a computer device, and a computer-readable storage medium.
Background
With the continuous advance of social informatization, people's work and daily life are gradually moving from offline to online, e.g., e-commerce and internet banking. Accordingly, online users face security issues such as leakage of sensitive personal information and theft of bank accounts.
To ensure the information security of the user, the user's operation behavior on the terminal device needs to be authenticated so as to identify the user. However, existing authentication methods in common use generally suffer from defects such as low recognition accuracy and poor user experience.
It is to be noted that the above information disclosed in the background section is only for enhancement of understanding of the background of the invention, and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present invention provides a training method and apparatus for a user behavior recognition model, a computer device, and a computer-readable storage medium.
Additional features and advantages of the invention will be set forth in the detailed description which follows, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided a training method for a user behavior recognition model, including: acquiring at least one piece of behavior data of a user; and performing model update training on a pre-trained user behavior recognition model based on the behavior data of the user and a plurality of pieces of behavior data of a plurality of sample users.
According to an embodiment of the present invention, the pre-training process of the user behavior recognition model includes: acquiring a plurality of pieces of behavior data of a plurality of sample users; and training an initial user behavior recognition model based on the plurality of pieces of behavior data of the plurality of sample users.
According to an embodiment of the present invention, training an initial user behavior recognition model based on the plurality of pieces of behavior data of the plurality of sample users includes: extracting a first quantity of pieces of behavior data of one sample user of the plurality of sample users and a second quantity of pieces of behavior data of the other sample users to form the training data of a training task; performing the following operations for each training task: selecting a third quantity of pieces of behavior data from the training data of the training task, and performing first training on the user behavior recognition model according to them to calculate the first model parameters of the user behavior recognition model; selecting a fourth quantity of pieces of behavior data, other than the third quantity of pieces, from the training data of the training task, and performing second training on the user behavior recognition model based on the first model parameters according to them; and updating the second model parameters of the user behavior recognition model based on the second training result of each training task.
According to an embodiment of the present invention, the first training of the user behavior recognition model according to the third quantity of pieces of behavior data to calculate the first model parameters includes: differentiating the loss function of the user behavior recognition model at the current second model parameters to obtain a first gradient value; and subtracting the product of the first gradient value and the first learning rate from the second model parameters to calculate the first model parameters.
According to an embodiment of the present invention, the second training of the user behavior recognition model based on the first model parameters includes: differentiating the loss function of the user behavior recognition model at the first model parameters to obtain a second gradient value; and updating the second model parameters based on the second training result of each training task includes: subtracting the product of the sum of the second gradient values of the training tasks and the second learning rate from the current second model parameters to update the second model parameters.
According to an embodiment of the present invention, performing model update training on the pre-trained user behavior recognition model based on the behavior data of the user and the plurality of sample users includes: differentiating the loss function of the user behavior recognition model at the updated second model parameters to obtain a third gradient value; and subtracting the product of the third gradient value and the first learning rate from the second model parameters to obtain the model parameters of the personalized model.
According to an embodiment of the present invention, the acquiring at least one piece of behavior data of the user includes: and acquiring 1-10 pieces of behavior data of the user.
According to another aspect of the present invention, there is provided a training apparatus for a user behavior recognition model, including: a data acquisition module for acquiring at least one piece of behavior data of a user; and a model training module for performing model update training on a pre-trained user behavior recognition model based on the behavior data of the user and a plurality of pieces of behavior data of a plurality of sample users.
According to still another aspect of the present invention, there is provided a computer device comprising: a memory, a processor, and executable instructions stored in the memory and executable on the processor, wherein the processor, when executing the executable instructions, implements any of the above training methods of a user behavior recognition model.
According to yet another aspect of the present invention, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement any of the above-described methods for training a user behavior recognition model.
According to the training method of a user behavior recognition model provided by the invention, a personalized model that recognizes the user's behavior with high precision can be obtained by optimizing a base model with only a small amount of behavior data of the user to be recognized, effectively safeguarding the user's personal information.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a flow chart illustrating a method of training a user behavior recognition model according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating another method of training a user behavior recognition model in accordance with an exemplary embodiment.
FIG. 3 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment.
FIG. 5 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment.
FIG. 6 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating a training apparatus for a user behavior recognition model according to an example embodiment.
FIG. 8 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, apparatus, steps, and so forth. In other instances, well-known structures, methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
As described above, in order to remedy the defects of user behavior authentication methods in common use, the invention provides a training method for a user behavior recognition model that identifies the user through frictionless ("non-perceptible") authentication of the user's operation behavior on the terminal device, thereby ensuring information security.
Frictionless authentication is an important future direction in the field of identity authentication. It judges from the user's behavior data whether the current user is the enrolled (exclusive) user, so as to protect the enrolled user's information security. It requires no entry of verification information such as an account password: the user's current actions, such as typing on a keyboard, clicking a mouse, or touching the touch screen of a mobile terminal (e.g., a smartphone or tablet computer), complete identity authentication without the user noticing.
The method of the present invention will be specifically described below by way of various embodiments.
FIG. 1 is a flow chart illustrating a method of training a user behavior recognition model according to an exemplary embodiment. The training method of the user behavior recognition model shown in fig. 1 can be applied to both the server side and the client side.
Referring to fig. 1, a training method 10 of a user behavior recognition model includes:
in step S102, at least one piece of behavior data of the user is acquired.
The behavior data of the user may include, for example, operation behavior data collected from the terminal device: keystroke data at a PC, mouse-click data, touch-screen data at a mobile terminal, gesture data, and the like. It should be noted that the invention is not limited by the type of the user behavior data.
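As an illustration of what one "piece" of keystroke behavior data might look like, the sketch below turns raw key events into a fixed-length feature vector of dwell times (how long each key is held) and flight times (gaps between consecutive keys). The event format and feature choice are assumptions for illustration only; the patent does not prescribe a feature scheme.

```python
# Hypothetical sketch: one raw keystroke session becomes a feature vector
# of dwell times (key held down) and flight times (gap between keys).
# The (key, press_ts, release_ts) event format is an assumption.

def keystroke_features(events):
    """events: list of (key, press_ts, release_ts), sorted by press time."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell + flight          # one "piece" of behavior data

sample = [("h", 0.00, 0.08), ("i", 0.15, 0.21)]
vec = keystroke_features(sample)   # [dwell_h, dwell_i, flight_h_to_i]
```

Fixed-length vectors of this kind are what the training steps below consume.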
In step S104, model update training is performed on the pre-trained user behavior recognition model based on the plurality of pieces of behavior data of the user and the plurality of sample users.
In the present invention, the initial user behavior recognition model (base model) is not limited to any particular neural network architecture; it may be, for example, a multi-layer perceptron (MLP), a convolutional neural network (CNN), or a recurrent neural network (RNN).
The depth and width of the network depend on the scale of the user behavior data and can be chosen for the actual application scenario. For example, when the sample data in the pre-training process is plentiful, a deeper neural network may be selected to learn more model parameters without overfitting; when deployed on a mobile client, a lightweight network may be chosen.
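A minimal sketch of such a base model is given below: a tiny NumPy multi-layer perceptron whose sigmoid output scores "is this the enrolled user?". All layer sizes are illustrative assumptions; the patent leaves the architecture open.

```python
import numpy as np

def init_params(dims, rng):
    """One (weights, bias) pair per layer; sizes are illustrative."""
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)        # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-x))       # sigmoid score in (0, 1)

rng = np.random.default_rng(0)
params = init_params([6, 16, 1], rng)     # 6 behavior features -> 1 score
score = forward(params, np.ones(6))
```

Deeper `dims` lists correspond to the "deeper network for plentiful data" case above; a short list suits a mobile client.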
According to the training method of a user behavior recognition model provided by the embodiment of the invention, a personalized model that recognizes the user's behavior with high precision can be obtained by optimizing the base model with only a small amount of behavior data of the user to be recognized, effectively safeguarding the user's personal information.
It should be clearly understood that the present disclosure describes how to make and use particular examples, but the principles of the present disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
In view of the above, fig. 2 is a flowchart illustrating another training method for a user behavior recognition model according to an exemplary embodiment. The difference from the method 10 of fig. 1 is that the method of fig. 2 further provides a pre-training process of the base model. Likewise, the training method of the user behavior recognition model shown in fig. 2 can be applied to a server side and a client side, for example.
Referring to FIG. 2, the pre-training process 20 of the user behavior recognition model may include:
in step S202, pieces of behavior data of a plurality of sample users are acquired.
In step S204, an initial user behavior recognition model is trained based on a plurality of pieces of behavior data of a plurality of sample users.
In some embodiments, as shown in fig. 3, step S204 may further include:
in step S2042, a first quantity of behavior data of one sample user of the plurality of sample users and a second quantity of behavior data of other sample users are respectively extracted to form training data of a training task.
For each training task, the "one sample user" may be randomly selected from the plurality of sample users; the first quantity of pieces may be randomly extracted from that sample user's behavior data, and the second quantity of pieces may be randomly extracted from the pooled behavior data of all other sample users. The first quantity and the second quantity may be equal or unequal; their sum is preferably an even number.
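The task-construction step S2042 can be sketched as follows, labeling the target sample user's pieces 1 and the other users' pieces 0. Function and variable names are hypothetical.

```python
import random

def make_task(data_by_user, target, first_qty, second_qty, rng):
    """Step S2042 sketch: first_qty pieces from the target sample user,
    second_qty pieces pooled from all other sample users."""
    own = rng.sample(data_by_user[target], first_qty)
    pool = [rec for u, recs in data_by_user.items() if u != target for rec in recs]
    others = rng.sample(pool, second_qty)
    return [(rec, 1) for rec in own] + [(rec, 0) for rec in others]

rng = random.Random(0)
# 40 sample users with 100 records each, as in the worked example below
data = {u: [f"u{u}_rec{i}" for i in range(100)] for u in range(40)}
task = make_task(data, target=3, first_qty=10, second_qty=10, rng=rng)
```

The returned 20-piece list is one training task's data set; the support/query split of steps S2044' and S2044'' is drawn from it.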
In step S2044, for each training task, the following operations are performed:
in step S2044', a third quantity of pieces of behavior data is selected from the training data of the training task, and first training is performed on the user behavior recognition model according to them, so as to calculate the first model parameters of the user behavior recognition model.
The third quantity of pieces of behavior data may be randomly selected from the training data of the training task; the third quantity is preferably half of the (even-sized) training data of the training task, but the invention is not limited in this regard.
In step S2044'', a fourth quantity of pieces of behavior data, excluding the third quantity of pieces, is selected from the training data of the training task, and second training is performed on the user behavior recognition model based on the first model parameters according to them.
The fourth quantity of pieces of behavior data may likewise be randomly selected from the training data other than the third quantity of pieces, so the fourth quantity is at most the remaining quantity. Preferably, the fourth quantity equals the third quantity, i.e., (first quantity + second quantity) ÷ 2, but the invention is not limited to this.
In step S2046, the second model parameters of the user behavior recognition model are updated based on the second training result of each training task.
It should be noted that the "first" and "second" are only used to clearly distinguish the step S2044' and the step S2044 "in each training task, and do not implicitly express that the training phases of the two steps have substantial differences. That is, the "first model parameter" and the "second model parameter" are both model parameters of the user recognition model, wherein the "first model parameter" can be understood as an intermediate result of each training task, and the "second model parameter" is a result after further optimization based on the "first model parameter". Similarly, the "first training" and the "second training" are also only used to distinguish the two training stages, and there is no substantial difference in the learning method.
In this embodiment of the present invention, steps S2042 to S2046 may form an iterative training loop; the iteration terminates when the model parameters of the user behavior recognition model have converged or a preset number of iterations has been reached.
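The termination test described above might look like the following sketch, stopping on (approximate) parameter convergence or on a preset iteration budget; the tolerance value is an assumption.

```python
import numpy as np

def should_stop(theta_prev, theta, n_iter, max_iter, tol=1e-6):
    """Stop when parameters have (approximately) converged
    or the preset iteration budget is spent."""
    converged = float(np.linalg.norm(theta - theta_prev)) < tol
    return converged or n_iter >= max_iter

# Parameters barely moved: stop early even though the budget is not spent.
stopped = should_stop(np.array([0.5, 0.5]), np.array([0.5, 0.5]),
                      n_iter=3, max_iter=100)
```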
As mentioned above, user behavior authentication is essentially a binary classification task: deciding whether the current behavior data belongs to the enrolled user. In the prior art, user behavior authentication mainly classifies the current user's behavior data with tools such as support vector machines, decision trees, or Euclidean/Mahalanobis distances. However, building such classifiers requires training on large amounts of behavior data from both sample users and the enrolled user, which degrades user experience; meanwhile, their recognition accuracy is generally low and cannot effectively guarantee the user's information security.
In the embodiment of the invention, relatively little behavior data per sample user suffices, i.e., the base model is easier to train, which suits scenarios with many sample users but a small stock of behavior data for each. For example, the training data of each training task may be only 20 pieces (e.g., first quantity = second quantity = 10, or first quantity = 8 and second quantity = 12).
Correspondingly, in some embodiments, the method of the present invention may also use only a small amount of behavior data of the dedicated user in the update training stage of the base model, for example, only 1 to 10 pieces of behavior data of the user operating in the terminal device need to be acquired. That is, the embodiment of the present invention updates the model parameters of the basic model using a small amount of behavior data of a certain user, so that the personalized model for the user can be quickly obtained, and the user experience can be effectively improved while the user behavior recognition accuracy is ensured.
The training procedure in steps S2044 to S2046 will be specifically described below in various embodiments.
FIG. 4 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment. The difference from the method shown in fig. 3 is that the method shown in fig. 4 further provides a specific method for the first training phase, i.e. an embodiment of step S2044'. As described above, the training data used in the first training phase is the third quantity of pieces of behavior data.
Referring to fig. 4, step S2044' may further include:
in step S402, the loss function of the user behavior recognition model is differentiated at the current second model parameters to obtain a first gradient value.
In step S404, the product of the first gradient value and the first learning rate is subtracted from the second model parameters to obtain the first model parameters.
For each training task $T_i$, based on the current model parameters $\theta^{(n)}$, the loss function $\mathcal{L}_{T_i}(f_{\theta^{(n)}})$ of the user behavior recognition model is differentiated and a preset number of steps is learned along the current gradient direction:

$$\theta'_{i,(n+1)} = \theta^{(n)} - \alpha \nabla_{\theta} \mathcal{L}_{T_i}\big(f_{\theta^{(n)}}\big) \tag{1}$$

where $(n+1)$ is the index of the iteration loop over steps S2042 to S2046, $f$ is the user behavior recognition model, $\alpha$ is the first learning rate, and $\nabla_{\theta}$ is the gradient operator.

The finally learned $\theta'_{i,(n+1)}$ is the (first) model parameter updated in the context of training task $T_i$ during the $(n+1)$-th iteration loop.
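A minimal sketch of this first training step of formula (1): one gradient step on a task's data yields the task-specific parameters θ'_i while leaving the shared θ untouched. A mean-squared-error loss on a linear model stands in for the patent's unspecified loss function.

```python
import numpy as np

# theta: shared (second) model parameters; theta_i: task-local (first)
# model parameters per formula (1). The squared loss is an illustrative
# stand-in, not the patent's loss.

def inner_step(theta, X, y, alpha):
    pred = X @ theta
    grad = 2 * X.T @ (pred - y) / len(y)   # first gradient value
    return theta - alpha * grad            # theta' = theta - alpha * grad

theta = np.zeros(3)                        # current second model parameters
X = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
y = np.array([1.0, 0.0])
theta_i = inner_step(theta, X, y, alpha=0.1)   # task-specific parameters
```

Note that `theta` itself is not modified: each task gets its own adapted copy, which is what the second training phase evaluates.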
In view of the above, fig. 5 is a flowchart illustrating a training method of a further user behavior recognition model according to an exemplary embodiment. The difference from the method shown in fig. 3 is that the method shown in fig. 5 further provides a specific method for the second training phase, i.e. embodiments of step S2044'' and step S2046, respectively. As described above, the training data used in the second training phase is the fourth quantity of pieces of behavior data.
Referring to fig. 5, step S2044'' may further include:
in step S502, the loss function of the user behavior recognition model is differentiated at the first model parameters to obtain a second gradient value.
Then, step S2046 may further include:
in step S504, the product of the sum of the second gradient values of the respective training tasks and the second learning rate is subtracted from the current second model parameters to update the second model parameters.
That is, in the $(n+1)$-th iteration loop, based on each training task $T_i$'s parameters $\theta'_{i,(n+1)}$ learned in the first training phase, the loss function of the user behavior recognition model is differentiated again, and the resulting second gradient values are summed:

$$\theta^{(n+1)} = \theta^{(n)} - \beta \nabla_{\theta} \sum_{T_i} \mathcal{L}_{T_i}\big(f_{\theta'_{i,(n+1)}}\big) \tag{2}$$

where $\beta$ is the second learning rate.
In each iteration loop, $\theta^{(n+1)}$ in formula (2) is more robust and stable than $\theta'_{i,(n+1)}$ in formula (1), since it aggregates the second gradient values over all training tasks. When the model parameters converge or the preset number of iterations N is reached, the finally learned $\theta^{(N)}$ is the (second) model parameter of the user behavior recognition model after pre-training, and serves as the initial model parameter before the update training of step S104 that yields a personalized model for a given user.
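Formulas (1) and (2) together can be sketched as a first-order meta-update: adapt on each task's support data, differentiate the loss at the adapted parameters on the query data, sum those gradients, and step the shared parameters. The squared loss and linear model are stand-ins, and the gradient is taken at θ'_i (a first-order approximation) rather than back-propagated through the inner step.

```python
import numpy as np

def inner_step(theta, X, y, alpha):
    """First training (formula (1)): one gradient step on support data."""
    grad = 2 * X.T @ (X @ theta - y) / len(y)
    return theta - alpha * grad                      # theta'_i

def meta_update(theta, tasks, alpha, beta):
    """Second training (formula (2)): sum query gradients at adapted params."""
    total_grad = np.zeros_like(theta)
    for X_sup, y_sup, X_qry, y_qry in tasks:
        theta_i = inner_step(theta, X_sup, y_sup, alpha)
        total_grad += 2 * X_qry.T @ (X_qry @ theta_i - y_qry) / len(y_qry)
    return theta - beta * total_grad                 # theta^{(n+1)}

# One toy task: support and query each hold a single (x, y) pair.
task = (np.array([[1.0]]), np.array([1.0]), np.array([[1.0]]), np.array([1.0]))
theta_new = meta_update(np.array([0.0]), [task], alpha=0.25, beta=0.5)
```

Repeating `meta_update` over freshly sampled tasks corresponds to the iteration loop of steps S2042 to S2046.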
FIG. 6 is a flow chart illustrating yet another method of training a user behavior recognition model, according to an exemplary embodiment. With reference to fig. 5, the method shown in fig. 6 further provides a specific method for obtaining a personalized model specific to a user, i.e. further provides an embodiment of step S104 of the method 10.
Referring to fig. 6, step S104 may further include:
in step S602, the loss function of the user behavior recognition model is differentiated at the updated second model parameters (i.e., the pre-trained parameters) to obtain a third gradient value.
In step S604, the product of the third gradient value and the first learning rate is subtracted from the second model parameters to obtain the model parameters of the personalized model.
Similar to the first training phase, based on the updated model parameters $\theta^{(N)}$, the loss function $\mathcal{L}(f_{\theta^{(N)}})$ of the user behavior recognition model is differentiated and a preset number of steps is learned along the current gradient direction:

$$\theta_e = \theta^{(N)} - \alpha \nabla_{\theta} \mathcal{L}\big(f_{\theta^{(N)}}\big) \tag{3}$$

The finally learned $\theta_e$ is the model parameter of the user's personalized model, which can subsequently be used to identify behavior data collected in real time from the user's terminal device; $f_{\theta_e}(x)$ is then the result of the user identity authentication.
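The update-training step of formula (3) and the subsequent authentication f_{θ_e}(x) can be sketched as below; the linear scorer, squared loss, and 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def personalise(theta_pretrained, X_user, y_user, alpha):
    """Formula (3): one gradient step from theta^(N) on the user's few records."""
    grad = 2 * X_user.T @ (X_user @ theta_pretrained - y_user) / len(y_user)
    return theta_pretrained - alpha * grad          # theta_e

def authenticate(theta_e, x, threshold=0.5):
    """f_{theta_e}(x): True if the behavior record is accepted as the user's."""
    return float(x @ theta_e) >= threshold

theta_N = np.array([0.2, 0.2])                      # pre-trained parameters
X = np.array([[1.0, 1.0]])                          # one piece of user data
theta_e = personalise(theta_N, X, np.array([1.0]), alpha=0.1)
accepted = authenticate(theta_e, np.array([1.0, 1.0]))
```

In practice the records passed to `personalise` would mix the target user's pieces with a few sample-user pieces, as the following paragraph describes.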
It should be noted that the training data used in the model update training phase includes: at least one piece of behavior data of the user and at least one piece of behavior data from a plurality of sample users. As described above, in the embodiment of the present invention, the behavior data of the user and the sample user used in the model update training stage may be very small, for example, only 1-10 behavior data of the user and 1-10 behavior data of the sample user are needed.
The small sample training method of the user behavior recognition model provided by the embodiment of the invention is described below by taking the behavior data of the user hitting the keyboard as an example. The specific values referred to below are exemplary values for the description and are not intended to limit the process of the invention in any way.
Assuming 40 sample users, each sample user taps the keyboard 100 times for a total of 4000 pieces of behavior data. For this scenario, the complete training process of the method of the invention is as follows:
the method comprises the following steps: generating a plurality of training tasks: at each training task TiIn the method, one sample user is randomly selected from 40 sample users, 10 sample users are randomly extracted from the behavior data of 100 keyboard strokes of the sample user, and the other 39 sample users are randomly extracted from the behavior data of 3900 keyboard strokes of the other 39 sample usersTaking out 10 pieces of data to form a training data set containing 20 pieces of behavior data;
step two: for each training task T separatelyiRandomly selecting 10 pieces of behavior data from the training data set, training an initial user behavior recognition model using the 10 pieces of behavior data, and obtaining updated model parameters θ' ″ by the above equation (1)i,(n+1)
Step three: for each training task Ti, train the updated user behavior recognition model again using the remaining 10 pieces of behavior data in the training data set, and aggregate all training tasks through equation (2) to obtain the model parameter θ(n+1);
Step four: repeat steps one to three, iterating the model parameter θ(n+1) until convergence or until a preset number of iterations N is reached;
Steps one to four constitute the pre-training process of the basic model.
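Steps one to four can be sketched as a first-order meta-learning loop. In the sketch below, a linear model with squared loss stands in for the patent's actual recognition network, alpha and beta play the roles of the two learning rates in equations (1) and (2), and the (support, query) split mirrors the 10 + 10 pieces of behavior data per training task; all values are illustrative assumptions:

```python
import numpy as np

# First-order sketch of the pre-training loop in steps one to four, with a
# linear model y = X @ theta and mean squared loss as stand-ins.

def grad(theta, X, y):
    """Gradient of the mean squared loss of the linear model."""
    return 2 * X.T @ (X @ theta - y) / len(y)

def pretrain(tasks, theta, alpha=0.05, beta=0.02, iters=200):
    for _ in range(iters):                      # step four: iterate to convergence
        outer = np.zeros_like(theta)
        for Xs, ys, Xq, yq in tasks:
            # step two / equation (1): inner update on the 10 support pieces
            theta_i = theta - alpha * grad(theta, Xs, ys)
            # step three: evaluate on the remaining 10 query pieces
            outer += grad(theta_i, Xq, yq)
        # equation (2): aggregate the per-task gradients into one outer step
        theta = theta - beta * outer
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0])              # behavior pattern shared across tasks

def make_task():
    X = rng.normal(size=(20, 2))                # 20 pieces of behavior data per task
    y = X @ true_theta
    return X[:10], y[:10], X[10:], y[10:]       # 10 support + 10 query

theta = pretrain([make_task() for _ in range(5)], np.zeros(2))
```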
Step five: acquiring 5 pieces of behavior data corresponding to 5 times of knocking a keyboard by a certain user at a PC end of the certain user, and randomly extracting 5 pieces of behavior data from 4000 pieces of behavior data of 40 sample users;
step six: the 10 pieces of behavior data are used for training the user behavior recognition model which is trained in advance again, and the updated model parameter theta is obtained through the formula (3)e
Steps five and six constitute the fast-migration process that optimizes the generic model into a model dedicated to recognizing this user's behavior.
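Steps five and six can be sketched as a single small-sample gradient update of the pre-trained parameters, in the spirit of equation (3). The linear model, squared loss, and all numbers below are illustrative assumptions rather than the patent's concrete formulas:

```python
import numpy as np

# Sketch of the fast-migration stage: one gradient step adapts the
# pre-trained parameters to a specific user using only 10 pieces of data.

def grad(theta, X, y):
    """Gradient of the mean squared loss of the linear model."""
    return 2 * X.T @ (X @ theta - y) / len(y)

def adapt(theta, X_user, y_user, alpha=0.05, steps=1):
    """Update the pre-trained parameters theta on the user's small sample."""
    theta_e = theta.copy()
    for _ in range(steps):
        theta_e -= alpha * grad(theta_e, X_user, y_user)
    return theta_e

rng = np.random.default_rng(1)
user_theta = np.array([1.0, -2.0])             # the user's true behavior pattern
pretrained = np.array([0.9, -1.8])             # parameters from the pre-training stage
X = rng.normal(size=(10, 2))                   # 5 user + 5 sample-user records
y = X @ user_theta
theta_e = adapt(pretrained, X, y)              # personalized parameters theta_e
```

After one such step the personalized parameters θe are strictly closer to the user's own pattern than the generic pre-trained parameters were.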
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments are implemented as computer programs executed by a CPU. The computer program, when executed by the CPU, performs the functions defined by the method provided by the present invention. The program may be stored in a computer readable storage medium, which may be a read-only memory, a magnetic or optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the method according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. For details which are not disclosed in the embodiments of the apparatus of the present invention, reference is made to the embodiments of the method of the present invention.
FIG. 7 is a block diagram illustrating a training apparatus for a user behavior recognition model according to an example embodiment.
Referring to fig. 7, the training device 70 for the user behavior recognition model includes: a data acquisition module 702, and a model training module 704.
The data acquiring module 702 is configured to acquire at least one behavior data of a user.
The model training module 704 is configured to perform model update training on a pre-trained user behavior recognition model based on a plurality of pieces of behavior data of a user and a plurality of sample users.
According to the training device for the user behavior recognition model provided by the embodiment of the present invention, a personalized model that recognizes the user's behavior with high precision can be obtained by optimizing the basic model with a small amount of behavior data of the user to be recognized, thereby effectively safeguarding the security of the user's personal information.
It is noted that the block diagrams shown in the above figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or program product. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, all of which may generally be referred to herein as a "circuit," "module," or "system."
FIG. 8 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment. It should be noted that the computer device shown in fig. 8 is only an example, and should not bring any limitation to the function and the scope of the application of the embodiment of the present invention.
As shown in fig. 8, the computer apparatus 800 includes a Central Processing Unit (CPU)801 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the apparatus 800 are also stored. The CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read out therefrom is installed into the storage section 808 as necessary.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program performs the above-described functions defined in the apparatus of the present invention when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a transmitting unit, an obtaining unit, a determining unit, and a first processing unit. The names of these units do not in some cases constitute a limitation to the unit itself, and for example, the sending unit may also be described as a "unit sending a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments or may exist separately without being incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: acquire at least one piece of behavior data of a user; and perform model update training on a pre-trained user behavior recognition model based on a plurality of pieces of behavior data of the user and of a plurality of sample users.
Exemplary embodiments of the present invention are specifically illustrated and described above. It is to be understood that the invention is not limited to the precise construction, arrangements, or instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. A training method of a user behavior recognition model is characterized by comprising the following steps:
acquiring at least one piece of behavior data of a user; and
performing model update training on a pre-trained user behavior recognition model based on a plurality of pieces of behavior data of the user and of a plurality of sample users.
2. The method of claim 1, wherein the pre-training process of the user behavior recognition model comprises:
acquiring a plurality of pieces of behavior data of a plurality of sample users; and
training an initial user behavior recognition model based on a plurality of pieces of behavior data of the plurality of sample users.
3. The method of claim 2, wherein training an initial user behavior recognition model based on a plurality of pieces of behavior data for the plurality of sample users comprises:
respectively extracting a first quantity of behavior data of one sample user of the plurality of sample users and a second quantity of behavior data of other sample users to form training data of a training task;
for each training task, the following operations are respectively performed:
selecting a third quantity of pieces of behavior data from the training data of the training task, and performing first training on the user behavior recognition model according to the third quantity of pieces of behavior data to calculate and obtain a first model parameter of the user recognition model; and
selecting a fourth quantity of pieces of behavior data, other than the third quantity of pieces, from the training data of the training task, and performing second training on the user behavior recognition model based on the first model parameter according to the fourth quantity of pieces of behavior data; and
updating second model parameters of the user recognition model based on the second training results of each training task.
4. The method of claim 3, wherein performing a first training on the user behavior recognition model according to the third quantity of pieces of behavior data to calculate first model parameters of the user recognition model comprises:
taking the derivative of the loss function of the user behavior recognition model based on the current second model parameter to obtain a first gradient value; and
subtracting the product of the first gradient value and the first learning rate from the second model parameter to calculate the first model parameter.
5. The method of claim 4, wherein performing the second training on the user behavior recognition model based on the first model parameter comprises: taking the derivative of the loss function of the user recognition model based on the first model parameter to obtain a second gradient value;
and wherein updating the second model parameter of the user recognition model based on the second training results of the training tasks comprises: subtracting the product of the sum of the second gradient values of the training tasks and the second learning rate from the current second model parameter to update the second model parameter.
6. The method of claim 5, wherein model update training a pre-trained user behavior recognition model based on a plurality of pieces of behavior data of the user and a plurality of sample users comprises:
taking the derivative of the loss function of the user recognition model based on the updated second model parameter to obtain a third gradient value; and
subtracting the product of the third gradient value and the first learning rate from the second model parameter to obtain the first model parameter.
7. The method of any one of claims 1-6, wherein obtaining at least one piece of behavior data of the user comprises: acquiring 1-10 pieces of behavior data of the user.
8. An apparatus for training a user behavior recognition model, comprising:
a data acquisition module, configured to acquire at least one piece of behavior data of a user; and
a model training module, configured to perform model update training on a pre-trained user behavior recognition model based on a plurality of pieces of behavior data of the user and of a plurality of sample users.
9. A computer device, comprising: a memory, a processor, and executable instructions stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-7 when executing the executable instructions.
10. A computer-readable storage medium having stored thereon computer-executable instructions, which when executed by a processor, implement the method of any one of claims 1-7.
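Read together, and under the assumption that L_i denotes the loss function evaluated on the i-th training task, L_user the loss on the user's small sample, α the first learning rate, and β the second learning rate (symbols not used verbatim in the claims), the gradient updates recited in claims 4 to 6 can be summarized as:

```latex
\begin{aligned}
\theta'_i &= \theta - \alpha \,\nabla_{\theta} L_i(\theta)
  && \text{(claim 4: first model parameter, first training)} \\
\theta &\leftarrow \theta - \beta \sum_i \nabla_{\theta'_i} L_i(\theta'_i)
  && \text{(claim 5: second model parameter update)} \\
\theta_e &= \theta - \alpha \,\nabla_{\theta} L_{\mathrm{user}}(\theta)
  && \text{(claim 6: model update training on the user's data)}
\end{aligned}
```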
CN202010184727.5A 2020-03-17 2020-03-17 Training method, device, equipment and storage medium of user behavior recognition model Pending CN111382403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010184727.5A CN111382403A (en) 2020-03-17 2020-03-17 Training method, device, equipment and storage medium of user behavior recognition model


Publications (1)

Publication Number Publication Date
CN111382403A true CN111382403A (en) 2020-07-07

Family

ID=71217281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010184727.5A Pending CN111382403A (en) 2020-03-17 2020-03-17 Training method, device, equipment and storage medium of user behavior recognition model

Country Status (1)

Country Link
CN (1) CN111382403A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114297407A (en) * 2021-12-29 2022-04-08 镇江多游网络科技有限公司 Game data middle platform construction method based on knowledge graph
CN114417944A (en) * 2020-10-09 2022-04-29 腾讯科技(深圳)有限公司 Recognition model training method and device and user abnormal behavior recognition method and device
CN115103127A (en) * 2022-08-22 2022-09-23 环球数科集团有限公司 High-performance embedded intelligent camera design system and method
CN115146743A (en) * 2022-08-31 2022-10-04 平安银行股份有限公司 Character recognition model training method, character recognition method, device and system

Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015176560A1 (en) * 2014-05-22 2015-11-26 华为技术有限公司 User behavior recognition method, user equipment, and behavior recognition server
CN105306495A (en) * 2015-11-30 2016-02-03 百度在线网络技术(北京)有限公司 User identification method and device
CN105654945A (en) * 2015-10-29 2016-06-08 乐视致新电子科技(天津)有限公司 Training method of language model, apparatus and equipment thereof
WO2017071126A1 (en) * 2015-10-28 2017-05-04 同济大学 Touch-screen user key-press behavior pattern construction and analysis system and identity recognition method thereof
WO2017201506A1 (en) * 2016-05-20 2017-11-23 Google Llc Training neural networks using synthetic gradients
WO2017201511A1 (en) * 2016-05-20 2017-11-23 Google Llc Training machine learning models
WO2017219991A1 (en) * 2016-06-23 2017-12-28 华为技术有限公司 Optimization method and apparatus suitable for model of pattern recognition, and terminal device
CN107679557A (en) * 2017-09-19 2018-02-09 平安科技(深圳)有限公司 Driving model training method, driver's recognition methods, device, equipment and medium
CN109033995A (en) * 2018-06-29 2018-12-18 出门问问信息科技有限公司 Identify the method, apparatus and intelligence wearable device of user behavior
CN109145868A (en) * 2018-09-11 2019-01-04 广州杰赛科技股份有限公司 A kind of Activity recognition method and apparatus assisting running training
WO2019015641A1 (en) * 2017-07-19 2019-01-24 阿里巴巴集团控股有限公司 Model training method and method, apparatus, and device for determining data similarity
CN109325516A (en) * 2018-08-13 2019-02-12 众安信息技术服务有限公司 A kind of integrated learning approach and device towards image classification
CN109409209A (en) * 2018-09-11 2019-03-01 广州杰赛科技股份有限公司 A kind of Human bodys' response method and apparatus
CN109446923A (en) * 2018-10-10 2019-03-08 北京理工大学 Depth based on training characteristics fusion supervises convolutional neural networks Activity recognition method
CN109858549A (en) * 2019-01-30 2019-06-07 腾讯科技(深圳)有限公司 Training method, device and the medium of application identification and its identification model
CN110046647A (en) * 2019-03-08 2019-07-23 同盾控股有限公司 A kind of identifying code machine Activity recognition method and device
KR20190099156A (en) * 2019-08-06 2019-08-26 엘지전자 주식회사 Method and device for authenticating user using user's behavior pattern
CN110278175A (en) * 2018-03-14 2019-09-24 阿里巴巴集团控股有限公司 Graph structure model training, the recognition methods of rubbish account, device and equipment
US20190318099A1 (en) * 2018-04-16 2019-10-17 International Business Machines Corporation Using Gradients to Detect Backdoors in Neural Networks
CN110503089A (en) * 2019-07-03 2019-11-26 平安科技(深圳)有限公司 OCR identification model training method, device and computer equipment based on crowdsourcing technology
CN110516418A (en) * 2019-08-21 2019-11-29 阿里巴巴集团控股有限公司 A kind of operation user identification method, device and equipment
US20190370383A1 (en) * 2018-05-30 2019-12-05 International Business Machines Corporation Automatic Processing of Ambiguously Labeled Data
CN110557447A (en) * 2019-08-26 2019-12-10 腾讯科技(武汉)有限公司 user behavior identification method and device, storage medium and server
CN110751264A (en) * 2019-09-19 2020-02-04 清华大学 Electricity consumption mode identification method based on orthogonal self-coding neural network
CN110766062A (en) * 2019-10-15 2020-02-07 广州织点智能科技有限公司 Commodity recognition model training method and device, electronic equipment and storage medium
US10572885B1 (en) * 2018-10-25 2020-02-25 Beijing Trusfort Technology Co., Ltd. Training method, apparatus for loan fraud detection model and computer device
CN110874638A (en) * 2020-01-19 2020-03-10 同盾控股有限公司 Behavior analysis-oriented meta-knowledge federation method, device, electronic equipment and system


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
刘辉, 杨俊安, 许学忠: "An Improved Hidden Markov Model Training Method and Its Application in Acoustic Target Recognition", Journal of Circuits and Systems *
周欣然, 滕召胜利, 易钊: "A Generalized T-S Fuzzy Model Training Method Based on Hybrid Cooperative Particle Swarm Optimization", Systems Engineering and Electronics *
蒋鹏飞 et al.: "Classification and Evaluation of Mobile Application Network Behavior Based on Deep Forest and CWGAN-GP", Computer Science *
谢澈澈: "Research on Cross-User Activity Recognition Based on Deep Learning and Unsupervised Domain Adaptation", China Master's Theses Full-text Database, Information Science and Technology *
赵建民 et al.: "Research on Learning Rate Optimization of BP Neural Networks", Microcomputer Applications *
魏丽冉 et al.: "A Human Action Recognition Method Based on Deep Neural Networks", Journal of University of Jinan (Natural Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114417944A (en) * 2020-10-09 2022-04-29 腾讯科技(深圳)有限公司 Recognition model training method and device and user abnormal behavior recognition method and device
CN114417944B (en) * 2020-10-09 2024-04-09 腾讯科技(深圳)有限公司 Recognition model training method and device, and user abnormal behavior recognition method and device
CN114297407A (en) * 2021-12-29 2022-04-08 镇江多游网络科技有限公司 Game data middle platform construction method based on knowledge graph
CN115103127A (en) * 2022-08-22 2022-09-23 环球数科集团有限公司 High-performance embedded intelligent camera design system and method
CN115146743A (en) * 2022-08-31 2022-10-04 平安银行股份有限公司 Character recognition model training method, character recognition method, device and system
CN115146743B (en) * 2022-08-31 2022-12-16 平安银行股份有限公司 Character recognition model training method, character recognition method, device and system

Similar Documents

Publication Publication Date Title
CN111382403A (en) Training method, device, equipment and storage medium of user behavior recognition model
WO2019233421A1 (en) Image processing method and device, electronic apparatus, and storage medium
CN108805091B (en) Method and apparatus for generating a model
CN109101919B (en) Method and apparatus for generating information
CN107766940A (en) Method and apparatus for generation model
US10552471B1 (en) Determining identities of multiple people in a digital image
EP3893125A1 (en) Method and apparatus for searching video segment, device, medium and computer program product
CN109993150B (en) Method and device for identifying age
CN106650350B (en) Identity authentication method and system
CN112116008B (en) Processing method of target detection model based on intelligent decision and related equipment thereof
TW202026984A (en) User identity verification method, device and system
CN108460365B (en) Identity authentication method and device
JP7384943B2 (en) Training method for character generation model, character generation method, device, equipment and medium
WO2022116487A1 (en) Voice processing method and apparatus based on generative adversarial network, device, and medium
CN109145783B (en) Method and apparatus for generating information
CN109977839A (en) Information processing method and device
CN109934191A (en) Information processing method and device
CN113902956B (en) Training method of fusion model, image fusion method, device, equipment and medium
CN110046571B (en) Method and device for identifying age
CN107729928A (en) Information acquisition method and device
CN108415653A (en) Screen locking method and device for terminal device
CN111178139A (en) Identity authentication method, payment method and payment equipment
CN106803092B (en) Method and device for determining standard problem data
CN108521516A (en) Control method and device for terminal device
CN112468658A (en) Voice quality detection method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination