CN116205376B - Behavior prediction method, training method and device of behavior prediction model - Google Patents


Info

Publication number
CN116205376B
CN116205376B (application CN202310467649.3A)
Authority
CN
China
Prior art keywords
user
target user
prediction model
information
features
Prior art date
Legal status
Active
Application number
CN202310467649.3A
Other languages
Chinese (zh)
Other versions
CN116205376A (en)
Inventor
滕志勇
刘永威
刘思喆
Current Assignee
Beijing Apoco Blue Technology Co ltd
Original Assignee
Beijing Apoco Blue Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Apoco Blue Technology Co ltd filed Critical Beijing Apoco Blue Technology Co ltd
Priority to CN202310467649.3A
Publication of CN116205376A
Application granted
Publication of CN116205376B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0645: Rental transactions; Leasing transactions
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application relates to a behavior prediction method, a training method for a behavior prediction model, corresponding apparatuses, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring user information of a target user, where the user information comprises offline information and real-time information corresponding to the target user and is used to characterize the behavior and environment of the target user; determining the user type of the target user according to the offline information and the real-time information; determining a target prediction model according to the user type of the target user; and inputting the offline information and the real-time information of the target user into the target prediction model, which processes that information to obtain a prediction result corresponding to the target user. With this method, a comparatively accurate prediction of user behavior can be obtained.

Description

Behavior prediction method, training method and device of behavior prediction model
Technical Field
The present application relates to the field of artificial intelligence, and in particular to a behavior prediction method, a training method for a behavior prediction model, corresponding apparatuses, a computer device, a storage medium, and a computer program product.
Background
With the development of the sharing economy, shared bicycles have become an important mode of urban travel. While shared-bicycle companies provide convenient travel services for cities, arrearage behavior by users can expose these companies to financial risk.
In conventional approaches, a user's arrearage probability is estimated from the user's historical behavior data using traditional statistical methods and a time-series prediction algorithm.
However, such methods predict user behavior from historical behavior data and a time-series prediction algorithm alone: traditional statistical methods consider only a single class of user features, and their time-series prediction algorithms have inherent limitations, so the accuracy of predicting a user's arrearage behavior is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a behavior prediction method, a training method of a behavior prediction model, an apparatus, a computer device, a computer-readable storage medium, and a computer program product.
In a first aspect, the present application provides a behavior prediction method. The method comprises the following steps:
Acquiring user information of a target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
determining the user type of the target user according to the offline information and the real-time information;
determining a target prediction model according to the user type of the target user;
and inputting the offline information and the real-time information of the target user into the target prediction model, and performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
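The four steps above can be sketched end to end. The following is an illustrative reconstruction, not the patented implementation: the model objects, the `history` field, and the constant model outputs are placeholders introduced for the sketch.

```python
# Hypothetical sketch of the claimed flow: determine the user type from the
# user information, select the type-specific target prediction model, then
# run that model on the offline and real-time information.

def determine_user_type(offline_info: dict) -> str:
    """First user type ("old" user) if any historical behavior exists."""
    return "first" if offline_info.get("history") else "second"

def predict(offline_info: dict, realtime_info: dict, models: dict) -> float:
    user_type = determine_user_type(offline_info)
    model = models[user_type]          # select the target prediction model
    return model(offline_info, realtime_info)

# Toy stand-ins for the two trained prediction models.
models = {
    "first":  lambda off, rt: 0.9,    # model for old users (first type)
    "second": lambda off, rt: 0.5,    # cold-start model for new users
}

old_user = predict({"history": [1, 0, 1]}, {"scan_ctx": "night"}, models)
new_user = predict({"history": []}, {"scan_ctx": "day"}, models)
```

The routing itself is trivial; the point of the claim is that each branch leads to a model trained only on samples of that user type.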
In one embodiment, the user type of the target user is a first user type, the target prediction model is a first prediction model, and the data processing is performed on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user, including:
in the first prediction model, extracting features of offline information and real-time information of the target user to obtain a first feature and a second feature of the target user;
performing feature fusion on the first feature and the second feature to obtain a fused first target user feature;
and carrying out classification prediction processing on the first target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the offline information includes a user portrait, user behavior, city information, and user time sequence information, and the real-time information includes a code scanning context; and in the first prediction model, extracting features of the offline information and the real-time information of the target user to obtain the first feature and the second feature of the target user includes:
extracting features of the user portrait, the user behavior, the city information and the code scanning context of the target user through a feature embedding layer in the first prediction model to obtain first features of the target user;
and extracting features from the user time sequence information of the target user through a long short-term memory (LSTM) structure of the first prediction model, to obtain time sequence features of the user's historical behavior as the second feature of the target user.
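To make the LSTM step concrete, here is a minimal single-unit LSTM cell in pure Python that turns a user's historical time series into one temporal feature. The shared toy weights and the scalar (1-dimensional) state are simplifications for illustration; the patent does not disclose the actual cell dimensions or parameters.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w=0.5, u=0.5, b=0.0):
    """One step of a 1-dimensional LSTM cell (all gates share toy weights)."""
    f = sigmoid(w * x + u * h + b)     # forget gate
    i = sigmoid(w * x + u * h + b)     # input gate
    o = sigmoid(w * x + u * h + b)     # output gate
    g = math.tanh(w * x + u * h + b)   # candidate cell state
    c = f * c + i * g                  # new cell state
    h = o * math.tanh(c)               # new hidden state
    return h, c

def encode_sequence(xs):
    """Run the cell over a user's series; the final h is the temporal feature."""
    h = c = 0.0
    for x in xs:
        h, c = lstm_step(x, h, c)
    return h

feature = encode_sequence([1.0, 0.0, 1.0])  # e.g. per-period arrearage flags
```

In a real model, each gate would have its own learned weight matrices and the hidden state would be a vector, but the recurrence has exactly this shape.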
In one embodiment, the first feature includes the user portrait feature, user behavior feature, city information feature, and code scanning context feature; and performing feature fusion on the first feature and the second feature to obtain a fused first target user feature, wherein the feature fusion comprises the following steps:
performing feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features;
and fusing the first fused features with the second features of the target user through a feature interaction layer of the first prediction model to obtain fused first target user features.
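A minimal sketch of the two fusion stages just described, under assumptions the patent does not fix: the embedded fusion structure is taken to be concatenation of the per-field embeddings, and the feature interaction layer is taken to append pairwise products between the fused embedding and the temporal (LSTM) feature. All values are placeholders.

```python
# Illustrative fusion: concatenate field embeddings, then cross them with
# the temporal features. Real models may use learned fusion instead.

def embed_fusion(field_embeddings):
    """Concatenate portrait / behavior / city / scan-context embeddings."""
    fused = []
    for emb in field_embeddings:
        fused.extend(emb)
    return fused

def feature_interaction(fused, temporal):
    """Keep both inputs and add elementwise cross terms between them."""
    crosses = [a * b for a in fused for b in temporal]
    return fused + temporal + crosses

first = embed_fusion([[0.1, 0.2], [0.3], [0.4]])   # first fusion feature
target = feature_interaction(first, [0.5, -0.5])   # first target user feature
```

The cross terms are what let a decision layer pick up combinations such as "night-time scan context together with a history of late payment".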
In one embodiment, the classifying and predicting the first target user feature to obtain a predicted result corresponding to the target user includes:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
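One plausible form of such a classification decision layer is a linear score per reputation level followed by a softmax; the level names and weights below are placeholders, since the patent does not specify them.

```python
import math

LEVELS = ["high", "medium", "low"]  # hypothetical reputation levels

def softmax(scores):
    m = max(scores)                  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def reputation_level(features, weight_rows):
    """One linear score per level, softmax over scores, argmax level."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weight_rows]
    probs = softmax(scores)
    return LEVELS[probs.index(max(probs))]

level = reputation_level([1.0, -0.5],
                         [[2.0, 0.0], [0.0, 1.0], [-2.0, 0.0]])
```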
In one embodiment, the user type of the target user is a second user type, the target prediction model is a second prediction model, and the data processing is performed on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user, including:
in the second prediction model, extracting features from the offline information and the real-time information of the target user to obtain a third feature of the target user; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
performing feature fusion on the user portrait features, the city information features and the code scanning context features in the third features to obtain fused second target user features;
and performing classification prediction processing on the second target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the offline information comprises a user portrait and city information, and the real-time information comprises a code scanning context; and in the second prediction model, extracting features from the offline information and the real-time information of the target user to obtain the third feature of the target user includes:
and extracting the characteristics of the user portrait, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third characteristic of the target user.
In one embodiment, the feature fusion of the user portrait feature, the city information feature, and the code scanning context feature in the third feature to obtain a fused second target user feature includes:
and carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features through an embedded fusion structure of the second prediction model to obtain second target user features.
In one embodiment, the performing a classification prediction process on the second target user feature to obtain a prediction result corresponding to the target user includes:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
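For a new user, the second classification decision layer outputs a probability rather than a level. A plausible minimal form is a single logistic unit over the second target user feature; the weights below are illustrative assumptions, not disclosed parameters.

```python
import math

def target_behavior_probability(features, weights, bias=0.0):
    """Logistic unit: probability of the target behavior (e.g. arrearage)."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

p = target_behavior_probability([0.2, -0.1, 0.4], [1.0, 2.0, -0.5])
```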
In a second aspect, the present application provides a method for training a predictive model, the method comprising:
acquiring a plurality of first training samples and a first prediction model corresponding to the first training samples; the first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type;
inputting each first training sample in a plurality of first training samples into the first prediction model, and performing data processing on offline information and real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample;
and calculating a first loss of the first prediction model according to the first prediction result corresponding to each first training sample, the first sample label included in each first training sample, and a preset first loss function; and when the first loss meets a preset training condition, completing training of the first prediction model to obtain a target prediction model corresponding to the first user type.
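A minimal sketch of this training loop, under stated assumptions: the first prediction model is stood in for by a single logistic unit, the preset first loss function is taken to be binary cross-entropy, and the preset training condition is taken to be the mean loss falling below a fixed threshold. None of these choices is fixed by the patent.

```python
import math

def bce(p: float, y: int) -> float:
    """Binary cross-entropy for one sample (eps avoids log(0))."""
    eps = 1e-9
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def train(samples, labels, lr=0.5, loss_threshold=0.3, max_epochs=500):
    """SGD on a 1-feature logistic unit; stop when mean loss meets threshold."""
    w, b = 0.0, 0.0
    for _ in range(max_epochs):
        total = 0.0
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            total += bce(p, y)
            grad = p - y               # dBCE/dz for a logistic unit
            w -= lr * grad * x
            b -= lr * grad
        mean_loss = total / len(samples)
        if mean_loss <= loss_threshold:  # preset training condition met
            break
    return w, b, mean_loss

# Toy separable data: a single feature with labels 0, 0, 1, 1.
w, b, loss = train([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])
```

The same skeleton applies to the second prediction model in the third aspect, with its own samples, labels, and second loss function.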
In one embodiment, the performing data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample includes:
performing feature extraction on the offline information and the real-time information of a plurality of first training samples according to the first prediction model to obtain first features and second features corresponding to each first training sample;
performing feature fusion on the first features and the second features corresponding to each first training sample to obtain third target user features;
and carrying out classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
In a third aspect, the present application provides a method for training a prediction model, the method comprising:
acquiring a plurality of second training samples and a second prediction model corresponding to the second training samples; the second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type;
inputting each second training sample in the plurality of second training samples into the second prediction model, and performing data processing on offline information and real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample;
and calculating a second loss of the second prediction model according to the second prediction result corresponding to each second training sample, the second sample label corresponding to each second training sample and a second loss function, and completing training of the second prediction model when the second loss meets a preset training condition, to obtain a target prediction model corresponding to the second user type.
In one embodiment, the performing data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample includes:
performing feature extraction on the offline information and the real-time information of the plurality of second training samples according to the second prediction model to obtain a third feature corresponding to each second training sample; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample to obtain fourth target user features;
And performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
In a fourth aspect, the application further provides a behavior prediction device. The device comprises:
the acquisition module is used for acquiring the user information of the target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
the first determining module is used for determining the user type of the target user according to the offline information and the real-time information;
the second determining module is used for determining a target prediction model according to the user type of the target user;
and the data processing module is used for inputting the offline information and the real-time information of the target user into the target prediction model, and performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
in the first prediction model, extracting features of offline information and real-time information of the target user to obtain a first feature and a second feature of the target user;
Performing feature fusion on the first feature and the second feature to obtain a fused first target user feature;
and carrying out classification prediction processing on the first target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
extracting features of the user portrait, the user behavior, the city information and the code scanning context of the target user through a feature embedding layer in the first prediction model to obtain first features of the target user;
and extracting features from the user time sequence information of the target user through a long short-term memory (LSTM) structure of the first prediction model, to obtain time sequence features of the user's historical behavior as the second feature of the target user.
In one embodiment, the data processing module is specifically configured to:
performing feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features;
and fusing the first fused features with the second features of the target user through a feature interaction layer of the first prediction model to obtain fused first target user features.
In one embodiment, the data processing module is specifically configured to:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
In one embodiment, the data processing module is specifically configured to:
in the second prediction model, extracting features from the offline information and the real-time information of the target user to obtain a third feature of the target user; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
performing feature fusion on the user portrait features, the city information features and the code scanning context features in the third features to obtain fused second target user features;
and performing classification prediction processing on the second target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
and extracting the characteristics of the user portrait, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third characteristic of the target user.
In one embodiment, the data processing module is specifically configured to:
and carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features through an embedded fusion structure of the second prediction model to obtain second target user features.
In one embodiment, the data processing module is specifically configured to:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
In a fifth aspect, the present application further provides a training device for a behavior prediction model. The device comprises:
the second acquisition module is used for acquiring a plurality of first training samples and a first prediction model corresponding to the first training samples; the first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type;
the second data processing module is used for inputting each first training sample in the plurality of first training samples into the first prediction model, and carrying out data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample;
the first calculation module is used for calculating a first loss of the first prediction model according to the first prediction result corresponding to each first training sample, the first sample label included in each first training sample and a preset first loss function, and for completing training of the first prediction model when the first loss meets a preset training condition, to obtain a target prediction model corresponding to the first user type.
In one embodiment, the second data processing module is specifically configured to:
performing feature extraction on the offline information and the real-time information of a plurality of first training samples according to the first prediction model to obtain first features and second features corresponding to each first training sample;
performing feature fusion on the first features and the second features corresponding to each first training sample to obtain third target user features;
and carrying out classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
In a sixth aspect, the present application further provides a training device for a behavior prediction model. The device comprises:
the third acquisition module is used for acquiring a plurality of second training samples and a second prediction model corresponding to the second training samples; the second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type;
the third data processing module is used for inputting each second training sample in the plurality of second training samples into the second prediction model, and carrying out data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample;
and the second calculation module is used for calculating a second loss of the second prediction model according to the second prediction result corresponding to each second training sample, the second sample label corresponding to each second training sample and a second loss function, and for completing training of the second prediction model when the second loss meets a preset training condition, to obtain a target prediction model corresponding to the second user type.
In one embodiment, the third data processing module is specifically configured to:
performing feature extraction on the offline information and the real-time information of the plurality of second training samples according to the second prediction model to obtain a third feature corresponding to each second training sample; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample to obtain fourth target user features;
and performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
In a seventh aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring user information of a target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
determining the user type of the target user according to the offline information and the real-time information;
determining a target prediction model according to the user type of the target user;
and inputting the offline information and the real-time information of the target user into the target prediction model, and performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
In an eighth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
In a ninth aspect, the present application also provides a computer program product. The computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
With the above behavior prediction method, training methods of the behavior prediction model, devices, computer equipment, storage medium, and computer program product, the user type of a target user can be determined from the real-time information and offline information in the user's information, and a target prediction model tailored to that user type can then process the offline and real-time information, so that a more accurate prediction of the user's behavior is obtained.
Drawings
FIG. 1 is a diagram of an application environment for a behavior prediction method in one embodiment;
FIG. 2 is a flow diagram of a method of behavior prediction for a target user of a first user type in one embodiment;
FIG. 3 is a flowchart illustrating a first predictive model feature extraction step in one embodiment;
FIG. 4 is a schematic diagram of a model internal structure of a first predictive model in one embodiment;
FIG. 5 is a flowchart illustrating a first predictive model feature fusion step in one embodiment;
FIG. 6 is a flow diagram of a behavior prediction method for a target user of a second user type in one embodiment;
FIG. 7 is a schematic diagram of a model internal structure of a second predictive model in one embodiment;
FIG. 8 is a flow diagram of a method of model training of a first predictive model in one embodiment;
FIG. 9 is a flow diagram of a data processing process during a first predictive model training process in one embodiment;
FIG. 10 is a flow diagram of a method of model training of a second predictive model in one embodiment;
FIG. 11 is a flow diagram of a data processing process during training of a second predictive model in one embodiment;
FIG. 12 is a block diagram of a behavior prediction apparatus in one embodiment;
FIG. 13 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, a behavior prediction method is provided. The method is described here as applied to a terminal by way of illustration; it may equally be applied to a server, or to a system comprising a terminal and a server and implemented through their interaction. In this embodiment, the method includes the following steps:
step 102, obtaining user information of the target user.
The user information comprises offline information and real-time information corresponding to the target user; the user information is used to characterize the behavior and environment of the target user.
In this embodiment, target users are divided into a first user type and a second user type; the first user type may also be called an old user and the second user type a new user. When the target user is an old user, the user information may include, but is not limited to, a user portrait, user behavior information, city information, code scanning context information, and user time sequence information. When the target user is a new user, the user information comprises a user portrait, city information, and code scanning context information.
The terminal acquires real-time information of the target user, and acquires offline information corresponding to the target user according to user information of the target user.
Optionally, the user time sequence information reflects the target user's arrearage information over a historical period. Specifically, the terminal acquires the target user's historical arrearage data for that period and constructs the user time sequence information from it.
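One simple way to construct such a series, assuming the historical records are day offsets on which the user was in arrears (the record format and period length are assumptions, not from the patent):

```python
# Illustrative construction of user time sequence information: turn a user's
# historical arrearage records into a fixed-length binary series.

def build_timing_series(arrearage_days, history_days=7):
    """series[i] == 1 if the user was in arrears i days ago, else 0."""
    days = set(arrearage_days)
    return [1 if d in days else 0 for d in range(history_days)]

series = build_timing_series([0, 2, 5], history_days=7)
```

A fixed-length series like this is exactly what the LSTM structure of the first prediction model consumes.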
And 104, determining the user type of the target user according to the offline information and the real-time information.
In the embodiment of the application, the terminal first judges the user type of the target user. Specifically, when a record of historical behavior information of the target user exists in the database of the terminal, the current target user is determined to be the first user type, and a user of the first user type may be called an old user; when no record of historical behavior information of the target user exists in the database of the terminal, the user type of the current target user is determined to be the second user type, and a user of the second user type may be called a new user.
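The decision above can be sketched as a simple lookup: a user with any recorded historical behavior is routed to the first (old-user) type, otherwise to the second (new-user) type. The function and data names below are illustrative assumptions, not identifiers from the application.

```python
FIRST_USER_TYPE = "old_user"   # has historical behavior records in the database
SECOND_USER_TYPE = "new_user"  # no historical behavior records

def classify_user_type(user_id, history_db):
    """Route a target user by presence of historical behavior records."""
    return FIRST_USER_TYPE if user_id in history_db else SECOND_USER_TYPE

# hypothetical database of historical behavior records, keyed by user id
history_db = {"u1001": [{"order_amount": 3.5, "paid": True}]}
print(classify_user_type("u1001", history_db))  # old_user
print(classify_user_type("u2002", history_db))  # new_user
```

The user type returned here is then used to select the first or second prediction model.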
The user type of the target user is used for the terminal to select a target prediction model for target user behavior prediction.
Step 106, determining a target prediction model according to the user type of the target user.
In the embodiment of the application, the terminal determines the target prediction model for predicting the behavior of the target user according to the user type of the target user. When the user type of the target user is the first user type, the terminal determines the first prediction model as a target prediction model, namely a prediction model for the old user; and when the user type of the target user is the second user type, the terminal determines the second prediction model as a target prediction model, namely a prediction model for a new user.
Step 108, inputting the offline information and the real-time information of the target user into a target prediction model, and performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
In the embodiment of the application, the terminal inputs the offline information and the real-time information into the target prediction model corresponding to the user type of the target user. Based on the offline information and the real-time information, the target prediction model extracts target user features for processing by its classification decision layer, and the classification decision layer predicts the behavior of the target user according to the target user features to obtain the prediction result corresponding to the target user.
Optionally, before the offline information and the real-time information of the target user are input into the target prediction model, the terminal performs data cleaning on them, for example removing useless fields or re-extracting missing fields, to obtain cleaned offline information and real-time information, and then applies the processing of step 108 to the cleaned information.
Optionally, when the user type of the target user is the first user type, the classification decision layer of the target prediction model may include multiple reputation levels. The classification decision layer calculates, from the target user features, the probability that the target user belongs to each reputation level, and determines the reputation level with the highest probability as the prediction result of the target user.
Optionally, when the user type of the target user is the second user type, the classification decision layer of the target prediction model may be a binary classification algorithm. The classification decision layer classifies, according to the target user features, the behavior of the target user as having a higher arrearage risk or a lower arrearage risk, and uses the classification result as the prediction result of the target user.
According to the behavior prediction method, the user type of the target user can be determined from the real-time information and the offline information in the user information of the target user, and for each user type, the offline features and real-time features of the target user are processed by the targeted prediction model corresponding to that user type, so that a relatively accurate prediction result of the user behavior can be obtained.
In one embodiment, as shown in fig. 2, step 108 of performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user includes:
Step 202, in the first prediction model, extracting features of the offline information and real-time information of the target user to obtain a first feature and a second feature of the target user.
The user type of the target user is a first user type, and the target prediction model is a first prediction model.
In the embodiment of the application, after the terminal determines the target prediction model to be the first prediction model, the terminal performs feature extraction on the offline information and the real-time information of the target user whose user type is the first user type, to obtain the first feature and the second feature of the target user. Specifically, taking offline information including the user portrait, user behavior, city information and user time sequence information, and real-time information including the code scanning context as an example, the terminal performs feature extraction on the user portrait, the user behavior information, the city information and the code scanning context through the first prediction model to obtain user portrait features, user behavior features, city information features and code scanning context features, which together serve as the first feature of the target user.
Specifically, the second feature is the user time sequence feature: the historical arrearage information in a preset time period is discretized to obtain discretized time steps, and the historical arrearage information of the target user in each discretized time step is extracted to obtain the user time sequence information.
The terminal performs feature extraction on the user portrait of the target user through the user portrait embedding layer to obtain the user portrait features, on the user behavior through the user behavior embedding layer to obtain the user behavior features, on the city information through the city information embedding layer to obtain the city information features, and on the code scanning context through the context embedding layer to obtain the code scanning context features.
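The per-source embedding layers above can be sketched as lookup tables, one per information source. This is a minimal illustrative sketch: the vocabulary sizes, embedding dimension of 8, and random initialization are assumptions, not values from the application.

```python
import numpy as np

rng = np.random.default_rng(0)

class EmbeddingLayer:
    """Lookup table mapping a categorical id to a dense feature vector."""
    def __init__(self, vocab_size, dim):
        self.table = rng.normal(scale=0.1, size=(vocab_size, dim))

    def __call__(self, idx):
        return self.table[idx]

# one embedding layer per information source, mirroring the four layers above
user_portrait_emb = EmbeddingLayer(vocab_size=100, dim=8)
user_behavior_emb = EmbeddingLayer(vocab_size=100, dim=8)
city_info_emb = EmbeddingLayer(vocab_size=50, dim=8)
scan_context_emb = EmbeddingLayer(vocab_size=200, dim=8)

# the four embedded vectors together form the first feature of the target user
first_features = [user_portrait_emb(3), user_behavior_emb(17),
                  city_info_emb(5), scan_context_emb(42)]
print([f.shape for f in first_features])  # [(8,), (8,), (8,), (8,)]
```

In practice each source would be encoded from its raw fields; here single integer ids stand in for that encoding.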
Step 204, carrying out feature fusion on the first feature and the second feature to obtain a fused first target user feature.
In the embodiment of the application, in a first prediction model, a terminal performs feature fusion according to a feature interaction layer in the first prediction model to obtain a fused first target user feature. The first target user feature is used for representing a feature vector of a target user with a user type being the first user type, namely, the behavior feature and the environment feature of the user with the first user type (namely, the old user) are reflected, and then, the behavior of the target user can be predicted through the first prediction model and the first target user feature.
Step 206, performing classification prediction processing on the first target user feature to obtain a prediction result corresponding to the target user.
In the embodiment of the present application, the classification result of the first prediction model may include a plurality of reputation levels. In the classification prediction process of the first prediction model, the terminal predicts the behavior classification of the target user according to the first target user feature, determines the probability of the target user corresponding to each reputation level, and obtains the prediction result corresponding to the target user according to these probabilities. Optionally, the reputation levels are divided into low-risk users, medium-risk users and high-risk users, and different behavior rules can be set for target users of different reputation levels. For example, when the reputation level of the target user is a low-risk user, the behavior of the target user is not limited, and the target user can directly scan the code to unlock a shared bicycle; when the reputation level of the target user is a medium-risk user, the behavior rule of the target user is a prompt to pay outstanding orders promptly; when the reputation level of the target user is a high-risk user, the behavior rule of the target user requires a pre-stored amount in advance, and the target user can scan the code to unlock the shared bicycle only after the pre-stored amount condition is met.
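The mapping from reputation level to behavior rule described above can be sketched as follows; the level names, probability values and rule strings are illustrative assumptions.

```python
def reputation_level(probs):
    """Pick the reputation level with the highest predicted probability."""
    return max(probs, key=probs.get)

def behavior_rule(level):
    """Map a reputation level to the corresponding behavior rule."""
    rules = {
        "low_risk": "no limit: scan the code to unlock directly",
        "medium_risk": "prompt to pay outstanding orders promptly",
        "high_risk": "require a pre-stored amount before unlocking",
    }
    return rules[level]

# hypothetical per-level probabilities output by the classification decision layer
probs = {"low_risk": 0.2, "medium_risk": 0.7, "high_risk": 0.1}
level = reputation_level(probs)
print(level)                  # medium_risk
print(behavior_rule(level))   # prompt to pay outstanding orders promptly
```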
In this embodiment, when the user type of the target user is the first user type, the offline information and the real-time information of the target user are extracted and fused according to the first prediction model, and the offline information and the real-time information of the target user can enable the first target user feature of the target user to have pertinence, so that the accuracy of predicting the behavior of the target user is improved.
In one embodiment, as shown in fig. 3, step 202 of extracting features from offline information and real-time information of a target user, obtaining a first feature and a second feature of the target user includes:
Step 302, extracting features of the user portrait, user behavior, city information and code scanning context of the target user through a feature embedding layer in the first prediction model to obtain the first feature of the target user.
Wherein the offline information comprises the user portrait, user behavior, city information and user time sequence information; the real-time information contains the code scanning context.
In the embodiment of the application, the user portrait of the target user can comprise the user age, user gender, user registration duration and user attribution; the user behavior of the target user may include the order amount, arrearage order amount, arrearage rate, order price, order duration and average arrearage duration within the last 7, 14, 30, 60 and 90 days, as well as the duration since the last use; the city information can comprise the province to which the city belongs (using the provincial code), the number of city stations, the number of vehicles deployed in the city, the city vehicle efficiency, the city user volume, the city service area population and the city POI (point of interest) density, where the city POI density may be the shared-bicycle deployment density; the code scanning context may include the code scanning time period, the day-of-week attribute of the code scanning, the mobile phone model, the client type, the real-time weather, the model and power of the scanned vehicle, and the number of shared bicycles within a 50 m radius of the code scanning position.
Optionally, the terminal normalizes the user portrait, user behavior, city information and code scanning context of the target user.
Fig. 4 is an internal structure diagram of the first prediction model, as shown in fig. 4, the terminal performs feature extraction on a user portrait, user behavior, city information and a code scanning context of the target user through a feature embedding layer of the first prediction model, so as to obtain a feature vector corresponding to the user portrait, a feature vector corresponding to the user behavior, a feature vector corresponding to the city information and a feature vector corresponding to the code scanning context, and the feature vector is used as a first feature of the target user.
Step 304, extracting the user time sequence information of the target user through the long short-term memory (LSTM) structure of the first prediction model to obtain the time sequence feature of the user's historical behavior as the second feature of the target user.
In the embodiment of the present application, as shown in fig. 4, the first prediction model includes a long short-term memory (LSTM) structure, which is used to extract the time sequence information of the target user. The time sequence information may include, for example, the arrearage amounts of the target user over the last 14 days; as shown in fig. 4, h1 represents the initial vector, f1 represents the first of the last 14 days, x1 represents the arrearage amount of that day, and h represents the output time sequence feature vector.
The terminal selects the arrearage amounts of the target user on the corresponding dates through the LSTM structure and extracts the corresponding dates and arrearage amounts. Using the LSTM structure, the temporal pattern of the user's historical arrearage can be captured to obtain the second feature of the target user, namely the time sequence feature of the target user's historical behavior.
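The LSTM pass over the daily arrearage sequence can be sketched with a minimal single-layer cell in numpy. This is a hand-rolled illustration under stated assumptions: random untrained weights, a hidden dimension of 16, and scalar daily inputs; a real implementation would use a trained LSTM layer from a deep-learning framework.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_timing_feature(xs, hidden_dim, rng):
    """Run a single-layer LSTM over scalar daily inputs and return the final
    hidden state h, i.e. the user time sequence (second) feature."""
    input_dim = 1
    Wf, Wi, Wo, Wc = (rng.normal(scale=0.1, size=(hidden_dim, hidden_dim + input_dim))
                      for _ in range(4))
    h = np.zeros(hidden_dim)  # initial vector (h1 in fig. 4)
    c = np.zeros(hidden_dim)  # cell state
    for x in xs:
        z = np.concatenate([h, [x]])
        f = sigmoid(Wf @ z)              # forget gate
        i = sigmoid(Wi @ z)              # input gate
        o = sigmoid(Wo @ z)              # output gate
        c = f * c + i * np.tanh(Wc @ z)  # update cell state
        h = o * np.tanh(c)               # new hidden state
    return h

# hypothetical arrearage counts for each of the last 14 days
arrears_14d = [0, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0, 1]
h = lstm_timing_feature(arrears_14d, hidden_dim=16, rng=np.random.default_rng(0))
print(h.shape)  # (16,)
```

The final hidden state h plays the role of the time sequence feature vector fused with the embedded features downstream.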
In this embodiment, the offline information and the real-time information of the target user may be extracted through different structures of the first prediction model, so as to obtain a first feature and a second feature capable of representing the features of the target user of the first user type, where the features contained in the first feature and the second feature comprehensively embody the features of the target user and the environment where the target user is located, and according to the information contained in the first feature and the second feature, the accuracy of predicting the behavior of the target user of the first user type may be improved.
In one embodiment, as shown in fig. 5, step 204 performs feature fusion on the first feature and the second feature to obtain a fused first target user feature, including:
step 502, performing feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features.
The first features comprise user portrait features, user behavior features, city information features and code scanning context features.
In the embodiment of the application, as shown in fig. 4, the terminal performs feature fusion on feature vectors corresponding to user image features, user behavior features, city information features and code scanning context features respectively through an embedded fusion structure of the first prediction model, and initially obtains a first fusion feature for determining a first target user feature.
Step 504, fusing the first fused feature with the second feature of the target user through the feature interaction layer of the first prediction model to obtain the fused first target user feature.
In the embodiment of the application, after the first prediction model determines the second feature (i.e., the user time sequence feature) of the target user through the long short-term memory structure, the terminal performs feature fusion on the first fusion feature and the second feature through the feature interaction layer of the first prediction model to obtain the first target user feature. The first target user feature can represent the features corresponding to the complete real-time information and offline information of the target user whose user type is the first user type.
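One common way to realize such a fusion step, sketched here as an assumption since the application does not specify the layer internals, is to concatenate the embedded features with the timing feature and pass the result through a dense layer with a ReLU activation. All dimensions and weights below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def feature_interaction(first_features, second_feature, out_dim=16):
    """Concatenate the embedded (first) features with the LSTM timing (second)
    feature and pass the result through one dense interaction layer with ReLU."""
    z = np.concatenate(list(first_features) + [second_feature])
    W = rng.normal(scale=0.1, size=(out_dim, z.size))
    b = np.zeros(out_dim)
    return np.maximum(0.0, W @ z + b)  # ReLU

# hypothetical inputs: four 8-dim embedded vectors plus a 16-dim timing feature
first_features = [rng.normal(size=8) for _ in range(4)]
second_feature = rng.normal(size=16)
first_target_user_feature = feature_interaction(first_features, second_feature)
print(first_target_user_feature.shape)  # (16,)
```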
In this embodiment, the feature interaction layer of the first prediction model may obtain the first target user feature for target user behavior prediction, and at the same time, the first target user feature may embody real-time information and offline information corresponding to the complete target user, so as to improve accuracy of the first prediction model in predicting the target user behavior of the first user type.
In one embodiment, step 206 performs a classification prediction process on the first target user feature to obtain a prediction result corresponding to the target user, including:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
In an embodiment of the present application, as shown in fig. 4, the first classification decision layer of the first prediction model may be a softmax function (a multi-class classification function). The terminal inputs the first target user feature into the first classification decision layer of the first prediction model; the first classification decision layer calculates the probability of each of a plurality of preset classification results (i.e., reputation levels) for the target user, determines the classification result of the target user, that is, the reputation level of the target user, and outputs the determined reputation level as the prediction result corresponding to the first user type.
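The softmax decision described above can be sketched as a linear projection to one logit per reputation level followed by a softmax and an argmax. The level names, feature dimension and random weights are assumptions for illustration.

```python
import numpy as np

REPUTATION_LEVELS = ["low_risk", "medium_risk", "high_risk"]

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

def first_classification_decision(first_target_user_feature, W, b):
    """Project the fused feature to one logit per reputation level and pick
    the level with the highest softmax probability."""
    probs = softmax(W @ first_target_user_feature + b)
    return REPUTATION_LEVELS[int(np.argmax(probs))], probs

rng = np.random.default_rng(2)
feature = rng.normal(size=16)             # hypothetical first target user feature
W = rng.normal(scale=0.1, size=(3, 16))   # untrained illustrative weights
b = np.zeros(3)
level, probs = first_classification_decision(feature, W, b)
print(level in REPUTATION_LEVELS, np.isclose(probs.sum(), 1.0))  # True True
```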
In this embodiment, the classification decision layer of the first prediction model classifies the reputation level of the target user, so that a prediction result corresponding to the first user type can be determined, and the prediction of the target user behavior is realized.
In one embodiment, as shown in fig. 6, step 108 of performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user includes:
Step 602, in the second prediction model, performing feature extraction on the offline information and the real-time information of the target user to obtain a third feature of the target user.
The user type of the target user is a second user type, and the target prediction model is a second prediction model.
The third feature comprises a user portrait feature, a city information feature and a code scanning context feature.
In the embodiment of the application, after the terminal determines the target prediction model to be the second prediction model, the terminal performs feature extraction on the offline information and the real-time information of the target user whose user type is the second user type, to obtain the third feature of the target user. Specifically, taking offline information including the user portrait and city information, and real-time information including the code scanning context as an example, the terminal performs feature extraction on the user portrait, city information and code scanning context through the second prediction model to obtain user portrait features, city information features and code scanning context features as the third feature of the target user.
Step 604, performing feature fusion on the user portrait features, the city information features and the code scanning context features in the third feature to obtain a fused second target user feature.
The third feature comprises a user portrait feature, a city information feature and a code scanning context feature.
In the embodiment of the present application, the offline information of the target user (i.e., the new user) of the second user type does not include user behavior information and user timing information.
In the second prediction model, the terminal performs feature fusion on the user portrait feature, the city information feature and the code scanning context feature of the target user whose user type is the second user type, through the feature interaction layer of the second prediction model, to obtain the fused second target user feature. The second target user feature is used to represent the feature vector of a target user of the second user type, that is, it reflects the behavior features and environment features of the new user, and the behavior of the target user can then be predicted by the second prediction model according to the second target user feature.
Step 606, performing classification prediction processing on the second target user feature to obtain a prediction result corresponding to the target user.
In the embodiment of the application, the user features included in the second target user feature obtained by the second prediction model are simpler than the first target user feature. The prediction result of the second prediction model is the probability of arrearage behavior by the user: when the probability of arrearage behavior of the target user is greater than a preset threshold, the target user is determined to have a higher arrearage risk; when the probability is smaller than the preset threshold, the target user is determined to have a lower arrearage risk. Specifically, the terminal performs classification prediction processing on the target user of the second user type according to the second prediction model and outputs the prediction result, and then determines whether the prediction result corresponding to the target user represents a higher or lower arrearage risk.
When the predicted result of the target user is a higher arrearage risk, the behavior rule of the target user requires a pre-stored amount, and the target user can scan the code to unlock the shared bicycle only after the pre-stored amount condition is met; when the predicted result of the target user is a lower arrearage risk, the behavior of the target user is not limited, and the target user can directly scan the code to unlock the shared bicycle.
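The threshold decision of the second model can be sketched as a sigmoid over a logit compared against the preset threshold; the threshold value of 0.5 and the function names are assumptions.

```python
import math

def arrearage_risk(logit, threshold=0.5):
    """Binary decision of the second model: sigmoid probability of arrearage
    behavior compared against a preset threshold."""
    p = 1.0 / (1.0 + math.exp(-logit))
    return ("higher_risk" if p > threshold else "lower_risk", p)

print(arrearage_risk(1.2))   # ('higher_risk', 0.768...)
print(arrearage_risk(-0.8))  # ('lower_risk', 0.310...)
```

A higher-risk result would then trigger the pre-stored amount rule described above, and a lower-risk result would leave the user unrestricted.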
In this embodiment, when the user type of the target user is the second user type, the offline information and the real-time information of the target user are extracted and fused according to the second prediction model. The offline information and the real-time information make the second target user feature targeted to the target user, so that the accuracy of predicting the behavior of the target user is improved.
In one embodiment, step 602 performs feature extraction on offline information and real-time information of the target user in the second prediction model to obtain a third feature of the target user, including:
and extracting the characteristics of the user image, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third characteristic of the target user.
Wherein, the offline information comprises user portraits and city information; the real-time information includes a scan code context.
In the embodiment of the present application, the user portrait, city information and code scanning context of the target user have the same attributes as those of the target user of the first user type in step 302, and will not be described here again.
As shown in fig. 7, fig. 7 is the internal model structure of the second prediction model. The terminal performs feature extraction on the user portrait, the city information and the code scanning context of the target user through the feature embedding layer of the second prediction model to obtain the feature vector corresponding to the user portrait, the feature vector corresponding to the city information and the feature vector corresponding to the code scanning context, which together serve as the third feature of the target user.
In this embodiment, the feature extraction is performed on the target user of the second user type through the second prediction model, so that a third feature with pertinence to the second user type can be obtained, and the accuracy of predicting the target user behavior of the second user type can be improved according to the information contained in the third feature.
In one embodiment, step 604 performs feature fusion on the user portrait feature, the city information feature, and the scan context feature in the third feature to obtain a fused second target user feature, including:
And carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third feature through the embedded fusion structure of the second prediction model to obtain the second target user feature.
The third feature comprises a user portrait feature, a city information feature and a code scanning context feature.
In the embodiment of the application, as shown in fig. 7, the terminal performs feature fusion on the feature vectors corresponding to the user portrait features, city information features and code scanning context features of the target user of the second user type through the embedded fusion structure of the second prediction model, so as to obtain the second target user feature.
In this embodiment, the embedded fusion structure of the second prediction model obtains the second target user characteristic having pertinence to the target user of the second user type, so that the accuracy of predicting the target user behavior of the second user type by the second prediction model can be improved.
In one embodiment, step 606 performs a classification prediction process on the second target user feature to obtain a prediction result corresponding to the target user, including:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
In the embodiment of the present application, as shown in fig. 7, the second classification decision layer of the second prediction model may be a sigmoid function (a binary classification function). The terminal inputs the second target user feature into the second classification decision layer of the second prediction model; the second classification decision layer performs binary classification on the target user according to the second target user feature to obtain the classification result corresponding to the target user of the second user type, that is, the arrearage risk degree of the target user, and uses the arrearage risk degree as the prediction result of the target user of the second user type.
In this embodiment, the classification decision layer of the second prediction model classifies the arrearage risk degree of the target user, so as to determine the prediction result corresponding to the second user type, and implement prediction of the target user behavior.
In one embodiment, as shown in fig. 8, a method for training a predictive model is provided, the method comprising:
step 802, a plurality of first training samples and a first prediction model corresponding to the first training samples are obtained.
The first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type.
In the embodiment of the application, the first training sample can be obtained according to the historical user and the behavior and environment information of the historical user recorded by the terminal or the server database. Optionally, the terminal performs data cleaning on the behavior and the environmental information of the historical user to obtain a first training sample which can be used for training the first prediction model.
Step 804, inputting each first training sample in the plurality of first training samples into the first prediction model, and performing data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample.
In the embodiment of the application, after each first training sample in a plurality of first training samples is input into a first prediction model, a terminal performs feature extraction, feature fusion and user behavior prediction on each first training sample according to offline information and real-time information of each first training sample to obtain a first prediction result corresponding to each first training sample.
Step 806, calculating a first loss of the first prediction model according to the first prediction result corresponding to each first training sample, the first sample label included in each first training sample and a preset first loss function, and completing training of the first prediction model when the first loss meets a preset training condition to obtain a target prediction model corresponding to the first user type.
In the embodiment of the application, the preset training condition can be a preset number of iterations or a preset loss threshold, which is not limited by the embodiment of the application. When the first loss of the first prediction model meets the preset training condition, the terminal stops the iteration of the first prediction model and takes the current first prediction model as the target prediction model corresponding to the first user type.
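The iterate-until-condition loop of steps 802 to 806 can be sketched with a simple gradient-descent trainer. As an illustrative assumption, a logistic model with a cross-entropy loss stands in for the first prediction model; the application does not specify the loss function or optimizer.

```python
import numpy as np

def train_until_condition(X, y, lr=0.5, loss_threshold=0.3, max_iters=500):
    """Iterate until the loss meets the preset condition: either the loss
    drops below a threshold or the iteration cap is reached."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    loss = float("inf")
    for _ in range(max_iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        if loss < loss_threshold:               # preset training condition met
            break
        g = p - y                               # gradient of the loss w.r.t. logits
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b, loss

# toy separable sample: label 1 marks arrearage behavior
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b, loss = train_until_condition(X, y)
print(loss < 0.3)
```

The break on the loss threshold corresponds to "completing training when the first loss meets the preset training condition" in step 806.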
In this embodiment, the first prediction model is trained through the first training sample, so as to obtain a trained first prediction model, and the trained first prediction model can implement accurate prediction of the target user of the first user type in a real scene.
In one embodiment, as shown in fig. 9, step 804 performs data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample, including:
Step 902, extracting features of the offline information and real-time information of the plurality of first training samples according to the first prediction model to obtain the first feature and the second feature corresponding to each first training sample.
In the embodiment of the application, the terminal performs feature extraction on the user portrait of the target user through the user portrait embedding layer in the first prediction model to obtain the user portrait features, on the user behavior through the user behavior embedding layer to obtain the user behavior features, on the city information through the city information embedding layer to obtain the city information features, and on the code scanning context through the context embedding layer to obtain the code scanning context features; the user portrait features, user behavior features, city information features and code scanning context features are determined as the first feature corresponding to each first training sample. The terminal further performs feature extraction on the time sequence information of each first training sample through the long short-term memory structure in the first prediction model to obtain the user time sequence feature corresponding to each first training sample, which is determined as the second feature.
Step 904, carrying out feature fusion on the first features corresponding to each first training sample and the second features corresponding to each first training sample to obtain third target user features.
In the embodiment of the application, the terminal performs feature fusion on the first features and the second features corresponding to each first training sample according to the feature interaction layer in the first prediction model to obtain the fused third target user features.
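The fusion performed by the feature interaction layer can take many concrete forms; one common choice is to concatenate the field vectors and append pairwise inner products as explicit interaction terms. The sketch below assumes that form — the patent does not specify the interaction function, so this is illustrative only.

```python
import numpy as np

def fuse(field_feats, seq_feat):
    """Concatenate the field embeddings with the sequence feature, then
    append pairwise inner products as explicit interaction terms."""
    vecs = list(field_feats) + [seq_feat]
    inter = [float(vecs[i] @ vecs[j])
             for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return np.concatenate(vecs + [np.array(inter)])

rng = np.random.default_rng(1)
firsts = [rng.normal(size=4) for _ in range(4)]  # portrait/behavior/city/context
second = rng.normal(size=4)                      # timing feature from the LSTM
fused = fuse(firsts, second)
print(fused.shape)   # 5*4 raw dims + C(5,2)=10 interactions -> (30,)
```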
Step 906, performing classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
In the embodiment of the application, the terminal performs behavior classification prediction on each first training sample according to the third target user characteristics, determines a classification result corresponding to each first training sample, and determines a first prediction result based on the classification result.
In this embodiment, feature extraction is performed on each first training sample through the user portrait embedding layer, the user behavior embedding layer, the city information embedding layer, the context embedding layer and the long-short-term memory structure, and feature fusion is performed on the first feature and the second feature according to the embedding fusion layer and the feature interaction layer of the first prediction model, so that a first prediction result for training the first prediction model can be obtained, and the training effect of the first prediction model can be reflected according to the first prediction result.
In one embodiment, as shown in fig. 10, a method for training a predictive model is provided, the method comprising:
Step 1002, acquiring a plurality of second training samples and a second prediction model corresponding to the second training samples.
The second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type.
In the embodiment of the application, the terminal obtains, from the historical user information in the terminal or server database, the user information recorded when each historical user scanned a code for the first time, together with the user behavior and the environment information at the time of that first code scan.
Step 1004, inputting each second training sample in the plurality of second training samples into a second prediction model, and performing data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample.
In the embodiment of the application, after each second training sample in a plurality of second training samples is input into a second prediction model, the terminal performs feature extraction, feature fusion and user behavior prediction on each second training sample according to the offline information and the real-time information of each second training sample to obtain a second prediction result corresponding to each second training sample.
Step 1006, calculating a second loss of the second prediction model according to the second prediction result corresponding to each second training sample, the second sample label corresponding to each second training sample and the second loss function, and completing training of the second prediction model when the second loss meets the preset training condition, so as to obtain a target prediction model corresponding to the second user type.
In the embodiment of the application, the preset training condition may be reaching a preset number of iterations or a preset loss threshold; when the second loss of the second prediction model meets the preset training condition, the terminal stops iterating the second prediction model and takes the current second prediction model as the target prediction model corresponding to the second user type.
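The stopping rule described here — iterate until the loss falls below a preset threshold or a preset iteration count is reached — can be sketched with a generic gradient-descent loop. The logistic model and synthetic data below are placeholders; the patent does not specify the concrete form of the second loss function, so this is an assumed stand-in.

```python
import numpy as np

def train(X, y, max_iters=500, loss_threshold=0.30, lr=0.1):
    """Gradient descent that stops when the loss meets the preset
    training condition: below a loss threshold, or out of iterations."""
    w = np.zeros(X.shape[1])
    loss = float("inf")
    for it in range(max_iters):
        p = 1 / (1 + np.exp(-(X @ w)))           # sigmoid predictions
        loss = -np.mean(y * np.log(p + 1e-9)
                        + (1 - y) * np.log(1 - p + 1e-9))
        if loss < loss_threshold:                # preset condition met
            return w, loss, it
        w -= lr * X.T @ (p - y) / len(y)         # BCE gradient step
    return w, loss, max_iters

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)  # separable labels
w, loss, iters = train(X, y)
print(loss < 0.5)   # True: loss decreases from ln(2) ≈ 0.693
```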
In this embodiment, the second prediction model is trained through the second training sample, so as to obtain a trained second prediction model, and the trained second prediction model can implement accurate prediction of the target user of the second user type in the real scene.
In one embodiment, as shown in fig. 11, step 1004 performs data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample, including:
Step 1102, extracting features of the offline information and the real-time information of the plurality of second training samples according to the second prediction model to obtain the third feature corresponding to each second training sample.
The third feature comprises a user portrait feature, a city information feature and a code scanning context feature.
In the embodiment of the application, the terminal performs feature extraction on the user portrait, the city information and the code scanning context of each second training sample in the plurality of second training samples to obtain the third feature corresponding to each second training sample.
Step 1104, carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample to obtain fourth target user features.
In the embodiment of the application, the terminal performs feature fusion on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample according to the feature interaction layer in the second prediction model to obtain the fused fourth target user features.
Step 1106, performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
In the embodiment of the application, the terminal performs behavior classification prediction on each second training sample according to the fourth target user characteristic, determines a classification result corresponding to each second training sample, and takes the classification result as a second prediction result.
In this embodiment, feature extraction is performed on each second training sample through the user portrait embedding layer, the city information embedding layer and the context embedding layer, feature fusion is performed on the user portrait features, the city information features and the code scanning context features in the third features according to the embedding fusion layer and the feature interaction layer of the second prediction model, so that a second prediction result for training the second prediction model can be obtained, and a training effect of the second prediction model can be reflected according to the second prediction result.
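Putting the second model's steps together — field embeddings, fusion, and a classification decision layer — yields a pipeline like the following sketch. The linear-plus-sigmoid decision layer, the concatenation fusion, and all dimensions are assumptions for illustration rather than the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 4  # hypothetical embedding width
emb = {f: rng.normal(size=(10, D))
       for f in ("portrait", "city", "scan_context")}
W = rng.normal(size=3 * D)   # decision-layer weights
b = 0.0

def predict_second(portrait_id, city_id, ctx_id):
    """Embed the three fields, fuse by concatenation, then score with a
    linear + sigmoid decision layer -> target behavior probability."""
    fused = np.concatenate([emb["portrait"][portrait_id],
                            emb["city"][city_id],
                            emb["scan_context"][ctx_id]])
    return float(1 / (1 + np.exp(-(W @ fused + b))))

p = predict_second(0, 3, 7)
print(0.0 < p < 1.0)   # True
```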
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include a plurality of sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least part of the other steps or of the sub-steps or stages of the other steps.
Based on the same inventive concept, the embodiment of the application also provides a behavior prediction device for implementing the behavior prediction method. The implementation of the solution provided by the device is similar to that described in the above method; therefore, for the specific limitations in the one or more behavior prediction device embodiments provided below, reference may be made to the limitations of the behavior prediction method above, which are not repeated here.
In one embodiment, as shown in fig. 12, there is provided a behavior prediction apparatus 1200 comprising: an acquisition module 1201, a first determination module 1202, a second determination module 1203, and a data processing module 1204, wherein:
an obtaining module 1201, configured to obtain user information of a target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
a first determining module 1202, configured to determine a user type of the target user according to the offline information and the real-time information;
a second determining module 1203, configured to determine a target prediction model according to the user type of the target user;
a data processing module 1204, configured to input the offline information and the real-time information of the target user into the target prediction model, and perform data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
in the first prediction model, extracting the characteristics of the offline information and the real-time information of the target user to obtain a first characteristic and a second characteristic of the target user;
feature fusion is carried out on the first features and the second features, and fused first target user features are obtained;
and carrying out classification prediction processing on the first target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
extracting features of user portraits, user behaviors, city information and code scanning contexts of the target user through a feature embedding layer in the first prediction model to obtain first features of the target user;
and extracting the user time sequence information of the target user through the long short-term memory structure of the first prediction model to obtain the time sequence characteristics of the user history behavior as the second characteristics of the target user.
In one embodiment, the data processing module is specifically configured to:
carrying out feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features;
And fusing the first fused features with the second features of the target user through a feature interaction layer of the first prediction model to obtain fused first target user features.
In one embodiment, the data processing module is specifically configured to:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
In one embodiment, the data processing module is specifically configured to:
in the second prediction model, extracting the characteristics of the offline information and the real-time information of the target user to obtain a third feature of the target user; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
feature fusion is carried out on the user portrait features, the city information features and the code scanning context features in the third features, and fused second target user features are obtained;
and performing classification prediction processing on the second target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the data processing module is specifically configured to:
and extracting the characteristics of the user portrait, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third feature of the target user.
In one embodiment, the data processing module is specifically configured to:
and carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features through an embedded fusion structure of the second prediction model to obtain second target user features.
In one embodiment, the data processing module is specifically configured to:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
The application also provides a training device of the behavior prediction model, comprising:
the second acquisition module is used for acquiring a plurality of first training samples and a first prediction model corresponding to the first training samples; the first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type;
the second data processing module is used for inputting each first training sample in the plurality of first training samples into the first prediction model, and carrying out data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample;
The first calculation module is used for calculating first loss of the first prediction model according to a first prediction result corresponding to each first training sample, a first sample label included in each first training sample and a preset first loss function, and when the first loss meets preset training conditions, training of the first prediction model is completed, and a target prediction model corresponding to the first user type is obtained.
In one embodiment, the second data processing module is specifically configured to:
performing feature extraction on the offline information and the real-time information of a plurality of first training samples according to the first prediction model to obtain first features and second features corresponding to each first training sample;
carrying out feature fusion on the first features and the second features corresponding to each first training sample to obtain third target user features;
and carrying out classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
The application also provides a training device of the behavior prediction model, comprising:
the third acquisition module is used for acquiring a plurality of second training samples and second prediction models corresponding to the second training samples; the second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type;
The third data processing module is used for inputting each second training sample in the plurality of second training samples into the second prediction model, and carrying out data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample;
the second calculation module is used for calculating a second loss of the second prediction model according to the second prediction result corresponding to each second training sample, the second sample label corresponding to each second training sample and the second loss function, and completing training of the second prediction model when the second loss meets the preset training condition, so as to obtain a target prediction model corresponding to the second user type.
In one embodiment, the third data processing module is specifically configured to:
performing feature extraction on the offline information and the real-time information of the plurality of second training samples according to the second prediction model to obtain third features corresponding to each second training sample; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
feature fusion is carried out on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample, and fourth target user features are obtained;
And performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
The respective modules in the behavior prediction apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 13. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing user history data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a behavior prediction method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 13 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring user information of a target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
determining the user type of the target user according to the offline information and the real-time information;
determining a target prediction model according to the user type of the target user;
and inputting the offline information and the real-time information of the target user into a target prediction model, and performing data processing on the offline information and the real-time information through the target prediction model to obtain a prediction result corresponding to the target user.
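The four steps above amount to a dispatch: determine the user's type, then route the user information to the model registered for that type. The sketch below uses a hypothetical typing rule (behavior history present → first type, i.e. a returning user) and stub models purely to show the control flow; the actual typing logic and trained models are not specified here.

```python
from dataclasses import dataclass

@dataclass
class UserInfo:
    offline: dict   # e.g. user portrait, behavior history, city info
    realtime: dict  # e.g. code-scanning context

def classify_user(info: UserInfo) -> str:
    """Hypothetical rule: any recorded behavior history -> first type
    (returning user); otherwise second type (first-time scanner)."""
    return "first" if info.offline.get("behavior") else "second"

def predict(info: UserInfo, models: dict):
    """Route the user information to the model for the user's type."""
    return models[classify_user(info)](info)

# Stub models standing in for the trained first/second prediction models.
models = {"first": lambda info: "reputation_level",
          "second": lambda info: "target_behavior_probability"}

new_user = UserInfo(offline={"portrait": "p1", "behavior": []},
                    realtime={"scan_ctx": "qr"})
print(predict(new_user, models))   # target_behavior_probability
```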
In one embodiment, the processor when executing the computer program further performs the steps of:
in the first prediction model, extracting the characteristics of the offline information and the real-time information of the target user to obtain a first characteristic and a second characteristic of the target user;
feature fusion is carried out on the first features and the second features, and fused first target user features are obtained;
and carrying out classification prediction processing on the first target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the processor when executing the computer program further performs the steps of:
extracting features of user portraits, user behaviors, city information and code scanning contexts of the target user through a feature embedding layer in the first prediction model to obtain first features of the target user;
and extracting the user time sequence information of the target user through the long short-term memory structure of the first prediction model to obtain the time sequence characteristics of the user history behavior as the second characteristics of the target user.
In one embodiment, the processor when executing the computer program further performs the steps of:
carrying out feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features;
And fusing the first fused features with the second features of the target user through a feature interaction layer of the first prediction model to obtain fused first target user features.
In one embodiment, the processor when executing the computer program further performs the steps of:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
In one embodiment, the processor when executing the computer program further performs the steps of:
in the second prediction model, extracting the characteristics of the offline information and the real-time information of the target user to obtain a third feature of the target user; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
feature fusion is carried out on the user portrait features, the city information features and the code scanning context features in the third features, and fused second target user features are obtained;
and performing classification prediction processing on the second target user characteristics to obtain a prediction result corresponding to the target user.
In one embodiment, the processor when executing the computer program further performs the steps of:
And extracting the characteristics of the user portrait, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third feature of the target user.
In one embodiment, the processor when executing the computer program further performs the steps of:
and carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features through an embedded fusion structure of the second prediction model to obtain second target user features.
In one embodiment, the processor when executing the computer program further performs the steps of:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a plurality of first training samples and a first prediction model corresponding to the first training samples; the first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type;
inputting each first training sample in the plurality of first training samples into a first prediction model, and performing data processing on offline information and real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample;
According to the first prediction result corresponding to each first training sample, the first sample label included in each first training sample and a preset first loss function, calculating first loss of the first prediction model, and when the first loss meets a preset training condition, completing training of the first prediction model to obtain a target prediction model corresponding to the first user type.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing feature extraction on the offline information and the real-time information of a plurality of first training samples according to the first prediction model to obtain first features and second features corresponding to each first training sample;
carrying out feature fusion on the first features and the second features corresponding to each first training sample to obtain third target user features;
and carrying out classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a plurality of second training samples and a second prediction model corresponding to the second training samples; the second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type;
Inputting each second training sample in the plurality of second training samples into a second prediction model, and performing data processing on the offline information and the real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each second training sample;
and calculating a second loss of the second prediction model according to the second prediction result corresponding to each second training sample, the second sample label corresponding to each second training sample and the second loss function, and completing training of the second prediction model when the second loss meets a preset training condition to obtain a target prediction model corresponding to the second user type.
In one embodiment, the processor when executing the computer program further performs the steps of:
performing feature extraction on the offline information and the real-time information of the plurality of second training samples according to the second prediction model to obtain third features corresponding to each second training sample; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
feature fusion is carried out on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample, and fourth target user features are obtained;
And performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic device, a data processing logic device based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the application, which are described in specific detail but are not therefore to be construed as limiting the scope of the application. It should be noted that several variations and modifications may be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application should be determined by the appended claims.

Claims (16)

1. A method of behavior prediction, the method comprising:
acquiring user information of a target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
determining the user type of the target user according to the offline information and the real-time information;
determining a target prediction model according to the user type of the target user;
when the user type of the target user is a first user type, the target prediction model is a first prediction model, and the user information of the target user is subjected to prediction processing through the first prediction model to obtain a prediction result corresponding to the target user; the first prediction model comprises a long short-term memory structure and a feature embedding layer;
when the user type of the target user is a second user type, the target prediction model is a second prediction model, and the user information of the target user is subjected to prediction processing through the second prediction model to obtain a prediction result corresponding to the target user; the second prediction model comprises a feature embedding layer.
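The claim-1 dispatch (user type selects which prediction model runs) can be sketched as follows. This is purely illustrative and not part of the claims; the function names, dictionary layout, and the rule that "users with recorded history are the first type" are all hypothetical stand-ins.

```python
# Illustrative sketch of the claim-1 dispatch: the user type determined
# from offline/real-time information selects which prediction model runs.
# All names and the user-typing rule are hypothetical.

def determine_user_type(offline_info, realtime_info):
    # Toy rule: users with recorded offline history count as the first
    # (existing-user) type; users without it as the second (new-user) type.
    return "first" if offline_info.get("history") else "second"

def predict_behavior(user_info, first_model, second_model):
    user_type = determine_user_type(user_info["offline"], user_info["realtime"])
    model = first_model if user_type == "first" else second_model
    return model(user_info)

# Stand-in models: the first returns a reputation level (cf. claim 5),
# the second a target-behaviour probability (cf. claim 9).
first_model = lambda info: {"reputation_level": "A"}
second_model = lambda info: {"target_behavior_prob": 0.5}

existing_user = {"offline": {"history": [1, 2]}, "realtime": {"scan_context": "x"}}
result = predict_behavior(existing_user, first_model, second_model)
```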
2. The method according to claim 1, wherein when the user type of the target user is a first user type, the target prediction model is a first prediction model, and the predicting the user information of the target user by using the first prediction model to obtain a prediction result corresponding to the target user includes:
in the first prediction model, extracting features of offline information and real-time information of the target user to obtain a first feature and a second feature of the target user;
performing feature fusion on the first feature and the second feature to obtain a fused first target user feature;
and carrying out classification prediction processing on the first target user characteristics to obtain a prediction result corresponding to the target user.
3. The method of claim 2, wherein the offline information includes user portraits, user behaviors, city information, and user timing information, and the real-time information includes code scanning context; in the first prediction model, extracting features of offline information and real-time information of the target user to obtain a first feature and a second feature of the target user, including:
extracting features of the user portrait, the user behavior, the city information and the code scanning context of the target user through a feature embedding layer in the first prediction model to obtain first features of the target user;
and extracting the user time-sequence information of the target user through the long short-term memory structure of the first prediction model to obtain a time-sequence feature of the user's historical behavior as a second feature of the target user.
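The two extraction branches of claim 3 can be sketched in a minimal NumPy form: an embedding lookup over the four categorical fields for the first feature, and the last hidden state of a hand-rolled LSTM cell over the historical-behaviour sequence for the second. All dimensions, tables, and weights are hypothetical, and the LSTM here is a bare textbook cell, not the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_hid = 4, 3  # hypothetical embedding / hidden sizes

# Hypothetical embedding tables for the four categorical fields of claim 3.
embed = {f: rng.normal(size=(10, d_emb))
         for f in ("portrait", "behavior", "city", "scan_context")}

def first_feature(ids):
    # First feature: embeddings of user portrait, user behavior, city
    # information and code scanning context, concatenated.
    return np.concatenate([embed[f][ids[f]] for f in embed])

Wx = rng.normal(size=(d_emb, 4 * d_hid))
Wh = rng.normal(size=(d_hid, 4 * d_hid))
b = np.zeros(4 * d_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_hidden(seq):
    # Second feature: last hidden state of an LSTM run over the user's
    # historical-behaviour embeddings (the time-sequence information).
    h, c = np.zeros(d_hid), np.zeros(d_hid)
    for x in seq:
        z = x @ Wx + h @ Wh + b
        i, f, g, o = np.split(z, 4)       # input / forget / cell / output gates
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

first = first_feature({"portrait": 1, "behavior": 2, "city": 3, "scan_context": 4})
second = lstm_last_hidden([embed["behavior"][t] for t in (2, 5, 7)])
```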
4. A method according to claim 2 or 3, wherein the first features comprise user portrayal features, user behavior features, city information features, code scanning context features; and performing feature fusion on the first feature and the second feature to obtain a fused first target user feature, wherein the feature fusion comprises the following steps:
performing feature fusion on the user portrait features, the user behavior features, the city information features and the code scanning context features through an embedded fusion structure of the first prediction model to obtain first fusion features;
and fusing the first fused features with the second features of the target user through a feature interaction layer of the first prediction model to obtain fused first target user features.
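A minimal reading of the claim-4 two-stage fusion: concatenation as the embedded fusion structure, then one dense layer as the feature interaction layer. Concatenation and a single `tanh` layer are assumptions chosen for brevity; the patent does not specify these operators.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed_fusion(field_embeddings):
    # Embedded fusion structure: merge the per-field embeddings into one
    # first-fusion feature (here simply by concatenation).
    return np.concatenate(field_embeddings)

def feature_interaction(fused, sequence_feature, W):
    # Feature interaction layer: combine the fused embedding feature with
    # the LSTM sequence feature through one dense layer.
    joint = np.concatenate([fused, sequence_feature])
    return np.tanh(joint @ W)

# Hypothetical 4-dimensional embeddings for the four fields of claim 4.
portrait, behavior, city, scan_ctx = (rng.normal(size=4) for _ in range(4))
sequence_feature = rng.normal(size=3)     # e.g. an LSTM hidden state
W = rng.normal(size=(4 * 4 + 3, 8))       # joint dim 19 -> output dim 8

first_target_user_feature = feature_interaction(
    embed_fusion([portrait, behavior, city, scan_ctx]), sequence_feature, W)
```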
5. The method of claim 2, wherein the classifying and predicting the first target user feature to obtain a prediction result corresponding to the target user includes:
and determining the reputation level of the target user according to a first classification decision layer in the first prediction model, and taking the reputation level of the target user as a prediction result corresponding to the first user type.
6. The method according to claim 1, wherein when the user type of the target user is a second user type, the target prediction model is a second prediction model, and the prediction processing is performed on the user information of the target user through the second prediction model to obtain a prediction result corresponding to the target user, including:
in the second prediction model, extracting features of the offline information and the real-time information of the target user to obtain a third feature of the target user; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
performing feature fusion on the user portrait features, the city information features and the code scanning context features in the third features to obtain fused second target user features;
and performing classification prediction processing on the second target user characteristics to obtain a prediction result corresponding to the target user.
7. The method of claim 6, wherein the offline information includes user portraits and city information, and wherein the real-time information includes a code scanning context; and in the second prediction model, extracting features of the offline information and the real-time information of the target user to obtain a third feature of the target user, including:
and extracting the characteristics of the user portrait, the city information and the code scanning context of the target user through a characteristic embedding layer in the second prediction model to obtain a third characteristic of the target user.
8. The method of claim 6, wherein the feature fusing the user portrait feature, the city information feature, and the code scanning context feature in the third feature to obtain a fused second target user feature comprises:
and carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features through an embedded fusion structure of the second prediction model to obtain second target user features.
9. The method of claim 6, wherein the performing a classification prediction process on the second target user feature to obtain a prediction result corresponding to the target user includes:
and carrying out classification prediction on the second target user characteristics through a second classification decision layer in the second prediction model to obtain target behavior probability of the target user, and taking the target behavior probability as a prediction result corresponding to the target user.
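The claim-9 decision layer reduces, in its simplest form, to a logistic output over the second target user feature. The feature values and weights below are hypothetical placeholders for learned parameters.

```python
import math

def decision_layer(feature, weights, bias):
    # Second classification decision layer (claim 9): a logistic output
    # read as the probability of the target behaviour.
    z = sum(f * w for f, w in zip(feature, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical second target user feature and learned parameters.
prob = decision_layer([0.2, -0.1, 0.4], [1.0, 0.5, -0.3], 0.1)
```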
10. A method of training a behavior prediction model, the method comprising:
acquiring a plurality of first training samples and a first prediction model corresponding to the first training samples; the first training sample comprises a first sample label, and the user type corresponding to the first training sample is a first user type;
inputting each first training sample of the plurality of first training samples into the first prediction model, and performing data processing on offline information and real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample;
and calculating a first loss of the first prediction model according to the first prediction result corresponding to each first training sample, the first sample label included in each first training sample, and a preset first loss function; when the first loss meets a preset training condition, training of the first prediction model is completed, and the target prediction model corresponding to the first user type is obtained.
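The claim-10 training loop (predict, compute a preset loss, stop when the loss meets a preset condition) can be sketched with a toy logistic model. Binary cross-entropy as the loss, gradient descent as the optimizer, and a loss threshold as the stopping condition are all assumptions for illustration; the patent names none of these.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(labels, preds, eps=1e-7):
    # Preset first loss function, here binary cross-entropy.
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for y, p in zip(labels, preds)) / len(labels)

def train(samples, labels, lr=1.0, loss_threshold=0.35, max_epochs=5000):
    # Training stops once the loss meets the preset training condition
    # (here: falls below loss_threshold), yielding the target model.
    w, b = [0.0] * len(samples[0]), 0.0
    loss = float("inf")
    for _ in range(max_epochs):
        preds = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
                 for x in samples]
        loss = bce_loss(labels, preds)
        if loss < loss_threshold:
            break
        for x, y, p in zip(samples, labels, preds):   # batch gradient step
            g = (p - y) / len(samples)
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b, loss

# Toy training samples standing in for offline + real-time features.
samples = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
labels = [0, 1, 1, 0]   # first sample labels; here the first coordinate
w, b, final_loss = train(samples, labels)
```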
11. The method of claim 10, wherein the performing data processing on the offline information and the real-time information in each first training sample according to the first prediction model to obtain a first prediction result corresponding to each first training sample includes:
performing feature extraction on the offline information and the real-time information of a plurality of first training samples according to the first prediction model to obtain first features and second features corresponding to each first training sample;
performing feature fusion on the first features and the second features corresponding to each first training sample to obtain third target user features;
and carrying out classification prediction processing on each first training sample according to the third target user characteristics to obtain a first prediction result corresponding to each first training sample.
12. A method of training a behavior prediction model, the method comprising:
acquiring a plurality of second training samples and a second prediction model corresponding to the second training samples; the second training sample comprises a second sample label, and the user type corresponding to the second training sample is a second user type;
inputting each second training sample in the plurality of second training samples into the second prediction model, and performing data processing on offline information and real-time information in each second training sample according to the second prediction model to obtain a second prediction result corresponding to each training sample;
and calculating a second loss of the second prediction model according to a second prediction result corresponding to each training sample, the second sample label corresponding to each second training sample and a second loss function, and completing training of the second prediction model when the second loss meets a preset training condition to obtain a target prediction model corresponding to the second user type.
13. The method of claim 12, wherein the performing data processing on the offline information and the real-time information in each of the second training samples according to the second prediction model to obtain a second prediction result corresponding to each of the training samples includes:
performing feature extraction on the offline information and the real-time information of a plurality of second training samples according to the second prediction model to obtain a third feature corresponding to each second training sample; the third feature comprises a user portrait feature, a city information feature and a code scanning context feature;
carrying out feature fusion on the user portrait features, the city information features and the code scanning context features in the third features corresponding to each second training sample to obtain fourth target user features;
and performing classification prediction processing on each second training sample according to the fourth target user characteristics to obtain a second prediction result corresponding to each second training sample.
14. A behavior prediction apparatus, the apparatus comprising:
the acquisition module is used for acquiring the user information of the target user; the user information comprises offline information and real-time information corresponding to the target user; the user information is used for representing the behavior and the environment of the target user;
the first determining module is used for determining the user type of the target user according to the offline information and the real-time information;
the second determining module is used for determining a target prediction model according to the user type of the target user;
the first data processing module is used for predicting the user information of the target user through a first prediction model when the user type of the target user is a first user type, so as to obtain a prediction result corresponding to the target user; the first prediction model comprises a long-short-term memory structure and a characteristic embedding layer;
the second data processing module is used for performing prediction processing on the user information of the target user through a second prediction model when the user type of the target user is a second user type, to obtain a prediction result corresponding to the target user; the second prediction model comprises a feature embedding layer.
15. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when the computer program is executed.
16. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 13.
CN202310467649.3A 2023-04-27 2023-04-27 Behavior prediction method, training method and device of behavior prediction model Active CN116205376B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310467649.3A CN116205376B (en) 2023-04-27 2023-04-27 Behavior prediction method, training method and device of behavior prediction model


Publications (2)

Publication Number Publication Date
CN116205376A CN116205376A (en) 2023-06-02
CN116205376B true CN116205376B (en) 2023-10-17

Family

ID=86513197

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310467649.3A Active CN116205376B (en) 2023-04-27 2023-04-27 Behavior prediction method, training method and device of behavior prediction model

Country Status (1)

Country Link
CN (1) CN116205376B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017031856A1 (en) * 2015-08-25 2017-03-02 百度在线网络技术(北京)有限公司 Information prediction method and device
CN110415002A (en) * 2019-07-31 2019-11-05 中国工商银行股份有限公司 Customer behavior prediction method and system
WO2019242331A1 (en) * 2018-06-20 2019-12-26 华为技术有限公司 User behavior prediction method and apparatus, and behavior prediction model training method and apparatus
CN111798273A (en) * 2020-07-01 2020-10-20 中国建设银行股份有限公司 Training method of purchase probability prediction model of product and purchase probability prediction method
CN112580952A (en) * 2020-12-09 2021-03-30 腾讯科技(深圳)有限公司 User behavior risk prediction method and device, electronic equipment and storage medium
CN113935251A (en) * 2021-12-17 2022-01-14 北京达佳互联信息技术有限公司 User behavior prediction model generation method and device and user behavior prediction method and device
CN114049529A (en) * 2021-09-22 2022-02-15 北京小米移动软件有限公司 User behavior prediction method, model training method, electronic device, and storage medium
CN114463117A (en) * 2022-02-10 2022-05-10 深圳乐信软件技术有限公司 User behavior prediction method, system and device
CN114493839A (en) * 2022-01-24 2022-05-13 中国农业银行股份有限公司 Risk user prediction model training method, prediction method, equipment and storage medium
US11625796B1 (en) * 2019-10-15 2023-04-11 Airbnb, Inc. Intelligent prediction of an expected value of user conversion

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
CN106127363B (en) * 2016-06-12 2022-04-15 腾讯科技(深圳)有限公司 User credit assessment method and device
CN113627518B (en) * 2021-08-07 2023-08-08 福州大学 Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning


Non-Patent Citations (2)

Title
Predictive Statistical Models for User Modeling; Ingrid Zukerman et al.; User Modeling and User-Adapted Interaction; pp. 5-18 *
Research on repurchase behavior prediction and recommendation algorithms for community e-commerce users; Shi Li; China Doctoral Dissertations Full-text Database (Information Science and Technology); pp. 1-120 *

Also Published As

Publication number Publication date
CN116205376A (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CN110659744B (en) Training event prediction model, and method and device for evaluating operation event
CN110348580B (en) Method and device for constructing GBDT model, and prediction method and device
CN111291264B (en) Access object prediction method and device based on machine learning and computer equipment
CN110795657B (en) Article pushing and model training method and device, storage medium and computer equipment
CN111369299B (en) Identification method, device, equipment and computer readable storage medium
CN110263242A (en) Content recommendation method, device, computer readable storage medium and computer equipment
CN109886330B (en) Text detection method and device, computer readable storage medium and computer equipment
CN112905876A (en) Information pushing method and device based on deep learning and computer equipment
CN110737730A (en) Unsupervised learning-based user classification method, unsupervised learning-based user classification device, unsupervised learning-based user classification equipment and storage medium
CN116363357A (en) Semi-supervised semantic segmentation method and device based on MIM and contrast learning
CN113314188B (en) Graph structure enhanced small sample learning method, system, equipment and storage medium
CN111914949B (en) Zero sample learning model training method and device based on reinforcement learning
Xiao et al. Self-explanatory deep salient object detection
CN116205376B (en) Behavior prediction method, training method and device of behavior prediction model
CN115049852B (en) Bearing fault diagnosis method and device, storage medium and electronic equipment
CN115952930A (en) Social behavior body position prediction method based on IMM-GMR model
Song et al. Text Siamese network for video textual keyframe detection
CN114742297A (en) Power battery treatment method
CN114691981A (en) Session recommendation method, system, device and storage medium
CN114201572A (en) Interest point classification method and device based on graph neural network
CN113469816A (en) Digital currency identification method, system and storage medium based on multigroup technology
CN116610783B (en) Service optimization method based on artificial intelligent decision and digital online page system
CN112115443A (en) Terminal user authentication method and system
CN114821248B (en) Point cloud understanding-oriented data active screening and labeling method and device
CN114676167B (en) User persistence model training method, user persistence prediction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant